TUNING(7) Miscellaneous Information Manual TUNING(7)

tuning - performance tuning under DragonFly

Modern DragonFly systems typically have just three partitions on the main drive: in order, a UFS /boot, swap, and a HAMMER or HAMMER2 root. In prior years the installer created separate PFSs for half a dozen directories, but now we just put (almost) everything in the root. The installer will separate stuff that doesn't need to be backed up into a /build subdirectory and create null-mounts for things like /usr/obj, but it no longer creates separate PFSs for these. If desired, you can make /build its own mount to separate out the components of the filesystem which do not need to be persistent.
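
For illustration only, an fstab(5) along the following lines keeps non-persistent data on a separate /build mount and null-mounts it where it is expected. The device name, filesystem, and null-mounted directories are examples; your layout will differ.

      # /etc/fstab fragment (illustrative; adjust device names and paths)
      /dev/serno/<disk>.s1d   /build       hammer2   rw   2   2
      /build/usr.obj          /usr/obj     null      rw   0   0
      /build/var.crash        /var/crash   null      rw   0   0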

Generally speaking the /boot partition should be 1GB in size. This is the minimum recommended size, giving you room for backup kernels and alternative boot schemes. DragonFly always installs debug-enabled kernels and modules and these can take up quite a bit of disk space (but will not take up any extra ram).

In the old days we recommended that swap be sized to at least 2x main memory. These days swap is often used for other activities, including tmpfs(5) and swapcache(8). We recommend that swap be sized to the larger of 2x main memory or 1GB if you have a fairly small disk and 16GB or more if you have a modestly endowed system. If you have a modest SSD + large HDD combination, we recommend a large dedicated swap partition on the SSD. For example, if you have a 128GB SSD and 2TB or more of HDD storage, dedicating upwards of 64GB of the SSD to swap and using swapcache(8) will significantly improve your HDD's performance.

In an all-SSD or mostly-SSD system, swapcache(8) is not normally used and should be left disabled (the default), but you may still want to have a large swap partition to support tmpfs(5) use. Our synth/poudriere build machines run with at least 200GB of swap and use tmpfs for all the builder jails. 50-100 GB is swapped out at the peak of the build. As a result, actual system storage bandwidth is minimized and performance increased.

If you are on a minimally configured machine you may, of course, configure far less swap or no swap at all but we recommend at least some swap. The kernel's VM paging algorithms are tuned to perform best when there is swap space configured. Configuring too little swap can lead to inefficiencies in the VM page scanning code as well as create issues later on if you add more memory to your machine, so don't be shy about it. Swap is a good idea even if you don't think you will ever need it as it allows the machine to page out completely unused data and idle programs (like getty), maximizing the ram available for your activities.

If you intend to use the swapcache(8) facility with a SSD + HDD combination we recommend configuring as much swap space as you can on the SSD. However, keep in mind that each 1GByte of swapcache requires around 1MByte of ram, so don't scale your swap beyond the equivalent ram that you reasonably want to eat to support it.
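
As a sketch only, swapcache(8) is enabled through its vm.swapcache.* sysctls; the exact set of knobs and the recommended settings for SSD + HDD configurations are described in swapcache(8), so verify the names there before use.

      # /etc/sysctl.conf fragment (verify names against swapcache(8))
      vm.swapcache.read_enable=1
      vm.swapcache.meta_enable=1
      vm.swapcache.data_enable=1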

Finally, on larger systems with multiple drives, if the use of SSD swap is not in the cards or if it is and you need higher-than-normal swapcache bandwidth, you can configure swap on up to four drives and the kernel will interleave the storage. The swap partitions on the drives should be approximately the same size. The kernel can handle arbitrary sizes but internal data structures scale to 4 times the largest swap partition. Keeping the swap partitions near the same size will allow the kernel to optimally stripe swap space across the N disks. Do not worry about overdoing it a little, swap space is the saving grace of UNIX and even if you do not normally use much swap, having some allows the system to move idle program data out of ram and allows the machine to more easily handle abnormal runaway programs. However, keep in mind that any sort of swap space failure can lock the system up. Most machines are configured with only one or two swap partitions.
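
For example, a two-drive setup with similarly sized swap partitions simply lists both in fstab(5) and the kernel interleaves them automatically; the device names below are illustrative.

      # /etc/fstab fragment (illustrative device names)
      /dev/da0s1b   none   swap   sw   0   0
      /dev/da1s1b   none   swap   sw   0   0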

Most DragonFly systems have a single HAMMER or HAMMER2 root. PFSs can be used to administratively separate domains for backup purposes but tend to be a hassle otherwise, so if you don't need the administrative separation you don't really need to use multiple PFSs. All the PFSs share the same allocation layer so there is no longer a need to size each individual mount. Instead you should review the hammer(8) manual page and use the 'hammer viconfig' facility to adjust snapshot retention and other parameters. By default HAMMER1 keeps 60 days worth of snapshots, and HAMMER2 keeps none. By convention /build is not backed up and contains only directory trees that do not need to be backed up or snapshotted.
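
As an example of the kind of configuration 'hammer viconfig' edits, a HAMMER1 PFS configuration contains per-operation schedules and retention periods roughly like the following; reducing the second argument of the snapshots line shortens snapshot retention (see hammer(8) for the exact syntax and defaults).

      snapshots 1d 60d        # take a snapshot daily, retain for 60 days
      prune     1d 5m
      rebalance 1d 5m
      reblock   1d 5m
      recopy    30d 10m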

If a very large work area is desired it is often beneficial to configure it as its own filesystem in a completely independent partition so allocation blowouts (if they occur) do not affect the main system. By convention a large work area is named /build. Similarly if a machine is going to have a large number of users you might want to separate your /home out as well.

A number of run-time mount(8) options exist that can help you tune the system. The most obvious and most dangerous one is async. Do not ever use it; it is far too dangerous. A less dangerous and more useful mount(8) option is called noatime. UNIX filesystems normally update the last-accessed time of a file or directory whenever it is accessed. However, neither HAMMER nor HAMMER2 implement atime so there is usually no need to mess with this option. The lack of atime updates can create issues with certain programs such as when detecting whether unread mail is present, but applications for the most part no longer depend on it.

The single most important thing you can do to improve performance is to have at least one solid-state drive in your system, and to configure your swap space on that drive. If you are using a combination of a smaller SSD and a very large HDD, you can use swapcache(8) to automatically cache data from your HDD. But even if you do not, having swap space configured on your SSD will significantly improve performance under even modest paging loads. It is particularly useful to configure a significant amount of swap on a workstation (32GB or more is not uncommon) to handle bloated leaky applications such as browsers.

sysctl(8) variables permit system behavior to be monitored and controlled at run-time. Some sysctls simply report on the behavior of the system; others allow the system behavior to be modified; some may be set at boot time using rc.conf(5), but most will be set via sysctl.conf(5). There are several hundred sysctls in the system, including many that appear to be candidates for tuning but actually are not. In this document we will only cover the ones that have the greatest effect on the system.
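
For example, using the kern.gettimeofday_quick sysctl described in the next paragraph, a value can be inspected and changed at run-time with sysctl(8) and made persistent across reboots via sysctl.conf(5):

      sysctl kern.gettimeofday_quick                         # read the current value
      sysctl kern.gettimeofday_quick=1                       # change it for this boot
      echo 'kern.gettimeofday_quick=1' >> /etc/sysctl.conf   # persist it across reboots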

The kern.gettimeofday_quick sysctl defaults to 0 (off). Setting this sysctl to 1 causes gettimeofday() calls in libc to use a tick-granular time from the kpmap instead of making a system call. Enabling this feature can be useful when running benchmarks which make large numbers of gettimeofday() calls, such as postgres.

The kern.ipc.shm_use_phys sysctl defaults to 1 (on) and may be set to 0 (off) or 1 (on). Setting this parameter to 1 will cause all System V shared memory segments to be mapped to unpageable physical RAM. This feature only has an effect if you are either (A) mapping small amounts of shared memory across many (hundreds) of processes, or (B) mapping large amounts of shared memory across any number of processes. This feature allows the kernel to remove a great deal of internal memory management page-tracking overhead at the cost of wiring the shared memory into core, making it unswappable.

The vfs.write_behind sysctl defaults to 1 (on). This tells the filesystem to issue media writes as full clusters are collected, which typically occurs when writing large sequential files. The idea is to avoid saturating the buffer cache with dirty buffers when it would not benefit I/O performance. However, this may stall processes and under certain circumstances you may wish to turn it off.

The vfs.lorunningspace and vfs.hirunningspace sysctls determine how much outstanding write I/O may be queued to disk controllers system-wide at any given moment. The default is usually sufficient, particularly when SSDs are part of the mix. Note that setting too high a value can lead to extremely poor clustering performance. Do not set this value arbitrarily high! Also, higher write queueing values may add latency to reads occurring at the same time.

The vfs.bufcache_bw sysctl controls data cycling within the buffer cache. I/O bandwidth less than this specification (per second) will cycle into the much larger general VM page cache while I/O bandwidth in excess of this specification will be recycled within the buffer cache, reducing the load on the rest of the VM system at the cost of bypassing normal VM caching mechanisms. The default value is 200 megabytes/s (209715200), which means that the system will try harder to cache data coming off a slower hard drive and not as hard to cache data coming off a fast SSD.

This parameter is particularly important if you have NVMe drives in your system as these storage devices are capable of transferring well over 2GBytes/sec into the system and can blow normal VM paging and caching algorithms to bits.

There are various other buffer-cache and VM page cache related sysctls. We do not recommend modifying their values.

The net.inet.tcp.sendspace and net.inet.tcp.recvspace sysctls are of particular interest if you are running network intensive applications. They control the amount of send and receive buffer space allowed for any given TCP connection. However, DragonFly now auto-tunes these parameters using a number of other related sysctls (run 'sysctl net.inet.tcp' to get a list) and they usually no longer need to be tuned manually. We do not recommend increasing or decreasing the defaults if you are managing a very large number of connections. Note that the routing table (see route(8)) can be used to introduce route-specific send and receive buffer size defaults.

As an additional management tool you can use pipes in your firewall rules (see ipfw(8)) to limit the bandwidth going to or from particular IP blocks or ports. For example, if you have a T1 you might want to limit your web traffic to 70% of the T1's bandwidth in order to leave the remainder available for mail and interactive use. Normally a heavily loaded web server will not introduce significant latencies into other services even if the network link is maxed out, but enforcing a limit can smooth things out and lead to longer term stability. Many people also enforce artificial bandwidth limitations in order to ensure that they are not charged for using too much bandwidth.
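
A minimal sketch of such a pipe, limiting outbound web traffic to roughly 70% of a T1 (about 1080 Kbit/s); the rule number is arbitrary and the exact pipe syntax should be checked against ipfw(8) and dummynet(4).

      ipfw pipe 1 config bw 1080Kbit/s
      ipfw add 1000 pipe 1 tcp from any 80 to any out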

Setting the send or receive TCP buffer to values larger than 65535 will result in only a marginal performance improvement unless both hosts support the window scaling extension of the TCP protocol, which is controlled by the net.inet.tcp.rfc1323 sysctl. These extensions should be enabled and the TCP buffer size should be set to a value larger than 65536 in order to obtain good performance from certain types of network links; specifically, gigabit WAN links and high-latency satellite links. RFC 1323 support is enabled by default.

The net.inet.tcp.always_keepalive sysctl determines whether or not the TCP implementation should attempt to detect dead TCP connections by intermittently delivering “keepalives” on the connection. By default, this is now enabled for all applications. We do not recommend turning it off. The extra network bandwidth is minimal and this feature will clean up stalled and long-dead connections that might not otherwise be cleaned up. In the past people using dialup connections often did not want to use this feature in order to be able to retain connections across long disconnections, but in the modern day the only default that makes sense is for the feature to be turned on.

The net.inet.tcp.delayed_ack TCP feature is largely misunderstood. Historically speaking, this feature was designed to allow the acknowledgement of transmitted data to be returned along with the response. For example, when you type over a remote shell the acknowledgement of the character you send can be returned along with the data representing the echo of the character. With delayed acks turned off the acknowledgement may be sent in its own packet before the remote service has a chance to echo the data it just received. This same concept also applies to any interactive protocol (e.g. SMTP, WWW, POP3) and can cut the number of tiny packets flowing across the network in half. The DragonFly delayed-ack implementation also follows the TCP protocol rule that at least every other packet be acknowledged even if the standard 100ms timeout has not yet passed. Normally the worst a delayed ack can do is slightly delay the teardown of a connection, or slightly delay the ramp-up of a slow-start TCP connection. While we aren't sure, we believe that the several FAQs related to packages such as SAMBA and SQUID which advise turning off delayed acks may be referring to the slow-start issue.

The net.inet.tcp.inflight_enable sysctl turns on bandwidth delay product limiting for all TCP connections. This feature is now turned on by default and we recommend that it be left on. It will slightly reduce the maximum bandwidth of a connection but the benefits of the feature in reducing packet backlogs at router constriction points are enormous. These benefits make it a whole lot easier for router algorithms to manage QOS for multiple connections. The limiting feature reduces the amount of data built up in intermediate router and switch packet queues as well as reduces the amount of data built up in the local host's interface queue. With fewer packets queued up, interactive connections, especially over slow modems, will also be able to operate with lower round trip times. However, note that this feature only affects data transmission (uploading / server-side). It does not affect data reception (downloading).

The system will attempt to calculate the bandwidth delay product for each connection and limit the amount of data queued to the network to just the amount required to maintain optimum throughput. This feature is useful if you are serving data over modems, GigE, or high speed WAN links (or any other link with a high bandwidth*delay product), especially if you are also using window scaling or have configured a large send window.

For production use setting net.inet.tcp.inflight_min to at least 6144 may be beneficial. Note, however, that setting high minimums may effectively disable bandwidth limiting depending on the link.

Adjusting net.inet.tcp.inflight_stab is not recommended. This parameter defaults to 50, representing a +5% fudge when calculating the bwnd from the bw. This fudge is on top of an additional fixed +2*maxseg added to bwnd. The fudge factor is required to stabilize the algorithm at very high speeds while the fixed 2*maxseg stabilizes the algorithm at low speeds. If you increase this value, excessive packet buffering may occur.

The net.inet.ip.portrange.* sysctls control the port number ranges automatically bound to TCP and UDP sockets. There are three ranges: a low range, a default range, and a high range, selectable via an IP_PORTRANGE setsockopt(2) call. Most network programs use the default range, which is controlled by net.inet.ip.portrange.first and net.inet.ip.portrange.last; these default to 1024 and 5000 respectively. Bound port ranges are used for outgoing connections and it is possible to run the system out of ports under certain circumstances. This most commonly occurs when you are running a heavily loaded web proxy. The port range is not an issue when running servers which handle mainly incoming connections, such as a normal web server, or which have a limited number of outgoing connections, such as a mail relay. For situations where you may run yourself out of ports we recommend increasing net.inet.ip.portrange.last modestly. A value of 10000 or 20000 or 30000 may be reasonable. You should also consider firewall effects when changing the port range. Some firewalls may block large ranges of ports (usually low-numbered ports) and expect systems to use higher ranges of ports for outgoing connections. For this reason we do not recommend that net.inet.ip.portrange.first be lowered.
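
For example, to raise the upper bound of the default range while leaving the lower bound alone, a sysctl.conf(5) entry such as the following is sufficient; the value is illustrative.

      net.inet.ip.portrange.last=20000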

The kern.ipc.somaxconn sysctl limits the size of the listen queue for accepting new TCP connections. The default value of 128 is typically too low for robust handling of new connections in a heavily loaded web server environment. For such environments, we recommend increasing this value to 1024 or higher. The service daemon may itself limit the listen queue size (e.g. sendmail(8), apache) but will often have a directive in its configuration file to adjust the queue size up. Larger listen queues also do a better job of fending off denial of service attacks.
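
For a busy web server this might look like the following sysctl.conf(5) entry; the value is a reasonable starting point rather than a hard rule.

      kern.ipc.somaxconn=1024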

The kern.maxvnodes sysctl specifies how many vnodes and related file structures the kernel will cache. The kernel uses a modestly generous default for this parameter based on available physical memory. You generally do not want to mess with this parameter as it directly affects how well the kernel can cache not only file structures but also the underlying file data.

However, situations may crop up where you wish to cache less filesystem data in order to make more memory available for programs. Not only will this reduce kernel memory use for vnodes and inodes, it will also have a tendency to reduce the impact of the buffer cache on main memory because recycling a vnode also frees any underlying data that has been cached for that vnode.

It is, in fact, possible for the system to have more files open than the value of this tunable, but as files are closed the system will try to reduce the actual number of cached vnodes to match this value. The read-only kern.openfiles sysctl may be interrogated to determine how many files are currently open on the system.

The vm.swap_idle_enabled sysctl is useful in large multi-user systems where you have lots of users entering and leaving the system and lots of idle processes. Such systems tend to generate a great deal of continuous pressure on free memory reserves. Turning this feature on and adjusting the swapout hysteresis (in idle seconds) via vm.swap_idle_threshold1 and vm.swap_idle_threshold2 allows you to depress the priority of pages associated with idle processes more quickly than the normal pageout algorithm. This gives a helping hand to the pageout daemon. Do not turn this option on unless you need it, because the tradeoff you are making is to essentially pre-page memory sooner rather than later, eating more swap and disk bandwidth. In a small system this option will have a detrimental effect but in a large system that is already doing moderate paging this option allows the VM system to stage whole processes into and out of memory more easily.

Some aspects of the system behavior may not be tunable at runtime because memory allocations they perform must occur early in the boot process. To change loader tunables, you must set their values in loader.conf(5) and reboot the system.

kern.maxusers is automatically sized at boot based on the amount of memory available in the system. The value can be read (but not written) via sysctl.

You can change this value as a loader tunable if the default resource limits are not sufficient. This tunable works primarily by adjusting kern.maxproc, so you can opt to override that instead. It is generally easier to formulate an adjustment to kern.maxproc instead of kern.maxusers.

kern.maxproc controls most kernel auto-scaling components. If kernel resource limits are not scaled high enough, setting this tunable to a higher value is usually sufficient. Generally speaking you will want to set this tunable to the upper limit for the number of process threads you want the kernel to be able to handle. The kernel may still decide to cap maxproc at a lower value if there is insufficient ram to scale resources as desired.

Only set this tunable if the defaults are not sufficient. Do not use this tunable to try to trim kernel resource limits; you will not actually save much memory by doing so, and you will leave the system more vulnerable to DOS attacks and runaway processes.

Setting this tunable will scale the maximum number of processes, pipes and sockets, and total open files the system can support, and will increase mbuf and mbuf-cluster limits. These other elements can also be separately overridden to fine-tune the setup. We recommend setting this tunable first to create a baseline.

Setting a high value presumes that you have enough physical memory to support the resource utilization. For example, your system would need approximately 128GB of ram to reasonably support a maxproc value of 4 million (4000000). The default maxproc given that much ram will typically be in the 250000 range.
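
Loader tunables are set in loader.conf(5). For example, a machine that needs to handle an unusually large number of processes or threads might use something like the following; the value is illustrative and must be backed by sufficient ram, as discussed above.

      # /boot/loader.conf fragment (illustrative value)
      kern.maxproc="500000"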

Note that the PID is currently limited to 6 digits, so a system cannot have more than a million processes operating anyway (though the aggregate number of threads can be far greater). And yes, there is in fact no reason why a very well-endowed system couldn't have that many processes.

kern.nbuf sets how many filesystem buffers the kernel should cache. Filesystem buffers can be up to 128KB each. UFS typically uses an 8KB blocksize while HAMMER and HAMMER2 typically use 64KB. The system defaults usually suffice for this parameter. Cached buffers represent wired physical memory so specifying a value that is too large can result in excessive kernel memory use, and is also not entirely necessary since the pages backing the buffers are also cached by the VM page cache (which does not use wired memory). The buffer cache significantly improves the hot path for cached file accesses and dirty data.

The kernel reserves (128KB * nbuf) bytes of KVM. The actual physical memory use depends on the filesystem buffer size. It is generally more flexible to manage the filesystem cache via kern.maxfiles than via kern.nbuf, but situations do arise where you might want to increase or decrease the latter.

The kern.dfldsiz and kern.dflssiz tunables set the default soft limits for process data and stack size respectively. Processes may increase these up to the hard limits by calling setrlimit(2). The kern.maxdsiz, kern.maxssiz, and kern.maxtsiz tunables set the hard limits for process data, stack, and text size respectively; processes may not exceed these limits. The kern.sgrowsiz tunable controls how much the stack segment will grow when a process needs to allocate more stack.
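
For example, to raise the default and maximum data-segment sizes for processes, loader.conf(5) entries along these lines could be used; the sizes are purely illustrative and are given in bytes to avoid relying on size-suffix parsing.

      # /boot/loader.conf fragment (illustrative values, in bytes)
      kern.dfldsiz="8589934592"       # 8GB default (soft) data size limit
      kern.maxdsiz="34359738368"      # 32GB hard data size limit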

kern.ipc.nmbclusters and kern.ipc.nmbjclusters may be adjusted to increase the number of network mbufs the system is willing to allocate. Each normal cluster represents approximately 2K of memory, so a value of 1024 represents 2M of kernel memory reserved for network buffers. Each 'j' cluster is typically 4KB, so a value of 1024 represents 4M of kernel memory. You can do a simple calculation to figure out how many you need but keep in mind that tcp buffer sizing is now more dynamic than it used to be.

The defaults usually suffice but you may want to bump them up on service-heavy machines. Modern machines often need a large number of mbufs to operate services efficiently; values of 65536, or even upwards of 262144 or more, are common. If you are running a server, it is better to be generous than to be frugal. Remember the memory calculation though.

Under no circumstances should you specify an arbitrarily high value for this parameter; it could lead to a boot-time crash. The -m option to netstat(1) may be used to observe network cluster use.
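
As a sketch, explicit cluster limits can be set from loader.conf(5) and current usage inspected with netstat -m. The values below are illustrative and follow the 2K/4K-per-cluster arithmetic described above (about 256MB reserved for each pool).

      # /boot/loader.conf fragment (illustrative values)
      kern.ipc.nmbclusters="131072"     # 131072 x 2KB = 256MB
      kern.ipc.nmbjclusters="65536"     # 65536 x 4KB = 256MB

      # observe current usage at run-time
      netstat -m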

There are a number of kernel options that you may have to fiddle with in a large-scale system. In order to change these options you need to be able to compile a new kernel from source. The config(8) manual page and the handbook are good starting points for learning how to do this. Generally speaking, removing options to trim the size of the kernel is not going to save very much memory on a modern system. In the grand scheme of things, saving a megabyte or two is in the noise on a system that likely has multiple gigabytes of memory.

If your motherboard is AHCI-capable then we strongly recommend turning on AHCI mode in the BIOS if it is not already the default.

The type of tuning you do depends heavily on where your system begins to bottleneck as load increases. If your system runs out of CPU (idle times are perpetually 0%) then you need to consider upgrading the CPU or moving to an SMP motherboard (multiple CPUs), or perhaps you need to revisit the programs that are causing the load and try to optimize them. If your system is paging to swap a lot you need to consider adding more memory. If your system is saturating the disk you typically see high CPU idle times and total disk saturation. systat(1) can be used to monitor this. There are many solutions to saturated disks: increasing memory for caching, mirroring disks, distributing operations across several machines, and so forth.
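
For example, the following commands give a quick view of where the bottleneck is; the systat(1) display name may need to be adjusted to match its manual page.

      systat -vm 1      # overall CPU, paging, interrupt, and disk activity
      iostat 1          # per-device disk throughput
      vmstat 1          # paging and fault statistics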

Finally, you might run out of network suds. Optimize the network path as much as possible. If you are operating a machine as a router you may need to set up a pf(4) firewall (also see firewall(7)). DragonFly has a very good fair-share queueing algorithm for QOS in pf(4).

Generally speaking memory is at a premium when doing bulk compiles. Machines dedicated to bulk building usually reduce kern.maxvnodes to 1000000 (1 million) vnodes or lower. Don't get too cocky here, this parameter should never be reduced below around 100000 on reasonably well endowed machines.
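
On a dedicated bulk-build machine this reduction is just a sysctl.conf(5) entry, for example (value illustrative, subject to the floor mentioned above):

      kern.maxvnodes=1000000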

Bulk build setups also often benefit from a relatively large amount of SSD swap, allowing the system to 'burst' high-memory-usage situations while still maintaining optimal concurrency for other periods during the build which do not use as much run-time memory and prefer more parallelism.

The primary sources of kernel memory usage are:

kern.maxvnodes
The maximum number of cached vnodes in the system. These can eat quite a bit of kernel memory, primarily due to auxiliary structures tracked by the HAMMER filesystem. It is relatively easy to configure a smaller value, but we do not recommend reducing this parameter below 100000. Smaller values directly impact the number of discrete files the kernel can cache data for at once.
kern.ipc.nmbclusters, kern.ipc.nmbjclusters
Calculate approximately 2KB per normal cluster and 4KB per jumbo cluster. Do not make these values too low or you risk deadlocking the network stack.
kern.nbuf
The number of filesystem buffers managed by the kernel. The kernel wires the underlying cached VM pages, typically 8KB (UFS) or 64KB (HAMMER) per buffer.
swap/swapcache
Swap memory requires approximately 1MB of physical ram for each 1GB of swap space. When swapcache is used, additional memory may be required to keep VM objects around longer (only really reducible by reducing the value of kern.maxvnodes, which you can do post-boot if you desire).
tmpfs
Tmpfs is very useful but keep in mind that while the file data itself is backed by swap, the meta-data (the directory topology) requires wired kernel memory.
mmu page tables
Even though the underlying data pages themselves can be paged to swap, the page tables are usually wired into memory. This can create problems when a large number of processes are mmap()ing very large files. Sometimes turning on machdep.pmap_mmu_optimize suffices to reduce overhead. Page table kernel memory use can be observed by using 'vmstat -z'.
kern.ipc.shm_use_phys
It is sometimes necessary to force shared memory to use physical memory when running a large database which uses shared memory to implement its own data caching. The use of sysv shared memory in this regard allows the database to distinguish between data which it knows it can access instantly (i.e. without even having to page-in from swap) versus data which might require an I/O to fetch.

If you use this feature be very careful with regards to the database's shared memory configuration as you will be wiring the memory.

netstat(1), systat(1), dm(4), dummynet(4), nata(4), pf(4), login.conf(5), pf.conf(5), rc.conf(5), sysctl.conf(5), firewall(7), hier(7), boot(8), ccdconfig(8), config(8), disklabel(8), fsck(8), ifconfig(8), ipfw(8), loader(8), mount(8), newfs(8), route(8), sysctl(8), tunefs(8)

The tuning manual page was inherited from FreeBSD and first appeared in FreeBSD 4.3, May 2001.

The tuning manual page was originally written by Matthew Dillon.

August 24, 2018 DragonFly-5.6.1