# SOME DESCRIPTIVE TITLE
# Copyright (C) YEAR The FreeBSD Project
# This file is distributed under the same license as the FreeBSD Documentation package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: FreeBSD Documentation VERSION\n"
"POT-Creation-Date: 2026-02-22 15:58+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"Language: \n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"

#. type: YAML Front Matter: description
#: documentation/content/en/books/handbook/zfs/_index.adoc:1
#, no-wrap
msgid "ZFS is an advanced file system designed to solve major problems found in previous storage subsystem software"
msgstr ""

#. type: YAML Front Matter: part
#: documentation/content/en/books/handbook/zfs/_index.adoc:1
#, no-wrap
msgid "Part III. System Administration"
msgstr ""

#. type: YAML Front Matter: title
#: documentation/content/en/books/handbook/zfs/_index.adoc:1
#, no-wrap
msgid "Chapter 22. The Z File System (ZFS)"
msgstr ""

#. type: Title =
#: documentation/content/en/books/handbook/zfs/_index.adoc:15
#, no-wrap
msgid "The Z File System (ZFS)"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:53
msgid ""
"ZFS is an advanced file system designed to solve major problems found in "
"previous storage subsystem software."
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:55
msgid ""
"Originally developed at Sun(TM), ongoing open source ZFS development has "
"moved to the http://open-zfs.org[OpenZFS Project]."
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:57
msgid "ZFS has three major design goals:"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:59
msgid ""
"Data integrity: All data includes a crossref:zfs[zfs-term-checksum,checksum] "
"of the data. ZFS calculates checksums and writes them along with the data. "
"When reading that data later, ZFS recalculates the checksums. If the "
"checksums do not match, indicating one or more data errors, ZFS will "
"attempt to correct the errors automatically when ditto, mirror, or parity "
"blocks are available."
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:60
msgid ""
"Pooled storage: adding physical storage devices to a pool, and allocating "
"storage space from that shared pool. Space is available to all file systems "
"and volumes, and increases by adding new storage devices to the pool."
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:65
msgid ""
"Performance: caching mechanisms provide increased performance. "
"crossref:zfs[zfs-term-arc,ARC] is an advanced memory-based read cache. ZFS "
"provides a second level disk-based read cache with crossref:zfs[zfs-term-"
"l2arc,L2ARC], and a disk-based synchronous write cache named "
"crossref:zfs[zfs-term-zil,ZIL]."
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:67
msgid ""
"A complete list of features and terminology is in crossref:zfs[zfs-term, ZFS "
"Features and Terminology]."
msgstr ""

#. type: Title ==
#: documentation/content/en/books/handbook/zfs/_index.adoc:69
#, no-wrap
msgid "What Makes ZFS Different"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:82
msgid ""
"More than a file system, ZFS is fundamentally different from traditional "
"file systems. Combining the traditionally separate roles of volume manager "
"and file system provides ZFS with unique advantages. The file system is now "
"aware of the underlying structure of the disks. A traditional file system "
"could exist on only a single disk at a time. If there were two disks, "
"creating two separate file systems was necessary. A traditional hardware "
"RAID configuration avoided this problem by presenting the operating system "
"with a single logical disk made up of the space provided by physical disks, "
"on top of which the operating system placed a file system. Even with "
"software RAID solutions like those provided by GEOM, the UFS file system "
"living on top of the RAID believes it is dealing with a single device. ZFS' "
"combination of the volume manager and the file system solves this and allows "
"the creation of file systems that all share a pool of available storage. "
"One big advantage of ZFS' awareness of the physical disk layout is that "
"existing file systems grow automatically when adding extra disks to the "
"pool. This new space then becomes available to the file systems. ZFS can "
"also apply different properties to each file system. This makes it useful "
"to create separate file systems and datasets instead of a single monolithic "
"file system."
msgstr ""

#. type: Title ==
#: documentation/content/en/books/handbook/zfs/_index.adoc:84
#, no-wrap
msgid "Quick Start Guide"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:88
msgid ""
"FreeBSD can mount ZFS pools and datasets during system initialization. To "
"enable it, add this line to [.filename]#/etc/rc.conf#:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:92
#, no-wrap
msgid "zfs_enable=\"YES\"\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:95
msgid "Then start the service:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:99
#, no-wrap
msgid "# service zfs start\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:103
msgid ""
"The examples in this section assume three SCSI disks with the device names "
"[.filename]#da0#, [.filename]#da1#, and [.filename]#da2#. Users of SATA "
"hardware should instead use [.filename]#ada# device names."
msgstr ""

#. type: Title ===
#: documentation/content/en/books/handbook/zfs/_index.adoc:105
#, no-wrap
msgid "Single Disk Pool"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:108
msgid "To create a simple, non-redundant pool using a single disk device:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:112
#, no-wrap
msgid "# zpool create example /dev/da0\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:115
msgid "To view the new pool, review the output of `df`:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:124
#, no-wrap
msgid ""
"# df\n"
"Filesystem  1K-blocks    Used    Avail Capacity  Mounted on\n"
"/dev/ad0s1a   2026030  235230  1628718    13%    /\n"
"devfs               1       1        0   100%    /dev\n"
"/dev/ad0s1d  54098308 1032846 48737598     2%    /usr\n"
"example      17547136       0 17547136     0%    /example\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:128
msgid ""
"This output shows the creation and mounting of the `example` pool, and that "
"it is now accessible as a file system. Create files for users to browse:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:139
#, no-wrap
msgid ""
"# cd /example\n"
"# ls\n"
"# touch testfile\n"
"# ls -al\n"
"total 4\n"
"drwxr-xr-x   2 root  wheel    3 Aug 29 23:15 .\n"
"drwxr-xr-x  21 root  wheel  512 Aug 29 23:12 ..\n"
"-rw-r--r--   1 root  wheel    0 Aug 29 23:15 testfile\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:143
msgid ""
"This pool is not using any advanced ZFS features and properties yet. To "
"create a dataset on this pool with compression enabled:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:148
#, no-wrap
msgid ""
"# zfs create example/compressed\n"
"# zfs set compression=gzip example/compressed\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:152
msgid ""
"The `example/compressed` dataset is now a ZFS compressed file system. Try "
"copying some large files to [.filename]#/example/compressed#."
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:154
msgid "Disable compression with:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:158
#, no-wrap
msgid "# zfs set compression=off example/compressed\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:161
msgid "To unmount a file system, use `zfs umount` and then verify with `df`:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:171
#, no-wrap
msgid ""
"# zfs umount example/compressed\n"
"# df\n"
"Filesystem  1K-blocks    Used    Avail Capacity  Mounted on\n"
"/dev/ad0s1a   2026030  235232  1628716    13%    /\n"
"devfs               1       1        0   100%    /dev\n"
"/dev/ad0s1d  54098308 1032864 48737580     2%    /usr\n"
"example      17547008       0 17547008     0%    /example\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:174
msgid ""
"To re-mount the file system to make it accessible again, use `zfs mount` and "
"verify with `df`:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:185
#, no-wrap
msgid ""
"# zfs mount example/compressed\n"
"# df\n"
"Filesystem          1K-blocks    Used    Avail Capacity  Mounted on\n"
"/dev/ad0s1a           2026030  235234  1628714    13%    /\n"
"devfs                       1       1        0   100%    /dev\n"
"/dev/ad0s1d          54098308 1032864 48737580     2%    /usr\n"
"example              17547008       0 17547008     0%    /example\n"
"example/compressed   17547008       0 17547008     0%    /example/compressed\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:188
msgid "Running `mount` shows the pool and file systems:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:197
#, no-wrap
msgid ""
"# mount\n"
"/dev/ad0s1a on / (ufs, local)\n"
"devfs on /dev (devfs, local)\n"
"/dev/ad0s1d on /usr (ufs, local, soft-updates)\n"
"example on /example (zfs, local)\n"
"example/compressed on /example/compressed (zfs, local)\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:203
msgid ""
"Use ZFS datasets like any file system after creation. Set other available "
"features on a per-dataset basis when needed. The example below creates a "
"new file system called `data`. It assumes the file system contains "
"important files and configures it to store two copies of each data block."
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:208
#, no-wrap
msgid ""
"# zfs create example/data\n"
"# zfs set copies=2 example/data\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:211
msgid "Use `df` to see the data and space usage:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:222
#, no-wrap
msgid ""
"# df\n"
"Filesystem          1K-blocks    Used    Avail Capacity  Mounted on\n"
"/dev/ad0s1a           2026030  235234  1628714    13%    /\n"
"devfs                       1       1        0   100%    /dev\n"
"/dev/ad0s1d          54098308 1032864 48737580     2%    /usr\n"
"example              17547008       0 17547008     0%    /example\n"
"example/compressed   17547008       0 17547008     0%    /example/compressed\n"
"example/data         17547008       0 17547008     0%    /example/data\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:227
msgid ""
"Notice that all file systems in the pool have the same available space. "
"Using `df` in these examples shows that the file systems use the space they "
"need and all draw from the same pool. ZFS gets rid of concepts such as "
"volumes and partitions, and allows several file systems to share the same "
"pool."
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:229
msgid "To destroy the file systems and then the pool that is no longer needed:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:235
#, no-wrap
msgid ""
"# zfs destroy example/compressed\n"
"# zfs destroy example/data\n"
"# zpool destroy example\n"
msgstr ""

#. type: Title ===
#: documentation/content/en/books/handbook/zfs/_index.adoc:238
#, no-wrap
msgid "RAID-Z"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:244
msgid ""
"Disks fail. One way to avoid data loss from disk failure is to use RAID. "
"ZFS supports this feature in its pool design. RAID-Z pools require three or "
"more disks but provide more usable space than mirrored pools."
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:246
msgid ""
"This example creates a RAID-Z pool, specifying the disks to add to the pool:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:250
#, no-wrap
msgid "# zpool create storage raidz da0 da1 da2\n"
msgstr ""

#. type: delimited block = 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:258
msgid ""
"Sun(TM) recommends that the number of devices used in a RAID-Z configuration "
"be between three and nine. For environments requiring a single pool "
"consisting of 10 disks or more, consider breaking it up into smaller RAID-Z "
"groups. If two disks are available, ZFS mirroring provides redundancy if "
"required. Refer to man:zpool[8] for more details."
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:262
msgid ""
"The previous example created the `storage` zpool. This example makes a new "
"file system called `home` in that pool:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:266
#, no-wrap
msgid "# zfs create storage/home\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:269
msgid "Enable compression and store an extra copy of directories and files:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:274
#, no-wrap
msgid ""
"# zfs set copies=2 storage/home\n"
"# zfs set compression=gzip storage/home\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:277
msgid ""
"To make this the new home directory for users, copy the user data to this "
"directory and create the appropriate symbolic links:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:284
#, no-wrap
msgid ""
"# cp -rp /home/* /storage/home\n"
"# rm -rf /home /usr/home\n"
"# ln -s /storage/home /home\n"
"# ln -s /storage/home /usr/home\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:288
msgid ""
"User data is now stored on the freshly created [.filename]#/storage/home#. "
"Test by adding a new user and logging in as that user."
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:290
msgid "Create a file system snapshot to roll back to later:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:294
#, no-wrap
msgid "# zfs snapshot storage/home@08-30-08\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:297
msgid "ZFS creates snapshots of a dataset, not a single directory or file."
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:300
msgid ""
"The `@` character is a delimiter between the file system or volume name and "
"the snapshot name. Before deleting an important directory, back up the file "
"system, then roll back to an earlier snapshot in which the directory still "
"exists:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:304
#, no-wrap
msgid "# zfs rollback storage/home@08-30-08\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:308
msgid ""
"To list all available snapshots, run `ls` in the file system's "
"[.filename]#.zfs/snapshot# directory. For example, to see the snapshot "
"taken:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:312
#, no-wrap
msgid "# ls /storage/home/.zfs/snapshot\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:317
msgid ""
"Write a script to take regular snapshots of user data, as sketched below. "
"Over time, snapshots can use up a lot of disk space. Remove the previous "
"snapshot using the command:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:321
#, no-wrap
msgid "# zfs destroy storage/home@08-30-08\n"
msgstr ""

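#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc
msgid ""
"A minimal sketch of such a snapshot script, assuming the `storage/home` "
"dataset from this example and a daily man:cron[8] or man:periodic[8] job:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc
#, no-wrap
msgid ""
"#!/bin/sh\n"
"# Take a snapshot of storage/home named with today's date.\n"
"# Dated snapshots accumulate; prune old ones with zfs destroy.\n"
"zfs snapshot storage/home@$(date +%Y-%m-%d)\n"
msgstr ""
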
#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:325
msgid ""
"After testing, make [.filename]#/storage/home# the real [.filename]#/home# "
"with this command:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:329
#, no-wrap
msgid "# zfs set mountpoint=/home storage/home\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:332
msgid ""
"Run `df` and `mount` to confirm that the system now treats the file system "
"as the real [.filename]#/home#:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:348
#, no-wrap
msgid ""
"# mount\n"
"/dev/ad0s1a on / (ufs, local)\n"
"devfs on /dev (devfs, local)\n"
"/dev/ad0s1d on /usr (ufs, local, soft-updates)\n"
"storage on /storage (zfs, local)\n"
"storage/home on /home (zfs, local)\n"
"# df\n"
"Filesystem    1K-blocks    Used    Avail Capacity  Mounted on\n"
"/dev/ad0s1a     2026030  235240  1628708    13%    /\n"
"devfs                 1       1        0   100%    /dev\n"
"/dev/ad0s1d    54098308 1032826 48737618     2%    /usr\n"
"storage        26320512       0 26320512     0%    /storage\n"
"storage/home   26320512       0 26320512     0%    /home\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:352
msgid ""
"This completes the RAID-Z configuration. Add daily status updates about the "
"created file systems to the nightly man:periodic[8] runs by adding this line "
"to [.filename]#/etc/periodic.conf#:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:356
#, no-wrap
msgid "daily_status_zfs_enable=\"YES\"\n"
msgstr ""

#. type: Title ===
#: documentation/content/en/books/handbook/zfs/_index.adoc:359
#, no-wrap
msgid "Recovering RAID-Z"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:363
msgid ""
"Every software RAID has a method of monitoring its `state`. View the status "
"of RAID-Z devices using:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:367
#, no-wrap
msgid "# zpool status -x\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:370
msgid ""
"If all pools are crossref:zfs[zfs-term-online,Online] and everything is "
"normal, the message shows:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:374
#, no-wrap
msgid "all pools are healthy\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:378
msgid ""
"If there is a problem, perhaps a disk being in the crossref:zfs[zfs-term-"
"offline,Offline] state, the pool state will look like this:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:390
#, no-wrap
msgid ""
"  pool: storage\n"
" state: DEGRADED\n"
"status: One or more devices has been taken offline by the administrator.\n"
"\tSufficient replicas exist for the pool to continue functioning in a\n"
"\tdegraded state.\n"
"action: Online the device using 'zpool online' or replace the device with\n"
"\t'zpool replace'.\n"
" scrub: none requested\n"
"config:\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:397
#, no-wrap
msgid ""
"\tNAME        STATE     READ WRITE CKSUM\n"
"\tstorage     DEGRADED     0     0     0\n"
"\t  raidz1    DEGRADED     0     0     0\n"
"\t    da0     ONLINE       0     0     0\n"
"\t    da1     OFFLINE      0     0     0\n"
"\t    da2     ONLINE       0     0     0\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:399
#: documentation/content/en/books/handbook/zfs/_index.adoc:434
#: documentation/content/en/books/handbook/zfs/_index.adoc:480
#: documentation/content/en/books/handbook/zfs/_index.adoc:525
#: documentation/content/en/books/handbook/zfs/_index.adoc:548
#: documentation/content/en/books/handbook/zfs/_index.adoc:580
#: documentation/content/en/books/handbook/zfs/_index.adoc:659
#: documentation/content/en/books/handbook/zfs/_index.adoc:713
#: documentation/content/en/books/handbook/zfs/_index.adoc:750
#: documentation/content/en/books/handbook/zfs/_index.adoc:779
#: documentation/content/en/books/handbook/zfs/_index.adoc:859
#: documentation/content/en/books/handbook/zfs/_index.adoc:935
#: documentation/content/en/books/handbook/zfs/_index.adoc:967
#: documentation/content/en/books/handbook/zfs/_index.adoc:1067
#: documentation/content/en/books/handbook/zfs/_index.adoc:1111
#: documentation/content/en/books/handbook/zfs/_index.adoc:1136
#: documentation/content/en/books/handbook/zfs/_index.adoc:1157
#, no-wrap
msgid "errors: No known data errors\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:402
msgid ""
"\"OFFLINE\" shows the administrator took [.filename]#da1# offline using:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:406
#, no-wrap
msgid "# zpool offline storage da1\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:410
msgid ""
"Power down the computer now and replace [.filename]#da1#. Power up the "
"computer and return [.filename]#da1# to the pool:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:414
#, no-wrap
msgid "# zpool replace storage da1\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:417
msgid ""
"Next, check the status again, this time without `-x` to display all pools:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:425
#, no-wrap
msgid ""
"# zpool status storage\n"
"  pool: storage\n"
" state: ONLINE\n"
" scrub: resilver completed with 0 errors on Sat Aug 30 19:44:11 2008\n"
"config:\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:432
#: documentation/content/en/books/handbook/zfs/_index.adoc:478
#, no-wrap
msgid ""
"\tNAME        STATE     READ WRITE CKSUM\n"
"\tstorage     ONLINE       0     0     0\n"
"\t  raidz1    ONLINE       0     0     0\n"
"\t    da0     ONLINE       0     0     0\n"
"\t    da1     ONLINE       0     0     0\n"
"\t    da2     ONLINE       0     0     0\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:437
msgid "In this example, everything is normal."
msgstr ""

#. type: Title ===
#: documentation/content/en/books/handbook/zfs/_index.adoc:439
#, no-wrap
msgid "Data Verification"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:443
msgid ""
"ZFS uses checksums to verify the integrity of stored data. Checksums are "
"enabled automatically when creating file systems."
msgstr ""

#. type: delimited block = 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:450
msgid ""
"Disabling checksums is possible but _not_ recommended! Checksums take little "
"storage space and provide data integrity. Most ZFS features will not work "
"properly with checksums disabled. Disabling these checksums will not "
"increase performance noticeably."
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:453
msgid ""
"Verify the data checksums (an operation called _scrubbing_) to ensure the "
"integrity of the `storage` pool with:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:457
#, no-wrap
msgid "# zpool scrub storage\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:463
msgid ""
"The duration of a scrub depends on the amount of data stored. Larger "
"amounts of data will take proportionally longer to verify. Since scrubbing "
"is I/O intensive, ZFS allows only a single scrub to run at a time. After "
"scrubbing completes, view the status with `zpool status`:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:471
#, no-wrap
msgid ""
"# zpool status storage\n"
"  pool: storage\n"
" state: ONLINE\n"
" scrub: scrub completed with 0 errors on Sat Jan 26 19:57:37 2013\n"
"config:\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:484
msgid ""
"Displaying the completion date of the last scrubbing helps decide when to "
"start another. Routine scrubs help protect data from silent corruption and "
"ensure the integrity of the pool."
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:486
msgid "Refer to man:zfs[8] and man:zpool[8] for other ZFS options."
msgstr ""

#. type: Title ==
#: documentation/content/en/books/handbook/zfs/_index.adoc:488
#, no-wrap
msgid "`zpool` Administration"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:495
msgid ""
"ZFS administration uses two main utilities. The `zpool` utility controls "
"the operation of the pool and allows adding, removing, replacing, and "
"managing disks. The crossref:zfs[zfs-zfs,`zfs`] utility allows creating, "
"destroying, and managing datasets, both crossref:zfs[zfs-term-"
"filesystem,file systems] and crossref:zfs[zfs-term-volume,volumes]."
msgstr ""

#. type: Title ===
#: documentation/content/en/books/handbook/zfs/_index.adoc:497
#, no-wrap
msgid "Creating and Destroying Storage Pools"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:506
msgid ""
"Creating a ZFS storage pool requires permanent decisions, as the pool "
"structure cannot change after creation. The most important decision is "
"which types of vdevs to group the physical disks into. See the list of "
"crossref:zfs[zfs-term-vdev,vdev types] for details about the possible "
"options. After creating the pool, most vdev types do not allow adding disks "
"to the vdev. The exceptions are mirrors, which allow adding new disks to "
"the vdev, and stripes, which upgrade to mirrors by attaching a new disk to "
"the vdev. Although adding new vdevs expands a pool, the pool layout cannot "
"change after pool creation. Instead, back up the data, destroy the pool, "
"and recreate it."
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:508
msgid "Create a simple mirror pool:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:517
#, no-wrap
msgid ""
"# zpool create mypool mirror /dev/ada1 /dev/ada2\n"
"# zpool status\n"
"  pool: mypool\n"
" state: ONLINE\n"
"  scan: none requested\n"
"config:\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:523
#, no-wrap
msgid ""
"        NAME        STATE     READ WRITE CKSUM\n"
"        mypool      ONLINE       0     0     0\n"
"          mirror-0  ONLINE       0     0     0\n"
"            ada1    ONLINE       0     0     0\n"
"            ada2    ONLINE       0     0     0\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:528
msgid ""
"To create more than one vdev with a single command, specify groups of disks "
"separated by the vdev type keyword, `mirror` in this example:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:537
#, no-wrap
msgid ""
"# zpool create mypool mirror /dev/ada1 /dev/ada2 mirror /dev/ada3 /dev/ada4\n"
"# zpool status\n"
"  pool: mypool\n"
" state: ONLINE\n"
"  scan: none requested\n"
"config:\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:546
#, no-wrap
msgid ""
"        NAME        STATE     READ WRITE CKSUM\n"
"        mypool      ONLINE       0     0     0\n"
"          mirror-0  ONLINE       0     0     0\n"
"            ada1    ONLINE       0     0     0\n"
"            ada2    ONLINE       0     0     0\n"
"          mirror-1  ONLINE       0     0     0\n"
"            ada3    ONLINE       0     0     0\n"
"            ada4    ONLINE       0     0     0\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:557
msgid ""
"Pools can also use partitions rather than whole disks. Putting ZFS in a "
"separate partition allows the same disk to have other partitions for other "
"purposes. In particular, it allows adding partitions with bootcode and file "
"systems needed for booting. This allows booting from disks that are also "
"members of a pool. ZFS adds no performance penalty on FreeBSD when using a "
"partition rather than a whole disk. Using partitions also allows the "
"administrator to _under-provision_ the disks, using less than the full "
"capacity. If a future replacement disk of the same nominal size as the "
"original actually has a slightly smaller capacity, the smaller partition "
"will still fit, allowing use of the replacement disk."
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:559
msgid ""
"Create a crossref:zfs[zfs-term-vdev-raidz,RAID-Z2] pool using partitions:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:568
#, no-wrap
msgid ""
"# zpool create mypool raidz2 /dev/ada0p3 /dev/ada1p3 /dev/ada2p3 /dev/ada3p3 /dev/ada4p3 /dev/ada5p3\n"
"# zpool status\n"
"  pool: mypool\n"
" state: ONLINE\n"
"  scan: none requested\n"
"config:\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:578
#: documentation/content/en/books/handbook/zfs/_index.adoc:777
#: documentation/content/en/books/handbook/zfs/_index.adoc:965
#, no-wrap
msgid ""
"        NAME        STATE     READ WRITE CKSUM\n"
"        mypool      ONLINE       0     0     0\n"
"          raidz2-0  ONLINE       0     0     0\n"
"            ada0p3  ONLINE       0     0     0\n"
"            ada1p3  ONLINE       0     0     0\n"
"            ada2p3  ONLINE       0     0     0\n"
"            ada3p3  ONLINE       0     0     0\n"
"            ada4p3  ONLINE       0     0     0\n"
"            ada5p3  ONLINE       0     0     0\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:587
msgid ""
"To reuse the disks, destroy a pool that is no longer needed. Destroying a "
"pool requires unmounting the file systems in that pool first. If any "
"dataset is in use, the unmount operation fails without destroying the pool. "
"Force the pool destruction with `-f`. This can cause undefined behavior in "
"applications which had open files on those datasets."
msgstr ""

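#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc
msgid ""
"A minimal example, assuming the `mypool` pool from the examples above is no "
"longer needed; `-f` forcibly unmounts any busy datasets, and destroying a "
"pool deletes its data:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc
#, no-wrap
msgid "# zpool destroy -f mypool\n"
msgstr ""
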
#. type: Title ===
#: documentation/content/en/books/handbook/zfs/_index.adoc:589
#, no-wrap
msgid "Adding and Removing Devices"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:593
msgid ""
"Two ways exist for adding disks to a pool: attaching a disk to an existing "
"vdev with `zpool attach`, or adding vdevs to the pool with `zpool add`. "
"Some crossref:zfs[zfs-term-vdev,vdev types] allow adding disks to the vdev "
"after creation."
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:602
msgid ""
"A pool created with a single disk lacks redundancy. It can detect "
"corruption but cannot repair it, because there is no other copy of the "
"data. The crossref:zfs[zfs-term-copies,copies] property may be able to "
"recover from a small failure such as a bad sector, but does not provide the "
"same level of protection as mirroring or RAID-Z. Starting with a pool "
"consisting of a single disk vdev, use `zpool attach` to add a new disk to "
"the vdev, creating a mirror. Also use `zpool attach` to add new disks to a "
"mirror group, increasing redundancy and read performance. When partitioning "
"the disks used for the pool, replicate the layout of the first disk onto "
"the second. Use `gpart backup` and `gpart restore` to make this process "
"easier, as sketched below."
msgstr ""

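#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc
msgid ""
"A minimal sketch, assuming [.filename]#ada0# is the existing, already "
"partitioned disk and [.filename]#ada1# is the new disk; `-F` destroys any "
"partition table already present on the target:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc
#, no-wrap
msgid "# gpart backup ada0 | gpart restore -F ada1\n"
msgstr ""
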
#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:604
msgid ""
"Upgrade the single disk (stripe) vdev [.filename]#ada0p3# to a mirror by "
"attaching [.filename]#ada1p3#:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:612
#: documentation/content/en/books/handbook/zfs/_index.adoc:809
#, no-wrap
msgid ""
"# zpool status\n"
"  pool: mypool\n"
" state: ONLINE\n"
"  scan: none requested\n"
"config:\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:616
#, no-wrap
msgid ""
"        NAME        STATE     READ WRITE CKSUM\n"
"        mypool      ONLINE       0     0     0\n"
"          ada0p3    ONLINE       0     0     0\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:620
#, no-wrap
msgid ""
"errors: No known data errors\n"
"# zpool attach mypool ada0p3 ada1p3\n"
"Make sure to wait until resilvering finishes before rebooting.\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:622
#, no-wrap
msgid "If you boot from pool 'mypool', you may need to update boot code on newly attached disk _ada1p3_.\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:624
#, no-wrap
msgid "Assuming you use GPT partitioning and _da0_ is your new boot disk you may use the following command:\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:638
#, no-wrap
msgid ""
"        gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0\n"
"# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1\n"
"bootcode written to ada1\n"
"# zpool status\n"
"  pool: mypool\n"
" state: ONLINE\n"
"status: One or more devices is currently being resilvered. The pool will\n"
"\tcontinue to function, possibly in a degraded state.\n"
"action: Wait for the resilver to complete.\n"
"  scan: resilver in progress since Fri May 30 08:19:19 2014\n"
"        527M scanned out of 781M at 47.9M/s, 0h0m to go\n"
"        527M resilvered, 67.53% done\n"
"config:\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:644
#, no-wrap
msgid ""
"        NAME        STATE     READ WRITE CKSUM\n"
"        mypool      ONLINE       0     0     0\n"
"          mirror-0  ONLINE       0     0     0\n"
"            ada0p3  ONLINE       0     0     0\n"
"            ada1p3  ONLINE       0     0     0  (resilvering)\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:651
#, no-wrap
msgid ""
"errors: No known data errors\n"
"# zpool status\n"
"  pool: mypool\n"
" state: ONLINE\n"
"  scan: resilvered 781M in 0h0m with 0 errors on Fri May 30 08:15:58 2014\n"
"config:\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:657
#: documentation/content/en/books/handbook/zfs/_index.adoc:690
#: documentation/content/en/books/handbook/zfs/_index.adoc:748
#: documentation/content/en/books/handbook/zfs/_index.adoc:815
#, no-wrap
msgid ""
"        NAME        STATE     READ WRITE CKSUM\n"
"        mypool      ONLINE       0     0     0\n"
"          mirror-0  ONLINE       0     0     0\n"
"            ada0p3  ONLINE       0     0     0\n"
"            ada1p3  ONLINE       0     0     0\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:667
msgid ""
"When adding disks to the existing vdev is not an option, as for RAID-Z, an "
"alternative method is to add another vdev to the pool. Adding vdevs "
"provides higher performance by distributing writes across the vdevs. Each "
"vdev provides its own redundancy. Mixing vdev types like `mirror` and `RAID-"
"Z` is possible but discouraged. Adding a non-redundant vdev to a pool "
"containing mirror or RAID-Z vdevs risks the data on the entire pool. "
"Distributing writes means a failure of the non-redundant disk will result in "
"the loss of a fraction of every block written to the pool."
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:672
msgid ""
"ZFS stripes data across each of the vdevs. For example, with two mirror "
"vdevs, this is effectively a RAID 10 that stripes writes across two sets of "
"mirrors. ZFS allocates space so that each vdev reaches 100% full at the "
"same time. Having vdevs with different amounts of free space will lower "
"performance, as more data writes go to the less full vdev."
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:674
msgid ""
"When attaching new devices to a boot pool, remember to update the bootcode."
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:676
msgid ""
"Attach a second mirror group ([.filename]#ada2p3# and [.filename]#ada3p3#) "
"to the existing mirror:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:684
#, no-wrap
msgid ""
"# zpool status\n"
"  pool: mypool\n"
" state: ONLINE\n"
"  scan: resilvered 781M in 0h0m with 0 errors on Fri May 30 08:19:35 2014\n"
"config:\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:702
#, no-wrap
msgid ""
"errors: No known data errors\n"
"# zpool add mypool mirror ada2p3 ada3p3\n"
"# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2\n"
"bootcode written to ada2\n"
"# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada3\n"
"bootcode written to ada3\n"
"# zpool status\n"
"  pool: mypool\n"
" state: ONLINE\n"
"  scan: scrub repaired 0 in 0h0m with 0 errors on Fri May 30 08:29:51 2014\n"
"config:\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:711
#, no-wrap
msgid ""
"        NAME        STATE     READ WRITE CKSUM\n"
"        mypool      ONLINE       0     0     0\n"
"          mirror-0  ONLINE       0     0     0\n"
"            ada0p3  ONLINE       0     0     0\n"
"            ada1p3  ONLINE       0     0     0\n"
"          mirror-1  ONLINE       0     0     0\n"
"            ada2p3  ONLINE       0     0     0\n"
"            ada3p3  ONLINE       0     0     0\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:717
msgid ""
"Removing vdevs from a pool is impossible, and removal of disks from a "
"mirror is possible only if there is enough remaining redundancy. If a "
"single disk remains in a mirror group, that group ceases to be a mirror and "
"becomes a stripe, risking the entire pool if that remaining disk fails."
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:719
msgid "Remove a disk from a three-way mirror group:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:727
#, no-wrap
msgid ""
"# zpool status\n"
"  pool: mypool\n"
" state: ONLINE\n"
"  scan: scrub repaired 0 in 0h0m with 0 errors on Fri May 30 08:29:51 2014\n"
"config:\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:734
#, no-wrap
msgid ""
"        NAME        STATE     READ WRITE CKSUM\n"
"        mypool      ONLINE       0     0     0\n"
"          mirror-0  ONLINE       0     0     0\n"
"            ada0p3  ONLINE       0     0     0\n"
"            ada1p3  ONLINE       0     0     0\n"
"            ada2p3  ONLINE       0     0     0\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:742
#, no-wrap
msgid ""
"errors: No known data errors\n"
"# zpool detach mypool ada2p3\n"
"# zpool status\n"
"  pool: mypool\n"
" state: ONLINE\n"
"  scan: scrub repaired 0 in 0h0m with 0 errors on Fri May 30 08:29:51 2014\n"
"config:\n"
msgstr ""

#. type: Title ===
#: documentation/content/en/books/handbook/zfs/_index.adoc:753
#, no-wrap
msgid "Checking the Status of a Pool"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:759
msgid ""
"Pool status is important. If a drive goes offline or ZFS detects a read, "
"write, or checksum error, the corresponding error count increases. The "
"`status` output shows the configuration and status of each device in the "
"pool and the status of the entire pool. Actions to take and details about "
"the last crossref:zfs[zfs-zpool-scrub,`scrub`] are also shown."
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:767
#, no-wrap
msgid ""
"# zpool status\n"
"  pool: mypool\n"
" state: ONLINE\n"
"  scan: scrub repaired 0 in 2h25m with 0 errors on Sat Sep 14 04:25:50 2013\n"
"config:\n"
msgstr ""

#. type: Title ===
#: documentation/content/en/books/handbook/zfs/_index.adoc:782
#, no-wrap
msgid "Clearing Errors"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:788
msgid ""
"When detecting an error, ZFS increases the read, write, or checksum error "
"counts. Clear the error message and reset the counts with `zpool clear "
"_mypool_`. Clearing the error state can be important for automated scripts "
"that alert the administrator when the pool encounters an error. Without "
"clearing old errors, the scripts may fail to report further errors."
msgstr ""

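#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc
msgid ""
"For example, to reset the error counts on the `mypool` pool after resolving "
"the underlying problem:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc
#, no-wrap
msgid "# zpool clear mypool\n"
msgstr ""
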
#. type: Title ===
#: documentation/content/en/books/handbook/zfs/_index.adoc:790
#, no-wrap
msgid "Replacing a Functioning Device"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:799
msgid ""
"It may be desirable to replace one disk with a different disk. When "
"replacing a working disk, the process keeps the old disk online during the "
"replacement. The pool never enters a crossref:zfs[zfs-term-"
"degraded,degraded] state, reducing the risk of data loss. Running `zpool "
"replace` copies the data from the old disk to the new one. After the "
"operation completes, ZFS disconnects the old disk from the vdev. If the new "
"disk is larger than the old disk, it may be possible to grow the zpool, "
"using the new space. See crossref:zfs[zfs-zpool-online,Growing a Pool]."
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:801
msgid "Replace a functioning device in the pool:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:819
#, no-wrap
msgid ""
"errors: No known data errors\n"
"# zpool replace mypool ada1p3 ada2p3\n"
"Make sure to wait until resilvering finishes before rebooting.\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:821
#, no-wrap
msgid "When booting from the pool 'zroot', update the boot code on the newly attached disk 'ada2p3'.\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:823
#, no-wrap
msgid "Assuming GPT partitioning is used and [.filename]#da0# is the new boot disk, use the following command:\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:836
#, no-wrap
msgid ""
"        gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0\n"
"# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2\n"
"# zpool status\n"
"  pool: mypool\n"
" state: ONLINE\n"
"status: One or more devices is currently being resilvered. The pool will\n"
"\tcontinue to function, possibly in a degraded state.\n"
"action: Wait for the resilver to complete.\n"
"  scan: resilver in progress since Mon Jun 2 14:21:35 2014\n"
"        604M scanned out of 781M at 46.5M/s, 0h0m to go\n"
"        604M resilvered, 77.39% done\n"
"config:\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:844
#, no-wrap
msgid ""
"        NAME             STATE     READ WRITE CKSUM\n"
"        mypool           ONLINE       0     0     0\n"
"          mirror-0       ONLINE       0     0     0\n"
"            ada0p3       ONLINE       0     0     0\n"
"            replacing-1  ONLINE       0     0     0\n"
"              ada1p3     ONLINE       0     0     0\n"
"              ada2p3     ONLINE       0     0     0  (resilvering)\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:851
#, no-wrap
msgid ""
"errors: No known data errors\n"
"# zpool status\n"
"  pool: mypool\n"
" state: ONLINE\n"
"  scan: resilvered 781M in 0h0m with 0 errors on Mon Jun 2 14:21:52 2014\n"
"config:\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:857
#: documentation/content/en/books/handbook/zfs/_index.adoc:933
#, no-wrap
msgid ""
"        NAME        STATE     READ WRITE CKSUM\n"
"        mypool      ONLINE       0     0     0\n"
"          mirror-0  ONLINE       0     0     0\n"
"            ada0p3  ONLINE       0     0     0\n"
"            ada2p3  ONLINE       0     0     0\n"
msgstr ""

#. type: Title ===
#: documentation/content/en/books/handbook/zfs/_index.adoc:862
#, no-wrap
msgid "Dealing with Failed Devices"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:871
msgid ""
"When a disk in a pool fails, the vdev to which the disk belongs enters the "
"crossref:zfs[zfs-term-degraded,degraded] state. The data is still "
"available, but with reduced performance because ZFS computes missing data "
"from the available redundancy. To restore the vdev to a fully functional "
"state, replace the failed physical device. ZFS is then instructed to begin "
"the crossref:zfs[zfs-term-resilver,resilver] operation. ZFS recomputes data "
"on the failed device from available redundancy and writes it to the "
"replacement device. After completion, the vdev returns to crossref:zfs[zfs-"
"term-online,online] status."
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:876
msgid ""
"If the vdev does not have any redundancy, or if devices have failed and "
"there is not enough redundancy to compensate, the pool enters the "
"crossref:zfs[zfs-term-faulted,faulted] state. Unless enough devices can "
"reconnect, the pool becomes inoperative, requiring a restore of data from "
"backups."
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:879
msgid ""
"When replacing a failed disk, the name of the failed disk changes to the "
"GUID of the new disk. A new device name parameter for `zpool replace` is "
"not required if the replacement device has the same device name."
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:881
msgid "Replace a failed disk using `zpool replace`:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:893
#, no-wrap
msgid ""
"# zpool status\n"
"  pool: mypool\n"
" state: DEGRADED\n"
"status: One or more devices could not be opened. Sufficient replicas exist for\n"
"\tthe pool to continue functioning in a degraded state.\n"
"action: Attach the missing device and online it using 'zpool online'.\n"
"   see: http://illumos.org/msg/ZFS-8000-2Q\n"
"  scan: none requested\n"
"config:\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:899
#, no-wrap
msgid ""
"        NAME                    STATE     READ WRITE CKSUM\n"
"        mypool                  DEGRADED     0     0     0\n"
"          mirror-0              DEGRADED     0     0     0\n"
"            ada0p3              ONLINE       0     0     0\n"
"            316502962686821739  UNAVAIL      0     0     0  was /dev/ada1p3\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:912
#, no-wrap
msgid ""
"errors: No known data errors\n"
"# zpool replace mypool 316502962686821739 ada2p3\n"
"# zpool status\n"
"  pool: mypool\n"
" state: DEGRADED\n"
"status: One or more devices is currently being resilvered. The pool will\n"
"\tcontinue to function, possibly in a degraded state.\n"
"action: Wait for the resilver to complete.\n"
"  scan: resilver in progress since Mon Jun 2 14:52:21 2014\n"
"        641M scanned out of 781M at 49.3M/s, 0h0m to go\n"
"        640M resilvered, 82.04% done\n"
"config:\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:920
#, no-wrap
msgid ""
"        NAME                        STATE     READ WRITE CKSUM\n"
"        mypool                      DEGRADED     0     0     0\n"
"          mirror-0                  DEGRADED     0     0     0\n"
"            ada0p3                  ONLINE       0     0     0\n"
"            replacing-1             UNAVAIL      0     0     0\n"
"              15732067398082357289  UNAVAIL      0     0     0  was /dev/ada1p3/old\n"
"              ada2p3                ONLINE       0     0     0  (resilvering)\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:927
#, no-wrap
msgid ""
"errors: No known data errors\n"
"# zpool status\n"
"  pool: mypool\n"
" state: ONLINE\n"
"  scan: resilvered 781M in 0h0m with 0 errors on Mon Jun 2 14:52:38 2014\n"
"config:\n"
msgstr ""

#. type: Title ===
#: documentation/content/en/books/handbook/zfs/_index.adoc:938
#, no-wrap
msgid "Scrubbing a Pool"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:944
msgid ""
"Routinely crossref:zfs[zfs-term-scrub,scrub] pools, ideally at least once "
"every month. The `scrub` operation is disk-intensive and will reduce "
"performance while running. Avoid high-demand periods when scheduling "
"`scrub`, or use crossref:zfs[zfs-advanced-tuning-"
"scrub_delay,`vfs.zfs.scrub_delay`] to adjust the relative priority of the "
"`scrub` to keep it from slowing down other workloads."
msgstr ""

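#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc
msgid ""
"A minimal sketch of adjusting that tunable with man:sysctl[8]; the tunable "
"name comes from the crossref above, and the value shown is purely "
"illustrative, not a recommendation:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc
#, no-wrap
msgid "# sysctl vfs.zfs.scrub_delay=4\n"
msgstr ""
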
#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:955
#, no-wrap
msgid ""
"# zpool scrub mypool\n"
"# zpool status\n"
"  pool: mypool\n"
" state: ONLINE\n"
"  scan: scrub in progress since Wed Feb 19 20:52:54 2014\n"
"        116G scanned out of 8.60T at 649M/s, 3h48m to go\n"
"        0 repaired, 1.32% done\n"
"config:\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:970
msgid "To cancel a scrub operation if needed, run `zpool scrub -s _mypool_`."
msgstr ""

#. type: Title ===
#: documentation/content/en/books/handbook/zfs/_index.adoc:972
#, no-wrap
msgid "Self-Healing"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:983
msgid ""
"The checksums stored with data blocks enable the file system to _self-"
"heal_. This feature will automatically repair data whose checksum does not "
"match the one recorded on another device that is part of the storage pool. "
"Consider, for example, a mirror configuration with two disks, where one "
"drive is starting to malfunction and can no longer store the data "
"properly. This is worse when the data was not accessed for a long time, as "
"with long term archive storage. Traditional file systems need to run "
"commands that check and repair the data like man:fsck[8]. These commands "
"take time, and in severe cases, an administrator has to decide which repair "
"operation to perform. When ZFS detects a data block with a mismatched "
"checksum, it tries to read the data from the mirror disk. If that disk can "
"provide the correct data, ZFS returns that data to the application and "
"corrects the data on the disk with the wrong checksum. This happens without "
"any interaction from a system administrator during normal pool operation."
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:985
msgid ""
"The next example shows this self-healing behavior by creating a mirrored "
"pool of disks [.filename]#/dev/ada0# and [.filename]#/dev/ada1#."
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:994
#, no-wrap
msgid ""
"# zpool create healer mirror /dev/ada0 /dev/ada1\n"
"# zpool status healer\n"
"  pool: healer\n"
" state: ONLINE\n"
"  scan: none requested\n"
"config:\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:1000
#: documentation/content/en/books/handbook/zfs/_index.adoc:1155
#, no-wrap
msgid ""
"        NAME        STATE     READ WRITE CKSUM\n"
"        healer      ONLINE       0     0     0\n"
"          mirror-0  ONLINE       0     0     0\n"
"            ada0    ONLINE       0     0     0\n"
"            ada1    ONLINE       0     0     0\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:1005
#, no-wrap
msgid ""
"errors: No known data errors\n"
"# zpool list\n"
"NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT\n"
"healer   960M  92.5K   960M        -         -     0%     0%  1.00x  ONLINE  -\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:1008
msgid ""
"Copy some important data to the pool to protect it from data errors using "
"the self-healing feature, and create a checksum of the pool for later "
"comparison."
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:1018
#, no-wrap
msgid ""
"# cp /some/important/data /healer\n"
"# zpool list\n"
"NAME     SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT\n"
"healer   960M  67.7M   892M     7%  1.00x  ONLINE  -\n"
"# sha1 /healer > checksum.txt\n"
"# cat checksum.txt\n"
"SHA1 (/healer) = 2753eff56d77d9a536ece6694bf0a82740344d1f\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:1022
msgid ""
"Simulate data corruption by writing random data to the beginning of one of "
"the disks in the mirror. To keep ZFS from healing the data as soon as it is "
"detected, export the pool before the corruption and import it again "
"afterwards."
msgstr ""

#. type: delimited block = 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:1030
msgid ""
"This is a dangerous operation that can destroy vital data, shown here for "
"demonstration purposes alone. *Do not try* it during normal operation of a "
"storage pool. Nor should this intentional corruption example be run on any "
"disk containing a file system or partition that is not part of the pool. Do "
"not use any disk device names other than the ones that are part of the "
"pool. Ensure proper backups of the pool exist and test them before running "
"the command!"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:1040
#, no-wrap
msgid ""
"# zpool export healer\n"
"# dd if=/dev/random of=/dev/ada1 bs=1m count=200\n"
"200+0 records in\n"
"200+0 records out\n"
"209715200 bytes transferred in 62.992162 secs (3329227 bytes/sec)\n"
"# zpool import healer\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:1046
msgid ""
"The pool status shows that one device has experienced an error. Note that "
"applications reading data from the pool did not receive any incorrect data. "
"ZFS provided data from the [.filename]#ada0# device with the correct "
"checksums. To find the device with the wrong checksum, look for one whose "
"`CKSUM` column contains a nonzero value."
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:1059
#, no-wrap
msgid ""
"# zpool status healer\n"
"  pool: healer\n"
" state: ONLINE\n"
"status: One or more devices has experienced an unrecoverable error. An\n"
"\tattempt was made to correct the error. Applications are unaffected.\n"
"action: Determine if the device needs to be replaced, and clear the errors\n"
"\tusing 'zpool clear' or replace the device with 'zpool replace'.\n"
"   see: http://illumos.org/msg/ZFS-8000-4J\n"
"  scan: none requested\n"
"config:\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:1065
#, no-wrap
msgid ""
"        NAME        STATE     READ WRITE CKSUM\n"
"        healer      ONLINE       0     0     0\n"
"          mirror-0  ONLINE       0     0     0\n"
"            ada0    ONLINE       0     0     0\n"
"            ada1    ONLINE       0     0     1\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:1071
msgid ""
"ZFS detected the error and handled it by using the redundancy present in the "
"unaffected [.filename]#ada0# mirror disk. A checksum comparison with the "
"original one will reveal whether the pool is consistent again."
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:1078
#, no-wrap
msgid ""
"# sha1 /healer >> checksum.txt\n"
"# cat checksum.txt\n"
"SHA1 (/healer) = 2753eff56d77d9a536ece6694bf0a82740344d1f\n"
"SHA1 (/healer) = 2753eff56d77d9a536ece6694bf0a82740344d1f\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:1087
msgid ""
"The checksums generated before and after the intentional tampering still "
"match, showing how ZFS is capable of detecting and correcting any errors "
"automatically when the checksums differ. Note that this is possible only "
"with enough redundancy present in the pool. A pool consisting of a single "
"device has no self-healing capabilities. That is also the reason why "
"checksums are so important in ZFS; do not disable them for any reason. ZFS "
"requires no man:fsck[8] or similar file system consistency check program to "
"detect and correct this, and keeps the pool available while there is a "
"problem. A scrub operation is now required to overwrite the corrupted data "
"on [.filename]#ada1#."
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:1103
#, no-wrap
msgid ""
"# zpool scrub healer\n"
"# zpool status healer\n"
"  pool: healer\n"
" state: ONLINE\n"
"status: One or more devices has experienced an unrecoverable error. An\n"
"\tattempt was made to correct the error. Applications are unaffected.\n"
"action: Determine if the device needs to be replaced, and clear the errors\n"
"\tusing 'zpool clear' or replace the device with 'zpool replace'.\n"
"   see: http://illumos.org/msg/ZFS-8000-4J\n"
"  scan: scrub in progress since Mon Dec 10 12:23:30 2012\n"
"        10.4M scanned out of 67.0M at 267K/s, 0h3m to go\n"
"        9.63M repaired, 15.56% done\n"
"config:\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:1109
#, no-wrap
msgid ""
"        NAME        STATE     READ WRITE CKSUM\n"
"        healer      ONLINE       0     0     0\n"
"          mirror-0  ONLINE       0     0     0\n"
"            ada0    ONLINE       0     0     0\n"
"            ada1    ONLINE       0     0   627  (repairing)\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:1115
msgid ""
"The scrub operation reads data from [.filename]#ada0# and rewrites any data "
"with a wrong checksum on [.filename]#ada1#, shown by the `(repairing)` "
"output from `zpool status`. After the operation is complete, the pool "
"status changes to:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:1128
#, no-wrap
msgid ""
"# zpool status healer\n"
"  pool: healer\n"
" state: ONLINE\n"
"status: One or more devices has experienced an unrecoverable error. An\n"
"\tattempt was made to correct the error. Applications are unaffected.\n"
"action: Determine if the device needs to be replaced, and clear the errors\n"
"\tusing 'zpool clear' or replace the device with 'zpool replace'.\n"
"   see: http://illumos.org/msg/ZFS-8000-4J\n"
"  scan: scrub repaired 66.5M in 0h2m with 0 errors on Mon Dec 10 12:26:25 2012\n"
"config:\n"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:1134
#, no-wrap
msgid ""
"        NAME        STATE     READ WRITE CKSUM\n"
"        healer      ONLINE       0     0     0\n"
"          mirror-0  ONLINE       0     0     0\n"
"            ada0    ONLINE       0     0     0\n"
"            ada1    ONLINE       0     0 2.72K\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:1140
msgid ""
"After the scrubbing operation completes with all the data synchronized from "
"[.filename]#ada0# to [.filename]#ada1#, crossref:zfs[zfs-zpool-clear,clear] "
"the error messages from the pool status by running `zpool clear`."
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:1149
#, no-wrap
msgid ""
"# zpool clear healer\n"
"# zpool status healer\n"
"  pool: healer\n"
" state: ONLINE\n"
"  scan: scrub repaired 66.5M in 0h2m with 0 errors on Mon Dec 10 12:26:25 2012\n"
"config:\n"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:1160
msgid ""
"The pool is now back to a fully working state, with all error counts now "
"zero."
msgstr ""

#. type: Title ===
#: documentation/content/en/books/handbook/zfs/_index.adoc:1162
#, no-wrap
msgid "Growing a Pool"
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:1172
msgid ""
"The smallest device in each vdev limits the usable size of a redundant "
"pool. Replace the smallest device with a larger device. After completing a "
"crossref:zfs[zfs-zpool-replace,replace] or crossref:zfs[zfs-term-"
"resilver,resilver] operation, the pool can grow to use the capacity of the "
"new device. For example, consider a mirror of a 1 TB drive and a 2 TB "
"drive. The usable space is 1 TB. When replacing the 1 TB drive with "
"another 2 TB drive, the resilvering process copies the existing data onto "
"the new drive. As both of the devices now have 2 TB capacity, the mirror's "
"available space grows to 2 TB."
msgstr ""

#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:1175
msgid ""
"Start expansion by using `zpool online -e` on each device. After expanding "
"all devices, the extra space becomes available to the pool."
msgstr ""

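#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc
msgid ""
"A minimal sketch, assuming a pool named `mypool` whose disks "
"[.filename]#ada0# and [.filename]#ada1# were both replaced with larger "
"devices:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc
#, no-wrap
msgid ""
"# zpool online -e mypool ada0\n"
"# zpool online -e mypool ada1\n"
msgstr ""
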
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1149 #, no-wrap msgid "" "# zpool clear healer\n" "# zpool status healer\n" " pool: healer\n" " state: ONLINE\n" " scan: scrub repaired 66.5M in 0h2m with 0 errors on Mon Dec 10 12:26:25 2012\n" "config:\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1160 msgid "" "The pool is now back to a fully working state, with all error counts at " "zero." msgstr ""
#. type: Title === #: documentation/content/en/books/handbook/zfs/_index.adoc:1162 #, no-wrap msgid "Growing a Pool" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1172 msgid "" "The smallest device in each vdev limits the usable size of a redundant " "pool. Replace the smallest device with a larger device. After completing a " "crossref:zfs[zfs-zpool-replace,replace] or crossref:zfs[zfs-term-" "resilver,resilver] operation, the pool can grow to use the capacity of the " "new device. For example, consider a mirror of a 1 TB drive and a 2 TB " "drive. The usable space is 1 TB. When replacing the 1 TB drive with " "another 2 TB drive, the resilvering process copies the existing data onto " "the new drive. As both of the devices now have 2 TB capacity, the mirror's " "available space grows to 2 TB." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1175 msgid "" "Start expansion by using `zpool online -e` on each device. After expanding " "all devices, the extra space becomes available to the pool." msgstr ""
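#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1175 msgid "" "As a brief sketch (the pool and device names are illustrative), expand both " "members of the example mirror after replacing them with larger disks:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1175 #, no-wrap msgid "" "# zpool online -e mypool ada0\n" "# zpool online -e mypool ada1\n" msgstr ""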
#. type: Title === #: documentation/content/en/books/handbook/zfs/_index.adoc:1177 #, no-wrap msgid "Importing and Exporting Pools" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1185 msgid "" "_Export_ pools before moving them to another system. ZFS unmounts all " "datasets, marking each device as exported but still locked to prevent use " "by other systems. This allows pools to be _imported_ on other machines, " "other operating systems that support ZFS, and even different hardware " "architectures (with some caveats, see man:zpool[8]). When a dataset has " "open files, use `zpool export -f` to force exporting the pool. Use this " "with caution. The datasets are forcibly unmounted, potentially resulting in " "unexpected behavior by the applications which had open files on those " "datasets." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1187 msgid "Export a pool that is not in use:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1191 #, no-wrap msgid "# zpool export mypool\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1199 msgid "" "Importing a pool automatically mounts the datasets. If this is undesired " "behavior, use `zpool import -N` to prevent it. `zpool import -o` sets " "temporary properties for this specific import. `zpool import -o altroot=` " "allows importing a pool with a base mount point instead of the root of the " "file system. If the pool was last used on a different system and was not " "properly exported, force the import using `zpool import -f`. `zpool import " "-a` imports all pools that do not appear to be in use by another system." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1201 msgid "List all available pools for import:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1210 #, no-wrap msgid "" "# zpool import\n" " pool: mypool\n" " id: 9930174748043525076\n" " state: ONLINE\n" " action: The pool can be imported using its name or numeric identifier.\n" " config:\n" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1213 #, no-wrap msgid "" " mypool ONLINE\n" " ada2p3 ONLINE\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1216 msgid "Import the pool with an alternative root directory:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1224 #, no-wrap msgid "" "# zpool import -o altroot=/mnt mypool\n" "# zfs list\n" "NAME USED AVAIL REFER MOUNTPOINT\n" "mypool 110K 47.0G 31K /mnt/mypool\n" msgstr ""
#. type: Title === #: documentation/content/en/books/handbook/zfs/_index.adoc:1227 #, no-wrap msgid "Upgrading a Storage Pool" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1233 msgid "" "After upgrading FreeBSD, or if importing a pool from a system using an older " "version, manually upgrade the pool to the latest ZFS version to support " "newer features. Consider whether the pool may ever need importing on an " "older system before upgrading. Upgrading is a one-way process. Upgrading " "older pools is possible, but downgrading pools with newer features is not." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1235 msgid "Upgrade a v28 pool to support `Feature Flags`:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1248 #, no-wrap msgid "" "# zpool status\n" " pool: mypool\n" " state: ONLINE\n" "status: The pool is formatted using a legacy on-disk format. The pool can\n" " still be used, but some features are unavailable.\n" "action: Upgrade the pool using 'zpool upgrade'. Once this is done, the\n" " pool will no longer be accessible on software that does not support feat\n" " flags.\n" " scan: none requested\n" "config:\n" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1254 #: documentation/content/en/books/handbook/zfs/_index.adoc:1302 #, no-wrap msgid "" " NAME STATE READ WRITE CKSUM\n" " mypool ONLINE 0 0 0\n" " mirror-0 ONLINE 0 0 0\n" "\t ada0 ONLINE 0 0 0\n" "\t ada1 ONLINE 0 0 0\n" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1258 #: documentation/content/en/books/handbook/zfs/_index.adoc:1306 #, no-wrap msgid "" "errors: No known data errors\n" "# zpool upgrade\n" "This system supports ZFS pool feature flags.\n" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1261 #, no-wrap msgid "" "The following pools are formatted with legacy version numbers and can be upgraded to use feature flags.\n" "After being upgraded, these pools will no longer be accessible by software that does not support feature flags.\n" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1265 #, no-wrap msgid "" "VER POOL\n" "--- ------------\n" "28 mypool\n" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1270 #, no-wrap msgid "" "Use 'zpool upgrade -v' for a list of available legacy versions.\n" "Every feature flags pool has all supported features enabled.\n" "# zpool upgrade mypool\n" "This system supports ZFS pool feature flags.\n" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1277 #, no-wrap msgid "" "Successfully upgraded 'mypool' from version 28 to feature flags.\n" "Enabled the following features on 'mypool':\n" " async_destroy\n" " empty_bpobj\n" " lz4_compress\n" " multi_vdev_crash_dump\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1281 msgid "" "The newer features of ZFS will not be available until `zpool upgrade` has " "completed. Use `zpool upgrade -v` to see what new features the upgrade " "provides, as well as which features are already supported." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1283 msgid "Upgrade a pool to support new feature flags:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1296 #, no-wrap msgid "" "# zpool status\n" " pool: mypool\n" " state: ONLINE\n" "status: Some supported features are not enabled on the pool. The pool can\n" " still be used, but some features are unavailable.\n" "action: Enable all features using 'zpool upgrade'. Once this is done,\n" " the pool may no longer be accessible by software that does not support\n" " the features. See zpool-features(7) for details.\n" " scan: none requested\n" "config:\n" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1308 #, no-wrap msgid "All pools are formatted using feature flags.\n" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1312 #, no-wrap msgid "" "Some supported features are not enabled on the following pools. Once a\n" "feature is enabled the pool may become incompatible with software\n" "that does not support the feature. See zpool-features(7) for details.\n" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1325 #, no-wrap msgid "" "POOL FEATURE\n" "---------------\n" "mypool\n" " multi_vdev_crash_dump\n" " spacemap_histogram\n" " enabled_txg\n" " hole_birth\n" " extensible_dataset\n" " bookmarks\n" " filesystem_limits\n" "# zpool upgrade mypool\n" "This system supports ZFS pool feature flags.\n" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1333 #, no-wrap msgid "" "Enabled the following features on 'mypool':\n" " spacemap_histogram\n" " enabled_txg\n" " hole_birth\n" " extensible_dataset\n" " bookmarks\n" " filesystem_limits\n" msgstr ""
#. type: delimited block = 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1340 msgid "" "Update the boot code on systems that boot from a pool to support the new " "pool version. Use `gpart bootcode` on the partition that contains the boot " "code. Two types of bootcode are available, depending on how the system " "boots: GPT (the most common option) and EFI (for more modern systems)." msgstr ""
#. type: delimited block = 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1342 msgid "For legacy boot using GPT, use the following command:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1346 #, no-wrap msgid "# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1349 msgid "For systems using EFI to boot, execute the following command:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1353 #, no-wrap msgid "# gpart bootcode -p /boot/boot1.efi -i 1 ada1\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1357 msgid "" "Apply the bootcode to all bootable disks in the pool. See man:gpart[8] for " "more information." msgstr ""
#. type: Title === #: documentation/content/en/books/handbook/zfs/_index.adoc:1360 #, no-wrap msgid "Displaying Recorded Pool History" msgstr ""
#. type: delimited block = 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1366 msgid "" "ZFS records commands that change the pool, including creating datasets, " "changing properties, or replacing a disk. Reviewing history about a pool's " "creation is useful, as is checking which user performed a specific action " "and when. History is not kept in a log file, but is part of the pool " "itself. The command to review this history is aptly named `zpool history`:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1375 #, no-wrap msgid "" "# zpool history\n" "History for 'tank':\n" "2013-02-26.23:02:35 zpool create tank mirror /dev/ada0 /dev/ada1\n" "2013-02-27.18:50:58 zfs set atime=off tank\n" "2013-02-27.18:51:09 zfs set checksum=fletcher4 tank\n" "2013-02-27.18:51:18 zfs create tank/backup\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1380 msgid "" "The output shows `zpool` and `zfs` commands altering the pool in some way " "along with a timestamp. Commands like `zfs list` are not included. When " "specifying no pool name, ZFS displays the history of all pools." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1383 msgid "" "`zpool history` can show even more information when providing the options `-" "i` or `-l`. `-i` displays user-initiated events as well as internally " "logged ZFS events." msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1395 #, no-wrap msgid "" "# zpool history -i\n" "History for 'tank':\n" "2013-02-26.23:02:35 [internal pool create txg:5] pool spa 28; zfs spa 28; zpl 5;uts 9.1-RELEASE 901000 amd64\n" "2013-02-27.18:50:53 [internal property set txg:50] atime=0 dataset = 21\n" "2013-02-27.18:50:58 zfs set atime=off tank\n" "2013-02-27.18:51:04 [internal property set txg:53] checksum=7 dataset = 21\n" "2013-02-27.18:51:09 zfs set checksum=fletcher4 tank\n" "2013-02-27.18:51:13 [internal create txg:55] dataset = 39\n" "2013-02-27.18:51:18 zfs create tank/backup\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1399 msgid "" "Show more details by adding `-l`, which displays history records in a long " "format, including information like the name of the user who issued the " "command and the hostname on which the change happened." msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1408 #, no-wrap msgid "" "# zpool history -l\n" "History for 'tank':\n" "2013-02-26.23:02:35 zpool create tank mirror /dev/ada0 /dev/ada1 [user 0 (root) on :global]\n" "2013-02-27.18:50:58 zfs set atime=off tank [user 0 (root) on myzfsbox:global]\n" "2013-02-27.18:51:09 zfs set checksum=fletcher4 tank [user 0 (root) on myzfsbox:global]\n" "2013-02-27.18:51:18 zfs create tank/backup [user 0 (root) on myzfsbox:global]\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1414 msgid "" "The output shows that the `root` user created the mirrored pool with disks " "[.filename]#/dev/ada0# and [.filename]#/dev/ada1#. The hostname `myzfsbox` " "is also shown in the commands after the pool's creation. The hostname " "display becomes important when exporting the pool from one system and " "importing on another. It's possible to distinguish the commands issued on " "the other system by the hostname recorded for each command." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1417 msgid "" "Combine both options with `zpool history` to give the most detailed " "information possible for any given pool. Pool history provides valuable " "information when tracking down the actions performed or when needing more " "detailed output for debugging." msgstr ""
#. type: Title === #: documentation/content/en/books/handbook/zfs/_index.adoc:1419 #, no-wrap msgid "Performance Monitoring" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1426 msgid "" "A built-in monitoring system can display pool I/O statistics in real time. " "It shows the amount of free and used space on the pool, read and write " "operations performed per second, and I/O bandwidth used. By default, ZFS " "monitors and displays all pools in the system. Provide a pool name to limit " "monitoring to that pool. A basic example:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1434 #, no-wrap msgid "" "# zpool iostat\n" " capacity operations bandwidth\n" "pool alloc free read write read write\n" "---------- ----- ----- ----- ----- ----- -----\n" "data 288G 1.53T 2 11 11.3K 57.1K\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1440 msgid "" "To continuously see I/O activity, specify a number as the last parameter, " "indicating an interval in seconds to wait between updates. The next " "statistic line prints after each interval. Press kbd:[Ctrl+C] to stop this " "continuous monitoring. Give a second number on the command line after the " "interval to specify the total number of statistics to display, as in the " "sketch below." msgstr ""
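#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1440 msgid "" "A brief sketch (the pool name is illustrative): display statistics for the " "pool _data_ every ten seconds, stopping after five reports:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1440 #, no-wrap msgid "# zpool iostat data 10 5\n" msgstr ""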
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1445 msgid "" "Display even more detailed I/O statistics with `-v`. Each device in the " "pool appears with a statistics line. This is useful for seeing read and " "write operations performed on each device, and can help determine if any " "individual device is slowing down the pool. This example shows a mirrored " "pool with two devices:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1457 #, no-wrap msgid "" "# zpool iostat -v\n" " capacity operations bandwidth\n" "pool alloc free read write read write\n" "----------------------- ----- ----- ----- ----- ----- -----\n" "data 288G 1.53T 2 12 9.23K 61.5K\n" " mirror 288G 1.53T 2 12 9.23K 61.5K\n" " ada1 - - 0 4 5.61K 61.7K\n" " ada2 - - 1 4 5.04K 61.7K\n" "----------------------- ----- ----- ----- ----- ----- -----\n" msgstr ""
#. type: Title === #: documentation/content/en/books/handbook/zfs/_index.adoc:1460 #, no-wrap msgid "Splitting a Storage Pool" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1467 msgid "" "ZFS can split a pool consisting of one or more mirror vdevs into two pools. " "Unless otherwise specified, ZFS detaches the last member of each mirror and " "creates a new pool containing the same data. Be sure to make a dry run of " "the operation with `-n` first. This displays the details of the requested " "operation without actually performing it. This helps confirm that the " "operation will do what the user intends." msgstr ""
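#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1467 msgid "" "A brief sketch of such a dry run, assuming a mirrored pool named _mypool_ " "(the new pool name _newpool_ is arbitrary):" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1467 #, no-wrap msgid "# zpool split -n mypool newpool\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1467 msgid "" "Without `-n`, the same command performs the actual split. The new pool is " "left exported; import it with `zpool import newpool` to use it." msgstr ""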
#. type: Title == #: documentation/content/en/books/handbook/zfs/_index.adoc:1469 #, no-wrap msgid "`zfs` Administration" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1473 msgid "" "The `zfs` utility can create, destroy, and manage all existing ZFS datasets " "within a pool. To manage the pool itself, use crossref:zfs[zfs-" "zpool,`zpool`]." msgstr ""
#. type: Title === #: documentation/content/en/books/handbook/zfs/_index.adoc:1475 #, no-wrap msgid "Creating and Destroying Datasets" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1486 msgid "" "Unlike traditional disks and volume managers, space in ZFS is _not_ " "preallocated. With traditional file systems, after partitioning and " "assigning the space, there is no way to add a new file system without adding " "a new disk. With ZFS, creating new file systems is possible at any time. " "Each crossref:zfs[zfs-term-dataset,_dataset_] has properties including " "features like compression, deduplication, caching, and quotas, as well as " "other useful properties like readonly, case sensitivity, network file " "sharing, and a mount point. Nesting datasets within each other is possible " "and child datasets will inherit properties from their ancestors. " "crossref:zfs[zfs-zfs-allow,Delegation], crossref:zfs[zfs-zfs-" "send,replication], crossref:zfs[zfs-zfs-snapshot,snapshots], and " "crossref:zfs[zfs-zfs-jail,jails] allow administering and destroying each " "dataset as a unit. Creating a separate dataset for each different type or " "set of files has advantages. The drawbacks to having a large number of " "datasets are that some commands like `zfs list` will be slower, and that " "mounting of hundreds or even thousands of datasets will slow the FreeBSD " "boot process." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1488 msgid "" "Create a new dataset and enable crossref:zfs[zfs-term-compression-lz4,LZ4 " "compression] on it:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1523 #, no-wrap msgid "" "# zfs list\n" "NAME USED AVAIL REFER MOUNTPOINT\n" "mypool 781M 93.2G 144K none\n" "mypool/ROOT 777M 93.2G 144K none\n" "mypool/ROOT/default 777M 93.2G 777M /\n" "mypool/tmp 176K 93.2G 176K /tmp\n" "mypool/usr 616K 93.2G 144K /usr\n" "mypool/usr/home 184K 93.2G 184K /usr/home\n" "mypool/usr/ports 144K 93.2G 144K /usr/ports\n" "mypool/usr/src 144K 93.2G 144K /usr/src\n" "mypool/var 1.20M 93.2G 608K /var\n" "mypool/var/crash 148K 93.2G 148K /var/crash\n" "mypool/var/log 178K 93.2G 178K /var/log\n" "mypool/var/mail 144K 93.2G 144K /var/mail\n" "mypool/var/tmp 152K 93.2G 152K /var/tmp\n" "# zfs create -o compress=lz4 mypool/usr/mydataset\n" "# zfs list\n" "NAME USED AVAIL REFER MOUNTPOINT\n" "mypool 781M 93.2G 144K none\n" "mypool/ROOT 777M 93.2G 144K none\n" "mypool/ROOT/default 777M 93.2G 777M /\n" "mypool/tmp 176K 93.2G 176K /tmp\n" "mypool/usr 704K 93.2G 144K /usr\n" "mypool/usr/home 184K 93.2G 184K /usr/home\n" "mypool/usr/mydataset 87.5K 93.2G 87.5K /usr/mydataset\n" "mypool/usr/ports 144K 93.2G 144K /usr/ports\n" "mypool/usr/src 144K 93.2G 144K /usr/src\n" "mypool/var 1.20M 93.2G 610K /var\n" "mypool/var/crash 148K 93.2G 148K /var/crash\n" "mypool/var/log 178K 93.2G 178K /var/log\n" "mypool/var/mail 144K 93.2G 144K /var/mail\n" "mypool/var/tmp 152K 93.2G 152K /var/tmp\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1526 msgid "" "Destroying a dataset is much quicker than deleting the files on the dataset, " "as it does not involve scanning the files and updating the corresponding " "metadata." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1528 msgid "Destroy the created dataset:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1563 #, no-wrap msgid "" "# zfs list\n" "NAME USED AVAIL REFER MOUNTPOINT\n" "mypool 880M 93.1G 144K none\n" "mypool/ROOT 777M 93.1G 144K none\n" "mypool/ROOT/default 777M 93.1G 777M /\n" "mypool/tmp 176K 93.1G 176K /tmp\n" "mypool/usr 101M 93.1G 144K /usr\n" "mypool/usr/home 184K 93.1G 184K /usr/home\n" "mypool/usr/mydataset 100M 93.1G 100M /usr/mydataset\n" "mypool/usr/ports 144K 93.1G 144K /usr/ports\n" "mypool/usr/src 144K 93.1G 144K /usr/src\n" "mypool/var 1.20M 93.1G 610K /var\n" "mypool/var/crash 148K 93.1G 148K /var/crash\n" "mypool/var/log 178K 93.1G 178K /var/log\n" "mypool/var/mail 144K 93.1G 144K /var/mail\n" "mypool/var/tmp 152K 93.1G 152K /var/tmp\n" "# zfs destroy mypool/usr/mydataset\n" "# zfs list\n" "NAME USED AVAIL REFER MOUNTPOINT\n" "mypool 781M 93.2G 144K none\n" "mypool/ROOT 777M 93.2G 144K none\n" "mypool/ROOT/default 777M 93.2G 777M /\n" "mypool/tmp 176K 93.2G 176K /tmp\n" "mypool/usr 616K 93.2G 144K /usr\n" "mypool/usr/home 184K 93.2G 184K /usr/home\n" "mypool/usr/ports 144K 93.2G 144K /usr/ports\n" "mypool/usr/src 144K 93.2G 144K /usr/src\n" "mypool/var 1.21M 93.2G 612K /var\n" "mypool/var/crash 148K 93.2G 148K /var/crash\n" "mypool/var/log 178K 93.2G 178K /var/log\n" "mypool/var/mail 144K 93.2G 144K /var/mail\n" "mypool/var/tmp 152K 93.2G 152K /var/tmp\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1571 msgid "" "In modern versions of ZFS, `zfs destroy` is asynchronous, and the free space " "might take minutes to appear in the pool. Use `zpool get freeing " "_poolname_` to see the `freeing` property, which shows which datasets are " "having their blocks freed in the background. If there are child datasets, " "like crossref:zfs[zfs-term-snapshot,snapshots] or other datasets, destroying " "the parent is impossible. To destroy a dataset and its children, use `-r` " "to recursively destroy the dataset and its children. Use `-n -v` to list " "datasets and snapshots destroyed by this operation, without actually " "destroying anything. Space reclaimed by destroying snapshots is also shown." msgstr ""
#. type: Title === #: documentation/content/en/books/handbook/zfs/_index.adoc:1573 #, no-wrap msgid "Creating and Destroying Volumes" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1578 msgid "" "A volume is a special dataset type. Rather than being mounted as a file " "system, ZFS exposes it as a block device under [.filename]#/dev/zvol/" "poolname/dataset#. This allows using the volume for other file systems, to " "back the disks of a virtual machine, or to make it available to other " "network hosts using protocols like iSCSI or HAST." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1583 msgid "" "Format a volume with any file system or without a file system to store raw " "data. To the user, a volume appears to be a regular disk. Putting ordinary " "file systems on these _zvols_ provides features that ordinary disks or file " "systems do not have. For example, using the compression property on a 250 " "MB volume allows creation of a compressed FAT file system." msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1597 #, no-wrap msgid "" "# zfs create -V 250m -o compression=on tank/fat32\n" "# zfs list tank\n" "NAME USED AVAIL REFER MOUNTPOINT\n" "tank 258M 670M 31K /tank\n" "# newfs_msdos -F32 /dev/zvol/tank/fat32\n" "# mount -t msdosfs /dev/zvol/tank/fat32 /mnt\n" "# df -h /mnt | grep fat32\n" "Filesystem Size Used Avail Capacity Mounted on\n" "/dev/zvol/tank/fat32 249M 24k 249M 0% /mnt\n" "# mount | grep fat32\n" "/dev/zvol/tank/fat32 on /mnt (msdosfs, local)\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1601 msgid "" "Destroying a volume is much the same as destroying a regular file system " "dataset. The operation is nearly instantaneous, but it may take minutes to " "reclaim the free space in the background." msgstr ""
#. type: Title === #: documentation/content/en/books/handbook/zfs/_index.adoc:1603 #, no-wrap msgid "Renaming a Dataset" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1610 msgid "" "To change the name of a dataset, use `zfs rename`. To change the parent of " "a dataset, use this command as well. Renaming a dataset to have a different " "parent dataset will change the value of those properties inherited from the " "parent dataset. Renaming a dataset unmounts then remounts it in the new " "location (inherited from the new parent dataset). To prevent this behavior, " "use `-u`." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1612 msgid "Rename a dataset and move it to be under a different parent dataset:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1648 #, no-wrap msgid "" "# zfs list\n" "NAME USED AVAIL REFER MOUNTPOINT\n" "mypool 780M 93.2G 144K none\n" "mypool/ROOT 777M 93.2G 144K none\n" "mypool/ROOT/default 777M 93.2G 777M /\n" "mypool/tmp 176K 93.2G 176K /tmp\n" "mypool/usr 704K 93.2G 144K /usr\n" "mypool/usr/home 184K 93.2G 184K /usr/home\n" "mypool/usr/mydataset 87.5K 93.2G 87.5K /usr/mydataset\n" "mypool/usr/ports 144K 93.2G 144K /usr/ports\n" "mypool/usr/src 144K 93.2G 144K /usr/src\n" "mypool/var 1.21M 93.2G 614K /var\n" "mypool/var/crash 148K 93.2G 148K /var/crash\n" "mypool/var/log 178K 93.2G 178K /var/log\n" "mypool/var/mail 144K 93.2G 144K /var/mail\n" "mypool/var/tmp 152K 93.2G 152K /var/tmp\n" "# zfs rename mypool/usr/mydataset mypool/var/newname\n" "# zfs list\n" "NAME USED AVAIL REFER MOUNTPOINT\n" "mypool 780M 93.2G 144K none\n" "mypool/ROOT 777M 93.2G 144K none\n" "mypool/ROOT/default 777M 93.2G 777M /\n" "mypool/tmp 176K 93.2G 176K /tmp\n" "mypool/usr 616K 93.2G 144K /usr\n" "mypool/usr/home 184K 93.2G 184K /usr/home\n" "mypool/usr/ports 144K 93.2G 144K /usr/ports\n" "mypool/usr/src 144K 93.2G 144K /usr/src\n" "mypool/var 1.29M 93.2G 614K /var\n" "mypool/var/crash 148K 93.2G 148K /var/crash\n" "mypool/var/log 178K 93.2G 178K /var/log\n" "mypool/var/mail 144K 93.2G 144K /var/mail\n" "mypool/var/newname 87.5K 93.2G 87.5K /var/newname\n" "mypool/var/tmp 152K 93.2G 152K /var/tmp\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1653 msgid "" "Renaming snapshots uses the same command. Due to the nature of snapshots, " "rename cannot change their parent dataset. To rename a recursive snapshot, " "specify `-r`; this will also rename all snapshots with the same name in " "child datasets." msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1663 #, no-wrap msgid "" "# zfs list -t snapshot\n" "NAME USED AVAIL REFER MOUNTPOINT\n" "mypool/var/newname@first_snapshot 0 - 87.5K -\n" "# zfs rename mypool/var/newname@first_snapshot new_snapshot_name\n" "# zfs list -t snapshot\n" "NAME USED AVAIL REFER MOUNTPOINT\n" "mypool/var/newname@new_snapshot_name 0 - 87.5K -\n" msgstr ""
#. type: Title === #: documentation/content/en/books/handbook/zfs/_index.adoc:1666 #, no-wrap msgid "Setting Dataset Properties" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1676 msgid "" "Each ZFS dataset has properties that control its behavior. Most properties " "are automatically inherited from the parent dataset, but can be overridden " "locally. Set a property on a dataset with `zfs set _property=value " "dataset_`. Most properties have a limited set of valid values; `zfs get` " "will display each possible property and its valid values. Using `zfs " "inherit` reverts most properties to their inherited values. User-defined " "properties are also possible. They become part of the dataset configuration " "and provide further information about the dataset or its contents. To " "distinguish these custom properties from the ones supplied as part of ZFS, " "use a colon (`:`) to create a custom namespace for the property." msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1683 #, no-wrap msgid "" "# zfs set custom:costcenter=1234 tank\n" "# zfs get custom:costcenter tank\n" "NAME PROPERTY VALUE SOURCE\n" "tank custom:costcenter 1234 local\n" msgstr ""
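#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1683 msgid "" "Built-in properties use the same syntax. A brief sketch (the dataset name " "is illustrative):" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1683 #, no-wrap msgid "" "# zfs set compression=lz4 mypool/usr/home\n" "# zfs get compression mypool/usr/home\n" "NAME PROPERTY VALUE SOURCE\n" "mypool/usr/home compression lz4 local\n" msgstr ""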
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1687 msgid "" "To remove a custom property, use `zfs inherit` with `-r`. If the custom " "property is not defined in any of the parent datasets, this option removes " "it (but the pool's history still records the change)." msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1696 #, no-wrap msgid "" "# zfs inherit -r custom:costcenter tank\n" "# zfs get custom:costcenter tank\n" "NAME PROPERTY VALUE SOURCE\n" "tank custom:costcenter - -\n" "# zfs get all tank | grep custom:costcenter\n" "#\n" msgstr ""
#. type: Title ==== #: documentation/content/en/books/handbook/zfs/_index.adoc:1699 #, no-wrap msgid "Getting and Setting Share Properties" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1705 msgid "" "Two commonly used and useful dataset properties are the NFS and SMB share " "options. Setting these defines if and how ZFS shares datasets on the " "network. At present, FreeBSD supports setting only NFS sharing. To get the " "current status of a share, enter:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1714 #, no-wrap msgid "" "# zfs get sharenfs mypool/usr/home\n" "NAME PROPERTY VALUE SOURCE\n" "mypool/usr/home sharenfs on local\n" "# zfs get sharesmb mypool/usr/home\n" "NAME PROPERTY VALUE SOURCE\n" "mypool/usr/home sharesmb off local\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1717 msgid "To enable sharing of a dataset, enter:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1721 #, no-wrap msgid "# zfs set sharenfs=on mypool/usr/home\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1725 msgid "" "Set other options for sharing datasets through NFS, such as `-alldirs`, `-" "maproot`, and `-network`. To set options on a dataset shared through NFS, " "enter:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1729 #, no-wrap msgid "# zfs set sharenfs=\"-alldirs,-maproot=root,-network=192.168.1.0/24\" mypool/usr/home\n" msgstr ""
#. type: Title === #: documentation/content/en/books/handbook/zfs/_index.adoc:1732 #, no-wrap msgid "Managing Snapshots" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1746 msgid "" "crossref:zfs[zfs-term-snapshot,Snapshots] are one of the most powerful " "features of ZFS. A snapshot provides a read-only, point-in-time copy of the " "dataset. With Copy-On-Write (COW), ZFS creates snapshots quickly by " "preserving older versions of the data on disk. If no snapshots exist, ZFS " "reclaims space for future use when data is rewritten or deleted. Snapshots " "preserve disk space by recording just the differences between the current " "dataset and a previous version. ZFS allows snapshots on whole datasets, not " "on individual files or directories. A snapshot from a dataset duplicates " "everything contained in it. This includes the file system properties, " "files, directories, permissions, and so on. Snapshots use no extra space " "when first created, but consume space as the blocks they reference change. " "Recursive snapshots taken with `-r` create snapshots with the same name on " "the dataset and its children, providing a consistent moment-in-time snapshot " "of the file systems. This can be important when an application has files on " "related datasets that depend upon each other. Without snapshots, a backup " "would have copies of the files from different points in time." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1755 msgid "" "Snapshots in ZFS provide a variety of features that even other file systems " "with snapshot functionality lack. A typical example of snapshot use is as a " "quick way of backing up the current state of the file system when performing " "a risky action like a software installation or a system upgrade. If the " "action fails, rolling back to the snapshot returns the system to the same " "state as when the snapshot was created. If the upgrade was successful, " "delete the snapshot to free up space. Without snapshots, a failed upgrade " "often requires restoring backups, which is tedious, time consuming, and may " "require downtime during which the system is unusable. Rolling back to " "snapshots is fast, even while the system is running in normal operation, " "with little or no downtime. The time savings are enormous with multi-" "terabyte storage systems, considering the time required to copy the data " "from backup. Snapshots are not a replacement for a complete backup of a " "pool, but offer a quick and easy way to store a dataset copy at a specific " "time." msgstr ""
#. type: Title ==== #: documentation/content/en/books/handbook/zfs/_index.adoc:1757 #, no-wrap msgid "Creating Snapshots" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1761 msgid "" "To create snapshots, use `zfs snapshot _dataset_@_snapshotname_`. Adding `-" "r` creates a snapshot recursively, with the same name on all child datasets." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1763 msgid "Create a recursive snapshot of the entire pool:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1801 #, no-wrap msgid "" "# zfs list -t all\n" "NAME USED AVAIL REFER MOUNTPOINT\n" "mypool 780M 93.2G 144K none\n" "mypool/ROOT 777M 93.2G 144K none\n" "mypool/ROOT/default 777M 93.2G 777M /\n" "mypool/tmp 176K 93.2G 176K /tmp\n" "mypool/usr 616K 93.2G 144K /usr\n" "mypool/usr/home 184K 93.2G 184K /usr/home\n" "mypool/usr/ports 144K 93.2G 144K /usr/ports\n" "mypool/usr/src 144K 93.2G 144K /usr/src\n" "mypool/var 1.29M 93.2G 616K /var\n" "mypool/var/crash 148K 93.2G 148K /var/crash\n" "mypool/var/log 178K 93.2G 178K /var/log\n" "mypool/var/mail 144K 93.2G 144K /var/mail\n" "mypool/var/newname 87.5K 93.2G 87.5K /var/newname\n" "mypool/var/newname@new_snapshot_name 0 - 87.5K -\n" "mypool/var/tmp 152K 93.2G 152K /var/tmp\n" "# zfs snapshot -r mypool@my_recursive_snapshot\n" "# zfs list -t snapshot\n" "NAME USED AVAIL REFER MOUNTPOINT\n" "mypool@my_recursive_snapshot 0 - 144K -\n" "mypool/ROOT@my_recursive_snapshot 0 - 144K -\n" "mypool/ROOT/default@my_recursive_snapshot 0 - 777M -\n" "mypool/tmp@my_recursive_snapshot 0 - 176K -\n" "mypool/usr@my_recursive_snapshot 0 - 144K -\n" "mypool/usr/home@my_recursive_snapshot 0 - 184K -\n" "mypool/usr/ports@my_recursive_snapshot 0 - 144K -\n" "mypool/usr/src@my_recursive_snapshot 0 - 144K -\n" "mypool/var@my_recursive_snapshot 0 - 616K -\n" "mypool/var/crash@my_recursive_snapshot 0 - 148K -\n" "mypool/var/log@my_recursive_snapshot 0 - 178K -\n" "mypool/var/mail@my_recursive_snapshot 0 - 144K -\n" "mypool/var/newname@new_snapshot_name 0 - 87.5K -\n" "mypool/var/newname@my_recursive_snapshot 0 - 87.5K -\n" "mypool/var/tmp@my_recursive_snapshot 0 - 152K -\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1806 msgid "" "Snapshots are not shown by a normal `zfs list` operation. To list " "snapshots, append `-t snapshot` to `zfs list`. `-t all` displays both file " "systems and snapshots." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1810 msgid "" "Snapshots are not mounted directly, showing no path in the `MOUNTPOINT` " "column. ZFS does not show available disk space in the `AVAIL` column, as " "snapshots are read-only after their creation. Compare the snapshot to the " "original dataset:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1817 #, no-wrap msgid "" "# zfs list -rt all mypool/usr/home\n" "NAME USED AVAIL REFER MOUNTPOINT\n" "mypool/usr/home 184K 93.2G 184K /usr/home\n" "mypool/usr/home@my_recursive_snapshot 0 - 184K -\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1824 msgid "" "Displaying both the dataset and the snapshot together reveals how snapshots " "work in crossref:zfs[zfs-term-cow,COW] fashion. They save the changes " "(_delta_) made and not the complete file system contents all over again. " "This means that snapshots take little space when making changes. Observe " "space usage even more by copying a file to the dataset, then creating a " "second snapshot:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1834 #, no-wrap msgid "" "# cp /etc/passwd /var/tmp\n" "# zfs snapshot mypool/var/tmp@after_cp\n" "# zfs list -rt all mypool/var/tmp\n" "NAME USED AVAIL REFER MOUNTPOINT\n" "mypool/var/tmp 206K 93.2G 118K /var/tmp\n" "mypool/var/tmp@my_recursive_snapshot 88K - 152K -\n" "mypool/var/tmp@after_cp 0 - 118K -\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1839 msgid "" "The second snapshot contains the changes to the dataset after the copy " "operation. This yields enormous space savings. Notice that the size of the " "snapshot `_mypool/var/tmp@my_recursive_snapshot_` also changed in the `USED` " "column to show the changes between itself and the snapshot taken afterwards." msgstr ""
#. type: Title ==== #: documentation/content/en/books/handbook/zfs/_index.adoc:1841 #, no-wrap msgid "Comparing Snapshots" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1847 msgid "" "ZFS provides a built-in command to compare the differences in content " "between two snapshots. This is helpful when many snapshots have been taken " "over time and the user wants to see how the file system has changed. For " "example, `zfs diff` lets a user find the latest snapshot that still contains " "a file deleted by accident. Doing this for the two snapshots created in the " "previous section yields this output:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1858 #, no-wrap msgid "" "# zfs list -rt all mypool/var/tmp\n" "NAME USED AVAIL REFER MOUNTPOINT\n" "mypool/var/tmp 206K 93.2G 118K /var/tmp\n" "mypool/var/tmp@my_recursive_snapshot 88K - 152K -\n" "mypool/var/tmp@after_cp 0 - 118K -\n" "# zfs diff mypool/var/tmp@my_recursive_snapshot\n" "M /var/tmp/\n" "+ /var/tmp/passwd\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1862 msgid "" "The command lists the changes between the specified snapshot (in this case " "`_mypool/var/tmp@my_recursive_snapshot_`) and the live file system. The " "first column shows the change type:" msgstr ""
#. type: Table #: documentation/content/en/books/handbook/zfs/_index.adoc:1868 #, no-wrap msgid "+" msgstr ""
#. type: Table #: documentation/content/en/books/handbook/zfs/_index.adoc:1870 #, no-wrap msgid "Adding the path or file." msgstr ""
#. type: Table #: documentation/content/en/books/handbook/zfs/_index.adoc:1871 #, no-wrap msgid "-" msgstr ""
#. type: Table #: documentation/content/en/books/handbook/zfs/_index.adoc:1873 #, no-wrap msgid "Deleting the path or file." msgstr ""
#. type: Table #: documentation/content/en/books/handbook/zfs/_index.adoc:1874 #, no-wrap msgid "M" msgstr ""
#. type: Table #: documentation/content/en/books/handbook/zfs/_index.adoc:1876 #, no-wrap msgid "Modifying the path or file." msgstr ""
#. type: Table #: documentation/content/en/books/handbook/zfs/_index.adoc:1877 #, no-wrap msgid "R" msgstr ""
#. type: Table #: documentation/content/en/books/handbook/zfs/_index.adoc:1878 #, no-wrap msgid "Renaming the path or file." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1883 msgid "" "Comparing the output with the table, it becomes clear that ZFS added " "[.filename]#passwd# after creating the snapshot `_mypool/var/" "tmp@my_recursive_snapshot_`. This also resulted in a modification to the " "parent directory mounted at `_/var/tmp_`." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1885 msgid "" "Comparing two snapshots is helpful when using the ZFS replication feature to " "transfer a dataset to a different host for backup purposes." msgstr ""
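#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1885 msgid "" "`zfs diff` also accepts `-F` to show the type of each changed object and `-" "H` to produce tab-separated output without headers, which is easier to " "parse in scripts. A brief sketch against the snapshot from the example " "above (output is illustrative; `F` marks a regular file and `/` a " "directory):" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1885 #, no-wrap msgid "" "# zfs diff -FH mypool/var/tmp@my_recursive_snapshot\n" "M / /var/tmp/\n" "+ F /var/tmp/passwd\n" msgstr ""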
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1887 msgid "" "Compare two snapshots by providing the full dataset name and snapshot name " "of both snapshots:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1899 #, no-wrap msgid "" "# cp /var/tmp/passwd /var/tmp/passwd.copy\n" "# zfs snapshot mypool/var/tmp@diff_snapshot\n" "# zfs diff mypool/var/tmp@my_recursive_snapshot mypool/var/tmp@diff_snapshot\n" "M /var/tmp/\n" "+ /var/tmp/passwd\n" "+ /var/tmp/passwd.copy\n" "# zfs diff mypool/var/tmp@my_recursive_snapshot mypool/var/tmp@after_cp\n" "M /var/tmp/\n" "+ /var/tmp/passwd\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1903 msgid "" "A backup administrator can compare two snapshots received from the sending " "host and determine the actual changes in the dataset. See the " "crossref:zfs[zfs-zfs-send,Replication] section for more information." msgstr ""
#. type: Title ==== #: documentation/content/en/books/handbook/zfs/_index.adoc:1905 #, no-wrap msgid "Snapshot Rollback" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1918 msgid "" "When at least one snapshot is available, roll back to it at any time. Most " "often this is the case when the current state of the dataset is no longer " "valid or an older version is preferred. Scenarios such as local development " "tests gone wrong, botched system updates hampering the system functionality, " "or the need to restore deleted files or directories are all too common " "occurrences. To roll back a snapshot, use `zfs rollback _snapshotname_`. " "If a lot of changes are present, the operation will take a long time. " "During that time, the dataset always remains in a consistent state, much " "like a database that conforms to ACID principles would during a rollback. " "This happens while the dataset is live and accessible, without requiring " "downtime. Once the rollback completes, the dataset has the same state as it " "had when the snapshot was originally taken. Rolling back to a snapshot " "discards all other data in that dataset not part of the snapshot. Taking a " "snapshot of the current state of the dataset before rolling back to a " "previous one is a good idea if some of the current data may be needed " "later. This way, the user can roll back and forth between snapshots without " "losing data that is still valuable." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1920 msgid "" "In the first example, roll back a snapshot because a careless `rm` operation " "removed more data than intended." msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1934 #, no-wrap msgid "" "# zfs list -rt all mypool/var/tmp\n" "NAME USED AVAIL REFER MOUNTPOINT\n" "mypool/var/tmp 262K 93.2G 120K /var/tmp\n" "mypool/var/tmp@my_recursive_snapshot 88K - 152K -\n" "mypool/var/tmp@after_cp 53.5K - 118K -\n" "mypool/var/tmp@diff_snapshot 0 - 120K -\n" "# ls /var/tmp\n" "passwd passwd.copy vi.recover\n" "# rm /var/tmp/passwd*\n" "# ls /var/tmp\n" "vi.recover\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1939 msgid "" "At this point, the user notices the removal of extra files and wants them " "back. ZFS provides an easy way to get them back using rollbacks, provided " "snapshots of important data are taken on a regular basis. 
To get the files " "back and start over from the last snapshot, issue the command:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1945 #, no-wrap msgid "" "# zfs rollback mypool/var/tmp@diff_snapshot\n" "# ls /var/tmp\n" "passwd passwd.copy vi.recover\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1950 msgid "" "The rollback operation restored the dataset to the state of the last " "snapshot. Rolling back to a snapshot taken much earlier, with other " "snapshots taken afterwards, is also possible. When trying to do this, ZFS " "will issue this warning:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1963 #, no-wrap msgid "" "# zfs list -rt snapshot mypool/var/tmp\n" "NAME USED AVAIL REFER MOUNTPOINT\n" "mypool/var/tmp@my_recursive_snapshot 88K - 152K -\n" "mypool/var/tmp@after_cp 53.5K - 118K -\n" "mypool/var/tmp@diff_snapshot 0 - 120K -\n" "# zfs rollback mypool/var/tmp@my_recursive_snapshot\n" "cannot rollback to 'mypool/var/tmp@my_recursive_snapshot': more recent snapshots exist\n" "use '-r' to force deletion of the following snapshots:\n" "mypool/var/tmp@after_cp\n" "mypool/var/tmp@diff_snapshot\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1970 msgid "" "This warning means that snapshots exist between the current state of the " "dataset and the snapshot to which the user wants to roll back. To complete " "the rollback, delete these snapshots. ZFS cannot track all the changes " "between different states of the dataset, because snapshots are read-only. " "ZFS will not delete the affected snapshots unless the user specifies `-r` to " "confirm that this is the desired action. If that is the intention, and the " "consequences of losing all intermediate snapshots are understood, issue the " "command:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:1979 #, no-wrap msgid "" "# zfs rollback -r mypool/var/tmp@my_recursive_snapshot\n" "# zfs list -rt snapshot mypool/var/tmp\n" "NAME USED AVAIL REFER MOUNTPOINT\n" "mypool/var/tmp@my_recursive_snapshot 8K - 152K -\n" "# ls /var/tmp\n" "vi.recover\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1982 msgid "" "The output from `zfs list -t snapshot` confirms the removal of the " "intermediate snapshots as a result of `zfs rollback -r`." msgstr ""
#. type: Title ==== #: documentation/content/en/books/handbook/zfs/_index.adoc:1984 #, no-wrap msgid "Restoring Individual Files from Snapshots" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:1991 msgid "" "Snapshots live in a hidden directory under the parent dataset: " "[.filename]#.zfs/snapshot/snapshotname#. By default, these directories are " "not displayed even when executing a standard `ls -a`. Although the " "directory is not displayed, access it like any normal directory. The " "property named `snapdir` controls whether these hidden directories show up " "in a directory listing. Setting the property to `visible` allows them to " "appear in the output of `ls` and other commands that deal with directory " "contents." msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2002 #, no-wrap msgid "" "# zfs get snapdir mypool/var/tmp\n" "NAME PROPERTY VALUE SOURCE\n" "mypool/var/tmp snapdir hidden default\n" "# ls -a /var/tmp\n" ". .. 
passwd vi.recover\n" "# zfs set snapdir=visible mypool/var/tmp\n" "# ls -a /var/tmp\n" ". .. .zfs passwd vi.recover\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2007 msgid "" "Restore individual files to a previous state by copying them from the " "snapshot back to the parent dataset. The directory structure below " "[.filename]#.zfs/snapshot# has a directory named after each of the snapshots " "taken earlier, making them easier to identify. The next example shows how " "to restore a file from the hidden [.filename]#.zfs# directory by copying it " "from the snapshot containing the latest version of the file:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2018 #, no-wrap msgid "" "# rm /var/tmp/passwd\n" "# ls -a /var/tmp\n" ". .. .zfs vi.recover\n" "# ls /var/tmp/.zfs/snapshot\n" "after_cp my_recursive_snapshot\n" "# ls /var/tmp/.zfs/snapshot/after_cp\n" "passwd vi.recover\n" "# cp /var/tmp/.zfs/snapshot/after_cp/passwd /var/tmp\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2025 msgid "" "Even if the `snapdir` property is set to hidden, running `ls .zfs/snapshot` " "will still list the contents of that directory. The administrator decides " "whether to display these directories. This is a per-dataset setting. " "Copying files or directories from this hidden [.filename]#.zfs/snapshot# is " "simple enough. Trying it the other way around results in this error:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2030 #, no-wrap msgid "" "# cp /etc/rc.conf /var/tmp/.zfs/snapshot/after_cp/\n" "cp: /var/tmp/.zfs/snapshot/after_cp/rc.conf: Read-only file system\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2034 msgid "" "The error reminds the user that snapshots are read-only and cannot change " "after creation. Copying files into and removing them from snapshot " "directories are both disallowed because that would change the state of the " "dataset they represent." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2037 msgid "" "Snapshots consume space based on how much the parent file system has changed " "since the time of the snapshot. The `written` property of a snapshot tracks " "the space the snapshot uses." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2041 msgid "" "To destroy snapshots and reclaim the space, use `zfs destroy " "_dataset_@_snapshot_`. Adding `-r` recursively removes all snapshots with " "the same name under the parent dataset. Adding `-n -v` to the command " "displays a list of the snapshots to be deleted and an estimate of the space " "it would reclaim without performing the actual destroy operation." msgstr ""
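#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2041 msgid "" "A brief sketch of such a dry run against one of the snapshots created " "earlier (the reported space figure is illustrative):" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2041 #, no-wrap msgid "" "# zfs destroy -nv mypool/var/tmp@my_recursive_snapshot\n" "would destroy mypool/var/tmp@my_recursive_snapshot\n" "would reclaim 88K\n" msgstr ""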
Promoting a clone makes the " "snapshot become a child of the clone, rather than of the original parent " "dataset. This will change how ZFS accounts for the space, but not actually " "change the amount of space consumed. Mounting the clone anywhere within the " "ZFS file system hierarchy is possible, not only below the original location " "of the snapshot." msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2054 msgid "To show the clone feature use this example dataset:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2062 #, no-wrap msgid "" "# zfs list -rt all camino/home/joe\n" "NAME USED AVAIL REFER MOUNTPOINT\n" "camino/home/joe 108K 1.3G 87K /usr/home/joe\n" "camino/home/joe@plans 21K - 85.5K -\n" "camino/home/joe@backup 0K - 87K -\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2068 msgid "" "A typical use for clones is to experiment with a specific dataset while " "keeping the snapshot around to fall back to in case something goes wrong. " "Since snapshots cannot change, create a read/write clone of a snapshot. " "After achieving the desired result in the clone, promote the clone to a " "dataset and remove the old file system. Removing the parent dataset is not " "strictly necessary, as the clone and dataset can coexist without problems." msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2075 #, no-wrap msgid "" "# zfs clone camino/home/joe@backup camino/home/joenew\n" "# ls /usr/home/joe*\n" "/usr/home/joe:\n" "backup.txz plans.txt\n" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2082 #, no-wrap msgid "" "/usr/home/joenew:\n" "backup.txz plans.txt\n" "# df -h /usr/home\n" "Filesystem Size Used Avail Capacity Mounted on\n" "usr/home/joe 1.3G 31k 1.3G 0% /usr/home/joe\n" "usr/home/joenew 1.3G 31k 1.3G 0% /usr/home/joenew\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2091 msgid "" "Creating a clone makes it an exact copy of the state the dataset was in when " "taking the snapshot. Changing the clone independently from its originating " "dataset is possible now. The connection between the two is the snapshot. " "ZFS records this connection in the property `origin`. Promoting the clone " "with `zfs promote` makes the clone an independent dataset. This removes the " "value of the `origin` property and disconnects the newly independent dataset " "from the snapshot. This example shows it:" msgstr "" #. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2101 #, no-wrap msgid "" "# zfs get origin camino/home/joenew\n" "NAME PROPERTY VALUE SOURCE\n" "camino/home/joenew origin camino/home/joe@backup -\n" "# zfs promote camino/home/joenew\n" "# zfs get origin camino/home/joenew\n" "NAME PROPERTY VALUE SOURCE\n" "camino/home/joenew origin - -\n" msgstr "" #. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2106 msgid "" "After making some changes like copying [.filename]#loader.conf# to the " "promoted clone, for example, the old directory becomes obsolete in this " "case. Instead, the promoted clone can replace it. To do this, `zfs " "destroy` the old dataset first and then `zfs rename` the clone to the old " "dataset name (or to an entirely different name)." msgstr "" #. type: delimited block . 
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2117 #, no-wrap msgid "" "# cp /boot/defaults/loader.conf /usr/home/joenew\n" "# zfs destroy -f camino/home/joe\n" "# zfs rename camino/home/joenew camino/home/joe\n" "# ls /usr/home/joe\n" "backup.txz loader.conf plans.txt\n" "# df -h /usr/home\n" "Filesystem Size Used Avail Capacity Mounted on\n" "usr/home/joe 1.3G 128k 1.3G 0% /usr/home/joe\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2126 msgid "" "The cloned snapshot is now an ordinary dataset. It contains all the data " "from the original snapshot plus the files added to it like " "[.filename]#loader.conf#. Clones provide useful features to ZFS users in " "different scenarios. For example, jails can be provided as snapshots " "containing different sets of installed applications. Users can clone these " "snapshots and add their own applications as they see fit. Once satisfied " "with the changes, promote the clones to full datasets and provide them to " "end users to work with as they would with a real dataset. This saves time " "and administrative overhead when providing these jails." msgstr ""
#. type: Title === #: documentation/content/en/books/handbook/zfs/_index.adoc:2128 #, no-wrap msgid "Replication" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2137 msgid "" "Keeping data on a single pool in one location exposes it to risks like theft " "and natural or human disasters. Making regular backups of the entire pool " "is vital. ZFS provides a built-in serialization feature that can send a " "stream representation of the data to standard output. Using this feature, " "storing this data on another pool connected to the local system is possible, " "as is sending it over a network to another system. Snapshots are the basis " "for this replication (see the section on crossref:zfs[zfs-zfs-snapshot,ZFS " "snapshots]). The commands used for replicating data are `zfs send` and `zfs " "receive`." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2139 msgid "These examples show ZFS replication with these two pools:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2146 #, no-wrap msgid "" "# zpool list\n" "NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT\n" "backup 960M 77K 896M - - 0% 0% 1.00x ONLINE -\n" "mypool 984M 43.7M 940M - - 0% 4% 1.00x ONLINE -\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2154 msgid "" "The pool named _mypool_ is the primary pool where writing and reading data " "happens on a regular basis. A second pool, _backup_, serves as a standby in " "case the primary pool becomes unavailable. Note that this fail-over is not " "done automatically by ZFS, but must be manually done by a system " "administrator when needed. Use a snapshot to provide a consistent file " "system version to replicate. After creating a snapshot of _mypool_, copy it " "to the _backup_ pool by replicating snapshots. This does not include " "changes made since the most recent snapshot." msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2161 #, no-wrap msgid "" "# zfs snapshot mypool@backup1\n" "# zfs list -t snapshot\n" "NAME USED AVAIL REFER MOUNTPOINT\n" "mypool@backup1 0 - 43.6M -\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2166 msgid "" "Now that a snapshot exists, use `zfs send` to create a stream representing " "the contents of the snapshot. Store this stream as a file or receive it on " "another pool. ZFS writes the stream to standard output; redirect it to a " "file or a pipe, or an error appears:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2172 #, no-wrap msgid "" "# zfs send mypool@backup1\n" "Error: Stream can not be written to a terminal.\n" "You must redirect standard output.\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2176 msgid "" "To back up a dataset with `zfs send`, redirect to a file located on the " "mounted backup pool. Ensure that the pool has enough free space to " "accommodate the size of the sent snapshot, which means the data contained in " "the snapshot, not the changes from the previous snapshot." msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2184 #, no-wrap msgid "" "# zfs send mypool@backup1 > /backup/backup1\n" "# zpool list\n" "NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT\n" "backup 960M 63.7M 896M - - 0% 6% 1.00x ONLINE -\n" "mypool 984M 43.7M 940M - - 0% 4% 1.00x ONLINE -\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2188 msgid "" "`zfs send` transferred all the data in the snapshot called _backup1_ to " "the pool named _backup_. To create and send these snapshots automatically, " "use a man:cron[8] job." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2194 msgid "" "Instead of storing the backups as archive files, ZFS can receive them as a " "live file system, allowing direct access to the backed up data. To get to " "the actual data contained in those streams, use `zfs receive` to transform " "the streams back into files and directories. The example below combines " "`zfs send` and `zfs receive` using a pipe to copy the data from one pool to " "another. Use the data directly on the receiving pool after the transfer is " "complete. It is only possible to replicate a dataset to an empty dataset." msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2202 #, no-wrap msgid "" "# zfs snapshot mypool@replica1\n" "# zfs send -v mypool@replica1 | zfs receive backup/mypool\n" "send from @ to mypool@replica1 estimated size is 50.1M\n" "total estimated size is 50.1M\n" "TIME SENT SNAPSHOT\n" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2207 #, no-wrap msgid "" "# zpool list\n" "NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT\n" "backup 960M 63.7M 896M - - 0% 6% 1.00x ONLINE -\n" "mypool 984M 43.7M 940M - - 0% 4% 1.00x ONLINE -\n" msgstr ""
#. type: Title ==== #: documentation/content/en/books/handbook/zfs/_index.adoc:2210 #, no-wrap msgid "Incremental Backups" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2215 msgid "" "`zfs send` can also determine the difference between two snapshots and send " "only the differences between the two. This saves disk space and transfer " "time. For example:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2227 #, no-wrap msgid "" "# zfs snapshot mypool@replica2\n" "# zfs list -t snapshot\n" "NAME USED AVAIL REFER MOUNTPOINT\n" "mypool@replica1 5.72M - 43.6M -\n" "mypool@replica2 0 - 44.1M -\n" "# zpool list\n" "NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT\n" "backup 960M 61.7M 898M - - 0% 6% 1.00x ONLINE -\n" "mypool 960M 50.2M 910M - - 0% 5% 1.00x ONLINE -\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2233 msgid "" "Create a second snapshot called _replica2_. This second snapshot contains " "changes made to the file system between now and the previous snapshot, " "_replica1_. Using `zfs send -i` and indicating the pair of snapshots " "generates an incremental replica stream containing the changed data. This " "succeeds only if the initial snapshot already exists on the receiving side." msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2240 #, no-wrap msgid "" "# zfs send -v -i mypool@replica1 mypool@replica2 | zfs receive backup/mypool\n" "send from @replica1 to mypool@replica2 estimated size is 5.02M\n" "total estimated size is 5.02M\n" "TIME SENT SNAPSHOT\n" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2245 #, no-wrap msgid "" "# zpool list\n" "NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT\n" "backup 960M 80.8M 879M - - 0% 8% 1.00x ONLINE -\n" "mypool 960M 50.2M 910M - - 0% 5% 1.00x ONLINE -\n" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2251 #, no-wrap msgid "" "# zfs list\n" "NAME USED AVAIL REFER MOUNTPOINT\n" "backup 55.4M 240G 152K /backup\n" "backup/mypool 55.3M 240G 55.2M /backup/mypool\n" "mypool 55.6M 11.6G 55.0M /mypool\n" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2258 #, no-wrap msgid "" "# zfs list -t snapshot\n" "NAME USED AVAIL REFER MOUNTPOINT\n" "backup/mypool@replica1 104K - 50.2M -\n" "backup/mypool@replica2 0 - 55.2M -\n" "mypool@replica1 29.9K - 50.0M -\n" "mypool@replica2 0 - 55.0M -\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2263 msgid "" "The incremental stream replicated the changed data rather than the entirety " "of _replica1_. Sending the differences alone took much less time to " "transfer and saved disk space by not copying the whole pool each time. This " "is useful when replicating over a slow network or one charging per " "transferred byte." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2268 msgid "" "A new file system, _backup/mypool_, is available with the files and data " "from the pool _mypool_. Specifying `-p` copies the dataset properties " "including compression settings, quotas, and mount points. Specifying `-R` " "copies all child datasets of the dataset along with their properties. " "Automate sending and receiving to create regular backups on the second pool." msgstr ""
#. type: Title ==== #: documentation/content/en/books/handbook/zfs/_index.adoc:2270 #, no-wrap msgid "Sending Encrypted Backups over SSH" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2278 msgid "" "Sending streams over the network is a good way to keep a remote backup, but " "it does come with a drawback. Data sent over the network link is not " "encrypted, allowing anyone to intercept and transform the streams back into " "data without the knowledge of the sending user. This is undesirable when " "sending the streams over the internet to a remote host. Use SSH to securely " "encrypt data sent over a network connection. Since ZFS requires redirecting " "the stream from standard output, piping it through SSH is easy. To keep the " "contents of the file system encrypted in transit and on the remote system, " "consider using https://wiki.freebsd.org/PEFS[PEFS]." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2281 msgid "" "Change some settings and take security precautions first. This describes " "the steps required for the `zfs send` operation; for more information on " "SSH, see crossref:security[openssh,\"OpenSSH\"]." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2283 msgid "Change the configuration as follows:" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2285 msgid "" "Passwordless SSH access between sending and receiving hosts using SSH keys" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2286 msgid "" "ZFS requires the privileges of the `root` user to send and receive streams. " "This requires logging in to the receiving system as `root`." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2287 msgid "For security reasons, `root` cannot log in over SSH by default." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2289 msgid "" "Use the crossref:zfs[zfs-zfs-allow,ZFS Delegation] system to allow a non-" "`root` user on each system to perform the respective send and receive " "operations. On the sending system:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2293 #, no-wrap msgid "# zfs allow -u someuser send,snapshot mypool\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2296 msgid "" "To mount the pool, the unprivileged user must own the directory, and regular " "users need permission to mount file systems." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2298 msgid "On the receiving system:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2307 #, no-wrap msgid "" "# sysctl vfs.usermount=1\n" "vfs.usermount: 0 -> 1\n" "# echo vfs.usermount=1 >> /etc/sysctl.conf\n" "# zfs create recvpool/backup\n" "# zfs allow -u someuser create,mount,receive recvpool/backup\n" "# chown someuser /recvpool/backup\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2310 msgid "" "The unprivileged user can now receive and mount datasets, and can replicate " "the _home_ dataset to the remote system:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2315 #, no-wrap msgid "" "% zfs snapshot -r mypool/home@monday\n" "% zfs send -R mypool/home@monday | ssh someuser@backuphost zfs recv -dvu recvpool/backup\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2325 msgid "" "Create a recursive snapshot called _monday_ of the file system dataset " "_home_ on the pool _mypool_. Then `zfs send -R` includes the dataset, all " "child datasets, snapshots, clones, and settings in the stream. Pipe the " "output through SSH to the waiting `zfs receive` on the remote host " "_backuphost_. Using an IP address or fully qualified domain name is good " "practice. The receiving machine writes the data to the _backup_ dataset on " "the _recvpool_ pool. Adding `-d` to `zfs recv` discards the pool name of " "the sent snapshot, receiving the rest of the dataset path under the target " "dataset instead. `-u` prevents the file systems from mounting on the " "receiving side. Using `-v` shows more details about the transfer, including " "the elapsed time and the amount of data transferred." msgstr ""
#. type: Title === #: documentation/content/en/books/handbook/zfs/_index.adoc:2327 #, no-wrap msgid "Dataset, User, and Group Quotas" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2333 msgid "" "Use crossref:zfs[zfs-term-quota,Dataset quotas] to restrict the amount of " "space consumed by a particular dataset. crossref:zfs[zfs-term-" "refquota,Reference Quotas] work in much the same way, but count the space " "used by the dataset itself, excluding snapshots and child datasets. " "Similarly, use crossref:zfs[zfs-term-userquota,user] and crossref:zfs[zfs-" "term-groupquota,group] quotas to prevent users or groups from using up all " "the space in the pool or dataset." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2338 msgid "" "The following examples assume that the users already exist in the system. " "Before adding a user to the system, make sure to create their home dataset " "first and set the `mountpoint` to `/home/_bob_`. Then, create the user and " "make the home directory point to the dataset's `mountpoint` location. This " "will properly set owner and group permissions without shadowing any pre-" "existing home directory paths." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2340 msgid "To enforce a dataset quota of 10 GB for [.filename]#storage/home/bob#:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2344 #, no-wrap msgid "# zfs set quota=10G storage/home/bob\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2347 msgid "" "To enforce a reference quota of 10 GB for [.filename]#storage/home/bob#:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2351 #, no-wrap msgid "# zfs set refquota=10G storage/home/bob\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2354 msgid "To remove the quota for [.filename]#storage/home/bob#:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2358 #, no-wrap msgid "# zfs set quota=none storage/home/bob\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2361 msgid "" "The general format is `userquota@_user_=_size_`, and the user's name must be " "in one of these formats:" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2363 msgid "POSIX-compatible name such as _joe_." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2364 msgid "POSIX numeric ID such as _789_." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2365 msgid "SID name such as _joe.bloggs@example.com_." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2366 msgid "SID numeric ID such as _S-1-123-456-789_." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2368 msgid "For example, to enforce a user quota of 50 GB for the user named _joe_:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2372 #, no-wrap msgid "# zfs set userquota@joe=50G storage/home/bob\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2375 msgid "To remove any quota:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2379 #, no-wrap msgid "# zfs set userquota@joe=none storage/home/bob\n" msgstr ""
#. type: delimited block = 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2386 msgid "" "User quota properties are not displayed by `zfs get all`. Non-`root` users " "can't see others' quotas unless granted the `userquota` privilege. Users " "with this privilege are able to view and set everyone's quota." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2389 msgid "" "The general format for setting a group quota is: `groupquota@_group_=_size_`." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2391 msgid "To set the quota for the group _firstgroup_ to 50 GB, use:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2395 #, no-wrap msgid "# zfs set groupquota@firstgroup=50G storage/home/bob\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2398 msgid "" "To remove the quota for the group _firstgroup_, or to make sure that one is " "not set, instead use:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2402 #, no-wrap msgid "# zfs set groupquota@firstgroup=none storage/home/bob\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2406 msgid "" "As with the user quota property, non-`root` users can see the quotas " "associated with the groups to which they belong. A user with the " "`groupquota` privilege or `root` can view and set all quotas for all groups." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2410 msgid "" "To display the amount of space used by each user on a file system or " "snapshot along with any quotas, use `zfs userspace`. For group information, " "use `zfs groupspace`. For more information about supported options or how " "to display specific options alone, refer to man:zfs[1]." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2412 msgid "" "Privileged users and `root` can list the quota for [.filename]#storage/home/" "bob# using:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2416 #, no-wrap msgid "# zfs get quota storage/home/bob\n" msgstr ""
#. type: Title === #: documentation/content/en/books/handbook/zfs/_index.adoc:2419 #, no-wrap msgid "Reservations" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2424 msgid "" "crossref:zfs[zfs-term-reservation,Reservations] guarantee an always-" "available amount of space on a dataset. The reserved space will not be " "available to any other dataset. This useful feature ensures that free space " "is available for an important dataset or log files." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2426 msgid "" "The general format of the `reservation` property is `reservation=_size_`, so " "to set a reservation of 10 GB on [.filename]#storage/home/bob#, use:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2430 #, no-wrap msgid "# zfs set reservation=10G storage/home/bob\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2433 msgid "To clear any reservation:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2437 #, no-wrap msgid "# zfs set reservation=none storage/home/bob\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2441 msgid "" "The same principle applies to the `refreservation` property for setting a " "crossref:zfs[zfs-term-refreservation,Reference Reservation], with the " "general format `refreservation=_size_`." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2443 msgid "" "This command shows any reservations or refreservations that exist on " "[.filename]#storage/home/bob#:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2448 #, no-wrap msgid "" "# zfs get reservation storage/home/bob\n" "# zfs get refreservation storage/home/bob\n" msgstr ""
#. type: Title === #: documentation/content/en/books/handbook/zfs/_index.adoc:2451 #, no-wrap msgid "Compression" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2458 msgid "" "ZFS provides transparent compression. Compressing data written at the block " "level saves space and also increases disk throughput. If data compresses by " "25%, the compressed data writes to the disk at the same rate as the " "uncompressed version, resulting in an effective write speed of 125%. " "Compression can also be a great alternative to crossref:zfs[zfs-zfs-" "deduplication,Deduplication] because it does not require extra memory." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2465 msgid "" "ZFS offers different compression algorithms, each with different trade-" "offs. The introduction of LZ4 compression in ZFS v5000 enables compressing " "the entire pool without the large performance trade-off of other " "algorithms. The biggest advantage to LZ4 is the _early abort_ feature. If " "LZ4 does not achieve at least 12.5% compression in the header part of the " "data, ZFS writes the block uncompressed to avoid wasting CPU cycles trying " "to compress data that is either already compressed or uncompressible. For " "details about the different compression algorithms available in ZFS, see the " "crossref:zfs[zfs-term-compression,Compression] entry in the terminology " "section." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2467 msgid "" "The administrator can see the effectiveness of compression using dataset " "properties." msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2476 #, no-wrap msgid "" "# zfs get used,compressratio,compression,logicalused mypool/compressed_dataset\n" "NAME PROPERTY VALUE SOURCE\n" "mypool/compressed_dataset used 449G -\n" "mypool/compressed_dataset compressratio 1.11x -\n" "mypool/compressed_dataset compression lz4 local\n" "mypool/compressed_dataset logicalused 496G -\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2481 msgid "" "The dataset is using 449 GB of space (the `used` property). Without " "compression, it would have taken 496 GB of space (the `logicalused` " "property). This results in a 1.11:1 compression ratio." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2488 msgid "" "Compression can have an unexpected side effect when combined with " "crossref:zfs[zfs-term-userquota,User Quotas]. User quotas restrict how much " "actual space a user consumes on a dataset _after compression_. If a user " "has a quota of 10 GB, and writes 10 GB of compressible data, they will still " "be able to store more data. If they later update a file, say a database, " "with more or less compressible data, the amount of space available to them " "will change. This can result in the odd situation where a user did not " "increase the actual amount of data (the `logicalused` property), but the " "change in compression caused them to reach their quota limit." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2492 msgid "" "Compression can have a similar unexpected interaction with backups. Quotas " "are often used to limit data storage to ensure there is enough backup space " "available. Since quotas do not consider compression, ZFS may write more " "data than would fit with uncompressed backups." msgstr ""
#. type: Title === #: documentation/content/en/books/handbook/zfs/_index.adoc:2494 #, no-wrap msgid "Zstandard Compression" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2498 msgid "" "OpenZFS 2.0 added a new compression algorithm. Zstandard (Zstd) offers " "higher compression ratios than the default LZ4 while providing much greater " "speeds than the alternative, gzip. OpenZFS 2.0 is available starting with " "FreeBSD 12.1-RELEASE via package:sysutils/openzfs[] and has been the default " "since FreeBSD 13.0-RELEASE." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2502 msgid "" "Zstd provides a large selection of compression levels, offering fine-" "grained control over performance versus compression ratio. One of the main " "advantages of Zstd is that the decompression speed is independent of the " "compression level. For data written once but read often, Zstd allows the " "use of the highest compression levels without a read performance penalty." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2507 msgid "" "Even with frequent data updates, enabling compression often provides higher " "performance. One of the biggest advantages comes from the compressed ARC " "feature. ZFS's Adaptive Replacement Cache (ARC) caches the compressed " "version of the data in RAM, decompressing it each time it is needed. This " "allows the same amount of RAM to store more data and metadata, increasing " "the cache hit ratio." msgstr ""
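#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2507 msgid "" "For example, to enable Zstd at the default level on one dataset and at a " "high level on an archive dataset (the dataset names here are placeholders " "for illustration):" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2507 #, no-wrap msgid "" "# zfs set compression=zstd mypool/mydataset\n" "# zfs set compression=zstd-19 mypool/archive\n" msgstr ""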
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2513 msgid "" "ZFS offers 19 levels of Zstd compression, each offering incrementally more " "space savings in exchange for slower compression. The default level is " "`zstd-3` and offers greater compression than LZ4 without being much slower. " "Levels above 10 require large amounts of memory to compress each block, and " "systems with less than 16 GB of RAM should not use them. ZFS also supports " "a selection of the Zstd _fast_ levels, which get correspondingly faster but " "provide lower compression ratios. ZFS supports `zstd-fast-1` through " "`zstd-fast-10`, `zstd-fast-20` through `zstd-fast-100` in increments of 10, " "and `zstd-fast-500` and `zstd-fast-1000` which provide minimal compression, " "but offer high performance." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2517 msgid "" "If ZFS is not able to get the required memory to compress a block with Zstd, " "it will fall back to storing the block uncompressed. This is unlikely to " "happen except at the highest levels of Zstd on memory constrained systems. " "The `kstat.zfs.misc.zstd.compress_alloc_fail` counter records how often this " "has occurred since loading the ZFS module." msgstr ""
#. type: Title === #: documentation/content/en/books/handbook/zfs/_index.adoc:2519 #, no-wrap msgid "Deduplication" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2525 msgid "" "When enabled, crossref:zfs[zfs-term-deduplication,deduplication] uses the " "checksum of each block to detect duplicate blocks. When a new block is a " "duplicate of an existing block, ZFS writes a new reference to the existing " "data instead of the whole duplicate block. Tremendous space savings are " "possible if the data contains a lot of duplicated files or repeated " "information. Warning: deduplication requires a large amount of memory, and " "enabling compression instead provides most of the space savings without the " "extra cost." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2527 msgid "To activate deduplication, set the `dedup` property on the target pool:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2531 #, no-wrap msgid "# zfs set dedup=on pool\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2536 msgid "" "Deduplication only affects new data written to the pool. Merely activating " "this option will not deduplicate data already written to the pool. A pool " "with a freshly activated deduplication property will look like this example:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2542 #, no-wrap msgid "" "# zpool list\n" "NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT\n" "pool 2.84G 2.19M 2.83G - - 0% 0% 1.00x ONLINE -\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2547 msgid "" "The `DEDUP` column shows the actual rate of deduplication for the pool. A " "value of `1.00x` shows that no data has been deduplicated yet. The next " "example copies some system binaries three times into different directories " "on the deduplicated pool created above." msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2553 #, no-wrap msgid "" "# for d in dir1 dir2 dir3; do\n" "> mkdir $d && cp -R /usr/bin $d &\n" "> done\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2556 msgid "To observe the deduplication of the redundant data, use:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2562 #, no-wrap msgid "" "# zpool list\n" "NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT\n" "pool 2.84G 20.9M 2.82G - - 0% 0% 3.00x ONLINE -\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2567 msgid "" "The `DEDUP` column shows a factor of `3.00x`. ZFS detected and deduplicated " "the copies of the data, so they use just a third of the space. The " "potential for space savings can be enormous, but comes at the cost of having " "enough memory to keep track of the deduplicated blocks." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2570 msgid "" "Deduplication is not always beneficial, especially when the data in a pool " "is not redundant. ZFS can show potential space savings by simulating " "deduplication on an existing pool:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2575 #, no-wrap msgid "" "# zdb -S pool\n" "Simulated DDT histogram:\n" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2591 #, no-wrap msgid "" "bucket allocated referenced\n" "______ ______________________________ ______________________________\n" "refcnt blocks LSIZE PSIZE DSIZE blocks LSIZE PSIZE DSIZE\n" "------ ------ ----- ----- ----- ------ ----- ----- -----\n" " 1 2.58M 289G 264G 264G 2.58M 289G 264G 264G\n" " 2 206K 12.6G 10.4G 10.4G 430K 26.4G 21.6G 21.6G\n" " 4 37.6K 692M 276M 276M 170K 3.04G 1.26G 1.26G\n" " 8 2.18K 45.2M 19.4M 19.4M 20.0K 425M 176M 176M\n" " 16 174 2.83M 1.20M 1.20M 3.33K 48.4M 20.4M 20.4M\n" " 32 40 2.17M 222K 222K 1.70K 97.2M 9.91M 9.91M\n" " 64 9 56K 10.5K 10.5K 865 4.96M 948K 948K\n" " 128 2 9.50K 2K 2K 419 2.11M 438K 438K\n" " 256 5 61.5K 12K 12K 1.90K 23.0M 4.47M 4.47M\n" " 1K 2 1K 1K 1K 2.98K 1.49M 1.49M 1.49M\n" " Total 2.82M 303G 275G 275G 3.20M 319G 287G 287G\n" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2593 #, no-wrap msgid "dedup = 1.05, compress = 1.11, copies = 1.00, dedup * compress / copies = 1.16\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2603 msgid "" "After `zdb -S` finishes analyzing the pool, it shows the space reduction " "ratio that activating deduplication would achieve. In this case, `1.16` is " "a poor space saving ratio mainly provided by compression. Activating " "deduplication on this pool would not save any significant amount of space, " "and is not worth the amount of memory required to enable deduplication. " "Using the formula _ratio = dedup * compress / copies_, system administrators " "can plan the storage allocation, deciding whether the workload will contain " "enough duplicate blocks to justify the memory requirements. If the data is " "reasonably compressible, the space savings may be good. Good practice is to " "enable compression first, as compression also provides greatly increased " "performance. Enable deduplication in cases where savings are considerable " "and with enough available memory for the crossref:zfs[zfs-term-" "deduplication,DDT]." msgstr ""
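#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2603 msgid "" "As a rough sketch, assuming the commonly cited figure of about 320 bytes of " "RAM per unique block in the DDT, the 2.82M allocated blocks reported by `zdb " "-S` above would need roughly 860 MB of memory:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2603 #, no-wrap msgid "" "# echo \"2.82 * 1000000 * 320 / 1024 / 1024\" | bc\n" "860\n" msgstr ""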
#. type: Title === #: documentation/content/en/books/handbook/zfs/_index.adoc:2605 #, no-wrap msgid "ZFS and Jails" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2611 msgid "" "Use `zfs jail` and the corresponding `jailed` property to delegate a ZFS " "dataset to a crossref:jails[jails,Jail]. `zfs jail _jailid_` attaches a " "dataset to the specified jail, and `zfs unjail` detaches it. To control the " "dataset from within a jail, set the `jailed` property. ZFS forbids mounting " "a jailed dataset on the host because it may have mount points that would " "compromise the security of the host." msgstr ""
#. type: Title == #: documentation/content/en/books/handbook/zfs/_index.adoc:2613 #, no-wrap msgid "Delegated Administration" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2621 msgid "" "A comprehensive permission delegation system allows unprivileged users to " "perform ZFS administration functions. For example, if each user's home " "directory is a dataset, users need permission to create and destroy " "snapshots of their home directories. A user performing backups can get " "permission to use replication features. ZFS allows a usage statistics " "script to run with access to only the space usage data for all users. " "Delegating the ability to delegate permissions is also possible. Permission " "delegation is possible for each subcommand and most properties." msgstr ""
#. type: Title === #: documentation/content/en/books/handbook/zfs/_index.adoc:2623 #, no-wrap msgid "Delegating Dataset Creation" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2629 msgid "" "`zfs allow _someuser_ create _mydataset_` gives the specified user " "permission to create child datasets under the selected parent dataset. A " "caveat: creating a new dataset involves mounting it. That requires setting " "the FreeBSD `vfs.usermount` man:sysctl[8] to `1` to allow non-root users to " "mount a file system. Another restriction aimed at preventing abuse: non-" "`root` users must own the mountpoint where the file system is mounted." msgstr ""
#. type: Title === #: documentation/content/en/books/handbook/zfs/_index.adoc:2631 #, no-wrap msgid "Delegating Permission Delegation" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2635 msgid "" "`zfs allow _someuser_ allow _mydataset_` gives the specified user the " "ability to assign any permission they have on the target dataset, or its " "children, to other users. If a user has the `snapshot` permission and the " "`allow` permission, that user can then grant the `snapshot` permission to " "other users." msgstr ""
#. type: Title == #: documentation/content/en/books/handbook/zfs/_index.adoc:2637 #, no-wrap msgid "Advanced Topics" msgstr ""
#. type: Title === #: documentation/content/en/books/handbook/zfs/_index.adoc:2640 #, no-wrap msgid "Tuning" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2643 msgid "Adjust tunables to make ZFS perform best for different workloads." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2647 msgid "" "[[zfs-advanced-tuning-arc_max]] `_vfs.zfs.arc.max_` starting with 13.x " "(`vfs.zfs.arc_max` for 12.x) - Upper size of the crossref:zfs[zfs-term-" "arc,ARC]. The default is all RAM but 1 GB, or 5/8 of all RAM, whichever is " "more. Use a lower value if the system runs any other daemons or processes " "that may require memory. Adjust this value at runtime with man:sysctl[8] and " "set it in [.filename]#/boot/loader.conf# or [.filename]#/etc/sysctl.conf#." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2654 msgid "" "[[zfs-advanced-tuning-arc_meta_limit]] `_vfs.zfs.arc.meta_limit_` starting " "with 13.x (`vfs.zfs.arc_meta_limit` for 12.x) - Limit the amount of the " "crossref:zfs[zfs-term-arc,ARC] used to store metadata. The default is one " "fourth of `vfs.zfs.arc.max`. Increasing this value will improve performance " "if the workload involves operations on a large number of files and " "directories, or frequent metadata operations, at the cost of less file data " "fitting in the crossref:zfs[zfs-term-arc,ARC]. Adjust this value at runtime " "with man:sysctl[8] and set it in [.filename]#/boot/loader.conf# or " "[.filename]#/etc/sysctl.conf#." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2659 msgid "" "[[zfs-advanced-tuning-arc_min]] `_vfs.zfs.arc.min_` starting with 13.x " "(`vfs.zfs.arc_min` for 12.x) - Lower size of the crossref:zfs[zfs-term-" "arc,ARC]. The default is one half of `vfs.zfs.arc.meta_limit`. Adjust this " "value to prevent other applications from pressuring out the entire " "crossref:zfs[zfs-term-arc,ARC]. Adjust this value at runtime with " "man:sysctl[8] and set it in [.filename]#/boot/loader.conf# or [.filename]#/" "etc/sysctl.conf#." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2660 msgid "" "[[zfs-advanced-tuning-vdev-cache-size]] `_vfs.zfs.vdev.cache.size_` - A " "preallocated amount of memory reserved as a cache for each device in the " "pool. The total amount of memory used will be this value multiplied by the " "number of devices. Set this value at boot time and in [.filename]#/boot/" "loader.conf#." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2661 msgid "" "[[zfs-advanced-tuning-min-auto-ashift]] `_vfs.zfs.min_auto_ashift_` - Lowest " "`ashift` (sector size) used automatically at pool creation time. The value " "is a power of two. The default value of `9` represents `2^9 = 512`, a sector " "size of 512 bytes. To avoid _write amplification_ and get the best " "performance, set this value to the largest sector size used by a device in " "the pool." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2667 msgid "" "Common drives have 4 KB sectors. Using the default `ashift` of `9` with " "these drives results in write amplification on these devices. Data " "contained in a single 4 KB write is instead written in eight 512-byte " "writes. ZFS tries to read the native sector size from all devices when " "creating a pool, but drives with 4 KB sectors report that their sectors are " "512 bytes for compatibility. Setting `vfs.zfs.min_auto_ashift` to `12` " "(`2^12 = 4096`) before creating a pool forces ZFS to use 4 KB blocks for " "best performance on these drives." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2670 msgid "" "Forcing 4 KB blocks is also useful on pools with planned disk upgrades. " "Disks added in the future are likely to use 4 KB sectors, and `ashift` " "values cannot change after creating a pool." msgstr ""
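#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2670 msgid "" "For example, to force 4 KB blocks when creating a new pool (the pool and " "disk names here are placeholders):" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2670 #, no-wrap msgid "" "# sysctl vfs.zfs.min_auto_ashift=12\n" "vfs.zfs.min_auto_ashift: 9 -> 12\n" "# zpool create mypool mirror /dev/ada0 /dev/ada1\n" msgstr ""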
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2674 msgid "" "In some specific cases, the smaller 512-byte block size might be " "preferable. When used with 512-byte disks for databases or as storage for " "virtual machines, less data is transferred during small random reads. This " "can provide better performance when using a smaller ZFS record size." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2678 msgid "" "[[zfs-advanced-tuning-prefetch_disable]] `_vfs.zfs.prefetch.disable_` - " "Disable prefetch. A value of `0` enables and `1` disables it. The default is " "`0`, unless the system has less than 4 GB of RAM. Prefetch works by reading " "larger blocks than requested into the crossref:zfs[zfs-term-arc,ARC] in the " "hope that the data will be needed soon. If the workload has a large number " "of random reads, disabling prefetch may actually improve performance by " "reducing unnecessary reads. Adjust this value at any time with man:sysctl[8]." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2679 msgid "" "[[zfs-advanced-tuning-vdev-trim_on_init]] `_vfs.zfs.vdev.trim_on_init_` - " "Control whether new devices added to the pool have the `TRIM` command run on " "them. This ensures the best performance and longevity for SSDs, but takes " "extra time. If the device has already been secure erased, disabling this " "setting will make the addition of the new device faster. Adjust this value " "at any time with man:sysctl[8]." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2680 msgid "" "[[zfs-advanced-tuning-vdev-max_pending]] `_vfs.zfs.vdev.max_pending_` - " "Limit the number of pending I/O requests per device. A higher value will " "keep the device command queue full and may give higher throughput. A lower " "value will reduce latency. Adjust this value at any time with man:sysctl[8]." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2686 msgid "" "[[zfs-advanced-tuning-top_maxinflight]] `_vfs.zfs.top_maxinflight_` - Upper " "number of outstanding I/Os per top-level crossref:zfs[zfs-term-vdev,vdev]. " "Limits the depth of the command queue to prevent high latency. The limit is " "per top-level vdev, meaning the limit applies to each crossref:zfs[zfs-term-" "vdev-mirror,mirror], crossref:zfs[zfs-term-vdev-raidz,RAID-Z], or other vdev " "independently. Adjust this value at any time with man:sysctl[8]." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2688 msgid "" "[[zfs-advanced-tuning-l2arc_write_max]] `_vfs.zfs.l2arc_write_max_` - Limit " "the amount of data written to the crossref:zfs[zfs-term-l2arc,L2ARC] per " "second. This tunable extends the longevity of SSDs by limiting the amount of " "data written to the device. Adjust this value at any time with man:sysctl[8]." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2694 msgid "" "[[zfs-advanced-tuning-l2arc_write_boost]] `_vfs.zfs.l2arc_write_boost_` - " "Adds the value of this tunable to crossref:zfs[zfs-advanced-tuning-" "l2arc_write_max,`vfs.zfs.l2arc_write_max`] and increases the write speed to " "the SSD until evicting the first block from the crossref:zfs[zfs-term-" "l2arc,L2ARC]. This \"Turbo Warmup Phase\" reduces the performance loss from " "an empty crossref:zfs[zfs-term-l2arc,L2ARC] after a reboot. Adjust this " "value at any time with man:sysctl[8]." msgstr ""
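#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2694 msgid "" "For example, to raise both limits at runtime (the byte values here are " "arbitrary illustrations, not recommendations):" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2694 #, no-wrap msgid "" "# sysctl vfs.zfs.l2arc_write_max=16777216\n" "# sysctl vfs.zfs.l2arc_write_boost=33554432\n" msgstr ""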
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2699 msgid "" "[[zfs-advanced-tuning-scrub_delay]] `_vfs.zfs.scrub_delay_` - Number of " "ticks to delay between each I/O during a crossref:zfs[zfs-term-" "scrub,`scrub`]. To ensure that a `scrub` does not interfere with the normal " "operation of the pool, if any other I/O is happening the `scrub` will delay " "between each command. This value controls the limit on the total IOPS (I/Os " "Per Second) generated by the `scrub`. The granularity of the setting is " "determined by the value of `kern.hz` which defaults to 1000 ticks per " "second. Changing this setting results in a different effective IOPS limit. " "The default value is `4`, resulting in a limit of: 1000 ticks/sec / 4 = 250 " "IOPS. Using a value of _20_ would give a limit of: 1000 ticks/sec / 20 = 50 " "IOPS. Recent activity on the pool limits the speed of `scrub`, as " "determined by crossref:zfs[zfs-advanced-tuning-scan_idle,`vfs.zfs." "scan_idle`]. Adjust this value at any time with man:sysctl[8]." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2723 msgid "" "[[zfs-advanced-tuning-resilver_delay]] `_vfs.zfs.resilver_delay_` - Number " "of milliseconds of delay inserted between each I/O during a crossref:zfs[zfs-" "term-resilver,resilver]. To ensure that a resilver does not interfere with " "the normal operation of the pool, if any other I/O is happening the resilver " "will delay between each command. This value controls the limit of total IOPS " "(I/Os Per Second) generated by the resilver. ZFS determines the granularity " "of the setting by the value of `kern.hz` which defaults to 1000 ticks per " "second. Changing this setting results in a different effective IOPS limit. " "The default value is 2, resulting in a limit of: 1000 ticks/sec / 2 = 500 " "IOPS. Returning the pool to an crossref:zfs[zfs-term-online,Online] state " "may be more important if another device failing could crossref:zfs[zfs-term-" "faulted,Fault] the pool, causing data loss. A value of 0 will give the " "resilver operation the same priority as other operations, speeding the " "healing process. Other recent activity on the pool limits the speed of " "resilver, as determined by crossref:zfs[zfs-advanced-tuning-" "scan_idle,`vfs.zfs.scan_idle`]. Adjust this value at any time with " "man:sysctl[8]." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2727 msgid "" "[[zfs-advanced-tuning-scan_idle]] `_vfs.zfs.scan_idle_` - Number of " "milliseconds since the last operation before considering the pool idle. " "ZFS disables the rate limiting for crossref:zfs[zfs-term-scrub,`scrub`] and " "crossref:zfs[zfs-term-resilver,resilver] when the pool is idle. Adjust this " "value at any time with man:sysctl[8]." msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2729 msgid "" "[[zfs-advanced-tuning-txg-timeout]] `_vfs.zfs.txg.timeout_` - Upper number " "of seconds between crossref:zfs[zfs-term-txg,transaction group]s. The " "current transaction group writes to the pool and a fresh transaction group " "starts if this amount of time has elapsed since the previous transaction " "group. A transaction group may trigger earlier if writing enough data. The " "default value is 5 seconds. A larger value may improve read performance by " "delaying asynchronous writes, but this may cause uneven performance when " "writing the transaction group. Adjust this value at any time with " "man:sysctl[8]." msgstr ""
#. type: Title === #: documentation/content/en/books/handbook/zfs/_index.adoc:2731 #, no-wrap msgid "ZFS on i386" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2734 msgid "" "Some of the features provided by ZFS are memory intensive, and may require " "tuning for maximum efficiency on systems with limited RAM." msgstr ""
#. type: Title ==== #: documentation/content/en/books/handbook/zfs/_index.adoc:2735 #, no-wrap msgid "Memory" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2743 msgid "" "At a minimum, the total system memory should be at least one gigabyte. The " "amount of recommended RAM depends upon the size of the pool and which " "features ZFS uses. A general rule of thumb is 1 GB of RAM for every 1 TB of " "storage. If using the deduplication feature, a general rule of thumb is 5 " "GB of RAM per TB of storage to deduplicate. While some users use ZFS with " "less RAM, systems under heavy load may panic due to memory exhaustion. ZFS " "may require further tuning on systems with less than the recommended amount " "of RAM." msgstr ""
#. type: Title ==== #: documentation/content/en/books/handbook/zfs/_index.adoc:2744 #, no-wrap msgid "Kernel Configuration" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2747 msgid "" "Due to the address space limitations of the i386(TM) platform, ZFS users on " "the i386(TM) architecture must add this option to a custom kernel " "configuration file, rebuild the kernel, and reboot:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2751 #, no-wrap msgid "options KVA_PAGES=512\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2756 msgid "" "This expands the kernel address space, allowing the `vm.kvm_size` tunable to " "push beyond the imposed limit of 1 GB, or the limit of 2 GB for PAE. To " "find the most suitable value for this option, divide the desired address " "space in megabytes by four. In this example, `512` gives 2 GB." msgstr ""
#. type: Title ==== #: documentation/content/en/books/handbook/zfs/_index.adoc:2757 #, no-wrap msgid "Loader Tunables" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2761 msgid "" "Increase the [.filename]#kmem# address space on all FreeBSD architectures. " "A test system with 1 GB of physical memory benefitted from adding these " "options to [.filename]#/boot/loader.conf# and then restarting:" msgstr ""
#. type: delimited block . 4 #: documentation/content/en/books/handbook/zfs/_index.adoc:2768 #, no-wrap msgid "" "vm.kmem_size=\"330M\"\n" "vm.kmem_size_max=\"330M\"\n" "vfs.zfs.arc.max=\"40M\"\n" "vfs.zfs.vdev.cache.size=\"5M\"\n" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2771 msgid "" "For a more detailed list of recommendations for ZFS-related tuning, see " "https://wiki.freebsd.org/ZFSTuningGuide[]." msgstr ""
#. type: Title == #: documentation/content/en/books/handbook/zfs/_index.adoc:2773 #, no-wrap msgid "Further Resources" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2776 msgid "https://openzfs.org/[OpenZFS]" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2777 msgid "https://wiki.freebsd.org/ZFSTuningGuide[FreeBSD Wiki - ZFS Tuning]" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2778 msgid "" "https://calomel.org/zfs_raid_speed_capacity.html[Calomel Blog - ZFS Raidz " "Performance, Capacity and Integrity]" msgstr ""
#. type: Title == #: documentation/content/en/books/handbook/zfs/_index.adoc:2780 #, no-wrap msgid "ZFS Features and Terminology" msgstr ""
#. type: Plain text #: documentation/content/en/books/handbook/zfs/_index.adoc:2789 msgid "" "More than a file system, ZFS is fundamentally different. ZFS combines the " "roles of file system and volume manager, enabling the addition of new " "storage devices to a live system and making the new space available on the " "existing file systems in that pool at once. By combining the traditionally " "separate roles, ZFS is able to overcome previous limitations that prevented " "RAID groups from growing. A _vdev_ is a top level device in a pool and can " "be a simple disk or a RAID transformation such as a mirror or RAID-Z array. " "ZFS file systems (called _datasets_) each have access to the combined free " "space of the entire pool. Used blocks from the pool decrease the space " "available to each file system. This approach avoids the common pitfall with " "extensive partitioning where free space becomes fragmented across the " "partitions." msgstr ""
#. type: Table #: documentation/content/en/books/handbook/zfs/_index.adoc:2795 #, no-wrap msgid "[[zfs-term-pool]]pool" msgstr ""
#. type: Table #: documentation/content/en/books/handbook/zfs/_index.adoc:2798 #, no-wrap msgid "" "A storage _pool_ is the most basic building block of ZFS. A pool consists of one or more vdevs, the underlying devices that store the data. A pool is then used to create one or more file systems (datasets) or block devices (volumes).\n" "These datasets and volumes share the pool of remaining free space. Each pool is uniquely identified by a name and a GUID. The ZFS version number on the pool determines the features available." msgstr ""
#. type: Table #: documentation/content/en/books/handbook/zfs/_index.adoc:2799 #, no-wrap msgid "[[zfs-term-vdev]]vdev Types" msgstr ""
#. type: Table #: documentation/content/en/books/handbook/zfs/_index.adoc:2832 #, no-wrap msgid "" "A pool consists of one or more vdevs, which themselves are a single disk or a group of disks, transformed to a RAID. When using a lot of vdevs, ZFS spreads data across the vdevs to increase performance and maximize usable space. All vdevs must be at least 128 MB in size.\n" "\n" "* [[zfs-term-vdev-disk]] _Disk_ - The most basic vdev type is a standard block device. This can be an entire disk (such as [.filename]#/dev/ada0# or [.filename]#/dev/da0#) or a partition ([.filename]#/dev/ada0p3#). On FreeBSD, there is no performance penalty for using a partition rather than the entire disk. This differs from recommendations made by the Solaris documentation.\n" "+\n" "[CAUTION]\n" "====\n" "Using an entire disk as part of a bootable pool is strongly discouraged, as this may render the pool unbootable.\n" "Likewise, you should not use an entire disk as part of a mirror or RAID-Z vdev.\n" "Reliably determining the size of an unpartitioned disk at boot time is impossible and there is no place to put boot code.\n" "====\n" "\n" "* [[zfs-term-vdev-file]] _File_ - Regular files may make up ZFS pools, which is useful for testing and experimentation. 
Use the full path to the file as the device path in `zpool create`.\n" "* [[zfs-term-vdev-mirror]] _Mirror_ - When creating a mirror, specify the `mirror` keyword followed by the list of member devices for the mirror. A mirror consists of two or more devices, writing all data to all member devices. A mirror vdev will hold as much data as its smallest member. A mirror vdev can withstand the failure of all but one of its members without losing any data.\n" "+\n" "[NOTE]\n" "====\n" "To upgrade a regular single disk vdev to a mirror vdev at any time, use `zpool\n" "crossref:zfs[zfs-zpool-attach,attach]`.\n" "====\n" "\n" "* [[zfs-term-vdev-raidz]] _RAID-Z_ - ZFS uses RAID-Z, a variation on standard RAID-5 that offers better distribution of parity and eliminates the \"RAID-5 write hole\" in which the data and parity information become inconsistent after an unexpected restart. ZFS supports three levels of RAID-Z which provide varying levels of redundancy in exchange for decreasing levels of usable storage. ZFS uses RAID-Z1 through RAID-Z3 based on the number of parity devices in the array and the number of disks which can fail before the pool stops being operational.\n" "+\n" "In a RAID-Z1 configuration with four disks, each 1 TB, usable storage is 3 TB and the pool will still be able to operate in degraded mode with one faulted disk. If another disk goes offline before the faulted disk is replaced and resilvered, all pool data is lost.\n" "+\n" "In a RAID-Z3 configuration with eight disks of 1 TB, the volume will provide 5 TB of usable space and still be able to operate with three faulted disks. Sun(TM) recommends no more than nine disks in a single vdev. If more disks make up the configuration, the recommendation is to divide them into separate vdevs and stripe the pool data across them.\n" "+\n" "A configuration of two RAID-Z2 vdevs consisting of 8 disks each would create something like a RAID-60 array. A RAID-Z group's storage capacity is about the size of the smallest disk multiplied by the number of non-parity disks. Four 1 TB disks in RAID-Z1 have an effective size of about 3 TB, and an array of eight 1 TB disks in RAID-Z3 will yield 5 TB of usable space.\n" "* [[zfs-term-vdev-spare]] _Spare_ - ZFS has a special pseudo-vdev type for keeping track of available hot spares. Note that installed hot spares are not deployed automatically; manually configure them to replace the failed device using `zfs replace`.\n" "* [[zfs-term-vdev-log]] _Log_ - ZFS Log Devices, also known as ZFS Intent Log\n" " (crossref:zfs[zfs-term-zil,ZIL]) move the intent log from the regular pool devices to a dedicated device, typically an SSD. Having a dedicated log device improves the performance of applications with a high volume of synchronous writes like databases. Mirroring of log devices is possible, but RAID-Z is not supported. If using a lot of log devices, writes will be load-balanced across them.\n" "* [[zfs-term-vdev-cache]] _Cache_ - Adding a cache vdev to a pool will add the\n" " storage of the cache to the crossref:zfs[zfs-term-l2arc,L2ARC]. Mirroring cache devices is impossible. Since a cache device stores only new copies of existing data, there is no risk of data loss." msgstr ""
#. type: Table #: documentation/content/en/books/handbook/zfs/_index.adoc:2833 #, no-wrap msgid "[[zfs-term-txg]] Transaction Group (TXG)" msgstr ""
#. type: Table #: documentation/content/en/books/handbook/zfs/_index.adoc:2854 #, no-wrap msgid "" "Transaction Groups are the way ZFS groups block changes together and writes them to the pool. Transaction groups are the atomic unit that ZFS uses to ensure consistency. ZFS assigns each transaction group a unique 64-bit consecutive identifier. There can be up to three active transaction groups at a time, one in each of these three states:\n" "\n" "* _Open_ - A new transaction group begins in the open state and accepts new\n" " writes. There is always a transaction group in the open state, but the\n" " transaction group may refuse new writes if it has reached a limit. Once the\n" " open transaction group has reached a limit, or the\n" " crossref:zfs[zfs-advanced-tuning-txg-timeout,`vfs.zfs.txg.timeout`] period expires, the transaction group advances to the next state.\n" "* _Quiescing_ - A short state that allows any pending operations to finish without blocking the creation of a new open transaction group. Once all the transactions in the group have completed, the transaction group advances to the final state.\n" "* _Syncing_ - Write all the data in the transaction group to stable storage.\n" " This process will in turn change other data, such as metadata and space maps,\n" " that ZFS will also write to stable storage. The process of syncing involves\n" " several passes. The first and biggest pass writes all the changed data blocks;\n" " next comes the metadata, which may take several passes to complete. Since allocating\n" " space for the data blocks generates new metadata, the syncing state cannot\n" " finish until a pass completes that does not use any new space. The syncing\n" " state is also where _synctasks_ complete. Synctasks are administrative\n" " operations such as creating or destroying snapshots and datasets that complete\n" " the uberblock change. Once the sync state completes, the transaction group in\n" " the quiescing state advances to the syncing state. All administrative\n" " functions, such as crossref:zfs[zfs-term-snapshot,`snapshot`], write as part of the transaction group. ZFS adds a created synctask to the open transaction group, and that group advances as fast as possible to the syncing state to reduce the latency of administrative commands." msgstr ""
#. type: Table #: documentation/content/en/books/handbook/zfs/_index.adoc:2855 #, no-wrap msgid "[[zfs-term-arc]]Adaptive Replacement Cache (ARC)" msgstr ""
#. type: Table #: documentation/content/en/books/handbook/zfs/_index.adoc:2857 #, no-wrap msgid "ZFS uses an Adaptive Replacement Cache (ARC), rather than a more traditional Least Recently Used (LRU) cache. An LRU cache is a simple list of items in the cache, sorted by how recently each object was used, adding new items to the head of the list. When the cache is full, evicting items from the tail of the list makes room for more active objects. An ARC consists of four lists; the Most Recently Used (MRU) and Most Frequently Used (MFU) objects, plus a ghost list for each. These ghost lists track evicted objects to prevent adding them back to the cache. This increases the cache hit ratio by avoiding objects that have a history of occasional use. Another advantage of using both an MRU and MFU is that scanning an entire file system would evict all data from an MRU or LRU cache in favor of this freshly accessed content. With ZFS, there is also an MFU that tracks the most frequently used objects, and the cache of the most commonly accessed blocks remains." msgstr ""
#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2858
#, no-wrap
msgid "[[zfs-term-l2arc]]L2ARC"
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2876
#, no-wrap
msgid ""
"L2ARC is the second level of the ZFS caching system. The primary ARC is\n"
"stored in RAM. Since the amount of available RAM is often limited, ZFS can\n"
"also use crossref:zfs[zfs-term-vdev-cache,cache vdevs]. Solid State Disks\n"
"(SSDs) are often used as these cache devices due to their higher speed and\n"
"lower latency compared to traditional spinning disks. L2ARC is entirely\n"
"optional, but having one will increase read speeds for files cached on the\n"
"SSD instead of having to read from the regular disks. L2ARC can also speed up\n"
"crossref:zfs[zfs-term-deduplication,deduplication] because a deduplication table\n"
"(DDT) that does not fit in RAM but does fit in the L2ARC will be much faster\n"
"than a DDT that must read from disk. Limits on the rate at which data is added\n"
"to the cache devices prevent prematurely wearing out SSDs with extra writes.\n"
"Until the cache is full (that is, until the first block is evicted to make\n"
"room), writes to the L2ARC are limited to the sum of the write limit and the\n"
"boost limit, and afterwards to the write limit alone. A pair of man:sysctl[8]\n"
"values control these rate limits.\n"
"crossref:zfs[zfs-advanced-tuning-l2arc_write_max,`vfs.zfs.l2arc_write_max`]\n"
"controls the number of bytes written to the cache per second, while\n"
"crossref:zfs[zfs-advanced-tuning-l2arc_write_boost,`vfs.zfs.l2arc_write_boost`] adds to this limit during the \"Turbo Warmup Phase\" (Write Boost)."
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2877
#, no-wrap
msgid "[[zfs-term-zil]]ZIL"
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2879
#, no-wrap
msgid "ZIL accelerates synchronous transactions by using storage devices like SSDs that are faster than those used in the main storage pool. When an application requests a synchronous write (a guarantee that the data is stored to disk rather than merely cached for later writes), writing the data to the faster ZIL storage then later flushing it out to the regular disks greatly reduces latency and improves performance. Only synchronous workloads like databases will profit from a ZIL. Regular asynchronous writes such as copying files will not use the ZIL at all."
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2880
#, no-wrap
msgid "[[zfs-term-cow]]Copy-On-Write"
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2882
#, no-wrap
msgid "Unlike a traditional file system, ZFS writes a different block rather than overwriting the old data in place. When the write completes, the metadata is updated to point to the new location. When a shorn write (a system crash or power loss in the middle of writing a file) occurs, the entire original contents of the file are still available and ZFS discards the incomplete write. This also means that ZFS does not require a man:fsck[8] after an unexpected shutdown."
msgstr ""
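#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:2882
msgid ""
"Tying the L2ARC and ZIL entries above back to pool commands: cache and log "
"vdevs are attached to an existing pool with `zpool add`. A minimal sketch, "
"assuming hypothetical SSDs [.filename]#ada1#, [.filename]#ada2#, and "
"[.filename]#ada3#:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:2882
#, no-wrap
msgid ""
"# zpool add mypool log mirror ada1 ada2\n"
"# zpool add mypool cache ada3\n"
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2883
#, no-wrap
msgid "[[zfs-term-dataset]]Dataset"
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2893
#, no-wrap
msgid ""
"_Dataset_ is the generic term for a ZFS file system, volume, snapshot or clone.\n"
"Each dataset has a unique name in the format _poolname/path@snapshot_. "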
"The root\n"
"of the pool is a dataset as well. Child datasets have hierarchical names like\n"
"directories. For example, _mypool/home_, the home dataset, is a child of\n"
"_mypool_ and inherits properties from it. Expand this further by creating\n"
"_mypool/home/user_. This grandchild dataset will inherit properties from the\n"
"parent and grandparent. Set properties on a child to override the defaults\n"
"inherited from the parent and grandparent. Administration of datasets and their\n"
"children can be crossref:zfs[zfs-zfs-allow,delegated]."
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2894
#, no-wrap
msgid "[[zfs-term-filesystem]]File system"
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2896
#, no-wrap
msgid "A ZFS dataset is most often used as a file system. Like most other file systems, a ZFS file system mounts somewhere in the system's directory hierarchy and contains files and directories of its own with permissions, flags, and other metadata."
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2897
#, no-wrap
msgid "[[zfs-term-volume]]Volume"
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2899
#, no-wrap
msgid "ZFS can also create volumes, which appear as disk devices. Volumes have a lot of the same features as datasets, including copy-on-write, snapshots, clones, and checksumming. Volumes can be useful for running other file system formats on top of ZFS, such as UFS, for virtualization, or for exporting iSCSI extents."
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2900
#, no-wrap
msgid "[[zfs-term-snapshot]]Snapshot"
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2920
#, no-wrap
msgid ""
"The crossref:zfs[zfs-term-cow,copy-on-write] (COW) design of ZFS allows for\n"
"nearly instantaneous, consistent snapshots with arbitrary names. After taking a\n"
"snapshot of a dataset, or a recursive snapshot of a parent dataset that will\n"
"include all child datasets, new data goes to new blocks, but without reclaiming\n"
"the old blocks as free space. The snapshot contains the original version of the\n"
"file system, and the live file system contains any changes made since taking\n"
"the snapshot, using no extra space. New data written to the live file system\n"
"uses new blocks to store this data. The snapshot's space usage grows as blocks\n"
"stop being used in the live file system and remain referenced by the snapshot\n"
"alone. Mounting these snapshots read-only allows recovering previous versions\n"
"of files. A\n"
"crossref:zfs[zfs-zfs-snapshot,rollback] of a live file system to a specific\n"
"snapshot is possible, undoing any changes that took place after taking the\n"
"snapshot. Each block in the pool has a reference counter which keeps track of\n"
"which snapshots, clones, datasets, or volumes use that block. As files and\n"
"snapshots get deleted, the reference count decreases, and the space is\n"
"reclaimed when a block is no longer referenced. Marking a snapshot with a\n"
"crossref:zfs[zfs-zfs-snapshot,hold] causes any attempt to destroy it to return\n"
"an `EBUSY` error. Each snapshot can have multiple holds, each with a unique\n"
"name. The crossref:zfs[zfs-zfs-snapshot,release] command removes the hold so the snapshot can be deleted. Snapshots, cloning, and rolling back work on volumes, but independent mounting does not."
msgstr ""
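#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:2920
msgid ""
"A brief sketch of snapshots and holds, with hypothetical dataset and hold "
"names: take a snapshot, place a hold on it so that `zfs destroy` would "
"return `EBUSY`, then release the hold again:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:2920
#, no-wrap
msgid ""
"# zfs snapshot mypool/home/user@before-upgrade\n"
"# zfs hold keep mypool/home/user@before-upgrade\n"
"# zfs release keep mypool/home/user@before-upgrade\n"
msgstr ""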
#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2921
#, no-wrap
msgid "[[zfs-term-clone]]Clone"
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2923
#, no-wrap
msgid "Cloning a snapshot is also possible. A clone is a writable version of a snapshot, allowing the file system to fork as a new dataset. As with a snapshot, a clone initially consumes no new space. As new data written to a clone uses new blocks, the size of the clone grows. When blocks are overwritten in the cloned file system or volume, the reference count on the previous block decreases. Removing the snapshot upon which a clone is based is impossible because the clone depends on it. The snapshot is the parent, and the clone is the child. Clones can be _promoted_, reversing this dependency and making the clone the parent and the previous parent the child. This operation requires no new space. Since the amount of space used by the parent and child reverses, it may affect existing quotas and reservations."
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2924
#, no-wrap
msgid "[[zfs-term-checksum]]Checksum"
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2936
#, no-wrap
msgid ""
"Every block is also checksummed. The checksum algorithm used is a per-dataset\n"
"property, see crossref:zfs[zfs-zfs-set,`set`]. The checksum of each block is\n"
"transparently validated when read, allowing ZFS to detect silent corruption. If\n"
"the data read does not match the expected checksum, ZFS will attempt to recover\n"
"the data from any available redundancy, like mirrors or RAID-Z. Trigger a\n"
"validation of all checksums with crossref:zfs[zfs-term-scrub,`scrub`]. Checksum algorithms include:\n"
"\n"
"* `fletcher2`\n"
"* `fletcher4`\n"
"* `sha256`\n"
" The `fletcher` algorithms are faster, but `sha256` is a strong cryptographic hash and has a much lower chance of collisions at the cost of some performance. Deactivating checksums is possible, but strongly discouraged."
msgstr ""
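#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:2936
msgid ""
"As a hypothetical illustration of clones (the dataset and snapshot names "
"are assumptions): create a writable clone from a snapshot, promote it to "
"reverse the parent/child dependency, and pick a stronger checksum "
"algorithm for it:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:2936
#, no-wrap
msgid ""
"# zfs clone mypool/home/user@before-upgrade mypool/home/user-clone\n"
"# zfs promote mypool/home/user-clone\n"
"# zfs set checksum=sha256 mypool/home/user-clone\n"
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2937
#, no-wrap
msgid "[[zfs-term-compression]]Compression"
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2951
#, no-wrap
msgid ""
"Each dataset has a compression property, which defaults to off. Set this property to an available compression algorithm. This causes compression of all new data written to the dataset. Beyond a reduction in space used, read and write throughput often increases because fewer blocks need reading or writing.\n"
"\n"
"[[zfs-term-compression-lz4]]\n"
"* _LZ4_ - Added in ZFS pool version 5000 (feature flags), LZ4 is now the recommended compression algorithm. LZ4 works about 50% faster than LZJB when operating on compressible data, and is over three times faster when operating on uncompressible data. LZ4 also decompresses about 80% faster than LZJB. On modern CPUs, LZ4 can often compress at over 500 MB/s, and decompress at over 1.5 GB/s (per single CPU core).\n"
"\n"
"[[zfs-term-compression-lzjb]]\n"
"* _LZJB_ - The default compression algorithm. Created by Jeff Bonwick (one of the original creators of ZFS). LZJB offers good compression with less CPU overhead compared to GZIP. In the future, the default compression algorithm will change to LZ4.\n"
"\n"
"[[zfs-term-compression-gzip]]\n"
"* _GZIP_ - A popular stream compression algorithm available in ZFS. "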
"One of the main advantages of using GZIP is its configurable level of compression. When setting the `compression` property, the administrator can choose the level of compression, ranging from `gzip-1`, the lowest level of compression, to `gzip-9`, the highest level of compression. This gives the administrator control over how much CPU time to trade for saved disk space.\n"
"\n"
"[[zfs-term-compression-zle]]\n"
"* _ZLE_ - Zero Length Encoding is a special compression algorithm that compresses only continuous runs of zeros. This compression algorithm is useful when the dataset contains large blocks of zeros."
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2952
#, no-wrap
msgid "[[zfs-term-copies]]Copies"
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2955
#, no-wrap
msgid ""
"When set to a value greater than 1, the `copies` property instructs ZFS to\n"
"maintain copies of each block in the crossref:zfs[zfs-term-filesystem,file system]\n"
"or crossref:zfs[zfs-term-volume,volume]. Setting this property on important datasets provides added redundancy from which to recover a block that does not match its checksum. In pools without redundancy, the copies feature is the only form of redundancy. The copies feature can recover from a single bad sector or other forms of minor corruption, but it does not protect the pool from the loss of an entire disk."
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2956
#, no-wrap
msgid "[[zfs-term-deduplication]]Deduplication"
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2958
#, no-wrap
msgid "Checksums make it possible to detect duplicate blocks when writing data. With deduplication, the reference count of an existing, identical block increases, saving storage space. ZFS keeps a deduplication table (DDT) in memory to detect duplicate blocks. The table contains a list of unique checksums, the location of those blocks, and a reference count. When writing new data, ZFS calculates checksums and compares them to the list. When finding a match, ZFS uses the existing block. Using the SHA256 checksum algorithm with deduplication provides a secure cryptographic hash. Deduplication is tunable. If `dedup` is `on`, then ZFS assumes that a matching checksum means the data is identical. When setting `dedup` to `verify`, ZFS performs a byte-for-byte check on the data, ensuring the blocks are actually identical. If the data is not identical, ZFS will note the hash collision and store the two blocks separately. As the DDT must store the hash of each unique block, it consumes a large amount of memory. A general rule of thumb is 5-6 GB of RAM per 1 TB of deduplicated data. When it is not practical to have enough RAM to keep the entire DDT in memory, performance will suffer greatly as the DDT must be read from disk before writing each new block. Deduplication can use L2ARC to store the DDT, providing a middle ground between fast system memory and slower disks. Consider using compression instead, which often provides nearly as much space savings without the increased memory requirements."
msgstr ""
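#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:2958
msgid ""
"Compression and deduplication are both ordinary dataset properties. A "
"minimal sketch, assuming a hypothetical dataset named _mypool/home_, of "
"enabling LZ4 compression and verified deduplication:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:2958
#, no-wrap
msgid ""
"# zfs set compression=lz4 mypool/home\n"
"# zfs set dedup=verify mypool/home\n"
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2959
#, no-wrap
msgid "[[zfs-term-scrub]]Scrub"
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2970
#, no-wrap
msgid ""
"Instead of a consistency check like man:fsck[8], ZFS has `scrub`. "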
"`scrub` reads\n"
"all data blocks stored on the pool and verifies their checksums against the\n"
"known good checksums stored in the metadata. Periodically checking all the data\n"
"stored on the pool ensures the recovery of any corrupted blocks before they are\n"
"needed. A scrub is not required after an unclean shutdown, but running one at\n"
"least once every three months is good practice. ZFS verifies the checksum of\n"
"each block during normal use, but a scrub makes certain to check even\n"
"infrequently used blocks for silent corruption. This improves data security in\n"
"archival storage situations. Adjust the relative priority of `scrub` with\n"
"crossref:zfs[zfs-advanced-tuning-scrub_delay,`vfs.zfs.scrub_delay`] to prevent the scrub from degrading the performance of other workloads on the pool."
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2971
#, no-wrap
msgid "[[zfs-term-quota]]Dataset Quota"
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2985
#, no-wrap
msgid ""
"ZFS provides fast and accurate dataset, user, and group space accounting as well as quotas and space reservations. This gives the administrator fine-grained control over space allocation and allows reserving space for critical file systems.\n"
"\n"
"ZFS supports different types of quotas: the dataset quota, the\n"
"crossref:zfs[zfs-term-refquota,reference quota (refquota)], the\n"
"crossref:zfs[zfs-term-userquota,user quota], and the\n"
"crossref:zfs[zfs-term-groupquota,group quota].\n"
"\n"
"Quotas limit the total size of a dataset and its descendants, including snapshots of the dataset, child datasets, and the snapshots of those datasets.\n"
"\n"
"[NOTE]\n"
"====\n"
"Volumes do not support quotas, as the `volsize` property acts as an implicit quota.\n"
"===="
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2986
#, no-wrap
msgid "[[zfs-term-refquota]]Reference Quota"
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2988
#, no-wrap
msgid "A reference quota limits the amount of space a dataset can consume by enforcing a hard limit. This hard limit includes space referenced by the dataset alone and does not include space used by descendants, such as file systems or snapshots."
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2989
#, no-wrap
msgid "[[zfs-term-userquota]]User Quota"
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2991
#, no-wrap
msgid "User quotas are useful to limit the amount of space used by the specified user."
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2992
#, no-wrap
msgid "[[zfs-term-groupquota]]Group Quota"
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2994
#, no-wrap
msgid "The group quota limits the amount of space that a specified group can consume."
msgstr ""
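#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:2994
msgid ""
"A sketch of setting each quota type, reusing the hypothetical "
"[.filename]#storage/home/bob# dataset with assumed user and group names:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:2994
#, no-wrap
msgid ""
"# zfs set quota=10G storage/home/bob\n"
"# zfs set refquota=10G storage/home/bob\n"
"# zfs set userquota@bob=5G storage/home/bob\n"
"# zfs set groupquota@staff=20G storage/home/bob\n"
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:2995
#, no-wrap
msgid "[[zfs-term-reservation]]Dataset Reservation"
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:3003
#, no-wrap
msgid ""
"The `reservation` property makes it possible to guarantee an amount of space\n"
"for a specific dataset and its descendants. This means that setting a 10 GB\n"
"reservation on [.filename]#storage/home/bob# prevents other datasets from using\n"
"up all free space, reserving at least 10 GB of space for this dataset. "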
"Unlike a\n"
"regular crossref:zfs[zfs-term-refreservation,`refreservation`], space used by\n"
"snapshots and descendants does count against the reservation.\n"
"\n"
"Reservations of any sort are useful in situations such as planning and testing the suitability of disk space allocation in a new system, or ensuring that enough space is available on file systems for audio logs or system recovery procedures and files."
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:3004
#, no-wrap
msgid "[[zfs-term-refreservation]]Reference Reservation"
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:3010
#, no-wrap
msgid ""
"The `refreservation` property makes it possible to guarantee an amount of space\n"
"for the use of a specific dataset _excluding_ its descendants. This means that\n"
"if setting a 10 GB refreservation on [.filename]#storage/home/bob# and another\n"
"dataset tries to use the free space, at least 10 GB of space is reserved for\n"
"this dataset. In contrast to a regular crossref:zfs[zfs-term-reservation,reservation], space used by snapshots and descendant datasets is not counted against the reservation. For example, if taking a snapshot of [.filename]#storage/home/bob#, enough disk space other than the `refreservation` amount must exist for the operation to succeed. Descendants of the main data set are not counted in the `refreservation` amount and so do not encroach on the space set."
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:3011
#, no-wrap
msgid "[[zfs-term-resilver]]Resilver"
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:3012
#, no-wrap
msgid "When replacing a failed disk, ZFS must fill the new disk with the lost data. _Resilvering_ is the process of using the parity information distributed across the remaining drives to calculate and write the missing data to the new drive."
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:3013
#, no-wrap
msgid "[[zfs-term-online]]Online"
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:3015
#, no-wrap
msgid "A pool or vdev in the `Online` state has its member devices connected and fully operational. Individual devices in the `Online` state are functioning."
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:3016
#, no-wrap
msgid "[[zfs-term-offline]]Offline"
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:3020
#, no-wrap
msgid ""
"The administrator puts individual devices in an `Offline` state if enough\n"
"redundancy exists to avoid putting the pool or vdev into a\n"
"crossref:zfs[zfs-term-faulted,Faulted] state. An administrator may choose to offline a disk in preparation for replacing it, or to make it easier to identify."
msgstr ""
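#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:3020
msgid ""
"As an illustration of these states (the pool and device names are "
"assumptions), taking a disk offline and then replacing it, which triggers "
"a resilver onto the new device:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:3020
#, no-wrap
msgid ""
"# zpool offline mypool da1\n"
"# zpool replace mypool da1 da3\n"
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:3021
#, no-wrap
msgid "[[zfs-term-degraded]]Degraded"
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:3028
#, no-wrap
msgid ""
"A pool or vdev in the `Degraded` state has one or more disks that disappeared\n"
"or failed. "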
"The pool is still usable, but if other devices fail, the pool may\n"
"become unrecoverable. Reconnecting the missing devices or replacing the failed\n"
"disks will return the pool to an crossref:zfs[zfs-term-online,Online] state\n"
"after the reconnected or new device has completed the\n"
"crossref:zfs[zfs-term-resilver,Resilver] process."
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:3029
#, no-wrap
msgid "[[zfs-term-faulted]]Faulted"
msgstr ""

#. type: Table
#: documentation/content/en/books/handbook/zfs/_index.adoc:3034
#, no-wrap
msgid ""
"A pool or vdev in the `Faulted` state is no longer operational. Accessing the\n"
"data is no longer possible. A pool or vdev enters the `Faulted` state when the\n"
"number of missing or failed devices exceeds the level of redundancy in the\n"
"vdev. If the missing devices can be reconnected, the pool will return to an\n"
"crossref:zfs[zfs-term-online,Online] state. If there is insufficient redundancy to compensate for the number of failed disks, the pool contents are lost and must be restored from backups."
msgstr ""
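#. type: Plain text
#: documentation/content/en/books/handbook/zfs/_index.adoc:3034
msgid ""
"The state of a pool and of each of its vdevs is reported by `zpool status`, "
"and a manual scrub can be started to verify every block. A short sketch, "
"assuming a hypothetical pool named _mypool_:"
msgstr ""

#. type: delimited block . 4
#: documentation/content/en/books/handbook/zfs/_index.adoc:3034
#, no-wrap
msgid ""
"# zpool scrub mypool\n"
"# zpool status mypool\n"
msgstr ""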