ZPOOL

Section: Maintenance Commands (8)
Updated: May 2, 2019

 

Index

NAME
SYNOPSIS
DESCRIPTION
Virtual Devices (vdevs)
Device Failure and Recovery
Hot Spares
Intent Log
Cache Devices
Pool checkpoint
Special Allocation Class
Properties
Subcommands
EXIT STATUS
EXAMPLES
ENVIRONMENT VARIABLES
INTERFACE STABILITY
SEE ALSO


NAME

zpool - configure ZFS storage pools  

SYNOPSIS

zpool -?V
zpool add [-fgLnP ] [-o property = value ] pool vdev ...
zpool attach [-f ] [-o property = value ] pool device new_device
zpool checkpoint [-d, --discard ] pool
zpool clear pool [device ]
zpool create [-dfn ] [-m mountpoint ] [-o property = value ... ] [-o feature@feature = value ] [-O file-system-property = value ... ] [-R root ] pool vdev ...
zpool destroy [-f ] pool
zpool detach pool device
zpool events [-vHf [pool ] | -c ]
zpool export [-a ] [-f ] pool ...
zpool get [-Hp [-o field [, field ... ] ] ] all | property [, property ... ] [pool ... ]
zpool history [-il ] [pool ... ]
zpool import [-D ] [-d dir | device ]
zpool import -a [-DflmN ] [-F [-n [-T [-X ] ] ] ] [--rewind-to-checkpoint ] [-c cachefile | -d dir | device ] [-o mntopts ] [-o property = value ... ] [-R root ]
zpool import [-Dflm ] [-F [-n [-T [-X ] ] ] ] [--rewind-to-checkpoint ] [-c cachefile | -d dir | device ] [-o mntopts ] [-o property = value ... ] [-R root ] [-s ] pool | id [newpool [-t ] ]
zpool initialize [-c | -s ] pool [device ... ]
zpool iostat [[[-c SCRIPT [-lq | -rw ] ] ] ] [-T u | d ] [-ghHLnpPvy ] [[pool ... | [pool vdev ... | [vdev ... ] ] ] ] [interval [count ] ]
zpool labelclear [-f ] device
zpool list [-HgLpPv [-o property [, property ... ] ] ] [-T u | d ] [pool ... ] [interval [count ] ]
zpool offline [-f ] [-t ] pool device ...
zpool online [-e ] pool device ...
zpool reguid pool
zpool reopen [-n ] pool
zpool remove [-np ] pool device ...
zpool remove -s pool
zpool replace [-f ] [-o property = value ] pool device [new_device ]
zpool resilver pool ...
zpool scrub [-s | -p ] pool ...
zpool trim [-d ] [-r rate ] [-c | -s ] pool [device ... ]
zpool set property = value pool
zpool split [-gLlnP ] [-o property = value ... ] [-R root ] pool newpool [device ... ]
zpool status [-c SCRIPT ] [-DigLpPstvx ] [-T u | d ] [pool ... ] [interval [count ] ]
zpool sync [pool ... ]
zpool upgrade
zpool upgrade -v
zpool upgrade [-V version ] -a | pool ...
zpool version

DESCRIPTION

The zpool command configures ZFS storage pools. A storage pool is a collection of devices that provides physical storage and data replication for ZFS datasets. All datasets within a storage pool share the same space. See zfs(8) for information on managing datasets.

Virtual Devices (vdevs)

A "virtual device" describes a single device or a collection of devices organized according to certain performance and fault characteristics. The following virtual devices are supported:

disk
A block device, typically located under /dev. ZFS can use individual slices or partitions, though the recommended mode of operation is to use whole disks. A disk can be specified by a full path, or it can be a shorthand name (the relative portion of the path under /dev). A whole disk can be specified by omitting the slice or partition designation. For example, sda is equivalent to /dev/sda. When given a whole disk, ZFS automatically labels the disk, if necessary.
file
A regular file. The use of files as a backing store is strongly discouraged. It is designed primarily for experimental purposes, as the fault tolerance of a file is only as good as the file system of which it is a part. A file must be specified by a full path.
mirror
A mirror of two or more devices. Data is replicated in an identical fashion across all components of a mirror. A mirror with N disks of size X can hold X bytes and can withstand (N-1) devices failing before data integrity is compromised.
raidz , raidz1 , raidz2 , raidz3
A variation on RAID-5 that allows for better distribution of parity and eliminates the RAID-5 "write hole" (in which data and parity become inconsistent after a power loss). Data and parity are striped across all disks within a raidz group.

A raidz group can have single-, double-, or triple-parity, meaning that the raidz group can sustain one, two, or three failures, respectively, without losing any data. The raidz1 vdev type specifies a single-parity raidz group; the raidz2 vdev type specifies a double-parity raidz group; and the raidz3 vdev type specifies a triple-parity raidz group. The raidz vdev type is an alias for raidz1.

A raidz group with N disks of size X with P parity disks can hold approximately (N-P)*X bytes and can withstand P device(s) failing before data integrity is compromised. The minimum number of devices in a raidz group is one more than the number of parity disks. The recommended number is between 3 and 9 to help increase performance.

spare
A pseudo-vdev which keeps track of available hot spares for a pool. For more information, see the Hot Spares section.
log
A separate intent log device. If more than one log device is specified, then writes are load-balanced between devices. Log devices can be mirrored. However, raidz vdev types are not supported for the intent log. For more information, see the Intent Log section.
dedup
A device dedicated solely for deduplication tables. The redundancy of this device should match the redundancy of the other normal devices in the pool. If more than one dedup device is specified, then allocations are load-balanced between those devices.
special
A device dedicated solely for allocating various kinds of internal metadata, and optionally small file blocks. The redundancy of this device should match the redundancy of the other normal devices in the pool. If more than one special device is specified, then allocations are load-balanced between those devices.

For more information on special allocations, see the Special Allocation Class section.

cache
A device used to cache storage pool data. A cache device cannot be configured as a mirror or raidz group. For more information, see the Cache Devices section.

Virtual devices cannot be nested, so a mirror or raidz virtual device can only contain files or disks. Mirrors of mirrors (or other combinations) are not allowed.

A pool can have any number of virtual devices at the top of the configuration (known as "root vdevs"). Data is dynamically distributed across all top-level devices to balance data among devices. As new virtual devices are added, ZFS automatically places data on the newly available devices.

Virtual devices are specified one at a time on the command line, separated by whitespace. The keywords mirror and raidz are used to distinguish where a group ends and another begins. For example, the following creates two root vdevs, each a mirror of two disks:

# zpool create mypool mirror sda sdb mirror sdc sdd
 

Device Failure and Recovery

ZFS supports a rich set of mechanisms for handling device failure and data corruption. All metadata and data is checksummed, and ZFS automatically repairs bad data from a good copy when corruption is detected.

In order to take advantage of these features, a pool must make use of some form of redundancy, using either mirrored or raidz groups. While ZFS supports running in a non-redundant configuration, where each root vdev is simply a disk or file, this is strongly discouraged. A single case of bit corruption can render some or all of your data unavailable.

A pool's health status is described by one of three states: online, degraded, or faulted. An online pool has all devices operating normally. A degraded pool is one in which one or more devices have failed, but the data is still available due to a redundant configuration. A faulted pool has corrupted metadata, or one or more faulted devices, and insufficient replicas to continue functioning.

The health of the top-level vdev, such as mirror or raidz device, is potentially impacted by the state of its associated vdevs, or component devices. A top-level vdev or component device is in one of the following states:

DEGRADED
One or more top-level vdevs is in the degraded state because one or more component devices are offline. Sufficient replicas exist to continue functioning.

One or more component devices is in the degraded or faulted state, but sufficient replicas exist to continue functioning. The underlying conditions are as follows:

FAULTED
One or more top-level vdevs is in the faulted state because one or more component devices are offline. Insufficient replicas exist to continue functioning.

One or more component devices is in the faulted state, and insufficient replicas exist to continue functioning. The underlying conditions are as follows:

OFFLINE
The device was explicitly taken offline by the zpool offline command.
ONLINE
The device is online and functioning.
REMOVED
The device was physically removed while the system was running. Device removal detection is hardware-dependent and may not be supported on all platforms.
UNAVAIL
The device could not be opened. If a pool is imported when a device was unavailable, then the device will be identified by a unique identifier instead of its path since the path was never correct in the first place.

If a device is removed and later re-attached to the system, ZFS attempts to put the device online automatically. Device attach detection is hardware-dependent and might not be supported on all platforms.  

Hot Spares

ZFS allows devices to be associated with pools as "hot spares". These devices are not actively used in the pool, but when an active device fails, it is automatically replaced by a hot spare. To create a pool with hot spares, specify a spare vdev with any number of devices. For example,
# zpool create pool mirror sda sdb spare sdc sdd

Spares can be shared across multiple pools, and can be added with the zpool add command and removed with the zpool remove command. Once a spare replacement is initiated, a new spare vdev is created within the configuration that will remain there until the original device is replaced. At this point, the hot spare becomes available again if another device fails.

If a pool has a shared spare that is currently being used, the pool cannot be exported, since other pools may use this shared spare, which could lead to data corruption.

Shared spares add some risk. If the pools are imported on different hosts, and both pools suffer a device failure at the same time, both could attempt to use the spare at the same time. This may not be detected, resulting in data corruption.

An in-progress spare replacement can be cancelled by detaching the hot spare. If the original faulted device is detached, then the hot spare assumes its place in the configuration, and is removed from the spare list of all active pools.

Spares cannot replace log devices.  

Intent Log

The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous transactions. For instance, databases often require their transactions to be on stable storage devices when returning from a system call. NFS and other applications can also use fsync(2) to ensure data stability. By default, the intent log is allocated from blocks within the main pool. However, it might be possible to get better performance using separate intent log devices such as NVRAM or a dedicated disk. For example:
# zpool create pool sda sdb log sdc

Multiple log devices can also be specified, and they can be mirrored. See the EXAMPLES section for an example of mirroring multiple log devices.

Log devices can be added, replaced, attached, detached and removed. In addition, log devices are imported and exported as part of the pool that contains them. Mirrored devices can be removed by specifying the top-level mirror vdev.  

Cache Devices

Devices can be added to a storage pool as "cache devices". These devices provide an additional layer of caching between main memory and disk. For read-heavy workloads, where the working set size is much larger than what can be cached in main memory, using cache devices allows much more of this working set to be served from low-latency media. Using cache devices provides the greatest performance improvement for random read workloads of mostly static content.

To create a pool with cache devices, specify a cache vdev with any number of devices. For example:

# zpool create pool sda sdb cache sdc sdd

Cache devices cannot be mirrored or part of a raidz configuration. If a read error is encountered on a cache device, that read I/O is reissued to the original storage pool device, which might be part of a mirrored or raidz configuration.

The content of the cache devices is considered volatile, as is the case with other system caches.  

Pool checkpoint

Before starting critical procedures that include destructive actions (e.g. zfs destroy), an administrator can checkpoint the pool's state and, in the case of a mistake or failure, rewind the entire pool back to the checkpoint. Otherwise, the checkpoint can be discarded when the procedure has completed successfully.

A pool checkpoint can be thought of as a pool-wide snapshot and should be used with care as it contains every part of the pool's state, from properties to vdev configuration. Thus, while a pool has a checkpoint, certain operations are not allowed. Specifically, vdev removal/attach/detach, mirror splitting, and changing the pool's guid. Adding a new vdev is supported, but in the case of a rewind it will have to be added again. Finally, users of this feature should keep in mind that scrubs in a pool that has a checkpoint do not repair checkpointed data.

To create a checkpoint for a pool:

# zpool checkpoint pool

To later rewind to its checkpointed state, you need to first export it and then rewind it during import:

# zpool export pool
# zpool import --rewind-to-checkpoint pool

To discard the checkpoint from a pool:

# zpool checkpoint -d pool

Dataset reservations (controlled by the reservation or refreservation zfs properties) may be unenforceable while a checkpoint exists, because the checkpoint is allowed to consume the dataset's reservation. Finally, data that is part of the checkpoint but has been freed in the current state of the pool won't be scanned during a scrub.  

Special Allocation Class

The allocations in the special class are dedicated to specific block types. By default this includes all metadata, the indirect blocks of user data, and any deduplication tables. The class can also be provisioned to accept small file blocks.

A pool must always have at least one normal (non-dedup/special) vdev before other devices can be assigned to the special class. If the special class becomes full, then allocations intended for it will spill back into the normal class.

Deduplication tables can be excluded from the special class by setting the zfs_ddt_data_is_special zfs module parameter to false (0).

Inclusion of small file blocks in the special class is opt-in. Each dataset can control the size of small file blocks allowed in the special class by setting the special_small_blocks dataset property. It defaults to zero, so you must opt-in by setting it to a non-zero value. See zfs(8) for more info on setting this property.  
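
For example, a pool could be created with a mirrored special vdev and the root dataset opted in to small-block allocation. The pool, device, and size values below are illustrative, not taken from this page:

# zpool create tank mirror sda sdb special mirror sdc sdd
# zfs set special_small_blocks=32K tank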

Properties

Each pool has several properties associated with it. Some properties are read-only statistics while others are configurable and change the behavior of the pool.

The following are read-only properties:

allocated
Amount of storage used within the pool. See fragmentation and free for more information.
capacity
Percentage of pool space used. This property can also be referred to by its shortened column name, cap.
expandsize
Amount of uninitialized space within the pool or device that can be used to increase the total capacity of the pool. Uninitialized space consists of any space on an EFI labeled vdev which has not been brought online (e.g., using zpool online -e). This space occurs when a LUN is dynamically expanded.
fragmentation
The amount of fragmentation in the pool. As the amount of space allocated increases, it becomes more difficult to locate free space. This may result in lower write performance compared to pools with more unfragmented free space.
free
The amount of free space available in the pool. By contrast, the zfs(8) available property describes how much new data can be written to ZFS filesystems/volumes. The zpool free property is not generally useful for this purpose, and can be substantially more than the zfs available space. This discrepancy is due to several factors, including raidz parity; the zfs reservation, quota, refreservation, and refquota properties; and space set aside by spa_slop_shift (see zfs-module-parameters(5) for more information).
freeing
After a file system or snapshot is destroyed, the space it was using is returned to the pool asynchronously. freeing is the amount of space remaining to be reclaimed. Over time freeing will decrease while free increases.
health
The current health of the pool. Health can be one of ONLINE, DEGRADED, FAULTED, OFFLINE, REMOVED, or UNAVAIL.
guid
A unique identifier for the pool.
load_guid
A unique identifier for the pool. Unlike the guid property, this identifier is generated every time we load the pool (e.g. does not persist across imports/exports) and never changes while the pool is loaded (even if a reguid operation takes place).
size
Total size of the storage pool.
unsupported@feature_guid
Information about unsupported features that are enabled on the pool. See zpool-features(5) for details.

The space usage properties report actual physical space available to the storage pool. The physical space can be different from the total amount of space that any contained datasets can actually use. The amount of space used in a raidz configuration depends on the characteristics of the data being written. In addition, ZFS reserves some space for internal accounting that the zfs(8) command takes into account, but the zpool command does not. For non-full pools of a reasonable size, these effects should be invisible. For small pools, or pools that are close to being completely full, these discrepancies may become more noticeable.

The following property can be set at creation time and import time:

altroot
Alternate root directory. If set, this directory is prepended to any mount points within the pool. This can be used when examining an unknown pool where the mount points cannot be trusted, or in an alternate boot environment, where the typical paths are not valid. altroot is not a persistent property. It is valid only while the system is up. Setting altroot defaults to using cachefile = none, though this may be overridden using an explicit setting.

The following property can be set only at import time:

readonly = on | off
If set to on, the pool will be imported in read-only mode. This property can also be referred to by its shortened column name, rdonly.

The following properties can be set at creation time and import time, and later changed with the zpool set command:

ashift = ashift
Pool sector size exponent, to the power of 2 (internally referred to as ashift ). Values from 9 to 16, inclusive, are valid; also, the value 0 (the default) means to auto-detect using the kernel's block layer and a ZFS internal exception list. I/O operations will be aligned to the specified size boundaries. Additionally, the minimum (disk) write size will be set to the specified size, so this represents a space vs. performance trade-off. For optimal performance, the pool sector size should be greater than or equal to the sector size of the underlying disks. The typical case for setting this property is when performance is important and the underlying disks use 4KiB sectors but report 512B sectors to the OS (for compatibility reasons); in that case, set ashift=12 (which is 1<<12 = 4096). When set, this property is used as the default hint value in subsequent vdev operations (add, attach and replace). Changing this value will not modify any existing vdev, not even on disk replacement; however it can be used, for instance, to replace a dying 512B sectors disk with a newer 4KiB sectors device: this will probably result in bad performance but at the same time could prevent loss of data.
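
For example, a pool backed by drives with 4KiB physical sectors that report 512B sectors could be created with an explicit ashift. The pool and device names here are illustrative:

# zpool create -o ashift=12 tank mirror sda sdb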
autoexpand = on | off
Controls automatic pool expansion when the underlying LUN is grown. If set to on, the pool will be resized according to the size of the expanded device. If the device is part of a mirror or raidz then all devices within that mirror/raidz group must be expanded before the new space is made available to the pool. The default behavior is off. This property can also be referred to by its shortened column name, expand.
autoreplace = on | off
Controls automatic device replacement. If set to off, device replacement must be initiated by the administrator by using the zpool replace command. If set to on, any new device found in the same physical location as a device that previously belonged to the pool is automatically formatted and replaced. The default behavior is off. This property can also be referred to by its shortened column name, replace. Autoreplace can also be used with virtual disks (like device mapper) provided that you use the /dev/disk/by-vdev paths setup by vdev_id.conf. See the vdev_id(8) man page for more details. Autoreplace and autoonline require the ZFS Event Daemon be configured and running. See the zed(8) man page for more details.
bootfs = (unset) | pool / dataset
Identifies the default bootable dataset for the root pool. This property is expected to be set mainly by the installation and upgrade programs. Not all Linux distribution boot processes use the bootfs property.
cachefile = path | none
Controls the location of where the pool configuration is cached. Discovering all pools on system startup requires a cached copy of the configuration data that is stored on the root file system. All pools in this cache are automatically imported when the system boots. Some environments, such as install and clustering, need to cache this information in a different location so that pools are not automatically imported. Setting this property caches the pool configuration in a different location that can later be imported with zpool import -c. Setting it to the value none creates a temporary pool that is never cached, and the "" (empty string) uses the default location.

Multiple pools can share the same cache file. Because the kernel destroys and recreates this file when pools are added and removed, care should be taken when attempting to access this file. When the last pool using a cachefile is exported or destroyed, the file will be empty.

comment = text
A text string consisting of printable ASCII characters that will be stored such that it is available even if the pool becomes faulted. An administrator can provide additional information about a pool using this property.
dedupditto = number
This property is deprecated. In a future release, it will no longer have any effect.

Threshold for the number of block ditto copies. If the reference count for a deduplicated block increases above this number, a new ditto copy of this block is automatically stored. The default setting is 0, which causes no ditto copies to be created for deduplicated blocks. The minimum legal nonzero setting is 100.

delegation = on | off
Controls whether a non-privileged user is granted access based on the dataset permissions defined on the dataset. See zfs(8) for more information on ZFS delegated administration.
failmode = wait | continue | panic
Controls the system behavior in the event of catastrophic pool failure. This condition is typically a result of a loss of connectivity to the underlying storage device(s) or a failure of all devices within the pool. The behavior of such an event is determined as follows:

wait
Blocks all I/O access until the device connectivity is recovered and the errors are cleared. This is the default behavior.
continue
Returns EIO to any new write I/O requests but allows reads to any of the remaining healthy devices. Any write requests that have yet to be committed to disk would be blocked.
panic
Prints out a message to the console and generates a system crash dump.

autotrim = on | off
When set to on, space which has been recently freed, and is no longer allocated by the pool, will be periodically trimmed. This allows block device vdevs which support BLKDISCARD, such as SSDs, or file vdevs on which the underlying file system supports hole-punching, to reclaim unused blocks. The default setting for this property is off.

Automatic TRIM does not immediately reclaim blocks after a free. Instead, it will optimistically delay, allowing smaller ranges to be aggregated into a few larger ones. These can then be issued more efficiently to the storage.

Be aware that automatic trimming of recently freed data blocks can put significant stress on the underlying storage devices. This will vary depending on how well the specific device handles these commands. For lower-end devices it is often possible to achieve most of the benefits of automatic trimming by running an on-demand (manual) TRIM periodically using the zpool trim command.
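
For example, automatic trimming could be enabled on an existing pool, or a manual TRIM issued periodically instead. The pool name is illustrative:

# zpool set autotrim=on tank
# zpool trim tank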

feature@ feature_name = enabled
The value of this property is the current state of feature_name. The only valid value when setting this property is enabled, which moves feature_name to the enabled state. See zpool-features(5) for details on feature states.
listsnapshots = on | off
Controls whether information about snapshots associated with this pool is output when zfs list is run without the -t option. The default value is off. This property can also be referred to by its shortened name, listsnaps.
multihost = on | off
Controls whether a pool activity check should be performed during zpool import. When a pool is determined to be active it cannot be imported, even with the -f option. This property is intended to be used in failover configurations where multiple hosts have access to a pool on shared storage.

Multihost provides protection on import only. It does not protect against an individual device being used in multiple pools, regardless of the type of vdev. See the discussion under zpool create.

When this property is on, periodic writes to storage occur to show the pool is in use. See zfs_multihost_interval in the zfs-module-parameters(5) man page. In order to enable this property each host must set a unique hostid. See genhostid(1), zgenhostid(8), and spl-module-parameters(5) for additional details. The default value is off.
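
For example, on each host that may import the pool, a hostid could be generated and the property then enabled. The pool name is illustrative; see zgenhostid(8) for hostid generation options:

# zgenhostid
# zpool set multihost=on tank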

version = version
The current on-disk version of the pool. This can be increased, but never decreased. The preferred method of updating pools is with the zpool upgrade command, though this property can be used when a specific version is needed for backwards compatibility. Once feature flags are enabled on a pool this property will no longer have a value.

 

Subcommands

All subcommands that modify state are logged persistently to the pool in their original form.

The zpool command provides subcommands to create and destroy storage pools, add capacity to storage pools, and provide information about the storage pools. The following subcommands are supported:

-?
Displays a help message.
-V, --version
An alias for the zpool version subcommand.
add [-fgLnP ] [-o property = value ] pool vdev ...
Adds the specified virtual devices to the given pool. The vdev specification is described in the Virtual Devices section. The behavior of the -f option, and the device checks performed are described in the zpool create subcommand.

-f
Forces use of vdevs even if they appear in use or specify a conflicting replication level. Not all devices can be overridden in this manner.
-g
Display vdev GUIDs instead of the normal device names. These GUIDs can be used in place of device names for the zpool detach/offline/remove/replace commands.
-L
Display real paths for vdevs resolving all symbolic links. This can be used to look up the current block device name regardless of the /dev/disk/ path used to open it.
-n
Displays the configuration that would be used without actually adding the vdevs. The actual pool creation can still fail due to insufficient privileges or device sharing.
-P
Display real paths for vdevs instead of only the last component of the path. This can be used in conjunction with the -L flag.
-o property = value
Sets the given pool properties. See the Properties section for a list of valid properties that can be set. The only property supported at the moment is ashift.
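
For example, zpool add could be used to grow a pool with a new mirror, or to add a dedicated log device. The pool and device names here are illustrative:

# zpool add tank mirror sdc sdd
# zpool add tank log sde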

attach [-f ] [-o property = value ] pool device new_device
Attaches new_device to the existing device. The existing device cannot be part of a raidz configuration. If device is not currently part of a mirrored configuration, device automatically transforms into a two-way mirror of device and new_device. If device is part of a two-way mirror, attaching new_device creates a three-way mirror, and so on. In either case, new_device begins to resilver immediately.

-f
Forces use of new_device even if it appears to be in use. Not all devices can be overridden in this manner.
-o property = value
Sets the given pool properties. See the Properties section for a list of valid properties that can be set. The only property supported at the moment is ashift.
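
For example, a single-disk pool could be converted into a two-way mirror by attaching a second disk. The pool and device names here are illustrative:

# zpool attach tank sda sdb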

checkpoint [-d, --discard ] pool
Checkpoints the current state of pool, which can be later restored by zpool import --rewind-to-checkpoint. The existence of a checkpoint in a pool prohibits the following zpool commands: remove, attach, detach, split, and reguid. In addition, it may break reservation boundaries if the pool lacks free space. The zpool status command indicates the existence of a checkpoint or the progress of discarding a checkpoint from a pool. The zpool list command reports how much space the checkpoint takes from the pool.

-d, --discard
Discards an existing checkpoint from pool.

clear pool [device ]
Clears device errors in a pool. If no arguments are specified, all device errors within the pool are cleared. If one or more devices is specified, only those errors associated with the specified device or devices are cleared. If multihost is enabled, and the pool has been suspended, this will not resume I/O. While the pool was suspended, it may have been imported on another host, and resuming I/O could result in pool damage.
create [-dfn ] [-m mountpoint ] [-o property = value ... ] [-o feature@feature = value ... ] [-O file-system-property = value ... ] [-R root ] [-t tname ] pool vdev ...
Creates a new storage pool containing the virtual devices specified on the command line. The pool name must begin with a letter, and can only contain alphanumeric characters as well as underscore ("_"), dash ("-"), colon (":"), space (" "), and period ("."). The pool names mirror, raidz, spare, and log are reserved, as are names beginning with mirror, raidz, spare, and the pattern c[0-9]. The vdev specification is described in the Virtual Devices section.

The command attempts to verify that each device specified is accessible and not currently in use by another subsystem. However this check is not robust enough to detect simultaneous attempts to use a new device in different pools, even if multihost is enabled. The administrator must ensure that simultaneous invocations of any combination of zpool replace, zpool create, zpool add, or zpool labelclear do not refer to the same device. Using the same device in two pools will result in pool corruption.

There are some uses, such as being currently mounted, or specified as the dedicated dump device, that prevent a device from ever being used by ZFS. Other uses, such as having a preexisting UFS file system, can be overridden with the -f option.

The command also checks that the replication strategy for the pool is consistent. An attempt to combine redundant and non-redundant storage in a single pool, or to mix disks and files, results in an error unless -f is specified. The use of differently sized devices within a single raidz or mirror group is also flagged as an error unless -f is specified.

Unless the -R option is specified, the default mount point is /pool. The mount point must not exist or must be empty, or else the root dataset cannot be mounted. This can be overridden with the -m option.

By default all supported features are enabled on the new pool unless the -d option is specified.

-d
Do not enable any features on the new pool. Individual features can be enabled by setting their corresponding properties to enabled with the -o option. See zpool-features(5) for details about feature properties.
-f
Forces use of vdevs even if they appear in use or specify a conflicting replication level. Not all devices can be overridden in this manner.
-m mountpoint
Sets the mount point for the root dataset. The default mount point is /pool or altroot/pool if altroot is specified. The mount point must be an absolute path, legacy, or none. For more information on dataset mount points, see zfs(8).
-n
Displays the configuration that would be used without actually creating the pool. The actual pool creation can still fail due to insufficient privileges or device sharing.
-o property = value
Sets the given pool properties. See the Properties section for a list of valid properties that can be set.
-o feature@feature = value
Sets the given pool feature. See zpool-features(5) for a list of valid features that can be set. Value can be either disabled or enabled.
-O file-system-property = value
Sets the given file system properties in the root file system of the pool. See the Properties section of zfs(8) for a list of valid properties that can be set.
-R root
Equivalent to -o cachefile = none -o altroot = root
-t tname
Sets the in-core pool name to tname while the on-disk name will be the name specified as the pool name pool. This will set the default cachefile property to none. This is intended to handle name space collisions when creating pools for other systems, such as virtual machines or physical machines whose pools live on network block devices.

destroy [-f ] pool
Destroys the given pool, freeing up any devices for other use. This command tries to unmount any active datasets before destroying the pool.

-f
Forces any active datasets contained within the pool to be unmounted.

detach pool device
Detaches device from a mirror. The operation is refused if there are no other valid replicas of the data. If device may be re-added to the pool later on then consider the zpool offline command instead.
events [-vHf [pool | -c ] ]
Lists all recent events generated by the ZFS kernel modules. These events are consumed by zed(8) and used to automate administrative tasks such as replacing a failed device with a hot spare. For more information about the subclasses and event payloads that can be generated see the zfs-events(5) man page.

-c
Clear all previous events.
-f
Follow mode.
-H
Scripted mode. Do not display headers, and separate fields by a single tab instead of arbitrary space.
-v
Print the entire payload for each event.

export [-a ] [-f ] pool ...
Exports the given pools from the system. All devices are marked as exported, but are still considered in use by other subsystems. The devices can be moved between systems (even those of different endianness) and imported as long as a sufficient number of devices are present.

Before exporting the pool, all datasets within the pool are unmounted. A pool can not be exported if it has a shared spare that is currently being used.

For pools to be portable, you must give the command whole disks, not just partitions, so that ZFS can label the disks with portable EFI labels. Otherwise, disk drivers on platforms of different endianness will not recognize the disks.

-a
Exports all pools imported on the system.
-f
Forcefully unmount all datasets, using the unmount -f command.

This command will forcefully export the pool even if it has a shared spare that is currently being used. This may lead to potential data corruption.

get [-Hp [-o field [, field ... ] ] ] all | property [, property ... ] [pool ... ]
Retrieves the given list of properties (or all properties if all is used) for the specified storage pool(s). These properties are displayed with the following fields:
        name          Name of storage pool
        property      Property name
        value         Property value
        source        Property source, either 'default' or 'local'.

See the Properties section for more information on the available pool properties.

-H
Scripted mode. Do not display headers, and separate fields by a single tab instead of arbitrary space.
-o field
A comma-separated list of columns to display. name,property,value,source is the default value.
-p
Display numbers in parsable (exact) values.
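
For example, selected properties could be retrieved in a script-friendly, parsable form. The pool name is illustrative:

# zpool get -Hp -o name,property,value size,free tank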

history [-il ] [pool ... ]
Displays the command history of the specified pool(s) or all pools if no pool is specified.

-i
Displays internally logged ZFS events in addition to user initiated events.
-l
Displays log records in long format, which in addition to standard format includes the user name, the hostname, and the zone in which the operation was performed.

import [-D ] [-d dir | device ]
Lists pools available to import. If the -d option is not specified, this command searches for devices in /dev. The -d option can be specified multiple times, and all directories are searched. If the device appears to be part of an exported pool, this command displays a summary of the pool with the name of the pool, a numeric identifier, as well as the vdev layout and current health of the device for each device or file. Destroyed pools, pools that were previously destroyed with the zpool destroy command, are not listed unless the -D option is specified.

The numeric identifier is unique, and can be used instead of the pool name when multiple exported pools of the same name are available.

-c cachefile
Reads configuration from the given cachefile that was created with the cachefile pool property. This cachefile is used instead of searching for devices.
-d dir | device
Uses device or searches for devices or files in dir. The -d option can be specified multiple times.
-D
Lists destroyed pools only.

import -a [-DflmN ] [-F [-n [-T [-X ] ] ] ] [-c cachefile | -d dir | device ] [-o mntopts ] [-o property = value ... ] [-R root ] [-s ]
Imports all pools found in the search directories. Identical to the previous command, except that all pools with a sufficient number of devices available are imported. Destroyed pools, pools that were previously destroyed with the zpool destroy command, will not be imported unless the -D option is specified.

-a
Searches for and imports all pools found.
-c cachefile
Reads configuration from the given cachefile that was created with the cachefile pool property. This cachefile is used instead of searching for devices.
-d dir | device
Uses device or searches for devices or files in dir. The -d option can be specified multiple times. This option is incompatible with the -c option.
-D
Imports destroyed pools only. The -f option is also required.
-f
Forces import, even if the pool appears to be potentially active.
-F
Recovery mode for a non-importable pool. Attempt to return the pool to an importable state by discarding the last few transactions. Not all damaged pools can be recovered by using this option. If successful, the data from the discarded transactions is irretrievably lost. This option is ignored if the pool is importable or already imported.
-l
Indicates that this command will request encryption keys for all encrypted datasets it attempts to mount as it is bringing the pool online. Note that if any datasets have a keylocation of prompt this command will block waiting for the keys to be entered. Without this flag encrypted datasets will be left unavailable until the keys are loaded.
-m
Allows a pool to import when there is a missing log device. Recent transactions can be lost because the log device will be discarded.
-n
Used with the -F recovery option. Determines whether a non-importable pool can be made importable again, but does not actually perform the pool recovery. For more details about pool recovery mode, see the -F option, above.
-N
Import the pool without mounting any file systems.
-o mntopts
Comma-separated list of mount options to use when mounting datasets within the pool. See zfs(8) for a description of dataset properties and mount options.
-o property = value
Sets the specified property on the imported pool. See the Sx Properties section for more information on the available pool properties.
-R root
Sets the cachefile property to none and the altroot property to root
--rewind-to-checkpoint
Rewinds pool to the checkpointed state. Once the pool is imported with this flag there is no way to undo the rewind. All changes and data that were written after the checkpoint are lost! The only exception is when the readonly mounting option is enabled. In this case, the checkpointed state of the pool is opened and an administrator can see how the pool would look if they were to fully rewind.
-s
Scan using the default search path; the libblkid cache will not be consulted. A custom search path may be specified by setting the ZPOOL_IMPORT_PATH environment variable.
-X
Used with the -F recovery option. Determines whether extreme measures to find a valid txg should take place. This allows the pool to be rolled back to a txg which is no longer guaranteed to be consistent. Pools imported at an inconsistent txg may contain uncorrectable checksum errors. For more details about pool recovery mode, see the -F option, above. WARNING: This option can be extremely hazardous to the health of your pool and should only be used as a last resort.
-T
Specify the txg to use for rollback. Implies -FX. For more details about pool recovery mode, see the -X option, above. WARNING: This option can be extremely hazardous to the health of your pool and should only be used as a last resort.

import [-Dflm ] [-F [-n [-t [-T [-X ] ] ] ] ] [-c cachefile | -d dir | device ] [-o mntopts ] [-o property = value ... ] [-R root ] [-s ] pool | id [newpool ]
Imports a specific pool. A pool can be identified by its name or the numeric identifier. If newpool is specified, the pool is imported using the name newpool. Otherwise, it is imported with the same name as its exported name.

If a device is removed from a system without running zpool export first, the device appears as potentially active. It cannot be determined if this was a failed export, or whether the device is really in use from another host. To import a pool in this state, the -f option is required.

-c cachefile
Reads configuration from the given cachefile that was created with the cachefile pool property. This cachefile is used instead of searching for devices.
-d dir | device
Uses device or searches for devices or files in dir. The -d option can be specified multiple times. This option is incompatible with the -c option.
-D
Imports a destroyed pool. The -f option is also required.
-f
Forces import, even if the pool appears to be potentially active.
-F
Recovery mode for a non-importable pool. Attempt to return the pool to an importable state by discarding the last few transactions. Not all damaged pools can be recovered by using this option. If successful, the data from the discarded transactions is irretrievably lost. This option is ignored if the pool is importable or already imported.
-l
Indicates that this command will request encryption keys for all encrypted datasets it attempts to mount as it is bringing the pool online. Note that if any datasets have a keylocation of prompt this command will block waiting for the keys to be entered. Without this flag encrypted datasets will be left unavailable until the keys are loaded.
-m
Allows a pool to import when there is a missing log device. Recent transactions can be lost because the log device will be discarded.
-n
Used with the -F recovery option. Determines whether a non-importable pool can be made importable again, but does not actually perform the pool recovery. For more details about pool recovery mode, see the -F option, above.
-o mntopts
Comma-separated list of mount options to use when mounting datasets within the pool. See zfs(8) for a description of dataset properties and mount options.
-o property = value
Sets the specified property on the imported pool. See the Sx Properties section for more information on the available pool properties.
-R root
Sets the cachefile property to none and the altroot property to root
-s
Scan using the default search path; the libblkid cache will not be consulted. A custom search path may be specified by setting the ZPOOL_IMPORT_PATH environment variable.
-X
Used with the -F recovery option. Determines whether extreme measures to find a valid txg should take place. This allows the pool to be rolled back to a txg which is no longer guaranteed to be consistent. Pools imported at an inconsistent txg may contain uncorrectable checksum errors. For more details about pool recovery mode, see the -F option, above. WARNING: This option can be extremely hazardous to the health of your pool and should only be used as a last resort.
-T
Specify the txg to use for rollback. Implies -FX. For more details about pool recovery mode, see the -X option, above. WARNING: This option can be extremely hazardous to the health of your pool and should only be used as a last resort.
-t
Used with newpool. Specifies that newpool is temporary. Temporary pool names last until export. Ensures that the original pool name will be used in all label updates and therefore is retained upon export. Will also set -o cachefile=none when not explicitly specified.
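
For example, importable pools could first be listed, and a specific pool then imported under an alternate root. The pool name and root path here are illustrative:

# zpool import
# zpool import -R /mnt tank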

initialize [-c | -s ] pool [device ... ]
Begins initializing by writing to all unallocated regions on the specified devices, or all eligible devices in the pool if no individual devices are specified. Only leaf data or log devices may be initialized.

-c, --cancel
Cancel initializing on the specified devices, or all eligible devices if none are specified. If one or more target devices are invalid or are not currently being initialized, the command will fail and no cancellation will occur on any device.
-s, --suspend
Suspend initializing on the specified devices, or all eligible devices if none are specified. If one or more target devices are invalid or are not currently being initialized, the command will fail and no suspension will occur on any device. Initializing can then be resumed by running zpool initialize with no flags on the relevant target devices.
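
For example, initialization could be started on every eligible device in a pool and later suspended. The pool name is illustrative:

# zpool initialize tank
# zpool initialize -s tank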

iostat [[[-c SCRIPT [-lq | -rw ] ] ] ] [-T u | d ] [-ghHLnpPvy ] [[pool ... | [pool vdev ... | [vdev ... ] ] ] ] [interval [count ] ]
Displays logical I/O statistics for the given pools/vdevs. Physical I/Os may be observed via iostat(1). If writes are located nearby, they may be merged into a single larger operation. Additional I/O may be generated depending on the level of vdev redundancy. To filter output, you may pass in a list of pools, a pool and list of vdevs in that pool, or a list of any vdevs from any pool. If no items are specified, statistics for every pool in the system are shown. When given an interval the statistics are printed every interval seconds until ^C is pressed. If the -n flag is specified the headers are displayed only once, otherwise they are displayed periodically. If count is specified, the command exits after count reports are printed. The first report printed is always the statistics since boot regardless of whether interval and count are passed. However, this behavior can be suppressed with the -y flag. Also note that the units of K, M, G ... that are printed in the report are in base 1024. To get the raw values, use the -p flag.

-c [SCRIPT1 [, SCRIPT2 ... ] ]
Run a script (or scripts) on each vdev and include the output as a new column in the zpool iostat output. Users can run any script found in their ~/.zpool.d directory or from the system /etc/zfs/zpool.d directory. Script names containing the slash (/) character are not allowed. The default search path can be overridden by setting the ZPOOL_SCRIPTS_PATH environment variable. A privileged user can run -c if they have the ZPOOL_SCRIPTS_AS_ROOT environment variable set. If a script requires the use of a privileged command, like smartctl(8), then it's recommended you allow the user access to it in /etc/sudoers or add the user to the /etc/sudoers.d/zfs file.

If -c is passed without a script name, it prints a list of all scripts. -c also sets verbose mode ( -v ).

Script output should be in the form of "name=value". The column name is set to "name" and the value is set to "value". Multiple lines can be used to output multiple columns. The first line of output not in the "name=value" format is displayed without a column title, and no more output after that is displayed. This can be useful for printing error messages. Blank or NULL values are printed as a '-' to make output awk-able.

The following environment variables are set before running each script:

VDEV_PATH
Full path to the vdev

VDEV_UPATH
Underlying path to the vdev (/dev/sd*). For use with device mapper, multipath, or partitioned vdevs.

VDEV_ENC_SYSFS_PATH
The sysfs path to the enclosure for the vdev (if any).

-T u | d
Display a time stamp. Specify u for a printed representation of the internal representation of time. See time(2). Specify d for standard date format. See date(1).
-g
Display vdev GUIDs instead of the normal device names. These GUIDs can be used in place of device names for the zpool detach/offline/remove/replace commands.
-H
Scripted mode. Do not display headers, and separate fields by a single tab instead of arbitrary space.
-L
Display real paths for vdevs resolving all symbolic links. This can be used to look up the current block device name regardless of the /dev/disk/ path used to open it.
-n
Print headers only once when passed.
-p
Display numbers in parsable (exact) values. Time values are in nanoseconds.
-P
Display full paths for vdevs instead of only the last component of the path. This can be used in conjunction with the -L flag.
-r
Print request size histograms for the leaf vdev's IO. This includes histograms of individual IOs (ind) and aggregate IOs (agg). These stats can be useful for observing how well IO aggregation is working. Note that TRIM IOs may exceed 16M, but will be counted as 16M.
-v
Verbose statistics. Reports usage statistics for individual vdevs within the pool, in addition to the pool-wide statistics.
-y
Omit statistics since boot. Normally the first line of output reports the statistics since boot. This option suppresses that first line of output.
-w
Display latency histograms:

total_wait
Total IO time (queuing + disk IO time).
disk_wait
Disk IO time (time reading/writing the disk).
syncq_wait
Amount of time IO spent in synchronous priority queues. Does not include disk time.
asyncq_wait
Amount of time IO spent in asynchronous priority queues. Does not include disk time.
scrub
Amount of time IO spent in scrub queue. Does not include disk time.

-l
Include average latency statistics:

total_wait
Average total IO time (queuing + disk IO time).
disk_wait
Average disk IO time (time reading/writing the disk).
syncq_wait
Average amount of time IO spent in synchronous priority queues. Does not include disk time.
asyncq_wait
Average amount of time IO spent in asynchronous priority queues. Does not include disk time.
scrub
Average queuing time in scrub queue. Does not include disk time.
trim
Average queuing time in trim queue. Does not include disk time.

-q
Include active queue statistics. Each priority queue has both pending (pend) and active (activ) IOs. Pending IOs are waiting to be issued to the disk, and active IOs have been issued to disk and are waiting for completion. These stats are broken out by priority queue:

syncq_read/write
Current number of entries in synchronous priority queues.
asyncq_read/write
Current number of entries in asynchronous priority queues.
scrubq_read
Current number of entries in scrub queue.
trimq_write
Current number of entries in trim queue.

All queue statistics are instantaneous measurements of the number of entries in the queues. If you specify an interval, the measurements will be sampled from the end of the interval.
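
For example, per-vdev statistics could be printed every 5 seconds, or latency histograms displayed for a pool. The pool name is illustrative:

# zpool iostat -v tank 5
# zpool iostat -w tank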

labelclear [-f ] device
Removes ZFS label information from the specified device. The device must not be part of an active pool configuration.

-f
Treat exported or foreign devices as inactive.

list [-HgLpPv [-o property [, property ... ] ] ] [-T u | d ] [pool ... ] [interval [count ] ]
Lists the given pools along with a health status and space usage. If no pools are specified, all pools in the system are listed. When given an interval the information is printed every interval seconds until ^C is pressed. If count is specified, the command exits after count reports are printed.

-g
Display vdev GUIDs instead of the normal device names. These GUIDs can be used in place of device names for the zpool detach/offline/remove/replace commands.
-H
Scripted mode. Do not display headers, and separate fields by a single tab instead of arbitrary space.
-o property
Comma-separated list of properties to display. See the Properties section for a list of valid properties. The default list is name, size, allocated, free, checkpoint, expandsize, fragmentation, capacity, dedupratio, health, altroot.
-L
Display real paths for vdevs resolving all symbolic links. This can be used to look up the current block device name regardless of the /dev/disk/ path used to open it.
-p
Display numbers in parsable (exact) values.
-P
Display full paths for vdevs instead of only the last component of the path. This can be used in conjunction with the -L flag.
-T u | d
Display a time stamp. Specify u for a printed representation of the internal representation of time. See time(2). Specify d for standard date format. See date(1).
-v
Verbose statistics. Reports usage statistics for individual vdevs within the pool, in addition to the pool-wide statistics.
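
For example, only selected columns could be listed in parsable form. The pool name is illustrative:

# zpool list -p -o name,size,free tank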

offline [-f ] [-t ] pool device ...
Takes the specified physical device offline. While the device is offline, no attempt is made to read or write to the device. This command is not applicable to spares.

-f
Force fault. Instead of offlining the disk, put it into a faulted state. The fault will persist across imports unless the -t flag was specified.
-t
Temporary. Upon reboot, the specified physical device reverts to its previous state.

online [-e ] pool device ...
Brings the specified physical device online. This command is not applicable to spares.

-e
Expand the device to use all available space. If the device is part of a mirror or raidz then all devices must be expanded before the new space will become available to the pool.
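
For example, a disk could be taken offline temporarily and later brought back online with expansion enabled. The pool and device names here are illustrative:

# zpool offline -t tank sda
# zpool online -e tank sda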

reguid pool
Generates a new unique identifier for the pool. You must ensure that all devices in this pool are online and healthy before performing this action.
reopen [-n ] pool
Reopen all the vdevs associated with the pool.

-n
Do not restart an in-progress scrub operation. This is not recommended and can result in partially resilvered devices unless a second scrub is performed.

remove [-np ] pool device ...
Removes the specified device from the pool. This command supports removing hot spare, cache, log, and both mirrored and non-redundant primary top-level vdevs, including dedup and special vdevs. When the primary pool storage includes a top-level raidz vdev, only hot spare, cache, and log devices can be removed.

Removing a top-level vdev reduces the total amount of space in the storage pool. The specified device will be evacuated by copying all allocated space from it to the other devices in the pool. In this case, the zpool remove command initiates the removal and returns, while the evacuation continues in the background. The removal progress can be monitored with zpool status. If an IO error is encountered during the removal process it will be cancelled. The device_removal feature flag must be enabled to remove a top-level vdev; see zpool-features(5).

A mirrored top-level device (log or data) can be removed by specifying the top-level mirror for the same. Non-log devices or data devices that are part of a mirrored configuration can be removed using the zpool detach command.

-n
Do not actually perform the removal ("no-op"). Instead, print the estimated amount of memory that will be used by the mapping table after the removal completes. This is nonzero only for top-level vdevs.

-p
Used in conjunction with the -n flag, displays numbers as parsable (exact) values.
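
For example, the mapping-table memory cost of removing a mirrored top-level vdev could be previewed before performing the removal. The pool name and the vdev name mirror-1 (as it would be reported by zpool status) are illustrative:

# zpool remove -np tank mirror-1
# zpool remove tank mirror-1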

remove -s pool
Stops and cancels an in-progress removal of a top-level vdev.
replace [-f ] [-o property = value ] pool device [new_device ]
Replaces old_device with new_device. This is equivalent to attaching new_device, waiting for it to resilver, and then detaching old_device.

The size of new_device must be greater than or equal to the minimum size of all the devices in a mirror or raidz configuration.

new_device is required if the pool is not redundant. If new_device is not specified, it defaults to old_device. This form of replacement is useful after an existing disk has failed and has been physically replaced. In this case, the new disk may have the same /dev path as the old device, even though it is actually a different disk. ZFS recognizes this.

-f
Forces use of new_device even if it appears to be in use. Not all devices can be overridden in this manner.
-o property = value
Sets the given pool properties. See the Properties section for a list of valid properties that can be set. The only property supported at the moment is ashift.
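For example, the following command (pool and device names are illustrative) replaces a failed disk with a new one, forcing an ashift of 12 on the replacement:
# zpool replace -o ashift=12 tank sdb sdg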

scrub [-s | -p ] pool ...
Begins a scrub or resumes a paused scrub. The scrub examines all data in the specified pools to verify that it checksums correctly. For replicated (mirror or raidz) devices, ZFS automatically repairs any damage discovered during the scrub. The zpool status command reports the progress of the scrub and summarizes the results of the scrub upon completion.

Scrubbing and resilvering are very similar operations. The difference is that resilvering only examines data that ZFS knows to be out of date (for example, when attaching a new device to a mirror or replacing an existing device), whereas scrubbing examines all data to discover silent errors due to hardware faults or disk failure.

Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows one at a time. If a scrub is paused, running zpool scrub again resumes it. If a resilver is in progress, ZFS does not allow a scrub to be started until the resilver completes.

Note that, due to changes in pool data on a live system, it is possible for scrubs to progress slightly beyond 100% completion. During this period, no completion time estimate will be provided.

-s
Stop scrubbing.

-p
Pause scrubbing. Scrub pause state and progress are periodically synced to disk. If the system is restarted or the pool is exported during a paused scrub, the scrub remains paused after import until it is resumed. Once resumed, the scrub picks up from the place where it was last checkpointed to disk. To resume a paused scrub, issue zpool scrub again.
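For example, the following sequence of commands (pool name illustrative) starts a scrub, pauses it, and later resumes it:
# zpool scrub tank
# zpool scrub -p tank
# zpool scrub tank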

resilver pool ...
Starts a resilver. If an existing resilver is already running it will be restarted from the beginning. Any drives that were scheduled for a deferred resilver will be added to the new one. This requires the resilver_defer feature.
trim [-d ] [-r rate ] [-c | -s ] pool [device ... ]
Initiates an immediate on-demand TRIM operation for all of the free space in a pool. This operation informs the underlying storage devices of all blocks in the pool which are no longer allocated and allows thinly provisioned devices to reclaim the space.

A manual on-demand TRIM operation can be initiated irrespective of the autotrim pool property setting. See the documentation for the autotrim property above for the types of vdev devices which can be trimmed.

-d, --secure
Causes a secure TRIM to be initiated. When performing a secure TRIM, the device guarantees that data stored on the trimmed blocks has been erased. This requires support from the device and is not supported by all SSDs.
-r, --rate rate
Controls the rate at which the TRIM operation progresses. Without this option TRIM is executed as quickly as possible. The rate, expressed in bytes per second, is applied on a per-vdev basis and may be set differently for each leaf vdev.
-c, --cancel
Cancel trimming on the specified devices, or all eligible devices if none are specified. If one or more target devices are invalid or are not currently being trimmed, the command will fail and no cancellation will occur on any device.
-s, --suspend
Suspend trimming on the specified devices, or all eligible devices if none are specified. If one or more target devices are invalid or are not currently being trimmed, the command will fail and no suspension will occur on any device. Trimming can then be resumed by running zpool trim with no flags on the relevant target devices.
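For example, the following commands (pool name illustrative) start a TRIM of all free space in a pool and later cancel it:
# zpool trim tank
# zpool trim -c tank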

set property = value pool
Sets the given property on the specified pool. See the Properties section for more information on what properties can be set and acceptable values.
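For example, the following command (pool name illustrative) enables automatic expansion on a pool:
# zpool set autoexpand=on tank
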
split [-gLlnP ] [-o property = value ... ] [-R root ] pool newpool [device ... ]
Splits devices off pool, creating newpool. All vdevs in pool must be mirrors and the pool must not be in the process of resilvering. At the time of the split, newpool will be a replica of pool. By default, the last device in each mirror is split from pool to create newpool.

The optional device specification causes the specified device(s) to be included in the new pool; for any mirror whose device is not specified, the last device in that mirror is used, as it would be by default.

-g
Display vdev GUIDs instead of the normal device names. These GUIDs can be used in place of device names for the zpool detach/offline/remove/replace commands.
-L
Display real paths for vdevs resolving all symbolic links. This can be used to look up the current block device name regardless of the /dev/disk/ path used to open it.
-l
Indicates that this command will request encryption keys for all encrypted datasets it attempts to mount as it is bringing the new pool online. Note that if any datasets have a keylocation of prompt this command will block waiting for the keys to be entered. Without this flag encrypted datasets will be left unavailable until the keys are loaded.
-n
Do a dry run; do not actually perform the split. Print out the expected configuration of newpool.
-P
Display full paths for vdevs instead of only the last component of the path. This can be used in conjunction with the -L flag.
-o property = value
Sets the specified property for newpool. See the Properties section for more information on the available pool properties.
-R root
Set altroot for newpool to root and automatically import it.
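For example, the following commands (pool and directory names are illustrative) first show the configuration a split would produce, then split off a new pool and import it under an alternate root:
# zpool split -n tank tank2
# zpool split -R /mnt/tank2 tank tank2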

status [-c [SCRIPT1 [, SCRIPT2 ... ] ] ] [-DigLpPstvx ] [-T u | d ] [pool ... ] [interval [count ] ]
Displays the detailed health status for the given pools. If no pool is specified, then the status of each pool in the system is displayed. For more information on pool and device health, see the Device Failure and Recovery section.

If a scrub or resilver is in progress, this command reports the percentage done and the estimated time to completion. Both of these are only approximate, because the amount of data in the pool and the other workloads on the system can change.

-c [SCRIPT1 [, SCRIPT2 ... ] ]
Run a script (or scripts) on each vdev and include the output as a new column in the zpool status output. See the -c option of zpool iostat for complete details.
-i
Display vdev initialization status.
-g
Display vdev GUIDs instead of the normal device names. These GUIDs can be used in place of device names for the zpool detach/offline/remove/replace commands.
-L
Display real paths for vdevs resolving all symbolic links. This can be used to look up the current block device name regardless of the /dev/disk/ path used to open it.
-p
Display numbers in parsable (exact) values.
-P
Display full paths for vdevs instead of only the last component of the path. This can be used in conjunction with the -L flag.
-D
Display a histogram of deduplication statistics, showing the allocated (physically present on disk) and referenced (logically referenced in the pool) block counts and sizes by reference count.
-s
Display the number of leaf VDEV slow IOs. This is the number of IOs that didn't complete in zio_slow_io_ms milliseconds (default 30 seconds). This does not necessarily mean the IOs failed to complete, just took an unreasonably long amount of time. This may indicate a problem with the underlying storage.
-t
Display vdev TRIM status.
-T u | d
Display a time stamp. Specify u for a printed representation of the internal representation of time. See time(2). Specify d for standard date format. See date(1).
-v
Displays verbose data error information, printing out a complete list of all data errors since the last complete pool scrub.
-x
Only display status for pools that are exhibiting errors or are otherwise unavailable. Warnings about pools not using the latest on-disk format will not be included.
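For example, the following commands (pool name illustrative) show only pools that are exhibiting problems, and then full error details for a single pool:
# zpool status -x
# zpool status -v tank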

sync [pool ... ]
This command forces all in-core dirty data to be written to the primary pool storage and not the ZIL. It will also update administrative information including quota reporting. Without arguments, zpool sync will sync all pools on the system. Otherwise, it will sync only the specified pool(s).
upgrade
Displays pools which do not have all supported features enabled and pools formatted using a legacy ZFS version number. These pools can continue to be used, but some features may not be available. Use zpool upgrade -a to enable all features on all pools.
upgrade -v
Displays legacy ZFS versions supported by the current software. See zpool-features(5) for a description of the feature flags supported by the current software.
upgrade [-V version ] -a | pool ...
Enables all supported features on the given pool. Once this is done, the pool will no longer be accessible on systems that do not support feature flags. See zpool-features(5) for details on compatibility with systems that support feature flags, but do not support all features enabled on the pool.

-a
Enables all supported features on all pools.
-V version
Upgrade to the specified legacy version. If the -V flag is specified, no features will be enabled on the pool. This option can only be used to increase the version number up to the last supported legacy version number.
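For example, the following command (pool name illustrative) enables all supported features on a single pool:
# zpool upgrade tank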

version
Displays the software version of the userland utility and the zfs kernel module.

 

EXIT STATUS

The following exit values are returned:

0
Successful completion.
1
An error occurred.
2
Invalid command line options were specified.

 

EXAMPLES

Example 1 Creating a RAID-Z Storage Pool
The following command creates a pool with a single raidz root vdev that consists of six disks.
# zpool create tank raidz sda sdb sdc sdd sde sdf
Example 2 Creating a Mirrored Storage Pool
The following command creates a pool with two mirrors, where each mirror contains two disks.
# zpool create tank mirror sda sdb mirror sdc sdd
Example 3 Creating a ZFS Storage Pool by Using Partitions
The following command creates an unmirrored pool using two disk partitions.
# zpool create tank sda1 sdb2
Example 4 Creating a ZFS Storage Pool by Using Files
The following command creates an unmirrored pool using files. While not recommended, a pool based on files can be useful for experimental purposes.
# zpool create tank /path/to/file/a /path/to/file/b
Example 5 Adding a Mirror to a ZFS Storage Pool
The following command adds two mirrored disks to the pool tank, assuming the pool is already made up of two-way mirrors. The additional space is immediately available to any datasets within the pool.
# zpool add tank mirror sda sdb
Example 6 Listing Available ZFS Storage Pools
The following command lists all available pools on the system. In this case, the pool zion is faulted due to a missing device. The results from this command are similar to the following:
# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
zion       -      -      -         -      -      -      -  FAULTED -
Example 7 Destroying a ZFS Storage Pool
The following command destroys the pool tank and any datasets contained within.
# zpool destroy -f tank
Example 8 Exporting a ZFS Storage Pool
The following command exports the devices in pool tank so that they can be relocated or later imported.
# zpool export tank
Example 9 Importing a ZFS Storage Pool
The following command displays available pools, and then imports the pool tank for use on the system. The results from this command are similar to the following:
# zpool import
  pool: tank
    id: 15451357997522795478
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          mirror    ONLINE
            sda     ONLINE
            sdb     ONLINE

# zpool import tank
Example 10 Upgrading All ZFS Storage Pools to the Current Version
The following command upgrades all ZFS storage pools to the current version of the software.
# zpool upgrade -a
This system is currently running ZFS version 2.
Example 11 Managing Hot Spares
The following command creates a new pool with an available hot spare:
# zpool create tank mirror sda sdb spare sdc

If one of the disks were to fail, the pool would be reduced to the degraded state. The failed device can be replaced using the following command:

# zpool replace tank sda sdd

Once the data has been resilvered, the spare is automatically removed and is made available for use should another device fail. The hot spare can be permanently removed from the pool using the following command:

# zpool remove tank sdc
Example 12 Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two, two-way mirrors and mirrored log devices:
# zpool create pool mirror sda sdb mirror sdc sdd log mirror \
  sde sdf
Example 13 Adding Cache Devices to a ZFS Pool
The following command adds two disks for use as cache devices to a ZFS storage pool:
# zpool add pool cache sdc sdd

Once added, the cache devices gradually fill with content from main memory. Depending on the size of your cache devices, it could take over an hour for them to fill. Capacity and reads can be monitored using the iostat option as follows:

# zpool iostat -v pool 5
Example 14 Removing a Mirrored top-level (Log or Data) Device
The following commands remove the mirrored log device mirror-2 and the mirrored top-level data device mirror-1.

Given this configuration:

  pool: tank
 state: ONLINE
 scrub: none requested
config:

         NAME        STATE     READ WRITE CKSUM
         tank        ONLINE       0     0     0
           mirror-0  ONLINE       0     0     0
             sda     ONLINE       0     0     0
             sdb     ONLINE       0     0     0
           mirror-1  ONLINE       0     0     0
             sdc     ONLINE       0     0     0
             sdd     ONLINE       0     0     0
         logs
           mirror-2  ONLINE       0     0     0
             sde     ONLINE       0     0     0
             sdf     ONLINE       0     0     0

The command to remove the mirrored log mirror-2 is:

# zpool remove tank mirror-2

The command to remove the mirrored data mirror-1 is:

# zpool remove tank mirror-1
Example 15 Displaying expanded space on a device
The following command displays the detailed information for the pool data. This pool consists of a single raidz vdev in which one of the devices has increased its capacity by 10GB. In this example, the pool cannot utilize this extra capacity until all the devices under the raidz vdev have been expanded.
# zpool list -v data
NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
  raidz1    23.9G  14.6G  9.30G         -    48%
    sda         -      -      -         -      -
    sdb         -      -      -       10G      -
    sdc         -      -      -         -      -
Example 16 Adding output columns
Additional columns can be added to the zpool status and zpool iostat output with the -c option.
# zpool status -c vendor,model,size
   NAME     STATE  READ WRITE CKSUM vendor  model        size
   tank     ONLINE 0    0     0
   mirror-0 ONLINE 0    0     0
   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T

# zpool iostat -vc slaves
   capacity operations bandwidth
   pool       alloc free  read  write read  write slaves
   ---------- ----- ----- ----- ----- ----- ----- ---------
   tank       20.4G 7.23T 26    152   20.7M 21.6M
   mirror     20.4G 7.23T 26    152   20.7M 21.6M
   U1         -     -     0     31    1.46K 20.6M sdb sdff
   U10        -     -     0     1     3.77K 13.3K sdas sdgw
   U11        -     -     0     1     288K  13.3K sdat sdgx
   U12        -     -     0     1     78.4K 13.3K sdau sdgy
   U13        -     -     0     1     128K  13.3K sdav sdgz
   U14        -     -     0     1     63.2K 13.3K sdfk sdg

 

ENVIRONMENT VARIABLES

ZFS_ABORT
Cause zpool to dump core on exit for the purposes of running ::findleaks.

ZPOOL_IMPORT_PATH
The search path for devices or files to use with the pool. This is a colon-separated list of directories in which zpool looks for device nodes and files. Similar to the -d option in zpool import.
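For example, to restrict the import search to stable by-id device names (the directory list is illustrative):
# ZPOOL_IMPORT_PATH=/dev/disk/by-id:/dev zpool import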

ZPOOL_VDEV_NAME_GUID
Cause zpool subcommands to output vdev guids by default. This behavior is identical to the zpool status -g command line option.

ZPOOL_VDEV_NAME_FOLLOW_LINKS
Cause zpool subcommands to follow links for vdev names by default. This behavior is identical to the zpool status -L command line option.

ZPOOL_VDEV_NAME_PATH
Cause zpool subcommands to output full vdev path names by default. This behavior is identical to the zpool status -P command line option.

ZFS_VDEV_DEVID_OPT_OUT
Older ZFS on Linux implementations had issues when attempting to display pool config VDEV names if a devid NVP value was present in the pool's config.

For example, a pool that originated on the illumos platform would have a devid value in the config, and zpool status would fail when listing the config. This would also be true for future Linux-based pools.

A pool can be stripped of any devid values on import, or prevented from adding them on zpool create or zpool add, by setting ZFS_VDEV_DEVID_OPT_OUT.

ZPOOL_SCRIPTS_AS_ROOT
Allow a privileged user to run zpool status/iostat with the -c option. Normally, only unprivileged users are allowed to run -c.

ZPOOL_SCRIPTS_PATH
The search path for scripts when running zpool status/iostat with the -c option. This is a colon-separated list of directories and overrides the default ~/.zpool.d and /etc/zfs/zpool.d search paths.

ZPOOL_SCRIPTS_ENABLED
Allow a user to run zpool status/iostat with the -c option. If ZPOOL_SCRIPTS_ENABLED is not set, it is assumed that the user is allowed to run zpool status/iostat -c.

 

INTERFACE STABILITY

Evolving  

SEE ALSO

zfs-events(5), zfs-module-parameters(5), zpool-features(5), zed(8), zfs(8)