
System Administration Commands                                        zpool(8)



NAME
       zpool - configures ZFS storage pools

SYNOPSIS
       zpool [-?]


       zpool help command | help | property property-name


       zpool help -l properties


       zpool add [-f] [-o property=value] ... [-n [-l]] pool vdev ...


       zpool attach [-f] pool device new_device


       zpool clear [-nF [-f]] pool [device]


       zpool create [-f] [-n [-l]] [-B] [-N] [-o property=value] ...
            [-O file-system-property=value] ... [-m mountpoint]
            [-R root] pool vdev ...


       zpool destroy [-f] pool


       zpool detach pool device


       zpool export [-f] pool ...


       zpool get [-Hp] [-o all | field[,...]] [-s source[,...]]
            all | property[,...] pool ...


       zpool history [-il] [pool] ...


       zpool import [-d path ... | -c cachefile] [-D] [-l]
       [-S section[,...]] [-s  all | field[,...]]


       zpool import [-d path ... | -c cachefile] [-D] [-F [-n]] <pool | id>


       zpool import [-o mntopts] [-o property=value] ... [-d path ... |
            -c cachefile] [-D] [-f] [-m] [-N] [-R root] [-F [-n [-l]]] -a


       zpool import [-o mntopts] [-o property=value] ... [-d path ... |
            -c cachefile] [-D] [-f] [-m] [-N] [-R root] [-F [-n [-l]]]
            [-t tmppool] pool | id [newpool]


       zpool iostat [-T d|u ] [-v [-l]] [pool] ... [interval[count]]


       zpool label [-d path ... | -c cachefile] -C ...


       zpool label [-d path ... | -c cachefile] -R ...


       zpool list [-H] [-o property[,...]] [-T d|u ] [pool] ... [interval[count]]


       zpool monitor -t provider [-T d|u] [[-p] -o field[,...]] [pool] ...
            [interval [count]]


       zpool offline [-t] pool device ...


       zpool online [-e] pool device ...


       zpool reguid pool


       zpool remove pool device ...


       zpool remove -s pool


       zpool replace [-f] pool device [new_device]


       zpool scrub [-s] pool ...


       zpool set property=value pool


       zpool split [-n [-l]] [-R altroot]  [-o mntopts] [-o property=value] pool
            newpool [device ...]


       zpool status [-S section[,...]] [-s  all | field[,...]]
            [-l] [-v] [-x] [-T d|u ] [pool] ... [interval[count]]


       zpool upgrade


       zpool upgrade -v


       zpool upgrade [-n] [-V version [-f]] -a | pool ...

DESCRIPTION
       The  zpool  command  configures  ZFS storage pools. A storage pool is a
       collection of devices that provides physical storage and data  replica‐
       tion for ZFS datasets.


       All datasets within a storage pool share the same space. See zfs(8) for
       information on managing datasets.

   Virtual Devices (vdevs)
       A virtual device describes a single device or a collection  of  devices
       organized  according  to certain performance and fault characteristics.
       The following virtual devices are supported:

       disk

           A block device, typically located under /dev/dsk. ZFS can use indi‐
           vidual  slices or partitions, though the recommended mode of opera‐
           tion is to use whole disks. A disk can be specified by a full path,
           or  it  can  be  a shorthand name (the relative portion of the path
           under /dev/dsk). A whole disk can  be  specified  by  omitting  the
           slice  or  partition designation. Alternatively, whole disks can be
           specified using the /dev/chassis/.../disk path that  describes  the
           disk's current location. When given a whole disk, ZFS automatically
           labels the disk, if necessary.


       file

           A regular file. The use of files as a  backing  store  is  strongly
           discouraged. It is designed primarily for experimental purposes, as
           the fault tolerance of a file is only as good as the file system of
           which it is a part. A file must be specified by a full path.


       mirror

           A mirror of two or more devices. Data is replicated in an identical
           fashion across all components of a mirror. A mirror with N disks of
           size  X  can  hold  X bytes and can withstand (N-1) devices failing
           before data integrity is compromised.


       raidz
       raidz1
       raidz2
       raidz3

           A variation on RAID-5 that allows for better distribution of parity
           and  eliminates  the  "RAID-5 write hole" (in which data and parity
           become inconsistent after a power loss). Data and parity is striped
           across all disks within a raidz group.

           A  raidz group can have single-, double-, or triple parity, meaning
           that the raidz group can  sustain  one,  two,  or  three  failures,
           respectively, without losing any data. The raidz1  vdev type speci‐
           fies a single-parity raidz group; the raidz2  vdev type specifies a
           double-parity  raidz  group;  and the raidz3  vdev type specifies a
           triple-parity raidz group. The raidz  vdev type  is  an  alias  for
           raidz1.

           A  raidz  group with N disks of size X with P parity disks can hold
           approximately (N-P)*X bytes and can withstand P  device(s)  failing
           before data integrity is compromised. The minimum number of devices
           in a raidz group is one more than the number of parity  disks.  The
           recommended number is between 3 and 9 to help increase performance.
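
            For example, the following command (with illustrative disk names)
            would create a pool with a single double-parity raidz2 group of
            five disks:


              # zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0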





       spare

           A special pseudo-vdev which keeps track of available hot spares for
           a pool. For more information, see the "Hot Spares" section.


       log

            A separate intent log device. If more than one log device is
            specified, then writes are load-balanced between devices. Log devices
           can be mirrored. However, raidz  vdev types are not  supported  for
           the intent log. For more information, see the "Intent Log" section.


       meta

           A  device  used to optimize reads of certain types of ZFS metadata,
           in particular, deduplication entries. If more than one meta  device
           is  specified,  operations will be load balanced between them. Meta
           devices can be mirrored. However, raidz  vdev types  are  not  sup‐
           ported. For more information, see the "Meta Devices" section.


       cache

           A  device used to cache storage pool data. A cache device cannot be
           configured as a mirror or raidz group. For  more  information,  see
           the "Cache Devices" section.



       Virtual  devices  cannot be nested, so a mirror or raidz virtual device
       can only contain files or disks. Mirrors of mirrors (or other  combina‐
       tions) are not allowed.


       A pool can have any number of virtual devices at the top of the config‐
       uration (known as top-level vdevs).  Data  is  dynamically  distributed
       across all top-level devices to balance data among devices. As new vir‐
       tual devices are added, ZFS automatically  places  data  on  the  newly
       available devices.


       Virtual  devices are specified one at a time on the command line, sepa‐
       rated by whitespace. The keywords mirror and raidz are used to  distin‐
       guish where a group ends and another begins. For example, the following
       creates two top-level vdevs, each a mirror of two disks:

         # zpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0



       Alternatively, the following command could be used:

         # zpool create tank \
         mirror \
             /dev/chassis/RACK29.U01-04/DISK_00/disk \
             /dev/chassis/RACK29.U05-08/DISK_00/disk \
         mirror \
             /dev/chassis/RACK29.U01-04/DISK_01/disk \
             /dev/chassis/RACK29.U05-08/DISK_01/disk


   Pool or Device Failure and Recovery
       ZFS supports a rich set of mechanisms for handling device  failure  and
       data corruption. All metadata and data is checksummed, and ZFS automat‐
       ically repairs bad data from a good copy when corruption is detected.


       In order to take advantage of these features, a pool must make  use  of
       some  form  of redundancy, using either mirrored or raidz groups. While
       ZFS supports running in a non-redundant configuration, where each  top-
       level  vdev is simply a disk or file, this is strongly discouraged as a
       single case of bit corruption can render  some  or  all  of  your  data
       unavailable.


       A pool's health status is described by one of these states:

       DEGRADED

           A  pool  with  one  or  more  failed devices, but the data is still
           available due to a redundant configuration.


       ONLINE

           A pool that has all devices operating normally.


       SUSPENDED

           A pool that is waiting for device connectivity to  be  restored.  A
           suspended  pool remains in the wait state until the device issue is
           resolved.


       UNAVAIL

           A pool with corrupted metadata, or one or more unavailable  devices
           and insufficient replicas to continue functioning.


       UNKNOWN

            A pool is not imported and its actual status has not been verified.
            This health status is used by the zpool label command.


       CLEARED

            A pool has at least one device with ZFS metadata cleared by using
            the zpool label -C command. A cleared pool cannot be imported.



       The  health  of  the top-level vdev, such as mirror or raidz device, is
       potentially impacted by the state of its associated vdevs, or component
       devices.  A top-level vdev or component device is in one of the follow‐
       ing states:

       DEGRADED

           One or more top-level vdevs is in the degraded state because one or
           more  component  devices  are offline. Sufficient replicas exist to
           continue functioning.

           One or more component devices is in the degraded or faulted  state,
           but sufficient replicas exist to continue functioning. The underly‐
           ing conditions are as follows:

               o      The number of checksum errors exceeds acceptable  levels
                      and  the  device is degraded as an indication that some‐
                      thing may be wrong. ZFS continues to use the  device  as
                      necessary.


               o      The  number of I/O errors exceeds acceptable levels. The
                      device could not be marked as faulted because there  are
                      insufficient replicas to continue functioning.



       OFFLINE

           The  device  was explicitly taken offline by the zpool offline com‐
           mand.


       ONLINE

           The device is online and functioning.


       REMOVING

           The top-level vdev is being  removed  through  an  explicit  remove
           request.  As  data  on  this vdev is migrated to the remaining data
           devices in the pool, system performance may be impacted.


       REMOVED

           The device was physically removed while  the  system  was  running.
           Device  removal detection is hardware-dependent and may not be sup‐
           ported on all platforms.


       UNAVAIL

           The device could not be opened. If a pool is imported when a device
           was  unavailable,  then  the  device will be identified by a unique
           identifier instead of its path since the path was never correct  in
           the first place.


       UNKNOWN

            The device is not currently active but its actual state has not
            been verified. This state is used by the zpool label command.


       CLEARED

           ZFS metadata on this device has been cleared  by  using  the  zpool
           label -C command.



       If a device is removed and later reattached to the system, ZFS attempts
       to put the device online  automatically.  Device  attach  detection  is
       hardware-dependent and might not be supported on all platforms.

   Hot Spares
       ZFS  allows  devices  to  be associated with pools as hot spares. These
       devices are not actively used in the pool, but when  an  active  device
       fails,  it  is  automatically replaced by a hot spare. To create a pool
       with hot spares, specify a spare vdev with any number of  devices.  For
       example,

         # zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0



       Spares  can  be  added  with the zpool add command and removed with the
       zpool remove command. Once a spare  replacement  is  initiated,  a  new
       spare  vdev  is created within the configuration that will remain there
       until the original device is replaced. At this  point,  the  hot  spare
       becomes available again if another device fails.


       An  in-progress spare replacement can be cancelled by detaching the hot
       spare. If the original faulted device is detached, then the  hot  spare
       assumes  its  place in the configuration, and is removed from the spare
       list of all active pools.
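
        For example, if the hot spare c2d0 from the example above is currently
        replacing a failed device, the in-progress replacement could be
        cancelled as follows:

          # zpool detach pool c2d0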


       If the original failed device  is  physically  replaced,  brought  back
       online,  or  the  errors are cleared, either through an FMA event or by
       using the zpool online or zpool clear commands, and the  state  of  the
       original  device  becomes  healthy,  the INUSE spare device will become
       AVAIL again.


       Spares cannot replace log devices or meta devices.

   Intent Log
       The ZFS Intent Log (ZIL) satisfies POSIX requirements  for  synchronous
       transactions.  For instance, databases often require their transactions
       to be on stable storage devices when returning from a system call.  NFS
       and  other applications can also use fsync to ensure data stability. By
       default, the intent log is allocated from blocks within the main  pool.
       However,  it might be possible to get better performance using separate
       intent log devices such as NVRAM or a dedicated disk. For example:

         # zpool create pool c0d0 c1d0 log c2d0



       Multiple log devices can also be specified, and they can  be  mirrored.
       See  the  EXAMPLES  section  for  an  example of mirroring multiple log
       devices.
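
        For instance, a pool with a mirrored pair of log devices (device names
        are illustrative) could be created as follows:

          # zpool create pool c0d0 c1d0 log mirror c2d0 c3d0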


        Log devices can be added, replaced, attached, detached, imported,
        and exported as part of the larger pool. Mirrored log devices can be
        removed by specifying the top-level mirror for the log.
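
        For example, assuming zpool status lists the log mirror under the name
        mirror-1, it could be removed as follows:

          # zpool remove pool mirror-1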

   Meta Devices
       Devices can be added to a storage pool as meta devices.  These  devices
       store  copies of critical metadata which needs to be accessed in a non-
       sequential manner. This functionality is especially useful for dedupli‐
       cation  entries.  Since  copies of the metadata are also written to the
       main storage pool, I/O errors to this device can be recovered and  this
       device does not have to be mirrored.


       To create a pool with meta devices, specify a meta vdev with any number
       of devices. For example:

         # zpool create pool c0d0 c1d0 meta c2d0 c3d0



       Multiple meta devices can be specified, and they can be  mirrored,  but
       they cannot be part of a raidz configuration.


       Meta  devices can be added, replaced, attached, detached, imported, and
       exported as part of the larger pool.

   Cache Devices
       Devices can be added to a storage pool as cache devices. These  devices
       provide  an  additional  layer of caching between main memory and disk.
       For read-heavy workloads, where the working set  size  is  much  larger
        than what can be cached in main memory, using cache devices allows
        much more of this working set to be served from low latency media. Using
       cache  devices provides the greatest performance improvement for random
       read-workloads of mostly static content.


       To create a pool with cache devices, specify a  cache   vdev  with  any
       number of devices. For example:

         # zpool create pool c0d0 c1d0 cache c2d0 c3d0



       Cache devices cannot be mirrored or part of a raidz configuration. If a
       read error is encountered on a cache device, that read I/O is  reissued
       to  the original storage pool device, which might be part of a mirrored
       or raidz configuration.


       The content of the cache devices is considered volatile, as is the case
       with other system caches.

   Processes
       Each imported pool has an associated process, named zpool-poolname. The
       threads in this process are the pool's I/O  processing  threads,  which
       handle the compression, checksumming, and other tasks for all I/O asso‐
        ciated with the pool. This process exists to provide visibility into
       the  CPU  utilization  of  the system's storage pools. The existence of
       this process is an unstable interface.

   Properties
       Each pool has several properties associated with  it.  Some  properties
       are  read-only  statistics while others are configurable and change the
       behavior of the pool. The following are read-only properties:

       allocated

           Amount of storage space within the pool that  has  been  physically
           allocated.  This  property can also be referred to by its shortened
           column name, alloc.


       capacity

           Percentage of pool space used. This property can also  be  referred
           to by its shortened column name, cap.


       dedupratio

           The deduplication ratio specified for a pool, expressed as a multi‐
           plier. This value is expressed as  a  single  decimal  number.  For
           example,  a  dedupratio  value of 1.76 indicates that 1.76 units of
           data were stored but only 1 unit of disk space  was  actually  con‐
           sumed.  This property can also be referred to by its shortened col‐
           umn name, dedup.

           Deduplication can be enabled as follows:


             # zfs set dedup=on pool/dataset

           The default value is off.

           See zfs(8) for a description of the deduplication feature.


       free

           Number of blocks within the pool that are not allocated.


       health

           The current health of the pool. Health  can  be  ONLINE,  DEGRADED,
           UNAVAIL, CLEARED, UNKNOWN, or SUSPENDED.


       lastscrub=timestamp

           The start time of the last successful scrub.


       size

           Total size of the storage pool.



       These  space usage properties report actual physical space available to
       the storage pool. The physical space can be different  from  the  total
       amount  of  space  that  any  contained  datasets can actually use. The
       amount of space used in a raidz configuration depends on the character‐
       istics  of the data being written. In addition, ZFS reserves some space
       for internal accounting that the zfs(8) command takes into account, but
       the  zpool  command  does not. For non-full pools of a reasonable size,
       these effects should be invisible. For small pools, or pools  that  are
       close  to  being  completely  full, these discrepancies may become more
       noticeable.


       The following property can be set at creation time:

       allocunit=value

            This sets the allocation unit ZFS will use to read and write from
            and to the vdev. In general this property should not need to be set
            by hand. The value for 'allocunit' must be a power of 2 between
            512 and 8192 (8K). If an invalid or unsupported 'allocunit' is
            specified (for example, an 'allocunit' smaller than the logical
            sector size of the device), an error is returned.

            Note that ZFS uses the allocunit for allocations; as a consequence,
            allocated blocks that ZFS later writes and reads will be aligned on
            this boundary. Overriding it manually may have performance and/or
            space usage implications, so it should not be done without a clear
            need.



       The following property can be set at creation time and import time:

       altroot

           Alternate  root  directory.  If set, this directory is prepended to
           any mount points within the pool. This can be used  when  examining
           an  unknown pool where the mount points cannot be trusted, or in an
           alternate boot environment, where the typical paths are not  valid.
           altroot  is  not  a persistent property. It is valid only while the
           system is up. Setting altroot  defaults  to  using  cachefile=none,
           though this may be overridden using an explicit setting.
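
            For example, a pool being examined could be imported under an
            alternate root so that its file systems mount beneath /a:


              # zpool import -R /a pool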



       The following property can be set at import time:

       readonly=on | off

           Controls  whether  the pool can be modified. When enabled, any syn‐
           chronous data that exists only in the intent log is not  accessible
           until the pool is imported in read-write mode.

           Importing a pool in read-only mode has the following limitations:


               o      Attempts  to  set  additional pool properties during the
                      import are ignored.


               o      All file system mounts  are  converted  to  include  the
                      read-only (ro) mount option.

           A  pool that has been imported in read-only mode can be restored to
           read-write mode by exporting and importing the pool.
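
            For example, a pool could be imported in read-only mode as follows:


              # zpool import -o readonly=on pool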



       The following property is set automatically when a pool is created.  In
       general  this  property  should  not need to be set by hand except in a
       case where a pool has been cloned in some manner, resulting in the guid
       value  losing  its uniqueness. It can be reset on an imported pool with
       the zpool reguid command.


        guid    A unique identifier for the pool.
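
        For example, after cloning a pool, a fresh guid could be generated as
        follows:

          # zpool reguid pool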




       The following properties can be set at creation time and  import  time,
       and later changed with the zpool set command:

       autoexpand=on | off

           Controls automatic pool expansion when the underlying LUN is grown.
           If set to on, the pool will be resized according to the size of the
           expanded  device.  If  the device is part of a mirror or raidz then
           all devices within that mirror or  raidz  group  must  be  expanded
           before  the  new  space  is made available to the pool. The default
           behavior is off. This property can  also  be  referred  to  by  its
           shortened column name, expand.

            Do not use the format command to pick up the new size of the LUN
            or to relabel it. The pool will reflect the new size of the LUN
            automatically.
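
            For example, automatic expansion could be enabled on an existing
            pool as follows:


              # zpool set autoexpand=on pool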


       autoreplace=on | off

           Controls  automatic  device  replacement.  If  set  to  off, device
           replacement must be initiated by the  administrator  by  using  the
           zpool  replace  command. If set to on, any new device, found in the
           same physical location as a device that previously belonged to  the
           pool, is automatically formatted and replaced. The default behavior
           is off. This property can also be referred to by its shortened col‐
           umn name, replace.


       bootfs=pool/dataset

           Identifies  the  default  bootable  dataset for the root pool. This
           property is expected to be  set  mainly  by  the  installation  and
           upgrade programs.


       cachefile=path | none

           Controls  the  location  of where the pool configuration is cached.
           Discovering all pools on system startup requires a cached  copy  of
           the  configuration data that is stored on the root file system. All
           pools in this cache are  automatically  imported  when  the  system
           boots.  Some  environments, such as install and clustering, need to
           cache this information in a different location so  that  pools  are
           not  automatically  imported. Setting this property caches the pool
           configuration in a different location that can  later  be  imported
           with  zpool import -c. Setting it to the special value none creates
           a temporary pool that is never cached, and  the  special  value  ''
           (empty string) uses the default location.

           Multiple  pools  can  share the same cache file. Because the kernel
           destroys and re-creates this file when pools are added and removed,
           care  should be taken when attempting to access this file. When the
           last pool using a cachefile is exported or destroyed, the  file  is
           removed.
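
            For example, a pool could be created with an alternate cache file
            (the path below is illustrative) and later imported from it:


              # zpool create -o cachefile=/etc/zfs/alt.cache pool c0d0
              # zpool import -c /etc/zfs/alt.cache pool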


       clustered=on | off

           Controls  whether  a  pool  is  imported as a global pool in Oracle
           Solaris Cluster. This property can only be set at pool import  time
           on  a  system running Oracle Solaris Cluster. An attempt to set the
           property will fail if the pool is already  imported  or  if  Oracle
           Solaris Cluster is not installed and booted.

           If  this  property  is  set to on, all file systems of the pool are
           globally mounted and accessible from all nodes of the cluster.  The
           default behavior is off.

           Currently  there is a restriction on setting certain ZFS properties
           while the file system is globally mounted. The properties mentioned
           below  are allowed to be set when the file system is not mounted or
           locally mounted, but not when the file system is globally  mounted.
           Once set in those contexts, the properties will be functional after
           a subsequent global remount:



             atime
             devices
             exec
             readonly
             rstchown
             setuid
             xattr
             sync
             canmount
             mountpoint
             zoned


            A ZFS file system must have its "zoned" property set to "off" for a
            global mount to succeed. Attempts to set the "zoned" property of a
            globally mounted ZFS file system will fail.

           Some of the above restrictions may be lifted in the future.


       dedupditto=number

            Sets a threshold for the number of copies. If the reference count
            for a deduplicated block goes above this threshold, another ditto
            copy of the block is stored automatically. The default value is 0.


       delegation=on | off

           Controls whether a non-privileged user is granted access  based  on
           the  dataset  permissions defined on the dataset. The default value
           is on. See zfs(8) for more information on ZFS delegated administra‐
           tion.


       failmode=wait  | continue | panic

           Controls  the  system  behavior  in  the event of catastrophic pool
           failure. This condition is typically a result of a loss of  connec‐
           tivity  to  the  underlying  storage  device(s) or a failure of all
           devices within the pool. The behavior of such an  event  is  deter‐
           mined as follows:

           wait

               Blocks all I/O access to the pool until the device connectivity
               is recovered and the errors are cleared. A pool remains in  the
               wait  state  until  the  device  issue is resolved. This is the
               default behavior.


           continue

               Returns EIO to any new write I/O requests but allows  reads  to
               any  of  the remaining healthy devices. Any write requests that
               have yet to be committed to disk would be blocked.  This  value
               might still result in a panic if other pool issues occur at the
               same time.


           panic

               Prints out a message to the  console  and  generates  a  system
               crash dump.
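
            For example, to allow reads to continue after a catastrophic pool
            failure instead of blocking all I/O:


              # zpool set failmode=continue pool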



       listshares=on | off

           Controls  whether  share information in this pool is displayed with
           the zfs list command. The default value is off.


       listsnapshots=on | off

           Controls whether information about snapshots associated  with  this
           pool  is  output  when  zfs  list is run without the -t option. The
           default value is off. This property can also be referred to by  its
           shortened column name, listsnaps.


       scrubinterval=manual | timeinterval

           When scrubinterval is set to manual, scrub scheduling is disabled.

            When scrubinterval is set to a time interval, a new scrub is
            initiated after the time specified by this property has passed
            since the start of the last scrub, which either completed
            successfully or was canceled explicitly via zpool scrub -s.

            The following units are recognized: s (second, default), h (hour),
            d (day), w (week, 7 days), m (month, 30 days) and y (year, 365
            days); internally, the property is stored as seconds, but it is
            displayed as s/h/d/w/m/y by zpool get.

            Only a single unit may be used; for example, this is not allowed:


             # zpool set scrubinterval=1w3d

            Instead, this should be expressed as 10d.

           The default value is 1m.
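
            For example, to scrub the pool every two weeks:


              # zpool set scrubinterval=2w pool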


       version=version

           The current on-disk version of the pool. This can be increased, but
           never decreased. The preferred method of updating pools is with the
           zpool upgrade command, though this property can be used when a spe‐
           cific version is needed for backward compatibility.  This  property
           can  be  any  number  between 1 and the current version reported by
           zpool upgrade -v.

            If the pool is a boot pool then this property cannot be set with
            the zpool set command.


   Device status properties
        For reasons such as debugging, zpool subcommands like import and
        status can report various device-specific information using the -s
        option. This section lists the device-specific properties that are
        currently supported and their descriptions. These properties are
        specific to the context in which they are reported; for example, the
        'checksum' property reported against a vdev is the number of checksum
        errors detected specifically on that vdev.
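
        For example, a status query selecting a few of these fields might look
        as follows:

          # zpool status -s name,state,read,write,checksum pool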

       allocunit

            Allocation unit used by a vdev or a disk (top-level). Allocated
            blocks that ZFS writes and reads later will be aligned on this
            boundary.

           Aliases: aunit


       alloc

           Total allocated space on a vdev or a disk.


       free

           Total allocatable space on a vdev or a disk.


       pctfull

           Percentage of allocated space on a vdev or a disk.


       lsize

           Logical sector size reported by a disk.


       psize

           Physical sector size reported by a disk.


       checksum

           Number of checksum errors detected by zfs.

           Aliases: cksum


       name

           Name of the pool, a vdev or a disk.


       read

           Number of read errors detected by zfs.


       state

           State  of  the  pool, a vdev or a disk. See 'Pool or Device Failure
           and Recovery' for details about various states.


       write

           Number of write errors detected by zfs.



        The following properties are boolean, and are reported only when active.

       repair

           Display whether a repair is currently running for a vdev or a disk.

           Aliases: rpair


       resilver

           Display whether a resilver is currently running for  a  vdev  or  a
           disk.

           Aliases: rslvr


       slow

            Display whether a disk or a vdev is marked as slow.


   Subcommands
       All  subcommands  that modify state are logged persistently to the pool
       in their original form.


       The zpool command provides subcommands to create  and  destroy  storage
       pools, add capacity to storage pools, and provide information about the
       storage pools. The following subcommands are supported:

       zpool -?

           Displays a help message.


       zpool help command  | help | property property-name

           Displays zpool command usage. You can display help for  a  specific
           command  or property. If you display help for a specific command or
           property, the command syntax or available property values are  dis‐
           played.  Using zpool help without any arguments displays a complete
           list of zpool commands.


       zpool help -l properties

            Displays zpool property information, including whether the property
            value is editable and its possible values. If you display help
           for a specific subcommand or property, the command syntax or  prop‐
           erty  value  is  displayed.  Using zpool help without any arguments
           displays a complete list of zpool subcommands.


       zpool add [-f] [-o property=value] ... [-n [-l]] pool vdev ...

           Adds the specified virtual devices to  the  given  pool.  The  vdev
           specification  is  described  in the "Virtual Devices" section. The
           behavior of the -f option, and  the  device  checks  performed  are
           described in the zpool create subcommand.


           -f

               Forces  use  of  vdevs, even if they appear in use or specify a
               conflicting replication level. Not all devices can be  overrid‐
               den in this manner.


           -o property=value

               Sets  the  specified property for all vdevs specified in a com‐
               mand. Only 'allocunit' is supported at the moment.


           -n

               Displays the configuration that would be used without  actually
               adding  the  vdevs. The actual pool creation can still fail due
               to insufficient privileges or device sharing.


           -l

               If possible, have  -n  display  the  configuration  in  current
               /dev/chassis location form.

           Do  not  add a disk that is currently configured as a quorum device
           to a ZFS storage pool. After a disk is in the pool, that  disk  can
           then be configured as a quorum device.
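
            For example, a mirrored top-level vdev and an additional hot spare
            (device names are illustrative) could be added as follows:


              # zpool add pool mirror c4d0 c5d0
              # zpool add pool spare c6d0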


       zpool attach [-f] pool  device new_device

           Attaches  new_device  to  an  existing  zpool  device. The existing
           device cannot be part of a raidz configuration. If  device  is  not
           currently  part  of  a mirrored configuration, device automatically
           transforms into a two-way  mirror  of  device  and  new_device.  If
           device  is part of a two-way mirror, attaching new_device creates a
           three-way mirror, and so on. In either case, new_device  begins  to
           resilver immediately.

           -f

                Forces use of new_device, even if it appears to be in use. Not
                all devices can be overridden in this manner.
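
            For example, attaching c4d0 to the existing device c0d0 (names are
            illustrative) creates or extends a mirror:


              # zpool attach pool c0d0 c4d0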



       zpool clear [-nF [-f]]  pool [device] ...

           Clears device errors in a pool. If no arguments are specified,  all
           device  errors  within the pool are cleared. If one or more devices
           is specified, only  those  errors  associated  with  the  specified
           device or devices are cleared.

           -F

               Initiates  recovery  mode  for  an unopenable pool. Attempts to
               discard the last few transactions in the pool to return  it  to
               an  openable  state.  Not all damaged pools can be recovered by
               using this option. If successful, the data from  the  discarded
               transactions is irretrievably lost.


           -n

               Used  in combination with the -F flag. Check whether discarding
               transactions would make the pool openable, but do not  actually
               discard any transactions.


           -f

               This  is a special pool recovery option that can be used if the
               fmadm acquit or fmadm repair commands fail to  clear  a  pool's
               faults.  If  the system reboots, FMA replays the pool faults so
               you will need to resolve the  FMA  faults  after  the  pool  is
               recovered.
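
            For example, errors on a single device, or on the whole pool, could
            be cleared as follows:


              # zpool clear pool c0d0
              # zpool clear pool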



       zpool create [-f] [-n [-l]]  [-B] [-N] [-o property=value]  ... [-O
       file-system-property=value] ... [-m  mountpoint] [-R root]  pool vdev
       ...

           Creates a new storage pool containing the virtual devices specified
           on the command line. The pool name must begin with  a  letter,  and
           can  contain  alphanumeric  characters,  as well as underscore (_),
            dash (-), colon (:), space ( ), and period (.). The pool names
            mirror, raidz, spare, log, and meta are reserved, as are names
           beginning with  the  pattern  c[0-9].  The  vdev  specification  is
           described in the "Virtual Devices" section.

           The  command  verifies that each device specified is accessible and
            not currently in use by another subsystem. There are some uses,
            such as being currently mounted or specified as the dedicated dump
            device, that prevent a device from ever being used by ZFS. Other
           uses, such as having a preexisting UFS file system, can be overrid‐
           den with the -f option.

           The command also checks that the replication strategy for the  pool
           is  consistent.  An  attempt to combine redundant and non-redundant
           storage in a single pool, or to mix disks and files, results in  an
           error  unless -f is specified. The use of differently sized devices
           within a single raidz or mirror group is also flagged as  an  error
           unless -f is specified.

           Unless  the  -R  option  is  specified,  the default mount point is
           /pool. The mount point must not exist or must be empty, or else the
           root  dataset cannot be mounted. This can be overridden with the -m
           option.

           -B

               When operating on a whole disk device, creates the boot  parti‐
               tion,  if  one is required to boot from EFI (GPT) labeled disks
               on the platform. The -B option has no effect  on  devices  that
               are not whole disks.


           -N

               Creates  the pool without mounting or sharing the newly created
               root file system of the pool.


           -f

               Forces use of vdevs, even if they appear in use  or  specify  a
               conflicting  replication level. Not all devices can be overrid‐
               den in this manner.


           -l

               If possible, have  -n  display  the  configuration  in  current
               /dev/chassis location form.


           -n

               Displays  the configuration that would be used without actually
               creating the pool. The actual pool creation can still fail  due
               to insufficient privileges or if a device is currently in use.


           -o property=value [-o  property=value] ...

               Sets  the  given  pool properties. See the "Properties" section
               for a list of valid properties that can be set.


           -O file-system-property=value
           [-O file-system-property=value] ...

               Sets the given properties for the pool's top-level file system.
               See  the  "Properties"  section  of  zfs(8) for a list of valid
               properties that can be set.



           -R root

               Equivalent to -o cachefile=none,altroot=root.


           -m mountpoint

               Sets the mount point for the pool's top-level file system.  The
               default  mount point is /pool, or /altroot if altroot is speci‐
               fied. The mount point must be  an  absolute  path,  legacy,  or
               none. For more information on dataset mount points, see zfs(8).



       zpool destroy [-f] pool

            Attempts to destroy a pool that is no longer required, even if the
            pool devices are no longer available or accessible to the system.
            The -f option might be required. Then, use the zpool label command
            to remove the destroyed pool information from the pool devices, if
            you want to use the remaining pool devices again.

           -f

               Forces  any  active  datasets  contained  within the pool to be
               unmounted.



       zpool detach pool  device

           Detaches a device or a spare from a mirrored storage pool. A  spare
           can  also  be  detached  from  a RAID-Z storage pool if an existing
           device was physically replaced. Or,  you  can  detach  an  existing
           device  in a RAID-Z storage pool if it was replaced by a spare. The
           operation is refused if there are no other valid  replicas  of  the
           data.


       zpool export [-f] pool  ...

           Exports  the given pools from the system. All devices are marked as
           exported, but are still considered in use by other subsystems.  The
           devices can be moved between systems (even those of different endi‐
           anness) and imported as long as a sufficient number of devices  are
           present.

           Before  exporting  the  pool,  all  datasets  within  the  pool are
           unmounted.

           For pools to be portable, you must give  the  zpool  command  whole
           disks, not just slices, so that ZFS can label the disks with porta‐
           ble EFI labels. Otherwise, disk drivers on platforms  of  different
           endianness will not recognize the disks.

           -f

               Forcibly unmount all datasets, using the unmount -f command.

               This command will forcibly export the pool.



       zpool get [-Hp] [-o  all | field[,...]] [-s source[,...]]   all |
       property[,...] pool ...

           Retrieves the given list of properties (or all properties if all is
           used) for the specified storage pool(s).

           See  the "Properties" section for more information on the available
           pool properties.



           -H           Scripted mode. Does not display headers and  separates
                        fields by a single tab instead of arbitrary space.


           -p           Displays numbers in parseable (exact) values.


           -o fields    Comma-separated list of fields to display. By default,
                        the  properties  are  displayed  with  the   following
                        fields:


                          name          Name of storage pool
                          property      Property name
                          value         Property value
                          source        Property source, either 'default' or 'local'.




           -s source    A  comma-separated  list  of sources to display. Those
                        properties coming from a source other  than  those  in
                        this  list are ignored. Each source must be one of the
                        following:


                          local, default, none

                        The default value is all sources.


           See the "Properties" section for more information on the  available
           pool properties.
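            For example, selected properties could be retrieved in scripted
            form as follows:


              # zpool get -H -o name,value capacity,health pool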


       zpool history [-il] [pool] ...

           Displays the command history of the specified pools or all pools if
           no pool is specified.

           -i

               Displays internally logged ZFS events in addition to user  ini‐
               tiated events.


           -l

                Displays log records in long format, which in addition to the
                standard format includes the user name, the hostname, and the
                zone in which the operation was performed.



       zpool import [-d path ... | -c cachefile] [-D] [-l] [-S section[,...]]
       [-s  all | field[,...]]

           Lists pools available to import. If the -d option is not specified,
           this command searches for devices in /dev/dsk. The -d option can be
           specified multiple times, and all directories and device paths  are
           searched.  If  the  device  appears to be part of an exported pool,
           this command displays a summary of the pool with the  name  of  the
           pool,  a numeric identifier, as well as the vdev layout and current
           health of the device for each device or file. Pools that were  pre‐
           viously  destroyed  with  the zpool destroy command, are not listed
           unless the -D option is specified.

           The numeric identifier is unique, and can be used  instead  of  the
           pool  name when multiple exported pools of the same name are avail‐
           able.

           -c cachefile

               Reads configuration from the given cachefile that  was  created
               with  the  "cachefile"  pool  property.  This cachefile is used
               instead of searching for devices.


           -d path

               Searches for devices or files in path,  where  path  can  be  a
               directory or a device path. The -d option can be specified mul‐
               tiple times.


           -D

               Lists destroyed or cleared pools only.


           -l

               If possible, display information in current /dev/chassis  loca‐
               tion form.


            -s all | field[,...]

               A comma-separated list of device status property fields to dis‐
               play. The list of status fields  available  are:  name,  state,
               read,  write,  checksum,  repair,  resilver,  slow,  allocunit,
               psize, lsize, alloc, free and pctfull. See 'Device status prop‐
               erties' section for more details.

                When used in combination with -S, the 'config' section is
                implicitly included in the sections displayed.


            -S section[,...]

               A comma-separated list of sections to display. The list of sta‐
               tus  sections  available  are:  pool,  id, state, scan, config,
               dedup, errors.

                Without the -S option, all available sections are displayed.



        zpool import [-o mntopts] [-o property=value] ... [-d path ... |
        -c cachefile] [-D] [-f] [-m] [-N] [-R root] [-F [-n [-l]]] -a

           Imports  all pools found in the search directories or device paths.
           Identical to the previous command, except that  all  pools  with  a
           sufficient  number  of  devices  available are imported. Pools that
           were previously destroyed with the zpool destroy command,  are  not
           imported unless the -D option is specified.

           -o mntopts

               Comma-separated  list  of  mount  options  to use when mounting
               datasets within the pool.  See  zfs(8)  for  a  description  of
               dataset properties and mount options.


           -o property=value

               Sets  the  specified  property  on  the  imported pool. See the
               "Properties" section for more information on the available pool
               properties.


           -c cachefile

               Reads  configuration  from the given cachefile that was created
               with the "cachefile" pool  property.  This  cachefile  is  used
               instead of searching for devices.


           -d path

               Searches  for  devices  or  files in path. The -d option can be
               specified multiple times. This option is incompatible with  the
               -c option.


           -D

               Imports destroyed pools only. The -f option is also required.


           -f

               Forces  import,  even  if  the  pool  appears to be potentially
               active.


           -F

               Recovery mode for a non-importable pool. Attempt to return  the
               pool to an importable state by discarding the last few transac‐
               tions. Not all damaged pools can be  recovered  by  using  this
               option. If successful, the data from the discarded transactions
               is irretrievably lost. This option is ignored if  the  pool  is
               importable or already imported.


           -a

               Searches for and imports all pools found.


           -m

               Allows a pool to import when a log device is missing.


           -R root

               Sets the cachefile property to none and the altroot property to
               root.


           -N

               Imports the pool without mounting or sharing any file systems.


           -n

               Used with the -F recovery option.  Determines  whether  a  non-
               importable  pool  can  be  made  importable again, but does not
               actually perform the pool recovery. For more details about pool
               recovery mode, see the -F option, above.


           -l

               If  possible, have -n display information in current /dev/chas‐
               sis location form.



        zpool import [-d path ... | -c cachefile] [-D] [-F [-n]] <pool | id>
        zpool import [-o mntopts] [-o property=value] ... [-d path ... |
        -c cachefile] [-D] [-f] [-m] [-N] [-R root] [-F [-n]] [-l]
        [-t tmppool] pool | id [newpool]

           Imports a specific pool. A pool can be identified by  its  name  or
           the  numeric  identifier.  If  newpool  is  specified,  the pool is
           imported using  the  persistent  name  newpool.  Otherwise,  it  is
           imported  with  the same name as its exported name. Do not import a
           root pool with a new name. Otherwise, the system might not boot.

           If a device is removed from a system without running  zpool  export
           first,  the  device  appears  as  potentially  active. It cannot be
           determined if this was a failed export, or whether  the  device  is
           really  in  use  from another host. To import a pool in this state,
           the -f option is required.
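
            For example, an exported pool named tank could be imported as
            follows:


              # zpool import tank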

           -o mntopts

               Comma-separated list of mount  options  to  use  when  mounting
               datasets  within  the  pool.  See  zfs(8)  for a description of
               dataset properties and mount options.


           -o property=value

               Sets the specified property  on  the  imported  pool.  See  the
               "Properties" section for more information on the available pool
               properties.


           -c cachefile

               Reads configuration from the given cachefile that  was  created
               with  the  cachefile  pool  property.  This  cachefile  is used
               instead of searching for devices.


           -d path

               Searches for devices or files in path. The  -d  option  can  be
               specified  multiple times. This option is incompatible with the
               -c option.


           -D

               Imports destroyed pool. The -f option is also required.


           -f

               Forces import, even if  the  pool  appears  to  be  potentially
               active.


           -F

               Recovery  mode for a non-importable pool. Attempt to return the
               pool to an importable state by discarding the last few transac‐
               tions.  Not  all  damaged  pools can be recovered by using this
               option. If successful, the data from the discarded transactions
               is  irretrievably  lost.  This option is ignored if the pool is
               importable or already imported.


           -R root

               Sets the cachefile property to none and the altroot property to
               root.


           -N

               Imports the pool without mounting any file systems.


           -n

               Used  with  the  -F  recovery option. Determines whether a non-
               importable pool can be made  importable  again,  but  does  not
               actually perform the pool recovery. For more details about pool
               recovery mode, see the -F option, above.


           -l

               If possible, have -n display information in current  /dev/chas‐
               sis location form.


           -m

               Allows a pool to import when a log device is missing.


           -t tmppool

               Use  the specified temporary pool name for the duration of this
               import. Implies -o cachefile=none.




        zpool iostat [-T d|u] [-v [-l]] [pool] ... [interval[count]]

           Displays I/O statistics for the given pools. When given  an  inter‐
           val, the statistics are printed every interval seconds until Ctrl-C
           is pressed. If no pools are specified, statistics for every pool in
           the system is shown. If count is specified, the command exits after
           count reports are printed.

           -T d|u

               Display a time stamp.

               Specify d for standard date format. See date(1). Specify u  for
               a  printed  representation  of  the  internal representation of
               time. See time(2).


           -v

               Verbose statistics. Reports  usage  statistics  for  individual
               vdevs within the pool, in addition to the pool-wide statistics.


           -l

               If  possible,  have  -v  display  vdev  statistics  in  current
               /dev/chassis location form.
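
            For example, per-vdev statistics could be printed every five
            seconds as follows:


              # zpool iostat -v pool 5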



       zpool label [-d path ... | -c cachefile] -C <pool | id> [device]

           Clears ZFS pool metadata on a specified inactive pool to  make  its
           device(s) available for use in new pools or by other filesystems.

           A  pool can be identified by its name or the numeric identifier. If
           a device is specified, this  command  clears  the  pool's  metadata
            found only on the given device. If the -d option is not specified,
            this command searches for devices in the /dev/dsk directory.


           -c cachefile    Reads the configuration from  the  given  cachefile
                           that  was created with the cachefile pool property.
                           This cachefile is used  instead  of  searching  for
                           devices.


           -d path         Searches  for  devices  or  files  in  path. The -d
                           option can be specified multiple times. This option
                           is incompatible with the -c option.




       zpool label -C <device>

           Clears  ZFS  pool  metadata on a specified device. A device must be
           specified by using its full path and name. The device must not be a
           part of an active pool, otherwise an error message is printed out.


       zpool label [-d path ... | -c cachefile] -R <pool | id> [device]

           This is an undo operation for the zpool label -C command. It recov‐
           ers ZFS metadata for a specific pool and, if enough of  the  pool's
           devices  are  restored  this way, makes it possible to reimport the
           pool. If a device is specified, it recovers metadata only  on  this
           device.

            A pool can be identified by its name or the numeric identifier. If
            the -d option is not specified, this command searches for devices
            in the /dev/dsk directory. A device must be specified using its
            full path and name.


           -c cachefile    Reads the configuration from  the  given  cachefile
                           that  was created with the cachefile pool property.
                           This cachefile is used  instead  of  searching  for
                           devices.


           -d path         Searches  for  devices or files in path, where path
                           can be a directory or a device path. The -d  option
                           can be specified multiple times.




       zpool label -R <device>

            Recovers all recoverable ZFS pool metadata found on the specified
            device. A device must be specified using its full path and name.
            The device must not be a part of an active pool, otherwise an error
            message is printed out.


       zpool list [-H] [-o props[,...]] [-T d|u] [pool] ... [interval[count]]

           Lists the given pools along with a health status and  space  usage.
           When given no arguments, all pools in the system are listed.

           When  given  an  interval, the status and space usage are displayed
           every interval seconds until Ctrl-C is entered. If count is  speci‐
           fied, the command exits after count reports are displayed.

           -H

               Scripted mode. Do not display headers, and separate fields by a
               single tab instead of arbitrary space.


           -o props

               Comma-separated list of properties to display. See the "Proper‐
               ties"  section for a list of valid properties. The default list
               is name, size, allocated, free, capacity, health, altroot.


           -T d|u

               Display a time stamp.

               Specify d for standard date format. See date(1). Specify u  for
               a  printed  representation  of  the  internal representation of
               time. See time(2).
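
           For example, the following invocation lists only the name, size,
           and health of each pool in scripted, tab-separated form:

             # zpool list -H -o name,size,health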



       zpool monitor -t provider  [-T d|u] [[-p] -o  field[,...]] [pool] ...
       [interval [count]]

           Displays  status or progress information for the given pools. If no
           pool is entered, information for all pools is displayed. When given
           an  interval,  the  information  is  printed every interval seconds
           until Ctrl-C is pressed. If count is specified, the  command  exits
           after count reports are printed.


           -o field[,...]      Display only selected field(s).


           -p                  Display  using  a stable machine-parseable for‐
                               mat. For more information, see 'Parseable  Out‐
                               put Format', below.


           -t provider         Display data from the listed providers. Current
                               providers are send, receive (or recv), destroy,
                               scrub,  and  resilver.  An  up-to-date  list of
                               providers is available from 'zpool  help  moni‐
                               tor'.


           -T d|u              Display  a  time  stamp. Specify d for standard
                               date format.  See  date(1).  Specify  u  for  a
                               printed  representation  of the internal repre‐
                               sentation of time. See time(2).
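
           For example, the following invocation reports scrub progress for
           all pools every 10 seconds:

             # zpool monitor -t scrub 10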




       zpool offline [-t] pool  device ...

           Takes the specified physical device offline. While  the  device  is
           offline, no attempt is made to read or write to the device.

           This command is not applicable to cache devices.

           -t

               Temporary.  Upon  reboot, the specified physical device reverts
               to its previous state.
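
           For example, assuming the pool tank contains the device c0t1d0,
           the following invocation takes that device offline until the next
           reboot:

             # zpool offline -t tank c0t1d0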



       zpool online [-e] pool  device...

           Brings the specified physical device online.

           This command is not applicable to cache devices.

           -e

               Expand the device to use all available space. If the device  is
               part  of  a  mirror  or raidz then all devices must be expanded
               before the new space will become available to the pool.
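
           For example, assuming the device c0t1d0 in the pool tank has been
           replaced by a larger disk, the following invocation brings it
           online and expands it to use all available space:

             # zpool online -e tank c0t1d0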



       zpool reguid pool

           Change the guid of a specified pool. The new guid will be generated
           automatically.  The  command  will  fail  if the pool or any of its
           vdevs is not in state HEALTHY or if there are any  outstanding  FMA
           faults.


       zpool remove pool  device ...

           Begins the removal of the specified devices from the pool. This
           command supports removing hot spares, cache, log, meta and
           non-redundant data devices. A redundant log or data device can be
           removed by specifying the top-level mirror or raidz. Data devices
           that are part of a redundant configuration can be removed using
           the zpool detach command. This command accepts a list of devices
           to be removed; the devices in the list must all be of the same
           type, either data devices or non-data devices, not a mix.

           Removing a top-level data device migrates the data from the  device
           to  be removed to the remaining data devices in the pool. The zpool
           status command reports the progress of the remove  operation  until
           the resilvering completes.


       zpool remove -s pool

           An in-progress top-level data device removal operation may be
           cancelled with zpool remove -s before it completes.


           -s    Cancels the removal of a top-level data device and returns
                 the pool to its original state.
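
           For example, the following invocation cancels an in-progress
           device removal from the pool tank:

             # zpool remove -s tank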




       zpool replace [-f] pool  old_device [new_device]

           Replaces old_device with new_device. This is equivalent to
           attaching new_device, waiting for it to resilver, and then
           detaching old_device.

           The size of new_device must be greater than or equal to the minimum
           size of all the devices in a mirror or raidz configuration.

           new_device is required if the pool is not redundant. If  new_device
           is  not specified, it defaults to old_device. This form of replace‐
           ment is useful after an existing disk has failed and has been phys‐
           ically  replaced.  In  this  case,  the  new disk may have the same
           /dev/dsk path as the old device, even though it is actually a  dif‐
           ferent disk. ZFS recognizes this.

           In  zpool  status  output,  the  old_device is shown under the word
           replacing with the string /old appended to it.  Once  the  resilver
           completes,  both the replacing and the old_device are automatically
           removed. If the new device fails before the resilver completes  and
           a  third device is installed in its place, then both failed devices
           will show up with /old  appended,  and  the  resilver  starts  over
           again.  After the resilver completes, both /old devices are removed
           along with the word replacing.

           -f

               Forces use of new_device, even if it appears to be in use.  Not
               all devices can be overridden in this manner.
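
           For example, assuming the failed disk c0t0d0 in the pool tank has
           been physically replaced by a new disk at the same path, the
           following invocation starts the resilver:

             # zpool replace tank c0t0d0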



       zpool scrub [-s] pool ...

           Begins  a scrub. The scrub examines all data in the specified pools
           to verify that it checksums correctly. For  replicated  (mirror  or
           raidz)  devices,  ZFS  automatically  repairs any damage discovered
           during the scrub. The zpool status command reports the progress  of
           the scrub and summarizes the results of the scrub upon completion.

           Scrubbing  and resilvering are very similar operations. The differ‐
           ence is that resilvering only examines data that ZFS  knows  to  be
           out  of  date (for example, when attaching a new device to a mirror
           or replacing an existing device), whereas  scrubbing  examines  all
           data to discover silent errors due to hardware faults or disk fail‐
           ure.

           Because scrubbing and resilvering are I/O-intensive operations, ZFS
           allows  only  one  at  a time. If a scrub is already in progress, a
           subsequent zpool scrub returns an error, with  the  advice  to  use
           zpool  scrub   -s  to cancel the current scrub. If a resilver is in
           progress, ZFS does not allow a scrub to be started until the resil‐
           ver completes.

           -s

               Stop scrubbing.
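
           For example, the following invocations start a scrub of the pool
           tank and later cancel it:

             # zpool scrub tank
             # zpool scrub -s tank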



       zpool set property=value pool

           Sets the given property on the specified pool. See the "Properties"
           section for more information on what  properties  can  be  set  and
           acceptable values.
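
           For example, the following invocation enables automatic expansion
           on the pool tank:

             # zpool set autoexpand=on tank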


       zpool split [-n [-l]] [-R altroot] [-o mntopts] [-o property=value]
        pool newpool [device ...]

           Splits off one disk from each mirrored top-level vdev in a pool and
           creates a new pool from the split-off disks. The original pool must
           be made up of one or more mirrors and must not be in the process of
           resilvering.  The  split subcommand chooses the last device in each
           mirror vdev unless overridden by a device specification on the com‐
           mand line.

           When   using  a  device  argument,  split  includes  the  specified
           device(s) in a new pool and, should any devices remain unspecified,
           assigns  the  last  device  in each mirror vdev to that pool, as it
           does normally. If you are uncertain about the outcome  of  a  split
           command,  use the -n ("dry-run") option to ensure your command will
           have the effect you intend.

           -n

               Displays the configuration that would be created without  actu‐
               ally splitting the pool. The actual pool split could still fail
               due to insufficient privileges or device status.


           -l

               If possible, have  -n  display  the  configuration  in  current
               /dev/chassis location form.


           -R altroot

               Automatically  import  the  newly created pool after splitting,
               using the specified altroot parameter for the new pool's alter‐
               nate root. See the altroot description in the "Properties" sec‐
               tion, above.


           -o mntopts

               Comma-separated list of mount  options  to  use  when  mounting
               datasets  within  the  pool.  See  zfs(8)  for a description of
               dataset properties and mount options. Valid only in conjunction
               with the -R option.


           -o property=value

               Sets  the  specified property on the new pool. See the "Proper‐
               ties" section, above, for more  information  on  the  available
               pool properties.
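
           For example, the following invocation shows the configuration
           that splitting the pool tank into a new pool, here called tank2,
           would create, without actually performing the split:

             # zpool split -n tank tank2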



       zpool status [-s all | field[,...]] [-S section[,...]] [-l] [-v] [-x]
       [-T d|u] [pool] ...  [interval[count]]

           Displays the detailed health status for the given pools. If no pool
           is  specified,  then  the status of each pool in the system is dis‐
           played. For more information on pool and  device  health,  see  the
           "Device Failure and Recovery" section.

           When  given  an  interval, the status and space usage are displayed
           every interval seconds until Ctrl-C is entered. If count is  speci‐
           fied, the command exits after count reports are displayed.

           If  a  scrub  or  resilver is in progress, this command reports the
           percentage done and the estimated time to completion. Both of these
           are  only  approximate,  because the amount of data in the pool and
           the other workloads on the system can change.

           -s all | field[,...]

                A comma-separated list of device status property fields to
                display. The available status fields are: name, state, read,
                write, checksum, repair, resilver, slow, allocunit, psize,
                lsize, alloc, free and pctfull. See the 'Device status
                properties' section for more details.

                When used in combination with -S, the 'config' section is
                implicitly included in the sections displayed.

                Without the -s option, the default fields (name, state, read,
                write, checksum) are displayed.


           -S section[,...]

                A comma-separated list of sections to display. The available
                status sections are: pool, id, state, scan, config, dedup,
                errors.

                Without the -S option, all available sections are displayed.


           -l

               If possible, display vdev status in current /dev/chassis  loca‐
               tion form.


           -x

               Display status only for pools that are exhibiting errors or are
               otherwise unavailable.


           -v

               Displays verbose data error information, printing  out  a  com‐
               plete  list  of  all  data  errors since the last complete pool
               scrub.


           -T d|u

               Display a time stamp.

               Specify d for standard date format. See date(1). Specify u  for
               a  printed  representation  of  the  internal representation of
               time. See time(2).
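
           For example, the following invocation displays status only for
           pools that are exhibiting errors or are otherwise unavailable:

             # zpool status -x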



       zpool upgrade

           Identifies a pool's on-disk  version,  which  determines  available
           pool  features  in  the currently running software release. You can
           continue to use older pool versions, but some features might not be
           available.  A  pool  can be upgraded by using the zpool upgrade  -a
           command. You will not be able to access a pool of a  later  version
           on a system that runs an earlier software version.


       zpool upgrade -v

           Displays  ZFS  pool versions supported by the current software. The
           current ZFS pool versions and all previous supported  versions  are
           displayed,  along with an explanation of the features provided with
           each version.


       zpool upgrade [-n] [-V  version [-f]] -a | pool ...

           Upgrades the specified pool to the latest on-disk version. If  this
           command  reveals  that  a  pool is out-of-date, the pool can subse‐
           quently be upgraded using the zpool upgrade   -a  command.  A  pool
           that  is  upgraded  will not be accessible on a system that runs an
           earlier software release.


           -a

               Upgrades all pools.


           -n

               Report what would be done without actually upgrading any pools.


           -f

               Force the upgrade even if it makes more boot  environments  un-
               bootable.


           -V version

               Upgrades  to  the  specified version, which must be higher than
               the current version. If the -V flag is not specified, the  pool
               is upgraded to the most recent version.

           If a pool is bootable, zpool upgrade verifies that the upgrade
           will not make any more boot environments un-bootable. If it
           would, those boot environments and their currently supported pool
           versions are listed. To force zpool upgrade to perform the
           upgrade and make those boot environments un-bootable, the -V and
           -f flags must both be used. zpool upgrade never allows an upgrade
           to make the active boot environment un-bootable.

           If a bootable pool is listed, or the -a flag is present, and the
           pool can be updated without making any more boot environments
           un-bootable, the upgrade is performed unless the -n option is
           given.
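
           For example, the following invocation reports which pools would
           be upgraded, without actually upgrading any of them:

             # zpool upgrade -n -a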


   Display Fields
       The fields are  different  for  different  providers.  If  a  field  is
       selected that is not supported by a provider an error is returned.


       DONE        Amount of data completed or processed so far.


       OTHER       Provider  dependent. Provides extra information such as the
                   current item being processed or the current  state  of  the
                   task. For example, in a zfs send operation this value might
                   reflect the individual dataset or snapshot currently  being
                   sent.  The  specific  values  reported  as OTHER are not an
                   interface and may change without notice.


       PCTDONE     Percentage of data processed.


        POOL        The pool from which the information was retrieved.


       PROVIDER    Task providing  the  information.  One  of  send,  receive,
                   destroy, scrub, or resilver.


       SPEED       Units  per  second. Usually bytes, but is dependent on what
                   unit the data provider uses.


       STRTTIME    Time the provider started on the displayed task.


       TAG         A TAG disambiguates whole operations. It is unique  at  any
                   one  time,  but values can repeat in subsequent operations.
                   For instance, two simultaneous sends would  have  different
                   TAGs even if sending the same dataset.


        TIMELEFT    Estimated time remaining until the task completes,
                    calculated from the rate at which data is being
                    processed.


       TIMESTMP    Time the monitored data snapshot was taken.


       TOTAL       Estimate of total amount of data to be processed.



   Parseable Output Format
        The "zpool monitor" command provides a -p option that displays
        output in a machine-parseable format. The output format is one or
        more lines of colon (:) delimited fields. Output includes only
        those fields requested by means of the -o option, in the order
        requested. Note that the -o all option, which displays all fields,
        cannot be used with the parseable output option.


       When you request multiple fields,  any  literal  colon  characters  are
       escaped  by  a  backslash  (\)  before being output. Similarly, literal
       backslash characters are also  escaped  (\\).  This  escape  format  is
       parseable  by  using shell read(1) functions with the environment vari‐
       able set as IFS=:. Note that escaping is not done when you request only
       a single field.
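
        For example, a sketch of how such output might be consumed from the
        shell, using read with IFS set to a colon:

          # zpool monitor -p -o pool,pctdone -t scrub | \
            while IFS=: read pool pct; do
                printf '%s is %s%% done\n' "$pool" "$pct"
            done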

EXAMPLES
       Example 1 Creating a RAID-Z Storage Pool



       The following command creates a pool with a single raidz top-level vdev
       that consists of six disks.


         # zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0



       Example 2 Creating a Mirrored Storage Pool



       The following command creates a pool with two mirrors, where each  mir‐
       ror contains two disks.


         # zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0




       Alternatively,  whole  disks  can be specified using /dev/chassis paths
       describing the disk's current location.


         # zpool create tank \
             mirror \
                 /dev/chassis/RACK29.U01-04/DISK_00/disk \
                 /dev/chassis/RACK29.U05-08/DISK_00/disk \
             mirror \
                 /dev/chassis/RACK29.U01-04/DISK_01/disk \
                 /dev/chassis/RACK29.U05-08/DISK_01/disk



       Example 3 Adding a Mirror to a ZFS Storage Pool



       The following command adds two mirrored disks to the pool tank,  assum‐
       ing  the  pool  is  already  made up of two-way mirrors. The additional
       space is immediately available to any datasets within the pool.


         # zpool add tank mirror c1t0d0 c1t1d0



       Example 4 Listing Available ZFS Storage Pools



       The following command lists all available pools on the system.


         # zpool list
         NAME   SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
         pool   278G  4.19G  274G   1%  1.00x  ONLINE  -
         rpool  278G  78.2G  200G  28%  1.00x  ONLINE  -



       Example 5 Listing All Properties for a Pool



       The following command lists all the properties for a pool.


         % zpool get all pool
         NAME  PROPERTY       VALUE                SOURCE
         pool  allocated      4.19G                -
         pool  altroot        -                    default
         pool  autoexpand     off                  default
         pool  autoreplace    off                  default
         pool  bootfs         -                    default
         pool  cachefile      -                    default
         pool  capacity       1%                   -
         pool  dedupditto     0                    default
         pool  dedupratio     1.00x                -
         pool  delegation     on                   default
         pool  failmode       wait                 default
         pool  free           274G                 -
         pool  guid           1907687796174423256  -
         pool  health         ONLINE               -
         pool  lastscrub      Jan_21               local
         pool  listshares     off                  local
         pool  listsnapshots  off                  default
         pool  readonly       off                  -
         pool  scrubinterval  2m                   local
         pool  size           278G                 -
         pool  version        34                   default



       Example 6 Destroying a ZFS Storage Pool



       The following command destroys the pool "tank" and  any  datasets  con‐
       tained within.


         # zpool destroy -f tank



       Example 7 Exporting a ZFS Storage Pool



       The following command exports the devices in pool tank so that they can
       be relocated or later imported.


         # zpool export tank



       Example 8 Importing a ZFS Storage Pool



       The following command displays available pools, and  then  imports  the
       pool "tank" for use on the system.



       The results from this command are similar to the following:


         # zpool import
           pool: tank
             id: 7678868315469843843
          state: ONLINE
         action: The pool can be imported using its name or numeric identifier.
         config:

                       tank  ONLINE
                   mirror-0  ONLINE
                     c1t2d0  ONLINE
                     c1t3d0  ONLINE

         # zpool import tank



       Example 9 Upgrading All ZFS Storage Pools to the Current Version



        The following command upgrades all ZFS storage pools to the current
        version of the software.


         # zpool upgrade -a
         This system is currently running ZFS pool version 22.

         All pools are formatted using this version.



       Example 10 Managing Hot Spares



       The following command creates a new pool with an available hot spare:


         # zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0




       If one of the disks were to fail, the pool  would  be  reduced  to  the
       degraded  state.  The failed device can be replaced using the following
       command:


         # zpool replace tank c0t0d0 c0t3d0




       After the device  has  been  resilvered,  the  spare  is  automatically
       detached  and  is  made  available  should another device fail. The hot
       spare can be permanently removed from the pool using the following com‐
       mand:


         # zpool remove tank c0t2d0



       Example 11 Creating a ZFS Pool with Separate Mirrored Log Devices



        The following command creates a ZFS storage pool consisting of two
        two-way mirrors and mirrored log devices:


         # zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \
            c4d0 c5d0



       Example 12 Adding Cache Devices to a ZFS Pool



       The following command adds two disks for use as cache devices to a  ZFS
       storage pool:


         # zpool add pool cache c2d0 c3d0




       Once  added,  the  cache  devices gradually fill with content from main
       memory. Depending on the size of your cache devices, it could take over
        an hour for them to fill. Capacity and reads can be monitored using
        the iostat subcommand as follows:


         # zpool iostat -v pool 5



       Example 13 Adding a Mirrored Meta Device To a ZFS Pool



       The following command adds a two-way mirrored  meta  device  to  a  ZFS
       storage pool:


         # zpool add pool meta mirror c2d0 c3d0



       Example 14 Removing a Mirrored Log Device



       Given  the configuration shown immediately below, the following command
       removes the mirrored log device mirror-2 in the pool tank.


            pool: tank
           state: ONLINE
           scrub: none requested
         config:

                  NAME        STATE     READ WRITE CKSUM
                  tank        ONLINE       0     0     0
                    mirror-0  ONLINE       0     0     0
                      c6t0d0  ONLINE       0     0     0
                      c6t1d0  ONLINE       0     0     0
                    mirror-1  ONLINE       0     0     0
                      c6t2d0  ONLINE       0     0     0
                      c6t3d0  ONLINE       0     0     0
                  logs
                    mirror-2  ONLINE       0     0     0
                      c4t0d0  ONLINE       0     0     0
                      c4t1d0  ONLINE       0     0     0



         # zpool remove tank mirror-2



       Example 15 Recovering a Faulted ZFS Pool



       If a pool is faulted but recoverable, a message indicating  this  state
       is  provided  by  zpool  status  if  the pool was cached (see cachefile
       above), or as part of the error output from a failed  zpool  import  of
       the pool.



       Recover a cached pool with the zpool clear command:


         # zpool clear -F data
         Pool data returned to its state as of Thu Jun 07 10:50:35 2012.
         Discarded approximately 29 seconds of transactions.




       If  the  pool  configuration  was not cached, use zpool import with the
       recovery mode flag:


         # zpool import -F data
         Pool data returned to its state as of Thu Jun 07 10:50:35 2012.
         Discarded approximately 29 seconds of transactions.



       Example 16 Importing a ZFS Pool with a Missing Log Device



       The following examples illustrate attempts to  import  a  pool  with  a
       missing log device. The -m option is used to complete the import opera‐
       tion.



       Additional devices are known to be part  of  this  pool,  though  their
       exact configuration cannot be determined.


         # zpool import tank
         The devices below are missing, use '-m' to import the pool anyway:
         c5t0d0 [log]

         cannot import 'tank': one or more devices is currently unavailable

         # zpool import -m tank
         # zpool status tank
            pool: tank
           state: DEGRADED
          status: One or more devices could not be opened.  Sufficient replicas
                  exist for the pool to continue functioning in a degraded state.
         action: Attach the missing device and online it using 'zpool online'.
             see: http://www.support.oracle.com/msg/ZFS-8000-2Q
            scan: none requested
         config:

                  NAME                   STATE     READ WRITE CKSUM
                  tank                   DEGRADED     0     0     0
                    c7t0d0               ONLINE       0     0     0
                  logs
                     1693927398582730352  UNAVAIL      0     0     0  was /dev/dsk/c5t0d0

         errors: No known data errors




       The  following  example  shows how to import a pool with a missing mir‐
       rored log device:


         # zpool import tank
          The devices below are missing, use '-m' to import the pool anyway:
         mirror-1 [log]
         c5t0d0
         c5t1d0

         # zpool import -m tank

         # zpool status tank
            pool: tank
           state: DEGRADED
         status: One or more devices could not be opened.  Sufficient replicas
         exist for the pool to continue functioning in a degraded state.
         action: Attach the missing device and online it using 'zpool online'.
             see: http://www.support.oracle.com/msg/ZFS-8000-2Q
             scan: none requested
         config:

                  NAME                      STATE     READ WRITE CKSUM
                  tank                      DEGRADED     0     0     0
                    c7t0d0                  ONLINE       0     0     0
                  logs
                     mirror-1                UNAVAIL      0     0     0  insufficient replicas
                       46385995713041169     UNAVAIL      0     0     0  was /dev/dsk/c5t0d0
                       13821442324672734438  UNAVAIL      0     0     0  was /dev/dsk/c5t1d0

         errors: No known data errors



        Example 17 Importing a Pool by a Specific Path



       The following command imports the pool tank by identifying  the  pool's
       specific  device  paths,  /dev/dsk/c9t9d9  and /dev/dsk/c9t9d8, in this
       example.


         # zpool import -d /dev/dsk/c9t9d9s0 /dev/dsk/c9t9d8s0 tank




        An existing limitation is that even though this pool is composed of
        whole disks, the command must include the specific device's slice
        identifier.


        Example 18 Removing Two Mirrored Data Devices



        Given the configuration shown below, the following command removes
        the mirrored data devices mirror-0 and mirror-1 in the pool tank.



              pool: tank
             state: ONLINE
             scrub: none requested
            config:

                    NAME        STATE     READ WRITE CKSUM
                    tank        ONLINE       0     0     0
                      mirror-0  ONLINE       0     0     0
                        c6t0d0  ONLINE       0     0     0
                        c6t1d0  ONLINE       0     0     0
                      mirror-1  ONLINE       0     0     0
                        c6t2d0  ONLINE       0     0     0
                        c6t3d0  ONLINE       0     0     0
                      mirror-2  ONLINE       0     0     0
                        c6t4d0  ONLINE       0     0     0
                        c6t5d0  ONLINE       0     0     0

           # zpool remove tank mirror-0 mirror-1

         zpool status shows mirror-0 and mirror-1 are being removed.

           # zpool status tank
              pool: tank
             state: ONLINE
            status: One or more devices is currently being removed.
            action: Wait for the resilver to complete.
                    Run 'zpool status -v' to see device specific
                    details.
              scan: resilver  in  progress  since  Mon Jul 7 18:19:35
                    2014
                    16.7G scanned
                    884M  resilvered at 52.6M/s,  9.94% done, 0h1m to
                    go
            config:

                    NAME         STATE    READ WRITE CKSUM
                    tank        ONLINE      0     0     0
                      mirror-0  REMOVING    0     0     0
                        c6t0d0  REMOVING    0     0     0
                        c6t1d0  REMOVING    0     0     0
                      mirror-1  REMOVING    0     0     0
                        c6t2d0  REMOVING    0     0     0
                        c6t3d0  REMOVING    0     0     0
                      mirror-2  ONLINE      0     0     0
                        c6t4d0  ONLINE      0     0     0
                        c6t5d0  ONLINE      0     0     0

            errors: No known data errors

          After the resilvering completes, mirror-0 and mirror-1 are
          removed from the pool configuration and the pool returns to
          ONLINE state.

           # zpool status tank
              pool: tank
             state: ONLINE
              scan: resilvered 6.67G in 0h2m with 0 errors on Mon Jul
                    7 18:22:10 2014
            config:

                    NAME         STATE     READ WRITE CKSUM
                    tank         ONLINE       0     0     0
                      mirror-2   ONLINE       0     0     0
                        c6t4d0  ONLINE       0     0     0
                        c6t5d0  ONLINE       0     0     0

            errors: No known data errors




       Example 19 Obtaining Parseable Output



       The  following command is used to obtain parseable output and will pro‐
       vide one interval:



         # zpool monitor -p -o pool,pctdone,other -t send poolA poolC
         poolA:20.4:poolA/fs2/team2@fs2_all
         poolA:0.0:poolA/fs2/team2@all
         poolA:28.6:poolA/fs\:1/team3@fs1_all
         poolC:33.3:poolC/fs1/team2@fs1_all
         poolC:50.0:poolC/fs2/team1@fs2_all



       Example 20 Removing zpool Metadata



       The following command removes the zpool metadata:



         # zpool import
           pool: tank
             id: 16467356871648988132
          state: ONLINE
         action: The pool can be imported using its name or numeric identifier.
         config:

                 tank         ONLINE
                   raidz1-0   ONLINE
                     c7t8d0   ONLINE
                     c7t9d0   ONLINE
                     c7t10d0  ONLINE

         # zpool label -C tank
         # zpool import
         cannot import: no pools found




       Example 21 Recovering zpool Metadata



       The following command recovers the zpool metadata:



          # zpool import -D
            pool: tank
              id: 16467356871648988132
           state: CLEARED
          status: The pool has cleared device(s) and therefore it is not possible to determine its
                  exact configuration. The configuration presented below is only tentative.
          action: You can try using 'zpool label -R' to recover the pool but some
                  devices might be already used by another pool or be unavailable.
          config:

                  tank         CLEARED
                    raidz1-0   CLEARED
                      c7t8d0   CLEARED
                      c7t9d0   CLEARED
                      c7t10d0  CLEARED

          # zpool label -R tank
          # zpool import




       Example 22 Removing zpool Metadata from a Specified Device



       The following command removes zpool metadata from a specified device:



         # zpool status tank
           pool: tank
          state: ONLINE
           scan: none requested
         config:

                 NAME        STATE     READ WRITE CKSUM
                 tank        ONLINE       0     0     0
                   mirror-0  ONLINE       0     0     0
                     c2t1d0  ONLINE       0     0     0
                     c2t2d0  ONLINE       0     0     0

         errors: No known data errors
          # zpool export tank
         # zpool label -C /dev/dsk/c2t1d0s0
         # zpool import
         no pools available to import




       Example 23 Recovering zpool Metadata on a Specified Device



       The following command recovers zpool metadata on a specified device:



         # zpool import -D
           pool: tank
             id: 413554598802822140
          state: CLEARED (EXPORTED)
         status: The pool has cleared device(s) and therefore it is not possible to determine its
                 exact configuration. The configuration presented below is only tentative.
         action: You can try using 'zpool label -R' to recover the pool but some devices might be
                 already used by another pool or be unavailable.
         config:

                 tank        CLEARED
                   mirror-0  CLEARED
                     c2t1d0  CLEARED
                     c2t2d0  ONLINE

         # zpool label -R /dev/dsk/c2t1d0s0
         # zpool import tank




        Example 24 Shortening the Syntax of vdevs



       The following example shows how to shorten the syntax of  vdevs  to  be
       included in a pool by using {}.



          # zpool create tank raidz2 /test/c{1,2,3,4,5}disk
          # zpool status tank
            pool: tank
           state: ONLINE
            scan: none requested
          config:

               NAME              STATE     READ WRITE CKSUM
               tank              ONLINE       0     0     0
                 raidz2-0        ONLINE       0     0     0
                   /test/c1disk  ONLINE       0     0     0
                   /test/c2disk  ONLINE       0     0     0
                   /test/c3disk  ONLINE       0     0     0
                   /test/c4disk  ONLINE       0     0     0
                   /test/c5disk  ONLINE       0     0     0



       Example 25 Removing a raidz Data Device



       Given  the configuration shown below, the following command removes the
       data device raidz1-0 from the pool tank.



              pool: tank
             state: ONLINE
             scrub: none requested
            config:

                    NAME        STATE     READ WRITE CKSUM
                    tank        ONLINE       0     0     0
                      raidz1-0  ONLINE       0     0     0
                        c6t0d0  ONLINE       0     0     0
                        c6t1d0  ONLINE       0     0     0
                      raidz1-1  ONLINE       0     0     0
                        c6t2d0  ONLINE       0     0     0
                        c6t3d0  ONLINE       0     0     0

           # zpool remove tank raidz1-0

         zpool status shows raidz1-0 is being removed.

           # zpool status tank
              pool: tank
             state: ONLINE
            status: One or more devices is currently being removed.
            action: Wait for the resilver to complete.
                    Run 'zpool status -v' to see device specific
                    details.
              scan: resilver  in  progress  since  Mon Jul 7 18:19:35
                    2014
                    16.7G scanned
                    884M  resilvered at 52.6M/s,  9.94% done, 0h1m to
                    go
            config:

                    NAME        STATE    READ WRITE CKSUM
                    tank        ONLINE      0     0     0
                      raidz1-0  REMOVING    0     0     0
                        c6t0d0  REMOVING    0     0     0
                        c6t1d0  REMOVING    0     0     0
                      raidz1-1  ONLINE      0     0     0
                        c6t2d0  ONLINE      0     0     0
                        c6t3d0  ONLINE      0     0     0

            errors: No known data errors

          After the resilvering completes, raidz1-0 is removed from the pool
          configuration and the pool returns to ONLINE state.

           # zpool status tank
              pool: tank
             state: ONLINE
              scan: resilvered 6.67G in 0h2m with 0 errors on Mon Jul
                    7 18:22:10 2014
            config:

                    NAME        STATE     READ WRITE CKSUM
                    tank        ONLINE       0     0     0
                       raidz1-1  ONLINE       0     0     0
                        c6t2d0  ONLINE       0     0     0
                        c6t3d0  ONLINE       0     0     0

            errors: No known data errors



       Example 26 Removing a non-redundant Data Device and a meta device



       Given the configuration shown below, the following command removes  the
       non-redundant  data  device c6t0d0 and meta device c6t2d0 from the pool
       tank.



           # zpool status tank
             pool: tank
            state: ONLINE
             scan: none requested
           config:

                   NAME      STATE      READ WRITE CKSUM
                   tank      ONLINE        0     0     0
                     c6t0d0  ONLINE        0     0     0
                     c6t1d0  ONLINE        0     0     0
                   metas
                     c6t2d0  ONLINE        0     0     0
                     c6t3d0  ONLINE        0     0     0
                   logs
                     c6t4d0  ONLINE        0     0     0
                   cache
                     c6t5d0  ONLINE        0     0     0
                   spares
                     c6t6d0  AVAIL

           errors: No known data errors

           # zpool remove tank c6t0d0 c6t2d0

          zpool status shows c6t0d0 and c6t2d0 are being removed.

             pool: tank
            state: ONLINE
           status: One or more devices are being removed.
           action: Wait for the resilver to complete.
                   Run 'zpool status -v' to see device specific details.
             scan: resilver in progress since Fri Jan 19 10:33:36 2018
               2.38G scanned out of 2.48G at 144M/s, 1s to go
               0 resilvered
           config:

                   NAME      STATE      READ WRITE CKSUM
                   tank      ONLINE        0     0     0
                     c6t0d0  REMOVING      0     0     0
                     c6t1d0  ONLINE        0     0     0
                   metas
                     c6t2d0  REMOVING      0     0     0
                     c6t3d0  ONLINE        0     0     0
                   logs
                     c6t4d0  ONLINE        0     0     0
                   cache
                     c6t5d0  ONLINE        0     0     0
                   spares
                     c6t6d0  AVAIL

           errors: No known data errors

          After the resilver completes, c6t0d0 and c6t2d0 are removed from
          the pool configuration and the pool returns to ONLINE state.

           # zpool status tank
             pool: tank
            state: ONLINE
             scan: resilvered 1.27M in 1s with 0 errors on Fri Jan 19 09:37:41 2018
           config:

                   NAME      STATE      READ WRITE CKSUM
                   tank      ONLINE        0     0     0
                     c6t1d0  ONLINE        0     0     0
                   metas
                     c6t3d0  ONLINE        0     0     0
                   logs
                     c6t4d0  ONLINE        0     0     0
                   cache
                     c6t5d0  ONLINE        0     0     0
                   spares
                     c6t6d0  AVAIL

           errors: No known data errors



        Example 27 Changing the guid of an Existing Pool



       The following command changes the guid of an existing pool to a  random
       number.


         # zpool get guid tank
         NAME   PROPERTY  VALUE                 SOURCE
          tank   guid      11540845105265937039  -

         # zpool reguid tank

         # zpool get guid tank
         NAME   PROPERTY  VALUE                SOURCE
         tank   guid      3172738027577799950  -



EXIT STATUS
       The following exit values are returned:

       0

           Successful completion.


       1

           An error occurred.


       2

           Invalid command line options were specified.


ATTRIBUTES
       See attributes(7) for descriptions of the following attributes:


       ATTRIBUTE TYPE            ATTRIBUTE VALUE
       Availability              system/file-system/zfs
       Interface Stability       Committed


SEE ALSO
       ps(1), SDC(4), attributes(7), datasets(7), beadm(8), zfs(8)

WARNINGS
       To make more space available to a zpool by expanding the capacity of
       the underlying LUN, do not use the format command to read the new
       size of the LUN and relabel it. Instead, use the following procedure:

           1.     Run zpool set autoexpand=on <zpool> once, and leave  autoex‐
                  pand=on for the zpool all the time.


           2.     Expand  the  size  of  the  LUN  as  desired. The zpool will
                  reflect the size of the LUN automatically.



NOTES
       Each ZFS storage pool has an associated process, zpool-poolname,  visi‐
       ble  in  such tools as ps(1). A user has no interaction with these pro‐
       cesses. For more information, see the SDC(4) man page.



Oracle Solaris 11.4               11 May 2021                         zpool(8)