3.0
2020-01-09T20:44:09Z
Templates
ZFS on Linux
ZFS on Linux
Templates
ZFS
ZFS dataset
ZFS vdev
ZFS zpool
-
ZFS on Linux version
7
0
vfs.file.contents[/sys/module/zfs/version]
3600
30
0
0
4
0
0
0
0
1
0
0
0
ZFS
-
ZFS ARC stat "$1"
7
0
zfs.arcstats[arc_dnode_limit]
60
30
365
0
3
B
0
0
0
0
1
0
0
0
ZFS
-
ZFS ARC stat "$1"
7
0
zfs.arcstats[arc_meta_limit]
60
30
365
0
3
B
0
0
0
0
1
0
0
0
ZFS
-
ZFS ARC stat "$1"
7
0
zfs.arcstats[arc_meta_used]
60
30
365
0
3
B
0
0
0
0
1
0
0
arc_meta_used = hdr_size + metadata_size + dbuf_size + dnode_size + bonus_size
0
ZFS
-
ZFS ARC stat "$1"
7
0
zfs.arcstats[bonus_size]
60
30
365
0
3
B
0
0
0
0
1
0
0
0
ZFS
-
ZFS ARC maximum size
7
0
zfs.arcstats[c_max]
60
30
365
0
3
B
0
0
0
0
1
0
0
0
ZFS
-
ZFS ARC minimum size
7
0
zfs.arcstats[c_min]
60
30
365
0
3
B
0
0
0
0
1
0
0
0
ZFS
-
ZFS ARC stat "$1"
7
0
zfs.arcstats[data_size]
60
30
365
0
3
B
0
0
0
0
1
0
0
0
ZFS
-
ZFS ARC stat "$1"
7
0
zfs.arcstats[dbuf_size]
60
30
365
0
3
B
0
0
0
0
1
0
0
0
ZFS
-
ZFS ARC stat "$1"
7
0
zfs.arcstats[dnode_size]
60
30
365
0
3
B
0
0
0
0
1
0
0
0
ZFS
-
ZFS ARC stat "$1"
7
0
zfs.arcstats[hdr_size]
60
30
365
0
3
B
0
0
0
0
1
0
0
0
ZFS
-
ZFS ARC stat "$1"
7
0
zfs.arcstats[hits]
60
30
365
0
3
1
0
0
0
1
0
0
0
ZFS
-
ZFS ARC stat "$1"
7
0
zfs.arcstats[metadata_size]
60
30
365
0
3
B
0
0
0
0
1
0
0
0
ZFS
-
ZFS ARC stat "$1"
7
0
zfs.arcstats[mfu_hits]
60
30
365
0
3
1
0
0
0
1
0
0
0
ZFS
-
ZFS ARC stat "$1"
7
0
zfs.arcstats[mfu_size]
60
30
365
0
3
B
0
0
0
0
1
0
0
0
ZFS
-
ZFS ARC stat "$1"
7
0
zfs.arcstats[misses]
60
30
365
0
3
1
0
0
0
1
0
0
0
ZFS
-
ZFS ARC stat "$1"
7
0
zfs.arcstats[mru_hits]
60
30
365
0
3
1
0
0
0
1
0
0
0
ZFS
-
ZFS ARC stat "$1"
7
0
zfs.arcstats[mru_size]
60
30
365
0
3
B
0
0
0
0
1
0
0
0
ZFS
-
ZFS ARC current size
7
0
zfs.arcstats[size]
60
30
365
0
3
B
0
0
0
0
1
0
0
0
ZFS
-
ZFS ARC Cache Hit Ratio
15
0
zfs.arcstats_hit_ratio
60
30
365
0
0
%
0
0
0
0
1
100*(last(zfs.arcstats[hits])/(last(zfs.arcstats[hits])+last(zfs.arcstats[misses])))
0
0
0
ZFS
-
ZFS ARC total read
15
0
zfs.arcstats_total_read
60
30
365
0
3
B
0
0
0
0
1
last(zfs.arcstats[hits])+last(zfs.arcstats[misses])
0
0
0
ZFS
-
ZFS parameter $1
7
0
zfs.get.param[zfs_arc_dnode_limit_percent]
3600
30
365
0
3
%
0
0
0
0
1
0
0
0
ZFS
-
ZFS parameter $1
7
0
zfs.get.param[zfs_arc_meta_limit_percent]
3600
30
365
0
3
%
0
0
0
0
1
0
0
0
ZFS
ZFS dataset discovery
7
zfs.fileset.discovery
1800
0
0
0
0
0
1
{#FILESETNAME}
@ZFS fileset
8
A
{#FILESETNAME}
@not docker ZFS dataset
8
B
2
Discovers ZFS datasets. A dataset name must contain a "/"; otherwise it is a zpool.
ZFS dataset $1 compressratio
7
1
zfs.get.compressratio[{#FILESETNAME}]
1800
30
365
0
0
%
0
0
0
0
100
0
0
0
ZFS
ZFS dataset
ZFS dataset $1 $2
7
0
zfs.get.fsinfo[{#FILESETNAME},available]
300
30
365
0
3
B
0
0
0
0
1
0
0
0
ZFS
ZFS dataset
ZFS dataset $1 $2
7
0
zfs.get.fsinfo[{#FILESETNAME},referenced]
300
30
365
0
3
B
0
0
0
0
1
0
0
0
ZFS
ZFS dataset
ZFS dataset $1 $2
7
0
zfs.get.fsinfo[{#FILESETNAME},usedbychildren]
300
30
365
0
3
B
0
0
0
0
1
0
0
0
ZFS
ZFS dataset
ZFS dataset $1 $2
7
0
zfs.get.fsinfo[{#FILESETNAME},usedbydataset]
3600
30
365
0
3
B
0
0
0
0
1
0
0
0
ZFS
ZFS dataset
ZFS dataset $1 $2
7
0
zfs.get.fsinfo[{#FILESETNAME},usedbysnapshots]
300
30
365
0
3
B
0
0
0
0
1
0
0
0
ZFS
ZFS dataset
ZFS dataset $1 $2
7
0
zfs.get.fsinfo[{#FILESETNAME},used]
300
30
365
0
3
B
0
0
0
0
1
0
0
0
ZFS
ZFS dataset
( {ZFS on Linux:zfs.get.fsinfo[{#FILESETNAME},used].last()} / ( {ZFS on Linux:zfs.get.fsinfo[{#FILESETNAME},available].last()} + {ZFS on Linux:zfs.get.fsinfo[{#FILESETNAME},used].last()} ) ) > ({$ZFS_AVERAGE_ALERT}/100)
More than {$ZFS_AVERAGE_ALERT}% used on dataset {#FILESETNAME} on {HOST.NAME}
0
3
0
More than {$ZFS_HIGH_ALERT}% used on dataset {#FILESETNAME} on {HOST.NAME}
( {ZFS on Linux:zfs.get.fsinfo[{#FILESETNAME},used].last()} / ( {ZFS on Linux:zfs.get.fsinfo[{#FILESETNAME},available].last()} + {ZFS on Linux:zfs.get.fsinfo[{#FILESETNAME},used].last()} ) ) > ({$ZFS_HIGH_ALERT}/100)
( {ZFS on Linux:zfs.get.fsinfo[{#FILESETNAME},used].last()} / ( {ZFS on Linux:zfs.get.fsinfo[{#FILESETNAME},available].last()} + {ZFS on Linux:zfs.get.fsinfo[{#FILESETNAME},used].last()} ) ) > ({$ZFS_DISASTER_ALERT}/100)
More than {$ZFS_DISASTER_ALERT}% used on dataset {#FILESETNAME} on {HOST.NAME}
0
5
0
( {ZFS on Linux:zfs.get.fsinfo[{#FILESETNAME},used].last()} / ( {ZFS on Linux:zfs.get.fsinfo[{#FILESETNAME},available].last()} + {ZFS on Linux:zfs.get.fsinfo[{#FILESETNAME},used].last()} ) ) > ({$ZFS_HIGH_ALERT}/100)
More than {$ZFS_HIGH_ALERT}% used on dataset {#FILESETNAME} on {HOST.NAME}
0
4
0
More than {$ZFS_DISASTER_ALERT}% used on dataset {#FILESETNAME} on {HOST.NAME}
( {ZFS on Linux:zfs.get.fsinfo[{#FILESETNAME},used].last()} / ( {ZFS on Linux:zfs.get.fsinfo[{#FILESETNAME},available].last()} + {ZFS on Linux:zfs.get.fsinfo[{#FILESETNAME},used].last()} ) ) > ({$ZFS_DISASTER_ALERT}/100)
ZFS dataset {#FILESETNAME} usage
900
200
0.0000
100.0000
1
1
1
1
0
0.0000
0.0000
1
0
0
0
0
0
3333FF
0
2
0
-
ZFS on Linux
zfs.get.fsinfo[{#FILESETNAME},usedbydataset]
1
0
FF33FF
0
2
0
-
ZFS on Linux
zfs.get.fsinfo[{#FILESETNAME},usedbysnapshots]
2
0
FF3333
0
2
0
-
ZFS on Linux
zfs.get.fsinfo[{#FILESETNAME},usedbychildren]
3
0
33FF33
0
2
0
-
ZFS on Linux
zfs.get.fsinfo[{#FILESETNAME},available]
ZFS pool discovery
7
zfs.pool.discovery
3600
0
0
0
0
0
0
3
Zpool {#POOLNAME} available
7
0
zfs.get.fsinfo[{#POOLNAME},available]
300
30
365
0
3
B
0
0
0
0
1
0
0
0
ZFS
ZFS zpool
Zpool {#POOLNAME} used
7
0
zfs.get.fsinfo[{#POOLNAME},used]
300
30
365
0
3
B
0
0
0
0
1
0
0
0
ZFS
ZFS zpool
Zpool {#POOLNAME} health
7
0
zfs.zpool.health[{#POOLNAME}]
300
30
0
0
4
0
0
0
0
1
0
0
0
ZFS
ZFS zpool
Zpool {#POOLNAME} scrub status
7
0
zfs.zpool.scrub[{#POOLNAME}]
300
30
365
0
3
0
0
0
0
1
0
0
Detects whether the pool is currently being scrubbed.
A scrub is not a bad thing in itself, but it slows down the entire pool; on a production server it should be stopped during business hours if it causes a noticeable slowdown.
0
ZFS
ZFS zpool
ZFS zpool scrub status
( {ZFS on Linux:zfs.get.fsinfo[{#POOLNAME},used].last()} / ( {ZFS on Linux:zfs.get.fsinfo[{#POOLNAME},available].last()} + {ZFS on Linux:zfs.get.fsinfo[{#POOLNAME},used].last()} ) ) > ({$ZPOOL_AVERAGE_ALERT}/100)
More than {$ZPOOL_AVERAGE_ALERT}% used on zpool {#POOLNAME} on {HOST.NAME}
0
3
0
More than {$ZPOOL_HIGH_ALERT}% used on zpool {#POOLNAME} on {HOST.NAME}
( {ZFS on Linux:zfs.get.fsinfo[{#POOLNAME},used].last()} / ( {ZFS on Linux:zfs.get.fsinfo[{#POOLNAME},available].last()} + {ZFS on Linux:zfs.get.fsinfo[{#POOLNAME},used].last()} ) ) > ({$ZPOOL_HIGH_ALERT}/100)
( {ZFS on Linux:zfs.get.fsinfo[{#POOLNAME},used].last()} / ( {ZFS on Linux:zfs.get.fsinfo[{#POOLNAME},available].last()} + {ZFS on Linux:zfs.get.fsinfo[{#POOLNAME},used].last()} ) ) > ({$ZPOOL_DISASTER_ALERT}/100)
More than {$ZPOOL_DISASTER_ALERT}% used on zpool {#POOLNAME} on {HOST.NAME}
0
5
0
( {ZFS on Linux:zfs.get.fsinfo[{#POOLNAME},used].last()} / ( {ZFS on Linux:zfs.get.fsinfo[{#POOLNAME},available].last()} + {ZFS on Linux:zfs.get.fsinfo[{#POOLNAME},used].last()} ) ) > ({$ZPOOL_HIGH_ALERT}/100)
More than {$ZPOOL_HIGH_ALERT}% used on zpool {#POOLNAME} on {HOST.NAME}
0
4
0
More than {$ZPOOL_DISASTER_ALERT}% used on zpool {#POOLNAME} on {HOST.NAME}
( {ZFS on Linux:zfs.get.fsinfo[{#POOLNAME},used].last()} / ( {ZFS on Linux:zfs.get.fsinfo[{#POOLNAME},available].last()} + {ZFS on Linux:zfs.get.fsinfo[{#POOLNAME},used].last()} ) ) > ({$ZPOOL_DISASTER_ALERT}/100)
{ZFS on Linux:zfs.zpool.scrub[{#POOLNAME}].max(12h)}=0
Zpool {#POOLNAME} is scrubbing for more than 12h on {HOST.NAME}
0
3
0
Zpool {#POOLNAME} is scrubbing for more than 24h on {HOST.NAME}
{ZFS on Linux:zfs.zpool.scrub[{#POOLNAME}].max(24h)}=0
{ZFS on Linux:zfs.zpool.scrub[{#POOLNAME}].max(24h)}=0
Zpool {#POOLNAME} is scrubbing for more than 24h on {HOST.NAME}
0
4
0
{ZFS on Linux:zfs.zpool.health[{#POOLNAME}].str(ONLINE)}=0
Zpool {#POOLNAME} is {ITEM.VALUE} on {HOST.NAME}
0
4
0
Zpool {#POOLNAME} available and used
900
200
0.0000
100.0000
1
1
1
1
0
0.0000
0.0000
0
0
0
0
0
0
00EE00
0
2
0
-
ZFS on Linux
zfs.get.fsinfo[{#POOLNAME},available]
1
0
EE0000
0
2
0
-
ZFS on Linux
zfs.get.fsinfo[{#POOLNAME},used]
ZFS vdev discovery
7
zfs.vdev.discovery
3600
0
0
0
0
0
0
3
vdev {#VDEV}: CHECKSUM error counter
7
0
zfs.vdev.error_counter.cksum[{#VDEV}]
300
30
365
0
3
0
0
0
0
1
0
0
This device has experienced an unrecoverable error. Determine whether the device needs to be replaced.
If so, replace it with 'zpool replace'.
If not, clear the error with 'zpool clear'.
0
ZFS
ZFS vdev
vdev {#VDEV}: READ error counter
7
0
zfs.vdev.error_counter.read[{#VDEV}]
300
30
365
0
3
0
0
0
0
1
0
0
This device has experienced an unrecoverable error. Determine whether the device needs to be replaced.
If so, replace it with 'zpool replace'.
If not, clear the error with 'zpool clear'.
0
ZFS
ZFS vdev
vdev {#VDEV}: WRITE error counter
7
0
zfs.vdev.error_counter.write[{#VDEV}]
300
30
365
0
3
0
0
0
0
1
0
0
This device has experienced an unrecoverable error. Determine whether the device needs to be replaced.
If so, replace it with 'zpool replace'.
If not, clear the error with 'zpool clear'.
0
ZFS
ZFS vdev
vdev {#VDEV}: total number of errors
15
0
zfs.vdev.error_total[{#VDEV}]
300
30
365
0
3
0
0
0
0
1
last(zfs.vdev.error_counter.cksum[{#VDEV}])+last(zfs.vdev.error_counter.read[{#VDEV}])+last(zfs.vdev.error_counter.write[{#VDEV}])
0
0
This device has experienced an unrecoverable error. Determine whether the device needs to be replaced.
If so, replace it with 'zpool replace'.
If not, clear the error with 'zpool clear'.
0
ZFS
ZFS vdev
{ZFS on Linux:zfs.vdev.error_total[{#VDEV}].last()}>0
vdev {#VDEV} has encountered {ITEM.VALUE} errors on {HOST.NAME}
0
4
This device has experienced an unrecoverable error. Determine whether the device needs to be replaced.
If so, replace it with 'zpool replace'.
If not, clear the error with 'zpool clear'.
You may also run 'zpool scrub' to check whether other undetected errors are present on this vdev.
0
ZFS vdev errors
900
200
0.0000
100.0000
1
1
0
1
0
0.0000
0.0000
1
0
0
0
0
0
CC00CC
0
2
0
-
ZFS on Linux
zfs.vdev.error_counter.cksum[{#VDEV}]
1
0
F63100
0
2
0
-
ZFS on Linux
zfs.vdev.error_counter.read[{#VDEV}]
2
0
BBBB00
0
2
0
-
ZFS on Linux
zfs.vdev.error_counter.write[{#VDEV}]
{$ZFS_ARC_META_ALERT}
90
{$ZFS_AVERAGE_ALERT}
90
{$ZFS_DISASTER_ALERT}
99
{$ZFS_HIGH_ALERT}
95
{$ZPOOL_AVERAGE_ALERT}
85
{$ZPOOL_DISASTER_ALERT}
99
{$ZPOOL_HIGH_ALERT}
90
ZFS ARC
1
4
0
1500
150
0
0
1
1
0
0
0
0
0
ZFS ARC memory usage
ZFS on Linux
3
0
1500
150
0
1
1
1
0
0
0
0
0
ZFS ARC Cache Hit Ratio
ZFS on Linux
3
0
1500
150
0
2
1
1
0
0
0
0
0
ZFS ARC breakdown
ZFS on Linux
3
0
1500
150
0
3
1
1
0
0
0
0
0
ZFS ARC arc_meta_used breakdown
ZFS on Linux
3
{ZFS on Linux:vfs.file.contents[/sys/module/zfs/version].diff(0)}>0
Version of ZoL is now {ITEM.VALUE} on {HOST.NAME}
0
1
0
{ZFS on Linux:zfs.arcstats[dnode_size].last()}>({ZFS on Linux:zfs.arcstats[arc_dnode_limit].last()}*0.9)
ZFS ARC dnode size > 90% dnode max size on {HOST.NAME}
0
4
0
{ZFS on Linux:zfs.arcstats[arc_meta_used].last()}>({ZFS on Linux:zfs.arcstats[arc_meta_limit].last()}*0.01*{$ZFS_ARC_META_ALERT})
ZFS ARC meta size > {$ZFS_ARC_META_ALERT}% meta max size on {HOST.NAME}
0
4
0
ZFS ARC arc_meta_used breakdown
900
200
0.0000
100.0000
1
1
1
1
0
0.0000
0.0000
1
0
0
0
0
0
3333FF
0
2
0
-
ZFS on Linux
zfs.arcstats[metadata_size]
1
0
00EE00
0
2
0
-
ZFS on Linux
zfs.arcstats[dnode_size]
2
0
EE0000
0
2
0
-
ZFS on Linux
zfs.arcstats[hdr_size]
3
0
EEEE00
0
2
0
-
ZFS on Linux
zfs.arcstats[dbuf_size]
4
0
EE00EE
0
2
0
-
ZFS on Linux
zfs.arcstats[bonus_size]
ZFS ARC breakdown
900
200
0.0000
100.0000
1
1
1
1
0
0.0000
0.0000
1
0
0
0
0
0
3333FF
0
2
0
-
ZFS on Linux
zfs.arcstats[data_size]
1
0
00AA00
0
2
0
-
ZFS on Linux
zfs.arcstats[metadata_size]
2
0
EE0000
0
2
0
-
ZFS on Linux
zfs.arcstats[dnode_size]
3
0
CCCC00
0
2
0
-
ZFS on Linux
zfs.arcstats[hdr_size]
4
0
A54F10
0
2
0
-
ZFS on Linux
zfs.arcstats[dbuf_size]
5
0
888888
0
2
0
-
ZFS on Linux
zfs.arcstats[bonus_size]
ZFS ARC Cache Hit Ratio
900
200
0.0000
100.0000
1
1
0
1
0
0.0000
0.0000
1
1
0
0
0
0
00CC00
0
2
0
-
ZFS on Linux
zfs.arcstats_hit_ratio
ZFS ARC memory usage
900
200
0.0000
100.0000
1
1
0
1
0
0.0000
0.0000
1
2
0
ZFS on Linux
zfs.arcstats[c_max]
0
5
0000EE
0
2
0
-
ZFS on Linux
zfs.arcstats[size]
1
2
DD0000
0
2
0
-
ZFS on Linux
zfs.arcstats[c_max]
2
0
00BB00
0
2
0
-
ZFS on Linux
zfs.arcstats[c_min]
ZFS zpool scrub status
0
Scrub in progress
1
No scrub in progress