Rule | State | Error | Last Evaluation | Evaluation Time
alert: CephFilesystemDamaged
expr: ceph_health_detail{name="MDS_DAMAGE"}
> 0
for: 1m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.5.1
severity: critical
type: ceph_default
annotations:
description: |
The filesystem's metadata has been corrupted. Data access may be blocked.
Either analyse the output from the MDS daemon admin socket, or escalate to support.
documentation: https://docs.ceph.com/en/latest/cephfs/health-messages#cephfs-health-messages
summary: Ceph filesystem is damaged.
ok | | 4.251s ago | 243us
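As a hedged illustration of the admin-socket inspection mentioned above (the daemon name is a placeholder, and the command is run on the host where that MDS runs):
    ceph health detail
    ceph daemon mds.<name> damage ls    # list the damage entries recorded by the MDS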
alert: CephFilesystemOffline
expr: ceph_health_detail{name="MDS_ALL_DOWN"}
> 0
for: 1m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.5.3
severity: critical
type: ceph_default
annotations:
description: |
All MDS ranks are unavailable. The ceph daemons providing the metadata for the Ceph filesystem are all down, rendering the filesystem offline.
documentation: https://docs.ceph.com/en/latest/cephfs/health-messages/#mds-all-down
summary: Ceph filesystem is offline
ok | | 4.251s ago | 156us
alert: CephFilesystemDegraded
expr: ceph_health_detail{name="FS_DEGRADED"}
> 0
for: 1m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.5.4
severity: critical
type: ceph_default
annotations:
description: |
One or more metadata daemons (MDS ranks) are failed or in a damaged state. At best the filesystem is partially available; at worst it is completely unusable.
documentation: https://docs.ceph.com/en/latest/cephfs/health-messages/#fs-degraded
summary: Ceph filesystem is degraded
ok | | 4.251s ago | 167us
alert: CephFilesystemMDSRanksLow
expr: ceph_health_detail{name="MDS_UP_LESS_THAN_MAX"}
> 0
for: 1m
labels:
severity: warning
type: ceph_default
annotations:
description: |
The filesystem's "max_mds" setting defined the number of MDS ranks in the filesystem. The current number of active MDS daemons is less than this setting.
documentation: https://docs.ceph.com/en/latest/cephfs/health-messages/#mds-up-less-than-max
summary: Ceph MDS daemon count is lower than configured
ok | | 4.251s ago | 72.34us
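To compare the configured rank count with what is actually running, something like the following may help (the filesystem name is a placeholder):
    ceph fs get <fs_name> | grep max_mds    # configured number of MDS ranks
    ceph fs status <fs_name>                # active and standby MDS daemons
    ceph fs set <fs_name> max_mds 2         # example: adjust the target rank count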
alert: CephFilesystemInsufficientStandby
expr: ceph_health_detail{name="MDS_INSUFFICIENT_STANDBY"}
> 0
for: 1m
labels:
severity: warning
type: ceph_default
annotations:
description: |
The number of standby MDS daemons available is lower than the filesystem's standby_count_wanted setting. Adjust standby_count_wanted, or increase the number of MDS daemons within the filesystem.
documentation: https://docs.ceph.com/en/latest/cephfs/health-messages/#mds-insufficient-standby
summary: Ceph filesystem standby daemons too low
ok | | 4.251s ago | 93.41us
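A rough sketch of the two remedies mentioned above, assuming a cephadm-managed cluster (filesystem name and placement count are placeholders):
    ceph fs get <fs_name> | grep standby_count_wanted
    ceph fs set <fs_name> standby_count_wanted 1     # lower the wanted standby count
    ceph orch apply mds <fs_name> --placement=3      # or deploy additional MDS daemons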
alert: CephFilesystemFailureNoStandby
expr: ceph_health_detail{name="FS_WITH_FAILED_MDS"}
> 0
for: 1m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.5.5
severity: critical
type: ceph_default
annotations:
description: |
An MDS daemon has failed, leaving only one active rank and no further standby. Investigate the cause of the failure or add a standby daemon.
documentation: https://docs.ceph.com/en/latest/cephfs/health-messages/#fs-with-failed-mds
summary: Ceph MDS daemon failed, no further standby available
ok | | 4.251s ago | 81.57us
alert: CephFilesystemReadOnly
expr: ceph_health_detail{name="MDS_HEALTH_READ_ONLY"}
> 0
for: 1m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.5.2
severity: critical
type: ceph_default
annotations:
description: |
The filesystem has switched to READ ONLY due to an unexpected error when writing to the metadata pool.
Either analyse the output from the MDS daemon admin socket, or escalate to support.
documentation: https://docs.ceph.com/en/latest/cephfs/health-messages#cephfs-health-messages
summary: Ceph filesystem is in read-only mode due to write error(s)
ok | | 4.251s ago | 79.28us
6.336s ago | 939us
Rule | State | Error | Last Evaluation | Evaluation Time
alert: CephMonDownQuorumAtRisk
expr: ((ceph_health_detail{name="MON_DOWN"}
== 1) * on() (count(ceph_mon_quorum_status == 1) == bool (floor(count(ceph_mon_metadata)
/ 2) + 1))) == 1
for: 30s
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.3.1
severity: critical
type: ceph_default
annotations:
description: |
{{ $min := query "floor(count(ceph_mon_metadata) / 2) +1" | first | value }}Quorum requires a majority of monitors (x {{ $min }}) to be active
Without quorum the cluster will become inoperable, affecting all connected clients and services.
The following monitors are down:
{{- range query "(ceph_mon_quorum_status == 0) + on(ceph_daemon) group_left(hostname) (ceph_mon_metadata * 0)" }}
- {{ .Labels.ceph_daemon }} on {{ .Labels.hostname }}
{{- end }}
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#mon-down
summary: Monitor quorum is at risk
ok | | 7.875s ago | 769.3us
alert: CephMonDown
expr: (count(ceph_mon_quorum_status
== 0) <= (count(ceph_mon_metadata) - floor(count(ceph_mon_metadata) / 2) + 1))
for: 30s
labels:
severity: warning
type: ceph_default
annotations:
description: |
{{ $down := query "count(ceph_mon_quorum_status == 0)" | first | value }}{{ $s := "" }}{{ if gt $down 1.0 }}{{ $s = "s" }}{{ end }}You have {{ $down }} monitor{{ $s }} down.
Quorum is still intact, but the loss of further monitors will make your cluster inoperable.
The following monitors are down:
{{- range query "(ceph_mon_quorum_status == 0) + on(ceph_daemon) group_left(hostname) (ceph_mon_metadata * 0)" }}
- {{ .Labels.ceph_daemon }} on {{ .Labels.hostname }}
{{- end }}
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#mon-down
summary: One or more ceph monitors are down
ok | | 7.874s ago | 375.4us
alert: CephMonDiskspaceCritical
expr: ceph_health_detail{name="MON_DISK_CRIT"}
== 1
for: 1m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.3.2
severity: critical
type: ceph_default
annotations:
description: |
The free space available to a monitor's store is critically low (<5% by default).
You should increase the space available to the monitor(s). The
default location for the store sits under /var/lib/ceph. Your monitor hosts are:
{{- range query "ceph_mon_metadata"}}
- {{ .Labels.hostname }}
{{- end }}
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#mon-disk-crit
summary: Disk space on at least one monitor is critically low
ok | | 7.874s ago | 73.58us
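A quick way to check how much space is left for the monitor store (the path is the default mentioned above and may differ on your deployment):
    ceph health detail
    df -h /var/lib/ceph        # free space on the filesystem backing the monitor store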
alert: CephMonDiskspaceLow
expr: ceph_health_detail{name="MON_DISK_LOW"}
== 1
for: 5m
labels:
severity: warning
type: ceph_default
annotations:
description: |
The space available to a monitor's store is approaching full (>70% is the default).
You should increase the space available to the monitor store. The
default location for the store sits under /var/lib/ceph. Your monitor hosts are:
{{- range query "ceph_mon_metadata"}}
- {{ .Labels.hostname }}
{{- end }}
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#mon-disk-low
summary: Disk space on at least one monitor is approaching full
ok | | 7.874s ago | 723.6us
alert: CephMonClockSkew
expr: ceph_health_detail{name="MON_CLOCK_SKEW"}
== 1
for: 1m
labels:
severity: warning
type: ceph_default
annotations:
description: |
Ceph monitors rely on a consistent time reference to maintain
quorum and cluster consistency. This event indicates that at least
one of your monitors is not synchronized correctly.
Review the cluster status with ceph -s. This will show which monitors
are affected. Check the time sync status on each monitor host.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#mon-clock-skew
summary: Clock skew across the Monitor hosts detected
ok | | 7.873s ago | 117.5us
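For example, to see which monitors are affected and how far their clocks drift (the chrony command is an assumption; use your NTP client's equivalent):
    ceph -s
    ceph time-sync-status      # per-monitor clock offsets as seen by the quorum
    chronyc tracking           # run on each monitor host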
4.457s ago | 17.87ms
Rule | State | Error | Last Evaluation | Evaluation Time
alert: CephOSDDownHigh
expr: count(ceph_osd_up
== 0) / count(ceph_osd_up) * 100 >= 10
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.4.1
severity: critical
type: ceph_default
annotations:
description: |
{{ $value | humanize }}% or {{ with query "count(ceph_osd_up == 0)" }}{{ . | first | value }}{{ end }} of {{ with query "count(ceph_osd_up)" }}{{ . | first | value }}{{ end }} OSDs are down (>= 10%).
The following OSDs are down:
{{- range query "(ceph_osd_up * on(ceph_daemon) group_left(hostname) ceph_osd_metadata) == 0" }}
- {{ .Labels.ceph_daemon }} on {{ .Labels.hostname }}
{{- end }}
summary: More than 10% of OSDs are down
ok | | 9.303s ago | 730.1us
alert: CephOSDHostDown
expr: ceph_health_detail{name="OSD_HOST_DOWN"}
== 1
for: 5m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.4.8
severity: warning
type: ceph_default
annotations:
description: |
The following OSDs are down:
{{- range query "(ceph_osd_up * on(ceph_daemon) group_left(hostname) ceph_osd_metadata) == 0" }}
- {{ .Labels.hostname }} : {{ .Labels.ceph_daemon }}
{{- end }}
summary: An OSD host is offline
ok | | 9.302s ago | 119.2us
alert: CephOSDDown
expr: ceph_health_detail{name="OSD_DOWN"}
== 1
for: 5m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.4.2
severity: warning
type: ceph_default
annotations:
description: |
{{ $num := query "count(ceph_osd_up == 0)" | first | value }}{{ $s := "" }}{{ if gt $num 1.0 }}{{ $s = "s" }}{{ end }}{{ $num }} OSD{{ $s }} down for over 5 minutes.
The following OSD{{ $s }} {{ if eq $s "" }}is{{ else }}are{{ end }} down:
{{- range query "(ceph_osd_up * on(ceph_daemon) group_left(hostname) ceph_osd_metadata) == 0"}}
- {{ .Labels.ceph_daemon }} on {{ .Labels.hostname }}
{{- end }}
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#osd-down
summary: An OSD has been marked down/unavailable
ok | | 9.302s ago | 133.5us
alert: CephOSDNearFull
expr: ceph_health_detail{name="OSD_NEARFULL"}
== 1
for: 5m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.4.3
severity: warning
type: ceph_default
annotations:
description: |
One or more OSDs have reached their NEARFULL threshold.
Use 'ceph health detail' to identify which OSDs have reached this threshold.
To resolve, either add capacity to the cluster, or delete unwanted data.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#osd-nearfull
summary: OSD(s) running low on free space (NEARFULL)
ok | | 9.302s ago | 91.59us
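To see per-OSD utilisation alongside the health detail mentioned above, something like this may help:
    ceph health detail
    ceph osd df                # utilisation and PG count per OSD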
alert: CephOSDFull
expr: ceph_health_detail{name="OSD_FULL"}
> 0
for: 1m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.4.6
severity: critical
type: ceph_default
annotations:
description: |
An OSD has reached its full threshold. Writes from all pools that share the
affected OSD will be blocked.
To resolve, either add capacity to the cluster, or delete unwanted data.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#osd-full
summary: OSD(s) full, writes blocked
ok | | 9.302s ago | 85.5us
alert: CephOSDBackfillFull
expr: ceph_health_detail{name="OSD_BACKFILLFULL"}
> 0
for: 1m
labels:
severity: warning
type: ceph_default
annotations:
description: |
An OSD has reached its BACKFILLFULL threshold. This will prevent rebalance operations from
completing for some pools. Check the current capacity utilisation with 'ceph df'.
To resolve, either add capacity to the cluster, or delete unwanted data.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#osd-backfillfull
summary: OSD(s) too full for backfill operations
ok | | 9.302s ago | 90.09us
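For reference, the utilisation check quoted above:
    ceph df                    # raw and per-pool capacity utilisation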
alert: CephOSDTooManyRepairs
expr: ceph_health_detail{name="OSD_TOO_MANY_REPAIRS"}
== 1
for: 30s
labels:
severity: warning
type: ceph_default
annotations:
description: |
Reads from an OSD have used a secondary PG to return data to the client, indicating
a potential failing disk.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#osd-too-many-repairs
summary: OSD has hit a high number of read errors
ok | | 9.302s ago | 66.87us
alert: CephOSDTimeoutsPublicNetwork
expr: ceph_health_detail{name="OSD_SLOW_PING_TIME_FRONT"}
== 1
for: 1m
labels:
severity: warning
type: ceph_default
annotations:
description: |
OSD heartbeats on the cluster's 'public' network (frontend) are running slow. Investigate the network
for any latency issues on this subnet. Use 'ceph health detail' to show the affected OSDs.
summary: Network issues delaying OSD heartbeats (public network)
ok | | 9.302s ago | 76.88us
alert: CephOSDTimeoutsClusterNetwork
expr: ceph_health_detail{name="OSD_SLOW_PING_TIME_BACK"}
== 1
for: 1m
labels:
severity: warning
type: ceph_default
annotations:
description: |
OSD heartbeats on the cluster's 'cluster' network (backend) are running slow. Investigate the network
for any latency issues on this subnet. Use 'ceph health detail' to show the affected OSDs.
summary: Network issues delaying OSD heartbeats (cluster network)
ok | | 9.302s ago | 68.8us
alert: CephOSDInternalDiskSizeMismatch
expr: ceph_health_detail{name="BLUESTORE_DISK_SIZE_MISMATCH"}
== 1
for: 1m
labels:
severity: warning
type: ceph_default
annotations:
description: |
One or more OSDs have an internal inconsistency between the size of the physical device and its metadata.
This could lead to the OSD(s) crashing in the future. You should redeploy the affected OSDs.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#bluestore-disk-size-mismatch
summary: OSD size inconsistency error
ok | | 9.302s ago | 58.76us
alert: CephDeviceFailurePredicted
expr: ceph_health_detail{name="DEVICE_HEALTH"}
== 1
for: 1m
labels:
severity: warning
type: ceph_default
annotations:
description: |
The device health module has determined that one or more devices will fail
soon. To review the device states use 'ceph device ls'. To show a specific
device use 'ceph device info <dev id>'.
Mark the OSD as out (so data may migrate to other OSDs in the cluster). Once
the OSD is empty, remove and replace it.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#id2
summary: Device(s) have been predicted to fail soon
ok | | 9.302s ago | 69.8us
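A minimal sketch of the workflow described above (the device and OSD ids are placeholders):
    ceph device ls                 # list devices and their predicted life expectancy
    ceph device info <dev id>      # details for a specific device
    ceph osd out osd.12            # let data migrate off the failing OSD before replacing it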
alert: CephDeviceFailurePredictionTooHigh
expr: ceph_health_detail{name="DEVICE_HEALTH_TOOMANY"}
== 1
for: 1m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.4.7
severity: critical
type: ceph_default
annotations:
description: |
The device health module has determined that the number of devices predicted to
fail cannot be remediated automatically, since doing so would take too many OSDs out of
the cluster, impacting performance and potentially availability. You should add new
OSDs to the cluster so that data can be relocated without risking data integrity.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#device-health-toomany
summary: Too many devices have been predicted to fail, unable to resolve
ok | | 9.302s ago | 87.12us
alert: CephDeviceFailureRelocationIncomplete
expr: ceph_health_detail{name="DEVICE_HEALTH_IN_USE"}
== 1
for: 1m
labels:
severity: warning
type: ceph_default
annotations:
description: |
The device health module has determined that one or more devices will fail
soon, but the normal process of relocating the data on the device to other
OSDs in the cluster is blocked.
Check that the cluster has available free space. It may be necessary to add
more disks to the cluster to allow the data from the failing device to
successfully migrate.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#device-health-in-use
summary: A device failure is predicted, but unable to relocate data
ok | | 9.302s ago | 72.24us
alert: CephOSDFlapping
expr: (rate(ceph_osd_up[5m])
* on(ceph_daemon) group_left(hostname) ceph_osd_metadata) * 60 > 1
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.4.4
severity: warning
type: ceph_default
annotations:
description: |
OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked down and back up {{ $value | humanize }} times a minute for 5 minutes. This could indicate a network issue (latency, packet drop, disruption) on the cluster's "cluster network". Check the network environment on the listed host(s).
documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds
summary: Network issues are causing OSDs to flap (mark each other out)
ok | | 9.302s ago | 614.1us
alert: CephOSDReadErrors
expr: ceph_health_detail{name="BLUESTORE_SPURIOUS_READ_ERRORS"}
== 1
for: 30s
labels:
severity: warning
type: ceph_default
annotations:
description: |
An OSD has encountered read errors, but the OSD has recovered by retrying the reads. This may indicate an issue with the hardware or kernel.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#bluestore-spurious-read-errors
summary: Device read errors detected
ok | | 9.301s ago | 56.53us
alert: CephPGImbalance
expr: abs(((ceph_osd_numpg
> 0) - on(job) group_left() avg by(job) (ceph_osd_numpg > 0)) / on(job) group_left()
avg by(job) (ceph_osd_numpg > 0)) * on(ceph_daemon) group_left(hostname) ceph_osd_metadata
> 0.3
for: 5m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.4.5
severity: warning
type: ceph_default
annotations:
description: |
OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} deviates by more than 30% from the average PG count.
summary: PG allocations are not balanced across devices
ok | | 9.301s ago | 3.704ms
3.508s ago | 2.3ms
Rule | State | Error | Last Evaluation | Evaluation Time
alert: CephPGsInactive
expr: ceph_pool_metadata
* on(pool_id, instance) group_left() (ceph_pg_total - ceph_pg_active) > 0
for: 5m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.7.1
severity: critical
type: ceph_default
annotations:
description: |
{{ $value }} PGs have been inactive for more than 5 minutes in pool {{ $labels.name }}. Inactive placement groups aren't able to serve read/write requests.
summary: One or more Placement Groups are inactive
ok | | 3.508s ago | 862.8us
alert: CephPGsUnclean
expr: ceph_pool_metadata
* on(pool_id, instance) group_left() (ceph_pg_total - ceph_pg_clean) > 0
for: 15m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.7.2
severity: warning
type: ceph_default
annotations:
description: |
{{ $value }} PGs haven't been clean for more than 15 minutes in pool {{ $labels.name }}. Unclean PGs haven't been able to completely recover from a previous failure.
summary: One or more placement groups are marked unclean
ok | | 3.507s ago | 715.8us
alert: CephPGsDamaged
expr: ceph_health_detail{name=~"PG_DAMAGED|OSD_SCRUB_ERRORS"}
== 1
for: 5m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.7.4
severity: critical
type: ceph_default
annotations:
description: |
During data consistency checks (scrub), at least one PG has been flagged as being damaged or inconsistent.
Check to see which PG is affected, and attempt a manual repair if necessary. To list problematic placement groups, use 'rados list-inconsistent-pg <pool>'. To repair PGs use the 'ceph pg repair <pg_num>' command.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#pg-damaged
summary: Placement group damaged, manual intervention needed
ok | | 3.507s ago | 142.6us
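A sketch of the repair workflow quoted above (pool name and PG id are placeholders; confirm the affected PG before repairing):
    ceph health detail
    rados list-inconsistent-pg <pool>   # list PGs with inconsistencies in the pool
    ceph pg repair <pg_num>             # ask the primary OSD to repair the PG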
alert: CephPGRecoveryAtRisk
expr: ceph_health_detail{name="PG_RECOVERY_FULL"}
== 1
for: 1m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.7.5
severity: critical
type: ceph_default
annotations:
description: |
Data redundancy may be reduced, or is at risk, since one or more OSDs are at or above their 'full' threshold. Add more capacity to the cluster, or delete unwanted data.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#pg-recovery-full
summary: OSDs are too full for automatic recovery
ok | | 3.507s ago | 131.2us
alert: CephPGUnavilableBlockingIO
expr: ((ceph_health_detail{name="PG_AVAILABILITY"}
== 1) - scalar(ceph_health_detail{name="OSD_DOWN"})) == 1
for: 1m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.7.3
severity: critical
type: ceph_default
annotations:
description: |
Data availability is reduced, impacting the cluster's ability to service I/O to some data. One or more placement groups (PGs) are in a state that blocks I/O.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#pg-availability
summary: Placement group is unavailable, blocking some I/O
ok | | 3.507s ago | 161.5us
alert: CephPGBackfillAtRisk
expr: ceph_health_detail{name="PG_BACKFILL_FULL"}
== 1
for: 1m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.7.6
severity: critical
type: ceph_default
annotations:
description: |
Data redundancy may be at risk due to lack of free space within the cluster. One or more OSDs have breached their 'backfillfull' threshold. Add more capacity, or delete unwanted data.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#pg-backfill-full
summary: Backfill operations are blocked due to lack of free space
ok | | 3.507s ago | 63.3us
alert: CephPGNotScrubbed
expr: ceph_health_detail{name="PG_NOT_SCRUBBED"}
== 1
for: 5m
labels:
severity: warning
type: ceph_default
annotations:
description: |
One or more PGs have not been scrubbed recently. The scrub process is a data integrity
feature, protecting against bit-rot. It checks that objects and their metadata (size and
attributes) match across object replicas. When PGs miss their scrub window, it may
indicate the scrub window is too small, or PGs were not in a 'clean' state during the
scrub window.
You can manually initiate a scrub with: ceph pg scrub <pgid>
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#pg-not-scrubbed
summary: Placement group(s) have not been scrubbed
ok | | 3.507s ago | 48.92us
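The manual scrub mentioned above, with a placeholder PG id:
    ceph pg scrub <pgid>       # e.g. ceph pg scrub 2.1f (hypothetical PG id)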
alert: CephPGsHighPerOSD
expr: ceph_health_detail{name="TOO_MANY_PGS"}
== 1
for: 1m
labels:
severity: warning
type: ceph_default
annotations:
description: |
The number of placement groups per OSD is too high (exceeds the mon_max_pg_per_osd setting).
Check that the pg_autoscaler hasn't been disabled for any of the pools with 'ceph osd pool autoscale-status',
and that the profile selected is appropriate. You may also adjust the target_size_ratio of a pool to guide
the autoscaler based on the expected relative size of the pool
(e.g. 'ceph osd pool set cephfs.cephfs.meta target_size_ratio .1')
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks/#too-many-pgs
summary: Placement groups per OSD is too high
ok | | 3.507s ago | 76.31us
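As an illustration of the checks quoted above (the pool name in the last command is taken from the example in the description):
    ceph osd pool autoscale-status
    ceph osd pool set cephfs.cephfs.meta target_size_ratio .1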
alert: CephPGNotDeepScrubbed
expr: ceph_health_detail{name="PG_NOT_DEEP_SCRUBBED"}
== 1
for: 5m
labels:
severity: warning
type: ceph_default
annotations:
description: |
One or more PGs have not been deep scrubbed recently. Deep scrub is a data integrity
feature, protecting against bit-rot. It compares the contents of objects and their
replicas for inconsistency. When PGs miss their deep scrub window, it may indicate
that the window is too small or PGs were not in a 'clean' state during the deep-scrub
window.
You can manually initiate a deep scrub with: ceph pg deep-scrub <pgid>
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#pg-not-deep-scrubbed
summary: Placement group(s) have not been deep scrubbed
ok | | 3.507s ago | 78.96us
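Similarly, the manual deep scrub referenced above (placeholder PG id):
    ceph pg deep-scrub <pgid>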
5.649s ago | 31.63ms
Rule | State | Error | Last Evaluation | Evaluation Time
alert: CephPoolGrowthWarning
expr: (predict_linear(ceph_pool_percent_used[2d],
3600 * 24 * 5) * on(pool_id) group_right() ceph_pool_metadata) >= 95
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.9.2
severity: warning
type: ceph_default
annotations:
description: |
Pool '{{ $labels.name }}' will be full in less than 5 days assuming the average fill-up rate of the past 48 hours.
summary: Pool growth rate may soon exceed its capacity
ok | | 5.649s ago | 31.14ms
alert: CephPoolBackfillFull
expr: ceph_health_detail{name="POOL_BACKFILLFULL"}
> 0
labels:
severity: warning
type: ceph_default
annotations:
description: |
A pool is approaching its near-full threshold, which will prevent rebalance operations from completing. You should consider adding more capacity to the pool.
summary: Free space in a pool is too low for recovery/rebalance
ok | | 5.618s ago | 227us
alert: CephPoolFull
expr: ceph_health_detail{name="POOL_FULL"}
> 0
for: 1m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.9.1
severity: critical
type: ceph_default
annotations:
description: |
A pool has reached it's MAX quota, or the OSDs supporting the pool
have reached their FULL threshold. Until this is resolved, writes to
the pool will be blocked.
Pool Breakdown (top 5)
{{- range query "topk(5, sort_desc(ceph_pool_percent_used * on(pool_id) group_right ceph_pool_metadata))" }}
- {{ .Labels.name }} at {{ .Value }}%
{{- end }}
Either increase the pool's quota, or add capacity to the cluster first
and then increase its quota (e.g. ceph osd pool set-quota <pool_name> max_bytes <bytes>)
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#pool-full
summary: Pool is full - writes are blocked
ok | | 5.618s ago | 135.7us
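A short sketch of the quota adjustment described above (pool name and byte count are placeholders):
    ceph df detail                                        # check QUOTA BYTES vs STORED for the pool
    ceph osd pool set-quota <pool_name> max_bytes <bytes> # raise the quota once capacity allows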
alert: CephPoolNearFull
expr: ceph_health_detail{name="POOL_NEAR_FULL"}
> 0
for: 5m
labels:
severity: warning
type: ceph_default
annotations:
description: |
A pool has exceeeded it warning (percent full) threshold, or the OSDs
supporting the pool have reached their NEARFULL thresholds. Writes may
continue, but you are at risk of the pool going read only if more capacity
isn't made available.
Determine the affected pool with 'ceph df detail', for example looking
at QUOTA BYTES and STORED. Either increase the pools quota, or add
capacity to the cluster first then increase it's quota
(e.g. ceph osd pool set quota <pool_name> max_bytes <bytes>)
summary: One or more Ceph pools are getting full
ok | | 5.618s ago | 119.4us
3.999s ago | 923.7us