/etc/prometheus/alerting/ceph_alerts.yml > PrometheusServer
|
alert: PrometheusJobMissing
expr: absent(up{job="ceph"})
for: 30s
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.12.1
severity: critical
type: ceph_default
annotations:
description: |
The Prometheus job that scrapes Ceph is no longer defined. As a result you will
have no metrics or alerts for the cluster.
Please review the job definitions in the prometheus.yml file of the Prometheus
instance.
summary: The scrape job for Ceph is missing from Prometheus
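A quick way to verify this from the Prometheus host - a sketch, assuming the configuration lives at /etc/prometheus/prometheus.yml and that promtool is installed alongside Prometheus:
    # Check whether a scrape job named "ceph" is still defined (job name taken from the alert expr)
    grep -n -A5 'job_name:.*ceph' /etc/prometheus/prometheus.yml
    # Validate the configuration before reloading Prometheus
    promtool check config /etc/prometheus/prometheus.yml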
|
/etc/prometheus/alerting/ceph_alerts.yml > cephadm
|
alert: CephadmDaemonFailed
expr: ceph_health_detail{name="CEPHADM_FAILED_DAEMON"} > 0
for: 30s
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.11.1
severity: critical
type: ceph_default
annotations:
description: |
A daemon managed by cephadm is no longer active. Determine which daemon is down with 'ceph health detail'. You may start daemons with 'ceph orch daemon start <daemon_id>'.
summary: A ceph daemon managed by cephadm is down
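A minimal triage sequence based on the commands mentioned above (the daemon name osd.12 is only a placeholder):
    ceph health detail              # identifies which daemon CEPHADM_FAILED_DAEMON refers to
    ceph orch ps                    # lists daemons managed by cephadm and their current status
    ceph orch daemon start osd.12   # placeholder; use the daemon name reported as failed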
|
alert: CephadmPaused
expr: ceph_health_detail{name="CEPHADM_PAUSED"} > 0
for: 1m
labels:
severity: warning
type: ceph_default
annotations:
description: |
Cluster management has been paused manually. This prevents the orchestrator from performing service management and reconciliation. If this is not intentional, resume cephadm operations with 'ceph orch resume'.
documentation: https://docs.ceph.com/en/latest/cephadm/operations#cephadm-paused
summary: Orchestration tasks via cephadm are PAUSED
|
alert: CephadmUpgradeFailed
expr: ceph_health_detail{name="UPGRADE_EXCEPTION"} > 0
for: 30s
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.11.2
severity: critical
type: ceph_default
annotations:
description: |
The cephadm cluster upgrade process has failed. The cluster remains in an undetermined state.
Please review the cephadm logs to understand the nature of the issue.
summary: Ceph version upgrade has failed
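To review the upgrade state and the cephadm log, something along these lines should help (a sketch; 'ceph -W cephadm' follows the cephadm log channel):
    ceph orch upgrade status   # target version and current progress
    ceph -W cephadm            # stream the cephadm log channel to see the failure details
    ceph orch upgrade pause    # optionally pause the upgrade while investigating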
|
/etc/prometheus/alerting/ceph_alerts.yml > cluster health
|
alert: CephHealthWarning
expr: ceph_health_status == 1
for: 15m
labels:
severity: warning
type: ceph_default
annotations:
description: |
Ceph has been in HEALTH_WARN for more than 15 minutes. Please check "ceph health detail" for more information.
summary: Cluster is in a WARNING state
Currently firing instance:
Labels: alertname="CephHealthWarning" instance="192.168.214.105:9283" job="ceph" severity="warning" type="ceph_default"
State: firing
Active Since: 2025-05-02 15:19:40.13866993 +0000 UTC
Value: 1
Annotations:
- description: Ceph has been in HEALTH_WARN for more than 15 minutes. Please check "ceph health detail" for more information.
- summary: Cluster is in a WARNING state
|
|
alert: CephHealthError
expr: ceph_health_status == 2
for: 5m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.2.1
severity: critical
type: ceph_default
annotations:
description: |
Ceph has been in HEALTH_ERR state for more than 5 minutes. Please check "ceph health detail" for more information.
summary: Cluster is in an ERROR state
|
/etc/prometheus/alerting/ceph_alerts.yml > generic
|
alert: CephDaemonCrash
expr: ceph_health_detail{name="RECENT_CRASH"} == 1
for: 1m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.1.2
severity: critical
type: ceph_default
annotations:
description: |
One or more daemons have crashed recently, and need to be acknowledged. This notification
ensures that software crashes don't go unseen. To acknowledge a crash, use the
'ceph crash archive <id>' command.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks/#recent-crash
summary: One or more Ceph daemons have crashed, and are pending acknowledgement
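For example, to review and acknowledge recent crashes (the crash ID is a placeholder; take it from 'ceph crash ls-new'):
    ceph crash ls-new               # crashes that have not been acknowledged yet
    ceph crash info <crash-id>      # details for one crash (placeholder ID)
    ceph crash archive <crash-id>   # acknowledge a single crash
    ceph crash archive-all          # or acknowledge all outstanding crashes at once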
|
/etc/prometheus/alerting/ceph_alerts.yml > healthchecks
|
alert: CephSlowOps
expr: ceph_healthcheck_slow_ops > 0
for: 30s
labels:
severity: warning
type: ceph_default
annotations:
description: |
{{ $value }} OSD requests are taking too long to process (osd_op_complaint_time exceeded)
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#slow-ops
summary: MON/OSD operations are slow to complete
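To see which daemons report slow ops and to inspect them on a given OSD - a sketch, using osd.3 as a placeholder and assuming the 'ceph daemon' commands are run on the host where that OSD lives:
    ceph health detail                     # lists the daemons reporting slow ops
    ceph daemon osd.3 dump_ops_in_flight   # operations currently in flight on this OSD
    ceph daemon osd.3 dump_historic_ops    # recently completed (including slow) operations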
|
/etc/prometheus/alerting/ceph_alerts.yml > mds
|
alert: CephFilesystemDamaged
expr: ceph_health_detail{name="MDS_DAMAGE"} > 0
for: 1m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.5.1
severity: critical
type: ceph_default
annotations:
description: |
The filesystem's metadata has been corrupted. Data access may be blocked.
Either analyse the output from the MDS daemon admin socket, or escalate to support.
documentation: https://docs.ceph.com/en/latest/cephfs/health-messages#cephfs-health-messages
summary: Ceph filesystem is damaged.
|
alert: CephFilesystemDegraded
expr: ceph_health_detail{name="FS_DEGRADED"} > 0
for: 1m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.5.4
severity: critical
type: ceph_default
annotations:
description: |
One or more metadata daemons (MDS ranks) have failed or are in a damaged state. At best the filesystem is partially available; at worst it is completely unusable.
documentation: https://docs.ceph.com/en/latest/cephfs/health-messages/#fs-degraded
summary: Ceph filesystem is degraded
|
alert: CephFilesystemFailureNoStandby
expr: ceph_health_detail{name="FS_WITH_FAILED_MDS"} > 0
for: 1m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.5.5
severity: critical
type: ceph_default
annotations:
description: |
An MDS daemon has failed, leaving only one active rank and no further standby. Investigate the cause of the failure or add a standby MDS daemon.
documentation: https://docs.ceph.com/en/latest/cephfs/health-messages/#fs-with-failed-mds
summary: Ceph MDS daemon failed, no further standby available
|
alert: CephFilesystemInsufficientStandby
expr: ceph_health_detail{name="MDS_INSUFFICIENT_STANDBY"} > 0
for: 1m
labels:
severity: warning
type: ceph_default
annotations:
description: |
The number of standby MDS daemons available is lower than the filesystem's standby_count_wanted setting. Adjust the standby count wanted, or increase the number of MDS daemons within the filesystem.
documentation: https://docs.ceph.com/en/latest/cephfs/health-messages/#mds-insufficient-standby
summary: Ceph filesystem standby daemons too low
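For example, assuming a filesystem named 'cephfs', you can either add MDS daemons or lower the requirement:
    ceph fs get cephfs | grep standby_count_wanted   # current requirement (filesystem name assumed)
    ceph orch apply mds cephfs --placement=3         # ask cephadm for more MDS daemons (count is an example)
    ceph fs set cephfs standby_count_wanted 1        # or lower the number of standbys wanted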
|
alert: CephFilesystemMDSRanksLow
expr: ceph_health_detail{name="MDS_UP_LESS_THAN_MAX"} > 0
for: 1m
labels:
severity: warning
type: ceph_default
annotations:
description: |
The filesystem's "max_mds" setting defines the number of MDS ranks in the filesystem. The current number of active MDS daemons is less than this setting.
documentation: https://docs.ceph.com/en/latest/cephfs/health-messages/#mds-up-less-than-max
summary: Ceph MDS daemon count is lower than configured
|
alert: CephFilesystemOffline
expr: ceph_health_detail{name="MDS_ALL_DOWN"} > 0
for: 1m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.5.3
severity: critical
type: ceph_default
annotations:
description: |
All MDS ranks are unavailable. The ceph daemons providing the metadata for the Ceph filesystem are all down, rendering the filesystem offline.
documentation: https://docs.ceph.com/en/latest/cephfs/health-messages/#mds-all-down
summary: Ceph filesystem is offline
|
alert: CephFilesystemReadOnly
expr: ceph_health_detail{name="MDS_HEALTH_READ_ONLY"} > 0
for: 1m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.5.2
severity: critical
type: ceph_default
annotations:
description: |
The filesystem has switched to READ ONLY due to an unexpected error when writing to the metadata pool.
Either analyse the output from the MDS daemon admin socket, or escalate to support.
documentation: https://docs.ceph.com/en/latest/cephfs/health-messages#cephfs-health-messages
summary: Ceph filesystem in read only mode, due to write error(s)
|
/etc/prometheus/alerting/ceph_alerts.yml > mgr
|
alert: CephMgrPrometheusModuleInactive
expr: up{job="ceph"} == 0
for: 1m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.6.2
severity: critical
type: ceph_default
annotations:
description: |
The mgr/prometheus module at {{ $labels.instance }} is unreachable. This could mean that the module has been disabled or the mgr itself is down.
Without the mgr/prometheus module, metrics and alerts will no longer function. Open a shell to the cluster and use 'ceph -s' to determine whether the mgr is active. If the mgr is not active, restart it. Otherwise, check that the mgr/prometheus module is loaded with 'ceph mgr module ls'; if it is not listed as enabled, enable it with 'ceph mgr module enable prometheus'.
summary: Ceph's mgr/prometheus module is not available
Currently firing instances (common labels: alertname="CephMgrPrometheusModuleInactive" job="ceph" oid="1.3.6.1.4.1.50495.1.2.1.6.2" severity="critical" type="ceph_default"):

instance    | State  | Active Since                            | Value
node-2:9283 | firing | 2025-05-02 15:19:31.621985075 +0000 UTC | 0
node-4:9283 | firing | 2025-05-02 15:19:31.621985075 +0000 UTC | 0
node-3:9283 | firing | 2025-05-02 15:19:41.621985075 +0000 UTC | 0

Annotations per instance: the description above with {{ $labels.instance }} expanded to the instance name, and the summary "Ceph's mgr/prometheus module is not available".
|
|
alert: CephMgrModuleCrash
expr: ceph_health_detail{name="RECENT_MGR_MODULE_CRASH"} == 1
for: 5m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.6.1
severity: critical
type: ceph_default
annotations:
description: |
One or more mgr modules have crashed and are yet to be acknowledged by the administrator. A crashed module may impact functionality within the cluster. Use the 'ceph crash' commands to investigate which module has failed, and archive it to acknowledge the failure.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#recent-mgr-module-crash
summary: A mgr module has recently crashed
|
/etc/prometheus/alerting/ceph_alerts.yml > mon
|
alert: CephMonDiskspaceLow
expr: ceph_health_detail{name="MON_DISK_LOW"} == 1
for: 5m
labels:
severity: warning
type: ceph_default
annotations:
description: |
The space available to a monitor's store is approaching full (>70% is the default).
You should increase the space available to the monitor store. The
default location for the store sits under /var/lib/ceph. Your monitor hosts are:
{{- range query "ceph_mon_metadata"}}
- {{ .Labels.hostname }}
{{- end }}
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#mon-disk-low
summary: Disk space on at least one monitor is approaching full
Currently firing instance:
Labels: alertname="CephMonDiskspaceLow" instance="192.168.214.105:9283" job="ceph" name="MON_DISK_LOW" severity="warning" type="ceph_default"
State: firing
Active Since: 2025-05-02 15:19:40.084118673 +0000 UTC
Value: 1
Annotations:
- description: The space available to a monitor's store is approaching full (>70% is the default). You should increase the space available to the monitor store. The default location for the store sits under /var/lib/ceph. Your monitor hosts are:
  - node-1
  - node-2
  - node-3
  - node-4
  - node-5.ceph.cri.epita.fr
- documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#mon-disk-low
- summary: Disk space on at least one monitor is approaching full
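To see how large the monitor store actually is and to try reclaiming space - a sketch; the store path shown assumes a cephadm deployment under /var/lib/ceph/<fsid>/, and the monitor name is a placeholder:
    ceph health detail                      # which monitor(s) are affected
    du -sh /var/lib/ceph/*/mon.*/store.db   # store size on a monitor host (path is an assumption)
    ceph tell mon.node-1 compact            # compact the store of one monitor (name is a placeholder)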
|
|
alert: CephMonClockSkew
expr: ceph_health_detail{name="MON_CLOCK_SKEW"} == 1
for: 1m
labels:
severity: warning
type: ceph_default
annotations:
description: |
The ceph monitors rely on a consistent time reference to maintain
quorum and cluster consistency. This event indicates that at least
one of your mons is not sync'd correctly.
Review the cluster status with ceph -s. This will show which monitors
are affected. Check the time sync status on each monitor host.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#mon-clock-skew
summary: Clock skew across the Monitor hosts detected
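To confirm the skew and check the time daemon on each monitor host (chrony is only an assumption; substitute whichever NTP client you run):
    ceph time-sync-status    # skew and latency as seen by the monitors
    timedatectl status       # on each monitor host: is the system clock synchronized?
    chronyc tracking         # chrony-specific detail, if chrony is the time daemon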
|
alert: CephMonDiskspaceCritical
expr: ceph_health_detail{name="MON_DISK_CRIT"} == 1
for: 1m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.3.2
severity: critical
type: ceph_default
annotations:
description: |
The free space available to a monitor's store is critically low (<5% by default).
You should increase the space available to the monitor(s). The
default location for the store sits under /var/lib/ceph. Your monitor hosts are:
{{- range query "ceph_mon_metadata"}}
- {{ .Labels.hostname }}
{{- end }}
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#mon-disk-crit
summary: Disk space on at least one monitor is critically low
|
alert: CephMonDown
expr: (count(ceph_mon_quorum_status == 0) <= (count(ceph_mon_metadata) - floor(count(ceph_mon_metadata) / 2) + 1))
for: 30s
labels:
severity: warning
type: ceph_default
annotations:
description: |
{{ $down := query "count(ceph_mon_quorum_status == 0)" | first | value }}{{ $s := "" }}{{ if gt $down 1.0 }}{{ $s = "s" }}{{ end }}You have {{ $down }} monitor{{ $s }} down.
Quorum is still intact, but the loss of further monitors will make your cluster inoperable.
The following monitors are down:
{{- range query "(ceph_mon_quorum_status == 0) + on(ceph_daemon) group_left(hostname) (ceph_mon_metadata * 0)" }}
- {{ .Labels.ceph_daemon }} on {{ .Labels.hostname }}
{{- end }}
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#mon-down
summary: One or more ceph monitors are down
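A short check list, with mon.node-1 as a placeholder for the down monitor:
    ceph -s                             # overall status, including quorum membership
    ceph quorum_status -f json-pretty   # which monitors are inside/outside the quorum
    ceph orch daemon start mon.node-1   # restart the down monitor (placeholder name)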
|
alert: CephMonDownQuorumAtRisk
expr: ((ceph_health_detail{name="MON_DOWN"} == 1) * on() (count(ceph_mon_quorum_status == 1) == bool (floor(count(ceph_mon_metadata) / 2) + 1))) == 1
for: 30s
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.3.1
severity: critical
type: ceph_default
annotations:
description: |
{{ $min := query "floor(count(ceph_mon_metadata) / 2) +1" | first | value }}Quorum requires a majority of monitors (x {{ $min }}) to be active
Without quorum the cluster will become inoperable, affecting all connected clients and services.
The following monitors are down:
{{- range query "(ceph_mon_quorum_status == 0) + on(ceph_daemon) group_left(hostname) (ceph_mon_metadata * 0)" }}
- {{ .Labels.ceph_daemon }} on {{ .Labels.hostname }}
{{- end }}
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#mon-down
summary: Monitor quorum is at risk
|
/etc/prometheus/alerting/ceph_alerts.yml > nodes
|
/etc/prometheus/alerting/ceph_alerts.yml > osd
|
Currently firing instances of CephPGImbalance (common labels: instance="192.168.214.105:9283" job="ceph" oid="1.3.6.1.4.1.50495.1.2.1.4.5" severity="warning" type="ceph_default"):

ceph_daemon | hostname                 | State  | Active Since                            | Value
osd.9       | node-5.ceph.cri.epita.fr | firing | 2025-05-02 15:19:38.656874605 +0000 UTC | 0.3757094211123722
osd.15      | node-1                   | firing | 2025-05-09 05:28:08.656874605 +0000 UTC | 0.3337116912599321
osd.19      | node-5                   | firing | 2025-05-09 07:06:48.656874605 +0000 UTC | 0.314793794930004
osd.7       | node-4                   | firing | 2025-05-02 15:19:38.656874605 +0000 UTC | 0.3567915247824441
osd.8       | node-5.ceph.cri.epita.fr | firing | 2025-05-02 15:19:38.656874605 +0000 UTC | 0.34733257661748
osd.3       | node-2                   | firing | 2025-05-02 15:19:38.656874605 +0000 UTC | 0.4040862656072644
osd.1       | node-1                   | firing | 2025-05-02 15:19:38.656874605 +0000 UTC | 0.3851683692773363
osd.2       | node-2                   | firing | 2025-05-02 15:19:38.656874605 +0000 UTC | 0.3851683692773363
osd.0       | node-1                   | firing | 2025-05-09 05:27:08.656874605 +0000 UTC | 0.3189557321225879
osd.13      | node-4                   | firing | 2025-05-09 05:28:08.656874605 +0000 UTC | 0.3053348467650399
osd.4       | node-3                   | firing | 2025-05-09 05:28:08.656874605 +0000 UTC | 0.39462731744230034
osd.5       | node-3                   | firing | 2025-05-09 05:27:38.656874605 +0000 UTC | 0.4419220582671206
osd.6       | node-4                   | firing | 2025-05-02 15:19:38.656874605 +0000 UTC | 0.41354521377222847

Annotations per instance: description "OSD <ceph_daemon> on <hostname> deviates by more than 30% from average PG count."; summary "PG allocations are not balanced across devices".
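The PG counts behind these alerts can be inspected per OSD, and the built-in balancer will usually even them out - a sketch, not specific to this cluster:
    ceph osd df tree          # the PGS column shows placement-group counts per OSD
    ceph balancer status      # is the balancer enabled, and in which mode
    ceph balancer mode upmap  # upmap mode generally gives the most even distribution
    ceph balancer on          # enable automatic balancing if it is off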
|
|
alert: CephDeviceFailurePredicted
expr: ceph_health_detail{name="DEVICE_HEALTH"} == 1
for: 1m
labels:
severity: warning
type: ceph_default
annotations:
description: |
The device health module has determined that one or more devices will fail
soon. To review the device states use 'ceph device ls'. To show a specific
device use 'ceph device info <dev id>'.
Mark the OSD as out (so data may migrate to other OSDs in the cluster). Once
the osd is empty remove and replace the OSD.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#id2
summary: Device(s) have been predicted to fail soon
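A possible sequence, with osd.12 and the device ID as placeholders:
    ceph device ls                    # devices, their daemons and life expectancy
    ceph device info <dev-id>         # details for one device (placeholder ID)
    ceph osd out osd.12               # let data migrate off the OSD backed by the failing device
    ceph osd safe-to-destroy osd.12   # confirm the OSD can be removed once it has drained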
|
alert: CephDeviceFailurePredictionTooHigh
expr: ceph_health_detail{name="DEVICE_HEALTH_TOOMANY"} == 1
for: 1m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.4.7
severity: critical
type: ceph_default
annotations:
description: |
The device health module has determined that the number of devices predicted to
fail can not be remediated automatically, since it would take too many osd's out of
the cluster, impacting performance and potentially availabililty. You should add new
OSDs to the cluster to allow data to be relocated to avoid the data integrity issues.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#device-health-toomany
summary: Too many devices have been predicted to fail, unable to resolve
|
alert: CephDeviceFailureRelocationIncomplete
expr: ceph_health_detail{name="DEVICE_HEALTH_IN_USE"} == 1
for: 1m
labels:
severity: warning
type: ceph_default
annotations:
description: |
The device health module has determined that one or more devices will fail
soon, but the normal process of relocating the data on the device to other
OSDs in the cluster is blocked.
Check that the cluster has available free space. It may be necessary to add
more disks to the cluster to allow the data from the failing device to
successfully migrate.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#device-health-in-use
summary: A device failure is predicted, but unable to relocate data
|
alert: CephOSDBackfillFull
expr: ceph_health_detail{name="OSD_BACKFILLFULL"} > 0
for: 1m
labels:
severity: warning
type: ceph_default
annotations:
description: |
An OSD has reached its BACKFILLFULL threshold. This will prevent rebalance operations
from completing for some pools. Check the current capacity utilisation with 'ceph df'.
To resolve, either add capacity to the cluster or delete unwanted data.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#osd-backfillfull
summary: OSD(s) too full for backfill operations
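To check utilisation and the configured thresholds (a sketch using standard commands):
    ceph df        # cluster-wide and per-pool utilisation
    ceph osd df    # per-OSD utilisation; look for OSDs above the backfillfull ratio
    ceph osd dump | grep -E 'full_ratio|backfillfull_ratio|nearfull_ratio'   # configured thresholds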
|
alert: CephOSDDown
expr: ceph_health_detail{name="OSD_DOWN"} == 1
for: 5m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.4.2
severity: warning
type: ceph_default
annotations:
description: |
{{ $num := query "count(ceph_osd_up == 0)" | first | value }}{{ $s := "" }}{{ if gt $num 1.0 }}{{ $s = "s" }}{{ end }}{{ $num }} OSD{{ $s }} down for over 5 minutes.
The following OSD{{ $s }} {{ if eq $s "" }}is{{ else }}are{{ end }} down:
{{- range query "(ceph_osd_up * on(ceph_daemon) group_left(hostname) ceph_osd_metadata) == 0"}}
- {{ .Labels.ceph_daemon }} on {{ .Labels.hostname }}
{{- end }}
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#osd-down
summary: An OSD has been marked down/unavailable
|
alert: CephOSDDownHigh
expr: count(ceph_osd_up == 0) / count(ceph_osd_up) * 100 >= 10
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.4.1
severity: critical
type: ceph_default
annotations:
description: |
{{ $value | humanize }}% or {{ with query "count(ceph_osd_up == 0)" }}{{ . | first | value }}{{ end }} of {{ with query "count(ceph_osd_up)" }}{{ . | first | value }}{{ end }} OSDs are down (>= 10%).
The following OSDs are down:
{{- range query "(ceph_osd_up * on(ceph_daemon) group_left(hostname) ceph_osd_metadata) == 0" }}
- {{ .Labels.ceph_daemon }} on {{ .Labels.hostname }}
{{- end }}
summary: More than 10% of OSDs are down
|
alert: CephOSDFlapping
expr: (rate(ceph_osd_up[5m]) * on(ceph_daemon) group_left(hostname) ceph_osd_metadata) * 60 > 1
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.4.4
severity: warning
type: ceph_default
annotations:
description: |
OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked down and back up {{ $value | humanize }} times a minute for 5 minutes. This could indicate a network issue (latency, packet drop, disruption) on the cluster's "cluster network". Check the network environment on the listed host(s).
documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds
summary: Network issues are causing OSDs to flap (mark each other out)
|
alert: CephOSDFull
expr: ceph_health_detail{name="OSD_FULL"} > 0
for: 1m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.4.6
severity: critical
type: ceph_default
annotations:
description: |
An OSD has reached its full threshold. Writes from all pools that share the
affected OSD will be blocked.
To resolve, either add capacity to the cluster or delete unwanted data.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#osd-full
summary: OSD(s) is full, writes blocked
|
alert: CephOSDHostDown
expr: ceph_health_detail{name="OSD_HOST_DOWN"} == 1
for: 5m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.4.8
severity: warning
type: ceph_default
annotations:
description: |
The following OSDs are down:
{{- range query "(ceph_osd_up * on(ceph_daemon) group_left(hostname) ceph_osd_metadata) == 0" }}
- {{ .Labels.hostname }} : {{ .Labels.ceph_daemon }}
{{- end }}
summary: An OSD host is offline
|
alert: CephOSDInternalDiskSizeMismatch
expr: ceph_health_detail{name="BLUESTORE_DISK_SIZE_MISMATCH"} == 1
for: 1m
labels:
severity: warning
type: ceph_default
annotations:
description: |
One or more OSDs have an internal inconsistency between the size of the physical device and its metadata.
This could lead to the OSD(s) crashing in the future. You should redeploy the affected OSDs.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#bluestore-disk-size-mismatch
summary: OSD size inconsistency error
|
alert: CephOSDNearFull
expr: ceph_health_detail{name="OSD_NEARFULL"} == 1
for: 5m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.4.3
severity: warning
type: ceph_default
annotations:
description: |
One or more OSDs have reached their NEARFULL threshold.
Use 'ceph health detail' to identify which OSDs have reached this threshold.
To resolve, either add capacity to the cluster or delete unwanted data.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#osd-nearfull
summary: OSD(s) running low on free space (NEARFULL)
|
alert: CephOSDReadErrors
expr: ceph_health_detail{name="BLUESTORE_SPURIOUS_READ_ERRORS"} == 1
for: 30s
labels:
severity: warning
type: ceph_default
annotations:
description: |
An OSD has encountered read errors, but the OSD has recovered by retrying the reads. This may indicate an issue with the hardware or kernel.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#bluestore-spurious-read-errors
summary: Device read errors detected
|
alert: CephOSDTimeoutsClusterNetwork
expr: ceph_health_detail{name="OSD_SLOW_PING_TIME_BACK"} == 1
for: 1m
labels:
severity: warning
type: ceph_default
annotations:
description: |
OSD heartbeats on the cluster's 'cluster' network (backend) are running slow. Investigate the network
for any latency issues on this subnet. Use 'ceph health detail' to show the affected OSDs.
summary: Network issues delaying OSD heartbeats (cluster network)
|
alert: CephOSDTimeoutsPublicNetwork
expr: ceph_health_detail{name="OSD_SLOW_PING_TIME_FRONT"} == 1
for: 1m
labels:
severity: warning
type: ceph_default
annotations:
description: |
OSD heartbeats on the cluster's 'public' network (frontend) are running slow. Investigate the network
for any latency issues on this subnet. Use 'ceph health detail' to show the affected OSDs.
summary: Network issues delaying OSD heartbeats (public network)
|
alert: CephOSDTooManyRepairs
expr: ceph_health_detail{name="OSD_TOO_MANY_REPAIRS"} == 1
for: 30s
labels:
severity: warning
type: ceph_default
annotations:
description: |
Reads from an OSD have used a secondary PG to return data to the client, indicating
a potential failing disk.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#osd-too-many-repairs
summary: OSD has hit a high number of read errors
|
/etc/prometheus/alerting/ceph_alerts.yml > pgs
|
alert: CephPGBackfillAtRisk
expr: ceph_health_detail{name="PG_BACKFILL_FULL"} == 1
for: 1m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.7.6
severity: critical
type: ceph_default
annotations:
description: |
Data redundancy may be at risk due to lack of free space within the cluster. One or more OSDs have breached their 'backfillfull' threshold. Add more capacity, or delete unwanted data.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#pg-backfill-full
summary: Backfill operations are blocked due to lack of free space
|
alert: CephPGNotDeepScrubbed
expr: ceph_health_detail{name="PG_NOT_DEEP_SCRUBBED"} == 1
for: 5m
labels:
severity: warning
type: ceph_default
annotations:
description: |
One or more PGs have not been deep scrubbed recently. Deep scrub is a data integrity
feature, protecting against bit-rot. It compares the contents of objects and their
replicas for inconsistency. When PGs miss their deep scrub window, it may indicate
that the window is too small or PGs were not in a 'clean' state during the deep-scrub
window.
You can manually initiate a deep scrub with: ceph pg deep-scrub <pgid>
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#pg-not-deep-scrubbed
summary: Placement group(s) have not been deep scrubbed
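For example (the PG ID 2.a is a placeholder):
    ceph health detail | grep 'not deep-scrubbed'   # PGs that missed their deep-scrub window
    ceph pg deep-scrub 2.a                          # manually deep-scrub one PG (placeholder ID)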
|
alert: CephPGNotScrubbed
expr: ceph_health_detail{name="PG_NOT_SCRUBBED"} == 1
for: 5m
labels:
severity: warning
type: ceph_default
annotations:
description: |
One or more PGs have not been scrubbed recently. The scrub process is a data integrity
feature, protecting against bit-rot. It checks that objects and their metadata (size and
attributes) match across object replicas. When PGs miss their scrub window, it may
indicate the scrub window is too small, or PGs were not in a 'clean' state during the
scrub window.
You can manually initiate a scrub with: ceph pg scrub <pgid>
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#pg-not-scrubbed
summary: Placement group(s) have not been scrubbed
|
alert: CephPGRecoveryAtRisk
expr: ceph_health_detail{name="PG_RECOVERY_FULL"} == 1
for: 1m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.7.5
severity: critical
type: ceph_default
annotations:
description: |
Data redundancy may be reduced, or is at risk, since one or more OSDs are at or above their 'full' threshold. Add more capacity to the cluster, or delete unwanted data.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#pg-recovery-full
summary: OSDs are too full for automatic recovery
|
|
alert: CephPGsDamaged
expr: ceph_health_detail{name=~"PG_DAMAGED|OSD_SCRUB_ERRORS"} == 1
for: 5m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.7.4
severity: critical
type: ceph_default
annotations:
description: |
During data consistency checks (scrub), at least one PG has been flagged as being damaged or inconsistent.
Check to see which PG is affected, and attempt a manual repair if necessary. To list problematic placement groups, use 'rados list-inconsistent-pg <pool>'. To repair PGs use the 'ceph pg repair <pg_num>' command.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#pg-damaged
summary: Placement group damaged, manual intervention needed
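A possible repair sequence, where the pool name and PG ID are placeholders taken from 'ceph health detail':
    ceph health detail                  # shows the inconsistent PG IDs
    rados list-inconsistent-pg mypool   # inconsistent PGs in one pool (placeholder pool name)
    rados list-inconsistent-obj 2.1f    # which objects differ within a PG (placeholder PG ID)
    ceph pg repair 2.1f                 # attempt the repair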
|
alert: CephPGsHighPerOSD
expr: ceph_health_detail{name="TOO_MANY_PGS"} == 1
for: 1m
labels:
severity: warning
type: ceph_default
annotations:
description: |
The number of placement groups per OSD is too high (exceeds the mon_max_pg_per_osd setting).
Check that the pg_autoscaler hasn't been disabled for any of the pools, with 'ceph osd pool autoscale-status'
and that the profile selected is appropriate. You may also adjust the target_size_ratio of a pool to guide
the autoscaler based on the expected relative size of the pool
(i.e. 'ceph osd pool set cephfs.cephfs.meta target_size_ratio .1')
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks/#too-many-pgs
summary: Placement groups per OSD is too high
|
/etc/prometheus/alerting/ceph_alerts.yml > pools
|
alert: CephPoolBackfillFull
expr: ceph_health_detail{name="POOL_BACKFILLFULL"} > 0
labels:
severity: warning
type: ceph_default
annotations:
description: |
A pool is approaching its near-full threshold, which will prevent rebalance operations from completing. You should consider adding more capacity to the pool.
summary: Free space in a pool is too low for recovery/rebalance
|
alert: CephPoolFull
expr: ceph_health_detail{name="POOL_FULL"} > 0
for: 1m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.9.1
severity: critical
type: ceph_default
annotations:
description: |
A pool has reached its MAX quota, or the OSDs supporting the pool
have reached their FULL threshold. Until this is resolved, writes to
the pool will be blocked.
Pool Breakdown (top 5)
{{- range query "topk(5, sort_desc(ceph_pool_percent_used * on(pool_id) group_right ceph_pool_metadata))" }}
- {{ .Labels.name }} at {{ .Value }}%
{{- end }}
Either increase the pool's quota, or add capacity to the cluster first
and then increase its quota (e.g. ceph osd pool set-quota <pool_name> max_bytes <bytes>)
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#pool-full
summary: Pool is full - writes are blocked
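To inspect and raise a pool quota - a sketch; note that current Ceph releases spell the command 'ceph osd pool set-quota', and the pool name and size below are placeholders:
    ceph df detail                                          # QUOTA BYTES vs STORED per pool
    ceph osd pool get-quota mypool                          # current quota for one pool
    ceph osd pool set-quota mypool max_bytes 214748364800   # raise the quota (200 GiB) once capacity allows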
|
|
alert: CephPoolNearFull
expr: ceph_health_detail{name="POOL_NEAR_FULL"} > 0
for: 5m
labels:
severity: warning
type: ceph_default
annotations:
description: |
A pool has exceeded its warning (percent full) threshold, or the OSDs
supporting the pool have reached their NEARFULL thresholds. Writes may
continue, but you are at risk of the pool going read-only if more capacity
isn't made available.
Determine the affected pool with 'ceph df detail', for example by looking
at QUOTA BYTES and STORED. Either increase the pool's quota, or add
capacity to the cluster first and then increase its quota
(e.g. ceph osd pool set-quota <pool_name> max_bytes <bytes>)
summary: One or more Ceph pools are getting full
|
/etc/prometheus/alerting/ceph_alerts.yml > rados
|