Kubernetes CrashLoopBackOff
Diagnose pods that restart in a loop.
CrashLoopBackOff means Kubernetes keeps restarting a container that exits shortly after starting. Each restart attempt is delayed with exponential backoff (10s, 20s, 40s, capped at 5 minutes). Here's how to diagnose and fix it.
Symptoms
- Pod in CrashLoopBackOff status
- Restart count that keeps climbing
- The service backed by the pod is unreachable
- Events showing repeated restarts
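These symptoms show up directly in kubectl get pods; the output below is illustrative (pod and namespace names are made up):

```shell
# List pods and look for the CrashLoopBackOff status and a climbing RESTARTS count.
kubectl get pods -n my-namespace
# NAME                      READY   STATUS             RESTARTS       AGE
# payment-api-7d4b9-x2kqp   0/1     CrashLoopBackOff   12 (45s ago)   34m
```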
Common Causes
- Application crash: The main process returns an error or exception.
- Liveness probe failing: K8s kills the container because the probe doesn't respond.
- Insufficient resources: the container exceeds a memory limit that is set too low and gets OOMKilled.
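For the OOMKilled case specifically, the reason is recorded in the container's last terminated state. A quick check (the pod name is a placeholder):

```shell
# Print why the container last terminated; "OOMKilled" confirms a memory-limit kill.
kubectl get pod payment-api-7d4b9-x2kqp \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
# An exit code of 137 (128 + SIGKILL) in kubectl describe points to the same cause.
```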
Diagnostic Steps
- Describe the pod: kubectl describe pod [pod] and read the Events section and the Last State of the container (exit code, reason)
- Examine the crashed container's logs: kubectl logs [pod] --previous
- Check the events in the namespace: kubectl get events
- Check resource usage (CPU, memory) against the pod's requests and limits
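The steps above as commands (pod and namespace names are placeholders to adapt):

```shell
# 1. Describe the pod: the Events section and Last State explain why it restarts.
kubectl describe pod payment-api-7d4b9-x2kqp

# 2. Logs of the crashed (previous) container instance, not the current one.
kubectl logs payment-api-7d4b9-x2kqp --previous

# 3. Recent events in the namespace, oldest first.
kubectl get events -n my-namespace --sort-by=.metadata.creationTimestamp

# 4. Current CPU/memory usage (requires metrics-server to be installed).
kubectl top pod payment-api-7d4b9-x2kqp
```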
Automate with MoniTao
MoniTao monitors your Kubernetes services:
- HTTP monitoring of your Ingress/Services
- Alerts when a service stops responding
- Heartbeat for K8s CronJobs
Best Practices
- Configure probes with realistic timeouts
- Use resource requests and limits
- Implement graceful shutdown
- Centralize logs for debugging
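These practices can be combined in a single container spec. A minimal sketch (names, ports, paths, and values are examples to adapt):

```yaml
# Hypothetical fragment of a Deployment's pod template illustrating probes,
# resource requests/limits, and graceful shutdown.
spec:
  terminationGracePeriodSeconds: 30   # time allowed to shut down after SIGTERM
  containers:
    - name: api
      image: registry.example.com/api:1.0.0
      ports:
        - containerPort: 8080
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
        limits:
          memory: 256Mi        # a limit set too low is a classic OOMKilled cause
      readinessProbe:          # gates traffic; a failure does NOT restart the pod
        httpGet:
          path: /ready
          port: 8080
        periodSeconds: 5
      livenessProbe:           # a failure restarts the container; keep timeouts realistic
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 15
        periodSeconds: 10
        timeoutSeconds: 2
        failureThreshold: 3
```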
FAQ
Difference between liveness and readiness probe?
Liveness checks whether the container is still working; a failure restarts it. Readiness checks whether it is ready to receive traffic; a failure only removes the pod from the Service endpoints.
What does OOMKilled mean?
The container exceeded its memory limit and was killed.
How do I see the logs of a crashed container?
kubectl logs [pod] --previous shows the logs of the previous container instance.
Can MoniTao monitor K8s directly?
It monitors your services via HTTP. For K8s metrics, use Prometheus.
Ready to Sleep Soundly?
Start free, no credit card required.