You deploy a Docker container. It starts. Then it immediately stops. Or maybe it never starts at all. You run docker ps and see nothing. You run docker ps -a and there it is — status: Exited.
The natural instinct is to start searching for the error. But Docker already told you what went wrong. It’s in the exit code — that number in parentheses next to “Exited.” Most people ignore it and start trying random solutions. Don’t be most people.
The exit code is the fastest diagnostic tool you have. Learn to read it, and you can diagnose most container failures in under a minute.
Step 1: Find the Exit Code
docker ps -a
Look at the STATUS column:
CONTAINER ID   IMAGE   STATUS                     NAMES
abc123         myapp   Exited (1) 2 minutes ago   web
def456         redis   Exited (137) 5 hours ago   cache
ghi789         nginx   Exited (0) 1 minute ago    proxy
Each number means something specific. Here’s your cheat sheet:
Exit 0 — The container finished its job and stopped normally. Nothing crashed. If you expected it to keep running, the problem is your CMD or ENTRYPOINT — the main process completed and had nothing else to do.
Exit 1 — The application inside the container crashed. This is the most common exit code. Check the logs for the specific error.
Exit 2 — Misuse of a shell builtin or a shell syntax error. Often means the command in your CMD or ENTRYPOINT is malformed.
Exit 126 — The command exists but is not executable. Usually a permission problem — the script doesn’t have execute permissions.
Exit 127 — Command not found. The binary or script specified in CMD or ENTRYPOINT doesn’t exist in the container. Typo in the command, missing installation, or wrong path.
Exit 137 — The container was killed externally, usually by the Linux OOM (Out of Memory) killer or by docker kill. The container used too much memory and the system terminated it.
Exit 139 — Segmentation fault. The application tried to access memory it shouldn’t. Usually a bug in the code or an incompatible binary/library.
Exit 143 — The container received SIGTERM — a graceful shutdown signal. This is normal when you run docker stop.
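The high numbers are not arbitrary: when a process dies from a signal, the shell reports 128 plus the signal number, so 137 is 128 + 9 (SIGKILL) and 143 is 128 + 15 (SIGTERM). You can reproduce the convention in any POSIX shell, no Docker required:

```shell
# Exit 127: the command doesn't exist
sh -c 'no_such_command_xyz'; echo "exit: $?"   # exit: 127

# Exit 137: killed by SIGKILL (128 + 9), the same status an OOM kill produces
sh -c 'kill -9 $$'; echo "exit: $?"            # exit: 137

# Exit 143: terminated by SIGTERM (128 + 15), what `docker stop` sends
sh -c 'kill -15 $$'; echo "exit: $?"           # exit: 143
```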
For more detail:
docker inspect container_name --format='{{.State.ExitCode}}'
docker inspect container_name --format='{{.State.OOMKilled}}'
The second command returns true if the container was killed because it ran out of memory.
Exit Code 0: Container Exits Immediately But No Error
This confuses people the most. The container starts, runs for less than a second, and exits with code 0. No error in the logs. What happened?
Nothing went wrong. The container did exactly what you told it to do — it ran a process, the process finished, and the container stopped. Docker containers are not virtual machines. They run a single process, and when that process ends, the container ends.
Common scenario: You run a bare Ubuntu or Alpine image:
docker run ubuntu
This starts the Ubuntu container, runs the default command (bash), and since there’s no terminal attached and no input, bash immediately exits. Container done.
Fix: Keep the container running with a foreground process.
If you want the container to stay alive for debugging:
docker run -d ubuntu tail -f /dev/null
If you’re building an app, make sure your Dockerfile’s CMD runs a long-running process — a web server, a database, a service that listens for connections:
CMD ["node", "server.js"] # Node.js
CMD ["python", "app.py"] # Python
CMD ["nginx", "-g", "daemon off;"] # Nginx in foreground
The key phrase is “daemon off” or equivalent. Many services default to running in the background (daemonizing), which means the foreground process exits immediately and Docker thinks the container is done. Force the service to run in the foreground.
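Putting it together, a minimal Dockerfile for a long-running service might look like this (the image, file names, and npm setup are illustrative, not from any particular project):

```dockerfile
# Hypothetical Node.js service image
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY . .
# Foreground process: the container lives exactly as long as `node server.js` runs
CMD ["node", "server.js"]
```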
Exit Code 1: Application Crashed
This is the bread and butter of container debugging. Something in your application code or configuration went wrong.
Step 1: Read the logs.
docker logs container_name
Everything the application printed to stdout and stderr before crashing is captured here. The error message usually points you straight at what failed.
Common causes and their log messages:
Missing environment variable:
Error: DATABASE_URL is not defined
Fix: Add the missing variable when running the container:
docker run -e DATABASE_URL=postgres://user:pass@host/db myapp
Or use an env file:
docker run --env-file .env myapp
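The .env file itself is just KEY=value pairs, one per line, no quotes or export keywords needed (the values below are placeholders):

```shell
# .env -- placeholder values
DATABASE_URL=postgres://user:pass@db:5432/mydb
NODE_ENV=production
```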
Database connection failed:
Error: connect ECONNREFUSED 127.0.0.1:5432
The app is trying to connect to a database at localhost, but there’s no database inside the container. In Docker, localhost means the container itself. If the database is in another container, use the container name as the hostname, or use Docker networking.
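With Docker Compose this is straightforward, because every service name doubles as a hostname on the shared network. A sketch with illustrative service names:

```yaml
# docker-compose.yml sketch -- service and credential names are illustrative
services:
  app:
    image: myapp
    environment:
      # "db" resolves to the database container on the compose network,
      # so the app must NOT connect to localhost
      DATABASE_URL: postgres://user:pass@db:5432/mydb
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: pass
```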
Missing file or module:
Error: Cannot find module '/app/server.js'
ModuleNotFoundError: No module named 'flask'
Either the file wasn’t copied into the image (check your Dockerfile COPY commands) or dependencies weren’t installed (check that npm install or pip install runs during the build).
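Both failure modes come down to the build steps. A Dockerfile that avoids them makes the copy and the install explicit (the Python/Flask setup here is illustrative):

```dockerfile
# Illustrative Python image -- covers both causes above
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt   # installs flask if requirements.txt lists it
COPY app.py ./                        # the file CMD refers to must actually be copied
CMD ["python", "app.py"]
```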
Step 2: Debug interactively.
If the logs aren’t enough, get a shell inside the container:
docker run -it --entrypoint /bin/sh myapp
This overrides the normal startup command and drops you into a terminal. From here you can:
- Check if files exist: ls -la /app/
- Test commands manually: node server.js
- Check environment variables: env
- Test database connectivity: ping db-host
Exit Code 137: Killed by the OOM Killer
Exit 137 means the container was forcefully terminated — usually because it consumed more memory than allowed.
Verify it was an OOM kill:
docker inspect container_name --format='{{.State.OOMKilled}}'
If this returns true, the container exceeded its memory limit.
Fix option 1: Increase the memory limit.
docker run -m 1g myapp # 1 gigabyte limit
docker run -m 2g myapp # 2 gigabyte limit
In docker-compose (Compose v2 honors deploy.resources; the legacy v1 tool applied it only with --compatibility):
services:
  app:
    image: myapp
    deploy:
      resources:
        limits:
          memory: 1G
Fix option 2: Remove the limit (not recommended for production).
The simplest way is to omit -m entirely; with no flag, the container has no memory cap. If you keep a memory limit but want unlimited swap on top of it, set --memory-swap to -1:
docker run -m 1g --memory-swap -1 myapp
Fix option 3: Fix the actual memory leak.
If your app genuinely needs more memory over time, it probably has a memory leak. A Node.js app that starts at 100MB and grows to 2GB over a few hours is leaking memory. The container restart is just masking the problem. Profile your application’s memory usage and fix the leak.
Fix option 4: Check if the host itself is out of memory.
free -h
If the host machine has very little available memory, even a container with no explicit memory limit can get OOM-killed. The kernel kills the most memory-hungry process to free resources, and that’s often a Docker container.
Exit Code 127: Command Not Found
The command in your Dockerfile’s CMD or ENTRYPOINT doesn’t exist in the container.
Common causes:
The binary isn’t installed. You wrote CMD ["python", "app.py"] but the image doesn’t have Python installed:
FROM alpine
CMD ["python", "app.py"] # python doesn't exist in alpine
Fix:
FROM python:3.11-alpine
CMD ["python", "app.py"]
Wrong path. The command exists but Docker can’t find it because the PATH is different:
CMD ["/usr/local/bin/myapp"] # check that this path is correct
Typo. You wrote pyhton instead of python. Check your Dockerfile for spelling errors.
Windows line endings. If you wrote a shell script on Windows and copied it into a Linux container, it may have \r\n line endings. The script exists, but the kernel reads the shebang line as /bin/sh followed by a hidden \r, looks for an interpreter with that exact name, and can't find it.
Fix in Dockerfile:
RUN sed -i 's/\r$//' /app/start.sh
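You can reproduce the symptom and the fix locally (the /tmp path is just for illustration):

```shell
# Create a script with Windows (CRLF) line endings
printf '#!/bin/sh\r\necho ok\r\n' > /tmp/start.sh
chmod +x /tmp/start.sh
/tmp/start.sh   # fails: the kernel looks for an interpreter named '/bin/sh\r'

# Strip the carriage returns and try again
sed -i 's/\r$//' /tmp/start.sh
/tmp/start.sh   # prints: ok
```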
Exit Code 126: Permission Denied
The command exists but can’t be executed. The file doesn’t have execute permissions.
docker logs container_name
# exec /app/start.sh: permission denied
Fix in Dockerfile:
COPY start.sh /app/start.sh
RUN chmod +x /app/start.sh
Or fix it before building by running chmod +x start.sh on your local machine.
The “Container Keeps Restarting” Loop
If you set a restart policy (--restart=always or restart: always in docker-compose), and the container keeps crashing, Docker keeps restarting it. You end up with a container that starts, crashes, restarts, crashes, restarts — an infinite crash loop.
Break the loop:
docker update --restart=no container_name
docker stop container_name
Now investigate with docker logs and fix the actual problem before re-enabling the restart policy.
Docker Daemon Won’t Start
If Docker itself won’t start (not a container, but the Docker service):
sudo systemctl status docker
If it shows failed:
sudo journalctl -u docker -e
Common causes:
Disk full. Docker stores images and containers in /var/lib/docker/. If that partition is full, Docker can’t start. Check with df -h /var/lib/docker/. Free space with docker system prune -a (warning: this removes all stopped containers and unused images).
Socket conflict. Another Docker instance or process is holding the socket file. Delete the leftover socket and restart:
sudo rm /var/run/docker.sock
sudo systemctl restart docker
Corrupted storage driver. If Docker was killed during a write operation:
sudo dockerd --storage-driver overlay2
If the daemon only starts under a different storage driver than it was using before, that driver's data is likely corrupted and you may need to reset Docker's data directory (/var/lib/docker).
The Debug Cheat Sheet
Run these in order when a container won’t start:
# 1. What's the exit code?
docker ps -a
# 2. What did it say before dying?
docker logs container_name --tail 50
# 3. Was it OOM killed?
docker inspect container_name --format='{{.State.OOMKilled}}'
# 4. What's using disk space?
docker system df
# 5. Get a shell inside to investigate
docker run -it --entrypoint /bin/sh image_name
Five commands. That’s all you need to diagnose 90% of container startup failures.