How to Fix Docker Container That Won't Start or Keeps Exiting — Read the Exit Code First

By Adhen Prasetiyo

Thursday, March 12, 2026 • 8 min read


You deploy a Docker container. It starts. Then it immediately stops. Or maybe it never starts at all. You run docker ps and see nothing. You run docker ps -a and there it is — status: Exited.

The natural instinct is to start searching for the error. But Docker already told you what went wrong. It’s in the exit code — that number in parentheses next to “Exited.” Most people ignore it and start trying random solutions. Don’t be most people.

The exit code is the fastest diagnostic tool you have. Learn to read it, and you can diagnose most container failures in under a minute.

Step 1: Find the Exit Code

docker ps -a

Look at the STATUS column:

CONTAINER ID   IMAGE      STATUS                     NAMES
abc123         myapp      Exited (1) 2 minutes ago   web
def456         redis      Exited (137) 5 hours ago   cache
ghi789         nginx      Exited (0) 1 minute ago    proxy

Each number means something specific. Here’s your cheat sheet:

Exit 0 — The container finished its job and stopped normally. Nothing crashed. If you expected it to keep running, the problem is your CMD or ENTRYPOINT — the main process completed and had nothing else to do.

Exit 1 — The application inside the container crashed. This is the most common exit code. Check the logs for the specific error.

Exit 2 — Misuse of a shell builtin. Often means the command in your CMD or ENTRYPOINT has a syntax error.

Exit 126 — The command exists but is not executable. Usually a permission problem — the script doesn’t have execute permissions.

Exit 127 — Command not found. The binary or script specified in CMD or ENTRYPOINT doesn’t exist in the container. Typo in the command, missing installation, or wrong path.

Exit 137 — The container was killed externally, usually by the Linux OOM (Out of Memory) killer or by docker kill. The container used too much memory and the system terminated it.

Exit 139 — Segmentation fault. The application tried to access memory it shouldn’t. Usually a bug in the code or an incompatible binary/library.

Exit 143 — The container received SIGTERM — a graceful shutdown signal. This is normal when you run docker stop.
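There's a pattern behind the high numbers: any exit code above 128 means the process was killed by a signal, and the code is 128 plus the signal number. A quick shell check confirms the two you'll see most often:

```shell
# Exit codes above 128 encode a fatal signal: code = 128 + signal number.
sigkill=9; sigterm=15
echo "SIGKILL -> $((128 + sigkill))"   # 137
echo "SIGTERM -> $((128 + sigterm))"   # 143
```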

For more detail:

docker inspect container_name --format='{{.State.ExitCode}}'

docker inspect container_name --format='{{.State.OOMKilled}}'

The second command returns true if the container was killed because it ran out of memory.

Exit Code 0: Container Exits Immediately But No Error

This confuses people the most. The container starts, runs for less than a second, and exits with code 0. No error in the logs. What happened?

Nothing went wrong. The container did exactly what you told it to do — it ran a process, the process finished, and the container stopped. Docker containers are not virtual machines. They run a single process, and when that process ends, the container ends.

Common scenario: You run a bare Ubuntu or Alpine image:

docker run ubuntu

This starts the Ubuntu container, runs the default command (bash), and since there’s no terminal attached and no input, bash immediately exits. Container done.

Fix: Keep the container running with a foreground process.

If you want the container to stay alive for debugging:

docker run -d ubuntu tail -f /dev/null

If you’re building an app, make sure your Dockerfile’s CMD runs a long-running process — a web server, a database, a service that listens for connections:

CMD ["node", "server.js"]           # Node.js

CMD ["python", "app.py"]            # Python

CMD ["nginx", "-g", "daemon off;"]  # Nginx in foreground

The key phrase is “daemon off” or equivalent. Many services default to running in the background (daemonizing), which means the foreground process exits immediately and Docker thinks the container is done. Force the service to run in the foreground.
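You can see the daemonizing problem without Docker at all. In the sketch below, a subshell stands in for the container's PID 1: it backgrounds the real work, so the main process has nothing left to do and exits at once:

```shell
# A container lives exactly as long as its main (PID 1) process. If the real
# work is backgrounded (daemonized), PID 1 finishes instantly and the
# "container" exits - simulated here with a subshell standing in for PID 1:
sh -c 'sleep 5 >/dev/null 2>&1 & echo "main process exited"'
# returns immediately even though the background sleep is still running
```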

Exit Code 1: Application Crashed

This is the bread and butter of container debugging. Something in your application code or configuration went wrong.

Step 1: Read the logs.

docker logs container_name

Everything the application printed to stdout and stderr before crashing is captured here. The error message tells you exactly what failed.

Common causes and their log messages:

Missing environment variable:

Error: DATABASE_URL is not defined

Fix: Add the missing variable when running the container:

docker run -e DATABASE_URL=postgres://user:pass@host/db myapp

Or use an env file:

docker run --env-file .env myapp

Database connection failed:

Error: connect ECONNREFUSED 127.0.0.1:5432

The app is trying to connect to a database at localhost, but there’s no database inside the container. In Docker, localhost means the container itself. If the database is in another container, use the container name as the hostname, or use Docker networking.
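A minimal docker-compose sketch of that setup, with hypothetical service names app and db — inside the Compose network, the hostname db resolves to the database container, so the connection string must not use localhost:

```yaml
services:
  app:
    image: myapp
    environment:
      DATABASE_URL: postgres://user:pass@db:5432/mydb   # "db", not localhost
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: pass
```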

Missing file or module:

Error: Cannot find module '/app/server.js'

ModuleNotFoundError: No module named 'flask'

Either the file wasn’t copied into the image (check your Dockerfile COPY commands) or dependencies weren’t installed (check that npm install or pip install runs during the build).
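A minimal Dockerfile sketch that avoids both failure modes, assuming a hypothetical Node app with a package.json and a server.js — dependencies are installed during the build, and the source is actually copied into the image:

```dockerfile
FROM node:20-alpine
WORKDIR /app
# Install dependencies during the build, not at runtime
COPY package*.json ./
RUN npm ci --omit=dev
# Copy the application source into the image
COPY . .
CMD ["node", "server.js"]
```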

Step 2: Debug interactively.

If the logs aren’t enough, get a shell inside the container:

docker run -it --entrypoint /bin/sh myapp

This overrides the normal startup command and drops you into a terminal. From here you can:

  • Check if files exist: ls -la /app/
  • Test commands manually: node server.js
  • Check environment variables: env
  • Test database connectivity: ping db-host

Exit Code 137: Killed by the OOM Killer

Exit 137 means the container was forcefully terminated — usually because it consumed more memory than allowed.

Verify it was an OOM kill:

docker inspect container_name --format='{{.State.OOMKilled}}'

If this returns true, the container exceeded its memory limit.

Fix option 1: Increase the memory limit.

docker run -m 1g myapp          # 1 gigabyte limit

docker run -m 2g myapp          # 2 gigabyte limit

In docker-compose:

services:
  app:
    image: myapp
    deploy:
      resources:
        limits:
          memory: 1G

Fix option 2: Allow unlimited swap (not recommended for production).

Note that containers have no memory limit by default, so there is nothing to "remove" unless you set one. What --memory-swap -1 actually does is let a container that has a -m limit use unlimited swap once it fills its RAM allowance:

docker run -m 1g --memory-swap -1 myapp

Fix option 3: Fix the actual memory leak.

If your app genuinely needs more memory over time, it probably has a memory leak. A Node.js app that starts at 100MB and grows to 2GB over a few hours is leaking memory. The container restart is just masking the problem. Profile your application’s memory usage and fix the leak.

Fix option 4: Check if the host itself is out of memory.

free -h

If the host machine has very little available memory, even a container with no explicit memory limit can get OOM-killed. The kernel kills the most memory-hungry process to free resources, and that’s often a Docker container.
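The kernel logs every OOM kill, so you can confirm a host-level kill by grepping the kernel log (journalctl -k or dmesg). The filter below is a small sketch you can pipe either one into:

```shell
# Print only kernel log lines that look like OOM-killer activity.
filter_oom() {
  grep -iE 'out of memory|oom-killer|killed process'
}

# Usage on a live host: journalctl -k | filter_oom
```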

Exit Code 127: Command Not Found

The command in your Dockerfile’s CMD or ENTRYPOINT doesn’t exist in the container.

Common causes:

The binary isn’t installed. You wrote CMD ["python", "app.py"] but the image doesn’t have Python installed:

FROM alpine

CMD ["python", "app.py"]  # python doesn't exist in alpine

Fix:

FROM python:3.11-alpine

CMD ["python", "app.py"]

Wrong path. The command exists but Docker can’t find it because the PATH is different:

CMD ["/usr/local/bin/myapp"]  # check that this path is correct

Typo. You wrote pyhton instead of python. Check your Dockerfile for spelling errors.

Windows line endings. If you wrote a shell script on Windows and copied it to a Linux container, it might have \r\n line endings that Linux can’t interpret. The script exists, but the shell can’t parse the first line because of the hidden \r character.

Fix in Dockerfile:

RUN sed -i 's/\r$//' /app/start.sh
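You can also catch the problem before the image is ever built. This sketch flags any file containing a carriage return:

```shell
# Return success (exit 0) if a file contains Windows-style carriage returns.
has_crlf() {
  grep -q "$(printf '\r')" "$1"
}

# Usage: has_crlf start.sh && echo "start.sh has CRLF - fix with sed or dos2unix"
```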

Exit Code 126: Permission Denied

The command exists but can’t be executed. The file doesn’t have execute permissions.

docker logs container_name

# exec /app/start.sh: permission denied

Fix in Dockerfile:

COPY start.sh /app/start.sh

RUN chmod +x /app/start.sh

Or fix it before building by running chmod +x start.sh on your local machine.

The “Container Keeps Restarting” Loop

If you set a restart policy (--restart=always or restart: always in docker-compose), and the container keeps crashing, Docker keeps restarting it. You end up with a container that starts, crashes, restarts, crashes, restarts — an infinite crash loop.

Break the loop:

docker update --restart=no container_name

docker stop container_name

Now investigate with docker logs and fix the actual problem before re-enabling the restart policy. When you do re-enable it, consider --restart=on-failure:5 instead of always — it retries a crashing container at most five times rather than forever.

Docker Daemon Won’t Start

If Docker itself won’t start (not a container, but the Docker service):

sudo systemctl status docker

If it shows failed:

sudo journalctl -u docker -e

Common causes:

Disk full. Docker stores images and containers in /var/lib/docker/. If that partition is full, Docker can’t start. Check with df -h /var/lib/docker/. Free space with docker system prune -a (warning: this removes all stopped containers and unused images).

Socket conflict. Another Docker instance or process is holding the socket file. Delete the leftover socket and restart:

sudo rm /var/run/docker.sock

sudo systemctl restart docker

Corrupted storage driver. If Docker was killed during a write operation:

sudo dockerd --storage-driver overlay2

If it starts with a different storage driver, you may need to reset Docker’s data directory.
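To make the storage driver choice permanent rather than a one-off dockerd flag, set it in the daemon config file (on most Linux installs, /etc/docker/daemon.json — create it if it doesn't exist) and restart the service:

```json
{
  "storage-driver": "overlay2"
}
```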

The Debug Cheat Sheet

Run these in order when a container won’t start:

# 1. What's the exit code?

docker ps -a

# 2. What did it say before dying?

docker logs container_name --tail 50

# 3. Was it OOM killed?

docker inspect container_name --format='{{.State.OOMKilled}}'

# 4. What's using disk space?

docker system df

# 5. Get a shell inside to investigate

docker run -it --entrypoint /bin/sh image_name

Five commands. That’s all you need to diagnose 90% of container startup failures.
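If you run these often, they fold naturally into one small helper — a sketch that assumes the docker CLI is on your PATH; it is just the checks above in sequence, not an official Docker command:

```shell
# diagnose <container>: the five first-look checks in one shot.
diagnose() {
  [ -n "$1" ] || { echo "usage: diagnose <container>" >&2; return 2; }
  docker ps -a --filter "name=$1"                                   # 1. exit code
  docker logs "$1" --tail 50                                        # 2. last output
  docker inspect "$1" \
    --format='ExitCode={{.State.ExitCode}} OOMKilled={{.State.OOMKilled}}'  # 3. OOM?
  docker system df                                                  # 4. disk usage
  echo "5. shell in: docker run -it --entrypoint /bin/sh <image>"   # 5. reminder
}
```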

Step-by-Step Guide

1

Find the exit code of the stopped container

Run docker ps -a to see all containers, including stopped ones. Look at the STATUS column for the exit code number in parentheses: Exited (1) means exit code 1, and Exited (137) means exit code 137. For more detail, run docker inspect container_name --format='{{.State.ExitCode}}' for the exact exit code, and docker inspect container_name --format='{{.State.OOMKilled}}' to check whether it was killed for using too much memory.

2

Check the container logs for error messages

Run docker logs container_name to see what the application printed before it crashed. Add the --tail 50 flag to see only the last 50 lines. If the container keeps restarting, use docker logs -f container_name to follow the logs in real time. The error message usually tells you exactly what went wrong, such as a missing environment variable, a wrong database connection string, or a missing file.

3

Fix exit code 1, which means an application error

Exit code 1 means the application inside the container crashed. Check the logs for the specific error. Common causes include missing environment variables such as a database URL or API keys; fix by adding the -e flag when running the container, like docker run -e DATABASE_URL=your_url image_name. Other causes include missing dependencies or wrong file paths in the Dockerfile. You can debug interactively by running docker run -it --entrypoint /bin/sh image_name to get a shell inside the container and investigate.

4

Fix exit code 137, which means an out-of-memory (OOM) kill

Exit code 137 means the Linux kernel killed the container because it used more memory than allowed. Check with docker inspect container_name --format='{{.State.OOMKilled}}', which returns true if it was an OOM kill. Fix by increasing the memory limit with docker run -m 512m image_name (or allow unlimited swap alongside a limit with --memory-swap -1). If the container genuinely needs more memory, optimize the application or add more RAM to the host.

5

Fix containers that exit immediately with code 0

Exit code 0 means the container finished successfully, which seems wrong if you wanted it to keep running. This happens when the main process completes and has nothing else to do. For example, a container running a bare Ubuntu image exits immediately because there is no long-running process. Fix by running a process that stays alive, such as docker run -d image_name tail -f /dev/null for debugging, or make sure your Dockerfile CMD or ENTRYPOINT runs a service that listens for connections and does not exit.

Frequently Asked Questions

What does Docker exit code 139 mean?
Exit code 139 means the container process received a SIGSEGV signal which is a segmentation fault. This happens when the application tries to access memory it does not have permission to read or write. It is usually caused by a bug in the application code, a corrupted binary, or incompatible native libraries. Try rebuilding the Docker image from scratch with docker build --no-cache to eliminate corrupted cached layers.
Why does my Docker container keep restarting in a loop?
This happens when you set a restart policy like --restart=always or restart: always in docker-compose and the container keeps crashing. Docker restarts it, it crashes again, Docker restarts it again creating an infinite loop. To break the loop run docker update --restart=no container_name then investigate the crash using docker logs. Fix the underlying issue before re-enabling the restart policy.
How do I keep a Docker container running for debugging?
Override the entrypoint to start a shell instead of the application. Run docker run -it --entrypoint /bin/sh image_name for Alpine-based images or docker run -it --entrypoint /bin/bash image_name for Debian or Ubuntu-based images. This gives you an interactive terminal inside the container where you can check files, test commands, and investigate why the application fails to start.
Can insufficient disk space cause Docker containers to fail?
Yes. If the host machine runs out of disk space Docker cannot create container layers, write logs, or store temporary files. The container may fail to start or crash mid-operation. Check disk space with df -h and Docker disk usage with docker system df. Free space by removing unused images and containers with docker system prune -a which removes all stopped containers, unused networks, dangling images, and build cache.