How to Fix Cron Jobs Not Running on Linux — The Environment Variable Trap That Gets Everyone

By Adhen Prasetiyo

Saturday, April 4, 2026 • 10 min read

[Image: Linux terminal showing crontab editor with a failing cron job and system log entries]

You wrote a perfectly good bash script. You tested it in the terminal. It ran flawlessly. You added it to crontab, set it to run every night at 2 AM, and went to bed feeling productive.

The next morning, nothing happened. The backup wasn’t made. The report wasn’t generated. The database wasn’t cleaned up. The script that worked perfectly five minutes ago just… didn’t run.

No error message in your inbox. No log entry. Nothing in the output. Just silence. Like your cron job fell into a black hole.

This is one of the most common and most frustrating problems in Linux system administration. And the cause is almost always the same: cron runs your command in a completely different environment than your terminal, and nobody tells you this until you’ve wasted three hours staring at a script that’s perfectly correct.

Why Cron Is Not Your Terminal

When you open a terminal and type a command, your shell goes through an elaborate setup process. It reads ~/.bashrc, ~/.bash_profile, ~/.profile, and sometimes other files. These files set your PATH variable (which tells the shell where to find programs), define environment variables, load aliases, configure your prompt, and more.

Your terminal knows where python3 is, where node is, where docker is, and where your custom scripts live — because your PATH includes all those directories.

Cron does none of this.

When cron executes your job, it runs with a brutally minimal environment. The PATH is typically just /usr/bin:/bin — that’s it. No /usr/local/bin, no /snap/bin, no /home/you/.local/bin, no nothing. Any command that’s installed outside of /usr/bin or /bin is invisible to cron.

This means your script that calls python3 might fail because the Python you installed is at /usr/local/bin/python3 and cron doesn’t look there. Your script that calls docker fails because Docker is at /usr/bin/docker on some systems and /snap/bin/docker on others. Your script that calls pg_dump fails because PostgreSQL tools are at /usr/lib/postgresql/15/bin/ and cron has no idea that path exists.

The script is fine. The schedule is fine. Cron is running. It’s just running your command in a world where half the tools don’t exist.
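You can see this gap without touching cron at all. The sketch below uses env -i to start a shell with an empty environment and a cron-style PATH (the exact default varies by distribution, so treat /usr/bin:/bin as an illustration):

```shell
#!/bin/sh
# Your interactive shell's PATH, built up by .bashrc / .profile:
echo "Interactive PATH: $PATH"

# What a cron-like environment sees: env -i wipes the environment,
# then we hand the child shell only a minimal PATH.
env -i PATH=/usr/bin:/bin /bin/sh -c 'echo "Cron-like PATH:   $PATH"'
```

A related trick: temporarily add `* * * * * env > /tmp/cron-env.txt` to your crontab to capture the exact environment your cron daemon really provides, then remove the entry once you've inspected the file.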

Step 1: Make Sure Cron Is Actually Running

Before blaming the environment, verify that the cron daemon itself is alive:

systemctl status cron          # Debian, Ubuntu

systemctl status crond         # CentOS, RHEL, Fedora

You should see active (running). If it’s stopped or failed:

sudo systemctl start cron

sudo systemctl enable cron     # Start on boot

Then verify your job is in the crontab:

crontab -l                     # Current user's jobs

sudo crontab -l                # Root's jobs

If your job doesn’t appear in the output, it was never saved. Run crontab -e to add it. Make sure you save and exit properly — if you’re using vi as the editor, press Esc, type :wq, and press Enter. If you prefer a different editor, set it with export VISUAL=nano before running crontab -e.

Step 2: Check the Cron Log

Cron logs every execution attempt to the system log. This is your forensic evidence.

On Debian/Ubuntu:

grep CRON /var/log/syslog | tail -20

On CentOS/RHEL:

tail -20 /var/log/cron

You’re looking for lines that match the time your job should have run. A successful cron trigger looks like:

Apr 4 02:00:01 server CRON[12345]: (username) CMD (/home/user/scripts/backup.sh)

If you see this line, cron did execute your command. The problem is that the command failed silently. The fix is in Steps 3-5.

If you don’t see any entry for your job at the expected time, cron didn’t recognize the schedule. The most common syntax errors:

Wrong number of fields. Crontab needs exactly 5 time fields (minute, hour, day-of-month, month, day-of-week) followed by the command. Missing or extra fields break the entire line silently.

Using ranges incorrectly. 1-5 in the day-of-week field means Monday through Friday, but 0 is Sunday (and so is 7 on some systems). Mixing these up shifts your schedule by a day.

Trailing whitespace. Some older cron implementations choke on trailing spaces after the command. Make sure each line ends cleanly.

Use crontab.guru to verify your expression — paste your 5 time fields and it’ll tell you in plain English when the job will run. It also warns about common mistakes.
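The manual grep above can be wrapped in a small helper. This is a sketch, not a standard tool: check_cron and the default log path are assumptions, so pass /var/log/cron as the second argument on RHEL-family systems (or dump journalctl -u cron output to a file on journald-only machines):

```shell
#!/bin/sh
# check_cron: report whether cron triggered a given script, based on
# the system log. Default log path assumes Debian/Ubuntu.
check_cron() {
    name=$1
    log=${2:-/var/log/syslog}
    if grep CRON "$log" 2>/dev/null | grep -q "$name"; then
        echo "cron triggered $name:"
        grep CRON "$log" | grep "$name" | tail -3
    else
        echo "no CRON entry for $name - check the schedule syntax"
    fi
}

check_cron backup.sh   # prints either the matching log lines or the warning
```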

Step 3: Fix the PATH (This Is the Fix 90% of You Need)

If the cron log shows your job was triggered but nothing happened, the PATH is almost certainly the problem.

Option A: Set PATH at the top of your crontab.

Run crontab -e and add this as the very first line, before any job entries:

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin

This gives cron the same PATH your terminal uses. Now all your commands will be found.

Option B: Use absolute paths in every command.

Instead of:

0 2 * * * python3 /home/user/scripts/backup.py

Use:

0 2 * * * /usr/bin/python3 /home/user/scripts/backup.py

Find the absolute path of any command with which:

which python3    # Shows /usr/bin/python3 or /usr/local/bin/python3

which docker     # Shows /usr/bin/docker or /snap/bin/docker

which pg_dump    # Shows /usr/lib/postgresql/15/bin/pg_dump

Option A is easier to maintain. Option B is more explicit and portable. Both work.

Important: if your script internally calls other programs (e.g., a bash script that runs curl, then jq, then aws), each of those commands also needs to be found via PATH. Setting PATH at the top of the crontab fixes this globally. Or you can add export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin at the top of the script itself.
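As a sketch of that in-script approach (the script contents are illustrative), the export goes before anything that calls external tools:

```shell
#!/bin/bash
# Illustrative skeleton of a cron-run script. The export restores a
# terminal-like PATH so tools outside /usr/bin and /bin can be found.
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

echo "PATH in effect: $PATH"

# From here on, curl, jq, aws, etc. resolve by bare name, e.g.:
# curl -s https://example.com/health | jq .status
```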

Step 4: Add Output Logging (Stop Debugging Blind)

By default, cron tries to email the output of each job to the user who owns the crontab. On most modern servers, there’s no mail transfer agent configured, so this email goes nowhere. Your script’s error messages, stack traces, and diagnostic output vanish into the void.

Fix this immediately by adding output redirection to every cron job:

0 2 * * * /usr/bin/python3 /home/user/scripts/backup.py >> /home/user/logs/backup.log 2>&1

Breaking this down:

  • >> appends standard output to the log file (use > to overwrite each time)
  • 2>&1 redirects standard error to the same file as standard output

Create the log directory first:

mkdir -p /home/user/logs

Now when the job fails, the error message is captured in the log file. Check it after the next scheduled run:

cat /home/user/logs/backup.log

Common errors you’ll discover:

  • “command not found” — PATH issue (see Step 3)
  • “Permission denied” — script doesn’t have execute permission or is trying to write to a directory it can’t access
  • “No such file or directory” — the script uses relative paths that don’t resolve correctly from cron’s working directory
  • Python ImportError or ModuleNotFoundError — cron is using a different Python installation than the one you installed packages into (common with virtual environments)
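Once runs accumulate, entries from different nights blur together. A lightweight refinement (paths are illustrative) is to write a timestamp header before each run:

```shell
#!/bin/sh
# Prefix each cron run with a timestamp so multiple runs sharing one
# log file stay readable. LOG path is illustrative.
LOG=/tmp/backup.log

{
    echo "=== run started $(date '+%Y-%m-%d %H:%M:%S') ==="
    echo "backup would run here"   # stand-in for the real command
} >> "$LOG" 2>&1

tail -2 "$LOG"
```

In a crontab entry the same idea collapses to one line, e.g. 0 2 * * * (date; /home/user/backup.sh) >> /home/user/logs/backup.log 2>&1.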

Step 5: Fix Permissions and File Paths

Execute permission. Your script must be executable:

chmod +x /home/user/scripts/backup.sh

Without this, cron can’t run the script directly. You can work around this by calling the interpreter explicitly (/bin/bash /home/user/scripts/backup.sh), but it’s better practice to make the script executable and include a proper shebang.

Shebang line. The first line of your script must tell the system which interpreter to use:

#!/bin/bash          # For bash scripts

#!/usr/bin/env python3   # For Python scripts

#!/usr/bin/env node      # For Node.js scripts

Without this, cron may try to interpret a Python script as shell commands, producing bizarre errors.

Working directory. When you run a script from your terminal, the working directory is wherever you’re currently standing (cd’d into). When cron runs a script, the working directory is typically / or the home directory of the crontab owner.

If your script uses relative paths like ./data/input.csv or output/report.pdf, those will resolve relative to cron’s working directory — not your script’s directory. They’ll point to the wrong location or a nonexistent path.

Fix by using absolute paths everywhere inside your scripts, or add a cd at the beginning:

#!/bin/bash

cd /home/user/projects/myapp || exit 1

./process_data.sh

The || exit 1 ensures the script stops if the cd fails (e.g., if the directory doesn’t exist) instead of running commands in the wrong location.

Virtual environments. If your Python script runs inside a virtual environment, cron doesn’t activate it. You need to call the Python binary from inside the venv directly:

0 2 * * * /home/user/myproject/venv/bin/python /home/user/myproject/script.py

This uses the venv’s Python (with all its installed packages) without needing to activate the environment.

Step 6: The crontab vs cron.d vs cron.daily Confusion

There are multiple places cron jobs can live, and mixing them up causes subtle failures:

crontab -e (per-user crontab) — the most common. Each user has their own crontab. Format: 5 time fields + command.

0 2 * * * /home/user/backup.sh

/etc/crontab (system-wide crontab) — has an extra field for the username. If you put a user crontab entry here without the username field, or put a system crontab entry in your personal crontab with the extra field, the job will fail.

0 2 * * * root /opt/scripts/system-backup.sh

/etc/cron.d/ (system cron fragments) — same format as /etc/crontab (includes username field). Files here are read by cron automatically. File names must not contain dots or other special characters, or cron will ignore them silently.

/etc/cron.daily/, /etc/cron.hourly/, etc. — scripts placed here run at the specified interval via anacron. No crontab syntax needed — just the script. But scripts must be executable and must not have file extensions. A script named backup.sh in /etc/cron.daily/ might be ignored because of the .sh extension depending on the run-parts configuration.

If your job works in crontab -e but not in /etc/cron.d/, you probably forgot the username field. If it works nowhere, the PATH and permission issues from Steps 3-5 are the culprit.
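Putting the rules together, a complete /etc/cron.d fragment might look like this (file and script names are illustrative; note the dot-free file name, the PATH line, and the sixth field naming the user):

```
# /etc/cron.d/nightly-backup   <- no dots in the file name
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

# m  h  dom mon dow user  command
0    2  *   *   *   root  /opt/scripts/system-backup.sh >> /var/log/nightly-backup.log 2>&1
```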

The Cron Job Testing Template

Don’t wait 24 hours to find out your nightly backup didn’t work. Use this template to test any cron job quickly:

# Run every minute (for testing — change back after confirming it works)

* * * * * /usr/bin/python3 /home/user/scripts/backup.py >> /home/user/logs/backup.log 2>&1

Set this, wait 1-2 minutes, then check the log file:

tail -f /home/user/logs/backup.log

If the log file has output (success or error), the job is running. Fix any errors you see. Once everything works, change the schedule to what you actually want (e.g., 0 2 * * * for 2 AM daily).
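You can also shortcut the wait entirely by running the script under a cron-like environment right now. The sketch below creates a stand-in demo script; substitute the path of your real script:

```shell
#!/bin/sh
# Stand-in for your real cron job:
cat > /tmp/demo-job.sh <<'EOF'
#!/bin/sh
echo "running with PATH=$PATH"
EOF
chmod +x /tmp/demo-job.sh

# env -i wipes the environment; we pass only what cron typically sets.
env -i PATH=/usr/bin:/bin HOME="$HOME" SHELL=/bin/sh /tmp/demo-job.sh
```

If the script fails here with "command not found", it will fail the same way under cron, and you can iterate in seconds instead of waiting for the next scheduled run.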

Then add a lock file to prevent overlapping runs if the job could potentially take longer than its interval:

0 2 * * * /usr/bin/flock -n /tmp/backup.lock /usr/bin/python3 /home/user/scripts/backup.py >> /home/user/logs/backup.log 2>&1

flock is a simple but powerful tool that prevents multiple instances of the same job from running simultaneously. The -n flag makes it exit immediately if the lock is already held, instead of waiting.
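To see flock's behavior without waiting for cron, this sketch holds the lock in a background process and shows the second attempt bailing out (the lock path is illustrative):

```shell
#!/bin/sh
LOCK=/tmp/flock-demo.lock

# First "job": hold the lock for 2 seconds in the background.
flock -n "$LOCK" sleep 2 &
sleep 0.5   # give it time to acquire the lock

# Second "job": -n means fail immediately instead of queueing up.
if flock -n "$LOCK" true; then
    echo "lock was free - job would run"
else
    echo "lock busy - skipping this run"
fi
wait
```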

Your cron job isn’t broken. It’s working exactly as designed — in an environment that’s radically different from your terminal. Once you understand that difference and account for it with explicit PATHs, absolute file paths, proper permissions, and output logging, cron becomes the most reliable automation tool on any Linux server.

Step-by-Step Guide

1. Verify the cron service is running

Before debugging your job, make sure the cron daemon itself is running. Run systemctl status cron on Debian or Ubuntu, or systemctl status crond on CentOS or RHEL. If the service shows inactive or failed, start it with sudo systemctl start cron and enable it to start on boot with sudo systemctl enable cron. If cron is not running, none of your scheduled jobs will execute, regardless of how they are configured. Also check that your job is actually in the crontab by running crontab -l for the current user or sudo crontab -l for root's jobs. If the job does not appear in the list, it was never saved properly.

2. Check the cron log for execution attempts

Cron logs every job it tries to execute. On Ubuntu and Debian, check /var/log/syslog by running grep CRON /var/log/syslog. On CentOS and RHEL, check /var/log/cron. Look for entries matching the time your job should have run. If you see an entry with CMD followed by your command, cron did try to execute it. If there is no entry at all, cron did not recognize the schedule. Common causes of missing entries include incorrect crontab syntax (the wrong number of fields), named days or months without proper formatting, or the job sitting in the wrong user's crontab. If cron ran the command but you see no output and no result, the job executed but its error output was lost because no mail agent or output redirection was configured.

3. Fix the PATH and environment variable problem

This is the number one cause of cron job failures. When you run a command in your terminal, your shell loads your profile, which sets the PATH variable to include directories like /usr/local/bin, /home/user/.local/bin, and program-specific paths. Cron does NOT load your shell profile. It runs with a minimal PATH that typically includes only /usr/bin and /bin. If your script calls programs installed in /usr/local/bin, /snap/bin, or any custom location, cron cannot find them and the command fails silently. Fix this by adding the full PATH at the top of your crontab: run crontab -e and add this line before your jobs: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin. Alternatively, use absolute paths for every command in your cron jobs: instead of python3 script.py, use /usr/bin/python3 /home/user/scripts/script.py.

4. Add output logging to catch silent errors

By default, cron sends job output as email to the crontab owner. If your system does not have a mail transfer agent configured, this output is lost and you never see errors. To capture output, add redirection to every cron job. Change your crontab entry from 0 2 * * * /home/user/backup.sh to 0 2 * * * /home/user/backup.sh >> /home/user/logs/backup.log 2>&1. The >> appends stdout to the log file and 2>&1 redirects stderr to the same file. Now when the job fails, the error message lands in the log file instead of disappearing into the void. Create the log directory first with mkdir -p /home/user/logs, then check the log file after the next scheduled run to see exactly what error occurred.

5. Fix script permissions and the shebang line

Cron requires scripts to be executable. If your script does not have execute permission, cron will fail to run it; add it with chmod +x /home/user/scripts/backup.sh. Also make sure your script has a proper shebang as its first line: #!/bin/bash for bash scripts, #!/usr/bin/env python3 for Python scripts. Without the shebang, the system does not know which interpreter to use and may try to execute the script as a shell script regardless of its actual language. Finally, verify the script does not use relative file paths internally. Since cron does not run from your project directory, a script that references ./data/file.txt will look for data/file.txt relative to cron's working directory, not your project folder. Use absolute paths everywhere inside cron scripts.

Frequently Asked Questions

How do I verify that my cron syntax is correct?
Use an online cron expression tool like crontab.guru, which shows you in plain English when your job will run next. Enter your five time fields and it will tell you the exact schedule. Common mistakes include confusing the day-of-week field (where 0 and 7 both mean Sunday), forgetting that the month field is 1 to 12, not 0 to 11, and accidentally using 6 fields instead of 5. The standard crontab format is minute, hour, day-of-month, month, day-of-week, command. If you are editing files in /etc/cron.d instead of using crontab -e, there is an extra field between the schedule and the command: the username the job runs as.
Why does my cron job work when I run it manually but fail in cron?
Because your terminal session and cron have completely different environments. Your terminal has your full PATH, your shell profile variables, proxy settings, database connection strings, and other environment variables that your script depends on. Cron has almost none of these. The fix is to either set all required environment variables at the top of your crontab file or source your profile inside the script itself by adding source /home/user/.bashrc at the beginning of the script. However, be careful with sourcing .bashrc: some distributions include an early exit in .bashrc for non-interactive shells, which prevents the rest of the file from loading.
Can I run a cron job every 30 seconds?
No. The smallest interval cron supports is one minute. To run something every 30 seconds you need a workaround. The simplest approach is to create two cron entries. The first runs at the start of each minute and the second runs the same command with a 30-second sleep delay. For example: * * * * * /path/to/script.sh and * * * * * sleep 30 && /path/to/script.sh. For intervals shorter than 30 seconds cron is not the right tool. Consider using a systemd timer with OnUnitActiveSec or a loop-based daemon script managed by systemd instead.
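As a sketch of the systemd timer route mentioned above (unit names and paths are illustrative), a sub-minute schedule pairs a .timer with a .service:

```
# /etc/systemd/system/myjob.service
[Unit]
Description=Sub-minute job that cron cannot schedule

[Service]
Type=oneshot
ExecStart=/path/to/script.sh

# /etc/systemd/system/myjob.timer
[Unit]
Description=Run myjob every 30 seconds

[Timer]
OnBootSec=30
OnUnitActiveSec=30
AccuracySec=1s

[Install]
WantedBy=timers.target
```

Enable it with systemctl enable --now myjob.timer; systemd then re-fires the service 30 seconds after each activation, and AccuracySec=1s overrides the default one-minute scheduling slack.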
How do I stop a cron job from running multiple instances simultaneously?
Use flock to create a lock file that prevents overlapping execution. Change your crontab entry from 0 * * * * /path/to/script.sh to 0 * * * * /usr/bin/flock -n /tmp/script.lock /path/to/script.sh. The -n flag tells flock to exit immediately if the lock is already held meaning a previous instance is still running. This prevents the common scenario where a slow job stacks up multiple instances that collectively overwhelm the server. This is especially important for backup scripts, database maintenance, and any job that might take longer than its scheduled interval.