Every program running on your server is a process. When one of them hogs the CPU, leaks memory, or spawns hundreds of children, your sites go down. Knowing how to find and deal with problem processes is a core VPS skill.

See What’s Running

Quick Overview

  # Simple list of all processes
ps aux

# Count total processes
ps aux | wc -l

# What's using the most CPU right now?
ps aux --sort=-%cpu | head -15

# What's using the most memory?
ps aux --sort=-%mem | head -15
  

Live Monitoring with htop

top comes preinstalled, but htop is friendlier: color output, scrolling, and one-key sorting and killing:

  # Install it
sudo apt install htop -y    # Ubuntu/Debian
sudo dnf install htop -y    # Rocky/AlmaLinux

# Run it
htop
  

htop shortcuts:

Key   Action
F6    Sort by column (CPU, MEM, PID, etc.)
F5    Tree view - see parent/child process relationships
F9    Kill selected process
F4    Filter by name
/     Search
q     Quit

Check a Specific Process

  # Find all Nginx processes
ps aux | grep nginx

# Find processes by exact name (no grep noise)
pgrep -la nginx

# See everything a process is doing (file descriptors, sockets, etc.)
ls -la /proc/<PID>/fd
cat /proc/<PID>/status
  

Kill Runaway Processes

By PID

  # Ask it to stop (SIGTERM - gives the process a chance to clean up)
kill 12345

# Force kill (SIGKILL - instant, no cleanup)
kill -9 12345
  

Always try kill before kill -9. SIGTERM lets the process shut down cleanly; a force-killed database process such as MySQL can be left with corrupted data.
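The escalation can be scripted: send SIGTERM, wait, and only fall back to SIGKILL if the process ignores it. A minimal sketch - `graceful_kill` is just an illustrative helper name, and the `sleep 300` target stands in for a real runaway process:

```shell
# Send SIGTERM first; escalate to SIGKILL only if the process
# is still alive after a grace period.
graceful_kill() {
  pid=$1
  kill "$pid" 2>/dev/null || return 0        # already gone
  for _ in 1 2 3 4 5; do                     # up to ~5 s of grace
    state=$(ps -o stat= -p "$pid" 2>/dev/null)
    case $state in ''|Z*) return 0 ;; esac   # exited (or zombie awaiting reap)
    sleep 1
  done
  echo "PID $pid ignored SIGTERM, force killing"
  kill -9 "$pid"
}

# Demo: a harmless background sleep standing in for a runaway process
sleep 300 &
demo_pid=$!
graceful_kill "$demo_pid"
wait "$demo_pid" 2>/dev/null || true   # reap the demo process
```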

By Name

  # Kill all processes matching a name
pkill -f "php-cgi"

# Kill all processes for a specific user
pkill -u baduser

# Preview what would be killed (without actually killing)
pgrep -af "php-cgi"
  

Kill Everything from a Specific User

If a compromised account is spawning processes:

  # List their processes first
ps -u baduser

# Kill them all
sudo killall -u baduser
  

CPU Priority with nice and renice

Every process has a “niceness” value from -20 (highest priority) to 19 (lowest priority). Default is 0. Higher nice values mean the process yields more CPU time to others.

  # Start a backup with low priority (won't slow down your web server)
nice -n 15 tar -czf backup.tar.gz /var/www/

# Check the niceness of running processes
ps -eo pid,ni,comm --sort=-ni | head -20

# Change priority of a running process
renice 10 -p 12345

# Make MySQL higher priority (requires root)
sudo renice -n -5 -p $(pgrep -o mysqld)
  

When to use this: long-running backups, imports, large file compressions - anything that shouldn’t compete with Nginx/PHP/MySQL for CPU.
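A quick way to confirm a job actually got the niceness you asked for, using a throwaway `sleep` as the stand-in workload:

```shell
# Start a low-priority stand-in job and read its niceness back
nice -n 15 sleep 30 &
job_pid=$!

# ps -o ni= prints only the nice value for that PID
job_ni=$(ps -o ni= -p "$job_pid" | tr -d ' ')
echo "PID $job_pid runs at niceness $job_ni"

kill "$job_pid"
```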

Memory: Finding Leaks and Hogs

See Who’s Using RAM

  # Total memory overview
free -h

# Top 10 memory consumers
ps aux --sort=-%mem | head -10

# Detailed memory breakdown by process
smem -tk    # install: sudo apt install smem
  

Shared vs Private Memory

ps can be misleading because it shows shared library memory for each process individually. For a more accurate picture:

  # RSS = Resident Set Size (includes shared memory - inflated for PHP-FPM)
# PSS = Proportional Set Size (divides shared memory fairly)
sudo cat /proc/$(pgrep -o php-fpm)/smaps_rollup
  
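You can see the RSS/PSS difference on any process you own; a sketch using the current shell's own `/proc` entry (`smaps_rollup` exists on kernels 4.14+, so the check guards for older systems):

```shell
# Compare RSS (shared memory fully counted) with PSS (shared
# memory split proportionally) for the current shell
if [ -r "/proc/$$/smaps_rollup" ]; then
  grep -E '^(Rss|Pss):' "/proc/$$/smaps_rollup"
else
  echo "smaps_rollup not available (needs kernel 4.14+)"
fi
```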

Memory Pressure Check

On kernels 4.20+, you can check if the system is under memory pressure:

  cat /proc/pressure/memory
  

If you see high avg10 or avg60 values, your server needs more RAM or fewer services.
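For scripting or alerting, a sketch that pulls out just the 10-second "some" average (guarded, since the file only exists on PSI-enabled kernels):

```shell
# "some avg10=X" = share of the last 10 s in which at least one
# task was stalled waiting for memory
if [ -r /proc/pressure/memory ]; then
  psi_avg10=$(awk '/^some/ { sub("avg10=", "", $2); print $2 }' /proc/pressure/memory)
  echo "memory pressure (avg10): ${psi_avg10}%"
else
  psi_avg10=""
  echo "PSI not available on this kernel"
fi
```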

The OOM Killer

When the system runs out of memory and swap, the kernel’s Out-Of-Memory (OOM) Killer picks a process to kill - usually the biggest one. On a web server, that’s often MySQL.

Check If OOM Killed Something

  # Search system log
dmesg | grep -i "oom"
journalctl --since "24 hours ago" | grep -i "Out of memory"

# Detailed OOM report
dmesg | grep -A 10 "oom-kill"
  

Protect a Process from OOM

Each process has an oom_score_adj value. Lower = less likely to be killed. Range: -1000 to 1000.

  # Check OOM score for MySQL
cat /proc/$(pgrep -o mysqld)/oom_score

# Protect MySQL from OOM kills (-500 makes it much less likely)
echo -500 | sudo tee /proc/$(pgrep -o mysqld)/oom_score_adj

# Make permanent via systemd (recommended)
sudo systemctl edit mysql
  

Add this override:

  [Service]
OOMScoreAdjust=-500
  
  sudo systemctl daemon-reload
sudo systemctl restart mysql
  

Resource Limits with ulimit

ulimit controls per-user resource limits: open files, max processes, memory, etc.

Check Current Limits

  # All limits for the current session
ulimit -a

# Max open files
ulimit -n

# Max user processes
ulimit -u
  

Common Hosting Issues Caused by Limits

Symptom                                    Likely limit               Fix
“Too many open files” in Nginx or MySQL    Open files (nofile)        Increase to 65536
PHP-FPM can’t spawn workers                Max processes (nproc)      Increase to 4096+
MySQL crashes under load                   Open files or stack size   Check nofile and stack
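Note that a running daemon's limits can differ from your shell's `ulimit` output; `/proc/<PID>/limits` shows the live values. A sketch using the current shell's own PID:

```shell
# The live limits of a running process (here: this shell).
# For a daemon, substitute its PID, e.g. $(pgrep -o nginx).
grep -E 'Max (open files|processes)' "/proc/$$/limits"
```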

Set Limits Permanently

Edit /etc/security/limits.conf:

  sudo nano /etc/security/limits.conf
  

Add:

  www-data    soft    nofile    65536
www-data    hard    nofile    65536
mysql       soft    nofile    65536
mysql       hard    nofile    65536
*           soft    nproc     4096
*           hard    nproc     4096
  

For systemd services, set limits in the service file instead:

  sudo systemctl edit nginx
  
  [Service]
LimitNOFILE=65536
  
  sudo systemctl daemon-reload
sudo systemctl restart nginx
  

Monitor Resource Usage Over Time

Quick Snapshot

  # CPU, memory, and swap at a glance
vmstat 1 5    # 5 samples, 1 second apart

# I/O activity
iostat -x 1 5

# Network connections
ss -s    # summary
ss -tp   # TCP connections with process names
  

Install and Use sar (Historical Data)

  sudo apt install sysstat -y    # Ubuntu/Debian
sudo dnf install sysstat -y    # Rocky/AlmaLinux

# Enable data collection
sudo systemctl enable --now sysstat

# CPU history
sar -u    # today
sar -u -f /var/log/sysstat/sa$(date -d yesterday +%d)    # yesterday (Debian/Ubuntu path; RHEL-family uses /var/log/sa/)

# Memory history
sar -r

# Load average history
sar -q
  

sar is invaluable for spotting patterns - CPU spikes at 2 AM might be a cron job. Memory slowly climbing over days might be a leak.
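To zoom in on a suspected window, `sar` accepts `-s`/`-e` start and end times. A sketch around a hypothetical 2 AM cron spike, guarded in case sysstat isn't installed or hasn't collected data yet:

```shell
# CPU usage between 01:30 and 02:30 from today's data file
if command -v sar >/dev/null 2>&1; then
  report=$(sar -u -s 01:30:00 -e 02:30:00 2>&1 || true)
else
  report="sysstat is not installed"
fi
echo "$report"
```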

Troubleshooting

Problem: Server is slow but CPU and RAM look fine.
Fix: Check I/O wait with iostat or vmstat. High wa in top means the disk is the bottleneck.

Problem: PHP-FPM processes keep growing.
Fix: PHP memory leak or a long-running script. Restart PHP-FPM: sudo systemctl restart php*-fpm. Set pm.max_requests = 500 in the pool config to auto-recycle workers.

Problem: MySQL killed by OOM overnight.
Fix: Lower its OOM score. Also check whether nightly backups overlap with other jobs - two memory-heavy tasks at once will trigger the OOM killer.

Problem: htop shows 200% CPU usage.
Fix: That’s 2 cores at 100%. htop reports per-process CPU as a sum across cores, so a 4-core VPS maxes out at 400%.

Problem: Can’t kill a process (state “D”).
Fix: “D” means uninterruptible sleep - the process is waiting on disk I/O and can’t be killed. Fix the underlying I/O issue (full disk, dead NFS mount, etc.) and it’ll unblock.

Problem: Load average always high but CPU is idle.
Fix: High load with low CPU usually means many processes waiting on I/O or stuck in uninterruptible states. Check disk and network.
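To find the blocked processes behind a high load average, list anything in state "D" along with the kernel function it is waiting in (the wchan column); a sketch:

```shell
# Header plus any processes in uninterruptible sleep ("D")
dstate=$(ps -eo pid,stat,wchan:20,comm | awk 'NR==1 || $2 ~ /^D/')
echo "$dstate"
```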

Last updated 21 Apr 2026, 08:08 +0300.