Container security starts with your base image.
But here’s the catch:
- Simply upgrading to the "latest" version of a base image can break your application.
- You’re forced to choose between shipping known vulnerabilities or spending days fixing compatibility issues.
- And often... you’re not even sure if an upgrade is worth it.
In this post, we’ll explore why updating base images is harder than it seems, walk through real examples, and show how you can automate safe, intelligent upgrades without breaking your app.
The Problem: “Just update your base image” — Easier said than done
If you're reading this, you've probably googled something like "how to secure your containers," and the first point in every AI-generated slop article is the same: update your base image. Simple, right? Not so fast.
Your base image is your central point of security: if it has vulnerabilities in it, your application carries those vulnerabilities with it. Let's play out a scenario.
You run a scan against your container image and a high-severity CVE turns up. The helpful recommendation is to upgrade the base image. Fantastic, you'll be done before lunch.
⚠️ CVE-2023-37920 found in ubuntu:20.04
Severity: High
Fixed in: 22.04
Recommendation: Upgrade base image
…but you discover a problem.
By blindly upgrading from ubuntu:20.04 to ubuntu:22.04, your application breaks.
Let's look at some examples of bumping a base image and what happens in reality.
Example 1: A Dockerfile That Breaks After an Upgrade
Initial Dockerfile:
FROM python:3.8-buster
RUN apt-get update && apt-get install -y libpq-dev
RUN pip install psycopg2==2.8.6 flask==1.1.2
COPY . /app
CMD ["python", "app.py"]
The team upgrades to:
FROM python:3.11-bookworm
RUN apt-get update && apt-get install -y libpq-dev
RUN pip install psycopg2==2.8.6 flask==1.1.2
COPY . /app
CMD ["python", "app.py"]
Result:
- psycopg2==2.8.6 fails to compile against the newer libpq headers on bookworm.
- flask==1.1.2 does not support Python 3.11 runtime features (deprecated APIs break).
- The build breaks in CI.
- Your dev team is mad and your lunch is ruined.
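For contrast, here's a rough sketch of what a compatible upgrade could look like, assuming the app can move to newer library releases (the exact pins below are illustrative, not a recommendation):

# Sketch only: psycopg2 2.9.x builds against bookworm's libpq,
# and Flask 2.x supports Python 3.11. Your app code may still need changes.
FROM python:3.11-bookworm
RUN apt-get update && apt-get install -y libpq-dev
RUN pip install psycopg2==2.9.9 flask==2.3.3
WORKDIR /app
COPY . /app
CMD ["python", "app.py"]

Even this "easy" version assumes someone has verified the new library pins against the codebase, which is exactly the work nobody has time for.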
Example 2: Base Image Upgrades That Introduce Subtle Runtime Bugs
Original:
FROM node:14-buster
COPY . /app
RUN npm ci
CMD ["node", "server.js"]
Upgrade to:
FROM node:20-bullseye
COPY . /app
RUN npm ci
CMD ["node", "server.js"]
Runtime Problem:
- node:20 ships a newer OpenSSL, and its stricter TLS verification breaks older axios configurations.
- The app throws UNABLE_TO_VERIFY_LEAF_SIGNATURE errors on runtime HTTP calls to legacy services.
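A common culprit behind UNABLE_TO_VERIFY_LEAF_SIGNATURE is a private or incomplete certificate chain that the older OpenSSL build happened to tolerate. Here's a hedged sketch of one possible fix, assuming the legacy service uses an internal CA (the cert path is a placeholder, and upgrading axios itself may also be required):

FROM node:20-bullseye
WORKDIR /app
COPY . /app
RUN npm ci
# Point Node's newer OpenSSL at the internal CA bundle instead of
# disabling TLS verification (legacy-ca.pem is a placeholder path).
ENV NODE_EXTRA_CA_CERTS=/app/certs/legacy-ca.pem
CMD ["node", "server.js"]

The point isn't this specific fix; it's that the failure only shows up at runtime, long after the build went green.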
Why “latest” is a trap
The Docker ecosystem encourages using latest tags or top-line releases. But that often means the application that ran fine on Monday suddenly fails on Tuesday, and you spend the rest of the week on outages and compatibility bugs instead of shipping.
So the obvious solution is to pin to a minor version you have tested... not so fast. Now you've entered a game of security whack-a-mole, forever discovering new CVEs in the version you pinned.
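If you do pin, pin hard. A minimal sketch, assuming you want reproducible builds: reference the exact digest you tested rather than a floating tag (the digest below is a placeholder):

# Floating tag: whatever the registry serves today
# FROM python:3.11-bookworm

# Pinned digest: the exact image you tested (placeholder digest)
FROM python:3.11-bookworm@sha256:<digest-you-tested>

Digest pinning doesn't make the whack-a-mole go away; it just guarantees the mole you're whacking is the one you actually tested.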
Decision Paralysis: Should you upgrade or not?
Security teams push for upgrades.
Developers push back due to stability.
Who’s right? It depends.
BUT, to even understand the decision, you need to look at all the options, which means building a massive spreadsheet of versions, the CVEs each one fixes, the stability risks it introduces, and how long it will stay supported. Picture that spreadsheet for every image you run.
This leaves you with complex, crappy, and impossible choices:
- Stay on the old image and accept vulnerabilities
- Upgrade and break your app, risking production downtime
- Attempt manual compatibility testing — days of work
The manual upgrade workflow:
If you’re doing this by hand, here’s what it looks like:
- Check CVEs: trivy image python:3.8-buster
- Research each CVE: Is it reachable in your application context?
- Decide on upgrade candidate
- Test the new image:
- Build
- Run unit tests
- Run integration tests
- If anything fails, try to patch the code or upgrade libraries.
- Repeat for every container.
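In script form, that loop looks something like this. It's a rough sketch: the image names, test commands, and the BASE_IMAGE build argument are placeholders, and it assumes your Dockerfile starts with ARG BASE_IMAGE / FROM ${BASE_IMAGE}.

#!/usr/bin/env bash
set -euo pipefail

OLD_IMAGE="python:3.8-buster"
NEW_IMAGE="python:3.11-bookworm"

# 1. See which CVEs you're currently carrying
trivy image "$OLD_IMAGE"

# 2. Build against the candidate base image
docker build --build-arg BASE_IMAGE="$NEW_IMAGE" -t myapp:candidate .

# 3. Run the test suites inside the candidate image
docker run --rm myapp:candidate pytest tests/unit
docker run --rm myapp:candidate pytest tests/integration

# 4. If anything fails: patch code, bump libraries, start over.
#    Then repeat all of the above for every other container you own.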
It’s exhausting.
The Cost of Staying Still
You might think “if it ain’t broke, don’t fix it.”
But unpatched container CVEs are a massive contributor to security breaches: "87% of container images running in production had at least one critical or high-severity vulnerability" (source).
There are also plenty of known exploits lurking in popular base images:
- The unzip path traversal vulnerability (CVE-2020-27350) sat in millions of containers for years.
- Heartbleed (CVE-2014-0160) stayed in legacy containers long after official fixes shipped.
- The PHP-FPM RCE (CVE-2019-11043) let remote attackers execute arbitrary code via crafted HTTP requests, and was extremely common in base images with pre-installed PHP-FPM before it was patched.
How Our Auto-Fix Feature Helps
To solve this exact scenario, Aikido Security rolled out our container auto-fix feature, because, well, we live in this pain too.
It works like this: Aikido scans your container images for vulnerabilities. If (or more likely when) we find some, we alert you as always, but instead of just yelling at you to update your base image, we give you options. We build a table showing which base image version resolves which CVEs, so you can see at a glance that, say, a minor bump removes all or most of the high-severity CVEs and is an adequate upgrade.
If the upgrade is a minor bump, you can automatically create a pull request to bump the version.
That's hours of work saved.
Conclusion:
- Upgrading container base images is genuinely hard.
- The “just upgrade” advice oversimplifies a complex, risk-laden process.
- Your teams are right to be cautious — but they shouldn’t have to choose between security and stability.
- Aikido's container auto-fix does the hard work for you so you can make an informed decision.
- So the next time you see a base image vulnerability alert, you won’t panic. You’ll get a PR.