ℹ️ This article aims to help you up your game: understanding how secure and configurable your development or auditing setup can be.

If you haven't read part one, I suggest at least getting the general idea behind it: skip the code and just understand its overall structure.

Where do you run your code? - an intro to devcontainers
An introduction to devcontainers. Isolating your environment is one step closer to being more secure than before.

Also, there's an amazing video by Patrick from Cyfrin, where he mentions our previous article, and talks about the importance of creating isolation with devcontainers as well.

For this article, I'm going directly to the point, which is security. If you feel there are gaps in your knowledge that prevent you from fully taking advantage of the reading material, well, let me tell you that you're in luck. DeepSeek R1 (an incredibly cheap, open-source ChatGPT alternative) has been released; it outperforms Claude Sonnet and is on par with o1. You can take the chance to fill in those knowledge gaps and try it for free, with no sign-up or paywalls.

UPDATE: Apparently DeepSeek is getting DDoSed, so they have paused registrations temporarily. But you can set up your own local version easily. When I started writing this article, R1 wasn't much of a sensation yet. Catch up on the R1 lore in just one tweet.

Introduction

Now that you’ve set up your first devcontainer (or are about to), it’s important to understand their limitations. Devcontainers aren’t failsafe: container escapes and vulnerabilities (check CVEs 1, 2, 3, 4, 5, 6, 7) have existed for years and will continue to emerge. The same goes for VMs (check CVEs 1, 2, 3, 4, 5).

Container Breakouts: Escape Techniques in Cloud Environments
Unit 42 researchers test container escape methods and possible impacts within a Kubernetes cluster using a containerd container runtime.
Zero-Day Vulnerability Found in VirtualBox: Host Systems at Risk
A new threat has emerged concerning the security of VirtualBox virtual machines (VMs). A threat actor known as Cas has
VMware sandbox escape bugs are so critical, patches are released for end-of-life products
VMware ESXi, Workstation, Fusion, and Cloud Foundation all affected.

Security requires balance. Unless you’re a high-value target (e.g., financial institutions, protocol maintainers, or high-net-worth individuals), fixating on edge cases like 0-day exploits is counterproductive. These vulnerabilities are expensive (e.g., Docker: $50-300k, VMware: $200k-1M, VirtualBox: $100k-1.5M) and rarely wasted on small targets.

Key principles:

  1. Separate critical workflows (signing, admin) from daily-use devices.
  2. Assume basic hardening – our Dockerfile already uses non-root users, dropped capabilities, and minimal base images.
  3. Focus on likely threats – misconfigurations, leaks – not Hollywood-style hacks.

Let’s dissect the Dockerfile to see how it aims to balance security and practicality.

Walkthrough: Dockerfile

You can start by opening it here. Below we focus on its key characteristics, skipping anything explained elsewhere or easily deducible.

1. Multi-Stage Builds

Multi-stage builds reduce the final image size by including only the necessary components. This minimizes the attack surface by removing unnecessary tools and libraries that could introduce vulnerabilities.

FROM --platform=linux/amd64 ghcr.io/crytic/echidna/echidna:latest AS echidna
FROM python:3.12-slim AS python-base
FROM mcr.microsoft.com/vscode/devcontainers/base:debian

In this case, pulling Echidna (or any package, for that matter) using the latest tag is not recommended. It introduces instability and makes builds non-reproducible, as the version you get depends on when you install it.
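A sketch of what pinning could look like (the tags below are illustrative, not guaranteed to exist; check each registry for the real ones):

```dockerfile
# Pin images to an exact tag so builds are reproducible.
# Example tags only; look up the real ones on each registry.
FROM --platform=linux/amd64 ghcr.io/crytic/echidna/echidna:v2.2.3 AS echidna
FROM python:3.12.4-slim AS python-base

# Strongest form: pin by digest, immune even to a tag being reassigned.
# FROM python@sha256:<digest-from-the-registry>
```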


2. Non-Root User

Running the container as a non-root user (vscode) reduces the risk of privilege escalation attacks. Even if an attacker gains access to the container, they won’t have root privileges by default.

USER root
RUN echo "vscode ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
USER vscode

While we run most processes as a non-root user, the sudoers file grants the vscode user elevated privileges:

  • ALL=(ALL): The vscode user can run commands as any user, including root.
  • NOPASSWD:ALL: No password is required for these commands.

While convenient for development, this setup creates security risks. If attackers compromise the vscode user, they gain unrestricted root access, no password required.

A more paranoid config would be:

  1. Remove NOPASSWD: Require a password for sudo
    vscode ALL=(ALL) ALL
  2. Limit Sudo Access: Restrict vscode to specific commands
    vscode ALL=(ALL) /usr/bin/apt-get, /usr/bin/pip
  3. Avoid Sudo: Perform privileged operations during the build phase and avoid sudo entirely at runtime.
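Option 3 can be sketched like this (a minimal illustration, assuming a Debian-based image):

```dockerfile
# Everything that needs root happens at build time...
USER root
RUN apt-get update -y && apt-get install -y --no-install-recommends build-essential

# ...then the image switches to the unprivileged user for runtime,
# with no sudoers entry at all.
USER vscode
```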

3. Minimal Package Installation

Only essential packages are installed, reducing the number of potential vulnerabilities. The --no-install-recommends flag ensures that unnecessary dependencies are not installed.

RUN apt-get update -y && apt-get install -y \
    zsh python3-dev libpython3-dev build-essential vim curl git sudo pkg-config \
    --no-install-recommends
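A common refinement on top of this (not in the original snippet) is cleaning the apt cache in the same layer, so the package lists never land in the final image:

```dockerfile
# apt-get clean + removing the lists in the SAME layer keeps them out of the image
RUN apt-get update -y && apt-get install -y \
    zsh python3-dev libpython3-dev build-essential vim curl git sudo pkg-config \
    --no-install-recommends \
    && apt-get clean && rm -rf /var/lib/apt/lists/*
```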

4. Isolated Python Package Installation with uv

uv is "an extremely fast Python package and project manager, written in Rust. It replaces pip, pip-tools, pipx, poetry, pyenv, twine, virtualenv, and more." Believe me, they do not exaggerate when they claim it's extremely fast.

GitHub - astral-sh/uv: An extremely fast Python package and project manager, written in Rust.
An extremely fast Python package and project manager, written in Rust. - astral-sh/uv
RUN python3 -m pip install --no-cache-dir --upgrade uv

If you have ever created a Python virtual environment before, well, this does something similar for you, but better 😜. Here's a representation of what it's analogous to:

# Step 1: Create venv
python -m venv ~/.local/pipx/venvs/slither

# Step 2: Install package
~/.local/pipx/venvs/slither/bin/pip install slither-analyzer

# Step 3: Symlink executable
ln -s ~/.local/pipx/venvs/slither/bin/slither ~/.local/bin/slither

# Verify
which slither  # Output: ~/.local/bin/slither

Now you just do the following:

# installing my_tool in a venv
uv tool install my_tool

# running a tool in an ephemeral context
uvx other_tool

I noticed it doesn't yet have an equivalent of pipx's --include-deps flag to install all dependent executables (there's some ongoing discussion to implement something similar). In our case, we'd use it with napalm-core, which depends on slither and a few more tools, to avoid installing them independently.

Shout-out to Patrick for this recommendation, never going back to pipx again.


5. Version Management with asdf

asdf allows for precise version management of tools and languages, ensuring that only trusted and tested versions are used. This reduces the risk of vulnerabilities introduced by outdated or untested versions.

I like it because it's incredibly easy to handle different versions and always stay up to date. You can install rust and even solc like this, though solc-select is more widely adopted. You can check the plugin list here.

RUN git clone https://github.com/asdf-vm/asdf.git $HOME/.asdf --branch ${ASDF_VERSION} && \
    echo '. $HOME/.asdf/asdf.sh' >> $HOME/.zshrc && \
    . $HOME/.asdf/asdf.sh && \
    asdf plugin add golang && \
    asdf install golang latest && \
    asdf global golang latest
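As with Docker tags, latest trades reproducibility for convenience. For pinned, per-project versions, asdf reads a .tool-versions file at the project root (the version numbers below are illustrative):

```
# .tool-versions — checked into the repo, one tool per line
golang 1.22.1
python 3.12.4
nodejs 20.11.0
```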

We still need to explore installing the rest of our dependencies (Python, Rust, nvm) using this method without breaking anything.


6. Prebuilt Binary (e.g. Echidna)

Echidna is copied from a prebuilt binary in a trusted image (ghcr.io/crytic/echidna/echidna:latest).

Using prebuilt tools from a maintainer you trust (in this case, Crytic) saves time and reduces mistakes. It's usually safer to rely on trusted sources than to build everything yourself; just make sure you download from official places to avoid fakes.

COPY --chown=vscode:vscode --from=echidna /usr/local/bin/echidna ${HOME}/.local/bin/echidna
RUN chmod 755 ${HOME}/.local/bin/echidna
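If you want an extra layer on top of trusting the registry, you can verify the copied binary against a published checksum at build time. The hash below is deliberately a placeholder; take the real sha256 from the official release you pinned:

```dockerfile
COPY --chown=vscode:vscode --from=echidna /usr/local/bin/echidna ${HOME}/.local/bin/echidna
# Fail the build if the binary doesn't match the expected release checksum.
RUN echo "<expected-sha256>  ${HOME}/.local/bin/echidna" | sha256sum -c - \
    && chmod 755 ${HOME}/.local/bin/echidna
```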

Walkthrough: devcontainer.json

There are a lot of properties you can use to configure your devcontainer. Here's the official list.

So let's focus on the configuration that isn't trivial or already covered by any simple tutorial.

1. Mount isolation

Along with a .dockerignore file, you can easily configure things so that nothing you don't want ends up accessible from within the container.

These two settings handle all there is to know for our working environment. The first one forces the workspace to be mounted in a place where it does not exist. Otherwise, by default, everything inside your folder gets mounted into the container, which is indeed a feature!

  // Disables mounting the host workspace into the container.
  "workspaceMount": "type=tmpfs,destination=/workspace",
  // Sets a workspace path entirely isolated within the container
  "workspaceFolder": "/home/vscode/quests",

2. Filesystem configurations

If you only want to explore, without modifying anything except mounted volumes, you can enable the --read-only flag. This blocks not just deliberate modifications but also side effects: almost any interaction that writes to the filesystem will fail.

// For a dev environment this is more a hassle than a feature.
// "--read-only",

Then for some temporary volumes we can add some extra security:

  • rw: Read/write access. Lets apps write temp files but nothing else.
  • noexec: Blocks binary execution. Stops attackers from running scripts/ELF files.
  • nosuid: Disables SUID/SGID bits, which prevents privilege escalation via temporary files.
"--tmpfs=/tmp:rw,noexec,nosuid,size=512m",
"--tmpfs=/var/tmp:rw,noexec,nosuid,size=512m",
"--tmpfs=/dev/shm:rw,noexec,nosuid,size=64m",

3. Linux capabilities

Well, if this is the first time you hear about capabilities, you're in for a treat.

They allow you to break root's "power" into 40+ granular privileges so you can give each process only the caps it needs. It's like, instead of giving someone admin on your repo, you grant them only PR access, or only pull and create-issue privileges.

A few caps as an example:

  • CAP_NET_BIND_SERVICE → Bind to ports <1024
  • CAP_SYS_ADMIN → Mount filesystems (dangerous)
  • CAP_CHOWN → Change file ownership

How to use them is outside the scope of this post, but you can use getcap to check caps, and setcap to assign them.
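Besides getcap/setcap (which operate on files), you can read the capability sets of the current process straight from /proc, which works in any Linux shell, including inside the container:

```shell
# CapPrm = permitted, CapEff = effective, CapBnd = bounding set.
# A full-root shell shows a mask like 000001ffffffffff;
# a container started with --cap-drop=ALL shows 0000000000000000.
grep -E 'Cap(Prm|Eff|Bnd)' /proc/self/status
```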

In our configuration we decide to drop them altogether; there shouldn't be a need for any of them by default.

// Drop all capabilities
"--cap-drop=ALL",

If you want to check the current capabilities of your system and understand how it works here's a script generated by DeepSeek-V3.

❯ bash findcaps.sh 
Checking binaries in: /usr/local/sbin
Checking binaries in: /usr/local/bin
Checking binaries in: /usr/bin
/usr/bin/dumpcap cap_dac_override,cap_net_admin,cap_net_raw=eip
/usr/bin/newgidmap cap_setgid=ep
/usr/bin/newuidmap cap_setuid=ep
/usr/bin/rcp cap_net_bind_service=ep
/usr/bin/rlogin cap_net_bind_service=ep
/usr/bin/rsh cap_net_bind_service=ep
/usr/bin/sway cap_sys_nice=ep
Checking binaries in: /var/lib/flatpak/exports/bin
Checking binaries in: /usr/bin/site_perl
Checking binaries in: /usr/bin/vendor_perl
Checking binaries in: /usr/bin/core_perl
Checking binaries in: /home/matt/.cargo/bin

The opposite of dropping capabilities would be using the --privileged flag, though you'll rarely encounter the need for it; something like docker-in-docker requires it, as far as I know. For obvious reasons we're not going to do that: the container would effectively be running as root on the host.

4. Privilege escalation

The --security-opt no-new-privileges flag restricts the container's processes from gaining additional privileges (e.g., via setuid or setgid binaries). This prevents privilege escalation attacks within the container. It's a simple but effective way to harden containerized applications.

"--security-opt", "no-new-privileges",

Some examples of such setuid binaries are passwd, su, sudo, and crontab, among others.
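You can check whether the flag is active for a given process directly from /proc (the field is available on any reasonably recent kernel):

```shell
# NoNewPrivs: 0 = setuid binaries like sudo can still elevate privileges,
#             1 = no-new-privileges (via Docker or prctl) is in effect.
grep NoNewPrivs /proc/self/status
```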

5. AppArmor & seccomp

There is an entire category under Linux Security Modules (LSMs), but we're only going to speak briefly about these two. This section briefly explains how to 'integrate' or 'configure' them with Docker; you should have them installed already.

AppArmor

AppArmor is a module that provides Mandatory Access Control (MAC). It works by enforcing security profiles on applications, restricting their access to files, directories, and system resources. Think of it as a "jail" for programs, where each app gets a predefined set of permissions.

You can run aa-status to check if you have it already installed and enabled.

"--security-opt", "apparmor:docker-default",

This is the line to enable it, using the profile docker-default. Profiles should be located under /etc/apparmor.d/. More on profiles here.

seccomp

Seccomp is a Linux kernel feature that restricts the system calls (syscalls) a process can make. Think of it as a "syscall firewall" for applications.

docker info | grep -i seccomp to check if it has been enabled.

This flag also supports profiles, and a few come by default (unconfined, default, block-all); if not, you can research how to create your own.

"--security-opt", "seccomp=unconfined"

This flag disables seccomp.

Default docker profile. More information about profiles from the official docker docs.
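As with capabilities, you can read the seccomp mode of the current process from /proc:

```shell
# Seccomp: 0 = disabled (unconfined), 1 = strict mode, 2 = filter mode
# (Docker's default profile runs containers in filter mode).
grep -E '^Seccomp:' /proc/self/status
```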

Bonus: SELinux

SELinux (Security-Enhanced Linux) is another MAC system, but it’s far more granular and complex than AppArmor. It assigns security labels to every file, process, and resource, then enforces strict policies based on these labels.

You can run sestatus to check if you have it already installed and enabled. You might also need some other tools/deps like container-selinux.

Edit Docker daemon config (/etc/docker/daemon.json):

{  
  "selinux-enabled": true  
}  

Though I haven't thoroughly tested this through a devcontainer, here's what the bare minimum config for SELinux could look like.

{
  "name": "Minimal Secure Dev Container",
  "dockerFile": "Dockerfile",
  "runArgs": [
    // Apply default SELinux type
    "--security-opt", "label=type:container_t",
    // Auto-relabel workspace for SELinux
    "-v", "${localWorkspaceFolder}:/workspace:Z"
  ],
  "workspaceMount": "source=${localWorkspaceFolder},target=/workspace,type=bind,Z",
  "workspaceFolder": "/workspace"
}
  1. label=type:container_t applies Docker’s default SELinux type to the container, enforcing basic isolation.
  2. :Z relabels the volume's contents so they can be used within the container's SELinux context.

6. Networking

For network isolation and secure internet access in containers, the optimal configuration depends on your security requirements and use case.

The default network modes are none, bridge, and host. I'd say you should always start with bridge, never use host, and use none when you don't want internet access at all.

  // "--network=none",
  "--network=bridge",
  "--dns=1.1.1.1",
  "--dns=1.0.0.1",

You can also disable IPv6 if your network does not depend on it, lowering the attack surface (misconfigurations).

    "--sysctl=net.ipv6.conf.all.disable_ipv6=1",  // Disable IPv6
    "--sysctl=net.ipv6.conf.default.disable_ipv6=1",
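To verify the sysctl took effect (inside the container, or on any Linux box), you can read the flag back from /proc; the fallback covers kernels built without IPv6:

```shell
# 1 = IPv6 disabled, 0 = enabled; prints a note if the kernel has no IPv6 at all.
cat /proc/sys/net/ipv6/conf/all/disable_ipv6 2>/dev/null || echo "ipv6-not-present"
```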

Even if you don’t drop all capabilities, consider removing NET_RAW and NET_ADMIN. Most applications, like web servers and APIs, don’t require raw packet access, and removing these reduces attack vectors such as packet spoofing and stealthy port scans. As a consequence, you won't be able to use some tools like ping or traceroute.

"--cap-drop=NET_RAW",

Bonus: iptables

If you have never heard of iptables before, the only thing you need to know is that it is a network administration tool that allows you to define packet-filtering rules.

In this case we're going to disallow communication between host and containers, and between containers themselves.

Preventing host <> container communication

# Block host → containers
sudo iptables -I OUTPUT -d 172.17.0.0/16 -j DROP

# Block containers → host
sudo iptables -I INPUT -s 172.17.0.0/16 -j DROP

Check that your Docker network is within that range.

Preventing container <> container communication

# Insert a DROP rule for container-to-container traffic in the DOCKER-USER chain.
iptables -I DOCKER-USER -s 172.17.0.0/16 -d 172.17.0.0/16 -j DROP

Take note that these rules won't persist across reboots unless you explicitly save them.

7. Customizations

Customizations allow you to change things directly related to the behavior of VSCode and your overall settings. Here I include a few basic ones that should work on every VSCode-based IDE.

Automated tasks

If you attended any of our previous workshops on backdoors, you know we always start by showing that VSCode comes with automated tasks enabled by default. We entertain our audience by launching an 'evil-looking script' right after they decide to trust the workspace.

To disallow this by default you can start by using these two settings.

// Killswitch for automated tasks
"task.autoDetect": "off",
"task.problemMatchers.autoDetect": "off",

Do you trust the authors?

Now, back to the blue button. How many times have you pressed it without really thinking about what is going on behind its action?

This protects you from:

  • Auto-running malicious scripts (tasks, debuggers, Git hooks).
  • Risky extensions installing/running without consent.
  • Terminal commands or env changes from untrusted projects.
  • Unauthorized file/system access (e.g., ~/.ssh, /etc).

To disable it by default:

// Trust no one by default
"security.workspace.trust.enabled": false,

Telemetry

Well, this isn't strictly a security feature; it's more about privacy than blocking threats. However, disabling it might prevent potential data leaks, like exposing project paths, versions, and tool usage patterns.

  • VS Code: Set telemetry.telemetryLevel to off in settings.
  • Docker: Add "telemetry": false to /etc/docker/daemon.json
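The Docker side would look like this in /etc/docker/daemon.json (merge the key into any existing config rather than overwriting the file, and restart the daemon afterwards):

```json
{
  "telemetry": false
}
```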

Future research

  1. Experiment with doas instead of sudo for finer-grained permissions.
  2. Dropping all capabilities might be excessive, depending on the need for a dev/auditor.
  3. Customize profiles for AppArmor, seccomp, and SELinux.
  4. Consider creating network namespace isolation instead of reusing bridge.
  5. Add binary verification steps in the Dockerfile.

Final thoughts

If you’ve followed along, kudos 👏—you’ve leveled up your security game, even by just reading it. Pair all this with a firewall (I’m team Lulu on Mac and OpenSnitch on Linux), and you’ve gone the extra mile.

The intentional structure of this guide shows how each security measure works independently. If this sparks your curiosity – whether it's hardening containers or dropping capabilities – maybe you'll start noticing similar principles apply elsewhere.

Configuring security settings is like tackling a climbing route: you spot a line to the top, test each foothold and handhold, and bail when a move doesn’t stick. Not every technique works for every pitch (or project), but you’re building muscle memory with every attempt. Keep grinding—the send will eventually come 💪.

So if this humble guide starts to prompt you to think beyond "Is my password long enough?" or "Is this transaction safe to execute?" and start evaluating your processes, then hell yes, mission accomplished.

And if after all this you're still paranoid, throw in a canary. But we can talk about that in another article.