Containerized Development with Docker, VSCode, and GitLab CI: A Practical Guide
By a Program Manager in pursuit of optimization

A Personal Introduction
As a Program Manager, my daily responsibilities involve orchestrating teams, overseeing projects, and ensuring the delivery of quality products. While I’m not a seasoned developer diving into code every day, I’ve always been fascinated by the optimization of development and deployment processes.
This curiosity led me to explore containerization technologies and CI/CD pipelines, which represent the backbone of any modern delivery system. I quickly understood that mastering these tools, even from a program manager’s perspective, can have a considerable impact on the efficiency of the projects I oversee.
This tutorial is the fruit of that exploration—perhaps I’m sometimes stating the obvious for experienced developers, but my goal is to share a pragmatic vision of these technologies, accessible even for those who, like me, don’t code every day but need to understand and optimize their team’s technical ecosystem.
After all, optimization isn’t just about code, but the entire development process, from the first commit to production deployment.
Reflections of a Program Manager on Process Optimization
Before diving into the technical aspects, allow me to share some insights from my experience as a program manager.
In modern technology projects, I’ve observed an interesting trend: sometimes, we spend months optimizing algorithms to gain a few milliseconds of execution time, but we neglect the hours lost daily due to inconsistent development environments or manual deployment processes prone to errors.
Adopting a containerized approach with well-configured CI/CD pipelines may seem like a significant initial investment, but experience has shown me that the return on investment is often spectacular:
- Reduction of the infamous "it works on my machine" – Containers ensure that code runs identically everywhere
- Accelerated onboarding of new developers – From several days to just a few hours
- Early detection of issues – Automated tests in the pipeline flag problems before they reach production
- Faster and more reliable delivery – Fewer manual interventions mean fewer human errors
These improvements aren’t just technical—they directly impact team satisfaction, product quality, and ultimately, the commercial success of the project.
My conviction is that an effective program manager must understand these mechanisms sufficiently to promote their adoption and eliminate organizational obstacles that might hinder their implementation.
Now, let’s move on to the practical aspects of this optimization!
What Will You Learn?
- Configuring VSCode for container development
- Creating optimized multi-stage Dockerfiles
- Setting up a CI/CD pipeline with GitLab
- Optimizing performance and image size
- Version management strategies
Ready to significantly improve your workflow? Let’s get started!
Configuring VSCode for Container Development
Step 1: Install Essential Extensions
To begin, install these extensions in VSCode:
- Remote – Containers
- Docker
- GitLab Workflow
- Language-specific extensions (Go, Java, Angular, C++)
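Once the Remote – Containers extension is installed, VSCode looks for a .devcontainer/devcontainer.json file to know which image to open the workspace in. A minimal sketch is shown below — the image tag, workspace path, and extension IDs are illustrative and should be adapted to your project:

```json
{
  "name": "go-dev",
  "image": "registry.access.redhat.com/ubi8/go-toolset:1.19",
  "workspaceFolder": "/app",
  "customizations": {
    "vscode": {
      "extensions": ["golang.go", "ms-azuretools.vscode-docker"]
    }
  }
}
```

With this in place, "Reopen in Container" from the command palette puts your whole editing session inside the same image your team builds with.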
Step 2: Configure Container Debugging
Create a .vscode/launch.json file with this configuration:
{
  "configurations": [
    {
      "name": "Go: Docker Debug",
      "type": "docker",
      "request": "launch",
      "preLaunchTask": "docker-run: debug-go",
      "platform": "go",
      "go": {
        "remoteRoot": "/app"
      }
    },
    {
      "name": "Angular: Docker Debug",
      "type": "docker",
      "request": "launch",
      "preLaunchTask": "docker-run: debug-angular",
      "platform": "node",
      "node": {
        "remoteRoot": "/app"
      }
    }
  ]
}
This configuration will allow debugging directly in containers, keeping all the usual features like breakpoints and variable inspection.
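Note that the preLaunchTask entries above refer to tasks that must exist in .vscode/tasks.json. A sketch of what the Go pair might look like follows — the myapp:debug tag and the dev build target are assumptions (they match the multi-stage Dockerfile later in this guide), so adjust them to your setup:

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "type": "docker-build",
      "label": "docker-build: debug-go",
      "dockerBuild": {
        "tag": "myapp:debug",
        "target": "dev"
      }
    },
    {
      "type": "docker-run",
      "label": "docker-run: debug-go",
      "dependsOn": ["docker-build: debug-go"],
      "dockerRun": {
        "image": "myapp:debug"
      }
    }
  ]
}
```

The Docker extension provides the docker-build and docker-run task types; chaining them with dependsOn means pressing F5 rebuilds the image before attaching the debugger.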
Optimized Multi-stage Dockerfiles
One of the keys to efficient container images is the use of multi-stage builds. Here’s an example for a Go application:
# Base for development and compilation
FROM registry.access.redhat.com/ubi8/go-toolset:1.19 AS base

# Development stage
FROM base AS dev
WORKDIR /app
RUN go install github.com/go-delve/delve/cmd/dlv@latest
ENV GO111MODULE=on
CMD ["tail", "-f", "/dev/null"]

# Compilation stage
FROM base AS build
WORKDIR /app
COPY go.* ./
RUN go mod download
COPY . .
RUN go build -o app .

# Minimal final image
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.7
WORKDIR /app
COPY --from=build /app/app .
USER 1001
ENTRYPOINT ["./app"]
Pro Tip: This multi-stage approach separates the development environment from the production image, allowing for a much lighter final image!
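Because each stage is named, you can also build them individually with docker build --target — handy for producing the development image locally while CI builds only the final stage (the myapp tag here is just an example):

```shell
# Build only the development stage (includes Delve) for local work
docker build --target dev -t myapp:dev .

# Build the whole file; only the minimal final stage ends up tagged
docker build -t myapp:latest .

# Compare the resulting image sizes
docker images myapp
```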
GitLab CI/CD Configuration
GitLab CI/CD allows you to automate your tests and deployments. Here’s how to configure a basic pipeline:
stages:
  - validate
  - build
  - test
  - deploy

variables:
  DOCKER_DRIVER: overlay2
  CI_REGISTRY: "registry.example.com"

# Validation job for Go
go-lint:
  stage: validate
  image: registry.access.redhat.com/ubi8/go-toolset:1.19
  script:
    - go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest
    - golangci-lint run ./...

# Build job for Angular
angular-build:
  stage: build
  image: registry.access.redhat.com/ubi8/nodejs-18:1-70
  script:
    - npm ci
    - npm run build -- --configuration production
  artifacts:
    paths:
      - dist/
This .gitlab-ci.yml file defines a simple pipeline with validation, build, test, and deployment stages.
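The jobs above stop at build artifacts; to actually publish a container image from the pipeline, a common pattern is a Docker-in-Docker job. The sketch below uses GitLab’s predefined registry variables — the docker image versions are illustrative and the dind service assumes your runners are configured for it:

```yaml
docker-build:
  stage: deploy
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```

Tagging with the commit SHA gives every pipeline run a traceable, immutable image.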
Docker Image Optimization
Image size directly affects the performance of your pipeline. Here are some essential optimization techniques:
1. Compressing RUN Instructions
# Before - Multiple layers
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get install -y nginx

# After - Single layer with cleanup
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl nginx && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
2. Using Volumes for Caches
To optimize local development, use volumes to preserve caches:
services:
  go-dev:
    volumes:
      - .:/app
      - go-cache:/go/pkg/mod
      # Back GOCACHE with a volume too, so compiled objects persist
      - go-build-cache:/go/cache
    environment:
      - GOCACHE=/go/cache

volumes:
  go-cache:
    driver: local
  go-build-cache:
    driver: local
3. Cleaning Caches After Installation
# Cleanup after npm installation
RUN npm ci && npm cache clean --force
Version Management
Version management is critical for keeping your code organized. Adopt semantic versioning (SemVer):
- MAJOR Version (X.y.z): incompatible changes
- MINOR Version (x.Y.z): compatible new features
- PATCH Version (x.y.Z): bug fixes
Automate your version bumps with a simple script:
#!/bin/bash
# bump-version.sh
TYPE=$1 # major, minor, or patch
VERSION_FILE="VERSION"
CURRENT_VERSION=$(cat "$VERSION_FILE")

# Split version
MAJOR=$(echo "$CURRENT_VERSION" | cut -d. -f1)
MINOR=$(echo "$CURRENT_VERSION" | cut -d. -f2)
PATCH=$(echo "$CURRENT_VERSION" | cut -d. -f3)

# Bump according to type
case $TYPE in
  major)
    MAJOR=$((MAJOR + 1))
    MINOR=0
    PATCH=0
    ;;
  minor)
    MINOR=$((MINOR + 1))
    PATCH=0
    ;;
  patch)
    PATCH=$((PATCH + 1))
    ;;
  *)
    echo "Usage: $0 {major|minor|patch}" >&2
    exit 1
    ;;
esac

NEW_VERSION="$MAJOR.$MINOR.$PATCH"
echo "$NEW_VERSION" > "$VERSION_FILE"
Integrate this script into your GitLab CI pipeline for automated version bumps.
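One way to wire the script in is a manual job on the default branch that commits the updated VERSION file. The sketch below is illustrative — job name, branch, and commit message are assumptions, and pushing the commit back to GitLab requires a project access token or deploy key, which is deliberately omitted here:

```yaml
bump-version:
  stage: deploy
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: manual
  script:
    - ./bump-version.sh patch
    - git add VERSION
    - git commit -m "chore: bump version to $(cat VERSION)"
```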
Alternative: Using Alpine Images
Although our tutorial focuses on Red Hat UBI images, a popular alternative is using images based on Alpine Linux. Let’s explore this option and compare the advantages and disadvantages.
Alpine Images: Ultra-Light but with Trade-offs
Alpine Linux is a minimalist distribution that has gained popularity in the Docker ecosystem for its extremely small size.
Size comparison:
- Red Hat UBI Minimal base image: ~114 MB
- Alpine base image: ~5 MB
This significant difference can have a major impact on your build times and storage costs.
Maven + Jenkins with Alpine: A Concrete Example
Here’s an example of a Dockerfile using Alpine for a Java application with Maven, optimized for Jenkins integration:
# Build stage with Alpine
FROM alpine:3.17 AS builder
# Maven and JDK installation
RUN apk add --no-cache openjdk11 maven
# Maven configuration
WORKDIR /app
COPY pom.xml .
COPY src ./src
# Build with Maven
RUN mvn package -DskipTests
# Minimal final image
FROM alpine:3.17
# JRE installation only
RUN apk add --no-cache openjdk11-jre
# Artifact copy
WORKDIR /app
COPY --from=builder /app/target/*.jar app.jar
# Non-root user for security
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
CMD ["java", "-jar", "app.jar"]
Jenkins Configuration for Alpine Images
Here’s an example of a Jenkinsfile optimized for this type of image:
pipeline {
  agent any

  environment {
    // Double quotes so Groovy interpolates BUILD_NUMBER at pipeline start
    DOCKER_IMAGE = "myapp:${BUILD_NUMBER}"
  }

  stages {
    stage('Build') {
      steps {
        sh 'docker build -t ${DOCKER_IMAGE} .'
      }
    }
    stage('Analyze Image') {
      steps {
        sh '''
          echo "Image size analysis:"
          docker images ${DOCKER_IMAGE} --format "{{.Size}}"
          echo "Layer analysis:"
          docker history ${DOCKER_IMAGE}
        '''
      }
    }
    stage('Test') {
      steps {
        sh 'docker run --rm ${DOCKER_IMAGE} java -version'
      }
    }
    stage('Deploy') {
      when {
        branch 'main'
      }
      steps {
        sh 'docker tag ${DOCKER_IMAGE} myapp:latest'
        sh 'docker push myapp:latest'
      }
    }
  }
}
Advantages and Risks of Alpine Images
✅ Advantages:
- Reduced size: 10 to 20 times smaller than standard images
- Accelerated deployment times: Less data to transfer
- Reduced attack surface: Fewer installed packages = fewer potential vulnerabilities
- CI/CD efficiency: Faster builds and deployments
⚠️ Risks and Disadvantages:
- Different C library: Alpine uses musl libc instead of glibc, which can cause incompatibilities
- Native dependency issues: Some compiled libraries may not work without modification
- Limited support: Fewer packages available compared to UBI or Debian
- Learning curve: Uses apk instead of apt/yum/dnf for package management
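If your team is used to dnf or yum, the apk equivalents map fairly directly. The commands below use curl purely as an example — package names often differ between distributions, so always check the Alpine package index:

```shell
apk add --no-cache curl   # dnf install -y curl (--no-cache skips the local index)
apk del curl              # dnf remove -y curl
apk search curl           # dnf search curl
apk update                # dnf makecache
```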
Practical advice: Before completely migrating to Alpine, thoroughly test your applications, particularly those with native dependencies or specific glibc requirements. Consider Alpine as an option for simple applications where image size is a priority.
When to Choose Red Hat UBI vs Alpine?
Choose Red Hat UBI when:
- You need commercial support and enterprise compliance
- Your application has complex dependencies
- You work in an environment where Red Hat is already standardized
- Security and stability are more important than image size
Choose Alpine when:
- Image size and performance are critical
- Your application has few native dependencies
- You can rigorously test compatibility
- You work in a development or startup environment
Comparison of CI/CD Architectures: GitLab CI vs Jenkins
Although we’ve focused on GitLab CI in this tutorial, Jenkins remains a popular solution for continuous integration. Here’s a brief comparison to help you choose:
| Criterion | GitLab CI | Jenkins |
|---|---|---|
| Configuration | YAML (.gitlab-ci.yml) | Jenkinsfile (Groovy) or GUI interface |
| SCM Integration | Native with GitLab | Plugins for various SCMs |
| Deployment | Simpler for cloud deployments | More flexible for complex environments |
| Learning curve | Moderate | Steep |
| Extensibility | Via GitLab CI/CD API | Vast plugin ecosystem |
| Containerization | Native | Via Docker plugins |
Equivalent Example with Jenkins for a Maven Application
For those who prefer using Jenkins, here’s an example of a Jenkinsfile for a Java application with Maven:
pipeline {
  agent {
    docker {
      image 'maven:3.8.6-openjdk-11-slim'
      args '-v $HOME/.m2:/root/.m2'
    }
  }

  stages {
    stage('Validate') {
      steps {
        sh 'mvn validate'
      }
    }
    stage('Compile') {
      steps {
        sh 'mvn compile'
      }
    }
    stage('Test') {
      steps {
        sh 'mvn test'
      }
      post {
        always {
          junit '**/target/surefire-reports/TEST-*.xml'
        }
      }
    }
    stage('Package') {
      steps {
        sh 'mvn package -DskipTests'
        archiveArtifacts artifacts: 'target/*.jar', fingerprint: true
      }
    }
    stage('Build Image') {
      steps {
        script {
          def customImage = docker.build("my-java-app:${env.BUILD_ID}")
          customImage.push()
          customImage.push('latest')
        }
      }
    }
  }
}
This example shows how Jenkins can run a Maven pipeline in a Docker container, with Maven cache persistence and artifact management.
Conclusion
As a program manager who explored this field out of curiosity rather than initial technical expertise, I can attest that even partial mastery of these tools can radically transform a team’s efficiency.
The approach I’ve shared here represents a balance between technical rigor and organizational pragmatism. It allows you to:
- Standardize environments without imposing excessive rigidity
- Accelerate onboarding of new developers without creating dependence on a single person
- Optimize performance without sacrificing maintainability
- Reduce image size without compromising stability
- Automate the delivery process while maintaining traceability
For program managers who, like me, seek to optimize without necessarily coding daily, I encourage you to familiarize yourself with these concepts. This knowledge will allow you to have more productive conversations with your technical teams, anticipate certain challenges, and make more informed decisions regarding resource allocation.
Even if you never configure a CI/CD pipeline yourself, understanding these mechanisms will give you a considerable advantage in managing modern technology projects.
Feel free to share this tutorial with your teams—it could serve as the basis for a fruitful discussion on optimizing your current development processes.
Additional Resources
- Official Docker Documentation
- Remote Development Extensions Guide for VSCode
- GitLab CI/CD Documentation
- Alpine Linux for Docker Containers
- Jenkins Pipeline Documentation
- Red Hat Universal Base Images (UBI)
Did you like this tutorial? Share it with your developer colleagues and leave a comment below if you have questions or experiences with these different approaches!
[Keywords: Docker, VSCode, GitLab CI, Jenkins, DevOps, Development, Containers, CI/CD, Red Hat, Alpine Linux, Maven]