Introduction
In today’s cloud-native landscape, deploying Vue applications efficiently and reliably requires a robust containerization and orchestration strategy. This guide walks through the process of building a Nuxt Vue application and deploying it to Kubernetes, starting with a simple single-pod deployment and progressing to a production-ready, highly available architecture.
Purpose and Scope
This guide focuses on the build and deployment processes for Nuxt Vue applications in Kubernetes environments, covering both development and production scenarios. We’ll explore kubectl for development workflows and Helm for production deployments.
Target Audience
- Developers with Vue.js experience looking to containerize their applications
- DevOps engineers implementing Kubernetes deployments
- Teams transitioning from development to production environments
Prerequisites and Assumptions
- Working knowledge of Vue.js and Nuxt
- Basic understanding of Docker concepts
- Access to a Kubernetes cluster
- Installed and configured:
- kubectl
- Helm
- Docker
If you need to set up a development cluster, check out our articles on Minikube and K3s deployments.
Setting Up the Nuxt Application
Creating a New Nuxt.js Project
Initialize a new Nuxt project. Because the rest of this guide relies on Nuxt 3's Nitro build output (the .output directory), use the Nuxt 3 scaffolding command:
npx nuxi init my-nuxt-app
cd my-nuxt-app
During initialization, choose your preferred package manager (npm/yarn/pnpm); UI frameworks, testing tools, and the rendering mode can be configured afterwards in nuxt.config.
Development Environment Setup
Install required dependencies:
npm install
Key configuration files:
- nuxt.config.js: Main application configuration
- .env: Environment variables
- package.json: Dependencies and scripts
Building for Production
Execute the production build:
npm run build
The build process:
- Compiles Vue components
- Optimizes assets
- Generates static files
- Creates server-side rendering bundle
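These steps are driven by the scripts in package.json; a typical Nuxt 3 setup (names may differ slightly in your project) looks like:

```json
{
  "scripts": {
    "dev": "nuxt dev",
    "build": "nuxt build",
    "preview": "nuxt preview"
  }
}
```

`nuxt build` is the command invoked by `npm run build` and produces the .output directory used in the Dockerfile below.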
Containerization
Dockerfile Creation
Create a multi-stage Dockerfile for optimal image size:
# Build stage
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# Production stage
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/.output ./.output # This copies the complete build output
COPY --from=builder /app/package*.json ./
RUN npm install --production
EXPOSE 3000
CMD ["node", "/app/.output/server/index.mjs"] # This directly runs the server entry point
This multi-stage Dockerfile creates an efficient production container:
- First stage (builder):
- Uses Node 18 Alpine as a lightweight base
- Sets the working directory to /app
- Copies package files and installs dependencies (done before copying all files to leverage Docker cache)
- Copies the application code and runs the build command
- Second stage (production):
- Starts with a fresh Node 18 Alpine image
- Copies only the built .output directory from the builder stage (contains the optimized application)
- Copies package.json files and installs only production dependencies
- Exposes port 3000 for the application
- Sets the start command to directly run the Nuxt server entry point
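To keep the build context small and the layer cache effective, it is worth adding a .dockerignore next to the Dockerfile. A minimal sketch (adjust to your project):

```
node_modules
.nuxt
.output
.git
*.log
```

Without it, `COPY . .` would pull local node_modules and previous build output into the image, defeating the cache optimization described above.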
Building the Container
Build and test locally:
docker build -t my-nuxt-app:1.0.0 .
docker run -p 3000:3000 my-nuxt-app:1.0.0
- docker build -t my-nuxt-app:1.0.0 .: Builds the Docker image from the Dockerfile in the current directory (.) and tags it as “my-nuxt-app” with version “1.0.0”.
- docker run -p 3000:3000 my-nuxt-app:1.0.0: Runs a container from the built image, mapping port 3000 of the host to port 3000 of the container for local testing.
Container Registry
Push to your registry:
docker tag my-nuxt-app:1.0.0 <registry>/my-nuxt-app:1.0.0
docker push <registry>/my-nuxt-app:1.0.0
- docker tag: Creates a new tag for your image that includes your registry’s address. Replace <registry> with your actual container registry (e.g., “docker.io/username” for Docker Hub).
- docker push: Uploads the tagged image to your container registry, making it available for Kubernetes to pull.
Version tagging strategy:
- Use semantic versioning
- Include build numbers for tracking
- Tag latest for current stable version
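The strategy above is easy to script in CI. A minimal sketch, assuming REGISTRY and VERSION would normally be injected as pipeline variables (the values here are placeholders):

```shell
#!/bin/sh
# Derive the full image references for a release.
REGISTRY="docker.io/username"   # hypothetical registry; replace with yours
VERSION="1.0.0"                 # semantic version of this build
IMAGE="${REGISTRY}/my-nuxt-app"

# Print the two tags that would be pushed for this release.
echo "${IMAGE}:${VERSION}"
echo "${IMAGE}:latest"

# In a real pipeline, each tag would then be applied and pushed:
#   docker tag my-nuxt-app:${VERSION} ${IMAGE}:${VERSION}
#   docker push ${IMAGE}:${VERSION}
```

Pushing both the exact version and a moving `latest` tag keeps deployments pinned while still offering a convenient pointer to the current stable release.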
Single Pod Deployment
Basic Kubernetes Resources
Create a basic deployment configuration:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nuxt-app
labels:
app: nuxt-app
spec:
replicas: 1 # Single replica for development
selector:
matchLabels:
app: nuxt-app
template:
metadata:
labels:
app: nuxt-app
spec:
containers:
- name: nuxt-app
image: <registry>/my-nuxt-app:1.0.0 # Replace with your actual image reference
ports:
- containerPort: 3000
name: http
resources:
requests:
memory: "256Mi"
cpu: "100m"
limits:
memory: "512Mi"
cpu: "200m"
# Basic health check
livenessProbe:
httpGet:
path: / # Using root path as a basic health check
port: 3000
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /
port: 3000
initialDelaySeconds: 5
periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
name: nuxt-app
labels:
app: nuxt-app
spec:
type: ClusterIP
ports:
- port: 3000
targetPort: 3000
protocol: TCP
name: http
selector:
app: nuxt-app
This YAML file defines two Kubernetes resources:
- Deployment:
  - apiVersion/kind: Specifies this is a Deployment resource using the apps/v1 API
  - metadata: Names the deployment “nuxt-app” and adds identifying labels
  - spec.replicas: Configures a single instance (suitable for development)
  - spec.selector: Defines which pods belong to this deployment using labels
  - spec.template: Defines the pod template:
    - Container using your Nuxt application image
    - Exposes port 3000
    - Sets resource requests (minimum allocations) and limits (maximum allocations)
    - Configures health checks with livenessProbe (restarts container if it fails) and readinessProbe (determines if container can receive traffic)
- Service:
- Creates a service named “nuxt-app”
- Type ClusterIP makes it accessible within the cluster
- Maps port 3000 to the container’s port 3000
- Uses selector to route traffic to pods with label app=nuxt-app
kubectl Deployment
Deploy the application:
kubectl create namespace nuxt-app
kubectl config set-context --current --namespace=nuxt-app
kubectl apply -f deployment.yaml
- kubectl create namespace nuxt-app: Creates a dedicated namespace called “nuxt-app” for organization and isolation
- kubectl config set-context --current --namespace=nuxt-app: Sets the current context to use the new namespace by default
- kubectl apply -f deployment.yaml: Applies the configuration, creating both the Deployment and Service resources
Testing and Debugging
Verify deployment:
kubectl get pods
kubectl logs -l app=nuxt-app
kubectl port-forward svc/nuxt-app 3000:3000
- kubectl get pods: Lists all pods in the current namespace, showing status and age
- kubectl logs -l app=nuxt-app: Fetches logs from all containers with the label app=nuxt-app
- kubectl port-forward svc/nuxt-app 3000:3000: Creates a temporary connection from your local port 3000 to port 3000 of the service, allowing you to access the application at http://localhost:3000
High Availability Architecture
Scaling Considerations
Implement horizontal scaling for high availability:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: nuxt-app-hpa
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: nuxt-app
minReplicas: 3
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 80
This HorizontalPodAutoscaler (HPA) resource automatically scales your application:
- scaleTargetRef: References the nuxt-app Deployment to control
- minReplicas: 3: Ensures at least 3 replicas run at all times for high availability
- maxReplicas: 10: Sets an upper limit of 10 replicas during high load
- metrics: Configures scaling based on CPU utilization, adding new pods when average CPU usage exceeds 80%
Resilience Patterns
Implement health checks:
livenessProbe:
httpGet:
path: /health
port: 3000
initialDelaySeconds: 30
readinessProbe:
httpGet:
path: /ready
port: 3000
initialDelaySeconds: 5
Enhanced health checks improve application reliability:
- livenessProbe: Checks a dedicated /health endpoint to determine if the container is alive; initialDelaySeconds: 30 waits 30 seconds before the first probe to allow application startup. Kubernetes restarts the container if this probe fails.
- readinessProbe: Checks a dedicated /ready endpoint to confirm the application can serve requests; initialDelaySeconds: 5 waits 5 seconds before the first check. Kubernetes will not route traffic to the pod until this probe succeeds.
Load Balancing
Configure service and ingress:
apiVersion: v1
kind: Service
metadata:
name: nuxt-app
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 3000
selector:
app: nuxt-app
This service configuration provides internal load balancing:
- type: ClusterIP: Creates an internal-only IP address
- port: 80: Exposes the service on standard HTTP port 80
- targetPort: 3000: Routes traffic to port 3000 on the pods
- selector: Distributes traffic across all pods with the label app=nuxt-app
Production Deployment with Helm
Helm Chart Structure
Create a new Helm chart:
helm create my-nuxt-app
This command generates a starter Helm chart with a standardized structure. It creates a directory named “my-nuxt-app” containing templates and default values that you’ll customize for your application.
Directory structure:
my-nuxt-app/
Chart.yaml
values.yaml
templates/
deployment.yaml
service.yaml
ingress.yaml
configmap.yaml
NOTES.txt
Chart Customization for Nuxt
1. Update Chart.yaml
Modify the Chart.yaml file to include proper metadata:
apiVersion: v2
name: my-nuxt-app
description: A Helm chart for Nuxt.js applications
type: application
version: 0.1.0
appVersion: "1.0.0"
dependencies:
- name: common
version: 1.x.x
repository: https://charts.bitnami.com/bitnami
# Optional common library for standardized functions
This file defines basic chart information and any dependencies.
2. Configure values.yaml
Expand the values.yaml to include all necessary configurations:
# Application configuration
replicaCount: 3
image:
repository: <registry>/my-nuxt-app
tag: 1.0.0
pullPolicy: IfNotPresent
# Pod-specific settings
podAnnotations: {}
podSecurityContext: {}
securityContext: {}
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
# Service configuration
service:
type: ClusterIP
port: 80
targetPort: 3000
# Ingress configuration
ingress:
enabled: true
className: nginx
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: letsencrypt-prod
hosts:
- host: myapp.example.com
paths:
- path: /
pathType: Prefix
tls:
- secretName: myapp-tls
hosts:
- myapp.example.com
# Resource management
resources:
requests:
memory: "256Mi"
cpu: "100m"
limits:
memory: "512Mi"
cpu: "200m"
# Scaling configuration
autoscaling:
enabled: true
minReplicas: 3
maxReplicas: 10
targetCPUUtilizationPercentage: 80
# Liveness and readiness probes
probes:
liveness:
path: /
initialDelaySeconds: 30
periodSeconds: 10
readiness:
path: /
initialDelaySeconds: 5
periodSeconds: 5
# Environment variables
env:
NODE_ENV: production
# Additional environment variables can be added here
# Volume mounts for static assets (if needed)
persistence:
enabled: false
# uncomment for persistent storage
# accessMode: ReadWriteOnce
# size: 1Gi
# storageClass: standard
This comprehensive values.yaml serves as a central configuration, making it easy to adjust settings without modifying templates.
3. Customize Deployment Template
Update the templates/deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "my-nuxt-app.fullname" . }}
labels:
{{- include "my-nuxt-app.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "my-nuxt-app.selectorLabels" . | nindent 6 }}
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
template:
metadata:
labels:
{{- include "my-nuxt-app.selectorLabels" . | nindent 8 }}
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 3000
protocol: TCP
env:
- name: NODE_ENV
value: {{ .Values.env.NODE_ENV }}
{{- range $key, $val := .Values.env }}
{{- if ne $key "NODE_ENV" }}
- name: {{ $key }}
value: {{ $val | quote }}
{{- end }}
{{- end }}
livenessProbe:
httpGet:
path: {{ .Values.probes.liveness.path }}
port: http
initialDelaySeconds: {{ .Values.probes.liveness.initialDelaySeconds }}
periodSeconds: {{ .Values.probes.liveness.periodSeconds }}
readinessProbe:
httpGet:
path: {{ .Values.probes.readiness.path }}
port: http
initialDelaySeconds: {{ .Values.probes.readiness.initialDelaySeconds }}
periodSeconds: {{ .Values.probes.readiness.periodSeconds }}
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- if .Values.persistence.enabled }}
volumeMounts:
- name: static-content
mountPath: /app/.output/public
{{- end }}
{{- if .Values.persistence.enabled }}
volumes:
- name: static-content
persistentVolumeClaim:
claimName: {{ include "my-nuxt-app.fullname" . }}-static
{{- end }}
This template creates a deployment with proper health checks, resource limits, and environment variable configuration for your Nuxt application.
4. Configure Service Template
Update the templates/service.yaml file:
apiVersion: v1
kind: Service
metadata:
name: {{ include "my-nuxt-app.fullname" . }}
labels:
{{- include "my-nuxt-app.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: {{ .Values.service.targetPort }}
protocol: TCP
name: http
selector:
{{- include "my-nuxt-app.selectorLabels" . | nindent 4 }}
This service exposes your Nuxt application within the cluster or externally, depending on the service type.
5. Set Up Ingress for External Access
Update the templates/ingress.yaml file:
{{- if .Values.ingress.enabled -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ include "my-nuxt-app.fullname" . }}
labels:
{{- include "my-nuxt-app.labels" . | nindent 4 }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.className }}
ingressClassName: {{ .Values.ingress.className }}
{{- end }}
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ .path }}
pathType: {{ .pathType }}
backend:
service:
name: {{ include "my-nuxt-app.fullname" $ }}
port:
number: {{ $.Values.service.port }}
{{- end }}
{{- end }}
{{- end }}
This creates an Ingress resource for routing external traffic to your application, with optional TLS for HTTPS.
6. Configure Autoscaling
Create a templates/hpa.yaml file:
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: {{ include "my-nuxt-app.fullname" . }}
labels:
{{- include "my-nuxt-app.labels" . | nindent 4 }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ include "my-nuxt-app.fullname" . }}
minReplicas: {{ .Values.autoscaling.minReplicas }}
maxReplicas: {{ .Values.autoscaling.maxReplicas }}
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
{{- end }}
This template enables automatic scaling of your application based on CPU utilization.
7. Add Persistent Volume (Optional)
For applications that need persistent storage, create a templates/pvc.yaml file:
{{- if .Values.persistence.enabled }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ include "my-nuxt-app.fullname" . }}-static
labels:
{{- include "my-nuxt-app.labels" . | nindent 4 }}
spec:
accessModes:
- {{ .Values.persistence.accessMode | default "ReadWriteOnce" }}
resources:
requests:
storage: {{ .Values.persistence.size | default "1Gi" }}
{{- if .Values.persistence.storageClass }}
storageClassName: {{ .Values.persistence.storageClass }}
{{- end }}
{{- end }}
This creates persistent storage for static assets if needed.
8. Create ConfigMap for Runtime Configuration
Add a templates/configmap.yaml file:
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "my-nuxt-app.fullname" . }}-config
labels:
{{- include "my-nuxt-app.labels" . | nindent 4 }}
data:
{{- if .Values.configData }}
{{- toYaml .Values.configData | nindent 2 }}
{{- end }}
Then update your values.yaml to include:
configData:
NUXT_PUBLIC_API_URL: https://api.example.com
# Add other public environment variables
Reference this ConfigMap in the deployment template by adding:
envFrom:
- configMapRef:
name: {{ include "my-nuxt-app.fullname" . }}-config
Secrets Management
For sensitive environment variables, create a templates/secret.yaml:
{{- if .Values.secretData }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "my-nuxt-app.fullname" . }}-secret
labels:
{{- include "my-nuxt-app.labels" . | nindent 4 }}
type: Opaque
data:
{{- range $key, $val := .Values.secretData }}
{{ $key }}: {{ $val | b64enc }}
{{- end }}
{{- end }}
Add to values.yaml:
secretData:
API_KEY: your-api-key
Reference these secrets in deployment.yaml:
envFrom:
- secretRef:
name: {{ include "my-nuxt-app.fullname" . }}-secret
Deployment Process
Install and upgrade commands:
# Initial install
helm install my-release my-nuxt-app --values production-values.yaml --namespace nuxt-app --create-namespace
# Upgrade existing deployment
helm upgrade my-release my-nuxt-app --values production-values.yaml --namespace nuxt-app
# Uninstall
helm uninstall my-release --namespace nuxt-app
For CI/CD integration:
# Pass dynamic values from CI/CD pipeline
helm upgrade --install my-release my-nuxt-app \
--set image.tag=${RELEASE_VERSION} \
--set ingress.hosts[0].host=${DOMAIN_NAME} \
--namespace nuxt-app
Testing Your Helm Chart
Before deploying, validate the chart:
# Lint the chart for syntax errors
helm lint ./my-nuxt-app
# Generate and view the manifests without installing
helm template my-release ./my-nuxt-app --values production-values.yaml
# Perform a dry run
helm install my-release ./my-nuxt-app --values production-values.yaml --dry-run
Advanced Deployment Strategies
Rolling Updates
Configure rolling updates in deployment:
spec:
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
This configuration ensures zero-downtime deployments:
- type: RollingUpdate: Specifies a gradual update strategy
- maxSurge: 1: Allows creating at most 1 pod above the desired count during updates
- maxUnavailable: 0: Ensures all pods remain available during the update (no downtime)
Blue-Green Deployment
Implement using service selectors:
apiVersion: v1
kind: Service
metadata:
name: nuxt-app
spec:
selector:
app: nuxt-app
version: blue # Switch between blue/green
This demonstrates a blue-green deployment approach using service selectors:
- Two deployments exist simultaneously (blue and green versions)
- The service routes traffic based on the version label
- To switch versions, you update only the service selector from “blue” to “green”
- This allows instant rollback by simply changing the selector back
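Concretely, the green deployment differs from the blue one only in its version labels and image tag. A sketch of the relevant fields (the container spec is otherwise identical to blue):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nuxt-app-green
spec:
  selector:
    matchLabels:
      app: nuxt-app
      version: green
  template:
    metadata:
      labels:
        app: nuxt-app
        version: green
    # spec.containers identical to the blue deployment,
    # but pointing at the new image tag
```

Switching traffic is then a single selector change, for example:
kubectl patch service nuxt-app -p '{"spec":{"selector":{"app":"nuxt-app","version":"green"}}}'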
Canary Releases
Use traffic splitting with service mesh or ingress controller:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/canary: "true"
nginx.ingress.kubernetes.io/canary-weight: "20"
This Ingress configuration implements a canary release pattern:
- nginx.ingress.kubernetes.io/canary: "true": Marks this as a canary (test) version
- nginx.ingress.kubernetes.io/canary-weight: "20": Routes 20% of traffic to this version
- The remaining 80% of traffic goes to the stable version
- This allows testing new versions with limited user exposure
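A complete canary Ingress also needs rules pointing at the canary pods. A sketch, assuming a second service named nuxt-app-canary (hypothetical) that selects the canary deployment:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nuxt-app-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "20"
spec:
  ingressClassName: nginx
  rules:
    - host: myapp.example.com      # must match the stable Ingress host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nuxt-app-canary   # hypothetical service for canary pods
                port:
                  number: 80
```

The canary annotations only take effect when a non-canary Ingress exists for the same host; NGINX then splits traffic between the two backends by weight.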
Production Considerations
Resource Optimization
Set resource limits:
resources:
requests:
memory: "256Mi"
cpu: "100m"
limits:
memory: "512Mi"
cpu: "200m"
This resource configuration ensures optimal cluster utilization:
- requests: Minimum guaranteed resources (used for pod scheduling)
  - memory: "256Mi": 256 MiB of RAM reserved
  - cpu: "100m": 0.1 CPU cores (100 millicores) reserved
- limits: Maximum allowed resource usage
  - memory: "512Mi": The pod will be restarted if it exceeds 512 MiB of RAM
  - cpu: "200m": CPU usage capped at 0.2 cores (200 millicores)
Monitoring Setup
- Implement Prometheus metrics
- Configure Grafana dashboards
- Set up alerting rules
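If you run the Prometheus Operator, scraping can be declared with a ServiceMonitor. A sketch only: it assumes your application exposes Prometheus metrics on /metrics, which Nuxt does not do out of the box (you would add a metrics middleware or sidecar first):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: nuxt-app
spec:
  selector:
    matchLabels:
      app: nuxt-app
  endpoints:
    - port: http          # the named port on the nuxt-app service
      path: /metrics      # assumed metrics endpoint
      interval: 30s
```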
Security Measures
- Implement RBAC policies
- Configure network policies
- Use Kubernetes secrets for sensitive data
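As an example of a network policy, the following sketch restricts ingress traffic so that only the ingress controller can reach the application pods. The namespace label depends on how your controller is installed; ingress-nginx is assumed here:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nuxt-app-ingress-only
spec:
  podSelector:
    matchLabels:
      app: nuxt-app
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx   # assumed controller namespace
      ports:
        - protocol: TCP
          port: 3000
```

Note that NetworkPolicy resources are only enforced if your cluster's CNI plugin supports them (e.g., Calico or Cilium).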
Troubleshooting Guide
Common Issues
Container problems:
kubectl logs <pod-name>
kubectl describe pod <pod-name>
kubectl exec -it <pod-name> -- /bin/sh
These commands help diagnose container issues:
- kubectl logs <pod-name>: Shows the console output of the application
- kubectl describe pod <pod-name>: Provides detailed information about the pod, including events and status
- kubectl exec -it <pod-name> -- /bin/sh: Opens an interactive shell inside the container for direct investigation
Networking issues:
kubectl get endpoints
kubectl get ingress
kubectl describe service nuxt-app
These commands help troubleshoot networking problems:
- kubectl get endpoints: Shows the actual pod IPs that back each service
- kubectl get ingress: Lists all Ingress resources and their status
- kubectl describe service nuxt-app: Provides detailed information about the service configuration, including endpoints and selector
Debugging Techniques
Helm debugging:
helm lint my-nuxt-app
helm template --debug my-nuxt-app
helm history my-release
These commands help debug Helm chart issues:
- helm lint my-nuxt-app: Checks the chart for best practices and potential errors
- helm template --debug my-nuxt-app: Renders the templates locally without installing, showing the generated YAML
- helm history my-release: Shows the revision history of a release, useful for tracking changes and rollbacks
Conclusion
This comprehensive guide provides a complete workflow for building and deploying Nuxt Vue applications to Kubernetes, from development through to production. By following these practices and configurations, teams can establish a robust, scalable, and maintainable deployment process for their Nuxt applications.
The combination of kubectl for development workflows and Helm for production deployments provides teams with the flexibility needed throughout the application lifecycle. Regular monitoring, proper resource management, and implementation of security best practices ensure a stable and secure production environment.
Remember to regularly review and update your deployment configurations, keep dependencies current, and maintain comprehensive documentation for your team. As your application grows, consider implementing additional features such as service mesh, advanced monitoring, and automated scaling policies to further enhance your deployment architecture.