Docker Interview Questions

35 questions with detailed answers

Question:
What is Docker and how does it differ from virtual machines?
Answer:
Docker is a containerization platform that packages applications and their dependencies into lightweight, portable containers. Key differences from VMs: 1) Architecture: Docker containers share the host OS kernel, while VMs have their own OS, 2) Resource Usage: Containers are more lightweight and use fewer resources, 3) Startup Time: Containers start in seconds, VMs take minutes, 4) Isolation: VMs provide stronger isolation, containers provide process-level isolation, 5) Portability: Containers are more portable across different environments. Example: A Docker container might use 50MB RAM, while a VM for the same application could use 1GB+.

Question:
Explain the Docker architecture and its main components.
Answer:
Docker architecture consists of: 1) Docker Client: Command-line interface that communicates with Docker daemon, 2) Docker Daemon (dockerd): Background service that manages containers, images, networks, 3) Docker Images: Read-only templates used to create containers, 4) Docker Containers: Running instances of Docker images, 5) Docker Registry: Storage for Docker images (Docker Hub, private registries), 6) Docker Engine: Runtime that creates and manages containers. Example workflow: docker run nginx - Client sends command to daemon, Daemon pulls nginx image from registry, Daemon creates and starts container.

Question:
What is a Dockerfile and explain its key instructions?
Answer:
Dockerfile is a text file containing instructions to build Docker images. Key instructions: 1) FROM: Sets base image, 2) RUN: Executes commands during build, 3) COPY/ADD: Copies files from host to image, 4) WORKDIR: Sets working directory, 5) EXPOSE: Documents ports the container listens on, 6) ENV: Sets environment variables, 7) CMD: Default command when container starts, 8) ENTRYPOINT: Configures container as executable. Example: FROM node:14, WORKDIR /app, COPY package.json ., RUN npm install, COPY . ., EXPOSE 3000, CMD ["npm", "start"].
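The instructions above can be assembled into a complete Dockerfile — a minimal sketch for a hypothetical Node.js app (file and port names are illustrative):

```dockerfile
# Base image pins the Node.js version
FROM node:14
# All subsequent paths are relative to /app
WORKDIR /app
# Copy the manifest first so the install layer stays cached
# while application code changes
COPY package.json ./
RUN npm install
# Copy the rest of the source
COPY . .
# Document the port the app listens on
EXPOSE 3000
# Default command; can be overridden at docker run time
CMD ["npm", "start"]
```

Build with docker build -t myapp . and run with docker run -p 3000:3000 myapp.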

Question:
What are Docker volumes and explain different types?
Answer:
Docker volumes provide persistent storage for containers. Types: 1) Named Volumes: Managed by Docker - docker volume create myvolume, docker run -v myvolume:/data app, 2) Bind Mounts: Mount host directory - docker run -v /host/path:/container/path app, 3) tmpfs Mounts: Temporary filesystem in memory - docker run --tmpfs /tmp app. Benefits: Data persistence beyond container lifecycle, Sharing data between containers, Backup and restore capabilities, Performance optimization. Example use case: Database containers use named volumes to persist data even when container is recreated.
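All three volume types can be declared side by side in a Compose file — a sketch with hypothetical service and path names:

```yaml
services:
  db:
    image: postgres:13
    volumes:
      - db_data:/var/lib/postgresql/data   # named volume, managed by Docker
      - ./config:/etc/app/config:ro        # bind mount from the host, read-only
    tmpfs:
      - /tmp                               # in-memory filesystem, lost on stop
volumes:
  db_data:                                 # declares the named volume
```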

Question:
Explain Docker Compose and provide a practical example.
Answer:
Docker Compose is a tool for defining and running multi-container applications using YAML files. Key features: 1) Service definition, 2) Network configuration, 3) Volume management, 4) Environment variables, 5) Dependencies between services. Example docker-compose.yml includes web service with build context, ports 3000:3000, depends on db, and db service with postgres:13 image, environment variables, and volumes. Commands: docker-compose up -d (Start services), docker-compose down (Stop and remove), docker-compose logs (View logs).
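The docker-compose.yml described above might look like this (image tags and credentials are illustrative):

```yaml
version: "3.8"
services:
  web:
    build: .                    # build from the local Dockerfile
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgres://user:pass@db:5432/mydb
    depends_on:
      - db                      # start db before web
  db:
    image: postgres:13
    environment:
      - POSTGRES_PASSWORD=pass
    volumes:
      - db_data:/var/lib/postgresql/data
volumes:
  db_data:
```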

Question:
What are Docker health checks and how do you implement them?
Answer:
Docker health checks monitor container health and report status; in Swarm mode, unhealthy tasks are replaced automatically (standalone Docker only flags the container as unhealthy unless paired with an external tool such as autoheal). Implementation methods: 1) Dockerfile HEALTHCHECK: HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 CMD curl -f http://localhost/ || exit 1, 2) Docker Compose healthcheck with test, interval, timeout, retries, start_period, 3) Runtime health check: docker run -d --health-cmd="curl -f http://localhost || exit 1" --health-interval=30s nginx. Health check states: starting (initial period), healthy (check passed), unhealthy (check failed). Benefits: Automatic task replacement in Swarm, Load balancer integration, Monitoring and alerting, Service discovery.
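The Compose variant mentioned in point 2 could be written as (the intervals mirror the Dockerfile example):

```yaml
services:
  web:
    image: nginx
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/"]
      interval: 30s      # how often to run the check
      timeout: 10s       # fail the check if it takes longer
      retries: 3         # consecutive failures before "unhealthy"
      start_period: 5s   # grace period before failures count
```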

Question:
What are the differences between CMD and ENTRYPOINT in Dockerfile?
Answer:
CMD and ENTRYPOINT both specify what command runs when a container starts, but they behave differently: 1) CMD: Provides default command and arguments, Can be overridden by docker run arguments, Only the last CMD instruction is used, Example: CMD ["nginx", "-g", "daemon off;"], 2) ENTRYPOINT: Configures container as executable, Positional docker run arguments are appended to it rather than replacing it (overriding requires the --entrypoint flag), Example: ENTRYPOINT ["nginx"], CMD ["-g", "daemon off;"], 3) Combined usage: ENTRYPOINT defines the executable, CMD provides default arguments, docker run myapp -t will run "nginx -t" instead of "nginx -g daemon off;", 4) Best practices: Use ENTRYPOINT for the main command, Use CMD for default arguments, Use exec form (JSON array) for both so signals reach the process directly instead of a wrapping shell.
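The combined usage in point 3, written as a Dockerfile fragment:

```dockerfile
FROM nginx:alpine
# The executable that always runs
ENTRYPOINT ["nginx"]
# Default arguments, replaced by any positional docker run arguments
CMD ["-g", "daemon off;"]
```

With this image, docker run myimage runs nginx -g "daemon off;", while docker run myimage -t runs nginx -t.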

Question:
Explain Docker container lifecycle and state management.
Answer:
Docker container lifecycle states: 1) Created: Container created but not started (docker create), 2) Running: Container is executing (docker start, docker run), 3) Paused: Container processes are paused (docker pause), 4) Stopped: Container has exited (docker stop), 5) Killed: Container forcefully terminated (docker kill), 6) Removed: Container deleted from system (docker rm). State transitions: docker create -> docker start -> Running, docker pause -> Paused -> docker unpause -> Running, docker stop -> Stopped -> docker start -> Running, docker kill -> Stopped, docker rm -> Removed. Management commands: docker ps (running containers), docker ps -a (all containers), docker logs (container output), docker inspect (detailed info), docker stats (resource usage). Best practices: Use health checks for automatic restart, Implement graceful shutdown handling, Monitor container states, Clean up stopped containers regularly with docker container prune.

Question:
How do you implement Docker container auto-restart policies?
Answer:
Docker restart policies automatically restart containers when they exit: 1) no (default): Never restart, 2) always: Always restart regardless of exit status, 3) unless-stopped: Restart unless manually stopped, 4) on-failure[:max-retries]: Restart only on failure. Examples: docker run --restart=always nginx, docker run --restart=on-failure:3 myapp, docker run --restart=unless-stopped redis. Docker Compose: restart: always, restart: on-failure, restart: unless-stopped. Use cases: always for critical services, on-failure for applications that may crash, unless-stopped for services that should survive reboots but respect manual stops. Monitoring: docker ps shows restart count, docker inspect shows restart policy. Best practices: Use unless-stopped for most services, Implement proper health checks, Monitor restart patterns, Set appropriate restart delays.
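In a Compose file the policies map directly onto the restart key — a sketch with illustrative service names:

```yaml
services:
  web:
    image: nginx
    restart: unless-stopped   # survives daemon restarts, respects manual stop
  worker:
    image: myapp              # hypothetical image name
    restart: on-failure       # only restart on non-zero exit
```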

Question:
Explain Docker networking modes and when to use each.
Answer:
Docker networking modes: 1) Bridge (default): Containers on same host can communicate - Use for single-host applications, 2) Host: Container uses host network directly - Use for high-performance networking, port conflicts, 3) None: No networking - Use for security-sensitive applications, 4) Overlay: Multi-host networking for Swarm - Use for distributed applications, 5) Macvlan: Assigns MAC address to container - Use for legacy applications requiring direct network access. Examples: docker run --network=host nginx, docker network create mynet, docker run --network=mynet app.

Question:
What is Docker Swarm and how does it provide orchestration?
Answer:
Docker Swarm is Docker's native clustering and orchestration solution. Key features: 1) Service management, 2) Load balancing, 3) Service discovery, 4) Rolling updates, 5) Scaling, 6) High availability. Architecture: Manager nodes control cluster state and scheduling, Worker nodes run containers. Example setup: docker swarm init, docker swarm join --token <token> <manager-ip>:2377, docker service create --replicas 3 --name web nginx, docker service scale web=5, docker service update --image nginx:latest web. Benefits: Built-in load balancing, Automatic failover, Declarative service model, Integrated with Docker CLI.

Question:
Explain multi-stage builds and their benefits.
Answer:
Multi-stage builds allow using multiple FROM statements in Dockerfile to create optimized images. Benefits: 1) Smaller final images, 2) Separation of build and runtime dependencies, 3) Better security (no build tools in production), 4) Simplified CI/CD pipelines. Example: Build stage uses node:14 AS builder with WORKDIR /app, COPY package.json, RUN npm install, COPY and RUN npm run build. Production stage uses nginx:alpine, COPY --from=builder /app/dist /usr/share/nginx/html, EXPOSE 80, CMD nginx. This reduces image size from ~1GB to ~50MB.
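The two stages described above, written out in full (paths assume a typical front-end build that emits to /app/dist):

```dockerfile
# --- Build stage: has npm and all build tooling ---
FROM node:14 AS builder
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . .
RUN npm run build

# --- Production stage: only nginx plus the built assets ---
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

Only the final stage ends up in the shipped image; the node:14 layers are discarded.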

Question:
How do you implement Docker security best practices?
Answer:
Docker security best practices: 1) Use official base images, 2) Keep images updated, 3) Run as non-root user, 4) Use minimal base images (Alpine, distroless), 5) Scan images for vulnerabilities, 6) Implement proper secrets management, 7) Use read-only filesystems, 8) Limit container capabilities. Example secure Dockerfile: FROM node:14-alpine, RUN addgroup -g 1001 -S nodejs, RUN adduser -S nextjs -u 1001, WORKDIR /app, COPY --chown=nextjs:nodejs . ., USER nextjs, EXPOSE 3000, CMD ["node", "server.js"]. Runtime security: docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE app, docker run --read-only --tmpfs /tmp app.

Question:
Explain Docker image layers and how they work.
Answer:
Docker images are built in layers using a union filesystem. Each Dockerfile instruction creates a new layer. How layers work: 1) Each layer is read-only, 2) Layers are stacked on top of each other, 3) Changes create new layers, 4) Layers are shared between images, 5) Only changed layers need to be downloaded. Example: FROM ubuntu:20.04 (Layer 1), RUN apt-get update (Layer 2), RUN apt-get install -y nginx (Layer 3), COPY index.html /var/www/html/ (Layer 4). Benefits: Efficient storage (shared layers), Faster builds (cached layers), Faster downloads (only new layers), Version control for images. Best practices: Combine RUN commands to reduce layers, Order instructions by change frequency, Use .dockerignore to exclude unnecessary files.

Question:
How do you troubleshoot Docker container issues?
Answer:
Docker troubleshooting strategies: 1) Check container status: docker ps -a (list all containers), docker logs <container> (view logs), docker inspect <container> (detailed info), 2) Debug running containers: docker exec -it <container> /bin/bash (interactive shell), docker stats (resource usage), docker top <container> (running processes), 3) Network troubleshooting: docker network ls (list networks), docker network inspect bridge (network details), docker port <container> (port mappings), 4) Common issues: Container exits immediately (check CMD/ENTRYPOINT), Port not accessible (verify port mapping and firewall), Out of disk space (clean up with docker system prune), Permission denied (check user permissions and SELinux).

Question:
Explain Docker registry and how to set up a private registry.
Answer:
Docker registry is a storage and distribution system for Docker images. Types: 1) Docker Hub (public), 2) Private registries (self-hosted), 3) Cloud registries (AWS ECR, Google GCR, Azure ACR). Setting up private registry: docker run -d -p 5000:5000 --name registry registry:2, docker tag myapp localhost:5000/myapp, docker push localhost:5000/myapp, docker pull localhost:5000/myapp. Production setup includes TLS certificates, authentication with htpasswd, and persistent volumes. Authentication: htpasswd -Bbn username password > auth/htpasswd, docker login localhost:5000.

Question:
Explain Docker secrets management and best practices.
Answer:
Docker secrets management protects sensitive data like passwords, API keys, and certificates. Docker Swarm Secrets: echo "mypassword" | docker secret create db_password -, docker service create --secret db_password --env POSTGRES_PASSWORD_FILE=/run/secrets/db_password postgres. Best practices: 1) Prefer file-based secret injection over environment variables (env vars are visible via docker inspect and can leak into logs), 2) Use external secret management (Vault, AWS Secrets Manager), 3) Rotate secrets regularly, 4) Limit secret access with RBAC, 5) Audit secret usage. External secrets example: SECRET=$(aws secretsmanager get-secret-value --secret-id prod/db/password --query SecretString --output text), docker run -e DB_PASSWORD="$SECRET" myapp (an env-var fallback: common outside Swarm, but weaker than file-based injection).

Question:
How do you optimize Docker images for production?
Answer:
Docker image optimization strategies: 1) Use minimal base images: FROM alpine:3.14 (5MB) instead of ubuntu (72MB), or distroless images, 2) Multi-stage builds to separate build and runtime dependencies, 3) Optimize layers: Combine commands with && and clean up in same layer, Order by change frequency, 4) Use .dockerignore: node_modules, .git, *.md, .env, 5) Remove unnecessary files: apt-get remove build tools after installation, 6) Use specific tags: FROM node:16.14.2-alpine not node:latest. Results: Image size reduction from 1GB to 50MB is common.
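Points 3 and 5 above combine into a single-layer install/use/remove pattern — a sketch assuming sources were already copied to /src:

```dockerfile
FROM ubuntu:20.04
# Install, use, and remove build tools in ONE layer, so the
# tools and apt caches never persist in any image layer
RUN apt-get update && \
    apt-get install -y --no-install-recommends build-essential && \
    make -C /src install && \
    apt-get purge -y build-essential && \
    apt-get autoremove -y && \
    rm -rf /var/lib/apt/lists/*
```

If the purge were a separate RUN instruction, the tools would still exist in the earlier layer and the image would not shrink.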

Question:
Explain Docker container resource limits and monitoring.
Answer:
Docker resource management controls CPU, memory, and I/O usage. Setting limits: 1) Memory limits: docker run -m 512m nginx (512MB limit), docker run --oom-kill-disable nginx, 2) CPU limits: docker run --cpus="1.5" nginx (1.5 CPU cores), docker run --cpu-shares=512 nginx (Relative weight), 3) Docker Compose deploy resources with limits and reservations. Monitoring: docker stats (Real-time stats), docker system df (Disk usage), docker system events (System events), docker inspect container | jq .HostConfig.Memory. Integration with monitoring tools: Prometheus + cAdvisor, Grafana dashboards, ELK stack for logs.
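The Compose deploy block from point 3 might look like this (numbers are illustrative):

```yaml
services:
  web:
    image: nginx
    deploy:
      resources:
        limits:             # hard ceilings enforced by the runtime
          cpus: "1.5"
          memory: 512M
        reservations:       # guaranteed minimum for scheduling
          cpus: "0.5"
          memory: 256M
```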

Question:
What is Docker BuildKit and what advantages does it provide?
Answer:
BuildKit is Docker improved build engine with enhanced features and performance. Key features: 1) Parallel builds, 2) Build cache optimization, 3) Secrets and SSH forwarding, 4) Multi-platform builds, 5) Advanced Dockerfile syntax. Enabling BuildKit: export DOCKER_BUILDKIT=1, docker build . Advanced features: Cache mounts (RUN --mount=type=cache,target=/root/.npm npm install), Secrets (RUN --mount=type=secret,id=mypassword cat /run/secrets/mypassword), SSH forwarding, Multi-platform builds (docker buildx build --platform linux/amd64,linux/arm64 -t myapp .). Benefits: 2-10x faster builds, Better caching, Parallel execution, Enhanced security, Cross-platform support.
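The cache-mount and secret features need the BuildKit Dockerfile syntax header — a sketch assuming a hypothetical npmrc secret:

```dockerfile
# syntax=docker/dockerfile:1
FROM node:16-alpine
WORKDIR /app
COPY package.json ./
# Persist the npm download cache across builds without
# baking it into a layer
RUN --mount=type=cache,target=/root/.npm npm install
# Read a build-time secret; it never appears in image history
RUN --mount=type=secret,id=npmrc cat /run/secrets/npmrc
COPY . .
CMD ["node", "server.js"]
```

Build with docker build --secret id=npmrc,src=.npmrc . (DOCKER_BUILDKIT=1 must be set on older Docker versions).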

Question:
How do you implement CI/CD pipelines with Docker?
Answer:
Docker CI/CD pipeline implementation: 1) Basic pipeline stages: Build Docker image, Run tests in containers, Push to registry, Deploy to environments, 2) GitLab CI example with stages build, test, deploy using docker build, docker push, docker run for tests, docker service update for deployment, 3) GitHub Actions example with checkout, build image, run tests, deploy steps, 4) Multi-stage pipeline: FROM node:16 AS test with COPY, RUN npm ci, RUN npm test, then FROM node:16-alpine AS production with COPY --from=test. Best practices: Use specific image tags, Implement security scanning, Cache dependencies, Parallel test execution, Blue-green deployments.

Question:
Explain Docker logging strategies and log management.
Answer:
Docker logging strategies for production environments: 1) Logging drivers: json-file (default), syslog, fluentd with docker run --log-driver options, 2) Configure in daemon.json with log-driver and log-opts for max-size and max-file, 3) Centralized logging with ELK: app with fluentd logging driver, fluentd service for log processing, 4) Application logging best practices: Structured logging with JSON format, Include correlation IDs, timestamps, user context, 5) Log aggregation patterns: Sidecar containers for log processing, Log shipping to external systems, Real-time log streaming, Log retention policies, 6) Monitoring: docker logs -f --tail 100 container, docker logs container | grep ERROR | wc -l.
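The daemon.json rotation settings from point 2, written out (/etc/docker/daemon.json; requires a daemon restart):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

This caps each container at three 10 MB log files, preventing unbounded disk growth from chatty containers.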

Question:
How do you handle data persistence and backup strategies in Docker?
Answer:
Docker data persistence and backup strategies: 1) Volume types: Named volumes (docker volume create db_data, docker run -v db_data:/var/lib/mysql mysql), Bind mounts (docker run -v /host/data:/container/data app), tmpfs (docker run --tmpfs /tmp app), 2) Database backup strategies: docker exec mysql mysqldump -u root -p database > backup.sql, docker exec postgres pg_dump -U user database > backup.sql, Volume backup with alpine tar, 3) Automated backup with cron scripts, 4) Docker Compose with backup service running pg_dump periodically, 5) Cross-region backup: aws s3 sync /backups s3://my-backup-bucket/. Best practices: Regular backup testing, Multiple backup locations, Encryption for sensitive data, Monitoring backup success, Documented recovery procedures.
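The Compose backup service from point 4 might be sketched like this — the loop interval, credentials, and paths are all illustrative (note the $$ escaping so Compose passes a literal $ to the shell):

```yaml
services:
  db:
    image: postgres:13
    environment:
      - POSTGRES_PASSWORD=pass
    volumes:
      - db_data:/var/lib/postgresql/data
  backup:
    image: postgres:13
    depends_on: [db]
    environment:
      - PGPASSWORD=pass
    volumes:
      - ./backups:/backups          # dumps land on the host
    # Dump the database once a day with a dated filename
    entrypoint: >
      sh -c 'while true; do
      pg_dump -h db -U postgres postgres > /backups/backup-$$(date +%F).sql;
      sleep 86400;
      done'
volumes:
  db_data:
```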

Question:
Explain Docker performance optimization techniques.
Answer:
Docker performance optimization strategies: 1) Image optimization: Use Alpine Linux (smaller, faster), Multi-stage builds to separate build and runtime, 2) Resource limits: docker run --cpus="2" --memory="1g" myapp, I/O limits with --device-read-bps, 3) Storage optimization: Use overlay2 storage driver, Regular cleanup with docker system prune, 4) Network optimization: Use host networking for high throughput (docker run --network=host myapp), Custom networks for isolation, 5) Container startup optimization: Copy package.json first for better caching, Use exec form for CMD, 6) Monitoring: docker stats --no-stream, Profile with nicolaka/netshoot, 7) Production optimizations: Set resource limits and reservations, Configure ulimits and sysctls. Performance tips: Use read-only containers, Minimize layer count, Use .dockerignore effectively, Enable BuildKit, Use specific base image tags.

Question:
How do you implement blue-green deployment with Docker?
Answer:
Blue-green deployment with Docker provides zero-downtime deployments: 1) Basic concept: Blue (Current production environment), Green (New version environment), Switch traffic from blue to green, 2) Docker Swarm implementation: docker service create --name app-green --replicas 3 myapp:v2, Test green environment, Update load balancer labels, Remove blue after verification, 3) Docker Compose with Traefik using labels for routing rules, 4) Automated deployment script: Detect current color, Deploy new version, Health check with curl, Switch traffic with label updates, Cleanup old version, 5) Load balancer configuration with nginx configs. Benefits: Zero downtime deployments, Easy rollback capability, Production testing before switch, Reduced deployment risk.

Question:
Explain Docker in production: monitoring, scaling, and maintenance.
Answer:
Docker production management strategies: 1) Monitoring stack: Prometheus for metrics collection, Grafana for visualization, cAdvisor for container metrics with volumes mounted for system access, 2) Auto-scaling with Docker Swarm: docker service create with resource limits, Manual scaling with docker service scale, Auto-scaling script monitoring CPU usage and scaling replicas, 3) Health monitoring: Script checking service replicas vs running instances, Send alerts for unhealthy services, 4) Maintenance procedures: Rolling updates (docker service update --image myapp:v2 web), Drain node for maintenance (docker node update --availability drain node1), Backup and restore with tar, Log rotation, 5) Security maintenance: Regular cleanup of unused images and containers (docker system prune), Vulnerability scanning (docker scan), Regular base-image updates. Best practices: Comprehensive monitoring, Automate scaling decisions, Regular backup testing, Security patch management, Capacity planning, Incident response procedures.

Question:
How do you implement container orchestration with Docker Swarm vs Kubernetes?
Answer:
Container orchestration comparison: Docker Swarm: 1) Native Docker solution, simpler setup, 2) Built-in service discovery and load balancing, 3) Declarative service model, 4) Rolling updates and rollbacks, 5) Secrets and config management, 6) Example: docker swarm init, docker service create --replicas 3 myapp. Kubernetes: 1) More complex but feature-rich, 2) Pods, Services, Deployments, ConfigMaps, 3) Horizontal Pod Autoscaler, 4) Advanced networking with CNI, 5) Helm for package management, 6) Example: kubectl create deployment myapp --image=myapp --replicas=3. When to choose: Docker Swarm for simpler deployments, existing Docker expertise, smaller teams. Kubernetes for complex applications, advanced features, large scale, multi-cloud deployments. Migration path: Start with Swarm, migrate to Kubernetes as complexity grows.

Question:
How do you implement Docker container communication and service discovery?
Answer:
Docker container communication methods: 1) Same host communication: Default bridge network allows containers to communicate by IP, Custom bridge networks enable communication by container name, Example: docker network create mynet, docker run --network=mynet --name web nginx, docker run --network=mynet --name app myapp (app can reach web by name), 2) Multi-host communication: Overlay networks in Docker Swarm, Example: docker network create --driver overlay myoverlay, 3) Service discovery: Docker Swarm built-in service discovery, Containers can reach services by service name, Load balancing across service replicas, 4) External service discovery: Consul, etcd, or cloud-native solutions, 5) Port publishing: docker run -p 8080:80 nginx (host port 8080 -> container port 80), 6) Environment variables: Pass service endpoints as env vars, 7) Volume sharing: Shared volumes for file-based communication. Best practices: Use custom networks for isolation, Implement health checks, Use service names instead of IPs, Monitor network performance.
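Name-based communication on a custom network (point 1) as a Compose sketch — app reaches web at http://web via Docker's embedded DNS:

```yaml
services:
  web:
    image: nginx
    networks: [mynet]
  app:
    image: myapp                     # hypothetical image name
    networks: [mynet]
    environment:
      - BACKEND_URL=http://web:80    # service name resolves to web's container
networks:
  mynet:
    driver: bridge
```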

Question:
What are Docker plugins and how do you use them?
Answer:
Docker plugins extend Docker functionality for storage, networking, and authorization: 1) Volume plugins: Provide persistent storage backends, Examples: REX-Ray for cloud storage, Flocker for data management, Installation: docker plugin install rexray/ebs, Usage: docker volume create --driver rexray/ebs myvolume, 2) Network plugins: Custom networking solutions, Examples: Weave, Calico for advanced networking, Installation: docker plugin install weaveworks/net-plugin, 3) Authorization plugins: Control access to Docker API, Example: Twistlock for security policies, 4) Log plugins: Custom log drivers, Examples: Fluentd, Splunk drivers, 5) Plugin management: docker plugin ls (list plugins), docker plugin install (install plugin), docker plugin enable/disable (control plugin state), docker plugin rm (remove plugin). Use cases: Multi-cloud storage, Advanced networking, Security compliance, Custom logging. Best practices: Test plugins in development, Monitor plugin performance, Keep plugins updated, Have fallback options.

Question:
How do you implement Docker image scanning and vulnerability management?
Answer:
Docker image scanning and vulnerability management: 1) Docker Scan: docker scan myimage:latest, Powered by Snyk (deprecated in recent Docker releases in favor of docker scout), Identifies vulnerabilities in base images and dependencies, 2) Third-party tools: Clair (open source), Twistlock/Prisma Cloud, Aqua Security, Anchore Engine, 3) CI/CD integration: Add scanning step in pipeline, Fail builds on high-severity vulnerabilities, Example: docker scan --severity high myimage, 4) Base image selection: Use official images, Choose minimal base images (Alpine, distroless), Keep base images updated, 5) Dependency management: Regularly update application dependencies, Use package lock files, Remove unnecessary packages, 6) Runtime protection: Monitor running containers, Implement runtime security policies, Use admission controllers in Kubernetes, 7) Compliance: Regular security audits, Vulnerability reporting, Patch management processes. Best practices: Scan early and often, Automate vulnerability detection, Prioritize critical vulnerabilities, Maintain security baseline, Document remediation procedures.

Question:
Explain Docker storage drivers and their performance characteristics.
Answer:
Docker storage drivers manage container filesystem layers: 1) overlay2 (recommended): Default on most systems, Good performance, Copy-on-write efficiency, Supports up to 128 lower layers, 2) aufs: Legacy driver, Good performance on Ubuntu, Not available on all kernels, 3) devicemapper: Block-level storage, Good for production, Requires configuration, Higher overhead, 4) btrfs: Filesystem-level, Good for development, Snapshot capabilities, Requires btrfs filesystem, 5) zfs: Enterprise features, Compression and deduplication, High memory usage, 6) vfs: No copy-on-write, Poor performance, Used for testing. Performance characteristics: overlay2 has best performance/compatibility balance, devicemapper good for production with proper configuration, aufs legacy but stable, btrfs/zfs have advanced features but higher overhead. Configuration: Set in /etc/docker/daemon.json with storage-driver option, Requires Docker restart, Cannot change on existing installations without data loss. Best practices: Use overlay2 for most cases, Monitor storage performance, Regular cleanup with docker system prune, Consider storage requirements for production.

Question:
How do you implement Docker container backup and disaster recovery?
Answer:
Docker container backup and disaster recovery strategies: 1) Image backup: Push images to multiple registries, docker tag myapp registry1.com/myapp, docker push registry1.com/myapp, Export images: docker save myapp > myapp.tar, Import images: docker load < myapp.tar, 2) Volume backup: Named volumes: docker run --rm -v myvolume:/data -v $(pwd):/backup alpine tar czf /backup/backup.tar.gz -C /data ., Bind mounts: Regular filesystem backup, 3) Container state backup: docker commit container myapp:backup (not recommended for production), 4) Application-level backup: Database dumps: docker exec postgres pg_dump database > backup.sql, Configuration exports, 5) Automated backup: Cron jobs for regular backups, Backup rotation policies, Cross-region replication, 6) Disaster recovery: Infrastructure as Code for quick rebuild, Multi-region deployments, Automated failover procedures, Recovery testing. Best practices: Test restore procedures regularly, Document recovery steps, Automate backup processes, Monitor backup success, Encrypt sensitive backups, Maintain multiple backup locations.

Question:
What are Docker init systems and why are they important?
Answer:
Docker init systems handle process management inside containers: 1) Problem: PID 1 responsibilities in containers, Signal handling (SIGTERM, SIGINT), Zombie process reaping, Proper shutdown procedures, 2) Solutions: tini (lightweight init), dumb-init (simple process supervisor), s6-overlay (full init system), Docker --init flag, 3) Implementation: Dockerfile: RUN apk add --no-cache tini, ENTRYPOINT ["/sbin/tini", "--"], CMD ["myapp"], Docker run: docker run --init myapp, 4) Benefits: Proper signal forwarding to application, Zombie process cleanup, Graceful shutdown handling, Better container behavior, 5) Use cases: Applications that spawn child processes, Long-running services, Containers that need proper signal handling, 6) Example without init: Application ignores SIGTERM, docker stop waits 10 seconds then SIGKILL, Zombie processes accumulate. Example with init: tini forwards SIGTERM to application, Application shuts down gracefully, Clean process tree. Best practices: Always use init for production containers, Test signal handling, Monitor process behavior, Use minimal init systems.

Question:
How do you implement Docker container resource quotas and limits in production?
Answer:
Docker container resource quotas and limits for production: 1) Memory limits: Hard limits: docker run -m 1g myapp (container killed if exceeded), Soft limits: --memory-reservation 512m (reclaimed under pressure), Swap control: --memory-swap 2g (total memory + swap), OOM behavior: --oom-kill-disable (prevent OOM killer), 2) CPU limits: CPU shares: --cpu-shares 512 (relative weight), CPU quota: --cpus 1.5 (absolute limit), CPU affinity: --cpuset-cpus 0,1 (specific cores), 3) I/O limits: Block I/O: --device-read-bps /dev/sda:1mb, --device-write-bps /dev/sda:1mb, --blkio-weight 500 (relative I/O priority), 4) Network limits: Bandwidth limiting with tc (traffic control), Container network namespaces, 5) Production implementation: Docker Compose deploy resources, Kubernetes resource requests/limits, Monitoring with cAdvisor, Prometheus metrics, 6) Best practices: Set both requests and limits, Monitor resource usage, Implement alerting, Test under load, Plan for peak usage, Use horizontal scaling. Example: docker run --memory 1g --cpus 1.0 --device-read-bps /dev/sda:10mb myapp.

Question:
What are Docker context and how do you manage multiple Docker environments?
Answer:
Docker context allows managing multiple Docker environments from a single client: 1) Default context: Points to local Docker daemon, 2) Remote contexts: Connect to remote Docker hosts, Docker Swarm clusters, Kubernetes clusters, 3) Context management: docker context create myremote --docker "host=tcp://remote:2376", docker context use myremote, docker context ls (list contexts), docker context inspect myremote, 4) Use cases: Development vs production environments, Multiple cloud providers, Hybrid deployments, Team collaboration, 5) Security: TLS certificates for secure connections, SSH tunneling for remote access, 6) Examples: docker context create prod --docker "host=tcp://prod.example.com:2376,ca=ca.pem,cert=cert.pem,key=key.pem", docker context use prod, docker ps (now shows prod containers). Benefits: Single CLI for multiple environments, Secure remote management, Easy environment switching, Simplified deployment workflows. Best practices: Use descriptive context names, Secure remote connections, Document context configurations, Regular context cleanup.
Study Tips
  • Read each question carefully
  • Try to answer before viewing the solution
  • Practice explaining concepts out loud
  • Review regularly to reinforce learning