After weeks of building, troubleshooting, and optimizing my CCNA lab environment, I am excited to share the entire project — now fully documented and open-sourced on GitHub. This post walks through the journey from an initial EVE-NG deployment to a fully automated Proxmox-based lab using Terraform, Ansible, and custom shell scripts.
You can find the complete repository here: github.com/jczaldivar71/eve-ng-ccna-lab
Project Overview
The EVE-NG CCNA Lab project started as a straightforward network simulation environment for CCNA study. It quickly evolved into a full infrastructure-as-code project covering:
- EVE-NG lab deployment with API-driven automation
- Migration from EVE-NG to Proxmox for better performance and scalability
- Custom shell scripts for image management, licensing, and node orchestration
- A Python script (generate_readme.py) to auto-generate comprehensive documentation
- qcow2 disk image optimization achieving a 39% storage reduction
- Terraform and Ansible playbooks for reproducible infrastructure deployment
GitHub Documentation and the generate_readme.py Script
One of the key pieces of this project is the generate_readme.py Python script. Rather than manually maintaining a README that would inevitably fall out of sync with the actual project structure, I wrote a script that scans the repository and automatically generates a comprehensive README.md file.
The script inspects every directory — configs/, scripts/, terraform-ansible/, topology/, and images/ — and produces a fully formatted document with a table of contents, script references, setup instructions, and troubleshooting tips. Running it is as simple as:
```shell
cd scripts/
python generate_readme.py
```
The generated README covers 13 sections including Overview, Project Structure, Prerequisites, Quick Start, Lab Topology, Scripts Reference, qcow2 Image Management, EVE-NG API Usage, Proxmox Deployment, Configuration Files, Troubleshooting, Known Limitations, and License information. At 340 lines, it serves as a complete guide for anyone wanting to replicate or build upon this lab.
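The full script lives in the repository, but the core idea can be sketched in a few lines. The simplified version below is illustrative, not the actual script — the section names here are a small subset, and the real layout differs — but it shows the approach: read the structure from disk so the documentation cannot drift from the code.

```python
from pathlib import Path

# Illustrative subset only -- the real script emits 13 sections.
SECTIONS = ["Overview", "Project Structure", "Scripts Reference"]

def generate_readme(repo_root: str) -> str:
    """Scan a repository checkout and return a Markdown README string."""
    root = Path(repo_root)
    lines = ["# EVE-NG CCNA Lab", "", "## Table of Contents"]
    for title in SECTIONS:
        anchor = title.lower().replace(" ", "-")
        lines.append(f"- [{title}](#{anchor})")

    lines += ["", "## Overview",
              "Automated CCNA lab environment on EVE-NG and Proxmox."]

    # Project structure: list whichever top-level directories are present
    lines += ["", "## Project Structure"]
    for name in sorted(p.name for p in root.iterdir() if p.is_dir()):
        lines.append(f"- `{name}/`")

    # Scripts reference: one bullet per shell script found in scripts/
    lines += ["", "## Scripts Reference"]
    scripts_dir = root / "scripts"
    if scripts_dir.is_dir():
        for script in sorted(scripts_dir.glob("*.sh")):
            lines.append(f"- `{script.name}`")

    return "\n".join(lines) + "\n"
```

Point a function like this at the repo root, write the result to README.md, and the document regenerates itself whenever the tree changes.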
EVE-NG to Proxmox Migration
EVE-NG is a fantastic network emulation platform, but I ran into limitations around resource management and integration with modern IaC tools. The decision to migrate to Proxmox was driven by several factors:
- Better resource control: Proxmox provides fine-grained CPU, memory, and storage allocation through its API
- Terraform integration: The Proxmox Terraform provider enables declarative infrastructure definitions
- Thin provisioning: Proxmox handles thin-provisioned qcow2 images natively, which was critical for storage optimization
- Ansible compatibility: Post-deployment configuration is seamless with Ansible playbooks targeting Proxmox VMs
The migration involved exporting router and switch images from EVE-NG, converting and optimizing the qcow2 disk images, and then redeploying them on Proxmox using Terraform. The entire workflow is captured in the terraform-ansible/ directory of the repository.
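In outline, the per-image workflow looks like this (the hostname, image folder name, and VM ID are placeholders — adjust them to your environment):

```shell
# Copy the disk image off the EVE-NG host (EVE-NG keeps QEMU images
# under /opt/unetlab/addons/qemu/<image-name>/)
scp root@eve-ng:/opt/unetlab/addons/qemu/viosl2-15.2/virtioa.qcow2 ./iosvl2.qcow2

# Optimize it (see qcow2-optimize.sh below), then import into Proxmox
# storage and attach the resulting disk to VM 101
qm importdisk 101 iosvl2.qcow2 local-lvm
```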
Automation Scripts
The scripts/ directory contains six purpose-built shell scripts that automate every aspect of lab management:
- eve-ng-api-auth.sh: Handles cookie-based API authentication with EVE-NG, exporting session tokens for use in subsequent API calls. Includes examples for listing labs, getting node details, and starting all nodes.
- start-lab-nodes.sh: Automates the process of starting all lab nodes through the EVE-NG REST API with proper sequencing and health checks.
- scp-upload-images.sh: Securely transfers qcow2 images to the EVE-NG or Proxmox host via SCP with progress tracking and integrity verification.
- qcow2-optimize.sh: The image optimization workhorse — converts, compresses, and thin-provisions qcow2 disk images (more on this below).
- fix-permissions.sh: Ensures correct file ownership and permissions on EVE-NG image directories, a common source of lab startup failures.
- iol-license-fix.sh: Generates and applies the proper IOL (IOS on Linux) license file, which is required for Cisco IOL images to boot correctly.
Each script is documented with usage instructions and can be run independently or chained together for a complete deployment workflow.
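As an illustration of the cookie-based flow that eve-ng-api-auth.sh wraps, the raw curl equivalent looks roughly like this. The host, credentials, and lab name are placeholders, and the endpoint paths follow the community-documented EVE-NG API, so they may differ slightly between versions:

```shell
EVE="http://192.168.1.10"    # placeholder EVE-NG host

# Log in; EVE-NG returns a session cookie that must accompany later calls
curl -s -c /tmp/eve.cookies -X POST "$EVE/api/auth/login" \
     -d '{"username": "admin", "password": "eve", "html5": "-1"}'

# Reuse the cookie, e.g. to list labs in the root folder...
curl -s -b /tmp/eve.cookies "$EVE/api/folders/"

# ...or to fetch node details for a specific lab
curl -s -b /tmp/eve.cookies "$EVE/api/labs/ccna.unl/nodes"
```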
qcow2 Image Optimization
One of the most impactful parts of this project was optimizing the qcow2 disk images. Network appliance images (Cisco IOSv, IOSvL2, CSR1000v, etc.) often ship with significant wasted space — preallocated but unused disk blocks that consume real storage.
The qcow2-optimize.sh script automates a multi-step optimization pipeline:
- Sparsification: Uses virt-sparsify to zero out unused blocks within the guest filesystem
- Compression: Applies qcow2 internal compression via qemu-img convert -c
- Thin provisioning: Ensures metadata is set for thin-provisioned allocation on the hypervisor
- Integrity check: Runs qemu-img check to verify image health post-optimization
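Condensed to its essential commands, the pipeline looks like this (the filename is a placeholder, and virt-sparsify ships with libguestfs-tools):

```shell
IMG=iosv.qcow2            # placeholder input image

# 1. Zero out unused guest filesystem blocks
virt-sparsify --in-place "$IMG"

# 2. Rewrite the image with qcow2 internal compression
qemu-img convert -O qcow2 -c "$IMG" "${IMG%.qcow2}-opt.qcow2"

# 3. Verify the optimized image is healthy
qemu-img check "${IMG%.qcow2}-opt.qcow2"
```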
The results were significant: total image storage dropped from 30GB to 18.3GB — a 39% reduction. This is especially meaningful in a home lab where storage is often limited. The optimized images boot identically to the originals but consume far less disk space on the Proxmox host.
Terraform and Ansible Deployment
The final piece of the puzzle is fully automated deployment using Terraform and Ansible. The terraform-ansible/ directory contains everything needed to stand up the lab from scratch:
Terraform handles the infrastructure provisioning:
- VM creation on Proxmox with defined CPU, memory, and disk parameters
- Network interface configuration with VLAN tagging
- Cloud-init integration for initial bootstrapping
- State management for tracking deployed resources
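For a sense of what the Terraform side looks like, here is a minimal resource definition assuming the community telmate/proxmox provider. Attribute names vary between provider versions and all values here are placeholders, so treat this as a sketch rather than the repo's exact configuration:

```hcl
resource "proxmox_vm_qemu" "iosv_router" {
  name        = "r1"        # placeholder VM name
  target_node = "pve"       # placeholder Proxmox node
  cores       = 1
  memory      = 1024

  disk {
    storage = "local-lvm"
    size    = "2G"
    type    = "scsi"
  }

  network {
    model  = "e1000"
    bridge = "vmbr0"
    tag    = 10             # VLAN tag for the lab segment
  }
}
```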
Ansible manages the post-deployment configuration:
- init-proxmox.yml: Initializes the Proxmox host with required packages, storage configuration, and network bridges
- deploy-vm.yml: Deploys individual VMs with their specific configurations
- remove-gateway.yml: Cleans up default gateway routes that can interfere with lab routing exercises
Configuration variables are stored in group_vars/all.yml (with a .sample template provided), and the hosts inventory file defines the Proxmox target. The ansible.cfg sets sensible defaults for host key checking and privilege escalation.
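The variable names below are illustrative (the repo's .sample template is the authoritative reference), but a minimal group_vars/all.yml follows this shape:

```yaml
# group_vars/all.yml -- sample values only; copy the provided .sample
# template and adjust for your environment
proxmox_host: 192.168.1.20
proxmox_node: pve
proxmox_storage: local-lvm
vm_memory: 2048
vm_cores: 2
```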
With this setup, spinning up a complete CCNA lab goes from a manual multi-hour process to a single command:
```shell
cd terraform-ansible/
terraform init && terraform apply
ansible-playbook -i hosts deploy-vm.yml
```
What’s Next
This project is a living repository — I plan to continue adding to it as I progress through my CCNA studies and expand the lab. Future additions may include:
- Additional topology configurations for specific CCNA exam topics
- Integration with network monitoring tools
- CI/CD pipeline for automated lab testing
- Support for additional platforms (VIRL, GNS3)
If you are studying for the CCNA or building your own home lab, feel free to fork the repository and adapt it to your needs. Contributions and feedback are always welcome.
GitHub Repository: https://github.com/jczaldivar71/eve-ng-ccna-lab



