# Lab Environment Overview
Complete topology, hardware, networking, and service inventory for the pgnet.io VCF 9.0.2 home lab.
## Overview
The pgnet.io home lab is a production-equivalent VMware Cloud Foundation 9.0.2 environment built on commodity AMD Ryzen hardware. It runs a Consolidated Architecture — a single 3-node cluster that carries both the VCF management stack and user workloads simultaneously, eliminating the need for a separate workload domain.
The lab covers the full VCF stack: core compute and storage (vSAN ESA), software-defined networking (NSX), identity (Active Directory + ADCS), observability (VCF Operations, Logs, Ops for Networks), automation (VCF Automation + Salt), and Kubernetes (Supervisor + VKS).
Design goals:
- Minimal hardware footprint — 3 nodes, no dedicated workload cluster
- Offline lifecycle management — no direct internet access from the VCF cluster
- BGP dynamic routing — full Tier-0 peering, no static routes
- Consumer hardware with lab-specific workarounds (vSAN ESA Mock VIB, AMD Ryzen patches)
## Physical Topology
```mermaid
graph TB
    subgraph inet[Internet]
        ISP[ISP Uplink]
    end
    subgraph net[Physical Network]
        FW[Firewall / Router]
        SW[Core Switch]
    end
    subgraph vcf[VCF Cluster - 3 Nodes]
        H1[pgesxa1 - 10.200.1.220]
        H2[pgesxa2 - 10.200.1.222]
        H3[pgesxa3 - 10.200.1.224]
    end
    subgraph sup[Supporting Infrastructure]
        INF[DNS / NTP / Depot - 10.200.1.240]
        NAS[pgnas - NFS - 10.200.1.110]
        WIN[winsrv1 - Active Directory]
    end
    ISP --> FW
    FW --> SW
    SW --> H1
    SW --> H2
    SW --> H3
    SW --> INF
    SW --> NAS
    SW --> WIN
    style FW fill:#1a3a3f,stroke:#62d6ec,color:#e4e3d9
    style SW fill:#2a2a24,stroke:#7a7a6e,color:#e4e3d9
    style H1 fill:#2e2910,stroke:#9dd823,color:#e4e3d9
    style H2 fill:#2e2910,stroke:#9dd823,color:#e4e3d9
    style H3 fill:#2e2910,stroke:#9dd823,color:#e4e3d9
    style INF fill:#3a1f25,stroke:#ffb1c0,color:#e4e3d9
    style NAS fill:#1e2a0f,stroke:#9dd823,color:#e4e3d9
    style WIN fill:#3a1f25,stroke:#ffb1c0,color:#e4e3d9
```
## VCF Cluster Hosts
| Host | FQDN | Management IP | Role |
|---|---|---|---|
| Host 1 | pgesxa1.pgnet.io | 10.200.1.220 | VCF ESXi Node |
| Host 2 | pgesxa2.pgnet.io | 10.200.1.222 | VCF ESXi Node |
| Host 3 | pgesxa3.pgnet.io | 10.200.1.224 | VCF ESXi Node |
Per-host spec:
- CPU: AMD Ryzen (16+ cores) — requires kernel workarounds for NSX Edge and Memory Tiering
- RAM: 128 GB — extended via Memory Tiering over NVMe (tier at 300%)
- Storage: vSAN ESA on NVMe — requires Mock VIB to pass HCL validation on consumer drives
- Network: 1GbE management (VLAN 201), 10GbE data/storage (VLANs 202–209, 250–251) — all ports trunked
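The 300% tier figure implies the following back-of-envelope capacity, assuming the percentage sizes the NVMe tier relative to installed DRAM (an assumption — verify against the host's actual tiering configuration):

```python
# Hedged back-of-envelope: if "NVMe tier at 300%" means the tier is sized
# at 300% of installed DRAM (assumption, not confirmed by the doc), each
# node and the cluster expose roughly:
DRAM_GB = 128   # installed DRAM per host (from the spec above)
TIER_PCT = 300  # NVMe tier percentage (from the spec above)
NODES = 3       # cluster size

nvme_tier_gb = DRAM_GB * TIER_PCT // 100  # NVMe-backed memory per host
total_gb = DRAM_GB + nvme_tier_gb         # addressable memory per host
cluster_gb = total_gb * NODES             # addressable memory cluster-wide

print(f"per-host: {DRAM_GB} GB DRAM + {nvme_tier_gb} GB NVMe = {total_gb} GB")
print(f"cluster:  {cluster_gb} GB")
```

Under that assumption, each host addresses 512 GB and the cluster roughly 1.5 TB — the main reason Memory Tiering is worth the Ryzen workarounds in a 128 GB-per-node lab.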
## Supporting Infrastructure
| Host | FQDN | IP | Role |
|---|---|---|---|
| Infrastructure Server | pglin1.pgnet.io | 10.200.1.240 | BIND 9 DNS · NTP · HTTP (Offline Depot) |
| NAS Appliance | pgnas.pgnet.io | 10.200.1.110 | NFS datastores · ISO library · backup targets |
| Windows Domain Controller | winsrv1 | — | Active Directory · ADCS · DNS forwarding |
## Network Architecture

### VLAN & Subnet Schedule
| Traffic Type | VLAN | CIDR | Purpose |
|---|---|---|---|
| VM Management | 201 | 10.200.1.0/24 | ESXi mgmt · vCenter · SDDC Manager · NSX Managers |
| vMotion | 202 | 10.200.2.0/24 | Host-to-host live migration |
| vSAN | 203 | 10.200.3.0/24 | Storage traffic — East/West · MTU 9000 |
| NFS Storage | 204 | 10.200.4.0/24 | NAS appliance · bulk datastores |
| Host TEP | 205 | 10.200.5.0/24 | NSX Host Overlay (Geneve) · MTU 9000 · no DNS |
| Edge TEP | 206 | 10.200.6.0/24 | NSX Edge Overlay · MTU 9000 · no DNS |
| VM Workload | 207 | 10.200.7.0/24 | General VM traffic |
| VKS Management | 208 | 10.200.8.0/24 | Tanzu / Kubernetes management |
| K8s Workload | 209 | 10.200.9.0/24 | Tanzu / Kubernetes workload |
| RouterNet 1 | 250 | 10.200.250.0/24 | BGP uplink 1 — peer: 10.200.250.1 |
| RouterNet 2 | 251 | 10.200.251.0/24 | BGP uplink 2 — peer: 10.200.251.1 |
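The schedule above can be sanity-checked with Python's `ipaddress` module — confirming every CIDR parses and no two VLANs overlap (VLAN IDs and CIDRs copied from the table):

```python
import ipaddress

# VLAN ID -> CIDR, copied from the schedule above.
VLANS = {
    201: "10.200.1.0/24",    # VM Management
    202: "10.200.2.0/24",    # vMotion
    203: "10.200.3.0/24",    # vSAN
    204: "10.200.4.0/24",    # NFS Storage
    205: "10.200.5.0/24",    # Host TEP
    206: "10.200.6.0/24",    # Edge TEP
    207: "10.200.7.0/24",    # VM Workload
    208: "10.200.8.0/24",    # VKS Management
    209: "10.200.9.0/24",    # K8s Workload
    250: "10.200.250.0/24",  # RouterNet 1
    251: "10.200.251.0/24",  # RouterNet 2
}

nets = {vid: ipaddress.ip_network(cidr) for vid, cidr in VLANS.items()}

# Pairwise overlap check: any overlap means two VLANs claim the same space.
ids = sorted(nets)
overlaps = [
    (a, b)
    for i, a in enumerate(ids)
    for b in ids[i + 1:]
    if nets[a].overlaps(nets[b])
]
assert not overlaps, f"overlapping VLAN subnets: {overlaps}"

# The ESXi management IPs should land in VLAN 201.
assert ipaddress.ip_address("10.200.1.220") in nets[201]
```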
### Routing

The Tier-0 Gateway peers via BGP with both physical router uplinks (VLANs 250/251). No static routes are used for north-south traffic; NSX handles all east-west routing between segments.
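As an illustration, the two peering sessions could be rendered as FRR-style `neighbor` statements. The ASNs used here (65000 for the physical router, 65100 for the Tier-0) are hypothetical placeholders; only the peer addresses come from the VLAN schedule above:

```python
# Illustrative only: render the lab's two eBGP sessions as FRR-style
# config lines. ASNs 65100/65000 are hypothetical placeholders -- the
# peer IPs (10.200.250.1 / 10.200.251.1) come from the doc.
TIER0_ASN = 65100
PEERS = [
    ("10.200.250.1", 65000),  # RouterNet 1, VLAN 250
    ("10.200.251.1", 65000),  # RouterNet 2, VLAN 251
]

lines = [f"router bgp {TIER0_ASN}"]
for peer_ip, remote_asn in PEERS:
    lines.append(f" neighbor {peer_ip} remote-as {remote_asn}")
config = "\n".join(lines)
print(config)
```

With both sessions up, the Tier-0 receives a default (or full) route from each uplink and ECMPs north-south traffic across them — which is why no static routes are needed.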
## Service Inventory
All services run in the 10.200.1.0/24 management VLAN unless noted. DNS is authoritative for pgnet.io via BIND 9 at 10.200.1.240.
### VCF Core Stack
| Service | FQDN | IP | Notes |
|---|---|---|---|
| SDDC Manager | sddc.pgnet.io | 10.200.1.27 | Central lifecycle & config manager |
| Fleet Manager | fleet.pgnet.io | 10.200.1.10 | |
| Management vCenter | vc.pgnet.io | 10.200.1.11 | |
| NSX Manager VIP | nsx.pgnet.io | 10.200.1.15 | Load-balanced across manager nodes |
| NSX Manager Node 1 | nsxm1.pgnet.io | 10.200.1.24 | |
| VCF Installer Appliance | installer.pgnet.io | 10.200.1.30 | Used during bring-up only |
### NSX Edge & Routing
| Service | FQDN | IP | Notes |
|---|---|---|---|
| NSX Edge Node 1 | pgen1.pgnet.io | 10.200.1.50 | |
| NSX Edge Node 2 | pgen2.pgnet.io | 10.200.1.51 | |
| BGP Peer 1 | router-uplink-1.pgnet.io | 10.200.250.1 | Physical router |
| BGP Peer 2 | router-uplink-2.pgnet.io | 10.200.251.1 | Physical router |
| Tier-0 Gateway VIP | t0-gateway.pgnet.io | TBD | |
### Observability & Management
| Service | FQDN | IP | Notes |
|---|---|---|---|
| VCF Operations | ops.pgnet.io | 10.200.1.12 | |
| VCF Ops Collector | opscol.pgnet.io | 10.200.1.13 | |
| VCF Ops for Networks | opsnet.pgnet.io | 10.200.1.44 | Network Insight platform |
| Ops for Networks Collector | opsnetcol.pgnet.io | 10.200.1.45 | |
| VCF Logs | log.pgnet.io | 10.200.1.19 | |
### Automation & Platform
| Service | FQDN | IP | Notes |
|---|---|---|---|
| VCF Automation | auto.pgnet.io | 10.200.1.16 | VPCs and tenancy |
| Supervisor API | api.pgnet.io | 10.200.208.100 | VLAN 208 |
### Storage
| Service | FQDN | IP | Notes |
|---|---|---|---|
| NFS NAS | pgnas.pgnet.io | 10.200.1.110 | VLAN 204 for data path (10GbE) |
## DNS & Identity

### DNS
BIND 9 runs on 10.200.1.240 and is the sole authoritative name server for pgnet.io.

- All VCF appliances, hosts, and services point to `10.200.1.240` as their primary DNS.
- SRV records and Kerberos lookups for `pgnet.local`/`pggb.local` are forwarded to the Active Directory DNS on `winsrv1`.
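The split between the locally answered zone and the forwarded AD zones comes down to a longest-suffix match on the query name, the same decision BIND makes when choosing between answering and forwarding. A minimal sketch (zone names from above; the resolver labels are illustrative):

```python
# Sketch of the lab's split-DNS decision: pgnet.io is answered locally
# by BIND at 10.200.1.240, AD zones are forwarded to winsrv1. The
# returned labels are illustrative, not real resolver names.
AUTHORITATIVE = {"pgnet.io"}
FORWARDED = {"pgnet.local", "pggb.local"}

def resolver_for(fqdn: str) -> str:
    """Return which resolver handles a query, by longest matching zone."""
    labels = fqdn.lower().rstrip(".").split(".")
    # Walk suffixes from most to least specific, e.g. for sddc.pgnet.io:
    # "sddc.pgnet.io" -> "pgnet.io" -> "io".
    for i in range(len(labels)):
        zone = ".".join(labels[i:])
        if zone in AUTHORITATIVE:
            return "bind@10.200.1.240"
        if zone in FORWARDED:
            return "ad-dns@winsrv1"
    return "upstream"

print(resolver_for("sddc.pgnet.io"))   # answered locally by BIND
print(resolver_for("dc.pggb.local"))   # forwarded to Active Directory
```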
### Identity
Authentication domain: pggb.local (Active Directory on winsrv1)
- Infrastructure FQDNs live in `pgnet.io` (managed by BIND 9).
- All user, service account, and SSO authentication is handled by `pggb.local`.
- ADCS on `winsrv1` provides the certificate authority for wildcard and VCF-specific certificates.
### NTP
10.200.1.240 also runs NTP. All hosts and appliances synchronise to this server to ensure consistent time across DNS, Kerberos, and VCF components.
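For reference, a minimal sketch of the 48-byte SNTP client request (RFC 4330) that any host effectively sends this server on UDP/123; the socket lines are commented out since they need the lab network:

```python
import struct

# SNTP mode-3 client request: 48 bytes, first byte packs
# LI=0, VN=3, Mode=3 -> (0 << 6) | (3 << 3) | 3 = 0x1B,
# followed by 47 zero bytes.
request = struct.pack("!B47x", 0x1B)

# To actually query the lab NTP server (requires lab network access):
# import socket
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(request, ("10.200.1.240", 123))
# reply, _ = sock.recvfrom(48)

print(len(request), hex(request[0]))
```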
## Related Guides
| Guide | What it covers |
|---|---|
| Architecture & Planning | Deep-dive BOM, VLAN design, full DNS validation table |
| Infrastructure Preparation | Switch config, Kickstart scripts, ESXi bootstrapping |
| Deployment | VCF bring-up, SDDC Manager |
| Post-Deployment | Identity, certificates, NSX, Edge, Supervisor |
| Operations | VCF Operations, Ops for Networks |
| Logs Deployment | VCF Logs setup and use |
| Automation | VCF Automation, VPCs |
| NSX & Network Operations | BGP, routing, network ops |
| VKS & Supervisor Services | Kubernetes, Contour, Harbor, Argo CD |
| Windows AD Deployment | AD, ADCS, DNS forwarding |
| Salt | Salt Stack automation states |