VCF Lab Topology & Networking Reference

Lab Environment Overview

Complete topology, hardware, networking, and service inventory for the pgnet.io VCF 9.0.2 home lab.

Overview

The pgnet.io home lab is a production-equivalent VMware Cloud Foundation 9.0.2 environment built on commodity AMD Ryzen hardware. It runs a Consolidated Architecture — a single 3-node cluster that carries both the VCF management stack and user workloads simultaneously, eliminating the need for a separate workload domain.

The lab covers the full VCF stack: core compute and storage (vSAN ESA), software-defined networking (NSX), identity (Active Directory + ADCS), observability (VCF Operations, Logs, Ops for Networks), automation (VCF Automation + Salt), and Kubernetes (Supervisor + VKS).

Design goals:

  • Minimal hardware footprint — 3 nodes, no dedicated workload cluster
  • Offline lifecycle management — no direct internet access from the VCF cluster
  • BGP dynamic routing — full Tier-0 peering, no static routes
  • Consumer hardware with lab-specific workarounds (vSAN ESA Mock VIB, AMD Ryzen patches)

Physical Topology

```mermaid
graph TB
    subgraph inet[Internet]
        ISP[ISP Uplink]
    end
    subgraph net[Physical Network]
        FW[Firewall / Router]
        SW[Core Switch]
    end
    subgraph vcf[VCF Cluster - 3 Nodes]
        H1[pgesxa1 - 10.200.1.220]
        H2[pgesxa2 - 10.200.1.222]
        H3[pgesxa3 - 10.200.1.224]
    end
    subgraph sup[Supporting Infrastructure]
        INF[DNS / NTP / Depot - 10.200.1.240]
        NAS[pgnas - NFS - 10.200.1.110]
        WIN[winsrv1 - Active Directory]
    end
    ISP --> FW
    FW --> SW
    SW --> H1
    SW --> H2
    SW --> H3
    SW --> INF
    SW --> NAS
    SW --> WIN
    style FW  fill:#1a3a3f,stroke:#62d6ec,color:#e4e3d9
    style SW  fill:#2a2a24,stroke:#7a7a6e,color:#e4e3d9
    style H1  fill:#2e2910,stroke:#9dd823,color:#e4e3d9
    style H2  fill:#2e2910,stroke:#9dd823,color:#e4e3d9
    style H3  fill:#2e2910,stroke:#9dd823,color:#e4e3d9
    style INF fill:#3a1f25,stroke:#ffb1c0,color:#e4e3d9
    style NAS fill:#1e2a0f,stroke:#9dd823,color:#e4e3d9
    style WIN fill:#3a1f25,stroke:#ffb1c0,color:#e4e3d9
```

VCF Cluster Hosts

| Host | FQDN | Management IP | Role |
|------|------|---------------|------|
| Host 1 | pgesxa1.pgnet.io | 10.200.1.220 | VCF ESXi Node |
| Host 2 | pgesxa2.pgnet.io | 10.200.1.222 | VCF ESXi Node |
| Host 3 | pgesxa3.pgnet.io | 10.200.1.224 | VCF ESXi Node |

Per-host spec:

  • CPU: AMD Ryzen (16+ cores) — requires kernel workarounds for NSX Edge and Memory Tiering
  • RAM: 128 GB — extended via vSAN ESA Memory Tiering (NVMe tier at 300%)
  • Storage: vSAN ESA on NVMe — requires Mock VIB to pass HCL validation on consumer drives
  • Network: 1GbE management (VLAN 201), 10GbE data/storage (VLANs 202–209, 250–251) — all ports trunked
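
As a sketch, a per-host trunk port on the core switch could look like the following. This is Cisco IOS-style syntax purely as an illustration; the interface name and the switch OS are assumptions, while the native VLAN, allowed VLAN list, and jumbo-frame requirement come from the spec above:

```
! Uplink to pgesxa1 -- trunk all lab VLANs, native VLAN 201 for untagged mgmt
interface TenGigabitEthernet1/0/1
 description pgesxa1 data/storage uplink
 switchport mode trunk
 switchport trunk native vlan 201
 switchport trunk allowed vlan 201-209,250,251
 ! 9216 leaves headroom above the MTU 9000 required by vSAN/TEP VLANs
 mtu 9216
```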

Supporting Infrastructure

| Host | FQDN | IP | Role |
|------|------|----|------|
| Infrastructure Server | pglin1.pgnet.io | 10.200.1.240 | BIND 9 DNS · NTP · HTTP (Offline Depot) |
| NAS Appliance | pgnas.pgnet.io | 10.200.1.110 | NFS datastores · ISO library · backup targets |
| Windows Domain Controller | winsrv1 | | Active Directory · ADCS · DNS forwarding |

Network Architecture

VLAN & Subnet Schedule

| Traffic Type | VLAN | CIDR | Purpose |
|--------------|------|------|---------|
| VM Management | 201 | 10.200.1.0/24 | ESXi mgmt · vCenter · SDDC Manager · NSX Managers |
| vMotion | 202 | 10.200.2.0/24 | Host-to-host live migration |
| vSAN | 203 | 10.200.3.0/24 | Storage traffic — East/West · MTU 9000 |
| NFS Storage | 204 | 10.200.4.0/24 | NAS appliance · bulk datastores |
| Host TEP | 205 | 10.200.5.0/24 | NSX Host Overlay (Geneve) · MTU 9000 · no DNS |
| Edge TEP | 206 | 10.200.6.0/24 | NSX Edge Overlay · MTU 9000 · no DNS |
| VM Workload | 207 | 10.200.7.0/24 | General VM traffic |
| VKS Management | 208 | 10.200.8.0/24 | Tanzu / Kubernetes management |
| K8s Workload | 209 | 10.200.9.0/24 | Tanzu / Kubernetes workload |
| RouterNet 1 | 250 | 10.200.250.0/24 | BGP uplink 1 — peer: 10.200.250.1 |
| RouterNet 2 | 251 | 10.200.251.0/24 | BGP uplink 2 — peer: 10.200.251.1 |
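
The schedule follows a simple convention: VLANs 201-209 use VLAN - 200 as the third octet, while the two uplink VLANs reuse the VLAN ID itself. A quick sanity check of that convention (illustrative only; the table above remains authoritative):

```python
import ipaddress

# VLAN ID -> CIDR, copied from the schedule above
vlan_table = {
    201: "10.200.1.0/24",    # VM Management
    202: "10.200.2.0/24",    # vMotion
    203: "10.200.3.0/24",    # vSAN
    204: "10.200.4.0/24",    # NFS Storage
    205: "10.200.5.0/24",    # Host TEP
    206: "10.200.6.0/24",    # Edge TEP
    207: "10.200.7.0/24",    # VM Workload
    208: "10.200.8.0/24",    # VKS Management
    209: "10.200.9.0/24",    # K8s Workload
    250: "10.200.250.0/24",  # RouterNet 1
    251: "10.200.251.0/24",  # RouterNet 2
}

def expected_cidr(vlan: int) -> str:
    """Third octet is VLAN-200 for the 20x range, the VLAN ID itself for uplinks."""
    octet = vlan - 200 if vlan < 250 else vlan
    return f"10.200.{octet}.0/24"

for vlan, cidr in vlan_table.items():
    assert ipaddress.ip_network(cidr) == ipaddress.ip_network(expected_cidr(vlan))
```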

Routing

The Tier-0 Gateway peers via BGP with both physical router uplinks (VLANs 250/251). No static routes are used for north-south traffic; NSX handles all east-west routing between segments.
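
On the physical-router side, the two peerings might be expressed as follows. This is an FRR-style sketch: the ASNs and the edge-side uplink IPs are placeholders, since only the router-side peer addresses 10.200.250.1 / 10.200.251.1 appear in this document:

```
router bgp 65000
 ! Tier-0 uplink peers on RouterNet 1 and 2 (edge IPs and ASNs are placeholders)
 neighbor 10.200.250.2 remote-as 65001
 neighbor 10.200.251.2 remote-as 65001
 address-family ipv4 unicast
  neighbor 10.200.250.2 activate
  neighbor 10.200.251.2 activate
 exit-address-family
```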

Info: Jumbo Frames
VLANs 202, 203, 205, and 206 require MTU 9000 on the physical switch. The native VLAN on host trunk ports is set to 201 so untagged management traffic (Kickstart, initial boot) lands correctly.
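
One way to validate the jumbo-frame path end to end is to send a ping whose payload fills the full 9000-byte MTU with the don't-fragment bit set. The arithmetic behind the customary 8972-byte payload is standard IPv4/ICMP header math:

```python
# Largest ICMP echo payload that fits in a single MTU-9000 IPv4 packet.
MTU = 9000
IPV4_HEADER = 20  # bytes, assuming no IP options
ICMP_HEADER = 8   # bytes (echo request/reply header)

payload = MTU - IPV4_HEADER - ICMP_HEADER
print(payload)  # 8972
```

On ESXi this corresponds to `vmkping -d -s 8972 <peer>` against a vSAN or TEP neighbor; a dropped or fragmented reply points at a switch port missing the MTU 9000 setting.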

Service Inventory

All services run in the 10.200.1.0/24 management VLAN unless noted. DNS is authoritative for pgnet.io via BIND 9 at 10.200.1.240.

VCF Core Stack

| Service | FQDN | IP | Notes |
|---------|------|----|-------|
| SDDC Manager | sddc.pgnet.io | 10.200.1.27 | Central lifecycle & config manager |
| Fleet Manager | fleet.pgnet.io | 10.200.1.10 | |
| Management vCenter | vc.pgnet.io | 10.200.1.11 | |
| NSX Manager VIP | nsx.pgnet.io | 10.200.1.15 | Load-balanced across manager nodes |
| NSX Manager Node 1 | nsxm1.pgnet.io | 10.200.1.24 | |
| VCF Installer Appliance | installer.pgnet.io | 10.200.1.30 | Used during bring-up only |

NSX Edge & Routing

| Service | FQDN | IP | Notes |
|---------|------|----|-------|
| NSX Edge Node 1 | pgen1.pgnet.io | 10.200.1.50 | |
| NSX Edge Node 2 | pgen2.pgnet.io | 10.200.1.51 | |
| BGP Peer 1 | router-uplink-1.pgnet.io | 10.200.250.1 | Physical router |
| BGP Peer 2 | router-uplink-2.pgnet.io | 10.200.251.1 | Physical router |
| Tier-0 Gateway VIP | t0-gateway.pgnet.io | TBD | |

Observability & Management

| Service | FQDN | IP | Notes |
|---------|------|----|-------|
| VCF Operations | ops.pgnet.io | 10.200.1.12 | |
| VCF Ops Collector | opscol.pgnet.io | 10.200.1.13 | |
| VCF Ops for Networks | opsnet.pgnet.io | 10.200.1.44 | Network Insight platform |
| Ops for Networks Collector | opsnetcol.pgnet.io | 10.200.1.45 | |
| VCF Logs | log.pgnet.io | 10.200.1.19 | |

Automation & Platform

| Service | FQDN | IP | Notes |
|---------|------|----|-------|
| VCF Automation | auto.pgnet.io | 10.200.1.16 | VPCs and tenancy |
| Supervisor API | api.pgnet.io | 10.200.208.100 | VLAN 208 |

Storage

| Service | FQDN | IP | Notes |
|---------|------|----|-------|
| NFS NAS | pgnas.pgnet.io | 10.200.1.110 | VLAN 204 for data path (10GbE) |

DNS & Identity

DNS

BIND 9 runs on 10.200.1.240 and is the sole authoritative server for pgnet.io, as well as the lab's primary resolver.

  • All VCF appliances, hosts, and services point to 10.200.1.240 as their primary DNS.
  • SRV records and Kerberos lookups for pgnet.local / pggb.local are forwarded to the Active Directory DNS on winsrv1.
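
In named.conf terms, the conditional forwarding above maps to a pair of forward zones; a sketch (the forwarder address is a placeholder, since winsrv1's IP is not listed in this document):

```
// Forward AD-integrated zones to the domain controller on winsrv1
zone "pggb.local" {
    type forward;
    forward only;
    forwarders { 192.0.2.10; };  // placeholder for winsrv1's address
};

zone "pgnet.local" {
    type forward;
    forward only;
    forwarders { 192.0.2.10; };  // placeholder for winsrv1's address
};
```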

Identity

Authentication domain: pggb.local (Active Directory on winsrv1)

  • Infrastructure FQDNs live in pgnet.io (managed by BIND 9).
  • All user, service account, and SSO authentication is handled by pggb.local.
  • ADCS on winsrv1 provides the certificate authority for wildcard and VCF-specific certificates.

NTP

10.200.1.240 also runs NTP. All hosts and appliances synchronise to this server to ensure consistent time across DNS, Kerberos, and VCF components.
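
For Linux guests in the lab, syncing to the infrastructure server is a single chrony directive (chrony as the client is an assumption; any NTP client pointed at 10.200.1.240 achieves the same):

```
# /etc/chrony.conf -- use the lab infrastructure server as the sole time source
server 10.200.1.240 iburst
```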


Related Guides

| Guide | What it covers |
|-------|----------------|
| Architecture & Planning | Deep-dive BOM, VLAN design, full DNS validation table |
| Infrastructure Preparation | Switch config, Kickstart scripts, ESXi bootstrapping |
| Deployment | VCF bring-up, SDDC Manager |
| Post-Deployment | Identity, certificates, NSX, Edge, Supervisor |
| Operations | VCF Operations, Ops for Networks |
| Logs Deployment | VCF Logs setup and use |
| Automation | VCF Automation, VPCs |
| NSX & Network Operations | BGP, routing, network ops |
| VKS & Supervisor Services | Kubernetes, Contour, Harbor, Argo CD |
| Windows AD Deployment | AD, ADCS, DNS forwarding |
| Salt | SaltStack automation states |