Hands-On Labs

Learn by doing: real commands, real results. Copy, paste, understand.

6 CLI Basics
4 Subnet Exercises
8 GCP Labs
4 Docker/K8s

CLI Basics, No Cloud Needed

📡 Lab 1: Ping & Connectivity Testing

Beginner ICMP
Objective: Test network reachability and interpret ping output (TTL, latency, packet loss)

Step 1: Basic ping

ping -c 4 google.com

What just happened: Sent 4 ICMP Echo Request packets. TTL = hops left, time = round-trip latency in ms.
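The reply's TTL also hints at distance: most operating systems start at 64, 128, or 255, so the hop count is roughly the nearest common initial value minus the TTL you observed. A rough heuristic sketch (the initial-TTL assumption is a convention, not a guarantee):

```python
def estimate_hops(observed_ttl: int) -> int:
    """Estimate hop count from a reply's TTL, assuming the sender
    started from the nearest common initial TTL (64, 128, or 255)."""
    for initial in (64, 128, 255):
        if observed_ttl <= initial:
            return initial - observed_ttl
    raise ValueError("TTL out of range")

print(estimate_hops(116))  # 12 hops, if the peer started at 128
```

For example, a reply with TTL 116 most likely left a Windows host (initial 128) and crossed about 12 routers.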

Step 2: Ping with custom packet size

ping -c 3 -s 1400 google.com

What just happened: Sent packets with a 1400-byte payload (1428 bytes on the wire once IP and ICMP headers are added). If they get fragmented or dropped, you're probing your path's MTU limit.
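The arithmetic behind the size limit: a standard Ethernet MTU is 1500 bytes, and the IPv4 (20-byte) and ICMP (8-byte) headers come out of that budget, so the largest ping payload that fits unfragmented is 1472 bytes. A quick check:

```python
MTU = 1500          # standard Ethernet MTU, in bytes
IPV4_HEADER = 20    # IPv4 header without options
ICMP_HEADER = 8     # ICMP echo header

# Largest ICMP payload that fits in one unfragmented frame
max_payload = MTU - IPV4_HEADER - ICMP_HEADER
print(max_payload)  # 1472, so `ping -s 1472` is the largest that fits
```

This is why `ping -s 1473` typically fragments (or fails with DF set) on a 1500-byte MTU link.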

Step 3: Ping local gateway

ping -c 3 $(ip route | grep default | awk '{print $3}')

What just happened: Pinged your default gateway (router). If this fails, your local network connection is broken.

๐Ÿ—บ๏ธ Lab 2: Traceroute, Trace the Path

Beginner Routing
Objective: See every hop between you and a destination, identify where slowdowns occur

Step 1: Basic traceroute

traceroute google.com

What just happened: Each line = one hop (router). It shows the IP and 3 round-trip times. Asterisks (*) mean that hop didn't respond (timeout).

Step 2: Traceroute with ICMP (needs root)

sudo traceroute -I google.com

What just happened: Uses ICMP instead of UDP. Some routers block UDP but allow ICMP, so this may show more hops.

Step 3: Windows equivalent

tracert google.com

What just happened: Windows uses ICMP by default. Same concept, look for high latency or * (timeouts) to find the bottleneck.

📖 Lab 3: DNS Queries with dig & nslookup

Beginner DNS
Objective: Query DNS records, understand A, MX, NS records, and try different resolvers

Step 1: Basic A record lookup

dig google.com

What just happened: Shows the A record (IPv4 address), the authoritative server, and query time in ms.

Step 2: Short output + specific record types

dig +short google.com
dig MX google.com
dig NS google.com

What just happened: +short gives just the IP. MX shows mail servers (with priority), NS shows nameservers.

Step 3: Query a specific DNS server

dig @8.8.8.8 google.com
nslookup google.com 8.8.8.8

What just happened: Asked Google's public DNS (8.8.8.8) instead of your system resolver. Useful if your local DNS is broken.

๐Ÿ–ฅ๏ธ Lab 4: Read Your IP Config

Beginner IPv4
Objective: Read and understand your network interface configuration, IP, mask, MAC, gateway

Step 1: Show interfaces (Linux)

ip addr show

What just happened: Shows all interfaces. Look for inet (IPv4), inet6 (IPv6), ether (MAC). lo = loopback, eth0/enp0s3 = wired, wlan0 = WiFi.

Step 2: Show interfaces (macOS)

ifconfig en0

What just happened: en0 is usually WiFi on Mac. Look for inet (your IP), netmask, broadcast, ether (MAC address).

Step 3: Find your gateway and DNS

ip route | grep default
cat /etc/resolv.conf

What just happened: First command shows your default gateway (router IP). Second shows your DNS resolver (nameserver lines).

🔬 Lab 5: Packet Capture with tcpdump

Intermediate Analysis
Objective: Capture live network packets, filter by host/port, and save to file for analysis

Step 1: Capture 10 packets on any interface

sudo tcpdump -i any -c 10

What just happened: Captured 10 raw packets. Each line shows timestamp, protocol, source → destination, and flags.

Step 2: Filter by host and port

sudo tcpdump -i any host 8.8.8.8 -c 5
sudo tcpdump -i any port 443 -c 5

What just happened: First captures only traffic to/from 8.8.8.8 (Google DNS). Second captures HTTPS traffic (port 443).

Step 3: Save to file for Wireshark

sudo tcpdump -i any -c 50 -w capture.pcap

What just happened: Saved 50 packets to a .pcap file. Open this in Wireshark for detailed graphical analysis of each packet.

๐Ÿ” Lab 6: Port Scanning with nmap

Intermediate Security
Objective: Discover hosts on a network, scan open ports, and detect running services

Step 1: Host discovery (ping scan)

nmap -sn 192.168.1.0/24

What just happened: Discovers all live hosts on your local /24 subnet without scanning ports. Shows IP and MAC address. Only scan networks you own or are authorized to test.

Step 2: Scan specific ports

nmap -p 22,80,443 192.168.1.1

What just happened: Checked if ports 22 (SSH), 80 (HTTP), 443 (HTTPS) are open on your router. State: open/closed/filtered.

Step 3: Service version detection

nmap -sV -p 1-1000 192.168.1.1

What just happened: Scanned ports 1-1000 and detected service versions (e.g., OpenSSH 8.9, Apache 2.4). Useful for security audits.


Subnetting Practice

🧮 Exercise 1: /26 Subnet Calculation

Given: 192.168.1.0/26. What are the range, broadcast, and usable host count?

Show Answer
Subnet Mask: 255.255.255.192
Network: 192.168.1.0
First Usable: 192.168.1.1
Last Usable: 192.168.1.62
Broadcast: 192.168.1.63
Total Hosts: 64 (62 usable)

/26 = 26 network bits, 6 host bits → 2^6 = 64 addresses, minus 2 = 62 usable
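If you'd rather verify than trust the arithmetic, Python's standard ipaddress module computes the same values:

```python
import ipaddress

# Model the /26 and read off its parameters
net = ipaddress.ip_network("192.168.1.0/26")
hosts = list(net.hosts())  # excludes network and broadcast addresses

print(net.netmask)            # 255.255.255.192
print(hosts[0], hosts[-1])    # 192.168.1.1 192.168.1.62
print(net.broadcast_address)  # 192.168.1.63
print(net.num_addresses)      # 64 total (62 usable)
```

The same pattern works for any exercise on this page: change the network string and reread the attributes.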

🧮 Exercise 2: /20 Host Count

Given: 10.0.0.0/20. How many usable hosts? What's the subnet mask?

Show Answer
Subnet Mask: 255.255.240.0
Host Bits: 32 - 20 = 12
Total Addresses: 2^12 = 4,096
Usable Hosts: 4,094
Range: 10.0.0.0 - 10.0.15.255

Third octet: 256 - 240 = 16, so the block spans 16 values of the third octet; 16 × 256 = 4,096 total addresses
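Python's ipaddress module confirms the /20 numbers:

```python
import ipaddress

net = ipaddress.ip_network("10.0.0.0/20")

print(net.netmask)            # 255.255.240.0
print(net.num_addresses)      # 4096 total
print(net.num_addresses - 2)  # 4094 usable (minus network + broadcast)
print(net.broadcast_address)  # 10.0.15.255
```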

🧮 Exercise 3: Choose the Right Prefix

You need 50 hosts per subnet. What's the minimum CIDR prefix length?

Show Answer
Need: 50 usable hosts
2^5 = 32: too small (30 usable)
2^6 = 64: enough! (62 usable) ✓
Host Bits: 6
Prefix: /26 (32 - 6 = 26)
Mask: 255.255.255.192

Add 2 for the network and broadcast addresses, then round UP to the next power of 2 to get the host bits
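That rule translates directly into code. A short sketch (min_prefix is an illustrative helper, not a standard library function):

```python
import math

def min_prefix(usable_hosts: int) -> int:
    """Smallest IPv4 prefix length whose subnet fits `usable_hosts`
    hosts, after reserving the network and broadcast addresses."""
    # Round (hosts + 2) up to the next power of 2 to get host bits
    host_bits = math.ceil(math.log2(usable_hosts + 2))
    return 32 - host_bits

print(min_prefix(50))  # 26, so a /26 (62 usable) is the tightest fit
```

Note the boundary behavior: 62 hosts still fits a /26, but 63 forces a /25 because 63 + 2 = 65 exceeds 64.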

🧮 Exercise 4: Split Into Equal Subnets

Split 172.16.0.0/16 into 4 equal subnets. What are the ranges?

Show Answer

4 subnets = 2 extra bits → /16 + 2 = /18

Subnet           Usable Range                   Broadcast
172.16.0.0/18    172.16.0.1 - 172.16.63.254     172.16.63.255
172.16.64.0/18   172.16.64.1 - 172.16.127.254   172.16.127.255
172.16.128.0/18  172.16.128.1 - 172.16.191.254  172.16.191.255
172.16.192.0/18  172.16.192.1 - 172.16.255.254  172.16.255.255

Each /18 subnet has 16,382 usable hosts (2^14 - 2)
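Python's ipaddress module can perform the split itself:

```python
import ipaddress

parent = ipaddress.ip_network("172.16.0.0/16")
# prefixlen_diff=2 adds two bits: 2^2 = 4 equal /18 subnets
subnets = list(parent.subnets(prefixlen_diff=2))
for subnet in subnets:
    print(subnet, subnet.broadcast_address)
```

Each printed line matches one row of the table above: the subnet in CIDR form followed by its broadcast address.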


GCP Free Tier Labs

โ˜๏ธ Lab 7: Create a VPC + Subnets

Beginner VPC
Objective: Create a custom VPC with two regional subnets from scratch

Step 1: Create a custom-mode VPC

gcloud compute networks create my-vpc --subnet-mode=custom

What just happened: Created a VPC in custom mode, no subnets auto-created. You have full control over CIDR ranges.

Step 2: Create subnets in two regions

gcloud compute networks subnets create web-subnet \
  --network=my-vpc --region=us-central1 --range=10.0.1.0/24
gcloud compute networks subnets create db-subnet \
  --network=my-vpc --region=us-east1 --range=10.0.2.0/24

What just happened: Created two subnets: web-subnet in us-central1 (10.0.1.0/24) and db-subnet in us-east1 (10.0.2.0/24). Each /24 has 256 addresses; GCP reserves four per subnet, leaving 252 usable.

Step 3: Verify your setup

gcloud compute networks subnets list --network=my-vpc

🧱 Lab 8: Set Up Firewall Rules

Beginner Firewall
Objective: Create firewall rules to allow SSH and HTTP traffic into your VPC

Step 1: Allow SSH from anywhere

gcloud compute firewall-rules create allow-ssh \
  --network=my-vpc --allow=tcp:22 --source-ranges=0.0.0.0/0 \
  --target-tags=ssh-enabled

What just happened: Created an ingress rule allowing TCP port 22 (SSH) from any IP. Only VMs with tag "ssh-enabled" are affected.

Step 2: Allow HTTP traffic

gcloud compute firewall-rules create allow-http \
  --network=my-vpc --allow=tcp:80 --source-ranges=0.0.0.0/0 \
  --target-tags=http-server

Step 3: List all rules

gcloud compute firewall-rules list --filter="network=my-vpc"

๐Ÿ–ฅ๏ธ Lab 9: Deploy a VM and Connect

Beginner Compute
Objective: Launch a VM in your custom VPC and SSH into it

Step 1: Create a VM in web-subnet

gcloud compute instances create web-vm \
  --zone=us-central1-a --machine-type=e2-micro \
  --network=my-vpc --subnet=web-subnet \
  --tags=ssh-enabled,http-server \
  --image-family=debian-12 --image-project=debian-cloud

What just happened: Created an e2-micro VM (free tier eligible) in web-subnet with the SSH and HTTP tags for firewall rules.

Step 2: SSH into the VM

gcloud compute ssh web-vm --zone=us-central1-a

What just happened: gcloud handles SSH key creation/management automatically. You're now inside the VM.

Step 3: Verify internal IP from inside

hostname -I
curl ifconfig.me

What just happened: hostname -I shows the internal IP (10.0.1.x); curl ifconfig.me shows your external IP. These are different!

🔄 Lab 10: Set Up Cloud NAT

Intermediate NAT
Objective: Enable private VMs (no external IP) to access the internet via Cloud NAT

Step 1: Create a Cloud Router

gcloud compute routers create my-router \
  --network=my-vpc --region=us-central1

What just happened: Cloud Router is required for Cloud NAT. It manages NAT IP allocation and route advertisement.

Step 2: Configure Cloud NAT

gcloud compute routers nats create my-nat \
  --router=my-router --region=us-central1 \
  --nat-all-subnet-ip-ranges \
  --auto-allocate-nat-external-ips

What just happened: Cloud NAT is now active for all subnets. VMs without external IPs can now reach the internet (outbound only).

โš–๏ธ Lab 11: Create an Internal Load Balancer

Intermediate Load Balancing
Objective: Create a regional internal TCP/UDP load balancer with health checks

Step 1: Create a health check

gcloud compute health-checks create tcp my-health-check --port=80

Step 2: Create a backend service

gcloud compute backend-services create my-ilb-backend \
  --load-balancing-scheme=INTERNAL --protocol=TCP \
  --region=us-central1 --health-checks=my-health-check

Step 3: Create a forwarding rule

gcloud compute forwarding-rules create my-ilb-rule \
  --load-balancing-scheme=INTERNAL --network=my-vpc \
  --subnet=web-subnet --region=us-central1 \
  --backend-service=my-ilb-backend --ports=80

What just happened: Created a regional ILB with a health check. Internal clients can now reach the backend VMs via a single internal IP.

📖 Lab 12: Configure Cloud DNS

Intermediate DNS
Objective: Create a managed DNS zone and add records

Step 1: Create a private DNS zone

gcloud dns managed-zones create my-zone \
  --dns-name="internal.example.com." \
  --description="Private zone" --visibility=private \
  --networks=my-vpc

Step 2: Add an A record

gcloud dns record-sets create web.internal.example.com. \
  --zone=my-zone --type=A --ttl=300 --rrdatas="10.0.1.10"

What just happened: VMs in my-vpc can now resolve web.internal.example.com to 10.0.1.10 using GCP's internal DNS.

🔒 Lab 13: Build a VPN Tunnel Between Two VPCs

Advanced VPN
Objective: Connect two VPCs using HA VPN with BGP

Step 1: Create two VPCs

gcloud compute networks create vpc-a --subnet-mode=custom
gcloud compute networks subnets create subnet-a \
  --network=vpc-a --region=us-central1 --range=10.1.0.0/24
gcloud compute networks create vpc-b --subnet-mode=custom
gcloud compute networks subnets create subnet-b \
  --network=vpc-b --region=us-central1 --range=10.2.0.0/24

Step 2: Create HA VPN gateways

gcloud compute vpn-gateways create vpn-gw-a \
  --network=vpc-a --region=us-central1
gcloud compute vpn-gateways create vpn-gw-b \
  --network=vpc-b --region=us-central1

Step 3: Create Cloud Routers for BGP

gcloud compute routers create router-a \
  --network=vpc-a --region=us-central1 --asn=65001
gcloud compute routers create router-b \
  --network=vpc-b --region=us-central1 --asn=65002

What just happened: Each VPC gets an HA VPN gateway and a Cloud Router with a unique private ASN for BGP. Once you create the VPN tunnels and BGP sessions between the two gateways, routes are exchanged dynamically.

๐Ÿข Lab 14: Set Up Shared VPC

Advanced Organization
Objective: Enable Shared VPC so service projects use the host project's network

Step 1: Enable Shared VPC on host project

gcloud compute shared-vpc enable HOST_PROJECT_ID

Step 2: Attach a service project

gcloud compute shared-vpc associated-projects add SERVICE_PROJECT_ID \
  --host-project=HOST_PROJECT_ID

What just happened: The service project can now deploy resources (VMs, GKE) into the host project's VPC subnets. Network admins stay in the host project.


Docker & Kubernetes Networking

๐Ÿณ Lab 15: Docker Bridge vs Host Networking

Intermediate Docker
Objective: Understand the difference between bridge and host Docker network modes

Step 1: List Docker networks

docker network ls

What just happened: Shows default networks: bridge (default), host, none. Bridge creates an isolated network; host shares the host's network stack.

Step 2: Run with bridge (default) and inspect

docker run -d --name bridge-test nginx
docker inspect bridge-test | grep IPAddress

What just happened: Container gets its own IP (172.17.x.x) on the bridge network. It's isolated from the host.

Step 3: Run with host networking

docker run -d --name host-test --network host nginx

What just happened: The container shares the host's network namespace (on Linux hosts): no NAT, no port mapping. Nginx listens directly on port 80 of the host.

🔗 Lab 16: Container-to-Container Communication

Intermediate Docker
Objective: Create a custom network and communicate between containers by name

Step 1: Create a custom bridge network

docker network create my-app-net

Step 2: Run two containers on it

docker run -d --name web --network my-app-net nginx
docker run -d --name app --network my-app-net alpine sleep 3600

Step 3: Ping by container name

docker exec app ping -c 3 web

What just happened: Docker's built-in DNS resolves "web" to its container IP. Custom networks enable DNS-based discovery (default bridge does NOT).

โ˜ธ๏ธ Lab 17: Kubernetes Service Types

Intermediate Kubernetes
Objective: Create and compare ClusterIP, NodePort, and LoadBalancer services

Step 1: Deploy an nginx pod

kubectl create deployment nginx --image=nginx --replicas=2
kubectl get pods

Step 2: Expose as ClusterIP (internal only)

kubectl expose deployment nginx --port=80 --type=ClusterIP --name=nginx-clusterip
kubectl get svc nginx-clusterip

What just happened: ClusterIP gives an internal IP only reachable within the cluster. Other pods can access it but external clients cannot.

Step 3: Expose as LoadBalancer (external)

kubectl expose deployment nginx --port=80 --type=LoadBalancer --name=nginx-lb
kubectl get svc nginx-lb --watch

What just happened: On GKE, this provisions a Cloud Load Balancer with an external IP. The EXTERNAL-IP column shows the public address once ready.

๐ŸŒ Lab 18: GKE Ingress Setup

Advanced GKE
Objective: Route external HTTP traffic to services using Kubernetes Ingress on GKE

Step 1: Create a GKE cluster

gcloud container clusters create my-cluster \
  --zone=us-central1-a --num-nodes=2 --machine-type=e2-small

Step 2: Deploy and expose a service

kubectl create deployment web --image=nginx
kubectl expose deployment web --port=80 --type=NodePort

Step 3: Create an Ingress resource

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  defaultBackend:
    service:
      name: web
      port:
        number: 80
EOF

What just happened: GKE's ingress controller provisions a Google Cloud HTTPS Load Balancer. Check the IP with kubectl get ingress.