Learn by doing: real commands, real results. Copy, paste, understand.
Step 1: Basic ping
ping -c 4 google.com
What just happened: Sent 4 ICMP Echo Request packets. TTL = hops left, time = round-trip latency in ms.
Step 2: Ping with custom packet size
ping -c 3 -s 1400 google.com
What just happened: Sent packets with a 1400-byte payload (1428 bytes on the wire after the 20-byte IP and 8-byte ICMP headers). If they get fragmented or blocked, you've found your path MTU limit.
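The size arithmetic is worth a quick check. A minimal sketch, assuming the common 1500-byte Ethernet MTU (yours may differ):

```python
# Largest ICMP payload that fits in one frame on an assumed 1500-byte MTU.
IPV4_HEADER = 20   # bytes, IPv4 header without options
ICMP_HEADER = 8    # bytes, ICMP echo request/reply header
MTU = 1500         # assumption: typical Ethernet; check yours with `ip link`

max_payload = MTU - IPV4_HEADER - ICMP_HEADER
print(max_payload)                        # 1472: the largest -s value that fits
print(1400 + IPV4_HEADER + ICMP_HEADER)   # 1428: the Step 2 packets fit easily
```

So `-s 1472` is the classic probe for a standard Ethernet path; anything larger must fragment, or is dropped when the Don't Fragment bit is set.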
Step 3: Ping local gateway
ping -c 3 $(ip route | grep default | awk '{print $3}')
What just happened: Pinged your default gateway (router). If this fails, your local network connection is broken.
Step 1: Basic traceroute
traceroute google.com
What just happened: Each line = one hop (router). It shows the IP and 3 round-trip times. Asterisks (*) mean that hop didn't respond (timeout).
Step 2: Traceroute with ICMP (needs root)
sudo traceroute -I google.com
What just happened: Uses ICMP instead of UDP. Some routers block UDP but allow ICMP, so this may show more hops.
Step 3: Windows equivalent
tracert google.com
What just happened: Windows uses ICMP by default. Same concept: look for high latency or * (timeouts) to find the bottleneck.
Step 1: Basic A record lookup
dig google.com
What just happened: Shows the A record (IPv4 address), the server that answered the query, and the query time in ms.
Step 2: Short output + specific record types
dig +short google.com
dig MX google.com
dig NS google.com
What just happened: +short gives just the IP. MX shows mail servers (with priority), NS shows nameservers.
Step 3: Query a specific DNS server
dig @8.8.8.8 google.com
nslookup google.com 8.8.8.8
What just happened: Asked Google's public DNS (8.8.8.8) instead of your system resolver. Useful if your local DNS is broken.
Step 1: Show interfaces (Linux)
ip addr show
What just happened: Shows all interfaces. Look for inet (IPv4), inet6 (IPv6), ether (MAC). lo = loopback, eth0/enp0s3 = wired, wlan0 = WiFi.
Step 2: Show interfaces (macOS)
ifconfig en0
What just happened: en0 is usually WiFi on Mac. Look for inet (your IP), netmask, broadcast, ether (MAC address).
Step 3: Find your gateway and DNS
ip route | grep default
cat /etc/resolv.conf
What just happened: The first command shows your default gateway (router IP). The second shows your DNS resolver (nameserver lines).
Step 1: Capture 10 packets on any interface
sudo tcpdump -i any -c 10
What just happened: Captured 10 raw packets. Each line shows timestamp, protocol, source → destination, and flags.
Step 2: Filter by host and port
sudo tcpdump -i any host 8.8.8.8 -c 5
sudo tcpdump -i any port 443 -c 5
What just happened: The first captures only traffic to/from 8.8.8.8 (Google DNS). The second captures HTTPS traffic (port 443).
Step 3: Save to file for Wireshark
sudo tcpdump -i any -c 50 -w capture.pcap
What just happened: Saved 50 packets to a .pcap file. Open this in Wireshark for detailed graphical analysis of each packet.
Step 1: Host discovery (ping scan)
nmap -sn 192.168.1.0/24
What just happened: Discovers all live hosts on your local /24 subnet without scanning ports. Shows IP and MAC address.
Step 2: Scan specific ports
nmap -p 22,80,443 192.168.1.1
What just happened: Checked if ports 22 (SSH), 80 (HTTP), 443 (HTTPS) are open on your router. State: open/closed/filtered.
Step 3: Service version detection
nmap -sV -p 1-1000 192.168.1.1
What just happened: Scanned ports 1-1000 and detected service versions (e.g., OpenSSH 8.9, Apache 2.4). Useful for security audits.
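Under the hood, an "open" TCP port is simply one that completes the three-way handshake. A minimal Python sketch of a connect-style check (an illustration only — nmap does this far faster and adds SYN scans, service detection, and more; the router IP below is hypothetical):

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Attempt a full TCP handshake; True if the port accepts connections."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # connection refused, timed out, host unreachable, ...
        return False

# Hypothetical usage against your router:
# for port in (22, 80, 443):
#     print(port, "open" if is_port_open("192.168.1.1", port) else "closed/filtered")
```

A timeout usually corresponds to nmap's "filtered" (a firewall silently dropped the SYN), while an immediate refusal corresponds to "closed".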
Given 192.168.1.0/26: what's the range, broadcast, and number of usable hosts?
/26 = 26 network bits, 6 host bits → 2^6 = 64 addresses, minus 2 = 62 usable. Range: 192.168.1.0–192.168.1.63; broadcast: 192.168.1.63; usable: 192.168.1.1–192.168.1.62.
Given 10.0.0.0/20: how many usable hosts? What's the subnet mask?
/20 leaves 12 host bits → 2^12 = 4,096 total addresses, minus 2 = 4,094 usable. Subnet mask: 255.255.240.0 (the /20 block spans 16 values of the third octet: 10.0.0.0–10.0.15.255).
You need 50 hosts per subnet. What's the minimum CIDR prefix length?
Add 2 for network + broadcast, then round UP to the next power of 2: 50 + 2 = 52 → 64 = 2^6 → 6 host bits → /26 (62 usable hosts).
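The three exercises above can be checked with Python's standard ipaddress module — a quick sketch:

```python
import ipaddress
import math

# Exercise 1: 192.168.1.0/26
net = ipaddress.ip_network("192.168.1.0/26")
usable_26 = net.num_addresses - 2          # subtract network + broadcast
print(usable_26, net.broadcast_address)    # 62 192.168.1.63

# Exercise 2: 10.0.0.0/20
net = ipaddress.ip_network("10.0.0.0/20")
usable_20 = net.num_addresses - 2
print(usable_20, net.netmask)              # 4094 255.255.240.0

# Exercise 3: minimum prefix length for 50 hosts
host_bits = math.ceil(math.log2(50 + 2))   # +2 for network + broadcast
prefix = 32 - host_bits
print(prefix)                              # 26
```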
Split 172.16.0.0/16 into 4 equal subnets: what are the ranges?
4 subnets = 2 extra bits → /16 + 2 = /18
| Subnet | Usable range | Broadcast |
|---|---|---|
| 172.16.0.0/18 | 172.16.0.1 – 172.16.63.254 | 172.16.63.255 |
| 172.16.64.0/18 | 172.16.64.1 – 172.16.127.254 | 172.16.127.255 |
| 172.16.128.0/18 | 172.16.128.1 – 172.16.191.254 | 172.16.191.255 |
| 172.16.192.0/18 | 172.16.192.1 – 172.16.255.254 | 172.16.255.255 |
Each /18 subnet has 16,382 usable hosts (2^14 - 2)
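The same ipaddress module reproduces the whole table — a sketch:

```python
import ipaddress

parent = ipaddress.ip_network("172.16.0.0/16")
subnets = list(parent.subnets(prefixlen_diff=2))  # 2 extra bits -> four /18s

for sub in subnets:
    # Usable range excludes the network and broadcast addresses
    first, last = sub.network_address + 1, sub.broadcast_address - 1
    print(sub, first, last, sub.broadcast_address, sub.num_addresses - 2)
```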
Step 1: Create a custom-mode VPC
gcloud compute networks create my-vpc --subnet-mode=custom
What just happened: Created a VPC in custom mode; no subnets are auto-created. You have full control over CIDR ranges.
Step 2: Create subnets in two regions
gcloud compute networks subnets create web-subnet \
  --network=my-vpc --region=us-central1 --range=10.0.1.0/24
gcloud compute networks subnets create db-subnet \
  --network=my-vpc --region=us-east1 --range=10.0.2.0/24
What just happened: Created two subnets: web-subnet in us-central1 (10.0.1.0/24) and db-subnet in us-east1 (10.0.2.0/24). Each supports 254 hosts.
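It's worth sanity-checking a CIDR plan like this before running gcloud; a sketch using the same ranges as above:

```python
import ipaddress

web = ipaddress.ip_network("10.0.1.0/24")  # web-subnet's range
db = ipaddress.ip_network("10.0.2.0/24")   # db-subnet's range

print(web.num_addresses - 2)  # 254 by plain subnet math
print(web.overlaps(db))       # False -> safe to put both in one VPC
```

Note that GCP additionally reserves a few addresses in every subnet (network, gateway, second-to-last, and broadcast), so the count of VM-assignable IPs is slightly lower than the plain subnet math suggests.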
Step 3: Verify your setup
gcloud compute networks subnets list --network=my-vpc
Step 1: Allow SSH from anywhere
gcloud compute firewall-rules create allow-ssh \
  --network=my-vpc --allow=tcp:22 --source-ranges=0.0.0.0/0 \
  --target-tags=ssh-enabled
What just happened: Created an ingress rule allowing TCP port 22 (SSH) from any IP. Only VMs with the tag "ssh-enabled" are affected.
Step 2: Allow HTTP traffic
gcloud compute firewall-rules create allow-http \
  --network=my-vpc --allow=tcp:80 --source-ranges=0.0.0.0/0 \
  --target-tags=http-server
Step 3: List all rules
gcloud compute firewall-rules list --filter="network=my-vpc"
Step 1: Create a VM in web-subnet
gcloud compute instances create web-vm \
  --zone=us-central1-a --machine-type=e2-micro \
  --network=my-vpc --subnet=web-subnet \
  --tags=ssh-enabled,http-server \
  --image-family=debian-12 --image-project=debian-cloud
What just happened: Created an e2-micro VM (free tier eligible) in web-subnet, with the SSH and HTTP tags so the firewall rules apply to it.
Step 2: SSH into the VM
gcloud compute ssh web-vm --zone=us-central1-a
What just happened: gcloud handles SSH key creation/management automatically. You're now inside the VM.
Step 3: Verify internal IP from inside
hostname -I
curl ifconfig.me
What just happened: hostname -I shows the internal IP (10.0.1.x); curl ifconfig.me shows your external IP. These are different!
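The internal/external split maps onto the RFC 1918 private ranges, which Python's ipaddress module can classify (the sample addresses below are illustrative):

```python
import ipaddress

samples = ["10.0.1.5", "192.168.1.10", "172.17.0.2", "34.123.45.67"]
for ip in samples:
    addr = ipaddress.ip_address(ip)
    # is_private covers 10/8, 172.16/12, 192.168/16, and other reserved space
    kind = "private/internal" if addr.is_private else "public/external"
    print(ip, kind)
```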
Step 1: Create a Cloud Router
gcloud compute routers create my-router \
  --network=my-vpc --region=us-central1
What just happened: A Cloud Router is required for Cloud NAT. It manages NAT IP allocation and route advertisement.
Step 2: Configure Cloud NAT
gcloud compute routers nats create my-nat \
  --router=my-router --region=us-central1 \
  --nat-all-subnet-ip-ranges \
  --auto-allocate-nat-external-ips
What just happened: Cloud NAT is now active for all subnets. VMs without external IPs can now reach the internet (outbound only).
Step 1: Create a health check
gcloud compute health-checks create tcp my-health-check --port=80
Step 2: Create a backend service
gcloud compute backend-services create my-ilb-backend \
  --load-balancing-scheme=INTERNAL --protocol=TCP \
  --region=us-central1 --health-checks=my-health-check
Step 3: Create a forwarding rule
gcloud compute forwarding-rules create my-ilb-rule \
  --load-balancing-scheme=INTERNAL --network=my-vpc \
  --subnet=web-subnet --region=us-central1 \
  --backend-service=my-ilb-backend --ports=80
What just happened: Created a regional internal load balancer (ILB) with a health check. Internal clients can now reach the backend VMs via a single internal IP.
Step 1: Create a private DNS zone
gcloud dns managed-zones create my-zone \
  --dns-name="internal.example.com." \
  --description="Private zone" --visibility=private \
  --networks=my-vpc
Step 2: Add an A record
gcloud dns record-sets create web.internal.example.com. \
  --zone=my-zone --type=A --ttl=300 --rrdatas="10.0.1.10"
What just happened: VMs in my-vpc can now resolve web.internal.example.com to 10.0.1.10 using GCP's internal DNS.
Step 1: Create two VPCs
gcloud compute networks create vpc-a --subnet-mode=custom
gcloud compute networks subnets create subnet-a \
  --network=vpc-a --region=us-central1 --range=10.1.0.0/24
gcloud compute networks create vpc-b --subnet-mode=custom
gcloud compute networks subnets create subnet-b \
  --network=vpc-b --region=us-central1 --range=10.2.0.0/24
Step 2: Create HA VPN gateways
gcloud compute vpn-gateways create vpn-gw-a \
  --network=vpc-a --region=us-central1
gcloud compute vpn-gateways create vpn-gw-b \
  --network=vpc-b --region=us-central1
Step 3: Create Cloud Routers and tunnels
gcloud compute routers create router-a \
  --network=vpc-a --region=us-central1 --asn=65001
gcloud compute routers create router-b \
  --network=vpc-b --region=us-central1 --asn=65002
What just happened: Each VPC gets an HA VPN gateway and a Cloud Router with a unique private ASN for BGP (the tunnel commands that connect the gateways are not shown here). Once the tunnels and BGP sessions are up, routes are exchanged dynamically.
Step 1: Enable Shared VPC on host project
gcloud compute shared-vpc enable HOST_PROJECT_ID
Step 2: Attach a service project
gcloud compute shared-vpc associated-projects add SERVICE_PROJECT_ID \
  --host-project=HOST_PROJECT_ID
What just happened: The service project can now deploy resources (VMs, GKE) into the host project's VPC subnets. Network admins stay in the host project.
Step 1: List Docker networks
docker network ls
What just happened: Shows the default networks: bridge (the default), host, and none. Bridge creates an isolated network; host shares the host's network stack.
Step 2: Run with bridge (default) and inspect
docker run -d --name bridge-test nginx
docker inspect bridge-test | grep IPAddress
What just happened: The container gets its own IP (172.17.x.x) on the bridge network. It's isolated from the host.
Step 3: Run with host networking
docker run -d --name host-test --network host nginx
What just happened: The container shares the host's network namespace: no NAT, no port mapping. Nginx listens directly on port 80 of the host.
Step 1: Create a custom bridge network
docker network create my-app-net
Step 2: Run two containers on it
docker run -d --name web --network my-app-net nginx
docker run -d --name app --network my-app-net alpine sleep 3600
Step 3: Ping by container name
docker exec app ping -c 3 web
What just happened: Docker's built-in DNS resolves "web" to its container IP. Custom networks enable DNS-based discovery (the default bridge does NOT).
Step 1: Deploy an nginx pod
kubectl create deployment nginx --image=nginx --replicas=2
kubectl get pods
Step 2: Expose as ClusterIP (internal only)
kubectl expose deployment nginx --port=80 --type=ClusterIP --name=nginx-clusterip
kubectl get svc nginx-clusterip
What just happened: ClusterIP gives an internal IP only reachable within the cluster. Other pods can access it, but external clients cannot.
Step 3: Expose as LoadBalancer (external)
kubectl expose deployment nginx --port=80 --type=LoadBalancer --name=nginx-lb
kubectl get svc nginx-lb --watch
What just happened: On GKE, this provisions a Cloud Load Balancer with an external IP. The EXTERNAL-IP column shows the public address once ready.
Step 1: Create a GKE cluster
gcloud container clusters create my-cluster \
  --zone=us-central1-a --num-nodes=2 --machine-type=e2-small
Step 2: Deploy and expose a service
kubectl create deployment web --image=nginx
kubectl expose deployment web --port=80 --type=NodePort
Step 3: Create an Ingress resource
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  defaultBackend:
    service:
      name: web
      port:
        number: 80
EOF
What just happened: GKE's ingress controller provisions a Google Cloud HTTP(S) Load Balancer. Check the assigned IP with kubectl get ingress.