A Set of Experiments with Vector Packet Processing (VPP)
# VPP Experiments - Home Gateway Configuration


All things will be squashed, history is a lie


A work-in-progress attempt to learn about Vector Packet Processing by building a home gateway for my 1.25 Gb/s link. I was motivated by interacting with some large-scale routing systems to investigate faster packet-processing environments for HPC and for routers that aren't closed source.

Part of the goal of this project is also to take some lessons from the embedded-systems world and apply them to building a more robust router.


# What works

  • Basic routing for IPv4 & IPv6
  • DHCP via CoreDHCP
  • Local DNS resolver with dnscrypt-proxy for encrypted upstream resolution
  • IPv6 SLAAC RAs via radvd
  • Pyinfra-based configuration management
  • Built-in ad blocking based on DNS blocklists
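The ad blocking is just a domain blocklist fed to the resolver. As a sketch, assuming dnscrypt-proxy 2.x is the local resolver (the section name and file path here are from its example config and should be verified against your installed version):

```toml
# dnscrypt-proxy.toml fragment (illustrative). Queries for names matching
# the blocklist get a refused answer instead of being resolved upstream.
[blocked_names]
blocked_names_file = 'blocked-names.txt'
```

The blocklist file itself is one domain or pattern per line, so any of the usual public ad/tracker lists can be dropped in.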

In other words, right now this configuration gets me on the internet. I have IPv6 explicitly turned off, because home filtering there has to be done with ACLs and I'm still figuring out the VPP syntax for them.
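For reference while I work that out, the acl-plugin has a debug CLI. A hypothetical shape of a stateful inbound policy (syntax from memory of the acl-plugin CLI; the ACL index, interface name, and prefixes are all illustrative, so check `vppctl` help on your build):

```
# Define an ACL (the plugin assigns an index, assumed 0 here for the first
# one), then bind it to an interface's input path. permit+reflect allows
# return traffic for sessions initiated from inside.
vppctl set acl-plugin acl permit+reflect src ::/0
vppctl set acl-plugin interface GigabitEthernet1/0/0 input acl 0
```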

# What needs to be added

  • VPP ACL plugins - right now NAT is the only thing preventing inbound connections, which is fragile and does nothing at all for IPv6. I also need basic bogon and martian filtering
  • Set up management port with SSH and static IPs
  • Power failure loss hardening
    • Read-only rootfs
    • Separate logging to flash for persistence
    • BIOS automatic reboots
    • Potentially utilize LittleFS for storage of configurations
    • Ext4 options for safety
    • Maybe move to ZFS
    • Hardware Watchdog timers
  • Firmware updates
  • Syslog
  • Parameterize/templatize all the configs
  • Secure boot and other boot protections
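For the read-only rootfs and ext4-safety items above, a sketch of what the fstab could end up looking like (device names, sizes, and mount points are placeholders, not the actual layout):

```
# /etc/fstab sketch: root mounted read-only, volatile dirs on tmpfs, and a
# separate partition for persistent logs with full data journaling and
# frequent commits to narrow the power-loss window.
/dev/sda1  /          ext4   ro,noatime                        0 1
tmpfs      /tmp       tmpfs  defaults,size=256m                0 0
/dev/sda2  /var/log   ext4   rw,noatime,data=journal,commit=5  0 2
```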

# Long-term goals

  • Port VPP to NixOS and move away from Pyinfra
    • Maybe port to Alpine or Buildroot
  • Add a SLAAC/NDP SeND implementation
  • IP failover - Xfinity sucks more than humanly imaginable
  • UPS integration
  • Fully automated install with preseeds
    • Or not, since preseeds are super super limited... I need to figure out automated PXE installs
  • Migrate entirely to GoVPP configuration and potentially utilize the API for better control plane integration

# The hardware

The router is a weird little Aliexpress Topton box, which at the time was one of the most affordable options with six 2.5 Gb/s NICs. I also wanted more than a few cores since I plan to do core pinning. Here's a quick copy-pasta of the listing's bullet points:

  • 6xIntel i225-V B3 2.5G RJ45 LANs, 2xUSB3.0, 2xUSB2.0, 1xHD-MI, 1xDB9 COM RS232.
  • 2xDDR4 SODIMM ram slots, max support 64GB 2400MHz.
  • Support two storage: 1xmSATA SSD + 1x2.5" SATA SSD/HDD.
  • 1xSIM slot, 1xMPCIE wireless slot.
  • Support AES-NI
  • Fanless system
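The core pinning mentioned above is configured in VPP's startup.conf cpu stanza; a minimal sketch (the core numbers here are illustrative, not my actual layout):

```
# /etc/vpp/startup.conf fragment: pin VPP's main thread to one core and
# dedicate a contiguous range of cores to worker threads, leaving core 0
# for the kernel and housekeeping.
cpu {
  main-core 1
  corelist-workers 2-3
}
```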

I wanted to just put a model number, but as usual you have to chuckle a bit:

# cat /sys/class/dmi/id/sys_vendor 
Default string
# cat /sys/class/dmi/id/chassis_serial 
Default string
# cat /sys/class/dmi/id/chassis_version 
Default string

Hey man, whatever, it goes fast.


Automation isn't fully there yet, so do a normal Debian Stable (bullseye) install and set up a user. Then modify the example_init.py to match the desired settings and apply the configuration:

$ cp inventory/example_init.py inventory/router.py
# Edit `inventory/router.py` to match your system needs. Options should be pretty obvious
# Edit `group_data/all.py` to change all the system configuration options you want and your ssh public key
$ pyinfra inventory/router.py deploy.py --ssh-user poptart --ssh-password "$PASS" --sudo
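For orientation: pyinfra inventories are plain Python modules, where each module-level list of hosts becomes a group named after the variable, and the tuple form attaches per-host data. A hypothetical shape of `inventory/router.py` (the address, group name, and data keys are placeholders, not the contents of the real example file):

```python
# Hypothetical pyinfra inventory sketch. The variable name "vpp_node"
# becomes the group name; each entry is a host, optionally paired with a
# dict of per-host data.
vpp_node = [
    ("192.168.1.1", {"ssh_user": "admin"}),
]
```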

Now the system is set up and can be iterated on in a predictable manner. The config matches my exact needs, so some things may differ on other hardware; in particular, the PCI devices may not match what's in the example file. The following shows how to invoke pyinfra to run a command on the system once it's configured, which is a bit easier now:

pyinfra inventory/router exec --ssh-key ~/.ssh/id_vpp_ed25519 --ssh-user admin --sudo -- lshw -class network -businfo
--> Loading config...
--> Loading inventory...

--> Connecting to hosts...
Enter password for private key: /home/poptart/.ssh/id_vpp_ed25519:
    [] Connected

--> Proposed changes:
    Groups: router / vpp_node
    []   Operations: 1   Change: 1   No change: 0

--> Beginning operation run...
--> Starting operation: vpp1
[] Bus info          Device     Class          Description
[] =======================================================
[] pci@0000:01:00.0  enp1s0     network        Ethernet Controller I225-V
[] pci@0000:02:00.0             network        Ethernet Controller I225-V
[] pci@0000:03:00.0             network        Ethernet Controller I225-V
[] pci@0000:07:00.0             network        Ethernet Controller I225-V
[] pci@0000:08:00.0             network        Ethernet Controller I225-V
[] pci@0000:09:00.0             network        Ethernet Controller I225-V
[]                   lstack     network        Ethernet interface
    [] Success

--> Results:
    Groups: router / vpp_node
    []   Changed: 1   No change: 0   Errors: 0
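The PCI addresses from that lshw listing are what end up in startup.conf's dpdk stanza, which tells VPP which NICs to bind; a sketch using two of the addresses above (which ports to hand over is a choice, not my actual config):

```
# /etc/vpp/startup.conf fragment: give selected i225-V ports to VPP's dpdk
# plugin by PCI address. Ports not listed (e.g. enp1s0 at 01:00.0) stay
# with the Linux kernel, e.g. for management.
dpdk {
  dev 0000:02:00.0
  dev 0000:03:00.0
}
```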

Now any time you want to make a change or add something to the router, the config can be updated with:

pyinfra inventory/router.py deploy.py --ssh-key ~/.ssh/id_vpp_ed25519 --ssh-user admin --sudo

By default the system uses:

  • Subnet:
  • Gateway:
  • Punted Stack: