Wire-speed eBPF/XDP firewall with automatic port whitelisting.
Zero config. ~40–65 ns per packet. 28× less CPU under flood.
eBPF / XDP · Kernel ≥ 4.18 · IPv4 + IPv6 · systemd / OpenRC · MIT License
$ curl --proto '=https' --tlsv1.2 -sSfL https://raw.githubusercontent.com/Kookiejarz/Auto_XDP/refs/heads/main/setup_xdp.sh | sudo bash

Drops threats before the kernel even sees them.

XDP hooks into the NIC driver — the earliest possible point in the Linux packet path. Unlike iptables or nftables, packets are evaluated before the kernel networking stack, at wire speed. Auto XDP adds an auto-sync daemon that watches which ports are actually open and updates the firewall rules in real time. Zero manual config.
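At its core the whitelist is just a 65536-slot array indexed by destination port, kept in sync with the sockets that are actually listening. A minimal userspace model in Python (function names and structure are ours for illustration, not the project's eBPF source) of how the SYN verdict falls out of that array:

```python
# Illustrative model of the core decision: a 65536-slot array indexed
# by destination port (mirroring the BPF ARRAY[65536] map), plus a
# pass/drop verdict for new inbound TCP SYNs.
XDP_PASS, XDP_DROP = "PASS", "DROP"

tcp_whitelist = [0] * 65536  # one slot per possible port

def open_port(port: int) -> None:
    """Daemon side: mark a port as listening (a map update in the real tool)."""
    tcp_whitelist[port] = 1

def close_port(port: int) -> None:
    """Daemon side: the owning process exited, so the port closes."""
    tcp_whitelist[port] = 0

def verdict_for_syn(dst_port: int) -> str:
    """Kernel side: a new SYN passes only if something listens on the port."""
    if dst_port == 0:                      # port 0 is always malformed
        return XDP_DROP
    return XDP_PASS if tcp_whitelist[dst_port] else XDP_DROP
```

An array lookup by port is O(1) and branch-light, which is what keeps the per-packet cost in the tens of nanoseconds.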

Packet path comparison
TRADITIONAL: NIC driver (hardware RX) → CPU → kernel stack (socket buffers, TCP/IP) → CPU → iptables (netfilter, late check) → DROP. The full kernel path is wasted on every blocked packet.

AUTO XDP: NIC driver (hardware RX) → ⚡ XDP hook (driver level, pre-stack) → DROP at ≈0 CPU, or PASS → kernel stack (legit traffic only) → your app (SSH · nginx · postgres). Dropping before the stack means zero wasted work.
CPU reduction under flood: 85.9% → 3.0% softirq
Per-packet latency: ~40–65 ns (measured on real hardware)
Configuration required: zero (auto-sync handles the rest)
Live Packet Decision Path — XDP Firewall Core
🌐 Internet → NIC driver (eth0 / enp3s0, hardware RX queue) → ⚡ XDP hook (driver level) → XDP firewall core:

- Protocol classifier: ETH → IPv4/v6 → L4 (TCP, UDP, ICMP, ARP/NDP).
- TCP path:
  - Malformed packet check: NULL/XMAS/SYN+FIN flags, bad data offset, port 0 → DROP.
  - SYN packets: tcp_whitelist lookup (ARRAY[65536]) → per-IP, per-port SYN rate-limit window → insert into tcp_conntrack (LRU_HASH[262144]).
  - Non-SYN (ACK) packets: tcp_conntrack lookup; conntrack miss → DROP.
- UDP path: udp_conntrack reply-tuple lookup (hit → PASS) → trusted_ipv4/v6 LPM_TRIE CIDR match (hit → PASS) → udp_whitelist (ARRAY[65536], server ports) → per-source and global sliding-window UDP rate limit; otherwise DROP.
- ICMP: token bucket (100 pps burst, per-second refill) → PASS or DROP.
- ARP / NDP: PASS.

XDP_PASS hands the packet to the kernel network stack (TCP/IP, socket layer) and on to your application (SSH · nginx · postgres …). XDP_DROP discards it at the driver with zero CPU spent in the stack. A tc egress program records outbound TCP SYNs and UDP reply tuples, seeding the conntrack maps for return traffic.
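The ICMP limiter's parameters (a 100-packet burst, refilled each second) come straight from the diagram; the classic token-bucket logic behind them can be modeled in a few lines of Python. This is an illustrative sketch, not the in-kernel implementation, which keeps per-source state in a BPF map using nanosecond timestamps:

```python
class TokenBucket:
    """Model of the ICMP limiter: up to `burst` packets instantly,
    refilled at `rate` tokens per second (100 pps burst, per-second
    refill, per the decision-path diagram)."""

    def __init__(self, rate: float = 100.0, burst: float = 100.0):
        self.rate = rate
        self.burst = burst
        self.tokens = burst   # start full: an idle source gets its whole burst
        self.last = 0.0       # timestamp of the previous decision, in seconds

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0  # spend one token on this packet
            return True
        return False            # bucket empty: drop
```

A burst of 100 pings at the same instant all pass; the 101st is dropped, and a full second of silence restores the entire budget.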

Auto-sync port whitelist.

The xdp_port_sync daemon watches listening sockets in real time using Linux Netlink Process Connector. When a process opens or closes a port, the BPF maps are updated within milliseconds — no manual firewall rules, ever.
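The real daemon gets instant notifications from the netlink process connector. As a simplified model of the same sync step (names are ours, and it assumes /proc/net/tcp-format input rather than netlink events), this sketch derives the set of listening TCP ports and diffs it into an ARRAY[65536]-style whitelist:

```python
def listening_ports(proc_net_tcp: str) -> set:
    """Extract local ports in LISTEN state (st == 0A) from /proc/net/tcp text.
    Ports appear hex-encoded after the colon in the local_address column."""
    ports = set()
    for line in proc_net_tcp.splitlines()[1:]:   # skip the header row
        fields = line.split()
        if len(fields) > 3 and fields[3] == "0A":
            ports.add(int(fields[1].rsplit(":", 1)[1], 16))
    return ports

def sync_whitelist(whitelist: list, ports: set) -> None:
    """Diff the desired port set into the ARRAY[65536]-style map."""
    for port in range(65536):
        whitelist[port] = 1 if port in ports else 0

# Hypothetical /proc/net/tcp excerpt: sshd on 22 (0x0016), a web app on
# 8080 (0x1F90) both LISTEN (0A); the third row is an ESTABLISHED flow (01).
sample = """  sl  local_address rem_address   st tx_queue rx_queue
   0: 00000000:0016 00000000:0000 0A 00000000:00000000
   1: 0100007F:1F90 00000000:0000 0A 00000000:00000000
   2: 0100007F:A2E4 0100007F:0016 01 00000000:00000000"""
```

The production daemon avoids this kind of polling: netlink tells it about process events as they happen, which is how map updates land within milliseconds.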

xdp_port_sync.py → tcp_whitelist (ARRAY[65536]) · udp_whitelist (ARRAY[65536])
Performance

Same flood. 28× less CPU.

Tested with a high-performance AMD EPYC™ 7Y43 attacker generating ~367k PPS / 188 Mbps of UDP flood against a 1 vCPU AMD Ryzen 9 3900X target over the public internet.

Auto XDP OFF
85.9%
softirq CPU — kernel processing every packet
Auto XDP ON
3.0%
softirq CPU — packets dropped at NIC driver level
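The headline multiplier follows directly from the measured softirq numbers:

```python
# Sanity check on the "28x less CPU" figure, using the softirq
# percentages measured in the flood test above.
before, after = 85.9, 3.0
reduction = before / after
print(f"{reduction:.1f}x")  # ≈28.6x, quoted conservatively as 28x
```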
How to reproduce:
Load the pktgen module (modprobe pktgen) on the attacker, configure a 64-byte UDP flood (pkt_size 64, clone_skb 100, count 10000000), and compare softirq usage in top while sudo axdp watch shows live counter deltas on the target.
Demo

See it in action.

VIDEO COMING SOON
Live install + flood test demo
Origin

Why I built this.

Personal cloud instances are constantly scanned and probed. Every day, bots hammer SSH, random high ports, and anything that looks like it might be an exposed service. Traditional firewalls like iptables work — but they process packets after the kernel networking stack, adding latency and CPU overhead. Worse, they require manual port management: every time you start a new service, you have to remember to open the firewall.

I wanted something that hooks in at the NIC driver level — the earliest possible interception point — and manages itself. When you start a new process that binds a port, the firewall should already know. When that process exits, the port should close automatically.

The result is Auto XDP: an eBPF/XDP firewall that sits at wire speed and a userspace daemon that keeps it honest. One install command. Zero ongoing config. And if your kernel doesn't support native XDP, it falls back to nftables automatically — so it works everywhere.

Design principles
Wire speed first. XDP at the NIC driver, before any kernel processing.
Self-managing. Daemon watches sockets via Netlink, syncs BPF maps in real time.
🔁 Graceful fallback. nftables backend activates automatically when XDP can't attach.
🛡 Defense in depth. Conntrack, rate limits, malformed-packet drops — layers, not luck.