Machnet is an open-source, DPDK-based networking stack that gives your distributed applications kernel-bypass performance on public cloud VMs — with zero DPDK expertise required.
Traditional kernel networking can add milliseconds of tail latency. Machnet removes the kernel from the data path so your applications can communicate at wire speed.
Achieve sub-100µs tail latency with DPDK kernel-bypass. No kernel overhead, no context switches — just raw speed for your critical path.
Use a simple sockets-like API. No need to compile your application with DPDK or understand PMDs, mbufs, or ring buffers.
Tested and optimized for Azure, AWS, and GCP VMs. Works with cloud-native NICs out of the box — no bare metal required.
Machnet runs as a separate process and mediates NIC access. Multiple applications on the same machine can share one Machnet instance.
Pull our pre-built Docker image and start benchmarking in minutes. No custom kernel modules, no complex build chains.
750K RPC/s at 61µs P99.9 on Azure F8s_v2. Over 1M RPC/s on bare metal. Every claim is backed by reproducible benchmarks.
Machnet acts as a userspace networking sidecar. Your application talks to it over shared memory; it talks to the network over DPDK.
Launch the Machnet Docker container on each VM. It binds to a dedicated NIC and manages all DPDK operations.
Use the lightweight C API to attach to Machnet via shared memory. No DPDK linking or recompilation needed.
Call machnet_send() and machnet_recv(). Machnet handles all the kernel-bypass magic under the hood.
If you can use sockets, you can use Machnet. Five function calls is all it takes.
// Initialize and attach to Machnet
machnet_init();
MachnetChannelCtx_t *ctx = machnet_attach();

// Listen for incoming connections
machnet_listen(ctx, local_ip, port);

// Connect to a remote Machnet peer
MachnetFlow_t flow;
machnet_connect(ctx, local_ip, remote_ip, port, &flow);

// Send and receive messages
machnet_send(ctx, flow, buf, len, &msg_id);
machnet_recv(ctx, buf, buf_size, &flow);
Machnet decouples your application from DPDK. It runs as a separate process and multiplexes NIC access for all local applications.
+-------------+    +-------------+    +-------------+
|    App A    |    |    App B    |    |    App C    |
+------+------+    +------+------+    +------+------+
       |                  |                  |
       | Shared memory    | Shared memory    | Shared memory
       | (sockets-like)   | (sockets-like)   | (sockets-like)
       v                  v                  v
+---------------------------------------------------+
|                  Machnet Process                  |
|                                                   |
|  Channel Mgr   |   Flow Mgr   |   Packet Engine   |
+---------------------------------------------------+
                          |
                          | DPDK (kernel-bypass)
                          v
+---------------------------------------------------+
|            NIC (SmartNIC / Cloud NIC)             |
+---------------------------------------------------+

Tested across major cloud providers and bare-metal hardware with a variety of NICs and DPDK drivers.
Get started with Machnet in minutes. Pull the Docker image, follow the tutorial, and see sub-100µs latency for yourself.
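A minimal quickstart might look like the following. The image name, tag, and flags here are assumptions for illustration; check the Machnet repository's README for the exact commands for your cloud and NIC.

```shell
# Pull the pre-built image (name and tag are illustrative).
docker pull ghcr.io/microsoft/machnet/machnet:latest

# Machnet needs hugepages and direct NIC access, hence the
# privileged, host-network launch and the hugepage mount.
docker run --privileged --net=host \
  -v /dev/hugepages:/dev/hugepages \
  ghcr.io/microsoft/machnet/machnet:latest
```

Once the container is running on each VM, applications attach to it over shared memory via the C API shown above.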