Webserver in Rust with Warp

2020-02-04

Why

Rust can help with writing high-performance web servers with very consistent latencies. This means you can host your services on smaller boxes, or keep a monolithic application scaling for as long as possible before jumping into microservices, which come with issues of their own.

Setup

cargo new warp-server

Use cargo add to add dependencies to the toml file. You'll have to install cargo-edit first; if you don't want to, you can edit Cargo.toml manually.
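
cargo-edit installs like any other cargo subcommand:

cargo install cargo-edit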

cargo add tokio warp serde

cargo add doesn't set features, so you'll have to edit the file manually anyway: serde needs the derive feature, and tokio the macros feature.

[package]
name = "warp-server"
version = "0.1.0"
edition = "2018"

[dependencies]
tokio = { version = "0.2.11", features = ["macros"] }
warp = "0.2.1"
serde = { version = "1.0.104", features = ["derive"] }

Then add a simple endpoint to main.rs. We'll write a simple REST health endpoint that returns a hardcoded version.

use warp::Filter;
use warp::reply::json;
use serde::{Serialize, Deserialize};

#[derive(Serialize, Deserialize)]
struct HealthResponse {
    version: u32,
}

#[tokio::main]
async fn main() {
    // GET /api/health -> {"version":42}
    let health = warp::path!("api" / "health")
        .map(|| json(&HealthResponse { version: 42 }));

    // Serve the filter on localhost:3030.
    warp::serve(health)
        .run(([127, 0, 0, 1], 3030))
        .await;
}

Now if you run cargo run or cargo run --release from the command line and navigate to http://localhost:3030/api/health, you should see the health endpoint response.

> curl http://localhost:3030/api/health
{"version":42}

We can run a small benchmark with wrk2. Our small server has very good latency: 1.6ms at the 99th percentile is nothing to scoff at. Below you can also see a simpler node server that does less work (no route matching and no JSON serialization) yet has higher latency. Obviously you should take these simple benchmarks with a grain of salt: I didn't disable my machine's turbo boost or make sure no background jobs were running, so this is not a rigorous way to do benchmarks.

> wrk -t2 -c100 -d20s -R10000 --latency http://127.0.0.1:3030/api/health
Running 20s test @ http://127.0.0.1:3030/api/health
  2 threads and 100 connections
  Thread calibration: mean lat.: 0.791ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 0.786ms, rate sampling interval: 10ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   743.63us  381.72us   2.23ms   61.29%
    Req/Sec     5.13k   337.81     7.00k    77.67%
  Latency Distribution (HdrHistogram - Recorded Latency)
 50.000%  738.00us
 75.000%    1.05ms
 90.000%    1.24ms
 99.000%    1.60ms
 99.900%    2.06ms
 99.990%    2.20ms
 99.999%    2.23ms
100.000%    2.23ms
...
#[Mean    =        0.744, StdDeviation   =        0.382]
#[Max     =        2.232, Total count    =        97500]
#[Buckets =           27, SubBuckets     =         2048]
----------------------------------------------------------
  198797 requests in 20.00s, 23.13MB read
Requests/sec:   9939.65
Transfer/sec:      1.16MB
> cat index.js
const http = require('http');

http
  .createServer(function(req, res) {
    res.writeHead(200, {'Content-Type': 'application/json'});
    res.end('{"version": 42}');
  })
  .listen(3030);

> wrk -t2 -c100 -d20s -R10000 --latency http://127.0.0.1:3030/api/health
Running 20s test @ http://127.0.0.1:3030/api/health
  2 threads and 100 connections
  Thread calibration: mean lat.: 1.178ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 1.134ms, rate sampling interval: 10ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.05ms  489.88us   3.16ms   63.85%
    Req/Sec     5.29k   677.16     8.67k    82.52%
  Latency Distribution (HdrHistogram - Recorded Latency)
 50.000%    1.05ms
 75.000%    1.41ms
 90.000%    1.70ms
 99.000%    2.14ms
 99.900%    2.48ms
 99.990%    3.03ms
 99.999%    3.12ms
100.000%    3.16ms
#[Mean    =        1.049, StdDeviation   =        0.490]
#[Max     =        3.160, Total count    =        97500]
#[Buckets =           27, SubBuckets     =         2048]
----------------------------------------------------------
  194242 requests in 20.00s, 30.57MB read
Requests/sec:   9711.71
Transfer/sec:      1.53MB

RAM usage

Comparing the RAM usage (the RES column in htop, which is resident memory in KB), we can see that the node server uses about ~43MB, while the warp server uses ~3.8MB, roughly 11 times less. This means you can launch 10x more hello world services on a single box with warp.

 PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command
19344 user       20   0  569M 43512 27128 S  0.0  0.3  0:05.76 node index.js
 PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command
19589 user       20   0  5456  3816  2356 S  0.0  0.0  0:02.01 target/release/warp-server
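
If you don't want to fire up htop, the same number (assuming the pgrep pattern matches only our server process) can be read with ps, which prints the resident set size in KB:

> ps -o rss= -p $(pgrep -f warp-server)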

Conclusion

Warp is a very clean and fast web framework for Rust. It has support for HTTP/1.1, HTTP/2, TLS, websockets and a whole lot more. If you want to write servers that run blazing fast and consume next to no RAM, warp is a good option.