Let's take a look at how one of my backends evolved from a beefy Actix + Diesel stack into a slimmer warp + serde combo. And, just for fun, I'll add a small anecdote about forcing an innocent Web App into synchronous fetch calls and how I ended up in such a dark place.

This article is part of a series covering the evolution of LillaOst.

Let's dig in.

Before we start, perhaps you'd like a podcast that talks a bit about Yew? It dropped just as I was writing this piece. It's interesting; perhaps not the best ambassador for the library, but I still hope you can enjoy it.

Starting point: Actix + Diesel + PostgreSQL

Almost by accident I decided to write a Web App using only Rust. I wanted to try the language and see what could be done in the ecosystem[1]. After a look at Are we web yet? I decided to go ahead and start building on top of Actix + Diesel + PostgreSQL. Because every Web App needs a relational DB for persistence and an ironclad web server, right?

This combination worked perfectly for my needs except for one issue: it was overkill. When I started the project I had just come out of a .NET-heavy environment where an ASP.NET + Entity Framework + MSSQL combination would've been a reasonable starting point. When working with .NET the tooling is fantastic, the pipeline is clear, and when you're not trying to do anything too fancy things "just work". That Diesel + Actix + PostgreSQL stack, at first glance, was "almost like working in .NET, but in Rust".

The complexity of the stack became a problem after the first working release was in use at home. As with every project in contact with end users, the feature requests started immediately. In particular, the problems began when I tried to expand the scope of the events I was persisting. As you can imagine, given that this was my first development in Rust, the code was not particularly well modularized, and testing changes to the underlying models forced me to do the usual ORM dance. Since I hadn't bothered to set up a proper test environment (I introduced TDD-isms later in my Rust journey[2]), I had to keep a couple of DBs around, populating fake data was a PITA, I had to maintain a SQL configuration of the DB, etc.

To be clear, I'm not implying that Diesel, Actix, or PSQL are in any way bad products. They're not. They're absolutely phenomenal. I simply introduced them too early, before I was remotely ready to keep the discipline and some semblance of reasonable structure in my code. And, more importantly, before my problem needed industry-grade solutions.

An anecdote about DBs vs. Plain Old Files

While working on rewriting the UI in yew I was checking some material by Robert C. Martin:

In one of his adorable ramblings he describes a project (perhaps FitNesse?) where the developers started the product planning to introduce a DB down the line. But as the project progressed they never saw the benefit of installing it. The file-based persistence they'd been using during development was doing the job just fine.

That made me think: had I over-planned my solution? At that point I was starting to use cargo test and slowly learning how test suites work in Rust. I was already using intermediate .json files with fake data to check some timestamp details. So, why not try that route? I had every piece in position: Serde was there, serialization was already in place thanks to Diesel, so why not use a plain old chunk o' JSON?

And that's exactly what I did. After some testing on my very humble Raspberry Pi 3B I verified that it was capable of handling "years" of fake data without struggling too much. The "schema" was dead simple. Making data backups was as stupidly simple as committing a file to git.
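The whole persistence layer boils down to "read a file at startup, write it back on change". Here's a minimal sketch of that idea, using only the standard library so it stays self-contained; the `Event` type, the one-record-per-line format, and the file name are all made up for illustration (the real app serialized richer structs to JSON with Serde):

```rust
use std::fs;
use std::io;

// Hypothetical, dead-simple event record. The real app used Serde
// derives and a JSON file instead of this ad-hoc format.
#[derive(Debug, PartialEq)]
struct Event {
    timestamp: u64,
    kind: String,
}

// The entire "schema": one event per line, fields separated by a tab.
fn save_events(path: &str, events: &[Event]) -> io::Result<()> {
    let body: String = events
        .iter()
        .map(|e| format!("{}\t{}\n", e.timestamp, e.kind))
        .collect();
    fs::write(path, body)
}

fn load_events(path: &str) -> io::Result<Vec<Event>> {
    let body = fs::read_to_string(path)?;
    Ok(body
        .lines()
        .filter_map(|line| {
            let (ts, kind) = line.split_once('\t')?;
            Some(Event {
                timestamp: ts.parse().ok()?,
                kind: kind.to_string(),
            })
        })
        .collect())
}

fn main() -> io::Result<()> {
    let events = vec![
        Event { timestamp: 1, kind: "feeding".into() },
        Event { timestamp: 2, kind: "expulsion".into() },
    ];
    save_events("events.tsv", &events)?;
    // The whole "database" is now a plain file: trivial to back up in git.
    assert_eq!(load_events("events.tsv")?, events);
    println!("round-trip ok: {} events", events.len());
    Ok(())
}
```

No migrations, no connection pool, no SQL dump to keep in sync: backing up or faking data means touching one file.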

I stripped the DB out of my project and I was happily tweaking the code again without thinking too much about it.

From Actix to Warp

Sometimes, when you approach some code with a mindset of "How can I simplify this mess?", you start questioning every component. Once Diesel was out of the mix it was time to take a look at Actix. There's nothing wrong with it as far as I can tell. It has good documentation and samples. It seems to be one of the most performant web servers written in Rust. It was already working perfectly well. And still, I decided to move to warp. Honestly, my reasons here are way fuzzier than for removing Diesel. I'll try to summarize them a bit.

1. It was the first Rust library I tried to use

Just try to imagine the hubris of starting from scratch with a language and deciding to:

  1. Make a Web App on a non-native target.
  2. Start reading the book right at that moment.
  3. Pick Actix as the first library to explore, without even knowing what Tokio is.

I think I choked on the mix of magical macros, documentation and bizarre syntax. Simply put, I made it work through brute force and luck[3]. From that point on I never felt as if I understood that part of the puzzle, and it was a constant source of uncertainty.

2. Async was becoming a pressing matter

If you read the Bonus section you'll get an idea of my misadventures trying to make a totally "sync" frontend talk to an async backend. Once I realized that async was unavoidable, I decided to start looking deeper into the ecosystem. Tokio was the natural candidate, so there I went.

If I understand Actix correctly, it uses Tokio under the hood, but I wanted more direct contact with it. At the other extreme, it's possible to build a web server out of Tokio's components alone. My goal was to find something in between: a solution that would let me dip my toes in Tokio without drowning. And warp sits right there, in that category.

A couple of warp samples and tricks

Let's take a look at how the root of the server looks:

use tokio::sync::mpsc;
use tokio::task;

// `persons`, `feedings`, etc. below are the app's own modules.
let in_thread_server = task::LocalSet::new();
let (tx, rx) = mpsc::channel::<CommandToBackend>(32);

let routes =
    // API
            persons::filters::all_persons(tx.clone())
    .or(   feedings::filters::all_feedings(tx.clone()))
    .or( expulsions::filters::all_expulsions(tx.clone()))
    .or(     events::filters::all_events(tx.clone()))
    .or(      admin::filters::all_admin(tx.clone()))
    // Static content
    .or( static_file_filters::get_index())
    .or( static_file_filters::get_static_file())
    .or( static_file_filters::serve_index_by_default_get());

let warp_server = tokio::spawn(async move {
    warp::serve(routes).run(([0, 0, 0, 0], 80)).await;
});

If we ignore the slightly concerning LocalSet, the code looks quite clear to me. Not a macro in sight. Just "plain Rust".

This was the direction I was following. The examples in the repo are well explained and they'll probably cover any need you may have (within reason). But then I started noticing that the compilation times were increasing. Like, a lot. Like 8 minutes for an arguably tiny server. Then I found out that this issue is known and it has a solution: BoxedFilter.

Let's check an example: a GET endpoint that answers at api/persons, requires an mpsc channel Sender, and responds with a JSON payload containing the persons persisted in the server:

pub fn get_persons(tx: Sender<CommandToBackend>) -> BoxedFilter<(impl Reply,)> {
    warp::path!("api" / "persons")
        .and(warp::get())
        .and(with_command_sender(tx))
        .and_then(handlers::ost_get_persons)
        .boxed()
}

This filter actually uses a macro, but this one is oh-so-convenient! The point is that after introducing this change the compilation times on my Raspberry Pi 3B went from days of compiling to ~12 minutes. That is arguably still a very long time, but we're talking about an ARM32 board used for hobby projects. Not bad.
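Notice the tx.clone() handed to every filter in the route table: each request handler ends up owning an independent Sender into the same channel, while a single receiver loop plays the role the database used to. Here's that fan-out pattern in miniature, using std's synchronous mpsc instead of Tokio's so it runs standalone (the enum variants are illustrative, not the app's real commands):

```rust
use std::sync::mpsc;
use std::thread;

// Stand-in for the app's CommandToBackend enum.
#[derive(Debug)]
enum CommandToBackend {
    GetPersons,
    GetFeedings,
}

// Many producers, one consumer: each "handler" owns its own clone of
// the sender; the single receiver is the backend task owning the data.
fn fan_out() -> usize {
    let (tx, rx) = mpsc::channel::<CommandToBackend>();

    // Mirrors the tx.clone() per filter in the warp route table.
    let tx_persons = tx.clone();
    let tx_feedings = tx.clone();
    drop(tx); // drop the original so rx's iterator can terminate

    let a = thread::spawn(move || tx_persons.send(CommandToBackend::GetPersons).unwrap());
    let b = thread::spawn(move || tx_feedings.send(CommandToBackend::GetFeedings).unwrap());
    a.join().unwrap();
    b.join().unwrap();

    // Drains until every sender clone has been dropped.
    rx.iter().count()
}

fn main() {
    println!("backend received {} commands", fan_out());
}
```

The cheap, clonable Sender is what makes this architecture comfortable: state lives in one place and the handlers only ever hold a handle to the channel.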

Bonus: Forcing a Web App to fetch synchronously

Once my new server was in place and properly tested, it was time to hook the UI to it. Simple stuff, right? Well... let's talk about the state of the Web App code at that point in time. Starting with my Cargo.toml:

yew = { git = "https://github.com/yewstack/yew", rev = "94b475213aae0ca0c5397c7809a17d23cebea041" }

Yep, you can do that in a Cargo.toml. I don't suggest you do, but it's possible. IIRC that rev corresponds to a very early 0.18.xx. Around this point in time the yew team was about to publish 0.19.3.

And as for how the components were querying data, I had a very handy "God Struct" that exposed functions such as:

impl LillaOstState {
    // vvv Notice the lack of `async`
    pub fn persons(&self) -> Vec<Box<dyn Person>> {
        self.local_storage_monolith.persons()
    }
    // Many, many, MANY sync calls follow
}

So, in summary:

  1. For my own sanity, I needed to stabilize my UI code on some published version of yew.
  2. I was assuming that my data was local, in memory and perfectly synchronous.
  3. I wanted to use the UI with the new backend ASAP.

This situation was no-bueno.

My frontend and backend were incapable of talking to each other; or at least, I was unable to imagine a workaround that would let me hook a fully sync UI to an async backend. I tried to find a "rusty" way of doing synchronous fetch calls to sidestep the issue. I tried reqwest and a couple more approaches, without luck.

Then I looked deeper into JavaScript. My question was: "In a modern browser, is it allowed to wait for a fetch to complete?". If you go to the documentation for XMLHttpRequest.open, read through the options, and ignore every-single-warning, you discover that you can force the browser to execute its queries synchronously[4]. You also conclude that this is a very bad idea. It was exactly the kind of nasty patch I needed to make a temp release and unblock myself.

wasm_bindgen: Calling JS from WebAssembly

I had a potential solution in JavaScript that I wanted to call from my WebAssembly. That's something you can do through wasm_bindgen. At a high level, you write your JS side by side with your Rust sources:

// Don't use this code. This is bad. This makes Santa sad.
export function get_string_from_network(url) {
    try {
        const request = new XMLHttpRequest();
        request.open("GET", url, false); // <<< Here, that `false` means synchronous
        request.send();
        return request.response;
    } catch (e) {
        console.error('get_string_from_network');
        console.error(e);
        return e;
    }
}

And then you expose the functions to Rust through the super-duper-magical glue that wasm_bindgen is capable of generating:

use wasm_bindgen::prelude::*;

#[wasm_bindgen(module = "/src/serial-comms.js")]
extern "C" {
    #[wasm_bindgen(js_name = "get_string_from_network")]
    pub fn get_string_from_network(url: String) -> String;
}

Then, the absolutely stellar trunk does all the packaging required to bring your JavaScript into your dist build. It just works. At this point I had an opening to make all my calls sync and simply substitute LocalStorage --> get_string_from_network, or whatever I needed.

After a moment of exultant joy I decided to go into the shower, hug myself and cry for my lost soul.

Program your Web Apps with async in mind. Be gentle to yourself.


[1] For more details, check: LillaOst Introduction.

[2] There's a lesson for me here. When starting with a new language, check the testing capabilities first.

[3] Somehow, as I get older and more experienced, I become luckier in these matters, but that is beside the point.

[4] Don't do this in your code. Honestly, don't. This is not the way, I promise.