User:Dicebot/Articles/Introduction to vibe.d

From D Wiki


vibe.d is a popular D web/networking framework featuring asynchronous I/O and convenient fiber-based concurrency. This is the first part in a series of articles explaining its design, its goals, and example implementations of different applications.


Some History

The history of network services (especially web services) has largely revolved around different ways to handle concurrency. By their very nature, such applications need to handle many simultaneous requests with no hard upper limit. What makes this especially complicated is that every user behind those requests expects a response as quickly as if they were the only one using the system. And if your service suddenly gets popular, you really want to be able to scale appropriately, so that the first impression is as good as possible.


A long (really long) time ago the most popular way to write web applications was CGI (Common Gateway Interface). It was very simple:

   -> request ->                 -> spawn new process ->
                   [web server]                          [application]
   <- response <-                <---- read stdout <----

It won't be much of a surprise if I tell you that this does not work well with many concurrent requests. Spawning a new process for each request is both very slow (initialization overhead) and consumes a lot of operating system resources (memory + process contexts). As soon as the web became more popular, new approaches were necessary.


An obvious enhancement over the plain CGI approach is to keep the same process running and feed it new request information over and over again - this is essentially the FastCGI model. Simplified:

   -> request ->                 ->  pipe  ->
                   [web server]               [persistent application]
   <- response <-                <- read stdout <-

It is pretty fast, saving on both initialization time and maintaining request contexts. There is no real concurrency here, as the application serves incoming requests sequentially, one by one. It may seem naive, but in fact, when each request's processing time is small, this approach results in a very smooth user experience, because the request queue never fills up. Partial concurrency support can be added by spawning several application processes and load balancing incoming requests between them.

This approach is still somewhat popular for cases where processing requests does not require any I/O or costly computation. It is hard to imagine a modern web service without any file system or database access, though. This is where things start to get complicated. Something like reading a file is naturally done as a blocking operation - you call the function and wait for it to return some data. In the context of FastCGI, however, this results in your application wasting CPU cycles doing nothing all that time. Time that could have been spent processing other requests.

thread per request

This approach became widespread together with Apache. The famous "LAMP" (Linux, Apache, MySQL, PHP) stack has been dominant among web services for quite a long time. It is very similar to CGI but spawns a new thread for each request instead of a process. This is how it worked for LAMP:

   -> request ->                                          -> spawn new thread ->
                  [Apache] <-> mod_php <-> [interpreter]                         [PHP script]
   <- response <-                          <- read stdout, reset/kill thread <-

Some other solutions used a similar model but merged the web server and the actual application into a single service. Such a model is very convenient to work with - there is no need to care about blocking operations at all; the operating system scheduler will figure it out. And it worked well, until the concurrency requirements of web services grew even more (see the C10K problem). Threads are still first-class operating system entities with a notable context overhead.

What is even worse, making thousands of threads compete for the same shared resource locks will hurt your performance so badly you may think it is still the CGI era. These locks were completely unnecessary with the FastCGI approach, because you could keep separate, unshared data sets for a few worker processes and synchronize them slowly in the background with no performance penalty. Eventually it became obvious that straightforward usage of threads is incapable of addressing the new concurrency requirements.

async I/O

The concept of asynchronous input/output operations is actually not new at all. Non-blocking Berkeley sockets have been around for ages and were recognized by most network programming professionals as the true way of writing server applications. The idea is that instead of waiting for data to arrive, your program just moves forward and periodically checks the state of that operation. For example, nginx is a very fast web server and reverse proxy that uses such a design for both socket and file system access. It only uses a few independent worker threads to utilize the available processor cores; no shared state or costly context data is necessary.

A simplistic description of such an algorithm:

  1) check for new requests
  2) if there are any, process each until I/O is needed:
     2.1) if the response was fully written, clear the context
     2.2) otherwise create context data and start a non-blocking I/O operation
  3) loop through the stored request contexts
  4) if I/O has finished for any of them, resume processing as in step 2
  5) go to 1
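To make the loop above more concrete, here is a minimal sketch in D using the standard std.socket module with non-blocking sockets and Socket.select. It is only an illustration of the idea (nginx and real servers are far more elaborate); error handling and context cleanup are omitted:

```d
import std.socket;
import core.time : dur;

void main()
{
    auto listener = new TcpSocket();
    listener.blocking = false;
    listener.setOption(SocketOptionLevel.SOCKET, SocketOption.REUSEADDR, true);
    listener.bind(new InternetAddress(8080));
    listener.listen(64);

    Socket[] clients; // per-request "contexts" the OS knows nothing about
    auto readSet = new SocketSet();

    while (true)
    {
        readSet.reset();
        readSet.add(listener);
        foreach (c; clients) readSet.add(c);

        // wait until *any* socket becomes ready instead of blocking on one
        Socket.select(readSet, null, null, dur!"msecs"(100));

        if (readSet.isSet(listener))
            clients ~= listener.accept(); // step 1: a new request arrived

        foreach (c; clients)
        {
            if (!readSet.isSet(c)) continue; // I/O not finished, keep context
            ubyte[1024] buf;
            auto n = c.receive(buf[]); // ready, so this won't block
            // ... process the chunk, write the response, clear the context ...
        }
    }
}
```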

"context" here is purely application internal concept operating system is not aware of. Because of that you can have thousands of them without having any impact on system scheduler.

The actual implementation is complicated, though. Using non-blocking I/O the traditional C way effectively means turning your application into a giant state machine, and even a small mistake in the context scheduling logic can have a horrible impact. Something like nginx helps with access to static files, but the modern web is incredibly dynamic. Finding a programmer to write a typical PHP/MySQL script is easy. Finding a programmer to write the same thing as a non-blocking state machine? Hardly so.


node.js is what has really popularized the asynchronous approach in the modern web community. Instead of doing all the polling of I/O operations manually, it features an event system that triggers user-supplied callbacks once the data has arrived. This results in code that is much simpler to write; an example from the official web page:

var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(1337, '127.0.0.1');

Such an approach has slightly more overhead than a manual state machine, because of the event loop it maintains, but that overhead is constant and does not get much worse as the request rate increases. The fundamental principle is the same: don't wait for I/O to finish; process other requests and return once the data is ready. In such a model requests only seem to be processed concurrently; in fact, the application executes small chunks of each request context sequentially. But because no time is wasted on blocking, the total processing time stays low enough to look smooth to the user.

But of course, if node.js were a good enough solution, I would not have written this D article ;) It has several major issues:

  1. callback hell : application code only looks clean and simple when there are few I/O operations. Once you start registering new callbacks from the bodies of existing ones, it becomes very hard to figure out the exact application code flow. If an exception gets thrown, the resulting stack trace is useless, because it leads back to the event loop origins and not to the place where the callback was registered.
  2. JavaScript : a dynamic language with weak typing can be very convenient for writing user interface scripts, but it does not fit the demands of server programming. Network services need to be both fast and reliably correct, and failing at that can cost you some very real money. Also, V8 is very fast, but not as fast as a good native compiler optimizing for specific server hardware.
  3. no concurrency : node.js does not even have worker threads; to use all available cores one needs to spawn multiple processes. Doing concurrency with sharing is very difficult, but when you are trying to get the most performance, it can help you improve instruction cache locality (by pinning worker threads to different parts of the code) and reduce the total I/O operation count (by using smart local caching). It is one of those options you prefer to avoid but still want to keep available.


vibe.d is one attempt to make modern async networking even better.

It has the same advantages as node.js :

  1. easy asynchronous I/O using event loop
  2. scales to very high concurrency levels
  3. provides high-level utilities for web applications

But it also fixes the issues mentioned above:

  1. callbacks are not mandatory : lightweight fibers make it possible to pause and resume execution contexts with the help of the event loop; the actual syntax looks as if it were blocking code
  2. D is a modern, natively compiled language with a strong type system and powerful high-level abstractions
  3. vibe.core.concurrency is an extension of the std.concurrency module from the D standard library that enables message passing between fibers, running task abstractions in reusable worker thread pools, and many other advanced concurrency techniques, making it possible to build even very complicated systems
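As a quick taste of point 1, here is a sketch of what fiber-based code looks like in vibe.d: sleep below suspends only the current fiber, while the event loop keeps serving other tasks. The module and function names match the vibe.d API as I know it (runTask, sleep, logInfo), but treat the snippet as illustrative rather than canonical:

```d
import vibe.core.core;
import vibe.core.log;
import core.time : seconds;

shared static this()
{
    runTask({
        logInfo("starting some 'slow' work");
        sleep(2.seconds); // reads like blocking code, but only this fiber
                          // pauses - no callback registration needed
        logInfo("done");
    });
}
```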

I must admit there are quite a lot of frameworks in various languages trying to address similar concerns in one way or another. vibe.d is not unique and not perfect. But it is at the very least advanced enough to represent the state of the art in writing network services, and that is worth studying even for general educational purposes. Something I will try to help you with!

Getting Started

The easiest way to get started with vibe.d is to use dub, the D source dependency manager. It has a pre-defined template for creating new vibe.d projects:

$ dub init projname vibe.d
$ ls projname
dub.json  public  source  views
  • dub.json is the project description file; I have explained it a bit in my other article. It automatically includes a dependency on the latest vibe.d release when a new project is created using the dub template. You don't need to modify it for initial experiments.
  • public is the common place for static files (whether served by vibe.d itself or by a reverse proxy). It is empty by default.
  • source is supposed to contain all actual application sources. In the initial template it contains a simple app.d file which implements the traditional "Hello, World!" service.
  • views is the place where dub / vibe.d will look for Diet HTML templates. This will be explained later in detail.

To start the service, use dub again:

$ dub run
Building projname configuration "application", build type debug.
Running ./projname 
Listening for HTTP requests on ::1:8080
Listening for HTTP requests on 127.0.0.1:8080
Please open http://127.0.0.1:8080/ in your browser.

You will need the openssl and libevent libraries installed on your system (for Linux; users of other operating systems please refer to the official documentation). Use any package available in your distribution of choice.

Listing of app.d:

import vibe.d;

shared static this()
{
    auto settings = new HTTPServerSettings;
    settings.port = 8080;
    settings.bindAddresses = ["::1", "127.0.0.1"];
    listenHTTP(settings, &hello);

    logInfo("Please open http://127.0.0.1:8080/ in your browser.");
}

void hello(HTTPServerRequest req, HTTPServerResponse res)
{
    res.writeBody("Hello, World!");
}

Note that when you call listenHTTP, no listening happens immediately. Instead, an event is registered to start the actual listener in the main event loop - this is why the code is placed in a module static constructor and not in the usual main function. When a new request arrives, one of the task fibers from the pool is used, and after basic HTTP processing hello gets called as the user entry point. As you may notice, this is very similar to the node.js example snippet - but don't worry! You will see the difference once we get to examples with actual I/O.

Defining Routes

Right now this application always replies with the same data to all requests, whatever the actual URL is. Let's add some routing capabilities! Change source/app.d to this:

import vibe.d;

shared static this()
{
    auto router = new URLRouter;
    router.any("/hello", &hello);
    router.get("/goodbye", &goodbye);
    router.get("/bye", &goodbye);

    auto settings = new HTTPServerSettings;
    settings.port = 8080;
    settings.bindAddresses = ["::1", "127.0.0.1"];
    listenHTTP(settings, router);

    logInfo("Please open http://127.0.0.1:8080/hello in your browser.");
    logInfo("Plain http://127.0.0.1:8080/ should result in 404 Not Found");
}

void hello(HTTPServerRequest req, HTTPServerResponse res)
{
    res.writeBody("Hello, World!");
}

void goodbye(HTTPServerRequest req, HTTPServerResponse res)
{
    res.writeBody("Goodbye!");
}

Now, if you try opening http://127.0.0.1:8080/ (or any other undefined URL) in your browser, you will get an appropriate 404 status code response. Please also note that in this example the "/hello" route is registered for all HTTP methods, while "/bye" and "/goodbye" are registered only for GET requests. Sending POST, PUT or PATCH to those will also result in 404 Not Found. You can also register different handlers for the same URL depending on the HTTP method.
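For example, that last point might look like this (the handler names here are hypothetical):

```d
router.get("/item", &showItem);    // GET /item only
router.post("/item", &createItem); // POST to the same URL goes elsewhere
router.any("/ping", &ping);        // any HTTP method
```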

Using Diet Templates

Building an HTML response string manually is not what one expects from a web framework. vibe.d features Diet templates for that (inspired by Jade), which are both less verbose than pure HTML and allow embedding inline D code:

!!! 5
html
    head
        title vibe.d + Diet example
    body
        h1 Header
        p Paragraph 1
        p Paragraph 2
            | some numbers : 
            - foreach(i; 0..5)
                b #{i}

Resulting rendered HTML:

<!DOCTYPE html>
<html>
    <head>
        <title>vibe.d + Diet example</title>
    </head>
    <body>
        <h1>Header</h1>
        <p>Paragraph 1</p>
        <p>Paragraph 2
            some numbers : 
            <b>0</b><b>1</b><b>2</b><b>3</b><b>4</b>
        </p>
    </body>
</html>
The Diet implementation is somewhat unique and only possible because of D's features. All template rendering is done during compilation, using the power of D's meta-programming capabilities. This is what makes it possible to embed D code inside templates in a way that is usually only featured by interpreted languages. With a decent optimizing compiler, the generated rendering code should be as efficient as manually sending response string chunks to the socket. It does not come for free, though - recompilation of a project with many Diet templates is likely to become rather slow, making a quick edit-compile-refresh cycle impossible. There are some ideas about improving this workflow with the help of dynamically loaded shared libraries, but that is still work in progress.

To use this template, put it into the views folder as hello.dt and update your app.d:

import vibe.d;

shared static this()
{
    auto settings = new HTTPServerSettings;
    settings.port = 8080;
    settings.bindAddresses = ["::1", "127.0.0.1"];
    listenHTTP(settings, &hello);

    logInfo("Please open http://127.0.0.1:8080/ in your browser.");
}

void hello(HTTPServerRequest req, HTTPServerResponse res)
{
    res.render!("hello.dt");
}
For this to work out of the box, it is important for "hello.dt" to be located in the "views" folder. When looking for text file imports, D limits the search to a supplied list of paths, for hygiene and security reasons. "views" is included in that list automatically, but any other location will need an adjustment in the project description file (dub.json).
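As a sketch, an extra template directory (the directory name below is hypothetical) could be added via the stringImportPaths key in dub.json; note that once you specify this key yourself, "views" most likely has to be listed explicitly again:

```json
{
    "stringImportPaths": ["views", "extra-templates"]
}
```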

This example shows only a tiny part of the Diet template feature set, but it should be enough to get the basic concept. Detailed descriptions of more advanced features will be the subject of separate articles and do not fit into this basic introduction.

Database Access

// TBD

Concurrency Model

// TBD

Worker Threads

// TBD


// TBD

Message Passing

// TBD