Inspired by recent technical posts by the Ushahidi Platform team, we're running a series of posts for software engineers and product developers detailing how the CrisisNET platform is put together. Contact us on Twitter to find out how you can get involved.

By Jonathon Morgan

We're taking the world's crisis data and making it accessible in a simple, intuitive firehose of real-time information. In building the first version of the CrisisNET platform, we were most interested in validating our assumption that this problem (lack of access to real-time crisis data) is worth solving, with bonus points for discovering what sort of user (journalists, developers, analysts, crisis responders, etc.) gets the most value out of this type of service.

This fairly simple idea represents an ambitious project. As anyone who has heard me speak over the past few months can attest, humanitarian data is scattered, incomplete, and hard to find. Other sources of open data, like social media, are easy to come by but unstructured and chaotic compared to reports from NGOs. Even a basic approach to collecting and understanding vast and complex information from disparate sources could easily represent a significant engineering effort.

With that in mind, our design decisions were guided by the following principles:

  1. Fail fast: build a minimum viable product as quickly as possible. If our assumptions are wrong, at least we won't waste too much time and money building something nobody wants.

  2. Decide later: we don't know who will use the system or how they'll use it. Aim for a level of abstraction that makes the data accessible without taking choices away from the user.

  3. Design for help: this problem is too big for us to solve on our own, even at a well-funded organization like Ushahidi. Make it easy for community developers to engage with the project, and solve small, discrete problems without necessarily understanding the entire system.

Building from those principles, our go-to-market strategy was to provide a simple data API to our users, while demonstrating the platform's potential by partnering with data providers and visualizing that data as it was ingested into our platform. We settled on a six-month road map that'd give us three months for initial development, and then three months after that to ingest data, build visualizations, and travel around the world talking about our work and getting feedback from potential users.

Focusing first on the development process, we let those three principles guide a handful of high-level architectural decisions and technology selections.

A Distributed Approach

The platform design borrows ideas from Service-Oriented Architecture (SOA) and the Pipes-and-Filters pattern, without adhering religiously to either. Don't worry if you're not familiar with these terms. For now it's only important to note that both were relevant to our problem because they can be applied in simple, straightforward ways for small systems, yet scale up to enterprise applications. They encourage small, discrete components that are easy to understand and manipulate. In technical terms, because each component is an independent service running in its own process, both the mechanism used for inter-process communication (usually just called "IPC") and the processes themselves can be distributed, which means they can run on separate physical or virtual machines. This allows developers and system administrators to design, implement, and scale any aspect of the system in relative isolation, which both makes the application more flexible and reduces the complexity of each component.
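
To make the pipes-and-filters idea concrete, here's a minimal sketch of what such a pipeline could look like. Every name in it (Report, Filter, normalizeText, tagKeywords, runPipeline) is hypothetical, and this is illustrative TypeScript rather than code from the CrisisNET repository; it only shows how small, single-purpose filters can be composed into a pipeline without any one of them needing to know about the others.

```typescript
// Illustrative only -- these types and filters are not from the CrisisNET codebase.

// A crisis report moving through the pipeline.
interface Report {
  source: string;
  text: string;
  tags?: string[];
}

// A "filter" is any small, independent transformation of a report.
type Filter = (report: Report) => Promise<Report>;

// Example filter: normalize whitespace in the report text.
const normalizeText: Filter = async (report) => ({
  ...report,
  text: report.text.trim().replace(/\s+/g, " "),
});

// Example filter: attach simple keyword tags as metadata.
const tagKeywords: Filter = async (report) => ({
  ...report,
  tags: ["flood", "earthquake", "fire"].filter((keyword) =>
    report.text.toLowerCase().includes(keyword)
  ),
});

// The "pipe": apply each filter in order, handing the result to the next one.
async function runPipeline(report: Report, filters: Filter[]): Promise<Report> {
  let current = report;
  for (const filter of filters) {
    current = await filter(current);
  }
  return current;
}

// Usage: process a raw report pulled from some data source.
runPipeline(
  { source: "twitter", text: "  Major   flood reported downtown  " },
  [normalizeText, tagKeywords]
).then((processed) => console.log(processed));
```

Because each filter only sees a report coming in and a report going out, any one of them can be rewritten, replaced, or moved into its own process without touching the rest of the pipeline.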

This way we can quickly adapt to how the platform needs to grow, depending on how it's used. For example, if the primary stress comes from ingesting large amounts of data, we can increase resources dedicated to the parts of the system that process data as it comes in, without worrying that we'll interrupt users' access to that data as they take it out. Or if the API and Developer Portal see a sudden spike in requests, we can scale up the API server to accommodate the increased traffic without worrying about how that will impact development on the processing platform (or the marketing pages, scheduling engine, blog, etc.). We'll get into the technical details of how each component is designed and our IPC mechanisms in future posts.
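
As a rough illustration of that separation, the sketch below decouples ingestion from processing with a queue. This post doesn't name the broker or IPC mechanism CrisisNET actually uses, so the in-memory queue and all the names here are stand-ins; the point is only that the producer and the consumer never call each other directly, which is what lets each side scale on its own.

```typescript
// Illustrative only -- the queue interface and component names are assumptions,
// not the actual CrisisNET IPC mechanism.

interface RawDocument {
  source: string;
  payload: string;
}

// The contract between components: push on one side, pop on the other.
interface Queue<T> {
  push(item: T): Promise<void>;
  pop(): Promise<T | undefined>;
}

// In-memory stand-in. In a real deployment this would be backed by a broker
// (Redis, RabbitMQ, etc.) so producers and consumers can run on separate machines.
class InMemoryQueue<T> implements Queue<T> {
  private items: T[] = [];
  async push(item: T): Promise<void> {
    this.items.push(item);
  }
  async pop(): Promise<T | undefined> {
    return this.items.shift();
  }
}

// Ingestion side: fetch a document from a data provider and enqueue it.
async function ingest(queue: Queue<RawDocument>, doc: RawDocument): Promise<void> {
  await queue.push(doc);
}

// Processing side: workers drain the queue at their own pace. Adding more
// workers speeds up processing without ever touching the API servers that
// read the already-processed data.
async function processNext(queue: Queue<RawDocument>): Promise<void> {
  const doc = await queue.pop();
  if (doc) {
    console.log(`processing document from ${doc.source}`);
  }
}

// Usage
const queue = new InMemoryQueue<RawDocument>();
ingest(queue, { source: "ngo-situation-reports", payload: "<xml ... />" }).then(() =>
  processNext(queue)
);
```

Swapping the in-memory queue for a real broker changes nothing about the producer or consumer code, which is exactly the property that lets each component be scaled or redeployed independently.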

Perhaps more importantly, this means we can also distribute engineering responsibilities to the community. Developers interested in how we ingest data from a specific source, or how we expose search filters to access data from the API, or even how we might add new metadata to information as it moves through our system, can maximize their time by quickly understanding, updating, or augmenting a small segment of the codebase in a way that doesn't jeopardize the stability of the system.

Lastly, by building in small pieces, guided by robust architectural principles, we were able to release an alpha version of our platform in just five weeks, with the public beta following only a few weeks later. Our framework for ongoing development is both agile and stable, so our system has been able to grow and adapt with the needs of our users -- which is essential in releasing the first version of a new application. It's easy to get caught up in elegant designs and the latest technologies, but ultimately users only care about the features you deploy, and we're building a web service, not the Mars Rover. So we ship early, ship often, and iterate quickly based on user feedback -- all with the knowledge that small changes to one part of the codebase won't have surprising consequences.

In future posts we'll get into the details of how each component is put together, and how those components plug in to the system that drives our platform.

Note: If you're really into distributed systems, and want something to read before we publish the next post in this series, I've written about this approach to platform architecture before. Check it out.