From Monolithic to Micro-services, Part 4

We’ve covered a lot of ground in this series of articles, and are now on the final chapter. We’ve looked at the reasons we decided to move from a monolithic app to a micro-services approach, the criteria we used to select our toolset, and how we implemented the transition. Finally, we want to share how we use our new architecture, along with some of the code we have been writing.


Whether you take a monolithic or a micro-services approach, you must decide whether to employ a framework. There are hundreds of frameworks available, and every week or so a new one is added; even in Go, there are plenty to choose from. Before deciding on a framework, or on whether to use one at all, you should understand their uses and benefits. Frameworks are mostly needed for discovering new services, and for managing new and existing services (start, stop, etc.), and some offer even more on top of that.

Despite these benefits, using a framework means you need to include its code within your app, and for the micro-services approach, that can be a lot of overhead, and a big dependency to add. A great deal of code is dedicated to the framework configuration and events management.

For us, this overhead was an unacceptable burden, even with the benefits a framework could potentially provide. We prefer to keep things simple, and stay with vanilla Go. This allows us to separate the “infrastructure” code from our business-logic code. Every service remains independent, and can be started without a registry service, or any other service. If we need to test communication between services, we use docker-compose files to run the minimal set of images. No frameworks for us!
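To make that concrete, here is a sketch of what such a docker-compose file might look like for testing one service against its database (the image names and environment wiring here are illustrative, not taken from our actual repositories):

```yaml
version: "2"
services:
  badges:
    # Hypothetical image name for the badges service
    image: gemnasium/badges:latest
    environment:
      - POSTGRESQL_USER=postgres
    depends_on:
      - postgresql
  postgresql:
    image: postgres:9.5
```

With docker-compose version 2 files, both containers share a network, so the service can reach its database simply at the hostname "postgresql".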

Consuming Microservices

A service is often not self-sufficient, and needs to communicate with its peers to function. Because we don’t use a specific framework, we stick to plain old HTTP communication, and every service is actually a REST API. This allows us to use curl for debugging. For the moment, we are keeping things relatively simple, but our no-framework approach leaves a great deal of flexibility for later optimization.

Because of the flexibility of our approach, we will be able to optimize later with solutions such as gRPC, which is based on Protocol Buffers (another open-source gem from Google). gRPC establishes a contract between services in order to serialize data. This has the advantage of fixing the API specs with an IDL. It looks something like this:

message HelloRequest {
  string greeting = 1;
}

message HelloResponse {
  string reply = 1;
}

service HelloService {
  rpc SayHello (HelloRequest) returns (HelloResponse);
}
This will be compiled, and the generated code will be included in the project. If the specs evolve, the compiler will make sure nothing is broken. This has many advantages, including a big performance boost, because data is not serialized to standard JSON, but to an optimized binary format.

The ability to leverage such improvements in later implementations was a key factor in deciding our approach; for the moment, however, it is more convenient to use JSON directly.

Since Go 1.6, HTTP/2 has been part of the standard library for both the client and the server, allowing us to benefit from this performance boost in production, while staying with classic HTTP/1.1 in dev, for debugging. Building an HTTP/2 server in Go is as simple as:

import (
    "log"
    "net/http"
)

func handler(w http.ResponseWriter, req *http.Request) {
    w.Header().Set("Content-Type", "text/plain")
    w.Write([]byte("This is an example server.\n"))
}

func main() {
    http.HandleFunc("/", handler)
    log.Printf("About to listen on 10443. Go to https://127.0.0.1:10443/")
    err := http.ListenAndServeTLS(":10443", "cert.pem", "key.pem", nil)
    log.Fatal(err)
}

That’s right, it’s the same code as a “classic” HTTP server using HTTP/1.1.


Our configuration is kept as simple as possible as well. We follow the 12-factor methodology for our services, and all configuration is passed using environment variables. That’s it, nothing else.

How do we set all these variables? The network info (addresses, ports) is passed automatically by OpenShift to each container. Because of this, there is no need for a central registry.

To illustrate this, here is our PostgreSQL client library:

package pgcli

import (
    "github.com/jmoiron/sqlx"
    "github.com/urfave/cli"

    log "github.com/sirupsen/logrus"
    _ "github.com/lib/pq" // PostgreSQL driver, registered with database/sql
)

const (
    Host     = "postgresql-host"
    Port     = "postgresql-port"
    Password = "postgresql-password"
    User     = "postgresql-user"
    DB       = "postgresql-dbname"
    SSL      = "postgresql-sslmode"
)

func init() {
    // Set credentials from mounted volume
}

func MakeFlags() []cli.Flag {
    return []cli.Flag{
        cli.StringFlag{
            Name:  Host + ", pg-host, ph",
            Value: "postgresql", // service name
            Usage: "PostgreSQL host (use '/tmp' for unix socket on mac os, or '/var/run/postgresql' on linux)",
        },
        cli.StringFlag{
            Name:  Port + ", pg-port, pp",
            Value: "5432",
            Usage: "PostgreSQL port",
        },
        cli.StringFlag{
            Name:   User + ", pg-user, pu",
            Value:  "postgres",
            Usage:  "PostgreSQL user",
            EnvVar: "POSTGRESQL_USER",
        },
        cli.StringFlag{
            Name:   Password + ", pg-pw, pw",
            Value:  "",
            Usage:  "PostgreSQL Password",
            EnvVar: "POSTGRESQL_PASSWORD",
        },
        cli.StringFlag{
            Name:   DB + ", pg-db, pd",
            Value:  "postgres",
            Usage:  "PostgreSQL database",
            EnvVar: "POSTGRESQL_DATABASE",
        },
        cli.StringFlag{
            Name:   SSL + ", pg-ssl, ps",
            Value:  "disable",
            Usage:  "PostgreSQL sslmode",
            EnvVar: "POSTGRESQL_SSL",
        },
    }
}

func Connect(c *cli.Context) *sqlx.DB {
    conStr := ConnectionString(c)
    log.Debugf("Connecting to db: '%s'", connectionStringWithPasswordRedacted(c))
    db, err := sqlx.Connect("postgres", conStr)
    if err != nil {
        log.Fatalf("Can't connect to postgresql, please check your connection url. (err: '%s')", err)
    }
    return db
}

Every service is a command-line tool and a web server (the default). The flags are added to a CLI (command-line interface) command, and some options are passed in the CMD of the container.

Creating microservices does not mean we need to isolate and duplicate code. We are able to take common libraries such as pgcli, and share them among all services. As we now use continuous deployment, any change to this library triggers a new build of the dependent projects (microservices), and deploys them right away.

Values are populated by a ‘secret’, which is basically a read-only configuration file shared by several containers. Credentials don’t appear in the container inspect info. Secret volumes are shared between services where it makes sense, such as the SMTP config. Each service has its own credentials, and is given a level of accreditation that allows it to access the database. Environment variables like ‘POSTGRESQL_PORT_5432_TCP_PORT’ or ‘POSTGRESQL_PORT_5432_TCP_ADDR’ are populated for each container by OpenShift.

This means we don’t have to manage the configuration of our services at all. As soon as we define a “postgresql” service (which is basically the IP of a load balancer in front of our DBs), all containers receive these variables automatically! As with docker-compose 2, containers share the same network, and a hostname named “postgresql” is now available. That’s what we love about OpenShift and Kubernetes: everything seems so simple once it is put in place.


Much like any other project, each microservice has unit tests. These tests can be more complex to achieve, since the microservices approach means more processes to start (in the right order) to go through all the layers. Because of this, and because our SaaS and the upcoming Gemnasium Enterprise now share a great deal of code and features, we use the latter for end-to-end tests. Because we use plain Go (mostly HTTP client/server), it is relatively easy to test new services.


Our no-framework approach applies to scaling as well. Because our microservices rely on centralized services such as PostgreSQL and NSQ, we are able to run a large number of them concurrently, without any problems. As long as we do not write files to the common shared file system (which we do not), our system hums along smoothly.

As we mentioned, our containers are accessible through services, which act as load balancers. For example, the badges service we outlined in our previous article routes and load-balances traffic across all containers running the badges server. Even with a hundred running simultaneously, nothing would change in our code (and the performance hit would be negligible as well!).

Monitoring and Alerting

The more you do with any app, whether monolithic or microservices, the more metrics you will need when debugging, or even just monitoring. There are numerous things that are important to keep track of: the number of signups, failures during project syncs, notifications sent (by channel), and more. Gemnasium uses Graylog for this, because we need a tool that resides in our network and keeps our logs secure within the network as well (logs should never be sent elsewhere). Graylog is a free, open-source log aggregator with tons of features (including dashboards, alerts, search graphs, and more).

Graylog raises alerts for us based on the number of occurrences of an event, while Airbrake notifies us of the first error, allowing us to respond quickly. We have set up alerts and notifications in Graylog based on triggers.

Embrace failure

Lastly, as developers, our approach has to account for the possibility of failure, and deal with it appropriately. Even if Kubernetes is smart enough to avoid sending traffic to an invalid pod, things can quickly become unstable or unavailable. To us, accounting for failure means not only being reactive and aware, but actually embracing failure as part of the roadmap to success. We need to anticipate failure in every workflow, particularly:

  • Idempotent tasks (for example: do not insert a record twice if a task runs twice, do not send the same email twice, etc.)
  • Check writes (DB, NSQ, etc.)
  • Retries 
    • Hot retry (inside the running instance, with exp. backoff) 
    • Cold retry (instance can be killed, and the backoff with it, allowing a retry of every task that should be finished already)


Our overarching advice: Keep it simple at the beginning, and optimize further down the line, if necessary. This advice serves equally well for any project. Introducing complexities too early in the project’s lifeline merely compounds difficulties later on. Ensure any complexities added are well-considered, and add significantly to the end product - a framework’s benefits must outweigh the overhead and dependencies incurred, for example. Even in planning a large shift such as ours from monolithic to micro-services, the question should always be - “does this make our transition easier, or can I see enough benefits to warrant any difficulties?”. You will recall that our decision to select Openshift was based on exactly this principle. It provided a strong framework, and good documentation, allowing us to implement our first services successfully the first time around. Our decisions were based on what was before us, rather than what might be available in later versions of other products. Stick to what works, and innovate based on a firm foundation.

As we sign off on our final instalment of this blog series, we hope that our experiences in moving from a monolithic application to a microservices approach proves helpful to your organization, not only if you are preparing for a similar shift, but also if you are determining if such an approach is right for you.

Thanks for reading!