Six years ago I started a job using a programming language that was unknown to me. I'm talking about Go, designed by Robert Griesemer, Rob Pike and Ken Thompson at Google.

– If you are in a hurry and only interested in my conclusions about Go, just read The good, The bad and Final thoughts sections –

Before my current job I worked with some server-side languages - a little PHP and a lot of .NET - so I was worried about my learning curve, because a new language means a lot of reading. To my surprise, I was productive from day one.

I recommend this and this lecture for a better understanding of Go's history and motivations, but for the lazy:

Go was designed to address a set of software engineering issues that we had been exposed to in the construction of large server software.

The properties that led to that include:
* Clear dependencies
* Clear syntax
* Clear semantics
* Composition over inheritance
* Simplicity provided by the programming model (garbage collection, concurrency)
* Easy tooling (the go tool, gofmt, godoc, gofix)

So basically, Go was designed for the rise of cloud computing (microservices) and to improve the developer experience. With that in mind, I can explain what I perceived as a newcomer.

Let's take a look at a hello world:

package main

import (
	"fmt"
)

func main() {
	var s string = "I'm a string"
	s2 := "I'm a shorthand declared and inited string"
	fmt.Println("Hello, playground:", s, s2)
}

View in playground

First of all, we notice the package declaration: main. Second, we find an import of the fmt package. And finally we have a main function that prints using the Println function. Note the full declaration of s versus the shorthand declaration of s2 (var s string vs. just s2 :=).

In the next example we are going to use Go's built-in concurrency mechanisms: goroutines and channels.

A goroutine is a lightweight thread managed by the Go runtime. Goroutines are cheap in memory: each one starts with a stack of only about 2 KB.

Channels are a typed conduit through which you can send and receive values with the channel operator: <-.

ch <- v    // Send v to channel ch.
v := <-ch  // Receive from ch, and assign value to v.

I reckon it's easier to think of a channel as a FIFO queue that can be filled by multiple goroutines.

With that in mind, let's take a look at an example where we send 3 messages to a channel from one goroutine, and receive and print those messages in another. It may be hard to understand at first sight, but relax, it doesn't get more complicated than this:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Create a new channel with buffer size 3.
	c := make(chan string, 3)

	// Populate channel
	go func() {
		c <- "One"
		c <- "Two"
		c <- "Three"
	}()

	// Listen for updates
	go func() {
		for {
			select {
			case s := <-c:
				fmt.Printf("received from channel: %s\n", s)
			}
		}
	}()

	// Ignore this: it just keeps main alive long enough
	// for the goroutines to finish.
	<-time.After(500 * time.Millisecond)
}

View in playground

As a side note, you are usually not supposed to spawn goroutines manually, since they are often launched for you by certain packages, e.g. the http package. If you need visual hints, this post can be really helpful!

Now you are ready to program in Go. Congratulations!


Since Go is strongly typed, you need a way for packages to communicate without depending on concrete types. That's the purpose of interfaces! Interfaces are named collections of method signatures.

Let's demonstrate the importance of a good interface (a strong abstraction). Imagine you are working on a pet shelter site and they ask you for help with a new search engine for stranded pets. You can search for cats, dogs or birds, and the service should return each shelter's name and location (since you want users to know where to go), plus more parameters like age or color.

So far only one API (SiteA's) is available, and we start with the Finder interface.

package finder

type Finder interface {
	// Find receives an options struct
	// and returns a Result array or error if any.
	Find(Options) ([]Result, error)
}

type Options struct {
	Type  string // cat, dog, bird
	Age   int
	Color string
}

type Result struct {
	ShelterName string
	Lat         float64
	Long        float64
}

Our Finder implementation for SiteA would look like this:

package sitea

type SiteA struct{}

func (f SiteA) Find(opt Options) ([]Result, error) {
	// Parse opt to match URL format
	s := customParseHelper(opt)

	// Some http fetch here using s
	// ...

	// Hardcoded response for demonstration purposes
	resp := []Result{{
		ShelterName: "Oregon Shelter 1",
	}}

	return resp, nil
}

Looking good. But now there is more participation from third parties, and SiteB wants you to search their database directly because they don't have a search API. No problem: our interface abstraction allows us to do this.

package siteb

var conn *sql.DB

func init() {
	// connect to database at init time
	conn, _ = sql.Open(...)
}

type SiteB struct{}

func (f SiteB) Find(opt Options) ([]Result, error) {
	// SQL query using the sql package or a toolkit (pseudo code)
	conn.Query(SELECT shelter_name, lat, long
            FROM stranded_pets
            WHERE type=opt.Type
            AND age=opt.Age
            AND color=opt.Color)
	// ...

	// Hardcoded response for demonstration purposes
	resp := []Result{{
		ShelterName: "Seattle Shelter 20",
	}}

	return resp, nil
}

Here is how it all looks assembled (using pseudo code for brevity):

package main

func main() {
	// 1. Start database connections if any.
	// ...
	// 2. Register and init available Finder implementations.
	finderArray := []Finder{
		sitea.SiteA{},
		siteb.SiteB{},
	}
	// 3. Declare an HTTP server and start attending requests.
	server := newServer()
	server.handleGet("/search", retrievePetsHandler)
	server.Run() // Keep running until SIGTERM is received.
}

// Handles GET /search requests and returns JSON responses
func retrievePetsHandler(r http.Request, w http.Response) {
	// Take search parameters from URL
	params := parseURLHelper(r)
	// Use the available finderArray
	var results []Result
	for i := range finderArray {
		currentFinder := finderArray[i]
		resp, _ := currentFinder.Find(params)
		// Add current results to the final array
		results = append(results, resp...)
	}
	// Encode response as JSON
	json.Encode(w, results)
}

A war story of mine

When I started using Go, my first project was turning a POC into an MVP. It was built using the Revel framework (don't use it!). I worked for a year and a half just adding new functionality. There were around 5k users without a lot of activity, so Go's concurrency was not necessary at all. Due to the project's success, my boss was confident enough to pursue a new product he had in mind: a chat app with translation abilities.

At this point a friend of mine (unknown to me at the time) was hired. We shared a lot of ideas and started leveling up as a team, not only talking about Go but trying new tools like vim too. We even attended several local gopher (Go programmer) meetups. Pro tip: I highly recommend having a programming partner.

Regarding the chat project I mentioned above, we - mid-level back end developers - were encouraged to finish the MVP in 3 months. We tried approach after approach for socket handling and ended up using XMPP. Go made changing that integration easier than expected.

After a few delays, caused by new features added arbitrarily and not enough testing, we launched the application for both iOS and Android. I learned the hard way that a team without senior supervision is a waste of time and money for any company. Spoiler: we sucked big time.

We reached 300K users before the services became unresponsive, so we brought in expertise from top Go programmers. After a week of intensive pair programming sessions with them - and a month of rewrites - we handled that number of users with only 4 m4.xlarge instances (if my memory serves me right). Almost 20k concurrent users/connections per server were active all day, and we didn't have downtime after that.

The best advice I got from that experience was:

  • Your code should read like a book:
    There should be an index page, and every package/component should be organized for readability.
  • Response times longer than 500ms are wasted money.
  • You need eyes on your running code:
    Logging and monitoring tools are a must.
  • Use worker pools:
    Database connections are scarce, so you need to limit users per instance per database connection.
  • Always stress test before going live.

The good

So far I have enjoyed being productive in Go. The API stability promise is a real deal: I never had issues after updating versions, locally or in cloud instances. That is real developer experience, and I miss it so much right now working with React Native (yeah, yeah, I know mobile development is another beast).

Another easy task was CI integration. The magic is hidden in Docker (written in Go too): just pull an image and you are ready to develop locally, trusting your code will behave the same when deployed. I spent a lot of time writing the simplest Docker image possible to handle most of my projects' needs, and it was a learning journey because I got to know more about Linux distros and bare metal stuff.

Another good point was writing tests at almost all levels, from doing TDD to having integration tests and some stress + load tests. I was coverage-sick at the time because I wanted close to 100% coverage whenever possible - mostly because Go tests are easy to write.

And because Go files are easy to read, I was confident forking libraries. Sometimes I just copied the helper parts into my codebase because abstractions I had previously made already covered my needs. After all, a little copying is better than a little dependency.

The bad

It's impossible to live without trade-offs, and Go has its weaknesses too. You can be fooled into applying MVC everywhere or just copying other popular languages. This happened a lot with Java developers trying to handle all possible cases with large interfaces, or creating a lot of directories to imitate Java's reverse-domain directory structure. If you are starting out, I recommend Go kit. It has a lot of info about what you need for a micro-service (or an elegant monolith).

Go can also feel verbose: you can get tired reading through a package because file size doesn't correspond to the amount of real logic, and you may feel the impulse to abstract the package away. You end up with a lot of boilerplate code - above all, error checking.

Third party integration can be a pain if you need something from MS. I remember spending a lot of time trying to fix a package for reading and writing Excel files, and in the end there was no functional outcome because reverse engineering can get really hard.

Don't ever try Revel. I don't want to be harsh on the people working on it, but it is not good for any project in the medium or long run. There is a lot of info and discussion about why it is considered harmful, so you have been warned!

Vendoring was hard at the start because only go get was around to handle third party packages. After a while, glide arose to handle that problem, and after a lot of discussion the core team added Go modules to handle code dependencies between large code bases. I can say it was difficult, but after Go modules were introduced it got easier.
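With modules, dependency management boils down to a go.mod file at the repository root, created with `go mod init` and kept in sync with `go mod tidy`. A minimal example (module path and versions are illustrative):

```
module example.com/petfinder

go 1.16

require (
	github.com/spf13/cobra v1.1.3
)
```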

Final thoughts

I still recommend Go for a well defined set of products. If you are a startup, you can get fast POC-to-MVP times - I suggest an expert Go developer guiding a junior team. Or if you need to handle high concurrency and don't know a better-suited language, it can be easier to do with Go, though after a year or two you may end up changing to another language. For example, Discord started with Go and, after they hit performance limits, did rewrites in Rust.

It's possible to build nice CLI tools too; cobra excels at this. I frequently see new tools made with Go. I myself created some CLI tools for experimental purposes.

Go helped me reach communities and get to know interesting people who, besides doing Go, were passionate about learning new things. Some of them were already learning Rust and Elixir at the time, so I was curious and asked them a lot too. I hope this post motivates you to keep learning and growing your skill set, as Go motivated me to do the same.