February Photoblogging Challenge. Day 5: Pets

Unfortunately no pets allowed on this walk.

February Photoblogging Challenge. Day 4: Layers.

This is a photo of the geological kind: an exposed cliff face off the coast of Sunderland Bay.

February Photoblogging Challenge. Day 3: Comfort. POV shot of me sitting in a comfortable chair while holidaying in Phillip Island this week.

February Photoblogging Challenge. Day 2: Morning Beverage.

Micro.blog Photo Challenge. Day 1: “Close Up”. Not the best close up but any closer and they would have flown away.

Afternoon in the gardens just south of the Shrine Of Remembrance in Melbourne, taken yesterday.

Well, that’s a little terrifying. Turns out that coronavirus was mutating during Victoria’s second wave last year along similar lines to those variants we’re seeing now. Had it not gone extinct from the lock-down, this “Australian variant” could have been more infectious and possibly more resistant to vaccines.

I’m certainly glad the state government decided to go for zero community spread. As hard as the lock-downs were, I could imagine maintaining suppression of this variant would have been even more difficult, and would have just worn everyone down, not to mention all the additional sickness and death we would have experienced.

I gave Glitch a go for the first time today. I was sceptical that I would find any value in it, but it turns out to be a great environment for whipping up small apps really quickly. It only took a few hours to build a simple Finska Scorecard using Stimulus.

I just spent an hour and a half building what I thought was a simplified version of the R-Tree algorithm, until I came across a test case that completely breaks it. The lesson: don’t take shortcuts and just learn the well-known algorithm first.

In the summer months, if the outside air temperature gets warmer than 40°C, I treat myself to an iced latte as my afternoon beverage, in lieu of a regular coffee. Well, it ticked over 40°C about half an hour ago, so…

Iced latte

I was having a discussion with people at work about the approval process of the Covid-19 vaccines here in Australia.

One person was raising questions as to why the Therapeutic Goods Administration, the agency responsible for approving drugs and treatments, was taking its time with approving the vaccine when a number of other countries have already started rolling it out (he had good enough reasons for asking).

There was a bit of a back and forth about the merits of speeding up the approval process vs. giving the TGA time to do a full approval, since the virus has been suppressed moderately successfully here.

The conversation ended about 10 minutes later with the approval of the Pfizer vaccine.

A Feature Request for Twitter, Free of Charge

It looks like Twitter’s product design team needs some help. Their recent ideas, “inspired” by the features of other companies like Snap (Stories) and Clubhouse (Audio Clips), don’t seem to be setting the world on fire. Well, here’s an idea for them to pursue, free of charge.

A lot of people I follow seem to use Twitter threads for long-form writing. This might be intentional, or it might be because they had a fleeting thought that they developed on the spot. But the end result is a single piece of writing, quantised over a series of tweets, and assembled as a thread.

I’d argue that consuming long-form writing this way is suboptimal. It’s doable: I can read the thread as it’s currently presented in the Twitter app and website. But it would also be nice to read it as a single web page, complete with a link that can be used to reference the thread as a whole.

Now, I don’t see these writers changing their behaviour any time soon. It’s obvious that people see value in producing content this way, otherwise they would do something else. But it’s because of this value that Twitter should lean into this user behaviour, and make it easier to consume these threads as blog posts. The way they should do this is:

  • Offer the author the choice to publish the thread on a single web page, complete with a permalink URL that can be shared, once they have finished writing it. This could be on demand as they’re composing the thread, or it can be done automatically once the thread reaches a certain size.
  • Provide the link to this web page on the first tweet of the thread. The reader can follow the link to consume the thread on a single page, or can use the link to reference the thread as a whole.

I know that this is possible now, with things like the Thread Reader app, but there are some benefits to Twitter adding first-party support for this. The first is that it keeps users on their “property”, especially if they add this feature to the app as well as the Twitter website. This neutralises the concern of sending the author or reader to another site to publish or consume their content. That feeds into the second benefit: it elevates Twitter as a platform for long-form writing, in addition to microblogging. If their content can be enjoyed in a nicer reading experience, more writers would use Twitter for this form of content, keeping readers and writers on Twitter. The user benefits, the publisher benefits, and Twitter benefits. Win-win-win all round.

So there you are, Twitter, the next feature for the product backlog. It could be that I’m the only one who wants this, but I personally see more value in this than the other pie-in-the-sky endeavours that Twitter is pursuing.

One final thing: I’m a big proponent of the open web and owning your own content, so I don’t endorse this as a way to publish your work. I’m coming at this as a reader of those that choose to use Twitter this way. Just because they’re OK with Twitter owning their content this way doesn’t mean I should have a less-than-adequate reading experience.

Adding Blog Posts to Day One using RSS

Prior to joining Micro.blog, I had a journal in Day One, which was the sole destination for all my personal writing. I still have the journal, mainly for stuff that I keep to myself, but since starting the blog, I’ve always wondered how I could get my posts in there as well. It would be nice to collect everything I’ve written in a single place. In fact, there was a time I was considering building something that used Day One’s email-to-entry feature, just so I could achieve this.

I’ve since discovered, after reading this blog post, that Day One actually has an IFTTT integration. This means it’s possible to set up an applet that takes new entries from an RSS feed and adds them as entries in a Day One journal.

I decided to give this a go, and it was quite simple to set up. I’m using the blog’s RSS feed as the source, and a new “Blog Journal” as the destination for new entries. Setting up the integration with Day One was straightforward, although I had to make sure that the encryption mechanism of the new journal was “Standard” instead of “End-to-end”. This is slightly less secure, but everything in that journal is going to be public anyway, so I’m not too concerned about that.

The Day One integration allows you to select the journal, any tags, and how the content is to look. This follows something resembling a template and allows the use of placeholders to select elements of the post, like the title and the body. The integration also allows you to specify the location and entry image, although I ended up leaving those blank. In fact, I ran into trouble trying to set the entry image when a post didn’t have one: journal entries were being created with a generic IFTTT 404 image instead.

So far it’s been working well. I’ve only been writing short posts without a title — this will be the first long post with a title — but they’ve been showing up in Day One without any issues. There are still some unknowns about this integration. For example, I don’t know how images will work. I would hope that, even though they’re links, Day One will handle them properly if I wanted to do something like make a physical book. It’s likely I’ll need to make a few tweaks before this is perfect.

But all-in-all, I’m pleased with this setup. It’s nice seeing everything I write show up in a single place now. In fact, I’m wondering if there are other things this integration could be useful for, now that I know that all that needs doing is setting up an RSS feed.

It’s only just now that I realised I no longer need to brace myself whenever I see the headline “The President is tweeting”.

A Simple Source IP Address Filter in Go

I’ve found that it’s occasionally useful to have something that allows through, or blocks, requests to your web application based on the source IP address. There are a number of reasons why you may want to do this: maybe you’d like to put something online that only you should have access to, or maybe you’re building something that is publicly available, but certain endpoints should only be accessible to certain machines for security or privacy reasons. For me, the motivation was something I had built that was not quite ready to share with the outside world.

Either way, a simple IP Address filter might be a useful thing to keep in your toolkit. This article shows you how to build one.

A Pattern For Middleware

The filter will be implemented as middleware for a Go web-app. There are a few ways to build middleware in Go, but the pattern that I prefer is to implement it as a function that takes an upstream handler as an argument, and returns a new handler which wraps it:

func SourceIPFilter(upstreamHandler http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		upstreamHandler.ServeHTTP(w, r)
	})
}

Applying the middleware to a service handler is as simple as passing the handler itself to this new function:

func main() {
	var serviceHandler http.Handler = newServiceHandler()
	
	http.ListenAndServe(":8080", SourceIPFilter(serviceHandler))
}

Because this uses types defined in the http package, this way of building middleware provides the maximum level of flexibility available. This is true even when using a framework like Gin, as most of these frameworks have a way of allowing the use of Go’s standard handler types.

Getting The Source IP Of A Request

The first thing the handler needs to do is get the origin IP address of the request. This is a bit more involved than it might first appear, so it’s a good idea to do this in a separate function.

func requestSourceIp(req *http.Request) (string, error) {
	// TODO
}

The simplest case is getting the IP address of the client. This is available to us from the RemoteAddr field of the http.Request struct. The value includes the port as well as the IP, so we can use net.SplitHostPort to discard it:

func requestSourceIp(req *http.Request) (string, error) {
	host, _, err := net.SplitHostPort(req.RemoteAddr)
	if err != nil {
		return "", err
	}
	
	return host, nil
}

This works if the Go application is accepting connections directly. However, it begins to break down as soon as intermediaries between the client and the Go application appear. Some examples of these might be:

  • Reverse proxies, like Apache or Nginx, that will accept external connections and forward them to the Go application.
  • Load balancers1, which will route inbound connections amongst multiple instances of the application, and
  • CDNs, which will provide caching services and DDoS protection.

Each of these services may accept the incoming connection itself and will connect to your service using a separate connection, with a different source IP address. So using the RemoteAddr field won’t work here.

Fortunately, many of these proxies provide the source IP address as headers on the request. The standard approach is to use the Forwarded header. This header contains the details of the request source as seen by the intermediaries, with the first one being the details of the original client. Each of these “forwarding elements” contains attributes of the forwarded request in the form of key-value pairs, such as the IP address that the request was forwarded for (for), whether the forwarded request was made using HTTP or HTTPS (proto), and the hostname of the forwarded request (host).
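For example, a request that passed through two intermediaries might arrive with a header like this (the addresses here are illustrative documentation addresses, not from the original post):

```
Forwarded: for=203.0.113.43;proto=https;host=example.com, for=198.51.100.60
```

The first element, for=203.0.113.43, describes the original client; the second was appended by the next hop.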

We’re only interested in the source IP address of the first forwarded request, so we will need to get the value of the for key-value pair of the first element of the Forwarded header value:

func requestSourceIp(req *http.Request) (string, error) {
	// Check the Forwarded header
	forwardedHeader := req.Header.Get("Forwarded")
	if forwardedHeader != "" {
		parts := strings.Split(forwardedHeader, ",")
		firstPart := strings.TrimSpace(parts[0])
		subParts := strings.Split(firstPart, ";")
		for _, part := range subParts {
			normalisedPart := strings.ToLower(strings.TrimSpace(part))
			if strings.HasPrefix(normalisedPart, "for=") {
				return normalisedPart[4:], nil
			}
		}
	}

	// Fall back to the remote address of the request
	host, _, err := net.SplitHostPort(req.RemoteAddr)
	if err != nil {
		return "", err
	}
	
	return host, nil
}

Note that the code above checks for the presence of the header before falling back to the source IP address of the connection. This is because the presence of the header is an indication that the request was proxied.

This should work for modern proxies. However, the Forwarded header is a relatively recent addition, and prior to that, the de facto standard was to set the X-Forwarded-For header. This is a lot simpler than the Forwarded header; it only contains a list of IP addresses separated by commas. Similar to the Forwarded header, the first one is the IP address of the original client.

func requestSourceIp(req *http.Request) (string, error) {
	// Check the Forward header
	forwardedHeader := req.Header.Get("Forwarded")
	if forwardedHeader != "" {
		parts := strings.Split(forwardedHeader, ",")
		firstPart := strings.TrimSpace(parts[0])
		subParts := strings.Split(firstPart, ";")
		for _, part := range subParts {
			normalisedPart := strings.ToLower(strings.TrimSpace(part))
			if strings.HasPrefix(normalisedPart, "for=") {
				return normalisedPart[4:], nil
			}
		}
	}

	// Check the X-Forwarded-For header	
	xForwardedForHeader := req.Header.Get("X-Forwarded-For")
	if xForwardedForHeader != "" {
		parts := strings.Split(xForwardedForHeader, ",")
		firstPart := strings.TrimSpace(parts[0])
		return firstPart, nil
	}	
	
	// Check on the request
	host, _, err := net.SplitHostPort(req.RemoteAddr)
	if err != nil {
		return "", err
	}
	
	return host, nil
}

Building Out the Filter

We now have something that will find the source IP address of a request, whether or not it has been proxied. This can be added to our middleware as a function call.

func SourceIPFilter(upstreamHandler http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		sourceIp, err := requestSourceIp(r)
		if err != nil {
			http.Error(w, "Internal server error", http.StatusInternalServerError)
			return
		}
		
		// The check against sourceIp is added in the next step
		_ = sourceIp
		
		upstreamHandler.ServeHTTP(w, r)
	})
}

The final piece can now be added, which is to configure the IP address that is permitted to access the service. This version deals with a single IP address that is passed into the middleware function as a parameter, but it should be relatively easy to extend this to deal with a set of permitted IP addresses:

func SourceIPFilter(allowedIpAddress string, upstreamHandler http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		sourceIp, err := requestSourceIp(r)
		if err != nil {
			// Unfortunately it's possible to get an error
			http.Error(w, "Internal server error", http.StatusInternalServerError)
			return
		}
		
		if sourceIp != allowedIpAddress {
		// We could return 403 Forbidden here, but I prefer to return 404 Not Found
		// to plausibly deny that there is anything here.
			http.Error(w, "Not found", http.StatusNotFound)
			return
		}
		
		upstreamHandler.ServeHTTP(w, r)
	})
}

The IP address that is allowed through would be the public one that we’re using. For those on regular ISP plans without a fixed public IP address, it can usually be found by running a web search with the query “what is my IP address”.

It might also be a good idea to make the permitted IP address configurable by storing it in an environment variable. That way, when the IP address changes, there’s no need to do any code changes.

Doing this will also make it easy to disable the filter based on the environment variable’s value. For example, setting the environment variable to the empty string can indicate that the filter should be bypassed, allowing all public traffic access to the resource.

func SourceIPFilter(allowedIpAddress string, upstreamHandler http.Handler) http.Handler {
	if allowedIpAddress == "" {
		// We don't need the filter, so simply return the upstream handler
		return upstreamHandler
	}
	
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		sourceIp, err := requestSourceIp(r)
		if err != nil {
			// Unfortunately it's possible to get an error
			http.Error(w, "Internal server error", http.StatusInternalServerError)
			return
		}
		
		if sourceIp != allowedIpAddress {
		// We could return 403 Forbidden here, but I prefer to return 404 Not Found
		// to plausibly deny that there is anything here.
			http.Error(w, "Not found", http.StatusNotFound)
			return
		}
		
		upstreamHandler.ServeHTTP(w, r)
	})
}

func main() {
	var serviceHandler http.Handler = newServiceHandler()
	
	allowedIpAddress := os.Getenv("ALLOWED_IP_ADDRESS")
	http.ListenAndServe(":8080", SourceIPFilter(allowedIpAddress, serviceHandler))
}

That’s pretty much it. You now have a simple IP address filter that can be used to protect access to handlers based on the source IP address, even if requests are passed through load balancers or reverse proxies. I’ve found that this is one of those useful utilities that can be kept in an internal library, and pulled out when necessary. I hope you find this useful as well.


  1. This will depend on the actual load balancer that you use, and what layer of the OSI network stack it operates at. If you’re deploying your application to AWS, for example, Application Load Balancers will terminate the connection, which will change the source IP address, whereas Network Load Balancers will not. ↩︎

Published two tracks last night: Taxonomy and Hazard. Both of these were made during the depths of the Melbourne Lockdown 2.0, although they’re not about the pandemic. Also, this whole self-promotion thing is new to me so apologies if this post looks a bit weird.

It’s a little bit shocking, the minute you move to a position with a bit more leadership responsibility, how quickly your calendar fills with meeting requests.

That feeling you get when you see a class or struct defined in another library you’re using, and you wish you could change it, but doing so would triple the time it takes to complete your task.

Working on an album cover for some music I’m hoping to publish soon. Who knew that one of the skills required for publishing music online is graphic design?

Apart from a small cluster that was contained a week ago, there’ve been zero locally acquired cases of Covid-19 in Victoria. That means gyms have dropped the need to book in advance. I kind of miss it though: being required to book forced me to actually commit to a session.