Bocce in Fitzroy Gardens, Melbourne

I’m interested in starting a daily log, similar to what Dave Winer and a few others I follow on Micro.blog are doing. I tried starting one using Little Outliner a few months ago, but I never got around to keeping it up to date. I’m hoping for something that is a bit more fit for purpose: as much as outliners are useful — I use them for keeping notes for work — I’m not sure they’re the best tool for me for daily logging. There are a few other features that I’m looking for, such as the ability to record private entries, and maybe some very basic to-do features.

So, I spent some time today kicking off a new Buffalo project to build something that I think might work for me. I managed to get something running privately, although it’s pretty bare bones at the moment. We’ll see down the line if I’ll actually use it.

I subscribe to a few blogs that are well known for posting daily. Today, one of them is late. “Why haven’t you posted today?”, I imagined asking them, slightly perturbed.

Then, I imagined them smiling and answering back: “Why haven’t you?”

I took the plunge and purchased Logic Pro today. Although the purchase itself is not super interesting, the fact that I’m putting that much money into a piece of software that’s only available for macOS is an indication that I’m now all in on Apple’s desktop platform.

This is something that I’ve been quite tentative about for a while. I first came to macOS in 2017, after using Linux for my home setup. Linux was great for software development, but I wanted to get back into music production, something that Linux is not known for. I had never really used a Mac at that stage, but I knew that macOS was a decent platform for both activities, so moving to it was enticing. It also helped that I had learnt about the user experience of macOS from all the Apple tech podcasts that I listen to, like ATP and The Talk Show.

But I tend to hedge these sorts of moves, so even though I started using a Mac Mini, I didn’t commit to purchasing any expensive software (say, anything more than $30-50) that only runs on Apple’s platforms. I had my doubts that I’d actually go back to Linux (I knew I wasn’t going back to Windows, though); and after using a Linux laptop for a couple of months, I remembered how far Linux still has to go to provide a great user experience. Nevertheless, I always felt that I needed an exit.

Well, that door is now closed. Purchasing the Nova text editor last month was probably the first indication that I’m intending to stay, and the purchase of Logic Pro today seems to have solidified it.

So I guess now I’m officially a committed Mac user.

Following on from the last post, I also had the opportunity to spot a few trains. Here’s one on its way to Bendigo, taken just outside Woodend.

Christmas Eve spent hiking around Macedon and Trentham in regional Victoria.

Photos: two shots of the Macedon walking track, and an old telegraph pole alongside the walking track in Trentham.

Today is the last day of work for the year, and although I haven’t got much planned for the break, it would be good to have some time off.

I wonder if Tim Berners-Lee ever imagined that we would be using the web as a replacement for things like phones, radios, and televisions.

A Tour of Domain Records for Email

There are growing concerns, in the circles that I travel in, about the use of “free” email services like Gmail, which lock you into a service that may not have your best interests in mind. The remedy for this is to use an email address with a domain that you control. Setting one up can seem a little daunting for those who haven’t done it before: thanks to the wild west that is email, a hodgepodge of technologies has grown up in an attempt to curb spam and impersonation. The result is a series of DNS records that have to be set up for each email domain, each of which needs to be done correctly, lest your emails get rejected or your domain gets blacklisted.

I say this as someone who has experienced this myself: not only for my own domain, but also for a bunch of other domains that I was responsible for. As such, I had to have some understanding of what each of these record types actually does. After doing this a few times, along with reading up on what these records are actually for, it felt right to put together this tour for anyone else looking to do the same.

This is by no means a complete how-to for setting up a new email domain, but rather a guide to the types of domain records that you will be dealing with. It is assumed that you, dear reader, are setting up a new email domain for use with an online email service, such as FastMail, ProtonMail or Hey.com. These services should have the tools and documentation detailing the specifics of how to set up a domain for their service. Each service might have slightly different requirements, so it’s always a good idea to read their instructions carefully, and give it a test before sharing your new email address around. Setting up your own mail server falls beyond the scope of this post, although the general concepts will probably still apply.

So, without further ado, here are the domain records you’ll need for email.

MX: What Servers Should Email For Me Be Sent To?

The MX, or Mail Exchange, record is the simplest to understand. It identifies the mail servers that should be used to receive email addressed to you. Sending an email to someone@example.com, for example, will cause the sending mail server to open a connection to one of the hosts listed in the MX records for the example.com domain, in order to transmit the message.

Mail exchange hosts are specified by adding a domain record with the MX type1. The value consists of a number indicating a priority, followed by a hostname, e.g. 10 smtp1.mailserver.example.com. It’s possible to list multiple MX records for a single domain, with each one having a different priority. In general, an MX record with a lower priority value will take precedence over one with a higher one.
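
As an illustration, a domain accepting mail on two hosts might publish records like the following (the hostnames here are hypothetical; your email service will tell you the actual values to use):

; hypothetical records: your email service will supply the real hostnames
example.com.    IN  MX  10  mx1.mailservice.example.net.
example.com.    IN  MX  20  mx2.mailservice.example.net.

With records like these, sending servers would try mx1 first, falling back to mx2 if it is unreachable.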

SPF: What Servers Are Allowed To Send Your Emails?

SPF, or Sender Policy Framework, is a mechanism for declaring which mail servers are permitted to send emails for a particular domain. It is used by remote mail servers receiving emails from your domain to confirm that they are coming from a mail server that you know about and have authorised to send emails on your behalf. The policy part refers to the rules governing this, and it’s up to the domain owner to specify what they are. The rules themselves are quite expansive; they can include things like how to handle email that did not come from a server that you control (as would be the case for email sent from spambots pretending to be from you), along with referencing policies defined in another domain.

An SPF policy is declared for a domain using a TXT record type2. The value will be something that starts with v=spf1, followed by the actual policy, specified in a simple domain-specific language. Your mail service will usually indicate what you should use for the SPF policy, and there will usually be one SPF record for each email domain.
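
For example, a hypothetical SPF record that defers to the policy published by your email provider, and treats mail from anywhere else with suspicion, might look something like this (the included domain is a placeholder; use whatever your provider documents):

; hypothetical record: the include: domain stands in for your provider's published SPF policy
example.com.    IN  TXT  "v=spf1 include:spf.mailservice.example.net ~all"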

SPF by itself does not give total protection against those attempting to impersonate a sender from your domain. It will only verify that the email “envelope” is from the domain that you control. But a single email may be transmitted via multiple servers, with each of these “hops” using a new envelope and thereby being subject to a different SPF policy. We need one more thing to verify the original message end-to-end.

DKIM: Is The Email Really From You?

DKIM, or DomainKeys Identified Mail, verifies that the actual message itself is really from you. Once the email reaches its final destination, the receiving mail server can confirm that the message is from you using the DKIM records stored against your domain. The specifics of how this is done are beyond the scope of this post, but the short version is that a cryptographic signature is created when the mail is sent, and the receiving mail service can use the public keys stored in the DKIM records to verify that the signature is correct.

DKIM records can be either TXT or CNAME records. They will usually have names like <something>._domainkey; and either a large value beginning with v=DKIM1 if they are TXT records, or a reference to another domain if they are CNAME records. You may get two or three DKIM records for each domain that you register, each one with a slightly different name.
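
As a purely hypothetical illustration, the two forms might look something like this (the selector names, hostnames, and the truncated key are all placeholders; your email service will supply the real values):

; hypothetical records: selectors, hostnames, and the (truncated) key are placeholders
s1._domainkey.example.com.  IN  TXT    "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3..."
s2._domainkey.example.com.  IN  CNAME  s2.dkim.mailservice.example.net.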


It should be noted that this is not the complete list of domain records available for dealing with email. There is also DMARC, which builds upon SPF and DKIM, and is available if required. I personally have not had to set up a DMARC record for the domains that I have dealt with, so there is not much I can say about it.

I hope you found this post useful. General information about the various technologies required to set up a domain for email delivery was hard to come by, and I wish a post like this had been available to me when I was first learning how to do this.


  1. It’s interesting to be reminded of how old email is by looking at when these DNS record types were defined. MX record types were proposed in 1987, and are actually a replacement for the MD and MF record types, which are even older and were part of the original DNS specification. ↩︎

  2. SPF was originally going to have its own DNS record type as well, but this was abandoned due to lack of support. ↩︎

Following up from yesterday’s post about my non-reading habits, a colleague of mine shared this article about the topic. It looks like the condition I have is tsundoku, which is “a Japanese term used to describe a person who owns a lot of unread literature.”

Dealing With Errors in Go

There’s a lot to like about Go, and I’ll happily admit to being a huge fan of the language. But it would be dishonest of me not to acknowledge that some aspects of programming in Go result in more code than the equivalent in another language. Probably the best example of this is how Go programs deal with errors.

For those unfamiliar with how errors work in Go, the short version is that they are just like any other type that you deal with — like strings and integers — and they have no special control structure to handle them in any specific way. This means that, unlike languages that have exceptions, there is nothing like try/catch blocks; you are left with the standard control statements that are available.

The result: you will fall into the practice of writing a lot of code that looks like this:

func doStuff() error {
	firstResult, err := doFirstThing()
	if err != nil {
		return err
	}
	
	secondResult, err := doSecondThing(firstResult)
	if err != nil {
		return err
	}
	
	lastResult, err := doOneLastThing(secondResult)
	if err != nil {
		return err
	}
	
	return processResult(lastResult)
}

This is not bad in and of itself, but it does have some shortcomings over the equivalent in languages that use exceptions. The if block after every call to a “do function”1 adds a bit of noise, making it harder to separate the happy path from the error handling. And although the code is technically correct — the errors are being handled appropriately — you may come to wonder whether this could be done in a nicer way.

This post explores some alternatives for dealing with Go errors. It is by no means exhaustive; it’s just a few patterns I’ve found work for me. Nor is it a suggestion that you should use any of these alternatives at all. Each use case is different; an alternative might be a better solution in one case, but dramatically worse in another. Everything in coding, much like life itself, is a tradeoff, and I would suggest being mindful of the potential costs of adopting any one of these options alongside the benefits they may bring.

With that said, let’s look at some alternatives.

Option 0: Keep It As It Is

This is probably not an option that you’d like to hear, but it is one worth considering if the function is small enough, and you don’t have the ability to change the functions you are calling. As ugly as the code looks, it does have some advantages:

  • It makes the erroneous path crystal clear: it indicates that any one of these operations can fail with an error, and that it is the job of the function to handle it in some way, even if that is simply returning it as its own error.
  • It makes it reasonably easy to move things around or change how the error is to be handled: if doSecondThing returns an error that no longer blocks the call to doOneLastThing, you only need to adjust one if statement. This is harder to do in any generic solution you may adopt.
  • It provides an incentive to keep functions small: for example, if there is a need to expand the number of operations from 4 to 30, and each operation returns an error that needs to be handled, that would impart enough pressure to refactor the code and break it up across multiple functions.

So I’d recommend considering this as a viable option first, if only briefly. Spending effort on a solution that may look neater can actually have the opposite effect of making the code less understandable, while also making it harder to maintain.

Option 1: Don’t Handle The Error

No, don’t go. Let me explain.

This is not a viable option if you are writing code intended for a production setting. Arguably, if something can fail in some way, you should handle it. But I’m listing this option here as an alternative to the if statements above when one of the following scenarios is true:

  • The functions don’t return errors: they may be required to return one in order to satisfy an interface, but if it is clear from the public documentation that they never do, then there is no real need to handle the error. Some care is needed in this case, though: public documentation is not the same as an API contract, and depending on who’s maintaining these functions, it is very possible that, down the line, the implementor takes advantage of the error return type and starts returning them.
  • The error can be safely ignored: this might be test code, or code that is written once and then thrown away. In these cases, it may not be worth your while adding support for error handling if it provides no real value.

Adopting this option may make the function look like the following. It may not be possible to simplify further unless you can actually change the return type of the “do functions”; in which case, there are no errors that need handling.

func doStuff() error {
	firstResult, _ := doFirstThing()
	secondResult, _ := doSecondThing(firstResult)
	lastResult, _ := doOneLastThing(secondResult)
	
	// You can probably ignore this error as well, but it's simpler to just return it
	return processResult(lastResult)
}

Option 2: Use Panic

The second option is to use panic to throw the error, and handle it in a single defer and recover handler.

A first draft of this solution may look something like the following:

func doStuff() (err error) {
	defer func() {
		// recover() returns nil when there is no panic to handle
		if r := recover(); r != nil {
			if e, isErr := r.(error); isErr {
				err = e
			} else {
				panic(r)
			}
		}
	}()
	
	firstResult, err := doFirstThing()
	if err != nil {
		panic(err)
	}
	
	secondResult, err := doSecondThing(firstResult)
	if err != nil {
		panic(err)
	}
	
	lastResult, err := doOneLastThing(secondResult)
	if err != nil {
		panic(err)
	}
	
	return processResult(lastResult)	
}

which is not much better than what we had to begin with. However, if we can modify the “do functions” themselves, we can replace them with versions that panic instead of returning an error. These new “must do functions” — so named as the must prefix is used to indicate that they will panic if things go wrong — can bring out the happy path quite clearly:

func doStuff() (err error) {
	defer func() {
		if r := recover(); r != nil {
			if e, isErr := r.(error); isErr {
				err = e
			} else {
				panic(r)
			}
		}
	}()
	
	firstResult := mustDoFirstThing()
	secondResult := mustDoSecondThing(firstResult)
	lastResult := mustDoOneLastThing(secondResult)
	mustProcessResult(lastResult)
		
	return 
}

This can be improved further if the “do functions” actually return values of the same type. In that case, we don’t need to replace the functions at all. Instead, we can build a function that simply takes the result and error, and either returns the result or panics, depending on whether an error was returned2:

type doResult struct { ... }

func must(res doResult, err error) doResult {
	if err != nil {
		panic(err)
	}
	
	return res
}

func doStuff() (err error) {
	defer func() {
		if r := recover(); r != nil {
			if e, isErr := r.(error); isErr {
				err = e
			} else {
				panic(r)
			}
		}
	}()
	
	firstResult := must(doFirstThing())
	secondResult := must(doSecondThing(firstResult))
	lastResult := must(doOneLastThing(secondResult))
	
	if err := processResult(lastResult); err != nil {
		panic(err)
	}
		
	return 
}

This will work, but I’d argue it’s not the best use of panic. Go panics should be reserved for unexpected errors, rather than used as a poor substitute for exceptions. Instead, I suggest the following option, which provides a nicer API while maintaining the principle of treating errors as values.

Option 3: Encapsulate The Error Handling Using A Dedicated Type

This is probably my preferred option of the three. Basically, the idea is to track the state of the error using a dedicated type, and to wrap the functions that return errors within methods that don’t. The methods are defined on the type that maintains the error state, and each one checks whether an error has already been raised before invoking the wrapped function. Finally, the type offers a way to get the error, so that it can be logged or returned.

The way this looks in code would be the following:

type doOperations struct {
	err error
}

func (d *doOperations) DoFirstThing() (res doResult) {
	if d.err != nil {
		return doResult{}
	}
	
	res, d.err = doFirstThing()
	return
}

func (d *doOperations) DoSecondThing(op doResult) (res doResult) {
	if d.err != nil {
		return doResult{}
	}
	
	res, d.err = doSecondThing(op)
	return
}

// Likewise for doOneLastThing and processResult

func (d *doOperations) Err() error {
	return d.err
}

Then, our new function simply becomes:

func doStuff() error {
	ops := new(doOperations)
	
	firstResult := ops.DoFirstThing()
	secondResult := ops.DoSecondThing(firstResult)
	lastResult := ops.DoOneLastThing(secondResult)
	ops.ProcessResult(lastResult)
	
	return ops.Err()
}

This gives us the ability to write a function with the happy-path clearly shown while still maintaining best practices around error handling.

As nice as this is, the one downside is that this type is specific to the set of operations that we need to perform. If this is the only function that needs to perform these operations, then the nicer API might not be worth the additional maintenance overhead. But if you find yourself writing out this sequence of operations in a variety of different ways, it may be worth considering this approach.

To Be Continued

It’s very likely that this topic will be revisited as the Go language evolves. With the design of type parameters imminent, and discussions around adding language features to make error handling nicer in general, additional options may become possible down the line. But for now, these seem like the viable alternatives to the function-call-then-if-block pattern seen in a lot of Go code.


  1. For brevity, I’ll be referring to the functions doFirstThing, doSecondThing and doOneLastThing collectively as the “do functions”. ↩︎

  2. An example of this in the standard library is the template.Must() function. ↩︎

I don’t know if it’s just me, but I’ve got this annoying habit of seeing an article that looks interesting and deciding to… just not read it. Instead, I “save it for later”. The result: I end up never reading the article at all.

Vivaldi - My Recommended Alternative to Chrome

I’m seeing a few people on Micro.blog post about how Chrome Is Bad. Instead of replying to each one with my recommendations, I figured it would be better just to post my story here.

I became unhappy with Chrome about two years ago. I can’t remember exactly why, but I know it was because Google was doing something that I found distasteful. I was also getting concerned about how much data I was making available to Google in general. I can prove nothing, but something about using a browser offered for free by an advertising company made me feel uneasy.

So I started looking for alternatives. The type of browser I was looking for needed the following attributes:

  • It had to use the Blink rendering engine: I liked how Chrome rendered web pages.
  • It had to offer the same developer tools as Chrome: in my opinion, they are the best in the business.
  • It had to run Chrome plugins.
  • It had to be a bit more customisable than Chrome was: I was getting a little bored with Chrome’s minimalist aesthetic, and there were a few niggling things that I wanted to change.
  • It had to have my interests at heart: so no sneaky business like observing my web browsing habits.

I heard about the Vivaldi browser from an Ars Technica article and decided to give it a try. The ambition of the project intrigued me: building a browser based on Chromium, but with a level of customisation that can make the browser yours. I use a browser every day, so the ability to tailor the experience to my needs was appealing. First impressions were mixed: the UI is not as polished as Chrome’s, and seeing various controls everywhere was quite a shock compared to the minimalism that Chrome offered. But I eventually set it as my default browser, and over time I grew to really like it.

I’m now using Vivaldi on all my machines, and on my Android phone as well. It feels just like Chrome, but offers so much more. I’ve come to appreciate the nice little utilities that the developers have added, like the note-taking panel, which comes in really handy for writing Jira comments that survive when Jira decides to crash. It’s not as stable as Chrome: it feels a tad slower, and it does occasionally crash (this is one of those browsers that needs to be closed at the end of the day). But other than that, I’m a huge fan.

So for those looking for an alternative to Chrome that feels a lot like Chrome, I recommend giving Vivaldi a try.

Revisiting the decision to build a CMS

It’s been almost a month since I wrote about my decision to write a CMS for a blog that I was planning. I figured it might be time for an update.

In short, and for the second time this year, I’ve come to the conclusion that maintaining a CMS is not a good use of my time. The largest issue was the amount of effort that would have been needed to work on the things that don’t relate to content, such as styling. I’m not a web designer, so building the style from scratch would have taken a fair amount of time, which would have eaten into the time I would have spent actually writing content. A close second was the need to add features that the CMS was missing, like the ability to add extra pages, and an RSS feed. If I were to do this properly, without taking any shortcuts, this too would have resulted in less time spent on content.

The final issue is that the things I was trying to optimise for turned out not to be as big a deal as I first thought, such as the “theoretical ability to blog from anywhere”. I tried using the CMS on the iPad a few times, and although it worked, the writing experience was far from ideal, and making it so would have meant more effort put into the CMS. In addition, I’ve discovered that I prefer working on the blog from my desktop, as that’s where I’m most likely to be in the right state of mind to do so. Since my desktop already has tools like Git, I already had what I needed to theoretically blog from anywhere, and it was no longer necessary to recreate this capability within the CMS itself.

So I’ve switched to a statically generated Hugo site served using GitHub Pages. I’m writing blog posts using Nova, which is actually a pretty decent tool for writing prose. To deploy, I simply generate the site using Hugo’s command line tool, and commit it all to Git. GitHub Pages does the rest.

After working this way for a week and a half, it turns out that I actually prefer the simplicity of this approach. At this stage, both the CMS and the blog it would have powered have been in the works for about a month. The result is zero published posts, and it will probably not be launched at all1. Using the new approach, the new blog is public right now, and I have already written 5 posts on it within the last 11 days, which I consider a good start.

We’ll see how far I’ll go with this new approach before I consider a custom CMS again, but it’s one that has got some traction now, particularly since it has actually resulted in something. I think it’s also the approach I will adopt for new blogs going forward.


  1. This is not wholly because of the CMS: the focus of the blog would have been a bit narrower than I would have liked; but the custom CMS did have some bearing on this decision. ↩︎

It’s a shame that more and more bloggers are moving to email newsletters in lieu of RSS. Even for bloggers that don’t charge for their content, I’m seeing more blogs that encourage readers to sign up for an email newsletter, instead of providing a link to an RSS feed.

I can see why they do this. Email addresses are valuable, and sites like Stratechery and services like Substack show that it’s possible to have a viable business writing a daily newsletter for people willing to pay for it. But one thing Ben Thompson offers that Substack doesn’t is a private RSS feed for subscribers, so they can read the daily updates within a feed reader. This is how I used to consume Stratechery, before the release of the daily update podcast.

So email subscriptions make sense for those writing for a living, but I’m not sure they make sense for those maintaining a free blog. If your content is available without charge, then it makes sense to me to offer an RSS feed to those that prefer to read your content within a feed reader. The reading experience in NetNewsWire is preferable to the one offered by my email client, and I don’t need to see all the emails that I’m trying to avoid. By all means offer email subscriptions as well, but don’t require one just so that I can see your latest updates.

A Brief Look at Stimulus

Over the last several months, I’ve been doing a bit of development using Buffalo, which is a rapid web development framework in Go, similar to Ruby on Rails. Like Ruby on Rails, the front-end layer is very simple: server-side rendered HTML with a bit of jQuery augmenting the otherwise static web-pages.

After a bit of time, I wanted to add a bit of dynamic flair to the frontend, like automatically fetching and updating elements on the page. These projects were more or less small personal things that I didn’t want to spend a lot of time maintaining, so doing something dramatic like rewriting the UI in React or Vue would have been overkill. jQuery was available to me, but using it always required a bit of boilerplate to set up the bindings between the HTML and the JavaScript. Also, since Buffalo uses Webpack to produce a single, minified JavaScript file that is included on every page, it would be nice to have a mechanism to selectively apply the JavaScript logic based on attributes in the HTML itself.

I have since come across Stimulus, which seems to provide just what I was after.

A Whirlwind Tour of Stimulus

The best places to look if you’re interested in learning about Stimulus are the Stimulus handbook, or, for those that prefer a video, the one available at Drifting Ruby. But to provide some context for the rest of this post, here’s an extremely brief introduction to Stimulus.

The basic element of an application using Stimulus is the controller, which is the JavaScript aspect of your frontend. A very simple controller might look something like the following (this example was taken from the Stimulus home page):

// hello_controller.js
import { Controller } from "stimulus"
    
export default class extends Controller {
  static targets = [ "name", "output" ]
    
  greet() {
    this.outputTarget.textContent =
      `Hello, ${this.nameTarget.value}!`
  }
}

A controller can have the following things (there are more than just these two, but these are the minimum to make the controller useful):

  • Targets, which are declared using the static targets class-level attribute, and are used to reference individual DOM elements within the controller.
  • Actions, which are methods that can be attached as handlers of DOM events.

The link between the HTML and the JavaScript controller is made by adding a data-controller attribute within the HTML source, and setting it to the name of the controller:

<div data-controller="hello">
  <input data-hello-target="name" type="text">  	
  
  <button data-action="click->hello#greet">Greet</button>

  <span data-hello-target="output"></span>
</div>

If everything is set up correctly, the controllers should be automatically attached to the HTML elements with the associated data-controller annotation. Elements with data-*-target and data-action attributes will also be attached as targets and actions respectively when the controller is attached.

There is some application setup that is not included in the example above. Again, please see the handbook.
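
For a rough idea of what that involves, here is a minimal sketch, assuming manual controller registration of the hello_controller.js file from earlier (the handbook also covers a Webpack-based setup that discovers controllers automatically):

// application.js: a sketch of the setup; file paths and names here are illustrative
import { Application } from "stimulus"
import HelloController from "./hello_controller"

// start a Stimulus application and register the controller under the name "hello"
const application = Application.start()
application.register("hello", HelloController)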

Why Stimulus?

Far be it from me to suggest yet another frontend framework in an otherwise large and churning ecosystem of web frontend technologies. However, there is something about Stimulus that seems appealing for small projects that produce server-side rendered HTML. Here are the reasons it attracts me:

  1. It was written by, and is used in, Basecamp, which gives it some industry credibility and a high likelihood that it will be maintained (in fact, I believe version 2.0 was just released).

  2. It doesn’t promise the moon: it provides a mechanism for binding to HTML elements, reacting to events, and providing some mechanisms for maintaining state in the DOM, and that’s it. No navigation, no pseudo-DOM with diffing logic, no requirement for maintaining a global state with reducers, no templating: just a simple mechanism for binding to HTML elements.

  3. It plays nicely with jQuery. This is because the two seem to touch different aspects of web development: jQuery provides a nicer interface to the DOM, while Stimulus provides a way to easily bind to DOM elements declared via HTML attributes.

  4. That said, it doesn’t require jQuery. You are free to use whatever JavaScript framework that you need, or no framework at all.

  5. It maintains the relationship between HTML and JavaScript, even if the DOM is changed dynamically. For example, modifying the innerHTML to include an element with an appropriate data-controller attribute will automatically set up a new controller and bind it to the new element, all without you having to do anything yourself.

    It doesn’t matter how the HTML gets to the browser, whether it’s AJAX or front-end templates. It will also work with manual DOM manipulation, like so:

    let elem = document.createElement("div");
    elem.setAttribute('data-controller', 'hello');
    
    document.body.append(elem);
    

    This allows a dynamic UI without having to worry about making sure each element added to the DOM is appropriately decorated with the logic that you need, something that was difficult to do with jQuery.

Finally, and probably most importantly, it does not require the UI to be completely rewritten in JavaScript. In fact, it seems to be built with this use case in mind. The tag-line on the site, A modest JavaScript framework for the HTML you already have, is true to its word.

So, if you have a web-app with server-side rendering, and you need something a bit more (but not too much more) than what jQuery or native DOM provides, this JavaScript framework might be worth a look.

Some uninformed thoughts about Salesforce acquiring Slack

John Gruber raised an interesting point about the future of Slack after being purchased by Salesforce:

First, my take presupposes that the point of Slack is to be a genuinely good service and experience. […] To succeed by appealing to people who care about quality. Slack, as a public company, has been under immense pressure to do whatever it takes to make its stock price go up in the face of competition from Microsoft’s Teams.

[…]

Slack, it seems to me, has been pulled apart. What they ought to be entirely focused on is making Slack great in Slack-like ways. Perhaps Salesforce sees that Slack gives them an offering competitive to Teams, and if they just let Slack be Slack, their offering will be better — be designed for users, better integrated for developers.

When I first heard the rumour that Salesforce was buying Slack, I really had no idea why they would. The only similarity between the markets the two operate in is that they both sell things businesses buy, and I saw no points of synergy between the two products that would make this acquisition worth it.

I’m starting to come around to the thinking that the acquisition is not about integrating the two products, at least not to the degree I was fearing. I think Gruber’s line of thinking is the correct one: Salesforce recognises that it’s in their interest to act as Slack’s benefactor and ensure that Slack can continue to build a good product. Given that Salesforce has bought Tableau and Heroku and more or less left them alone, there’s evidence that the company can do this.

As to what Salesforce gets out of it, Jason Calacanis raises a few good reasons in his Emergency Pod on the topic around markets and competition. Rather than attempt to explain them, I recommend that you take a listen and hear them from him.

I love the idea of events like Microblogvember as a way to help reinforce the habit of writing frequently. I will admit it was difficult at times, but I definitely find it beneficial to participate in these events when they come around. #mbnov

It’s interesting how the activities that would have seemed quite pedestrian before the lock-down, like going to a cafe, are quite novel after the lock-down. #mbnov

It’s surprising how quickly you can get used to wearing a mask outside when you’re required to wear one, and then get used to not wearing one outside when you’re not. #mbnov