Arne Bahlo

Christmas with COVID-19

Network Effect – Martha Wells


The first full-length novel from the Murderbot series. Loved reading it and can't wait for the next book!

I'm writing an app

Over the last two weeks I've spent quite some time on evenings and weekends working on an iOS app. I won't tell you what it is yet, though; it's way too early for that.

This is the first post in a series, covering technologies and tooling. See all posts in this series.


I'm writing a native app in Swift and UIKit.

The decision against React Native or any other third-party mobile technology was mostly because native feels better (to me), is often faster and gets new features first. Using React Native would have sped up development a lot (mostly because I know that technology and my Swift/UIKit knowledge is a little rusty), but I wanted the app to be native and I'm in no rush.

The project was initially set up with SwiftUI, as it seems to be the future of iOS development, but I quickly hit some limits. More experienced iOS developers would probably have found a way around them, but I feel more at home with UIKit, and it's a stable, battle-tested technology.


Spending most of my working time writing Go, I've come to value automatic formatting and good linting. For that I'm using SwiftFormat and SwiftLint, both with their default configuration. You can configure Xcode to run both automatically when building, which I highly recommend.
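One way to do that (a sketch, assuming both tools are installed via Homebrew) is a Run Script build phase along these lines:

```sh
# Target → Build Phases → New Run Script Phase
if which swiftformat >/dev/null; then
  swiftformat .
else
  echo "warning: SwiftFormat not installed"
fi
if which swiftlint >/dev/null; then
  swiftlint
else
  echo "warning: SwiftLint not installed"
fi
```

The guards keep the build from failing on machines that don't have the tools installed.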

For dependency management I started out with Carthage, mostly because it was the de-facto standard when I last wrote an app. I've since switched to Swift Package Manager because it has great Xcode integration and is generally stable.

Most iOS developers have experienced their share of .xcodeproj-related merge conflicts or weird diffs. Generally I prefer all files in my project to be human-readable. For that I've used XcodeGen, which lets you define your project in a simple project.yml and ignore the generated project in your .gitignore:
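Since XcodeGen regenerates the project on demand, the generated .xcodeproj can be ignored entirely; a minimal sketch:

```
# Generated by XcodeGen; recreate with `xcodegen generate`
*.xcodeproj
```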


That means that on CI, after a fresh clone, or after changes to project.yml, you need to install XcodeGen and run xcodegen generate to set the project up, but for me that's worth it.

Talking about CI, I've set up fastlane for linting and testing. It allows easy definition of lanes and automates TestFlight or even App Store submissions (including taking screenshots), which is awesome. It also has a ton of plugins.

Since Travis CI costs a small fortune for private repositories I've migrated away and now use GitHub Actions. It took some time to configure it right, so here's my .github/workflows/ci.yml, hoping it's useful for some people:

name: CI

on: [push]

jobs:
  test:
    runs-on: macos-latest
    steps:
      - run: sudo xcode-select -s /Applications/
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Cache Bundler
        uses: actions/cache@v2
        with:
          path: vendor/bundle
          key: ${{ runner.os }}-gems-${{ hashFiles('**/Gemfile.lock') }}
          restore-keys: |
            ${{ runner.os }}-gems-
      - name: Cache Swift Package Manager
        uses: actions/cache@v2
        with:
          path: .build
          key: ${{ runner.os }}-spm-${{ hashFiles('**/Package.resolved') }}
          restore-keys: |
            ${{ runner.os }}-spm-
      - name: Install dependencies
        run: |
          bundle update --bundler
          bundle install
          brew install xcodegen swiftlint swiftformat
      - name: Generate Xcode project
        run: xcodegen generate
      - name: Lint
        run: bundle exec fastlane lint
      - name: Test
        run: bundle exec fastlane test

That's my setup so far. If you have any questions or suggestions, hit me up.

Automate #2: Checklists with Things

This is the second post of my series Automate. All posts in this series.

On the Cortex podcast (which inspired this whole series), CGP Grey and Myke Hurley sometimes talk about their checklists: whole projects that can be invoked with a tap when needed. These lists are for things that are important to get right but that you don't do often enough to remember every step; examples include an airport checklist or a YouTube checklist. They mostly use OmniFocus for this, which can export and import projects as TaskPaper.

Thinking about this, I found more and more use cases for checklists and really wanted to set this up, but I personally can't deal with the way some things work in OmniFocus. I use Things for my todos, which doesn't support TaskPaper but has its own JSON format. I didn't want to store my checklists in a proprietary format like this Things-specific JSON; I wanted TaskPaper.

Then I remembered playing around with Scriptable, which can execute JavaScript on iOS with native bindings to the UI, clipboard, etc. This sounded perfect for my use case, so I started writing a script to convert TaskPaper to Things JSON, which is the core of my checklist flow.

The script

I ended up writing a 300-line script (it's on GitHub) with just enough functionality to fit my needs (e.g. it only supports tab indentation and may break on the slightest deviation).

Since I wanted to run this script from Apple's Shortcuts app, I needed a workaround for input/output data. Many apps face this problem and use the clipboard (e.g. AutoSleep), so that's what I did as well. If you run the script in the Scriptable app, it reads the clipboard contents, expecting text in TaskPaper format, converts it to Things JSON and copies that back to the clipboard.

Here's how the @tag and @attr(value) work:
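To give a rough idea of the input format, here's a hypothetical checklist (the exact set of supported attributes is defined by the script):

```
Airport Checklist:
	- Pack passport
	- Check in online @due(today)
	- Buy snacks @errand
```

Plain @tags presumably map to tags in Things, while @attr(value) annotations carry a value, such as a due date.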

The Shortcut

I wanted one shortcut to start all checklists from. Sadly, you cannot get a directory listing of an iCloud folder, so for now, whenever I add a new checklist, I have to add its filename to the list at the top of the shortcut.

This is how the Shortcut looks:

Shortcut screenshots

After a checklist is chosen, the shortcut gets the file from iCloud Drive (it has to be in the Shortcuts application folder to be accessible).
Then it copies the file contents to the clipboard and runs the script (the Scriptable action should appear automatically in your Shortcuts app once you've created the script). After that, it retrieves the Things JSON from the clipboard, URL-encodes it and opens Things with its URL scheme.
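Conceptually, the URL the shortcut opens looks like this; the data parameter carries the URL-encoded Things JSON (the payload shown here is just a placeholder):

```
things:///json?data=<url-encoded Things JSON>
```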


This Shortcuts/Scriptable combination lets me use checklists in Things from anywhere while storing them in a highly compatible format. I'd really love to find more use cases for checklists. Do you use checklists? What use cases do you have? I'd love to hear from you.

Automate #1: Spotify Podcast Mirror

This is the first post of my series Automate. All posts in this series.

In this post I describe how I created an Automator application that records the latest episode of a Spotify podcast, fills out metadata like title and description, and generates an RSS feed a podcast client can subscribe to.


If you want to follow along, you need a machine with macOS and the following installed: Spotify, Audio Hijack, jq, the AWS CLI and my toml-podcast tool.

You also need a bit of knowledge of AppleScript, Bash scripting and JSON, and you should be comfortable in a terminal.

Create Audio Hijack session

Audio Hijack is an application for building complex audio pipelines, often used for recording podcasts or live streaming. We use it to capture the output of Spotify to a file. Audio Hijack has a concept of sessions: saved pipelines you can reuse.

To prepare, start Audio Hijack and create a new session based on the Application Audio template. This post assumes you only have one session; if you have multiple, make sure this one is in the first position. In the session, select Spotify as the application and set the Recorder to 128 kbps (or whatever you prefer). I recommend disabling the Output Device, but it's helpful if you want to debug something. After saving, it should look similar to this:

Audio-Hijack Session

Spotify credentials

We need various metadata about the latest episode, like its title, Spotify URI or duration. To get it, we use the Spotify Web API. You need to create a Spotify app by going to the developer dashboard; fill out the fields and you'll get a Client ID and a Client Secret.

Spotify ID

We also need the Spotify ID of the podcast you want to mirror. To get it, click the three dots on the podcast page in Spotify, select Share and Copy Spotify URI. The URI looks something like spotify:show:abcdef, where abcdef is your Spotify ID.

Folder setup

Lastly, there needs to be a folder where the podcast lives. In the last step we use a tool to generate the RSS feed, so we can subscribe to the podcast from our favourite podcast app. For the tool to work, the folder needs a podcast.toml similar to this example, but without the [[episodes]] parts. There also needs to be an empty subfolder called dist/.

Let's do this

1. Create an Automator project

Open the Automator app and choose New Document. Then click on Application to build a .app file you can launch later.

2. Get latest episode metadata

We need the following attributes of the latest episode: the Spotify URI, the duration, the name, the description and the release date.

We use Bash to get the data, so drag a new Run Shell Script action onto your workflow with the following contents:


# Exit the script if any command returns an error code.
set -e

# Define variables
AUTH_HEADER='abcdef123456' # base64 encoded `<client id>:<client secret>`

# Get Spotify access token
TOKEN_RES=$(curl -s -X POST '' \
  -H "Authorization: Basic $AUTH_HEADER" --data 'grant_type=client_credentials')

ACCESS_TOKEN=$(echo "$TOKEN_RES" | /usr/local/bin/jq -r .access_token)

# Get show
SHOW_RES=$(curl -s "$SPOTIFY_ID" \
  -H "Authorization: Bearer $ACCESS_TOKEN")

# Get first (latest) item
ITEM=$(echo "$SHOW_RES" | /usr/local/bin/jq '.episodes.items[0]')
# Get release date
RELEASE_DATE=$(echo "$ITEM" | /usr/local/bin/jq -r .release_date)

# If the episode already exists, exit early
if [ -f "$PODCAST_PATH/dist/episodes/$RELEASE_DATE.mp3" ]; then
  (>&2 echo "Episode already exists")
  exit 1
fi

# Get and echo metadata, each line will be a parameter to the next action
# in the Automator workflow.
echo "$ITEM" | /usr/local/bin/jq -r .uri
echo "$ITEM" | /usr/local/bin/jq -r .duration_ms
echo "$ITEM" | /usr/local/bin/jq -r .name
echo "$ITEM" | /usr/local/bin/jq -r .description
echo "$RELEASE_DATE"

We need the metadata again later, so save it in a variable by dragging a Set Value of Variable action below the shell script action and giving it a name (e.g. Metadata).

3. Start recording process

Now we get to the messy part. Drag a Run AppleScript action to the end of the workflow and paste the following code.

Read the comments (lines starting with --) to understand what it's doing. Make sure the checkbox next to Automator in System Preferences → Security & Privacy → Accessibility is checked, otherwise you will get an error and the keyboard shortcuts won't work.

on run {input, parameters}
	set uri to item 1 of input
	set duration_ms to item 2 of input
	set title to item 3 of input
	set description to item 4 of input
	set release_date to item 5 of input

	-- Prepare Spotify
	tell application "Spotify"
		-- Start Spotify
		activate

		-- Start playing our episode
		play track uri

		-- Make sure we start at 0s
		-- If we are at 0s, this will skip to the previous track
		delay 1
		set episode_id to id of current track
		previous track
		delay 1
		if episode_id is not id of current track then
			play track uri
		end if
	end tell

	-- Start recording
	tell application "Audio Hijack"
		-- Start Audio Hijack
		activate

		tell application "System Events"
			-- CMD + 1 to open Sessions
			keystroke "1" using command down
			-- Arrow down to select the first one
			key code 125
			-- Open session
			keystroke "o" using command down
			-- Start recording
			keystroke "r" using command down
		end tell
	end tell

	-- Play track
	tell application "Spotify"
		-- Bring Spotify to the foreground
		activate

		-- Start playing our track again
		play track uri

		-- Wait the length of our episode
		delay duration_ms / 1000

		-- Press pause
		pause

		-- Quit Spotify
		quit
	end tell

	tell application "Audio Hijack"
		-- Bring Audio Hijack to the foreground
		activate

		tell application "System Events"
			-- Stop recording
			delay 1
			keystroke "r" using command down
		end tell

		-- Quit Audio Hijack
		quit
	end tell
end run

4. Save release date to its own variable

We use the publish date of the episode as the filename, so we need to get it first. Drag a Get Value of Variable action into your workflow and select the Metadata variable (or whatever you called it). After that, add a Run AppleScript action at the end with the following contents.

on run {input, parameters}
	-- Return the fifth item of our Metadata variable,
	-- which is the release date
	return item 5 of input
end run

Set the variable ReleaseDate using a Set Value of Variable action.

5. Move file to destination

Use a Find Finder items action to search the destination folder of your Audio Hijack session for files with all of the following attributes:

Then drag a Run AppleScript action with the following contents into your workflow to get the first item (which will be the latest).

on run {input, parameters}
	-- Get the first (latest) item
	return item 1 of input
end run

After that, rename the file using Rename Finder Items. Choose Name Single Item and set Basename only to ReleaseDate.

Then move the renamed file to podcast-folder/dist/episodes/ with a Move Finder Items action.

6. Metadata and deployment

Now that we have our recording, we have to generate the RSS feed with the metadata so podcast clients can display the episodes in a nice list. After that we upload both the feed and the episode.

Use Get Value of Variable to get the Metadata variable again.

To generate the feed.xml, we append the metadata of each episode to a simple TOML file. I wrote a small tool for this, which you can find here: toml-podcast. After generating the feed.xml, we use the AWS CLI to deploy it and the newly recorded episode to an AWS S3 bucket.

Add a Run Shell Script action with the following contents:


set -e

# For description, we have to escape double quotes ("), because the TOML strings
# use double quotes as well.
# If the TOML is invalid, toml-podcast will crash.
DESCRIPTION=$(echo "$5" | sed "s/\"/\\\\\"/g")

# Add episode metadata to the end of podcast.toml
cat << EOT >> "$PODCAST_PATH/podcast.toml"


# Build feed.xml using toml-podcast
# Your $GOBIN will look different, find out your path by typing `which toml-podcast`
# in your terminal

# Set AWS credentials (replace with your real credentials)

# Upload episodes + feed.xml
cd dist/
/usr/local/bin/aws s3 sync episodes s3://fuf-mirror/episodes
/usr/local/bin/aws s3 cp --cache-control max-age=600 feed.xml s3://fuf-mirror/feed.xml

I set a Cache-Control header because I serve the podcast via CloudFront, the AWS CDN, and I want the feed.xml to be at most 10 minutes old.


Now recording the latest episode of your favourite Spotify podcast takes one click. If you have a spare Mac, you could even schedule it to match the release cycle of the podcast.

Announcing the Automate Series

I listen to a lot of podcasts every day, e.g. when doing chores or commuting. One of the shows I've particularly enjoyed recently is Cortex, where Myke Hurley and CGP Grey talk about their ways of being productive. Every year they define yearly themes, which are a bit like new year's resolutions, but instead of hard targets they are more like directions you want to move in. I highly recommend listening to the Yearly Themes episode of 2019.

Since this is such a good idea, I naturally defined my own yearly themes for 2019. One of those is called Year of automation, which to me means automating everything I can in my private as well as professional life.

I will try to document all bigger efforts (you don't want to see all my email rules) in this blog series. You will find all posts under the tag automate-series.

I18n in Node.js

Yesterday we added unit tests for a component that uses the Intl API to a frontend project. Everything worked flawlessly on our local machines, but it failed on CI. The failing tests showed a number formatted in English instead of the expected German format.

We use Jest to run the tests, which runs on Node.js. As we found out in the process, Node.js uses ICU (International Components for Unicode) for its i18n support and, unless specified otherwise, ships with English-only ICU data.

The reason it worked on our local machines was that they had Node installed with full ICU data (Homebrew, for example, installs full ICU by default), but the Docker image we used on CI didn't.


To have full ICU data available, you can either install the full-icu npm package and point Node.js at it via the NODE_ICU_DATA environment variable, or use a Node.js build that embeds full ICU data.

Since it was only failing on CI, we updated the build script to install the full-icu package and export the environment variable. Also, in case you wondered, all major browsers support the Intl API, according to Can I use.
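A quick way to check which ICU data your Node.js build ships with is to format a number in a non-English locale (a sketch; any non-English locale works):

```javascript
// With full ICU data the German locale formats 1234.5 as "1.234,5";
// with English-only ICU data, Node.js silently falls back to the
// English "1,234.5" instead of throwing an error.
const formatted = new Intl.NumberFormat("de-DE").format(1234.5);
console.log(formatted);
```

Since Node.js 13, official builds ship with full ICU by default, so this mostly matters for older versions and slimmed-down Docker images.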


If you use any i18n features in your JavaScript application, make sure to include a basic test to ensure all locales you need are supported.

On Hexagonal Architecture

A good architect maximises the number of decisions not made

— Robert C. Martin in Clean Architecture

Most web services I've worked with use an MVC-style architecture, with a handlers package and, if at all, a repository package. While this may be fine for small services, the handlers package introduces a big problem: it mixes transport logic with business logic. This makes refactoring hard (imagine switching your HTTP framework) and therefore forces you to make decisions about these kinds of things before even starting the project. So when I started a new project recently, I decided to use the hexagonal architecture (a.k.a. Ports and Adapters), and so far I'm really happy.
Here's what I like about it:

All dependencies point inward

All outer layers like transport, storage or logging depend on the business logic, but never the other way around. The business logic is agnostic of any other layer; it doesn't care how it's served or how data is stored, it's pure code. This makes changing it super simple.

You can defer decisions

You can defer many decisions about technologies until you really need them. You could, for example, start out with an inmem package for storage and only decide which database to use once you really need persistence.

Refactorings are simple

Since everything is contained in its own domain, refactoring the transport package, for example, means touching only transport code. There is a clean separation of concerns and everything has a clear place.

Testing business logic is simple

Since you only have pure code without layer dependencies, you can easily inject an inmem package as storage, for example. There's no need to mock complex database structs, which costs time.

Why a hexagon

It actually doesn't matter how many sides the shape has; the point is that there are many. The center represents the business logic and every side represents a port into or out of our application (that's why it's also called Ports and Adapters).

A real-world example

For this post, let's assume we're building an API to manage an inventory of some sort (it's similar to the application I'm building). You should be able to list all items in the inventory, and logged-in users should have basic CRUD access.

To set things up, I created these packages: user and inventory for the business logic, inmem for in-memory storage and transport for the HTTP handlers.

The user and inventory packages each define an interface for their storage. This way we can inject any struct that implements the interface and keep the separation of concerns.
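As a sketch of that idea (all names here are illustrative, not the app's actual code), the service only knows the interface, and an in-memory implementation satisfies it:

```go
package main

import "fmt"

// UserStorage is the interface the user package would define.
// The service depends on this, never on a concrete database.
type UserStorage interface {
	User(id string) (string, error)
}

// InMemUserStorage is an in-memory implementation of UserStorage.
type InMemUserStorage struct {
	users map[string]string
}

func NewInMemUserStorage() *InMemUserStorage {
	return &InMemUserStorage{users: map[string]string{"1": "Ada"}}
}

func (s *InMemUserStorage) User(id string) (string, error) {
	name, ok := s.users[id]
	if !ok {
		return "", fmt.Errorf("user %s not found", id)
	}
	return name, nil
}

// UserService holds only the interface, so any storage can be injected.
type UserService struct {
	storage UserStorage
}

func NewUserService(s UserStorage) *UserService {
	return &UserService{storage: s}
}

func main() {
	svc := NewUserService(NewInMemUserStorage())
	name, _ := svc.storage.User("1")
	fmt.Println(name)
}
```

Swapping inmem for a real database later only means writing another struct that satisfies the same interface; the service itself stays untouched.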

Here's how our main.go would look, simplified:

package main

import (
	"net/http"

	transport ""
	// plus your inmem, inventory and user packages
)

func main() {
	userStorage := inmem.NewUserStorage()
	userService := user.NewService(userStorage)
	http.Handle("/v1/user", transport.UserHandler(userService))

	inventoryStorage := inmem.NewInventoryStorage()
	inventoryService := inventory.NewService(inventoryStorage)
	http.Handle("/v1/inventory", transport.InventoryHandler(inventoryService))

	http.ListenAndServe(":8080", nil)
}

In real-world applications we always have to make compromises and fight for a clear separation of concerns. This is a list of problems I ran into while building the application.


Let's say we want to implement token-based authentication and need to protect some routes of the inventory service. Where should the token-validation function come from? Getting the user service via our NewService constructor would introduce unnecessary dependencies.
What I ended up with is an http.Guard(userService) function, which returns an HTTP middleware (func(next http.Handler) http.Handler) that parses the token and validates it with userService.UserForToken. The middleware is then passed into transport.InventoryHandler and wrapped around the methods that need protection. This way, there are no new dependencies.

Model decorators

Another problem is Go struct tags (read up on them). Let's say we need some for transport (e.g. json:"id") and for the database (e.g. db:"first_name"), but they're attached to the model, which lives in the domain logic. A fix would be to duplicate the model structs in the other packages with the needed tags, but this introduces a lot of duplicate code and unnecessary complexity, so I've decided to leave things as they are until I find a better solution.


I'm really happy with my application and I don't think I'll go back to MVC-style applications any time soon.
I encourage you to try building a service using a hexagonal architecture and to share your experience (it doesn't have to be Go). If you want to read more on hexagonal architecture, I recommend checking out go-kit. There is also a GopherCon 2018 talk by Matt King; here is the script until the videos are up. Also, if you have comments or suggestions, hit me up 🤙🏻.

Hello World

This is not a post, please move along. Check out my about page to learn more about me, subscribe and expect a new post any day.