TFW You Realize What Technical Debt Actually Means

February 21, 2019

A few weeks ago I set out to write a blog post about technical debt and the complexities of getting rid of it, or some of it, when you work for a company.

I wanted to see what others had written about it and of course I landed on Martin Fowler’s article about technical debt. When I started reading it I realized that up to that point I didn’t really know what tech debt was.

It seems that this is what my brain does when confronted with the eternal vastness of software engineering: when I hear a term for the first time and can deduce its approximate meaning from context, I store it as a known term, even though I don’t know exactly what it means.

What I deduced it to be was: “Legacy code that makes it hard to maintain your codebase or to add features.”

And every time I heard or used the term “technical debt” there was a tiny little voice in the back of my head going: “Why is it “debt”!? I don’t get it.”

Anyways good ol’ Martin cleaned up that part of my brain and made it crystal clear for me:

Technical Debt is a wonderful metaphor developed by Ward Cunningham to help us think about this problem. In this metaphor, doing things the quick and dirty way sets us up with a technical debt, which is similar to a financial debt.

Like a financial debt, the technical debt incurs interest payments, which come in the form of the extra effort that we have to do in future development because of the quick and dirty design choice.

We can choose to continue paying the interest, or we can pay down the principal by refactoring the quick and dirty design into the better design. Although it costs to pay down the principal, we gain by reduced interest payments in the future.

It’s a metaphor! It is actually a term that was invented in order to explain the consequences of sloppy coding to people in suits!

Fowler goes on to write:

The metaphor also explains why it may be sensible to do the quick and dirty approach. Just as a business incurs some debt to take advantage of a market opportunity developers may incur technical debt to hit an important deadline.

The all too common problem is that development organizations let their debt get out of control and spend most of their future development effort paying crippling interest payments.

Thank. You. Technical debt can lead to crippling interest payments. These can slow down your development teams so badly that you can’t compete anymore.

The term “technical debt” carries all the information you need in order to make the argument to company leadership why getting rid of it or keeping it at bay may be a wise business decision.

Who knew?! 😂

Nested Loops Bow-Out

February 6, 2019

As you may know, I am a member of the JavaScript band Nested Loops. We performed at the last three opening performances of JSConf EU.

We will not be performing at JSConf EU this year.

It was a great honor and privilege for us to be able to do that and we are thankful for the opportunity.

We produced original music for the conference and performed it in a browser. If you want, you can re-live our performances on YouTube. And you can listen to our story on the Changelog podcast here.

I have no idea if we will ever perform again, at JSConf EU or any other event.

If you want to talk to us about performing at your JavaScript conference or any other event, or if you need some original music, just drop me an email or DM me on Twitter.

How to Use Async Functions

January 22, 2019

This article by Dr. Axel Rauschmayer was exactly what I needed to wrap my head around how to use async functions without confusion.

So far I had just been using them intuitively, and because of their synchronous style I got confused about when to use try...catch. I also attempted to call an async function without await in front of it, while using await in its body, fully expecting it to execute synchronously.

It’s important to remember that the foundation of async functions is Promises.
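To make that concrete, here is a minimal sketch (the function names are my own, not from the article) of the two things that tripped me up: calling an async function without await just hands you a Promise, and try...catch only catches a rejection if you await inside the block.

```javascript
// Sketch: async functions always return a Promise,
// even when their body looks synchronous.
async function answer() {
  return 42;
}

async function fails() {
  throw new Error("boom");
}

async function main() {
  // Calling without `await` does NOT run synchronously to a value:
  const pending = answer();
  console.log(pending instanceof Promise); // true

  // `try...catch` only catches the rejection because we `await` here:
  try {
    await fails();
  } catch (err) {
    console.log(err.message); // "boom"
  }
}

main();
```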

The most interesting parts of Axel’s article to me were these:

blog-cli: A CLI for Blogging with Static Site Generators

January 18, 2019

My blog is built with Hugo. Every blog I ever had was built with a static site generator or a file-based CMS. I love static site generators: they make content management simple, they are secure, and it’s fun to build websites with them.

For me, they have one problem: creating a blog post is annoying. Typically the filename for the post needs to contain the date and the slug, and then you need to put in the Front Matter for the post. It is all just very tedious.

That’s why I made blog-cli. It creates the Markdown file for me at the right location with the correct file name, inserts the basic Front Matter and opens the file in my favorite Markdown editor. This means I go from post idea to writing in 1 second.

This should work for most static site generators. At least for simple setups.

Here is how it works.

First you have to install blog-cli. You need Node.js and npm for that.

npm install --global @kahlil/blog-cli

Then you need to tell blog-cli where you want it to put your posts.

blog --path ~/my-blog/posts

Then you need to tell blog-cli about your favorite Markdown Editor.

blog --editor 'ia writer'

Now you are all set: you can create a new post and open it in your editor by simply specifying a slug.

blog my-new-cool-post

This will create a new file, with the date and slug in the filename, in the directory you specified (~/my-blog/posts in this case).

The Front Matter that is inserted looks something like this:

---
draft: true
date: 2019-01-18T10:03:48.620Z
title: ""
---

Now, if you are part of the cool kids club, then you probably keep your files in a Git repository, commit new blog posts, and push them to GitHub, at which point the site gets deployed to Netlify.

It turns out that blog-cli can help you with that as well!

blog --publish

Will automatically commit all changes with the message ‘new post’ and execute a git push.
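My guess at the rough Git equivalent of that command (an assumption on my part, not verified against blog-cli’s source):

```
git add --all
git commit -m "new post"
git push
```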

Nifty, right?! If you are static-site-generator-blogging as well I hope blog-cli can help you.

If you have any ideas to improve it please send an issue or a PR on GitHub.

Blogging is Back

January 17, 2019

I’m excited about blogs this year. It really feels like blogs and RSS feeds are back. I am especially happy to see some developers getting serious about it.

Dan Abramov started the year off with a barrage of really good posts, some of which have already gone viral. People have even started translating them into different languages. Wow.

Yoshua Wuyts started the year with a new blog as well. With a sick dark theme, too.

CSS Tricks relaunched their blog with an amazing dark theme design. Not a new blog obviously but man did they make a splash with this. Their post about how they came up with the design is really impressive.

Just a few days ago I stumbled over a site where you can upload any picture with a face on it and it will give you back the same picture with the background removed and made transparent. It’s quite astonishing. How do they do this?!

January 16, 2019

Salary Negotiations for JavaScript Developers (and Anybody Else)

January 4, 2019

Salary negotiations at the start of a job always feel somewhat like a war to me.

Each party is trying to get on higher ground to get the tactical advantage over the other.

Typically one of those parties comes to that fight ill-prepared. Generally that’s the prospective employee.

It’s weird that your first interaction with your future employer is them basically trying to get the best of you. Companies should just pay fair market rates by default, but they don’t. They try to get you as cheaply as they can.

I learned a few things about salary negotiations from a friend, from experience, and from this Twitter thread that led me to this amazing 7k-word Kalzumeus article on the topic.

Here are the three most important things I learned.

1. Never give a number

If they want to know what you want to make, they are trying to gain leverage over you in order to lower your ask. Don’t tell them.

Rather tell them:

First and foremost I am interested if we are a mutual fit. I am happy to talk about the financials later on in the process.

If they keep asking, say this:

Money is not that important to me right now, I would like to find out if we are a mutual fit first. I do expect to be paid a salary that is fair market.

If they want to know what you currently make, they are trying to gain leverage over you to see if they can low-ball you. Don’t tell them.

Rather tell them:

I don’t feel comfortable talking about the internals of my current working arrangement out of respect to my current employer.

Or just be blunt and say:

I don’t see how my current salary factors into these discussions. Let’s find out if we are a mutual fit first.

The goal is to get them to make an offer first. That offer will be in the middle range of what they can offer you and you can negotiate up from there.

2. Always be negotiating

Always negotiate. Not necessarily because you need the money but because it represents your value at the company. It is a matter of respect and it influences how you are perceived at the company.

3. Start negotiating when you receive an offer

Since you are not giving them a number, they will make an offer. This is where you start negotiating. At this point it is highly unlikely that they would reject you for negotiating; they have simply invested too much in you already. Take your time: maybe say you have to talk it over with your spouse, or that you need to read through it in peace.

Then you could say that the offer is “interesting” but not quite there to get this done, and ask if there is flexibility on that number. They might come back with a higher offer and say they can’t go any higher. This is where you can tell them that the offer will work if they can throw in a few more vacation days or something like stock options. As @patio11 puts it in his article:

You Have A Multi-Dimensional Preference Set. Use It.

This kind of tactic should get you to the high end of what is possible for your position. It will make you feel good and strengthen your position in the company from the get-go.

To be honest, I think it is a flaw in the system and really uncomfortable that one of your first interactions with your new employer is of this nature.

There are rare cases in which it’s done differently. Basecamp for instance pays everybody the same competitive salary based on job title and current market situation. You should read their article about it, it’s such a refreshing take on this matter.

If you want to get deeper into this I highly recommend you read @patio11’s article that I mentioned a few times already. He links to more resources as well.

Happy negotiating!

Just Fucking Ship Taught Me How to Ship Blog Posts

November 29, 2018

Just Fucking Ship by the most excellent Amy Hoy is a great book. It’s about how to bootstrap and ship any (side-)project.

The most valuable part of the book for me personally was how she describes outlining a blog post.

I always thought outlining a post meant coming up with some high-level headlines. That never really helped, though, because they were too high-level and I was still working out the structure of the post on the fly while filling in the gaps between the headlines.

No, what I learned from Amy is that you have to write down each and every thought that you want to put into that post. Write them down, fuck structure, fuck grammar, fuck spelling.

Put it all on (virtual) paper. Now that you did that you can look at it and move your points around, give the whole thing some structure, delete some, add some.

Once you are happy with that, you go ahead and write the blog post. At this point it’s super easy, because all you need to do is write it out; you are not structuring it while you write anymore.

This makes so much sense because it completely separates writing a blog post into two smaller isolated steps:

  1. Determine content & structure
  2. Write it

Take this here blog post, for example. When I first started writing it, I began with a big introduction about the book and Amy and blah, and I immediately got bored of my own words.

That made me stop and think again: what is it really I want to convey with this post?

  1. Just Fucking Ship is great
  2. Here is why it was great for me

So I deleted everything, just wrote “Just Fucking Ship is a great book”, and went on writing about my greatest take-away from it.

I feel like this makes the post much meatier and more interesting to read. It definitely is much more fun to write.

In closing, I would like to add that there are many more surprising tips in the book to help you finally start, and actually ship, your side project.

But instead of telling you what they are I would rather you go and support Amy Hoy and get yourself the book! It’s a really quick read and it is worth it.

Vimming in the Squasher: How to Squash Your Commits with VIM

November 21, 2018

Whenever I use git rebase -i to squash commits, Git opens the squasher (that’s what I just named the text view for the interactive rebase) in VIM.

My knowledge of vimming is not very great, so I used to just type i, inch around with the arrow keys, and delete character by character in order to remove the word “pick” a bunch of times and then type “squash” a bunch of times.

Of course that was incredibly annoying, so I finally looked up how to vim in the squasher and it is glorious. So here is how you do it:

First, move to the line with the first commit you want to squash, which is typically the second one.

Then, enter something called visual block mode (WTF?! What does that even mean?) by hitting ctrl + V. Something magical happens: when you move the cursor it blocks out every character you move over with white, and if you move down it covers as many characters as you covered in the line above. It looks like blocks. So I guess that’s where the “visual block” comes in 😅.

Select all the rows you wish to squash while visually blocking out the word “pick”.

Now, more magic: hit c and type the word “squash”, then hit esc and watch the word “squash” get applied in all places that were “visually blocked out”.
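For illustration, here is what the rebase todo looks like before and after (the commit hashes and messages are made up):

```
# before
pick a1b2c3d add feature
pick e4f5a6b fix typo
pick c7d8e9f more fixes

# after visual-block changing "pick" to "squash"
pick a1b2c3d add feature
squash e4f5a6b fix typo
squash c7d8e9f more fixes
```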

If you only could exit VIM at this point, you would actually successfully squash all these commits. A boy can dream, right?

Nested Loops on the JS Party Podcast

November 16, 2018

Jan and I had the distinct honor to guest on the JS Party podcast last week to talk about Nested Loops. First of all it was a really great experience to be a guest on there. The Changelog family of podcasts are very professionally run and it was a great pleasure to be on. @noopkat, @jerodsanto and @adamstac made us feel very welcome and comfortable.

We talked about how Nested Loops was founded and immediately had band problems, why I am rapping with a Jamaican accent, what @bonotes’ role is, and how the tech works and evolved on the music, video and effects sides. I think it came out great and you should definitely give it a listen.

Get it by looking for JS Party in your favorite podcast app or listen right now, right here:

JS Party 52: Nest ‘dem loops – Listen on

Flip the Switch

November 16, 2018

Reactive Programming is a paradigm that solves many of the problems JavaScript frameworks only partly address by introducing some sort of “reactivity”.

That typically ties the reactivity down to one use case in the framework itself. Reactive Programming can be helpful in many more scenarios though.

One huge hurdle for adoption seems to be grasping the actual concepts, specifically doing push-based programming instead of pull-based.

I can relate because it did take me quite some time to get there but once the switch was flipped, so to speak, I started to think in terms of Reactive Programming instantly.

That seems to be exactly the problem, it’s not a hard concept to understand. It just takes a lot of effort to start thinking about your programming problems differently.

I am currently putting together a talk that is trying to flip that switch. I have a few ideas based on things I wished I knew in the beginning.

Basically it will focus on making you understand three things about RxJS Observables:

  • Observables are just functions receiving an observer
  • Observables are lazy
  • Observables are ubiquitous
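A tiny sketch of the first two points, using nothing from RxJS itself: an Observable is essentially just a function (here wrapped in an object) that receives an observer, and nothing runs until someone subscribes.

```javascript
// Minimal Observable sketch (not RxJS, just the concept).
function of(...values) {
  return {
    // Laziness: the values are only pushed when subscribe() is called.
    subscribe(observer) {
      for (const value of values) observer.next(value);
      observer.complete();
    },
  };
}

const received = [];
of(1, 2, 3).subscribe({
  next: (v) => received.push(v),
  complete: () => received.push("done"),
});
console.log(received); // [ 1, 2, 3, 'done' ]
```

Real RxJS Observables add unsubscription and error handling on top, but the push-based core is exactly this shape.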

This could work 🤔… maybe?

On Dropping Side Projects

November 13, 2018

I stopped working on my code-related side projects grit, belly and flow-state. I have a family and we recently moved. So most of my free time is spent working on the flat or spending time with the family.

There are a few side projects I am not abandoning though.

  • I will continue to co-organize KarlsruheJS
  • Even though the podcast is on a break right now, we will continue with Reactive Podcast soon
  • I will continue to work on musical projects every now and then
  • I will continue to write on this blog

Writing regularly is somewhat of a new habit for me. I always thought it was not for me, but over the years I felt a growing urge to write. Now I’m just acting on it.

A big reason why I feel OK dropping my code related side projects is the fact that I feel quite happy and challenged at work. I don’t have to feed my desire to learn and grow with my side projects at the moment.

That’s a pretty new and awesome feeling 😀.

Static Site Generators for Documentation

November 13, 2018

We are using some internal libraries at work that are not very well documented. As a result, it is really difficult and time-intensive to onboard onto the project. It became clear to me that not having good documentation comes at a very high cost in terms of long-term productivity for a team with changing members.

So I started to work on documenting the inner workings of our application, and it does feel satisfying to put it on paper.

With documentation on my mind I attended our local JavaScript meetup KarlsruheJS, which I also co-organize. Interestingly, it featured a talk about VuePress, a static site generator based on VueJS that is specifically tailored towards creating documentation. I was smitten instantly!

After I tweeted my excitement about VuePress, KarlsruheJS member Carsten directed my attention to Docusaurus by Facebook, which is (obviously) a React-based static site generator for documentation. And since my co-worker Emma is totally into Gatsby, I took a closer look there too and found a Gatsby theme for documentation as well.

My docs are not ready to be hosted yet but it is good to know that there are some solid ways to get a documentation site up quickly. Since our project is written in React I will not go with a Vue-based solution though. It will be either Docusaurus or Gatsby.

Kottke at The Talk Show

August 15, 2018

One of my favorite podcasts is The Talk Show with John Gruber. Gruber is the man behind Daring Fireball, a very successful and long standing Apple news and analysis website.

The episode I would like to point out is the one on which Jason Kottke joined as a guest. John invited him to celebrate the 20th anniversary of kottke.org.

I am not a big reader but nevertheless I found his story and the history of the site very interesting. I am always fascinated by this type of project, where one person runs a successful business by just relentlessly publishing on the internet in a very focused way.

Check out The Talk Show episode in question right here.


August 10, 2018

Some people react to what I am doing with belly by asking why I didn’t use Git aliases or Bash aliases. It is true that the general behavior could have been implemented using aliases. But even in its current MVP state, belly does more than I could comfortably cover with aliases.

The Spinner

In order to show that belly is working, I am using a command line spinner. The spinner has different states and shows different text for those different states.

Clearly this could have been done with bash scripts of some sort but yeah, why would I do that?

I am a JavaScript developer, so I use the tools that I am comfortable with to make things.

Also, whoop whoop for npm, because said spinner is actually a ridiculously useful and powerful npm package called ora, by none other than Sindre Sorhus.

Shareability & Portability

Aliases are just a little text in a config file and therefore quite easy to share and to drag around to your other machines or friend’s machines.

It does not beat npm install though. If you want to port your config files to other computers or share them with other people (if you’re friendly), you have to back them up somewhere and also provide some installation script in order to add them easily to a Git setup.

By keeping this functionality in an npm package I get backup, easy-install and shareability out-of-the-box. On top of that, many developers are very familiar with using npm to install tooling.

More UX

Going forward I would like to further improve belly’s CLI UX by, for instance, improving error display and the spinner states, and who knows what else I’ll come up with.

Because the tool is written in JavaScript, it is extremely easy for me to extend it and keep iterating. Aliases with Bash scripts would just break my brain, and it would take me forever.

Try belly

If you want to know what I am talking about, get belly by doing npm i -g belly. If you have any ideas for extending it or improving it, hit me up in the belly issues.

belly: Improving Git's Command Line User Experience

August 8, 2018

My preferred way to use Git is on the command line. I have a set of Git command sequences that I use all the time and know by heart, but that are a nuisance to type out every time, and another set that I need regularly but have to look up every time. In order to improve my personal Git user experience, I created belly.

belly is a CLI tool that provides a better user experience for some common Git command sequences:

Commit & Push

When working on a personal project or on a feature branch within GitHub Flow, I do this all the time:

git commit -m "my commit message"
git push

That’s a lot of typing for something I do tens of times a day.

belly combines this into one command:

belly c my commit message

Tip: instead of typing belly you can just type b.

Git Checkout Branch Or Create And Checkout

I create branches for features, bug fixes and hot fixes all the time. I also often have to switch between branches.

belly combines switching and creating-and-switching into one command. So instead of:

git checkout my-branch

or

git checkout -b my-branch

I can just do:

belly s my-branch

belly will switch to my-branch if it exists and it will create-and-switch to my-branch if it doesn’t exist yet.

Set And Delete Tags Locally And On Remote

When you set a version tag you typically want it set in the local and the remote repository. belly allows you to do that with:

belly t v1.0.0

belly can also delete those tags for you:

belly t v1.0.0 -d

Rename A Branch Locally And Remote

This is one of those Git commands I always have to look up. How do I rename a branch again and why is the command to rename it on the server so different?

Anyway, no need to break out Google for that anymore. Just:

belly n my-new-branch-name

and it will rename your local branch as well as the corresponding remote branch for you.

Easy Squashing

There are a couple of ways to squash your feature branch commits. If you want to squash everything down to master and add a commit message, you can use belly’s q command:

belly q my commit message

Force-Push The Right Way

When you work with rebasing and squashing in order to keep your Git history legible, you have to force-push your feature branches regularly. Rather than using --force, it is recommended to use --force-with-lease, an awkwardly named flag that refuses to force-push if the remote branch has been updated by somebody else. This should be the default.
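For reference, the plain Git command is:

```
git push --force-with-lease
```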

With belly you can use:

belly p

to push your current branch state with --force-with-lease.

If you think any of these convenience methods would be useful to you, feel free to get belly. PRs are welcome, if discussed in an issue first.

I will be improving belly over time and will add more features if I think they would be useful.

Enums For The Uninitiated

July 12, 2018

Before TypeScript I had never used or heard of enums. So needless to say, when I saw them in the docs I was confused.

I was so used to describing the shape of data structures with interfaces that I didn’t understand the benefits of using enums.

I was also confused because enums hold values in addition to being a type. I didn’t think of TypeScript as something that gives me constructs to hold values with.

So in order to understand I had to look into it some more and here is how I understand enums:

Enums give you an easy way to logically group constants in order to make your code easier to read and understand.

So if you have a bunch of constant values in your program that logically belong together, like the days of the week, directions (up, down, left, right), or the return types of an asynchronous operation, you can use enums to group them and use them in your code.

Enums give you type and data structure in one go without a lot of work.

If you declare the following enum:

enum Directions {
  Up,
  Down,
  Left,
  Right,
}

You are doing a bunch of things in one go:

  • Declaring a type Directions
  • Assigning members to the enum
  • Implicitly assigning values to the members of the enum

The assigned values are numbers by default, starting at 0 and incrementing with each following member.

So it’s kind of like a JavaScript object just that values are assigned automatically.

You can change the starting point of the assigned numbers by assigning a custom number to the first property.

enum Directions {
  Up = 1,
  Down,
  Left,
  Right,
}

This means the increment starts at 1 instead of 0. That can be useful if you want to make sure that the first value can never be falsy.

You can also manually assign values to all members of the enum. The values can be of type number or string.

So now you can go ahead and use these constants in your code like:

if (direction === Directions.Up) {
  // Do things.
}

Simple and convenient.

The TypeScript docs say this:

Enums allow us to define a set of named constants. Using enums can make it easier to document intent, or create a set of distinct cases.

After having used enums a little bit in recent projects I find them very convenient and useful. I think especially “documenting intent” is what they do quite well in a convenient way.

One cool thing about enums with numeric values is that you can retrieve the member name of a numeric value by accessing the enum with the number; in our case, accessing a Directions member with the number 1 returns the string "Up".

Directions[1] // returns "Up"

It’s cool but so far I have never needed it. To make this possible during runtime enums get compiled to this:

var Directions;
(function (Directions) {
    Directions[Directions["Up"] = 1] = "Up";
})(Directions || (Directions = {}));
var direction = Directions.Up;
var nameOfA = Directions[direction]; // "Up"

It’s an IIFE (immediately invoked function expression) that gets passed an object. That function then assigns numbers to the member names and member names to the numbers.

This line does two things at the same time.

Directions[Directions["Up"] = 1] = "Up";

Directions["Up"] = 1 assigns the number to the member name, and because that assignment is a JavaScript expression it evaluates to a value, namely the number that was assigned.

This means that the whole line evaluates to Directions[1] = "Up";.

So what you end up with in JavaScript is:

{
  1: "Up",
  Up: 1
}
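The trick relies on the fact that an assignment is itself an expression that evaluates to the assigned value, which you can verify directly in plain JavaScript:

```javascript
// An assignment expression evaluates to the assigned value, so one line
// can build both directions of the enum's reverse mapping.
const Directions = {};
Directions[(Directions["Up"] = 1)] = "Up";

console.log(Directions.Up); // 1
console.log(Directions[1]); // "Up"
```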

Pretty neat, right? But let’s say resources are scarce and you don’t need to access member names; then you can use const enums.

They evaluate to their respective values, and the enum code is completely removed during compilation. This means:

const enum Directions {
  Up = 1,
  Down,
  Left,
  Right,
}

if (direction === Directions.Up) {
  // Do things.
}

Is compiled to:

if (direction === 1) {
  // Do things.
}

So that’s my take on enums. After getting comfortable with them for a bit, I find places where they are useful all the time, and I really like how explicit they make intent. Enums can make your code easier to read.


June 26, 2018

Vanilla is a menubar Mac app that allows you to hide menubar items. It’s the first menubar icon management app that stuck with me. It is incredibly simple and just so cute!

When you launch it a little arrow and a dot appear in the menubar. You move the dot behind the last icon you want to hide and then click on the arrow. The arrow then swooshes over and lands on the spot where you put the dot with a nice little bouncy animation.

Such a joy!


June 26, 2018

While working on improving accessibility features of an application I learned about an HTML feature that I was not aware of before: the accesskey attribute.

The attribute allows you to add keyboard shortcuts to an element.

In order to assign a keyboard shortcut you add the accesskey attribute to an element and assign it a letter. For instance the letter “a”, like this:

<div accesskey="a"> ... </div>

This automatically binds a shortcut to the element that either focuses it or triggers a click event.

The actual shortcut varies between browsers and platforms, but in general, in modern browsers, it is alt + a on Windows and Linux, alt + ctrl + a on the Mac, and alt + shift + a on Windows and Linux in Firefox.


June 15, 2018

MDX is a JSX in Markdown loader, parser, and renderer for ambitious projects. It combines the readability of Markdown with the expressivity of JSX. The best of both worlds.

If you configure MDX for your project you can do wild things like importing React components into your markdown file and use them in there.

import Graph from './components/graph'

## Here's a graph

<Graph />

And on the other hand you can import your MDX files into React components and use them as regular React components.

import React from 'react'
import Hello from '../'

export default () => <Hello />

Powerful stuff. If you are a developer building something with React or Next.js and you want to add content to the project, that’s a really nice way to do it. It feels kind of mind blowing once you use it.

MDX is a superset of the CommonMark specification that adds embedded JSX and the import/export syntax.

CommonMark was created by the people who built Discourse. They support Markdown on their platform and needed a clear specification that handles certain edge cases; the original specification by John Gruber was not specific enough. At first they wanted to call it Standard Markdown, but Gruber threw a hissyfit, wrote an angry email, and even talked about it on his podcast The Talk Show.

Discourse never sought to upset him or take anything away from him so they renamed it to CommonMark.

It’s great to see that things like MDX can be built quite safely and relatively easily because there is a proper Markdown specification.

Nested Loops at the JSConf EU 2018 Opening Performance

June 11, 2018

Once again, Boris, Jan and I have been honored by the organizers of JSConf EU to participate in this year’s opening performance of the conference as NESTED LOOPS. That’s the name of the band we formed when we did the opening for the first time in 2015.

Thank you for letting us do this. The result was quite epic if I may say so. I don’t think the video really does it justice. The screen was so huuuuuuge!

May 23, 2018

This is Raquel’s farewell episode. Sadly she is leaving the podcast as a co-host. But don’t fret, she will join us every once-in-a-while to tell us about her adventures at Slack and quirky animals.

Other than that your three fav co-hosts chop it up about what’s going on at Slack right now, various conferences they are going to, social media nowadays, the only thing Facebook is actually good for and the arrival of the animal of the week at Slack planning meetings.

It is true. Raquel left the pod. It is sad. But at the same time it is an opportunity. Henning and I will keep on going no matter what, and we are on the search for a new co-host. Suggestions welcome.

Chopping It Up With Henning

April 12, 2018

We’ve been having a bit of a break on the Reactive Podcast. The break was involuntary and just caused by normal life stuff. But we are working on getting back on track. Henning and I managed to put out a couple of episodes in which we chop it up about various things.

The topics vary from what we’re working on currently to other random stuff that shot through our heads while recording the podcast.

Check out 97: A Smoother-ish On-Boarding Process and 98: It’s Tiring Too, But It’s Good everywhere you like to get your podcasts (unless it’s Spotify, we’re not on Spotify).

New Job, Who Dis?

April 11, 2018

Last month I started working at LogMeIn as a Staff Software Engineer. My team is working on the web version of GoToMeeting. We are migrating legacy code to a new, modern tech stack, while improving the product as well as working on features.

The new tech stack consists of TypeScript, React, Redux & redux-observable.

Coming from Angular, not being super-fond of the complexity of the framework, I was expecting to enter the lands of simplicity with React and Redux. Little did I know what was about to transpire right into my face!

Using React with Redux the canonical way is so far from straightforward that you get dizzy just thinking about it. I honestly felt reminded of Angular’s abstraction hell.

I always felt that the concept of Redux actually fits much better into a Reactive Programming paradigm. That’s why I built flow-state and never reached for the original Redux.

Reactive Programming allows you to granularly manage how, when and under which conditions the new state should reach the components. This means you can just set the framework to re-render whenever new state reaches, no other checks necessary.

But I digress. So, new job. So far so good. Lots to learn. I like the team and I do like working with React etc. We’ll see where this leads.

Just One Link

February 16, 2018

Many designers and developers are taking the time, in various forms, to curate link-list-newsletters.

They contain 5-ish links in fat type with a low-contrast small-type link description below each link.

Over time I have subscribed to many of those and although I appreciate the work that people put into them, they have lost their value to me.

It’s their sameness. I also don’t know why the author has chosen those links. Did they actually read the content? Why do they think it is important?

I don’t even know which is which anymore. I just see a link list, maybe read a couple link titles and am back to whatever I was doing.

There is nothing that keeps me in the email, and it has become work to figure out which link could be interesting for me to check out.

This is what I would like to see instead:

Send an email with just one link. Write a paragraph about why you are sending out this link. Why does it excite/interest/enrage you?

Let me read your voice.

I’d be thrilled to receive your newsletter and read it every time.

February 07, 2018

Henning and Raquel talk about the Poison Dart Frog, moving cheese, when to maintain and when to develop features, a laptop theft ring and the fact that Slack is mostly PHP 😱

Another stellar conversation between my incredible co-hosts Raquel and Henning.

February 07, 2018

Inspired by an article by Mikeal Rogers, Kahlil talks about his enthusiasm for Web Components. Besides concrete web standards like Custom Elements, Shadow DOM, template elements and HTML Imports (RIP), and tools in the style of Polymer and the CDN unpkg, we also get into the question of why, and of the possible advantages of Web Components.

While Kahlil sees Web Components, combined with modern template libraries (lit-html, hyperHTML/viperHTML) or data binding for template elements (as discussed in Revision 319), as an alternative to, or a new foundation for, frontend JavaScript frameworks (besides Polymer there are X-Tag and Stencil), Peter is less euphoric.

The interoperability of components that Web Components provide can be a big plus (see EA’s experience report at the Polymer Conf), but how often that really pays off remains to be seen. Peter uses Web Components himself (html-import, scoped-style), but sees them only as an HTML abstraction useful for very specific use cases, the jQuery plugin 2.0 so to speak.

I stammered a bit about Web Components on the Working Draft podcast. Shoutout to Peter and Hans for putting up with me!

February 01, 2018

Stimulus works by continuously monitoring the page, waiting for the magic data-controller attribute to appear. Like the class attribute, you can put more than one value inside it. But instead of applying or removing CSS class names, data-controller values connect and disconnect Stimulus controllers.

Think of it like this: in the same way that class is a bridge connecting HTML to CSS, data-controller is a bridge from HTML to JavaScript.

On top of this foundation, Stimulus adds the magic data-action attribute, which describes how events on the page should trigger controller methods, and the magic data-target attribute, which gives you a handle for finding elements in the controller’s scope.
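Based on the Stimulus handbook, a small sketch of how the three attributes wire up (the controller name "hello", the method "greet" and the target "name" are made up for illustration):

```html
<div data-controller="hello">
  <!-- data-target gives the hello controller a handle on this input -->
  <input data-target="hello.name" type="text">
  <!-- data-action: on click, call the greet() method of the hello controller -->
  <button data-action="click->hello#greet">Greet</button>
</div>
```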

Stimulus offers a very simple, elegant and concise way to upgrade your server-rendered site with JavaScript.

Once you see it, you think: “Wow, why didn’t anybody come up with this earlier?”.

To be fair: a big part of the elegance is to be able to use ES2015 classes.

And to be even fairer: other people had similar ideas way before Basecamp. Flight by Twitter for instance has a similar approach, but IMHO it was stuck in the past already a few years ago and it is not under active development anymore.

Stimulus is basically the better, modern version of Flight.

Fun fact: it turns out that Google has been using patterns almost identical to Stimulus in most of its consumer-facing web apps for years:

Malte Ubl, head honcho of Google AMP on Twitter:

Malte also said they decided against open sourcing it years ago.

Which, I would argue, is not a bad thing. Basecamp found a much nicer name, they made a really beautiful website for it, and the writing in the Stimulus handbook is really great! I doubt Google would have done as good a job at that.

January 29, 2018

Henning and Raquel talk about The Swamp Hackathon, working at Slack and a book called Punished by Rewards.

I couldn’t make it on this episode because of my new flamingo farm, but I really enjoyed the discussion my two co-hosts were having there.

January 29, 2018

We’re back and talk about Raquel working at Slack, Henning and the hackathon, blockchain, Kahlil’s new Electron app and Bootstrap 4.

Our first episode this year!

January 26, 2018

In a default setup, Electron serves the app’s index.html file directly from disk with the file:// protocol. This does not work well with JavaScript apps that want to use client-side routing.

The browser does not support history.pushState for files served from disk. This means every time you navigate to a different route with a client-side router it will try to resolve the path on disk which leads to a 404.

Thankfully, earlier this year @sindresorhus published electron-serve. This package registers a custom file protocol called app:// with slightly tweaked behavior compared to file://:

It serves a file if it exists and serves index.html as a fallback if it doesn’t.

This makes it possible for client-side routers to process routes and for the app to respond to them without having to spin up a server in your Electron app.
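Following the electron-serve readme, the setup looks roughly like this (the renderer directory name is an assumption, and this sketch only runs inside Electron’s main process):

```js
// main.js (Electron main process)
const { app, BrowserWindow } = require('electron');
const serve = require('electron-serve');

// Registers the app:// protocol and serves files from ./renderer,
// falling back to ./renderer/index.html for unknown paths.
const loadURL = serve({ directory: 'renderer' });

let mainWindow;

app.on('ready', async () => {
  mainWindow = new BrowserWindow();
  // Loads app://- which resolves to the served directory.
  await loadURL(mainWindow);
});
```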

One thing I had to do to get it to work though was to pass a fully resolved path to the require call for my entry file. I used path.resolve to do that. It did not work with a regular relative path like require('./renderer').

Keeping TypeScript Benefits In A JavaScript Project With Visual Studio Code

January 25, 2018

When I started Grit I was excited to find out that the Electron team is shipping types with Electron which expose the Electron API in your tooling. Since I am a fan of TypeScript I set up my dev environment to write my code in TypeScript as well.

After a few days of writing TS code to transpile it for my app, I started to get annoyed with the fact that I am transpiling code. And here is the reason:

In Electron you write code for a very capable browser. ES2015 is fully implemented minus ES Modules. That means the code I can write for the browser directly is already so close to ideal for me that it felt wrong to transpile from something else. So I converted my project back to JavaScript.

After years of dealing with Babel and TypeScript, writing ES2015+ code directly for the browser feels very freeing.

No source maps to decode, no types to manage.

Losing the types would be a little annoying though. For me personally, in this one-man-project, I enjoy the types especially because they enhance the tooling.

Thankfully there is VSCode!

VSCode has introduced something that is called jsconfig.json. It’s a configuration file that tells VSCode that the folder containing it is a JavaScript project.


jsconfig.json is a descendant of tsconfig.json, which is a configuration file for TypeScript. jsconfig.json is tsconfig.json with the "allowJs" attribute set to true.

Adding that file tells VSCode to turn on the JavaScript Language Service which is based on the TypeScript Language Service. This gives you powerful Intellisense features throughout the project.

This means you get autocompletion and type errors for a normal JavaScript project. Types are being inferred by TypeScript type definition files as well as JSDoc comments.
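As a small illustration (the function is made up): with the JavaScript Language Service active, VSCode reads JSDoc annotations like these and checks call sites against them, so passing a string where a number is expected gets flagged right in the editor.

```javascript
/**
 * @param {string} name
 * @param {number} times
 * @returns {string}
 */
function repeatGreeting(name, times) {
  return `Hello ${name}! `.repeat(times).trim();
}

console.log(repeatGreeting('Ada', 2)); // Hello Ada! Hello Ada!
```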

Setting up jsconfig.json For Electron

In order to get it to work well for my Electron setup, I had to configure a few things. jsconfig.json is just a tsconfig.json, so the options are the same.

First of all, I excluded node_modules since I don’t want VSCode to type check all my dependencies.

"exclude": ["node_modules"]

In the compilerOptions property, I set the checkJs property to true so that the JavaScript code is type checked as much as possible.

Because I (have to) use CommonJS Modules in Electron I had to set the module property to commonjs so that index.js files are resolved CommonJS-style.

And last but not least in order to be able to write ES2015+ code without warnings I set the target property to es2017.
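Putting those pieces together, the resulting jsconfig.json would look something like this:

```json
{
  "compilerOptions": {
    "checkJs": true,
    "module": "commonjs",
    "target": "es2017"
  },
  "exclude": ["node_modules"]
}
```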

You can have a look at my config file right here.

Adding A Type Definition For HyperHTMLElement

I had to create a type definitions file for HyperHTMLElement because the JavaScript Language Service didn’t like me having to use the default property on the required module. This is the code in question:

const HyperHTMLElement = require('hyperhtml-element').default;

I went to the trouble to actually add all the class properties in the definition which gives me the sweet sweet code completion feature in VSCode. If you need it you can pick it up from here.

This setup allows me to write modern JavaScript code directly for the browser as well as benefit from many TypeScript features. Love it.

Building Grit

January 18, 2018

Static Site CMS is now Grit. I wanted to give it a name because I want to keep writing about building this thing and Static Site CMS is a shitty name.

Grit will be a Markdown editor for managing blog posts for your static site. There is nothing to see yet, I am still working on the MVP.

I’ll be writing a series of posts about the process of building Grit. The titles of these blog posts will be prefixed with “Building Grit:”.

I am using Electron, Web Components, a bunch of packages by Sindre Sorhus and a Router called Navigo to build Grit. I am constantly learning new things and I am looking forward to sharing them here.

Why Grit?

It has two meanings.

  1. dirt
  2. determination despite difficulty

Especially the second meaning is fitting, since it takes real grit to keep a blog going. Using Grit is supposed to remove some of the friction that comes with blogging on a static site, which should help you have the grit to keep going.

But to be quite honest “dirt” is fitting as well since this software will be quite rough around the edges for a while lol.

Gritty Road

The road ahead for Grit is simple. I am in the process of building a simple MVP with an extremely limited feature set and we go from there.

Let me know if you’re interested in trying it out! ✌️

Switching To hyperHTML And HyperHTMLElement

January 10, 2018

When starting work on Static Site CMS I originally planned on using Custom Elements and lit-html instead of a JavaScript framework.

In the process of setting up my development environment I was a little annoyed to realize that lit-html can only be used with ES Modules and not with CommonJS modules. Since Electron uses CommonJS that is a requirement for my current project.

I heard that hyperHTML basically does the same thing as lit-html so I went and checked it out.

I was very happy to discover that not only does hyperHTML do basically the same thing as lit-html, it also supports all module types and as a bonus there is a little ecosystem around it that enhances Custom Elements and even supports server side rendering. What?! Nice! See a chart that compares the two right here.

HyperHTMLElement is a Custom Element subclass that is exactly what I just had started to do with LitElement, only better. The author @WebReflection knows these emerging web standards very well and does an amazing job making them really usable and useful today.

However, @WebReflection is not a fan of TypeScript, so neither hyperHTML nor HyperHTMLElement have type definitions. Since I decided to use TypeScript in my app I created a simple type definition file for HyperHTMLElement. I didn’t add it to @types yet but you can grab it from my project right here.

All in all lit-html and hyperHTML are very similar in what they do. The biggest difference at the moment is that hyperHTML is more complete and feels more mature, not to mention the addition of HyperHTMLElement and server side rendering via ViperHTML.

Working Title: Static Site CMS

January 5, 2018

Product development has always been of huge interest to me. I love creating something with web technologies that solves a problem and iterating on it. That is basically what I do at work but I always wanted to have my own thing.

During the last year I have played around with a couple of ideas while trying out different JavaScript frameworks. Altogether I think I must have at least started working on 4 or 5 things. Among those I made two small barely usable products: TinyDraft and Kaf. They are very simple and incomplete but they do implement a very minimal use case quite OK.

I always end up hitting a wall when it comes to adding a backend with database, authentication and authorization. Typically I would use services for that. But since I really just get to work on my personal stuff on the road during my commute I can’t easily add any services. My connectivity is very limited during those train rides.

I also tried out using Hoodie which is great to work with locally but deploying a Hoodie backend wasn’t trivial to me and ultimately I only really want to think about the frontend side and have the backend just work and be there in local development as well as in production.

So these products are going nowhere.

I realized in order to build and iterate on anything during my very limited free time I have to remove as much friction as possible. The solution to that problem came to me just a couple weeks ago.

Recently I hooked up NetlifyCMS to my site. It uses Netlify’s auth service and GitHub to add posts to your site. If you set up automatic deployment via GitHub on Netlify that is a really great solution to adding posts to a static site through a nice UI.

But here I had the same problem. I write my stuff on the road and I can’t rely on being online. But I really liked some of the functionality that NetlifyCMS provided, namely the scaffolding of posts, easy Front Matter editing as well as the automatic Git commit functionality.

I thought to myself: “That’s cool, I wish I had that for the desktop.”

That’s the moment when it clicked! I could make this myself with Electron! I actually love working on Electron apps and I would be solving a problem for myself as well as removing my above-mentioned friction from app building. This type of Electron app can be developed without being online at all. Hooray!

I have started working on this over the holidays and I am developing it in the open, the code is on GitHub and I will be posting about my progress here.

For now the product really does not have a name. The working title is Static Site CMS.

The empty state


In my next post I will be writing about the setup I am using to build it.

Template Instantiation

December 20, 2017

Happily stumbled upon this little video by @dassurma and @jaffathecake. Apparently there is a template instantiation proposal out there somewhere that would add lit-html / hyperHTML-like functionality to the template element.

This would be huge if it made it into the platform. This would mean that Web Components would be complete because, as I outlined before, the only web standard missing for them to be useful out of the box is an API like lit-html that allows for their DOM to be rendered over and over again in an efficient way.

Here is the (extremely short) video for your viewing pleasure:

ZeroFux - A Stateless Unidirectional Data Flow Implemented With Custom Events

December 20, 2017

Unidirectional Data Flow, Flux, Redux, Whateverux is essentially this:

  • something happens
  • what happened is being described with an action object
  • that action object is being dispatched through a central point, the dispatcher
  • on the other side of that dispatcher actions are matched to reducers
  • the reducers take the information of the actions and return state

This allows you to manage interactions in your UI in a stateless and synchronous manner.

As a developer, the only things that really interest you are the actions and the reducers; all the rest is just implementation detail. Actions and reducers shape the app’s state.

In the following I describe a really simple way to implement this type of state management with Custom Events.


First let’s define what an action is. An action is a JavaScript object that has one required property with the key "type" and two optional ones with the keys "payload" and "error". Here is an action defined as a TypeScript interface:

interface Action {
  type: string;
  payload?: any;
  error?: boolean;
}

The JavaScript community mostly agrees on this definition pioneered by Redux, maybe minus the error property. I stole that from redux-observable.


So, now that we have actions how do we dispatch them through the dispatcher and what is the dispatcher?

We want to use Custom Events. Those events get dispatched on a DOM element with the dispatchEvent method. That means our dispatcher is a DOM element. It is really not important which one, but let’s just use the body element since it is present in any web app.

const dispatcher = document.querySelector('body');

Great. Now that we have the dispatcher, how do we dispatch an action? That’s where the Custom Events come in. We’re using Custom Events because they allow us to attach custom event data (which we will, sneakily, call actions).

// The first argument of the Custom Event is
// the event name. The event name is the same
// as the action type.
// The second argument is the options object.
dispatcher.dispatchEvent(
  new CustomEvent('SOME_ACTION', {
    // Here goes our custom data, the action object.
    detail: {
      // Action type and event name are the same.
      type: 'SOME_ACTION',
      // Here goes the optional data.
      payload: someData,
      error: false,
    },
  })
);

So now that all these actions are being piped through one point in the DOM via Custom Events, we can match them to reducers.

In your component that expects some state, take an array of action names the component is interested in and set up event listeners. In the event listener’s callback, match a reducer with the same name per action to update the component state:

// Some example action names.
const ACTIONS = ['SOME_ACTION', 'SOME_OTHER_ACTION'];

// As a convention components need a setter and
// a getter for the state property.
// That allows you to call a render function or similar
// whenever state is set to a new value.
set state(s) {
  this._state = s;
  // Use lit-html or some other library that can
  // efficiently update DOM in the render function.
  this.render();
}

get state() {
  return this._state;
}

// This is a method on some component.
setReducers() {
  ACTIONS.forEach(ACTION => {
    if (reducers[ACTION]) {
      // Again we are using the reference to the body
      // element as the dispatcher.
      dispatcher.addEventListener(ACTION, e => {
        // Reducers are kept in an object and matched
        // via action name.
        this.state = reducers[ACTION](e.detail, this.state);
      });
    } else {
      throw new Error(
        `Please add a reducer for the "${ACTION}" action.`
      );
    }
  });
}

The ZeroFux Library

The code above is a little boilerplate-y so let’s make a simple library out of it. No Flux plus no Redux equals ZeroFux:

export class ZeroFux {
  constructor(element) {
    if (element) {
      this.dispatcher = element;
    } else {
      this.dispatcher = document.querySelector('body');
    }
  }

  // The dispatch method takes an action argument
  // of the previously defined action type.
  dispatch(action) {
    this.dispatcher.dispatchEvent(
      new CustomEvent(action.type, {
        detail: action,
        // In case you set a custom dispatcher element
        // and want the events to bubble.
        bubbles: true,
        // In case your custom dispatcher is in the
        // Shadow DOM and you want events to bubble across
        // the border between Shadow DOM and regular DOM.
        composed: true,
      })
    );
  }

  // This method takes an array of action types
  // that can influence a component's state,
  // an object with reducers with the same names
  // as the action types and a reference
  // to the component on which we want to set
  // the state property.
  setReducers(actionTypes, reducers, component) {
    actionTypes.forEach(actionType => {
      if (reducers[actionType]) {
        this.dispatcher.addEventListener(actionType, e => {
          const action = e.detail;
          component.state = reducers[actionType](component.state, action);
        });
      } else {
        throw new Error(
          `Please add a reducer for the "${actionType}" action.`
        );
      }
    });
  }
}

export const zeroFux = new ZeroFux();

🎉 tadaa!

It’s up on GitHub and npm right now if you want to try it.

You can see ZeroFux in action in this CodePen.

Side Effects

“Ah-haaa! How do we manage side effects with ZeroFux?”, you may ask. Well, there is actually a simple zero-fux way to deal with this.

Since these custom events are all streaming through one point in the DOM, the point that we can access via zeroFux.dispatcher, we can just listen to these events separately and fire effects on certain actions.

These side effects have to fire an action themselves when they are done with whatever they were doing. That’s how we introduce data coming from these side effects synchronously back into the data flow.

This is how your SideEffects class could look:

import { zeroFux } from 'zero-fux';

export class SideEffects {
  run() {
    // Bind addEventListener so it keeps the dispatcher as `this`.
    const on = zeroFux.dispatcher.addEventListener.bind(zeroFux.dispatcher);
    on('SOME_ACTION', () => {
      // Do the async work here, e.g. a fetch
      // (hypothetical endpoint for illustration).
      fetch('/some-data')
        .then(res => res.json())
        .then(data => zeroFux.dispatch({
          type: 'SOME_RESPONSE_ACTION',
          payload: data,
        }));
    });
  }
}
See it in action in the CodePen.


So there it is, a bare-bones, straightforward unidirectional data flow implementation using Custom Events.

It uses the same principle I have also used in oddstream, a unidirectional data flow library implemented with RxJS: matching a “stream of actions” to reducers. This just has zero dependencies and is practically no code.

In Node this could be implemented using EventEmitter.

I think this solution for a unidirectional data flow could be used in apps of any size because ultimately all you have to manage and think about is actions and reducers, same as in any other unidirectional data flow solutions.

Use Web Components To Build JavaScript Apps

December 14, 2017

Web Components are generally described as “custom, reusable, encapsulated HTML tags” that encapsulate some DOM, some styling and behavior implemented with JavaScript.

I don’t understand why this use case is pushed so extensively when Web Components offer a perfectly good component abstraction for building complete JavaScript web apps.

I’m not so much interested in Web Components for individual new custom tags à la <google-maps></google-maps> or something. I’m interested in using Web Components for building full fledged JavaScript apps with a component-tree architecture, unidirectional data flow and efficient DOM rendering. And I think they are perfect for it!

And they are perfect for it now.

Web Components are typically described as the combination of the following four web standards:

  • Custom Elements
  • Shadow DOM
  • HTML Template
  • HTML Imports

First of all let’s forget about HTML Imports. Vendors don’t agree on them, they block rendering and honestly they feel super clunky to me.

We have ES Modules. Web Components don’t work without JavaScript anyway so let’s import them via ES Modules or bundle them up and serve them from a server or CDN.

Mikeal Rogers actually already built a solution for himself that allows him to write Web Components in JavaScript, publish them to npm and automatically serve them from a CDN via unpkg.

That’s totally the way to go. But he also talks about isolated components as far as I can see.

Like I said I think they are the perfect building blocks for building JavaScript web apps how we build them today with React, Vue, Angular and so on.

My opinionated list of what Web Components consist of looks like this:

  • Custom Elements (with Shadow DOM and HTML Template behind the scenes)
  • Tagged Template Literals

Custom Elements are the heart. They are the building blocks for web applications. Shadow DOM should always be used for encapsulation and HTML Template should be used to efficiently build the DOM for the component.

The developer should really just interact with a Custom Element class without having to think about Shadow DOM and the HTML Template.

What Custom Elements are missing is an efficient way to automatically update the DOM on component state changes.

Basically something like React‘s vDOM.

But adding a vDOM library doesn’t feel right because one of the big wins of Web Components is that you don’t have to manage a separate DOM next to the one in the browser. So what to do?

While watching the talks of the 2017 Polymer Summit I stumbled on this talk about lit-html and I was impressed by it right away.

It’s a genius little 2k-sized library that goes with Web Components beautifully. It allows you to build the DOM for your Custom Element with a tagged template literal.

Tagged Template Literals are a JavaScript standard. They are Template Literals that are marked by a function name. The string contained in the Template Literal is processed by that function before it’s returned. In the case of lit-html it looks like this:

const markup = html`<h1>Hello ${name}!</h1>`;

lit-html comes with the html-function to use with Template Literals and with a render function that efficiently renders updated state to the DOM.

Under the hood lit-html uses HTML Template to efficiently clone the markup. The html function returns something called a TemplateResult which gets passed to the render function along with the DOM element to which it should be rendered. lit-html remembers the dynamic parts of the template and makes sure these get updated when needed. The static parts of the template are always just rendered once.
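A plain tag function (made up for illustration) shows the mechanics this builds on: the tag receives the static string parts and the interpolated values separately, which is exactly what lets lit-html tell static markup from dynamic parts.

```javascript
// The tag function gets the static parts as an array
// and the interpolated values as rest arguments.
function tag(strings, ...values) {
  return { statics: strings.raw.slice(), dynamics: values };
}

const name = 'World';
const result = tag`<h1>Hello ${name}!</h1>`;

console.log(result.statics);  // [ '<h1>Hello ', '!</h1>' ]
console.log(result.dynamics); // [ 'World' ]
```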

According to this talk from the Google Web Summit, lit-html fares pretty well performance-wise.

I think it is the perfect library for managing DOM updates of Custom Elements because it just uses web standards to do its job and refrains from maintaining a second DOM tree in order to be fast.

With a little luck some sort of efficient DOM updating will also land in the browser as a standard but for now this is great!

So in conclusion: the perfect web app building block for me is just a Custom Element that uses a base class that hides away creating the Shadow DOM and the usage of lit-html.

I made such a subclass for my little Café search app, it’s called LitElement and it is str8 🔥.

import { render } from 'lit-html';

// A super chill custom element subclass with
// some nifty default behavior.
export class LitElement extends HTMLElement {
  constructor() {
    super();
    // Initialize the state variable.
    this.state = {};
    // Create the Shadow DOM for this element.
    this.attachShadow({ mode: 'open' });
    // Just a convenient alias for addEventListener.
    this.on = this.addEventListener;
  }

  // The state getter.
  get state() {
    return this._state;
  }

  // The state setter calls the
  // invalidate function, which invalidates the
  // state and calls render.
  set state(s) {
    this._state = s;
    this.invalidate();
  }

  // This function makes sure that
  // the lit-html render function is called
  // when invalidate() is called.
  // But it makes sure it is always called
  // on next tick so that render calls are
  // batched.
  invalidate() {
    if (!this.needsRender) {
      this.needsRender = true;
      Promise.resolve().then(() => {
        this.needsRender = false;
        // this.render is the render function of
        // the Custom Element that subclasses this
        // class and it returns a TemplateResult created
        // with the lit-html html function and a Tagged
        // Template Literal.
        // The location to which the DOM is rendered is
        // always the shadow root of the component.
        render(this.render(this.state), this.shadowRoot);
      });
    }
  }
}

❤️ the platform.

Web Components And The CMD-R Development Workflow

November 16, 2017

A couple of months ago I stumbled across Mikeal Rogers’ article I’ve seen the future, it’s full of HTML. in which he lays out his reasons for diving into Web Components for the web apps he is working on right now and hints at the workflow he uses.

His enthusiasm for Web Components is infectious and because I have a lot of respect for Mikeal and his work, his article was a strong signal for me personally that Web Components might be something to take a closer look at.

I have ignored them so far because the little I knew about them seemed a little gross to me: HTML imports? HTML, CSS and JavaScript in one file? Ewww. Who wants to write code like that?

Polymer didn’t help a lot either: all I heard was Polyfills and Bower and WTF is Shady DOM?

And oh yeah everybody kept screaming “OH PRAISETH OUR NEW LORD AND SAVIOR HIS HOLINESS THE REACT”.

So I kept ignoring it.

But then Mikeal came along and showed how to write Web Components purely with JavaScript. He said the only tooling he uses is a little Browserify in order to package the component so he can distribute it via npm and automatically serve it via a CDN.

I was like: “No tooling? Hmmm… that sounds pretty sweet.”

A few weeks ago I took some time to write a tiny application just with Web Components. I used no tooling, no compilation, no polyfills, no nothing. Just Chrome, ES2015+, ES Modules, <template>, Custom Elements and an “actions up, data down” data flow with Custom Events.

I used the CMD-R (CTRL-R on Windows) web development workflow, as Alex Russell calls it.

Now that’s a developer experience 😉!

It was surprising to me how freeing it was to be able to use modern syntax, modules and a component architecture in the browser without any tooling. Not to mention the excellent debuggability since you are feeding the browser what you write directly.

The browser has become the only tool and even the only “framework” we need to write JavaScript apps.

Just write your app using Chrome and then make a bundle with dynamically loaded polyfills to make it work everywhere.

As it stands, the latest versions of Google Chrome, Safari and Opera support these technologies natively; Firefox and Edge are lagging behind but working on it.

Why I wrote Just. by Angus Croll →

Just is a collection of dependency-free modules for common operations on Arrays, Collections, Strings, Objects and Functions started by @angustweets. “Do we really need something like that when we have Lodash?” you might ask.

Angus does a great job explaining why he started Just:

Just is designed for those who value easy to follow, debuggable utilities over version lotto and yak-shaving down the module tree. There’s nothing fancy or particularly clever here. You won’t find elaborate routines that optimize for trillion element arrays; just short, cohesive, readable code––the sort of helper functions we all inline in our projects every day because edge case optimizations that we’ll never need aren’t worth the overhead and the uncertainty of a sprawling dependency chain.
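
To make that concrete, here is the sort of helper Angus is talking about — my own sketch in the spirit of Just, not code from the library:

```javascript
// A Just-style utility: short, dependency-free, readable.
// pick(obj, keys) returns a new object containing only the listed keys.
function pick(obj, keys) {
  const out = {};
  for (const key of keys) {
    if (key in obj) out[key] = obj[key];
  }
  return out;
}

// Small enough to read in one glance — no edge-case heroics,
// no dependency chain to audit.
const user = { id: 1, name: 'Ada', password: 'hunter2' };
const safe = pick(user, ['id', 'name']);
```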

October 26, 2017

I Finally Understood Functions As A Service

October 24, 2017

Ever since I heard the term for the first time, I struggled to understand what “Functions As A Service” offerings like AWS Lambda really are. I heard people explain it on podcasts and read what it said on the AWS Lambda landing page, but it just didn’t click.

Last week Henning and I recorded the latest episode of our podcast REACTIVE. On that episode Henning talks about how he uses AWS Lambda and an AWS database to build an API for their app at work. This made me finally understand what this is all about.

They built the API by writing some code that parses request parameters, retrieves some data from the database and then sends that data back as JSON in the JSON API format. That code is the function that is being provided “as a service”.

That is it.

The HTTP layer, security and scalability are all provided by AWS services. Functions As A Service also means that you only pay for computing time when the function is used. When there are no requests to the API, you don’t pay.

This is an incredibly fast and efficient way to build an API that is production ready in no time.

On the podcast we also talked about how more and more of these “solved problems” like security and scalability will be packaged up into services, and how their usage will certainly become widespread in the not-so-distant future.

@codepo8 said it best on Twitter yesterday:

How Matt Mullenweg Single-Handedly Made Facebook Relicense React to MIT

September 26, 2017

OK, I have no proof that this is actually true, but it is a fun thought experiment / conspiracy theory. Hear me out.

Facebook just announced that they are relicensing a few of their open source projects to the MIT license.

I am pretty sure this change will be extremely well received by everybody using React.

It will also ensure that the majority of React users will switch to React 16 (the first version under the MIT license) because their BSD + patents license is widely mistrusted and / or disliked.

I for one like to think that this change was triggered by the shots that Matt Mullenweg fired in his post “On WordPress and React” a few days ago. I re-read his post a couple of times because I found some of the statements he made very interesting. At first glance the post reads like “OK WordPress just ditched React no big dizzle” but after re-reading it I realized that Matt just deprived React of a quantum leap in growth. Wow. Let’s go through the interesting points:

First 💥:

Big companies like to bury unpleasant news on Fridays

This statement is in the context of Facebook’s post about sticking with the BSD + patents license after the Apache Foundation put it on their blacklist.

Right at the beginning of his article he accuses Facebook of shady behavior.

Second 💥:

I’m not judging Facebook or saying they’re wrong, it’s not my place.

This is hilarious. Translation: “I am judging Facebook and I am saying they are wrong.”

Third 💥:

We had a many-thousand word announcement talking about how great React is and how we’re officially adopting it for WordPress, and encouraging plugins to do the same. I’ve been sitting on that post, hoping that the patent issue would be resolved in a way we were comfortable passing down to our users.

Now for the squeeze.

Translation: “We were going to officially adopt React as the WordPress JavaScript thing and make a big fuss about it which would have been massive promo for you for free and would most likely have pushed React adoption into jQuery-like spheres. But now we’re not doing that, lol.”

Fourth 💥:

Squeeze even harder:

Core WordPress updates go out to over a quarter of all websites, having them all inherit the patents clause isn’t something I’m comfortable with.

Translation: “For realz Facebook?!?!? You don’t want your Framework to be used by over 25% of all websites on the internet with a snap of a finger!?!? Maybe y’allz want to think about that a little bit?”

Fifth 💥:

Now for the kill:

But we have a lot of problems to tackle, and convincing the world that Facebook’s patent clause is fine isn’t ours to take on. It’s their fight.

Translation: “We don’t need all your drama. You’re on your own.”

We’ll look for something with most of the benefits of React, but without the baggage of a patents clause that’s confusing and threatening to many people.

Translation: “React is not indispensable and can easily be switched out with something similar, thank you.”

So good. He whacked them over the head in the nicest way possible.

It makes total sense to me that Matt’s post would make them rethink their licensing strategy. The Apache Foundation’s rejection did not really pose a big impediment to React’s growth, but losing WordPress would deprive them of more massive adoption and might even mean a decline in React adoption going forward, because Matt and WordPress as a project are just incredibly influential in the web development community.

The growth React would gain from potentially being the JavaScript thing for WordPress and WordPress plugins must outweigh the litigation costs they would have saved by sticking with BSD + patents.

Matt has already reacted to Facebook’s announcement:

I am surprised and excited to see the news that Facebook is going to drop the patent clause that I wrote about last week. They’ve announced that with React 16 the license will just be regular MIT with no patent addition. I applaud Facebook for making this move, and I hope that patent clause use is re-examined across all their open source projects.

Matt is pleased about the change.

But Facebook has lost the chance to be the designated WordPress JavaScript framework:

Particularly with Gutenberg there may be an approach that allows developers to write Gutenberg blocks (Gutenblocks) in the library of their choice including Preact, Polymer, or Vue, and now React could be an officially-supported option as well.

Looks like the WordPress team found out about the new-fangled compiler that Jason Miller of Preact fame is working on, which compiles Web Components, VueJS components and Preact components into highly optimized Preact components. That’s awesome.

Interestingly, Matt does not mention whether they will still switch away from React for Calypso. Somebody opened an issue asking about it, but nobody has responded yet.

Meanwhile React 16 has been released and the license is indeed MIT.

All New

September 15, 2017

My blog used to be a Medium publication with a linked domain but I ditched Medium. I was disappointed with the design directions they are taking. I was a big fan of Medium’s design before the recent change to the new serif wordmark. It feels disjointed to me now and it is apparent that they are pushing their paid services and are working hard to make an actual business out of Medium. That’s all good and fine but I don’t like the changes and that made me realize I need to have more control over my blog again.

Also, I wanted to consolidate. So here we are. New blog, same old me.

For now I am leaving my old blog posts over there on Medium. I will port them over eventually. Maybe. I will be cross posting my blog posts from here over at Medium as well. The readership you get over there via their network is really nice.


The initial inspiration for the design was a quick study that @magalhini posted to Twitter a while ago. I played around with it but wanted a different font pairing. After some googling I found Typewolf Google Fonts and Great Simple, two websites that suggest Google font pairings. I ended up choosing Roboto Mono as a main font and Rubik for my wordmark.

I found the pairing here and after I saw it I kept coming back to it. So I went with it. I really like using a monospace font for the body text.

I am happy with it. Keeping it simple.

It’s A Static Site

As a web developer I have to use a static site generator, of course. Which one, you ask?


Hugo is written in Go and comes as a binary. It’s wicked fast and can easily deal with huge amounts of posts.

It has been around for a while now and is really flexible. It works very similarly to other static site generators like Jekyll or Metalsmith.


For hosting I went with Netlify; I’ve been really impressed with their user experience for hosting static pages. If your site is on Github it is incredibly easy to deploy it to Netlify:

  • link your Github repository
  • tell them the command that builds your site
  • tell them the name of the folder that the built site ends up in
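
If you prefer configuration over clicking, those build settings can also live in a netlify.toml file in the repo — a sketch assuming Hugo’s defaults:

```toml
# netlify.toml — checked into the root of the repository
[build]
  command = "hugo"   # the command that builds the site
  publish = "public" # Hugo's default output folder
```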


The site gets redeployed every time you push to master (this feature can be turned off), and if I push a branch to the site’s Github repo, Netlify will attempt to deploy a preview automatically. It’s truly a joy to use.

If you are interested in the code for this site you can find it on Github.