Tags in JSON Feed

It seems I just can’t stop tinkering with my site. I was optimizing my deploy scripts (which now run via Docker, because why the hell not) when I re-read the JSON Feed spec out of the blue and saw that it has optional support for tags.

From the JSON Feed spec:

tags (optional, array of strings) can have any plain text values you want. Tags tend to be just one word, but they may be anything. Note: they are not the equivalent of Twitter hashtags. Some blogging systems and other feed formats call these categories.

An array in JSON is pretty much just this:

[ "item-1", "item-2", "item-3" ]

In context, according to the spec, each item in the items array would need the following (assuming our post has the tags “css” and “html”):

"tags": [
  "css",
  "html"
]
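As a quick sanity check, a fragment like that parses as ordinary JSON once wrapped in an object. A small sketch in JavaScript (the surrounding object is just for illustration):

```javascript
// Parse a minimal item fragment containing the tags array
const item = JSON.parse('{ "tags": ["css", "html"] }');

// tags is a plain array of strings, which is exactly what the spec asks for
console.log(Array.isArray(item.tags)); // true
console.log(item.tags.join(", "));     // css, html
```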

Getting things done in Jekyll

So this whole exercise revolves around you putting an array of tags in your front matter for each post. I’m pretty much assuming that this is something that you’ve already done, but just in case, here’s a sample post:

---
title: My blog post
tags:
  - css
  - html
---

Some thoughtful content…

Considering my previous post about getting JSON Feed in Jekyll, here’s the additional Liquid we need to get these tags in the JSON Feed code:

{% if post.tags %}
"tags": [
{% for tag in post.tags %}
  {{ tag | jsonify }}{% if forloop.last == false %},{% endif %}
{% endfor %}
],
{% endif %}

Since JSON is picky about trailing commas, we use forloop.last to keep track of whether we’re at the last item in the loop. Also note the trailing comma after the closing bracket on the sixth row. Depending on where you put this snippet in your JSON Feed template, you may or may not need it.
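To see why this matters, here is what happens when a trailing comma sneaks into the output. Plain JavaScript, since JSON.parse follows the same strict grammar as any feed reader:

```javascript
const withoutTrailing = '{ "tags": ["css", "html"] }';
const withTrailing    = '{ "tags": ["css", "html",] }';

console.log(JSON.parse(withoutTrailing).tags.length); // 2

try {
  JSON.parse(withTrailing);
} catch (e) {
  // Strict JSON parsers reject the trailing comma outright
  console.log(e instanceof SyntaxError); // true
}
```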

Putting it all together

Here’s the full code for my own feed.json template, complete with the new section for tags.

---
layout: null
sitemap:
  priority: 0.7
  changefreq: weekly
---
{
  "version" : "https://jsonfeed.org/version/1",
  "title" : "{{ site.title }}",
  "home_page_url" : "{{ site.url }}",
  "feed_url" : "{{ "/feed.json" | absolute_url }}",
  "author" : {
    "url" : "{{ site.url }}",
    "name" : "{{ site.author }}"
  },
  "icon" : "{{ "/apple-touch-icon.png" | absolute_url }}",
  "favicon" : "{{ "/favicon-32x32.png" | absolute_url }}",
  "items" : [
  {% for post in site.posts %}
    {
      "title" : {{ post.title | jsonify }},
      "date_published" : "{{ post.date | date_to_xmlschema }}",
      {% if post.updated %}
      "date_modified": "{{ post.updated | date_to_xmlschema }}",
      {% else %}
      "date_modified": "{{ post.date | date_to_xmlschema }}",
      {% endif %}
      "id" : "{{ post.url | absolute_url }}",
      "url" : "{{ post.url | absolute_url }}",
      "author" : {
        "name" : "{{ site.author }}"
      },
      "summary": {{ post.description | jsonify }},
      {% if post.tags %}
      "tags": [
      {% for tag in post.tags %}
        {{ tag | jsonify }}{% if forloop.last == false %},{% endif %}
      {% endfor %}
      ],
      {% endif %}
      "content_text": {{ post.content | strip_html | strip_newlines | jsonify }},
      "content_html": {{ post.content | strip_newlines | jsonify }}
    }{% if forloop.last == false %},{% endif %}
  {% endfor %}
  ]
}

There we go! We now have tags from each post in our JSON Feed.
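If you want to verify the end result, a few lines of JavaScript can list every tag found in the generated feed. A sketch, with the feed inlined as a string here, though you could just as well fetch /feed.json:

```javascript
// A tiny sample standing in for the generated feed.json
const feedText = `{
  "version": "https://jsonfeed.org/version/1",
  "items": [
    { "id": "1", "tags": ["css", "html"] },
    { "id": "2" }
  ]
}`;

const feed = JSON.parse(feedText);

// Collect unique tags across all items; items without tags are skipped
const tags = new Set();
for (const item of feed.items) {
  for (const tag of item.tags || []) tags.add(tag);
}

console.log([...tags]); // [ 'css', 'html' ]
```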

Improved feeds

I get the feeling that things are happening when it comes to syndicated feeds online. A few years back, JSON Feed entered the scene, courtesy of Brent Simmons and Manton Reece. This summer, the very same Brent Simmons released version 5 of NetNewsWire, a free and open source, pure-Mac application that’s a joy to use. While light on features in its initial release, it’s snappy and stable. Even better, Brent decided to support at least one feed service on day one. As luck would have it, he chose the excellent Feedbin, a service I’ve happily been paying for since it launched in the wake of Google Reader’s demise.

Anyway. After years of questionable alternatives to syndicated feeds (like Facebook, Twitter and whatever else people say they use instead), I’ve kept using RSS and Atom like a stubborn mule. Most sites worth their salt support syndicated feeds in some form, which lets readers consume content however they like.

There’s of course the occasional fly in the ointment. And what’s worse, I myself am guilty of it. I’m talking about only providing a short summary in the feed in order to drive traffic to the site itself. Well, no more. As of today, I’ve run a deploy that provides full content both via JSON Feed and Atom for this site. The reasons were quite simple:

  • I don’t actually need to drive traffic to my site since I’m not doing any kind of advertising
  • You as a reader have the option of consuming the content any way you like

The counterpoint, as mentioned by someone online, is that this makes it easier for less-than-honest people to “steal” content and publish it as their own somewhere else. My position is that these people would very likely do so anyway, and that’s not a strong enough argument against making things easier for everyone else.

And so, here we are.

Improving the fixed/sticky bookmarklet

Last week I wrote about a bookmarklet I found online, and noted that there was room for improvement since it didn’t handle the more modern position: sticky;.

While using my new and (sort of) improved bookmarklet, I noticed that on some sites, annoying overlays not only covered the content but also disabled scrolling of the entire page. So even after removing all elements that were either fixed or sticky, you still couldn’t scroll the page. Normally I would reach for Safari’s built-in reader mode to get to the content, but that isn’t always possible or applicable, depending on the site’s content.

Most sites disable scrolling by simply setting overflow: hidden; on the <body>. So all we have to do is look for this property and then unset it.

The code – improved

I took the liberty of adopting more modern ES6 syntax this time. This of course limits the browser support, but if you’re using something older, like Internet Explorer – well, sucks to be you, my friend. 😉

(function () {
  const elements = document.querySelectorAll('body *');
  const body = document.querySelector('body');

  if (getComputedStyle(body).overflow === 'hidden') {
    body.style.overflow = "unset";
  }

  elements.forEach(function (element) {
    if (["-webkit-sticky", "sticky", "fixed"].includes(getComputedStyle(element).position)) {
      element.parentNode.removeChild(element);
    }
  });
})();

It’s important to use the unset value for overflow here: any CSS we set via JavaScript in this manner becomes an inline style attribute on the target element, which is what allows us to override styling coming from an external stylesheet.

Improved, but not perfect

So this improvement might handle most cases, but not all of them. There are almost as many ways to mess with the user experience as there are websites. I’d be happy for any feedback and suggestions to improve this bookmarklet. The easiest way is of course via the public Gist I’ve set up for the bookmarklet code. There’s also a CodePen if you want to fork and play around easily.

Finally, here’s the updated bookmarklet, for your convenience.

Kill sticky/fixed

Happy browsing!

Killing both fixed and sticky headers

Last year someone linked to an article by Alisdair McDiarmid containing a bookmarklet that killed any element on a page that had the property position: fixed;.

Knowing how the modern web sometimes might look, this type of bookmarklet is easy to love. However, with the advent of more modern solutions in CSS such as position: sticky;, the bookmarklet is in need of some updating. What better way than to do it yourself, then?

Here’s the code

There are just a few minor additions needed to the original code. In addition to checking for position: fixed;, we also need to check for position: sticky;. There’s one caveat, though: Safari still uses a vendor prefix for sticky positioning, so we need to look for -webkit-sticky as well.

(function () {
  var i, elements = document.querySelectorAll('body *');

  for (i = 0; i < elements.length; i++) {
    if (["-webkit-sticky", "sticky", "fixed"].includes(getComputedStyle(elements[i]).position)) {
      elements[i].parentNode.removeChild(elements[i]);
    }
  }
})();

All done! But this won’t do us any good unless it comes in the form of a handy bookmarklet, so here’s that as well. Drag this link to the bookmark bar of your browser of choice (or just save it).

Kill fixed/sticky

The caveat with these kinds of bookmarklets is that they only work on the current page. If you leave the page or reload it, the effect disappears.

There’s a public gist up if you want to fork the code and play around with it yourself. There’s also a really neat online tool for generating bookmarklets of your own.
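For the curious, turning a snippet like the one above into a bookmarklet is mostly a matter of prefixing it with javascript: and URL-encoding it. A minimal sketch of the idea (the real generator tools do a bit more, like minification):

```javascript
// Wrap the code in an IIFE and URI-encode it so it survives as a bookmark URL
function toBookmarklet(code) {
  return "javascript:" + encodeURIComponent("(function(){" + code + "})();");
}

const link = toBookmarklet("document.body.style.overflow = 'unset';");
console.log(link.startsWith("javascript:")); // true
```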

That’s it! Enjoy!

Cache busting in Jekyll revisited

I never quite warmed up to Gulp. It was yet another tool I had to learn in order to get stuff done. Before Gulp there was Grunt, and somewhere along the way I had to cope with Webpack in React projects. The latter was surely fine for those projects, but for my own purposes it was overkill, or not even the right tool for the job.

I then came upon two blog posts that piqued my interest; Why I Left Gulp and Grunt for npm Scripts and How to Use npm as a Build Tool. I felt inspired and got to work.

npm scripting – the why

Cutting down on stuff is a favorite pastime of mine. If I can, I minimise and optimise as much as possible, both in code and in real life. So what these blog posts were about resonated quite well with me. For the same reason that I dislike CSS preprocessors like Sass and Less (they add more problems than they claim to solve, and I don’t believe they solve that much anyway), I disliked Gulp: it too was an abstraction layer that added more problems than it solved for me. And the plugins. Oh, all those damn Gulp plugins I had to use for everything. Ugh.

npm scripting – the what

Getting rid of Gulp means relying on npm scripts in package.json instead, which in turn mostly means relying on the CLI versions of different tools. Step one is to identify what needs to happen in my tool stack:

  1. Transpile custom properties in my CSS for better backwards compatibility with legacy browsers
  2. Mash together what little JavaScript I use from several files into one
  3. Generate an SVG sprite
  4. Run Jekyll
  5. Lint my CSS and JavaScript
  6. Deploy code on my live server

It doesn’t take long to find the packages that we need over at npmjs.com. This is what I like about this approach. My package.json only has 12 dependencies since I dropped Gulp. Twelve.

I could almost cry from happiness.

Anyway, here’s what we’ve got:

"devDependencies": {
  "concurrently": "^4.1.0",
  "eslint": "^5.12.1",
  "foreach-cli": "^1.8.1",
  "hashmark": "^5.0.0",
  "onchange": "^5.2.0",
  "postcss-cli": "^6.1.1",
  "postcss-custom-properties": "^8.0.9",
  "stylelint": "^9.10.1",
  "svg-sprite": "^1.3.7",
  "uglify-es": "^3.0.28",
  "uglifycss": "^0.0.29",
  "yarn": "^1.5.1"
}

Let’s quickly go over what each package does:

  • eslint: JavaScript linting (optional, of course)
  • foreach-cli: I use this to iterate over files to do things to them. More on this later.
  • hashmark: My cache busting tool. I’ll use this to generate unique file names.
  • onchange: Will help me trigger stuff if a change is detected.
  • concurrently: Runs shell commands in parallel and is a bit more versatile than just using & in the shell straight up. Added bonus is the improved compatibility with Windows.
  • postcss-cli: The command line implementation of PostCSS.
  • postcss-custom-properties: Plugin for transpiling a fallback value for CSS custom properties for legacy browsers.
  • stylelint: CSS linting (like eslint, it’s optional)
  • svg-sprite: Builds handy little SVG sprites for me.
  • uglify-es: JavaScript compression.
  • uglifycss: Same as above, but for CSS.
  • yarn: You might recognise this one.

Anyway, that’s what I use. You may use whatever tools you want to get the job done.

npm scripting – the how

To tie everything together, I need to create a couple of tasks in my package.json that will help me get my development environment going again, this time without Gulp. At this point, I assume that you already know how to write stuff in your own package.json. I’m also assuming that you’ve already read the two blog posts that I linked to in the beginning of this post.

"scripts": {
  "start": "concurrently 'yarn run build:watch' 'yarn run jekyll:serve'",
  "prebuild": "touch _includes/sprite.svg & mkdir -p dist",
  "build": "yarn run build:css && yarn run build:js && yarn run build:svg",
  "build:watch": "onchange ./src/** -i -- yarn run build",
  "prebuild:css": "rm -rf ./dist/css/*",
  "build:css": "postcss -c postcss.config.js ./src/css/*.css -d dist/css",
  "build:js": "rsync --checksum --recursive --delete src/js/ ./dist/js",
  "postbuild:js": "hashmark -r -l 8 dist/js/vendor/require.min.js 'dist/js/vendor/{name}-{hash}.js'",
  "build:svg": "svg-sprite -C svg-sprite.config.json --dest ./_includes src/svg/*.svg",
  "deploy:css": "postcss -c postcss.config.live.js ./src/css/*.css -d dist/css",
  "css:uglify": "foreach -g 'dist/css/*.css' -x 'uglifycss #{path} --output #{path}'",
  "css:hash": "hashmark -r -l 8 dist/css/*.css 'dist/css/{name}-{hash}.css'",
  "postdeploy:css": "yarn run css:uglify && yarn run css:hash",
  "deploy:js": "yarn run build:js",
  "lint": "yarn run lint:css && yarn run lint:js",
  "lint:css": "stylelint --color -f verbose src/css/**/*.css",
  "lint:js": "eslint src/js/**/*.js",
  "jekyll:serve": "sleep 1; jekyll serve --incremental --drafts",
  "test": "yarn run lint",
  "dist:clean": "mkdir -p ./dist && rm -rf ./dist/*",
  "predeploy": "yarn run dist:clean",
  "deploy": "yarn run deploy:css && yarn run deploy:js && yarn run build:svg"
}

The scripts section does grow somewhat when you’re not using Gulp anymore. Even if JSON has its shortcomings, like the lack of comments, you should be fine if you keep the script names as descriptive as possible.

A quick word on the pre and post hooks. As you can see in the above code snippet, there are a few entries prefixed with pre and post, like prebuild and postdeploy:css. This is a neat feature of npm: any script with a matching pre or post counterpart will automatically have that counterpart run before or after itself. The above linked post by Keith Cirkel does a much better job than me of explaining the intricacies of these hooks.
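The ordering can be sketched like this (a toy model of npm’s behaviour for illustration, not npm itself):

```javascript
// Toy model: given a scripts object, return the order npm runs things in
const scripts = {
  "prebuild": "touch _includes/sprite.svg & mkdir -p dist",
  "build": "yarn run build:css && yarn run build:js && yarn run build:svg",
  "css:hash": "hashmark -r -l 8 dist/css/*.css 'dist/css/{name}-{hash}.css'"
};

function runOrder(name) {
  const order = [];
  if (scripts["pre" + name]) order.push("pre" + name);  // pre hook first, if present
  order.push(name);                                     // then the script itself
  if (scripts["post" + name]) order.push("post" + name); // post hook last, if present
  return order;
}

console.log(runOrder("build"));    // [ 'prebuild', 'build' ]
console.log(runOrder("css:hash")); // [ 'css:hash' ]
```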

CSS in particular

For the sake of brevity, let’s focus on the CSS part.

"build:css": "postcss -c postcss.config.js ./src/css/*.css -d dist/css",

First, I’m using PostCSS to transpile any custom properties to provide fallback properties for legacy browsers. In effect, this means that the following:

:root {
  --text-color: #333;
}

body {
  color: var(--text-color);
}

Will be transpiled into:

:root {
  --text-color: #333;
}

body {
  color: #333;
  color: var(--text-color);
}

Due to the wonderful way that CSS is progressively enhanced, any browser that does not understand a property will simply ignore it and move on. This means that the occurrence of var(--text-color) will, for example, be ignored by Internet Explorer 11, and the previous value (#333) will still apply since it was declared first.

The following two lines are only run when I want to deploy to production, and this is also where the fun happens in terms of cache busting.

"css:uglify": "foreach -g 'dist/css/*.css' -x 'uglifycss #{path} --output #{path}'",
"css:hash": "hashmark -r -l 8 dist/css/*.css 'dist/css/{name}-{hash}.css'",

The first tool, uglifycss, takes care of compressing the CSS by removing line breaks and whitespace. Since we’re not concatenating all our stylesheets into one big file, as has been traditional, but rather leveraging the power of HTTP/2 multiplexing, we just run it on each file in place.
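As an illustration of what that compression amounts to, here is a naive version of the whitespace stripping. uglifycss itself is smarter about strings, comments and edge cases, so treat this as a sketch:

```javascript
// Naive CSS minifier: collapse whitespace around punctuation and drop newlines
function naiveUglify(css) {
  return css
    .replace(/\s*([{}:;,])\s*/g, "$1") // no whitespace around structural characters
    .replace(/;}/g, "}")               // the last semicolon before a closing brace is redundant
    .trim();
}

const input = `body {
  color: #333;
  margin: 0;
}`;

console.log(naiveUglify(input)); // body{color:#333;margin:0}
```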

The second line is all about cache busting. hashmark gives each file a unique name based on a hash of its contents and, more importantly, only changes this hash if the file actually has changed. As for the flags: -r means replace the file you’re working on with the hash-renamed one, and -l 8 tells hashmark to limit the length of the hash in the filename to eight characters, which is more than enough for our needs. The pattern {name}-{hash}.css should be pretty self-explanatory; file.css would become file-d121a5d4.css.

How Jekyll finds the files

This approach leaves us with a problem to solve. Since we’re keeping all of our stylesheets as separate files, we could of course link them all manually in our Jekyll templates, but that’s not really maintainable, and as soon as we start hashing our files, that strategy goes straight out the window. So how do we make Jekyll aware of these dynamically changing files without having to poke around manually each time something changes?

Luckily, Jekyll keeps track of static files.

A static file is a file that does not contain any front matter. These include images, PDFs, and other un-rendered content.

They’re accessible in Liquid via site.static_files

Using this info, we can filter the static file metadata down to the files in /dist/css (i.e. where npm puts our source files once they’ve been transpiled) and then iterate over each file and output its path. It’ll look something like this:

{% for css in site.static_files %}
  {% if css.path contains "dist/css" %}
    <link rel="stylesheet" href="{{ site.baseurl }}{{ css.path }}">
  {% endif %}
{% endfor %}

There’s a caveat with this method, though: it lists any kind of file present in that folder, even files that aren’t legitimately CSS. In this example that would never happen, but if you want the solution to be more robust (say, if someone haphazardly starts putting PNG or JavaScript files in your folder), you can also filter on css.extname. Now, if everything is working correctly and we have three files in /dist/css named file1.css, file2.css and file3.css, Jekyll renders the markup accordingly:

<link rel="stylesheet" href="/dist/css/file1.css">
<link rel="stylesheet" href="/dist/css/file2.css">
<link rel="stylesheet" href="/dist/css/file3.css">

Awesome! No matter how many files we add to our project, or whatever names they will dynamically get from hashmark, Jekyll will take care of linking them properly for us.
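The same filtering, expressed in plain JavaScript terms (the objects below mimic the path and extname metadata Jekyll exposes on static files):

```javascript
// Stand-ins for site.static_files entries
const staticFiles = [
  { path: "/dist/css/file1-d121a5d4.css", extname: ".css" },
  { path: "/dist/css/notes.txt", extname: ".txt" },
  { path: "/img/logo.png", extname: ".png" }
];

// Keep only real CSS files under dist/css, then emit link tags
const links = staticFiles
  .filter(f => f.path.includes("dist/css") && f.extname === ".css")
  .map(f => `<link rel="stylesheet" href="${f.path}">`);

console.log(links.length); // 1
```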

Wrapping up

Simplifying things and throwing out superfluous tools felt really great! The fewer of them I have, the quicker my development environment gets, and I feel the overall robustness went up a few ticks. There are still some things I will likely never be rid of, like the cache busting feature, since the benefit is too big (and I just can’t wrap my head around getting caching headers right).

I hope this might be of some use to someone other than me. The basic principle isn’t really tied to Jekyll (apart from the static files functionality), so you should be able to implement this with whatever tools you choose. If nothing else, it might serve as inspiration to cut down a little on your own tool stack.