{ "version": "https://jsonfeed.org/version/1", "title": "Fredrik Frodlund – frippz.se", "home_page_url": "https://frippz.se", "feed_url": "https://frippz.se/feed.json", "author": { "url": "https://frippz.se", "name": "Fredrik Frodlund", "avatar": "https://frippz.se/images/just-me.jpg" }, "icon": "https://frippz.se/apple-touch-icon.png", "favicon": "https://frippz.se/favicon-32x32.png", "items": [ { "title": "A new accessibility tool has joined the party", "date_published": "2023-12-08T07:00:00+00:00", "id": "https://frippz.se/2023/12/08/a-new-accessibility-tool/", "url": "https://frippz.se/2023/12/08/a-new-accessibility-tool/", "author": { "name": "Fredrik Frodlund" }, "summary": "First impressions of Squidler, a new accessibility testing tool", "tags": [ "accessibility", "a11y", "testing", "tools" ], "content_html": "
When it comes to testing accessibility, there is no substitute for the human element. However, a great tool can help identify all the tiny details that our human eyes might miss. In addition to this, performing continuous automated testing is something machines do quite well. Just as continuous integration is the norm in today’s front-end world, so too should accessibility testing be.
\n\nEnter Squidler, a new accessibility testing tool recently released into the wild. Full disclosure: I happen to know a few of the people behind the product, and they’re very talented, but I also had no idea that they were working on this (which just goes to show how deep I must be in my own work).
\n\nI haven’t had too much time to test the tool out yet, but a few things stood out to me, which I’ll outline below.
\n\nJust paste a URL into the input field and let Squidler go to town on your site. It also crawls through your site, so you don’t just get a report on the URL you submitted, but on several other pages as well. I had a quick chat with a friend of mine working on the product and he mentioned that the crawling will improve with time and become more aware of pages it has crawled in the past, hopefully making it more efficient.
\n\nOnce you’ve entered a web site that you want to test, Squidler will keep crawling it periodically. This is a nice feature and something that will come in handy once you start fixing the issues the tool finds. Keep fixing and deploying, and let Squidler handle future checks automatically. Nice.
\n\nIssues found are clearly explained, with suggested fixes, links to documentation about the particular issue, as well as screenshots from your site highlighting where the issue resides. You can also step through a timeline that reveals how the tool went over your pages. Love it!
\n\nI understand that this tool is in the early stages of its life, so there are a few things I noted that could improve it. In no particular order:
\n\nOne URL only? – Running tests like these surely doesn’t come cheap in terms of server costs, so I get that there might be limitations, but I would love to see support for testing multiple sites in a future update.
\nVery minimal UI – I should emphasize that this is generally a good thing, but when I dive into one of the reported problems, I would love to have some kind of breadcrumb or other way of getting back to the overview. The back button works just fine, but such patterns lower my confidence a bit. I guess years of poor web site designs and terrible single page apps have ruined me a bit.
\nI want to be clear that these are very early impressions after only testing the tool for a few hours. As I understand it, they’re still working on improving plenty of things, so take my impressions with a heap of salt.
\n\nAs I already mentioned, I’ve only tested the tool for a few hours and I haven’t looked over everything the tool has to offer, like the daily and weekly test reports mentioned in their list of features.
\n\nThe pricing is € 49 per month, which might be steep for a private person, but I don’t think they’re the target audience here. For companies that not only value offering a good experience for their users, but also care about the forthcoming European Accessibility Act, this is a low price to pay.
\n" }, { "title": "Pointless post of the day – NetNewsWire before and after their Big Sur update", "date_published": "2021-03-30T07:00:00+00:00", "id": "https://frippz.se/2021/03/30/netnewswire-before-and-after/", "url": "https://frippz.se/2021/03/30/netnewswire-before-and-after/", "author": { "name": "Fredrik Frodlund" }, "summary": "NetNewsWire got updated and I had the opportunity to compare the UI", "tags": [ "macOS", "apps" ], "content_html": "This morning, my app of choice for reading RSS feeds, NetNewsWire, notified me of an update. This was a major version update which included a new app icon and an updated UI for Big Sur. This gave me the opportunity to grab a screenshot to compare. So I did.
\n\n\n\n\n\nIt’s a bit pointless, but also interesting to compare how Big Sur affects the design of apps on the Mac. I’m also thankful I get to keep the text below the icons in the toolbar.
\n" }, { "title": "Your website is making me sick — or why you should respect the preferences of your users", "date_published": "2020-11-24T16:00:00+00:00", "date_modified": "2022-05-11T07:51:00+00:00", "id": "https://frippz.se/2020/11/24/your-website-is-making-me-sick/", "url": "https://frippz.se/2020/11/24/your-website-is-making-me-sick/", "author": { "name": "Fredrik Frodlund" }, "summary": "On the importance of respecting your more sensitive users", "tags": [ "a11y", "accessibility", "css" ], "content_html": "There’s an accessibility setting in macOS and iOS that allows users to reduce motion in the interface. On iOS, this removes animations in the OS itself and well made apps should also respect this setting. In CSS there’s a media query for it; (prefers-reduced-motion: reduce)
, which has for example been supported in Safari since version 10.1. If you are using animations on your website, consider adding this snippet to your stylesheet.
@media screen and (prefers-reduced-motion: reduce) {\n * {\n transition-duration: 0.001ms !important;\n animation-duration: 0.001ms !important;\n animation-iteration-count: 1 !important;\n scroll-behavior: auto !important;\n }\n}\n
But why didn’t I remove the transition and animation durations altogether? Well, retaining the durations but changing them to something imperceptible to the human eye helps avoid breaking anything that is tied to CSS-based animations. Make sure to test your site, though. I’ve seen stuff break in the weirdest of ways with this styling, so your mileage may vary. Resetting animation-iteration-count
disables infinite animations, instead of making the loop crazy fast!
Almost all of the major browsers have had support for a good while, the exception being Legacy Edge, which lacks support entirely. Check caniuse.com for more details.
\n\nSupporting browsers can read this preference from the OS of your choice. A few examples below.
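The same preference is also exposed to JavaScript through matchMedia, which can be handy for pausing script-driven animations. A minimal sketch, assuming a browser environment (the stub fallback is my own addition so it degrades gracefully elsewhere):

```javascript
// Read the user's reduced-motion preference via the same media query.
// The fallback object is an assumption for illustration, keeping the
// function runnable outside a browser.
function prefersReducedMotion(
  mq = typeof matchMedia === 'function'
    ? matchMedia('(prefers-reduced-motion: reduce)')
    : { matches: false }
) {
  return mq.matches === true;
}

// e.g. only start a decorative animation when motion is OK:
// if (!prefersReducedMotion()) startAnimation();
```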
\n\nRemember, respect your users. ❤️
\n" }, { "title": "Touch ID with sudo", "date_published": "2020-11-19T18:00:00+00:00", "id": "https://frippz.se/2020/11/19/touch-id-with-sudo/", "url": "https://frippz.se/2020/11/19/touch-id-with-sudo/", "author": { "name": "Fredrik Frodlund" }, "summary": "Use your Mac’s Touch ID sensor to authenticate when using sudo", "tags": [ "macOS", "apple" ], "content_html": "When the first MacBook Pro with a Touch ID sensor was released, I was thoroughly excited. Rightly so. Apps like 1Password were quick to implement support for it. There was one thing missing, though: authentication with sudo
.
I’m almost ashamed that, after having owned at least two MacBook Pros with Touch ID, I didn’t find out about this until today. So it’s time to write it down. Hat tip to Stanislas and his post “Using Touch ID for sudo authentication on a MacBook” for showing me the way.
\n\nEdit (as root) /etc/pam.d/sudo
:
# sudo: auth account password session\nauth sufficient pam_smartcard.so\nauth sufficient pam_tid.so\t\t# <= Add this line!\nauth required pam_opendirectory.so\naccount required pam_permit.so\npassword required pam_deny.so\nsession required pam_permit.so\n
For clarity, the line you want to add (as seen above) is:
\n\nauth sufficient pam_tid.so\n
That’s all you need! Oh, and your finger, of course! 😉
\n" }, { "title": "Labs are back!", "date_published": "2020-11-18T14:00:00+00:00", "id": "https://frippz.se/2020/11/18/labs-are-back/", "url": "https://frippz.se/2020/11/18/labs-are-back/", "author": { "name": "Fredrik Frodlund" }, "summary": "I finally restored the labs section, this time with actual content.", "tags": [ ], "content_html": "Maybe two or three years ago, I added a section called “Labs” that didn’t actually contain anything. I kept it up for far too long and I never got around to adding the planned content. Well, no more! In a spur of inspiration and motivation, I whipped something up today so that I could finally get some content in. For starters, it’s just some of the more popular CodePens I’ve made over the years. More stuff to come.
\n" }, { "title": "Considering viewports", "date_published": "2020-10-14T07:00:00+00:00", "id": "https://frippz.se/2020/10/14/considering-viewports/", "url": "https://frippz.se/2020/10/14/considering-viewports/", "author": { "name": "Fredrik Frodlund" }, "summary": "Apple releases new iPhones and the rest is history", "tags": [ "css", "mobile", "iphone" ], "content_html": "Following yesterday’s Apple event, I spent the morning perusing my RSS feed, as I do most mornings. One of the articles, from developer Michael Tsai, pondered the screen sizes of the new iPhone models. He compiled a list of most available models (including the older 5s/SE that’s no longer as easy to buy) and their viewport sizes in CSS pixels, or the unit he chose to describe them with: points.
\n\niPhone Model | Width | Height\n---|---|---\n5s/SE | 320 pts | 568 pts\n12 mini | 360 pts | 780 pts\n8/SE 2 | 375 pts | 667 pts\n11 Pro | 375 pts | 812 pts\n12/12 Pro | 390 pts | 844 pts\nXR/11/11 Pro Max | 414 pts | 896 pts\n12 Pro Max | 428 pts | 926 pts\n
The iPhone 12 mini is the model that I believe will be the “people’s iPhone”. That is to say, it’ll most likely be the top seller. With that in mind, it’s interesting to note the width of the device: 360 CSS pixels. I’ve noticed in my day job that our designers have designed with the iPhone 8/SE 2 in mind, that is to say, they make their narrowest designs 375 CSS pixels wide. While that might be a problem in and of itself, since there are still plenty of devices out there reporting 320 CSS pixels that you should cater to as well, it’s interesting to note that our designers will have to shave at least 15 CSS pixels off their future designs, since this narrower width will very likely be the majority going forward.
\n\nThen again, if we all just built stuff fully flexible and did not cater to a specific minimum viewport, this wouldn’t be a problem at all.
\n" }, { "title": "The war against sticky toolbars continues", "date_published": "2020-09-08T09:00:00+00:00", "id": "https://frippz.se/2020/09/08/the-war-against-sticky-continues/", "url": "https://frippz.se/2020/09/08/the-war-against-sticky-continues/", "author": { "name": "Fredrik Frodlund" }, "summary": "My anti-sticky bookmarklet gets a much needed update", "tags": [ "css", "javascript", "usability" ], "content_html": "The war against sticky toolbars, annoying overlays, unwarranted modals and scroll-locking continues. It’s a never-ending war of attrition. It’s been close to a year since I last updated my bookmarklet, and since then I’ve come across more annoying ways to mess with the user experience on different websites.
\n\nScenarios that my bookmarklet didn’t handle:
\n\nSites setting overflow: hidden !important, thus escalating the specificity wars further (my original script just set overflow: unset, which didn’t override !important styles)\nSites hiding overflow not only on the <body> element, but also the <html> element, sometimes at the same time to really make sure that the poor user can’t scroll\n\nNo need to beat around the bush, this is the updated script.
\n\nconst elements = document.querySelectorAll('body *');\nconst containers = document.querySelectorAll('html, body');\n\ncontainers.forEach(el => {\n if (getComputedStyle(el).overflow === 'hidden') {\n el.style.setProperty('overflow', 'unset', 'important');\n }\n});\n\nelements.forEach(function (element) {\n if ([\"-webkit-sticky\", \"sticky\"].includes(getComputedStyle(element).position)) {\n element.style.position = \"unset\";\n }\n else if ([\"fixed\"].includes(getComputedStyle(element).position)) {\n element.parentNode.removeChild(element);\n }\n});\n
The biggest changes are that I had to use body.style.setProperty instead of body.style.overflow in order to also set !important. In addition, I’m now also checking for that styling on the <html> element and unsetting it there if needed.
As always, below is the packaged bookmarklet for you to drag into your own bookmarks. This time featuring a cool emoji, fitting the situation.
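For anyone curious how a script becomes a draggable bookmarklet, it’s essentially the source URL-encoded behind a javascript: prefix. A sketch, using a shortened stand-in rather than the full script above:

```javascript
// Stand-in for the full script (shortened for illustration only).
const src = "document.querySelectorAll('body *').forEach(el => el)";

// A bookmarklet is just a bookmark whose URL uses the javascript: scheme;
// encodeURIComponent keeps spaces and special characters URL-safe.
const bookmarklet = 'javascript:' + encodeURIComponent(src);
```

When the bookmark is clicked, the browser runs the decoded script in the context of the current page.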
\n\n\n\nBig thanks to Joacim de la Motte for providing valuable feedback and helping me improve the script.
\n" }, { "title": "Happy CSS Naked Day!", "date_published": "2020-04-09T06:30:00+00:00", "id": "https://frippz.se/2020/04/09/happy-css-naked-day/", "url": "https://frippz.se/2020/04/09/happy-css-naked-day/", "author": { "name": "Fredrik Frodlund" }, "summary": "Strip your website of its CSS and show the world you know what you’re doing", "tags": [ "web", "css" ], "content_html": "Today is April 9, which means that it’s CSS Naked Day! Never heard of it? It basically means that if you have a website, you should strip it of its CSS.
\n\nWhy on earth would you do that?
\n\nWell, if you know your stuff, you author your markup before doing any styling. This is unfortunately something most developers almost never do, which is a sad thing. The information architecture of a website should be good enough that no styling is required to use the site. That’s the whole point of CSS Naked Day.
\n\nSo, how did my site fare?
\n" }, { "title": "Facebook discovers that native is better (updated)", "date_published": "2020-03-09T07:30:00+00:00", "date_modified": "2020-03-09T09:25:00+00:00", "id": "https://frippz.se/2020/03/09/facebook-discovers-that-native-is-better/", "url": "https://frippz.se/2020/03/09/facebook-discovers-that-native-is-better/", "author": { "name": "Fredrik Frodlund" }, "summary": "The web just can’t compete with native on performance.", "tags": [ "web", "apps", "native" ], "content_html": "The engineers over at Facebook rolled out a new Messenger app and later on posted a blog about the project, dubbed Project LightSpeed.
\n\n\n\n\n\n
\n- We are excited to begin rolling out the new version of Messenger on iOS. To make the Messenger iOS app faster, smaller, and simpler, we rebuilt the architecture and rewrote the entire codebase, which is an incredibly rare undertaking and involved engineers from across the company.
\n- Compared with the previous iOS version, this new Messenger is twice as fast to start and is one-fourth the size. We reduced core Messenger code by 84 percent, from more than 1.7M lines to 360,000.
\n- We accomplished this by using the native OS wherever possible, reusing the UI with dynamic templates powered by SQLite, using SQLite as a universal system, and building a server broker to operate as a universal gateway between Messenger and its server features.
\n
What’s gotten people’s attention online is that they seem to have accomplished this by axing React Native and going (possibly mostly) full native. The reaction has been just what you’d expect: no shit, Sherlock. I love the web and (at least most of) its many technologies. These days you can accomplish amazing things with HTML, CSS and JavaScript.
\n\nBut web just can’t compete with native.
\n\nI’m not going to stick my neck out and say that it never will, but I honestly don’t see that happening for many years to come. Plus, as front end developer since the 90s, I don’t see web technologies as the correct set of tools for building such apps. I loathe Electron and other stuff that’s not native to Mac, for example. I feel the same way about iOS as well. I want snappy, responsive and native apps on all my platforms. Looks like the engineers over at Facebook wants that too.
\n\nUpdated: The Messenger app was apparently native before the rewrite as well, so the title of this post is pretty off target. My point about native vs. web still stands, though.
\n\n" }, { "title": "On form elements and JavaScript", "date_published": "2020-01-28T08:00:00+00:00", "id": "https://frippz.se/2020/01/28/form-elements-and-javascript/", "url": "https://frippz.se/2020/01/28/form-elements-and-javascript/", "author": { "name": "Fredrik Frodlund" }, "summary": "Why use a form element when submitting fields with JavaScript? Because it’s better across the board.", "tags": [ "javascript", "accessibility", "html" ], "content_html": "Chris Ferdinandi of Go Make Things answers the question “Why use a form element when submitting fields with JavaScript?” and does so quite succinctly:
\n\n\n\n\n\n
\n- It makes your life easier.
\n- Semantics (and the accessibility that happens as a result).
\n
From the perspective of JavaScript, he goes on to make the case for using the submit
event on the <form>
element to keep things not only simpler, but also a lot more usable and accessible.
His post was inspired by a question posted by Coding Journey on Twitter:
\n\n\n\n\nQuestion: If we are preventing default behavior of form submission and manually handle it (e.g. with Fetch API), is there a reason to use the <form> tag? (other than form submission with enter/return key…)
\n
I’d say that you needn’t look further than that last sentence. There are so many poorly designed forms out there built by developers who don’t know any better (seriously, I’ve seen forms in the wild that are nothing more than <div>
elements with JavaScript triggers). Just the fact that you can actually submit a form in any other way than using a mouse (or tapping on a screen) goes such a long way. Proper semantics helps GUI-less applications like 1Password as well, since it looks for forms with properly named input fields and submit buttons.
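As a quick sketch of the pattern (the /api/subscribe endpoint and the #signup id here are made up for this example), listening for submit on the form itself keeps Enter, assistive technology and password managers all on the same code path:

```javascript
// Turn an iterable of [name, value] pairs (e.g. a FormData instance)
// into a plain object that can be serialized as JSON.
function entriesToObject(entries) {
  return Object.fromEntries(entries);
}

// Hypothetical handler: prevent the full-page POST and send the
// fields with fetch instead.
function handleSubmit(event) {
  event.preventDefault();
  const data = entriesToObject(new FormData(event.target));
  return fetch('/api/subscribe', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(data),
  });
}

// Guarded so the snippet stays runnable outside a browser.
if (typeof document !== 'undefined') {
  document.querySelector('#signup')?.addEventListener('submit', handleSubmit);
}
```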
Ok, so long story short — learn the basics, use proper forms.
\n" }, { "title": "Guess the hex color code", "date_published": "2019-12-16T10:40:00+00:00", "id": "https://frippz.se/2019/12/16/guess-the-hex/", "url": "https://frippz.se/2019/12/16/guess-the-hex/", "author": { "name": "Fredrik Frodlund" }, "summary": "Some people just want to watch the world burn…", "tags": [ "css" ], "content_html": "Someone on Twitter saw fit to drive himself and others insane with a game that lets you guess the correct hex color code.
\n\nThis is as strong a case as any for switching to HSL. Jesus…
\n\nAlso, check out this talk by David DeSandro about how to actually read color hex codes. It’s cool as hell, but seriously, just use HSL instead.
\n" }, { "title": "I did dark mode for my blog", "date_published": "2019-10-14T10:40:00+00:00", "id": "https://frippz.se/2019/10/14/i-did-dark-mode-for-my-blog/", "url": "https://frippz.se/2019/10/14/i-did-dark-mode-for-my-blog/", "author": { "name": "Fredrik Frodlund" }, "summary": "Dark mode is all the rage, so I added support for it on my own site, but forgot to write about it. So now I have.", "tags": [ "css" ], "content_html": "Damn my laziness. If you happen to run macOS Mojave or iOS 13, you might’ve noticed that I added support for dark mode quite a while back. Silly ’ol me didn’t write about it, even though I was quite happy with myself for utilizing CSS custom properties to simplify the theming. Both Chris Ferdinandi and Jeremy Keith beat me to it (or rather, their posts got me to finally write at least something about it).
\n\nIn the words of Mozilla Developer Network, the prefers-color-scheme
CSS media feature is used to detect if the user has requested the system use a light or dark color theme. The spec is part of Media Queries Level 5 with a status of “Editor’s Draft” as of writing this post.
@media (prefers-color-scheme: dark) {\n /* Dark styles goes here */\n}\n
I’m just glad that we’re (mostly?) past vendor prefixes at this point.
\n\nTo keep all things related to dark vs. light themes in one place, I leveraged the power of CSS custom properties. All the colors I use across the site reside in a variables collection: /css/00_variables.css
.
:root {\n color-scheme: light dark;\n\n --body-background: hsl(45, 40%, 94%);\n --body-text: hsl(0, 0%, 20%);\n}\n\n@media only screen and (prefers-color-scheme: dark) {\n :root {\n --body-background: hsl(0, 0%, 20%);\n --body-text: hsl(45, 40%, 94%);\n }\n}\n
This lets me keep only one instance of prefers-color-scheme: dark
around, which is nice. All my other stylesheets just contain references to the custom properties.
body {\n background-color: var(--body-background);\n color: var(--body-text);\n}\n
If you already utilize the cascade to its full potential, which luckily I do, you really shouldn’t need to change styles much. One such instance is the use of the currentColor
property value.
I never was a fan of using someone else’s CSS reset, like Normalize or even Eric Meyer’s reset.css, but this modern take on a CSS reset caught my eye for several reasons.
\n\n\n\n\nIn this modern era of web development, we don’t really need a heavy-handed reset, or even a reset at all, because CSS browser compatibility issues are much less likely than they were in the old IE 6 days. That era was when resets such as normalize.css came about and saved us all heaps of hell. Those days are gone now and we can trust our browsers to behave more, so I think resets like that are probably mostly redundant.
\n
The reset for lists is frickin’ genius!
\n\n/* Remove default padding */\nul[class],\nol[class] {\n padding: 0;\n}\n
Why didn’t I think of that?
\n\nExtra bonus points for the piece by piece explanations for each section.
\n" }, { "title": "Further sticky bookmarklet fun", "date_published": "2019-09-23T11:37:00+00:00", "date_modified": "2019-09-23T13:06:00+00:00", "id": "https://frippz.se/2019/09/23/further-sticky-bookmarklet-fun/", "url": "https://frippz.se/2019/09/23/further-sticky-bookmarklet-fun/", "author": { "name": "Fredrik Frodlund" }, "summary": "I just couldn’t leave that poor bookmarklet alone and now it has turned into some kind of benevolent monster", "tags": [ "css", "javascript", "usability" ], "content_html": "I just couldn’t leave that poor bookmarklet alone, could I? Using the previous version in the wild, I noticed some use cases where there was room for improvement. Right now my bookmarklet does two things:
\n\nChecks the <body> for any scroll-disabling styling and tries to unset it\nLooks for any elements with position: fixed or position: sticky and deletes them\n\nSounds good enough, right? Well, not quite. I found one big glaring issue with this: headers. Many times, modern web sites use position: sticky for headers. If I delete them — well, then I’ve broken the site a little too much.
Based on my very scientific research following this, by visiting as many as ten web sites, I could conclude the following:
\n\nOverlays and similar annoyances use position: fixed — these we can safely destroy with fire\nHeaders use position: sticky rather than position: fixed — these we want to keep, but away from the scrolling view\n\nThese assumptions (dangerous as assumptions may be) led me to modify the script accordingly.
\n\nconst elements = document.querySelectorAll('body *');\nconst body = document.querySelector('body');\n\nif (getComputedStyle(body).overflow === 'hidden') {\n body.style.overflow = 'unset';\n}\n\nelements.forEach(function (element) {\n if (['-webkit-sticky', 'sticky'].includes(getComputedStyle(element).position)) {\n element.style.position = 'unset';\n }\n else if(['fixed'].includes(getComputedStyle(element).position)) {\n element.parentNode.removeChild(element);\n }\n});\n
I first check for position: sticky
styled elements, and instead of removing them, I force the inline style to position: unset
. This will reset to the initial value of position: static
and so “un-sticky” any such elements on the page. Everything else that has position: fixed
will instead just get deleted.
There are of course some caveats with this approach, mostly that older sites might still have sticky-ish headers that are positioned with the fixed value. I believe this is fine. Compared to the previous version, which just killed everything in sight, regardless of fixed
or sticky
, we now get something a bit more precise.
I’m beginning to have trouble coming up with good names for these bookmarklets, so this time around I’m just calling it “unSticky”. Besides, you’re completely free to name it whatever you wish.
\n\n\n\nOk, that’s it for now. Let’s see how long I can leave this one alone.
\n\nUpdated: Well that didn’t take long. My friend, Joacim de la Motte, decided to do some optimization for me. Apparently the IIFE is not really necessary, so I’ve removed it. The bookmarklet will still run without it.
\n" }, { "title": "Jeremy Keith on getting started", "date_published": "2019-09-07T07:08:00+00:00", "date_modified": "2019-09-07T07:17:00+00:00", "id": "https://frippz.se/2019/09/07/jeremy-keith-on-getting-started/", "url": "https://frippz.se/2019/09/07/jeremy-keith-on-getting-started/", "author": { "name": "Fredrik Frodlund" }, "summary": "Jeremy Keith shared some excellent sources and tips for people getting into front end development", "tags": [ "css", "html", "javascript" ], "content_html": "Jeremy Keith writes:
\n\n\n\n\nI got an email recently from a young person looking to get into web development. They wanted to know what languages they should start with, whether they should a Mac or a Windows PC, and what some places to learn from.
\n
I’ve gotten the question myself on more than one occasion: how should I get into front end development? Besides my recommendation of starting out with HTML, then moving on to CSS and lastly getting into JavaScript, I often had to dig deep to find good sources for people to read that would help and inspire them. Thanks to Jeremy, that job has been done for me. From now on, I’ll link to his post.
\n\nI would like to add that Jeremy’s own excellent book “Resilient Web Design” should be part of the curriculum. Best of all is that it’s available for free online.
\n\nThanks, Keith!
\n" }, { "title": "Tags in JSON Feed", "date_published": "2019-09-05T11:55:00+00:00", "id": "https://frippz.se/2019/09/05/tags-in-json-feed/", "url": "https://frippz.se/2019/09/05/tags-in-json-feed/", "author": { "name": "Fredrik Frodlund" }, "summary": "I felt the need to tinker even more with my JSON Feed and so I added support for tags", "tags": [ "jekyll", "json feed" ], "content_html": "It seems I just can’t stop tinkering with my site. I was doing some optimizing of my deploy scripts (which now runs via Docker, because why the hell not), when I out of the blue re-read the JSON Feed spec and saw that there was an optional support for tags.
\n\nFrom the JSON Feed spec:
\n\n\n\n\n\n
tags
(optional, array of strings) can have any plain text values you want. Tags tend to be just one word, but they may be anything. Note: they are not the equivalent of Twitter hashtags. Some blogging systems and other feed formats call these categories.
An array in JSON is pretty much just this:
\n\n[ \"item-1\", \"item-2\", \"item-3\" ]\n
In context, according to the spec, each item in the items array would need the following (assuming our post has the tags “css” and “html”):
\n\n\"tags\": [\n \"css\",\n \"html\"\n]\n
So this whole exercise revolves around you putting an array of tags in your front matter for each post. I’m pretty much assuming that this is something that you’ve already done, but just in case, here’s a sample post:
\n\n---\ntitle: My blog post\ntags:\n - css\n - html\n---\n\nSome thoughtful content…\n
Considering my previous post about getting JSON Feed in Jekyll, here’s the additional Liquid we need to get these tags in the JSON Feed code:
\n\n{% if post.tags %}\n\"tags\": [\n{% for tag in post.tags %}\n \"{{ tag }}\"{% if forloop.last == false %},{% endif %}\n{% endfor %}\n],\n{% endif %}\n
Since JSON is picky with trailing commas, we need to utilize forloop.last
in order to keep tabs on whether we’re at the last item in the loop or not. Also note the trailing comma after the last bracket on the sixth row. Depending on where you put this snippet in your JSON Feed template, you may or may not need it.
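That pickiness is easy to verify; a quick sketch showing why the forloop.last dance matters:

```javascript
// JSON, unlike JavaScript, rejects trailing commas — which is why the
// Liquid template must decide when to emit the comma between items.
function isValidJson(text) {
  try {
    JSON.parse(text);
    return true;
  } catch {
    return false;
  }
}
```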
Here’s the full code for my own feed.json
template, complete with the new section for tags.
---\nlayout: null\nsitemap:\n priority: 0.7\n changefreq: weekly\n---\n{\n \"version\" : \"https://jsonfeed.org/version/1\",\n \"title\" : \"{{ site.title }}\",\n \"home_page_url\" : \"{{ site.url }}\",\n \"feed_url\" : \"{{ \"/feed.json\" | absolute_url }}\",\n \"author\" : {\n \"url\" : \"{{ site.url }}\",\n \"name\" : \"{{ site.author }}\"\n },\n \"icon\" : \"{{ \"/apple-touch-icon.png\" | absolute_url }}\",\n \"favicon\" : \"{{ \"/favicon-32x32.png\" | absolute_url }}\",\n \"items\" : [\n {% for post in site.posts %}\n {\n \"title\" : {{ post.title | jsonify }},\n \"date_published\" : \"{{ post.date | date_to_xmlschema }}\",\n {% if post.updated %}\n \"date_modified\": \"{{ post.updated | date_to_xmlschema }}\",\n {% else %}\n \"date_modified\": \"{{ post.date | date_to_xmlschema }}\",\n {% endif %}\n \"id\" : \"{{ post.url | absolute_url }}\",\n \"url\" : \"{{ post.url | absolute_url }}\",\n \"author\" : {\n \"name\" : \"{{ site.author }}\"\n },\n \"summary\": {{ post.description | jsonify }},\n {% if post.tags %}\n \"tags\": [\n {% for tag in post.tags %}\n \"{{ tag }}\"{% if forloop.last == false %},{% endif %}\n {% endfor %}\n ],\n {% endif %}\n \"content_text\": {{ post.content | strip_html | strip_newlines | jsonify }},\n \"content_html\": {{ post.content | strip_newlines | jsonify }}\n }{% if forloop.last == false %},{% endif %}\n {% endfor %}\n ]\n}\n
There we go! We now have tags from each post in our JSON Feed.
\n" }, { "title": "Improved feeds", "date_published": "2019-09-03T00:00:00+00:00", "date_modified": "2019-09-05T08:57:00+00:00", "id": "https://frippz.se/2019/09/03/improved-feeds/", "url": "https://frippz.se/2019/09/03/improved-feeds/", "author": { "name": "Fredrik Frodlund" }, "summary": "Blog syndication is an important corner stone of the internet and as such, I decided to improve my feeds.", "tags": [ "rss", "json feed" ], "content_html": "I get the feeling that things are happening when it comes to syndicated feeds online. A few years back, JSON Feed entered the scene, courtesy of Brent Simmons and Manton Reece. This summer, the very same Brent Simmons released version 5 of NetNewsWire, a free and open source, pure-Mac application, that’s a joy to use. While light on features on its initial release, it’s snappy and stable. Even better, Brent made the decision to at least support one feed service on day one. As luck would have it, he chose the very excellent Feedbin, a service that I’ve happily been paying for since day one of its release after the demise of Google Reader.
\n\nAnyway. After years of questionable alternatives to syndicated feeds (like Facebook, Twitter and whatever else people say they use instead), I’ve kept using RSS and Atom like a stubborn mule. Most sites worth its salt support syndicated feeds in some form, which allows users to easily consume content.
\n\nThere’s of course the occasional fly in the ointment. And what’s worse, I myself am guilty of it. I’m talking about only providing a short summary in the feed in order to drive traffic to the site itself. Well, no more. As of today, I’ve run a deploy that provides full content both via JSON Feed and Atom for this site. The reasons were quite simple:
\n\nThe counterpoint, as mentioned by someone online, is that this makes it easier for less than honest people to “steal” content and publish it as their own somewhere else. My position on this is that these people very likely would do so anyway, and it’s not a strong enough argument against making things easier for everyone else.
\n\nAnd so, here we are.
\n" }, { "title": "Improving the fixed/sticky bookmarklet", "date_published": "2019-08-27T00:00:00+00:00", "id": "https://frippz.se/2019/08/27/improving-the-fixed-sticky-bookmarklet/", "url": "https://frippz.se/2019/08/27/improving-the-fixed-sticky-bookmarklet/", "author": { "name": "Fredrik Frodlund" }, "summary": "A follow-up post on a wonderful bookmarklet for removing sticky and fixed elements, this time improved a bit further", "tags": [ "css", "javascript", "usability" ], "content_html": "Last week I wrote about a bookmarklet I found online. I found that there was some room for improvement since the bookmarklet didn’t handle the more modern variant of position: sticky;
.
While using my new and (sort of) improved bookmarklet, I noticed that on some sites, annoying overlays not only covered the content, but also disabled scrolling of the entire page. So if you were to remove all elements that were either fixed or sticky, you still couldn’t scroll the page. Normally, I would reach for the built-in reader mode of Safari to get to the content. However, this might not always be possible or applicable, depending on the site’s content.
\n\nMost sites disable scrolling by simply setting overflow: hidden;
on the <body>
. So all we have to do is look for this property and then unset it.
I took the liberty of adopting more modern ES6 syntax this time. This of course limits the browser support, but if you’re using something older, like Internet Explorer – well, sucks to be you, my friend. 😉
\n\n(function () {\n const elements = document.querySelectorAll('body *');\n const body = document.querySelector('body');\n\n if (getComputedStyle(body).overflow === 'hidden') {\n body.style.overflow = \"unset\";\n }\n\n elements.forEach(function (element) {\n if ([\"-webkit-sticky\", \"sticky\", \"fixed\"].includes(getComputedStyle(element).position)) {\n element.parentNode.removeChild(element);\n }\n });\n})();\n
It’s important to use the unset
value for overflow
, since any CSS we’re setting via JavaScript in this manner becomes a style
attribute on the target element. So this means that we have to override any styling set via an external stylesheet.
So this improvement might handle most cases, but not all of them. There’s almost as many ways to mess with the user experience as there are websites. I’d be happy for any feedback and suggestions to improve this bookmarklet. The easiest way is of course via the public Gist I’ve set up for the bookmarklet code. There’s also a CodePen if you want to fork and play around easily.
\n\nFinally, here’s the updated bookmarklet, for your convenience.
\n\n\n\nHappy browsing!
\n" }, { "title": "Killing both fixed and sticky headers", "date_published": "2019-08-20T00:00:00+00:00", "id": "https://frippz.se/2019/08/20/killing-sticky-headers/", "url": "https://frippz.se/2019/08/20/killing-sticky-headers/", "author": { "name": "Fredrik Frodlund" }, "summary": "A quick post on a wonderful bookmarklet for removing sticky and fixed elements", "tags": [ "css", "javascript", "usability" ], "content_html": "Last year someone linked to an article by Alisdair McDiarmid containing a bookmarklet that killed any element on a page that had the property position: fixed;
.
Knowing how the modern web sometimes might look, this type of bookmarklet is easy to love. However, with the advent of more modern solutions in CSS such as position: sticky;
, the bookmarklet is in need of some updating. What better way than to do it yourself, then?
There’s just a few minor additions needed to the original code. In addition to checking for position: fixed;
, we also need to check for position: sticky;
. There’s one caveat, though. Safari still uses a vendor prefix for sticky positioning, so we need to make sure to look for -webkit-sticky
as well.
(function () {\n var i, elements = document.querySelectorAll('body *');\n\n for (i = 0; i < elements.length; i++) {\n if ([\"-webkit-sticky\", \"sticky\", \"fixed\"].includes(getComputedStyle(elements[i]).position)) {\n elements[i].parentNode.removeChild(elements[i]);\n }\n }\n})();\n
All done! But this won’t do us any good unless it comes in the form of a handy bookmarklet, so here’s that as well. Drag this link to the bookmark bar of your browser of choice (or just save it).
\n\n\n\nThe caveat with these kinds of bookmarklets is that they only work on the current page. If you leave the page or reload it, the effect disappears.
\n\nThere’s a public gist up if you want to fork the code and play around with it yourself. There’s also a really neat online tool for generating bookmarklets of your own.
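In case you want to hand-roll one instead, the core of what such a generator does is small: wrap the code in an IIFE and URL-encode it into a javascript: URL. A minimal sketch (the helper name is mine, and real generators typically minify the code as well):

```javascript
// Hypothetical helper: turn a snippet of JavaScript into a bookmarklet URL.
// Wrapping the code in an IIFE keeps its variables out of the page's
// global scope; encodeURIComponent makes it safe to store as a URL.
function toBookmarklet(code) {
  return 'javascript:' + encodeURIComponent('(function () {' + code + '})();');
}

console.log(toBookmarklet("alert('hi');"));
```

Paste the resulting string as the URL of a new bookmark and you have a bookmarklet.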
\n\nThat’s it! Enjoy!
\n" }, { "title": "Cache busting in Jekyll revisited", "date_published": "2019-02-18T00:00:00+00:00", "id": "https://frippz.se/2019/02/18/cache-busting-in-jekyll-revisited/", "url": "https://frippz.se/2019/02/18/cache-busting-in-jekyll-revisited/", "author": { "name": "Fredrik Frodlund" }, "summary": "I dropped Gulp over a year ago and now it’s time to leverage the benefits of HTTP/2 as well! I also did some Jekyll twiddling and NPM scripting.", "tags": [ "html", "css", "jekyll", "http2", "npm", "cache busting" ], "content_html": "I never quite warmed up to Gulp. It was yet another tool I had to learn in order to get stuff done. Before Gulp there was Grunt, and somewhere along the way I had to cope with Webpack in React projects. The latter was surely fine for those projects, but for my own needs, it was way overkill or not even the right tool for my needs.
\n\nI then came upon two blog posts that piqued my interest; Why I Left Gulp and Grunt for npm Scripts and How to Use npm as a Build Tool. I felt inspired and got to work.
\n\nCutting down on stuff is a favorite pastime of mine. If I can, I minimise and optimise as much as I can, both in code and in real life. So what these blog posts were about resonated quite well with me. For the same reason that I dislike CSS preprocessors like Sass and Less (they add more problems than they claim to solve, which I don’t believe they do anyway), I disliked Gulp: it too was an abstraction layer that I felt added more problems than it solved for me. And the plugins. Oh, all those damn Gulp plugins that I had to use for everything. Ugh.
\n\nGetting rid of Gulp means that you try to rely just on npm scripts in package.json
instead and that in turn means to mostly rely on CLI versions of different tools. Step one is to identify what I need to happen in my tool stack:
It doesn’t take long to find the packages that we need over at npmjs.com. This is what I like about this approach. My package.json
only has 12 dependencies since I dropped Gulp. Twelve.
I could almost cry from happiness.
\n\nAnyway, here’s what we’ve got:
\n\n\"devDependencies\": {\n \"concurrently\": \"^4.1.0\",\n \"eslint\": \"^5.12.1\",\n \"foreach-cli\": \"^1.8.1\",\n \"hashmark\": \"^5.0.0\",\n \"onchange\": \"^5.2.0\",\n \"postcss-cli\": \"^6.1.1\",\n \"postcss-custom-properties\": \"^8.0.9\",\n \"stylelint\": \"^9.10.1\",\n \"svg-sprite\": \"^1.3.7\",\n \"uglify-es\": \"^3.0.28\",\n \"uglifycss\": \"^0.0.29\",\n \"yarn\": \"^1.5.1\"\n}\n
Let’s quickly go over what each package does:
\n\nconcurrently, for example, lets me run several npm scripts in parallel instead of just chaining them with & in the shell straight up. Added bonus is the improved compatibility with Windows.\n\nAnyway, that’s what I use. You may use whatever tools you want to get the job done.
\n\nTo tie everything together, I need to create a couple of tasks in my package.json
that will help me get my development environment going again, this time without Gulp. At this point, I assume that you already know how to write stuff in your own package.json
. I’m also assuming that you’ve already read the two blog posts that I linked to in the beginning of this post.
\"scripts\": {\n \"start\": \"concurrently 'yarn run build:watch' 'yarn run jekyll:serve'\",\n \"prebuild\": \"touch _includes/sprite.svg & mkdir -p dist\",\n \"build\": \"yarn run build:css && yarn run build:js && yarn run build:svg\",\n \"build:watch\": \"onchange ./src/** -i -- yarn run build\",\n \"prebuild:css\": \"rm -rf ./dist/css/*\",\n \"build:css\": \"postcss -c postcss.config.js ./src/css/*.css -d dist/css\",\n \"build:js\": \"rsync --checksum --recursive --delete src/js/ ./dist/js\",\n \"postbuild:js\": \"hashmark -r -l 8 dist/js/vendor/require.min.js 'dist/js/vendor/{name}-{hash}.js'\",\n \"build:svg\": \"svg-sprite -C svg-sprite.config.json --dest ./_includes src/svg/*.svg\",\n \"deploy:css\": \"postcss -c postcss.config.live.js ./src/css/*.css -d dist/css\",\n \"css:uglify\": \"foreach -g 'dist/css/*.css' -x 'uglifycss #{path} --output #{path}'\",\n \"css:hash\": \"hashmark -r -l 8 dist/css/*.css 'dist/css/{name}-{hash}.css'\",\n \"postdeploy:css\": \"yarn run css:uglify && yarn run css:hash\",\n \"deploy:js\": \"yarn run build:js\",\n \"lint\": \"yarn run lint:css && yarn run lint:js\",\n \"lint:css\": \"stylelint --color -f verbose src/css/**/*.css\",\n \"lint:js\": \"eslint src/js/**/*.js\",\n \"jekyll:serve\": \"sleep 1; jekyll serve --incremental --drafts\",\n \"test\": \"yarn run lint\",\n \"dist:clean\": \"mkdir -p ./dist && rm -rf ./dist/*\",\n \"predeploy\": \"yarn run dist:clean\",\n \"deploy\": \"yarn run deploy:css && yarn run deploy:js && yarn run build:svg\"\n}\n
The scripts section does grow somewhat when you’re not using Gulp anymore. Even if JSON has its shortcomings, like the lack of commenting — if you keep the naming as descriptive as possible, you should be fine.
\n\nA quick word on the pre and post hooks. As you can see in the above code snippet, there’s a few entries with the prefixes pre
and post
, like prebuild
and postdeploy:css
. This is a neat feature of npm wherein any script that has either a post or a pre hook, will automatically run either before or after that script. The above linked post by Keith Cirkel does a much better job than me in explaining the intricacies of these hooks.
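To make the ordering concrete, here is a toy model of how a script name resolves against its optional hooks (a sketch of the behaviour, not npm’s actual implementation):

```javascript
// Toy model of npm's pre/post hooks: given the scripts object from
// package.json and a script name, return the order npm runs them in.
function runOrder(scripts, name) {
  const order = [];
  if (scripts['pre' + name]) order.push('pre' + name);
  order.push(name);
  if (scripts['post' + name]) order.push('post' + name);
  return order;
}

console.log(runOrder({ prebuild: '…', build: '…', 'postbuild:js': '…' }, 'build'));
// → [ 'prebuild', 'build' ]
```

Note that the hook has to match the full script name: postbuild:js hooks build:js, not build.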
For the sake of brevity, let’s focus on the CSS part.
\n\n\"build:css\": \"postcss -c postcss.config.js ./src/css/*.css -d dist/css\",\n
First, I’m using PostCSS to transpile any custom properties to provide fallback properties for legacy browsers. In effect, this means that the following:
\n\n:root {\n --text-color: #333;\n}\n\nbody {\n color: var(--text-color);\n}\n
Will be transpiled into:
\n\n:root {\n --text-color: #333;\n}\n\nbody {\n color: #333;\n color: var(--text-color);\n}\n
Due to the wonderful way that CSS is progressively enhanced, any browser that does not understand a property will simply ignore it and move on. This means that the occurrence of var(--text-color)
will for example be ignored by Internet Explorer 11 and the previous value (#333
) will still apply since it was declared before.
The following two lines are only run when I want to deploy to production, and this is also where the fun happens in terms of cache busting.
\n\n\"css:uglify\": \"foreach -g 'dist/css/*.css' -x 'uglifycss #{path} --output #{path}'\",\n\"css:hash\": \"hashmark -r -l 8 dist/css/*.css 'dist/css/{name}-{hash}.css'\",\n
The first tool used, uglifycss, takes care of compressing the CSS by removing line breaks and whitespace. Since we’re not concatenating all our stylesheets into one big file, as has been traditional, but rather leveraging the power of HTTP/2 multiplexing, we just run it on each file in place.
The second line is all about cache busting. hashmark
enables us to do this by providing us with a unique file name based on the hash of the file and, more importantly, only changing this hash if the file actually has changed. The flags used are firstly -r, which means replacing whatever file you’re working on with the hash-renamed one, and -l 8
tells hashmark to limit the length of the hash in the filename to eight characters, which will be more than enough for our needs. The pattern {name}-{hash}.css
should be pretty self-explanatory; file.css
would become file-d121a5d4.css
.
We have a few problems with this approach to solve. Since we’re keeping all of our stylesheets as separate files, we could of course manually link them all in our Jekyll templates, but that’s not really maintainable and as soon as we start hashing our files, that strategy goes straight out the window. So how to make Jekyll aware of these dynamically changing files, without us having to poke around manually each time something changes?
\n\nLuckily, Jekyll keeps track of static files.
\n\n\n\n\nA static file is a file that does not contain any front matter. These include images, PDFs, and other un-rendered content.
\n\nThey’re accessible in Liquid via
\nsite.static_files
…
Using this info, we can use the metadata to filter out the files in /dist/css
(i.e. where npm is putting our source files once they’ve been transpiled) and then iterate over each file and output the path. It’ll look something like this:
{% for css in site.static_files %}\n {% if css.path contains \"dist/css\" %}\n <link rel=\"stylesheet\" href=\"{{ site.baseurl }}{{ css.path }}\">\n {% endif %}\n{% endfor %}\n
There’s a caveat with this method, though. This implementation would list any kind of file present in that folder, even those that aren’t legitimately CSS files. In this example, that would never happen, but if you want your solution to be more robust (like if someone haphazardly starts putting PNG files or JavaScript files in your folder), you can also filter using css.extname
. Now, if everything is working correctly, assuming that we have three files in /dist/css
, that we say are named file1.css
, file2.css
and file3.css
, Jekyll would render markup accordingly:
<link rel=\"stylesheet\" href=\"/dist/css/file1.css\">\n<link rel=\"stylesheet\" href=\"/dist/css/file2.css\">\n<link rel=\"stylesheet\" href=\"/dist/css/file3.css\">\n
Awesome! No matter how many files we add to our project, or whatever names they will dynamically get from hashmark, Jekyll will take care of linking them properly for us.
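For clarity, here is the same filtering logic expressed in plain JavaScript; the objects mimic the shape of Jekyll’s static file entries (path and extname), which is my assumption for illustration rather than code from the site:

```javascript
// Mirror of the Liquid loop: keep CSS files under dist/css and emit one
// <link> tag per file, in order.
function cssLinks(staticFiles, baseurl = '') {
  return staticFiles
    .filter((file) => file.path.includes('dist/css') && file.extname === '.css')
    .map((file) => `<link rel="stylesheet" href="${baseurl}${file.path}">`)
    .join('\n');
}

console.log(cssLinks([
  { path: '/dist/css/file1-d121a5d4.css', extname: '.css' },
  { path: '/dist/js/app.js', extname: '.js' },
]));
// → <link rel="stylesheet" href="/dist/css/file1-d121a5d4.css">
```

The extname check is the extra robustness mentioned above: a stray PNG dropped into dist/css would pass the path test but fail the extension test.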
\n\nSimplifying things and throwing out superfluous tools felt really great! The fewer of them I have, the quicker my development environment gets, and I felt that the overall robustness went up a few ticks. There are still some things that I likely will never be rid of, like the cache busting feature, since the benefit is too big (and I just can’t wrap my head around getting caching headers right).
\n\nI hope this might be of some use to someone else than me. The basic principle isn’t really tied to Jekyll (apart from the static files functionality), so you should be able to implement this with whatever tools you choose. If nothing else, it might’ve served as an inspiration to cut down a little in your own tool stack.
\n" }, { "title": "Inverted colors on focused links", "date_published": "2019-02-09T00:00:00+00:00", "id": "https://frippz.se/2019/02/09/inverted-colors-on-focused-links/", "url": "https://frippz.se/2019/02/09/inverted-colors-on-focused-links/", "author": { "name": "Fredrik Frodlund" }, "summary": "Thanks to CSS custom properties, we can now easily make inverted colors on focused links, no matter the colors.", "tags": [ "html", "css", "custom properties", "accessibility" ], "content_html": "Some time ago, I worked on a project where we made digital teaching aids. It was essentially web based e-books. Most of these materials would contain links leading to other parts, and in some instances the text could be inside colored boxes, for example an info box or something along those lines. Did I mention that links could also be inside those colored boxes?
\n\nSince this was teaching material for public schools, we had accessibility requirements to fulfill. This included clear and distinct focus states for users navigating the UI with a keyboard. We chose to invert the colors of links — that is, if the text color was black and the background was white, upon focusing a link the background would turn black and the text white.
\n\n\n\nIt’s a fairly simple setup. The required styling would be accordingly:
\n\nbody {\n background-color: #fff;\n color: #000;\n}\n\na {\n color: currentColor;\n}\n\na:focus {\n background-color: #000;\n color: #fff;\n}\n
Thus far, things aren’t too complex. We also chose to let links have the same color as the text and to rely on their underline to indicate that it was a link.
\n\nNow, about those colored boxes. One scenario could be the following:
\n\n.theme-1 {\n background-color: antiquewhite;\n color: darkred;\n}\n
If we wanted the same effect as above, we would need additional styling for the links:
\n\n.theme-1 a:focus {\n background-color: darkred;\n color: antiquewhite;\n}\n
This would need to be repeated for each theme, which in itself could be a bit tedious.
\n\nWhat if you could have just one set of rules for this focus styling, no matter how many themes you create? Well, thanks to the power of variables in CSS — or as they are actually called: CSS custom properties — we now can.
\n\nConsider the following:
\n\n:root {\n --background: white;\n --text: black;\n}\n\nbody {\n background-color: var(--background);\n color: var(--text);\n}\n
This is our starting point. The neat thing about custom properties is that, like other properties in CSS, they are a part of the cascade. This means that they can either be global, as in the above example where we’ve declared two properties in the :root
selector, or they can be scoped to a selector. Now, let’s style our links.
a {\n color: currentColor;\n}\n\na:focus {\n background-color: var(--text);\n color: var(--background);\n}\n
As stated before, we first make sure that links use the same color as the text, then we start leveraging the power of custom properties. This is the inverting of the colors in action. We’re actually done with the styling of the links. Don’t believe me? Check this out. We’re going to make ourselves a nice colored box. First the markup.
\n\n<div class=\"themed-box box-theme-1\">\n Content here\n</div>\n
Then, we do the styling.
\n\n/* Set up our themed boxes */\n.themed-box {\n background-color: var(--background);\n color: var(--text);\n}\n\n/* A pretty, colored box */\n.box-theme-1 {\n --text: white;\n --background: green;\n}\n
That’s it! See what’s going on? All boxes will get the class name themed-box
, which declares that the background color and the color of the text should be set by the custom properties --background
and --text
respectively. Then, all we need to do is assign the values of those custom properties for each theme we create. Now, without any additional styling for focused links, the inverted focus colors simply follow from each theme’s custom properties.
There’s a CodePen up if you want to play around with this yourself. Happy coding!
\n" }, { "title": "Just a text editor and a few hours", "date_published": "2019-01-31T00:00:00+00:00", "id": "https://frippz.se/2019/01/31/just-a-text-editor-and-a-few-hours/", "url": "https://frippz.se/2019/01/31/just-a-text-editor-and-a-few-hours/", "author": { "name": "Fredrik Frodlund" }, "summary": "Rachel Andrew makes a strong case for the value of learning the basics of the web (HTML and CSS) and how it’s the perfect entry point for aspiring web developers, without scaring them off with complex JavaScript frameworks and insane tool stacks.", "tags": [ "html", "css" ], "content_html": "Rachel Andrew, once again, seriously hits the nail right on the head:
\n\n\n\n\nThere is something remarkable about the fact that, with everything we have created in the past 20 years or so, I can still take a complete beginner and teach them to build a simple webpage with HTML and CSS, in a day. We don’t need to talk about tools or frameworks, learn how to make a pull request or drag vast amounts of code onto our computer via npm to make that start. We just need a text editor and a few hours. This is how we make things show up on a webpage.
\n
I’ve been building interfaces for the web since 1996. I first picked this up when I found out that Netscape came with something called Netscape Composer. It didn’t take me long to realise that there was something beneath that WYSIWYG-like editor that interested me even more; HTML. Fast forward a few years and I had found out about the wonders of CSS (and the horrors of the browser wars of the early 2000s). One thing has remained constant right up until today; the web still consists of HTML and CSS. No matter the tooling, no matter the frameworks, you still end up with these things (and JavaScript, of course).
\n\nLearning the basics is not only a great entry point, as Rachel points out, it’s also a vital skill if you want to become a truly great web developer.
\n" }, { "title": "Mocking actions with buttons", "date_published": "2019-01-21T00:00:00+00:00", "id": "https://frippz.se/2019/01/21/mocking-actions-with-buttons/", "url": "https://frippz.se/2019/01/21/mocking-actions-with-buttons/", "author": { "name": "Fredrik Frodlund" }, "summary": "Sometimes we might want buttons to take us places even in prototypes. This solution helps mock flows in static HTML prototypes.", "tags": [ "html", "javascript", "prototyping" ], "content_html": "A common scenario in my line of work is when I build UI components into complete pages, and somewhere on this page there might be a form. This form might contain a button, and when you click this button you want something to happen. Now, since pattern libraries and HTML prototypes very often are just static HTML, you are limited in what can happen. Naturally, when it comes to regular links, everything works out of the box since all you have to do is link pages together. Buttons, on the other hand, are a different story.
\n\nLet me first state, very clearly, that this should not be used in production. Ever. If you want to link to something – well, then we already have the excellent <a>
element that does that job amazingly well.
With the disclaimers out of the way, let’s get down to business! We want our little prototype button to feature two things;
\n\n1. It should be based on a standard <button> element\n2. It should give us an easy way to enter the URL we want the button to take us to\n\nThat’s really all we need. No need to complicate things, right?
\n\n<button type=\"button\" data-prototype-url=\"link.html\">Click me</button>\n
The type=\"button\"
attribute is optional, since the default type for a button is submit
and that might be just what you want. In order to easily enter the URL in question we’ve got the data-prototype-url
attribute. You can name it however you want, but it might be prudent to use a name that communicates its prototype use case as clearly as possible. Again, we do not want this type of code running on a live web site.
You guessed it. We can’t really do very much with this button unless we enlist the help of good ol’ JavaScript.
\n\ndocument.addEventListener('click', function (event) {\n if (event.target.matches('[data-prototype-url]')) {\n var button = event.target;\n var buttonURL = button.getAttribute('data-prototype-url');\n window.location.href = buttonURL;\n console.log(button.textContent + ' clicked');\n }\n}, false);\n
Since we might have any number of buttons in a page, all that might potentially mock a scenario that takes us away from the current page, I found that the most preferrable method is to attach an event listener to the entire document and use event delegation to trigger the actions. This was something that I’ve picked up from Chris Ferdinandi. The short-short version of the script:
\n\n1. Listen for clicks anywhere in the document\n2. Check whether the clicked element has a data-prototype-url attribute\n3. If it does, send the user to the URL in that attribute\n\nThat’s it! You now have a nifty solution for sending a user to another page by clicking a button, which will enable you to create mock scenarios for user testing or other purposes. If you easily want to test it out, I’ve got a version on Codepen going that you can play around with.
\n\nJust please don’t use it in production, ok?
\n" }, { "title": "Developing websites for Apple Watch", "date_published": "2018-10-24T00:00:00+00:00", "id": "https://frippz.se/2018/10/24/developing-websites-for-apple-watch/", "url": "https://frippz.se/2018/10/24/developing-websites-for-apple-watch/", "author": { "name": "Fredrik Frodlund" }, "summary": "Getting your website to run smoothly on an Apple Watch, because why not?", "tags": [ "html", "css", "apple watch", "responsive design" ], "content_html": "As of WatchOS 5, you are able to render web content on the Apple Watch. Marcus Herrmann did a write-up on the subject this summer and after stumbling over it, I couldn’t resist doing some testing on my own site. Actually, all I did was to add the following to my <head>
:
<meta name=\"disabled-adaptations\" content=\"watch\">\n
Also, I did some quick and dirty experimenting with a (very ugly) media query that should target only Apple Watches. More testing is warranted.
\n\n/* Apple Watch only? How can this possibly come back to bite me in the ass? */\n@media only screen and (max-width: 22em) and (max-height: 357px) {\n\n html {\n font-size: 6.5vmax;\n }\n}\n
It works… ok, I guess? It might need some more work.
\n\nUpdate: It appears that my code blocks look like crap at the moment. I will have to take a look at that at some point…
\n" }, { "title": "How we talk about CSS", "date_published": "2018-10-12T00:00:00+00:00", "id": "https://frippz.se/2018/10/12/how-we-talk-about-css/", "url": "https://frippz.se/2018/10/12/how-we-talk-about-css/", "author": { "name": "Fredrik Frodlund" }, "summary": "CSS has come a really long way in just a few years and it’s no longer that hacky, weird “language” anymore.", "tags": [ "css" ], "content_html": "Rachel Andrew has been working on new material for upcoming talks and she has been thinking about how CSS has been perceived in the web developer community in the past, and how we should perhaps modify our thinking about it in the future, given how far it has come.
\n\n\n\n\nCSS has been seen as this fragile language that we stumble around, trying things out and seeing what works. In particular for layout, rather than using the system as specified, we have so often exploited things about the language in order to achieve far more complex layouts than it was ever designed for. We had to, or resign ourselves to very simple looking web pages.
\n
Back in 1998, when I started working with CSS, I found it to be mostly hacks and me trying to beat browsers into submission (I’m mostly looking at you, Internet Explorer). Now we have modern layout frameworks like Flexbox and CSS Grid, in addition to – as Rachel points out – many other technologies in CSS that might not be as well known. They are, however, very important.
\n\n\n\n\nNo available space? That’s ok. Nothing is going to break. We often don’t know how much content we have, so CSS gives us sizing based on min and max content sizes, allowing items to grow and shrink into their containing boxes. […]
\n\nThese features are part of the Box Alignment, CSS Intrinsic and Extrinsic Sizing, Writing Modes, and Logical Properties and Values specifications. These specifications tie together the individual layout methods into one system, with various methods to create the type of layout we need to see.
\n
While there’s still much that is left to be desired, the technology has come such a long way in just a few years. My day-to-day work has shifted significantly, from trying to make browsers jump through hoops while using CSS in ways it was probably never intended for, to actually letting me work more creatively.
\n\nOne of the challenges over the years has been to communicate the importance of the frontend designer role. CSS is often seen as a “semi-language”, lacking features (interestingly such features that programmers often find useful), and would best be replaced with something else. This might be why we ended up with things like CSS-in-JS, convoluted preprocessor languages and insanities like CSS modules.
\n\n\n\n\nThere is frequently talk about how developers whose main area of expertise is CSS feel that their skills are underrated. I do not think we help our cause by talking about CSS as this whacky, quirky language. CSS is unlike anything else, because it exists to serve an environment that is unlike anything else.
\n
I agree wholeheartedly with this. But I also think that not appreciating how powerful CSS is, and instead turning to convoluted tooling to solve non-existing problems, is not helping either. I fear that when developers refuse to learn about the core strengths of CSS and instead utilise tools that are based on misunderstandings of how CSS is fundamentally working, that becomes a problem. The work that people like Rachel are doing is very important and I’d hate to see that be undermined. I suppose time will tell.
\n" }, { "title": "Getting JSON Feed in Jekyll", "date_published": "2017-05-18T08:30:00+00:00", "id": "https://frippz.se/2017/05/18/getting-json-feed-in-jekyll/", "url": "https://frippz.se/2017/05/18/getting-json-feed-in-jekyll/", "author": { "name": "Fredrik Frodlund" }, "summary": "How to create a Liquid template for JSON Feed", "tags": [ "web", "tech", "jekyll" ], "content_html": "With regard to the previous post, here’s how I cooked up a Jekyll template for JSON Feed:
\n\n---\nlayout: null\n---\n{\n \"version\" : \"https://jsonfeed.org/version/1\",\n \"title\" : \"{{ site.title }}\",\n \"home_page_url\" : \"{{ site.url }}\",\n \"feed_url\" : \"{{ \"/feed.json\" | prepend: site.baseurl | prepend: site.url }}\",\n \"author\" : {\n \"url\" : \"https://twitter.com/frippz\",\n \"name\" : \"Fredrik Frodlund\"\n },\n \"icon\" : \"{{ \"/apple-touch-icon.png\" | prepend: site.baseurl | prepend: site.url }}\",\n \"favicon\" : \"{{ \"/favicon-32x32.png\" | prepend: site.baseurl | prepend: site.url }}\",\n \"items\" : [\n {% for post in site.posts limit:10 %}\n {\n \"title\" : {{ post.title | jsonify }},\n \"date_published\" : \"{{ post.date | date_to_rfc822 }}\",\n \"id\" : \"{{ post.url | prepend: site.baseurl | prepend: site.url }}\",\n \"url\" : \"{{ post.url | prepend: site.baseurl | prepend: site.url }}\",\n {% if post.external-link %}\n \"external_url\" : \"{{ post.external-link }}\",\n {% endif %}\n \"author\" : {\n \"name\" : \"Fredrik Frodlund\"\n },\n \"content_html\": {{ post.content | jsonify }}\n }{% if forloop.last == false %},{% endif %}\n {% endfor %}\n ]\n}\n
It’s a pretty quick and dirty port of my feed.xml template, but it seems to work. You can get the above code snippet directly on Github as well, but the syntax highlighter doesn’t like the YAML front matter and Liquid template tags too much, so it looks a bit ugly.
\n\nUpdate (2017-05-19): I’ve updated the code snippet a bit with more Jekyll tags for a more dynamic solution (like site.url
and so on). Liquid template offers a great filter called jsonify
that I’m using wherever applicable.
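If you’re curious what jsonify actually does to a value: it serializes it into a JSON literal, escaping quotes and newlines so the result can be dropped straight into the feed. In JavaScript terms it behaves much like JSON.stringify (a rough equivalence for illustration, not Liquid’s real implementation):

```javascript
// Rough JavaScript equivalent of Liquid's jsonify filter: produce a JSON
// literal with quotes and newlines escaped, ready to embed in a document.
const jsonify = (value) => JSON.stringify(value);

console.log(jsonify('He said "hi"'));
// → "He said \"hi\""
```

This is why the template can write "title" : {{ post.title | jsonify }} without its own surrounding quotes: the filter supplies them, already escaped.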
Brent Simmons and Manton Reece:
\n\n\n\n\nWe — Manton Reece and Brent Simmons — have noticed that JSON has become the developers’ choice for APIs, and that developers will often go out of their way to avoid XML. JSON is simpler to read and write, and it’s less prone to bugs.
\n\nSo we developed JSON Feed, a format similar to RSS and Atom but in JSON. It reflects the lessons learned from our years of work reading and publishing feeds.
\n
My initial reaction to this was “But, why?” Then I read the spec, which is really nicely written for us humans, and I realized that JSON really is very prominent in many tech stacks these days. So, why the hell not?
\n\nJust for fun, I decided to spend ten minutes to try and implement a JSON Feed of my own for frippz.se (might be full of bugs). Looking around the web now, a lot of people seem excited about this new spec, so why not jump on the bandwagon early this time?
\n" }, { "title": "That whole “blog” thing", "date_published": "2017-03-28T00:00:00+00:00", "id": "https://frippz.se/2017/03/28/that-whole-blog-thing/", "url": "https://frippz.se/2017/03/28/that-whole-blog-thing/", "author": { "name": "Fredrik Frodlund" }, "summary": "Twitter threads are awful and blogs are awesome", "tags": [ "blogging", "web" ], "content_html": "In the beginning of March, I blurted this out in pure frustration on Twitter:
\n\n\n\nPeople posting mile long threads on Twitter; get a damn blog or something.
— Fredrik Frodlund (@frippz) March 5, 2017
Then, a few weeks later, a gentleman named Paul Lloyd tweeted:
\n\n\n\nHere’s a crazy idea: threaded tweets, but logged together, on a single webpage. A ‘weblog’, if you will.
— Paul Lloyd (@paulrobertlloyd) March 21, 2017
Great minds think alike, yes? 😉 Jeremy Keith shared our view as well. I think this is great for many reasons. For my own part, I created this blog (or journal as I so “hipstery” chose to call it) just to have something fun to tinker with all the time. Plus, I really wanted to get in on this whole static site generator craze everyone was talking about. As a side bonus I got to learn and refresh a whole slew of skills, like setting up and securing a server on Digital Ocean, making build scripts that trigger when I push to Git and more. It’s plenty of fun and it allows me to play with things that I don’t always necessarily get to do at work.
\n\nThere’s more to it, though. Someone once said that owning your content is a pretty good thing. I’m sad to say that I can’t remember who said it, but it’s a good thing nevertheless. I honestly can’t understand why some great authors chose to publish on Medium, but to each his own, I guess. Reading stuff on Medium is fine, I suppose. Still, their highlighting feature is really weird and distracting.
\n\nIn any case, my initial tweet on the matter was borne out of frustration. I was frustrated that people who had great and interesting things to say chose to chop them up into 140-character bits and stream them intermixed with other people’s more or less useful thoughts. Threading on Twitter is horribly broken at times, and if you instead choose to publish your thoughts on something more user-friendly than a Twitter thread (and hey, you can always tweet about that post afterwards), then I salute and thank you!
\n\nKeep posting great stuff, I will keep reading it and share it with others.
\n" }, { "title": "No more prefixing", "date_published": "2017-03-12T00:00:00+00:00", "id": "https://frippz.se/2017/03/12/no-more-prefixing/", "url": "https://frippz.se/2017/03/12/no-more-prefixing/", "author": { "name": "Fredrik Frodlund" }, "summary": "Time to drop all these vendor prefixes in my CSS", "tags": [ "css", "browser support", "progressive enhancement" ], "content_html": "As I was tinkering with the code for this site, as one does on a Sunday afternoon, I came to realise that I hadn’t done some proper, old-school browser testing in Internet Explorer for a while. I had been doing a lot of refactoring in my gulpfile.js
and exchanged some packages for others, most notably the package for minifying my CSS. I had been using gulp-cssnano for quite a while without giving it any thought (I can’t even remember which package I had used prior to that one).
Opening the site up in Internet Explorer 9 on Windows 7 revealed that a few things like font declarations didn’t work at all and after some debugging it turned out that cssnano was indeed the culprit. I didn’t delve too deeply into the cause of the issue, but instead ended up switching to gulp-clean-css, since it did the same thing without breaking IE compatibility. Good enough!
\n\nWhile debugging in IE, something else did catch my eye, though. Man, there sure are a lot of prefixed properties in here! Most of them were for flexbox, which I’m only using in a few places, like the footer and the main layout, to get a sticky footer on pages with less content. Dropping these would mean that IE 10, to name one, would not get the same slick layout (well, slick-ish, you know…) as modern browsers. And you know what? That’s just fine. The upside was that I got to trim away a few packages in my node_modules
folder, not to mention that I got to delete a few extra lines in my gulpfile.js
. That felt really good!
I did some checking and since I use display: inline-block;
on elements that would normally be flex items in supporting browsers, so that they at least appear next to each other, things looked pretty OK. Nothing looked too broken and all the content was fully accessible.
So what am I getting at with all this rambling, then? Well, most vendor prefixes are on their way out the door and to my knowledge, new prefixes are not coming in anymore. If we just apply some progressive enhancement thinking to our CSS as well, we can cut down on complexity in our tool stacks (and you know that’s normally not a thing these days!), plus we’re also cutting down on the amount of code our users are downloading.
\n\nAlmost all major browsers are evergreen today and feature support is rapidly progressing across the board. Vendor prefixes were an unfortunate hiccup along the way that got picked up by developers and misused. It’s time we left that stuff behind us.
\n" }, { "title": "On browser support for evergreen websites", "date_published": "2017-01-18T00:00:00+00:00", "id": "https://frippz.se/2017/01/18/on-browser-support-for-evergreen-websites/", "url": "https://frippz.se/2017/01/18/on-browser-support-for-evergreen-websites/", "author": { "name": "Fredrik Frodlund" }, "summary": "Coping with bleeding edge features in CSS and evergreen browsers", "tags": [ "css", "browser support", "progressive enhancement" ], "content_html": "\n\n\n\n\n“Pixel perfect” meant “this website looks like this graphic”. Designers reacted in horror that users might increase their text size. Browser compatibility ranged from “Best Viewed in Internet Explorer” badges to the development of two separate sites, one for each browser in order to ensure the same design was shown to each.
\n
I remember those days back in the 90s and early 2000s. While it was fun to tackle such challenges, I still shudder thinking back knowing what I know now.
\n\n\n\n\nWe don’t have 99% browser support for border-radius, or for pretty much anything introduced in the last few years. If you think you need 99% support to use any CSS you probably had best stop using CSS altogether.
\n
So true. Applying progressive enhancement in all your work makes your job a lot easier in the end. The challenge is selling the idea to your client. In my experience, transparency is key. Be open with how you communicate your ideas and what benefits it will offer. Telling them that a majority of their users will receive the high-end experience, but at the same time a wider audience can be reached, helps a lot.
\n" }, { "title": "Vim and Apple’s Touch Bar", "date_published": "2017-01-13T00:00:00+00:00", "id": "https://frippz.se/2017/01/13/vim-and-apples-touch-bar/", "url": "https://frippz.se/2017/01/13/vim-and-apples-touch-bar/", "author": { "name": "Fredrik Frodlund" }, "summary": "Ways to handle the missing escape key in Vim on the new MacBook Pro with Touch Bar", "tags": [ "tools", "vim", "apple", "macbook pro" ], "content_html": "Since we basically lost the escape key with the new MacBook Pro models that came out at the end of 2016, Vim users needed a solid backup plan if they were to get one of the new laptops. Harry Roberts has put together some really nice things to consider with regards to the escape key:
\n\n\n\n\nFor almost as long as I’ve been using Vim–which is a long time now–I’ve been using
\njj
and jk
to leave Insert mode. These mappings are on the Home Row, so always easy to reach, and the letter pairs very rarely (if ever) occur in the English language. If there were words that contained jj
and jk
next to each other then I would be flung straight into Normal mode any time I tried to write them. (The reason I haven’t mapped kk
to Escape is because it does occur within words, e.g. bookkeeper.)
What did I do? Well, as a macOS Sierra user, I just followed the advice of a former colleague and remapped my Caps Lock key.
\n\n\n\nUpdate: As of macOS Sierra 10.12.1, the Caps Lock -> Escape remapping can be done natively in the Keyboard System Preferences pane! To remap without any 3rd party software, do the following:
\n\n\n
\n\n 
- Open System Preferences and click on ‘Keyboard’
\n- Click on ‘Modifier Keys…’
\n- For ‘Caps Lock (⇪) Key’, choose ‘⎋ Escape’
\n- Click ‘OK’
\n
" }, { "title": "Ending the dyslexia legibility experiment", "date_published": "2016-11-11T00:00:00+00:00", "id": "https://frippz.se/2016/11/11/ending-the-dyslexia-experiment/", "url": "https://frippz.se/2016/11/11/ending-the-dyslexia-experiment/", "author": { "name": "Fredrik Frodlund" }, "summary": "I’m ending the experiment for improved legibility for dyslexics", "tags": [ "accessibility", "dyslexia" ], "content_html": "
Back in February, I decided to do a little experiment right here on my journal web site, whereby I added a simple JavaScript function to toggle styling that should increase the legibility for people with dyslexia. Partly, it was for me to do some JavaScript hacking, but also to try and get some feedback about the validity of the write up “A Typeface For Dyslexics? Don’t Buy Into The Hype”.
\n\nNine months have passed since I published that post, and I feel it’s time to remove the toggle from my web site. While I did get some amount of response from people suffering from dyslexia, I felt it wasn’t quite enough. I’ll update the original post in due time and move the experiment into its own lab page.
\n\nI’d like to thank those who gave feedback and helped me do this little experiment (you know who you are 😊).
\n" }, { "title": "Cache busting with Jekyll and Gulp", "date_published": "2016-02-18T00:00:00+00:00", "id": "https://frippz.se/2016/02/18/cache-busting-with-jekyll-and-gulp/", "url": "https://frippz.se/2016/02/18/cache-busting-with-jekyll-and-gulp/", "author": { "name": "Fredrik Frodlund" }, "summary": "How to do automatic cache busting with Jekyll and Gulp", "tags": [ "html", "javascript", "css", "gulp", "jekyll", "cache busting" ], "content_html": "Since I love tinkering with my journal, updates to both my stylesheets and JavaScript files are quite frequent. Up till now, I’ve just let Gulp generate my files and then I included this in my Jekyll templates in a hard-coded fashion. But it then hit me that maybe I should do something to make sure that recurring visitors always get the latest version of my JavaScript and CSS, if they’ve been changed.
\n\nIn short, cache busting is a way to make sure that a client downloads your very latest version of a file. The simplest way to do this is to either give the file a random name, like ab459ef32da.css
or, in my opinion, the nicer variant of adding a query string along the lines of styles.css?version=ab459ef32da
. Then, each time you do a change to this file, you make sure that the random string changes, and you’ve successfully busted that cache.
Update (2016-02-26): Apparently, according to GTmetrix, some proxies do not cache static resources with a query string in the URL. They recommend that you encode the unique string into the file names themselves.
\n\nAs mentioned, I use Gulp to minify and concatenate both my CSS and JavaScript. The output files are hardcoded in my Gulpfile.js
so all Jekyll needed were the paths to both files in the templates, and we were done.
<link rel=\"stylesheet\" href=\"/gui/css/styles.css\">\n<script src=\"/gui/js/main.js\" defer></script>\n
In order to get some cache busting going, I needed the following:
\n\nSince I only want the string to change when I’ve done something to either the JavaScript or CSS, the best approach would be to use the MD5 checksum from each generated file to indicate when something has changed. So I needed some sort of Gulp plugin to grab the MD5 and then do something with it. But first, we need to sort out how to get this data into Gulp.
\n\nJekyll has this nifty feature that lets you define custom data.
\n\n\n\n\nIn addition to the built-in variables available from Jekyll, you can specify your own custom data that can be accessed via the Liquid templating system.
\n\nJekyll supports loading data from YAML, JSON, and CSV files located in the
\n\n \n_data
directory.
Perfect! This will allow us to pass information into our Jekyll templates. Reading on in the Jekyll documentation lets us know that if we create _data/cache_bust.yml
, the contents of this file will be available in Jekyll via {{ site.data.cache_bust }}
. If the cache_bust.yml
just contains a string, the aforementioned Jekyll tag will output just that. That’s all we need for this job.
There’s a plethora of cache busting plugins for Gulp over at npmjs.com. But all I need is something that grabs the MD5 checksum of my generated files and writes that string to a file in the _data
folder. The closest match for me turned out to be gulp-hashsum.
So this is my Gulp task for building the CSS.
\n\n// Process stylesheets\ngulp.task('css', function () {\n return gulp.src(paths.css)\n .pipe(plumber({\n errorHandler: onError\n }))\n .pipe(sourcemaps.init())\n .pipe(autoprefixer({\n browsers: ['last 2 versions'],\n cascade: false\n }))\n .pipe(concat(paths.cssOutput))\n .pipe(cssnano())\n .pipe(gulpif(!isProduction, sourcemaps.write('.')))\n .pipe(gulp.dest(paths.cssDest));\n});\n
In short, I’m using Gulp to do autoprefixing, then concatenate all my files, minify them, and lastly, if I’m not on production, add sourcemaps. So first we install gulp-hashsum
.
$ npm install gulp-hashsum --save-dev\n
Require it in our Gulpfile.js
var hashsum = require(\"gulp-hashsum\");\n
Then take a quick look at the code example in the README.
\n\ngulp.src([\"app/**/*.js\"]).\n pipe(hashsum({dest: \"app\"}));\n
Hmm, ok. We don’t actually want to specify which files to get the checksum from. The simplest approach would be to pipe hashsum()
in my task right before we write the file to disk.
// Process stylesheets\ngulp.task('css', function () {\n return gulp.src(paths.css)\n .pipe(plumber({\n errorHandler: onError\n }))\n .pipe(sourcemaps.init())\n .pipe(autoprefixer({\n browsers: ['last 2 versions'],\n cascade: false\n }))\n .pipe(concat(paths.cssOutput))\n .pipe(cssnano())\n .pipe(hashsum({filename: './_data/cache_bust_css.yml', hash: 'md5'}))\n .pipe(gulpif(!isProduction, sourcemaps.write('.')))\n .pipe(gulp.dest(paths.cssDest));\n});\n
There we go! Right after cssnano()
(on line 14) we grab the MD5 hash and write it to _data/cache_bust_css.yml
. Let’s give it a go and see what our output is.
6ebeded38c4fc6c1b111172052b6ca17 ../src/css/styles.css\n
Oh. That’s not quite what we were after, but the fact is that a checksum file is supposed to look like this. No matter. I think we can do some magic in Jekyll to get what we want. As I mentioned earlier, all we need is a unique string to properly identify when a file has changed, so we can probably just use one of Jekyll’s output filters to truncate down to the first 10 characters. Luckily enough, there’s something called truncate
that does just that. Damn, I love Jekyll (or Liquid)!
<link rel=\"stylesheet\" href=\"{{ site.baseurl }}/gui/css/styles.css?version={{ site.data.cache_bust_css | truncate: 10, '' }}\">\n
Let’s break down what I just did. Besides adding the ?version=
query string to the href
attribute, I also added {{ site.data.cache_bust_css }}
which basically means output the content from the file _data/cache_bust_css.yml
right into the template. I then added the truncation filter truncate: 10, ''
to just show the first 10 characters from the file. The trailing comma and empty single quotes are just to make sure that truncate
doesn’t add an ellipsis after the string, since that’s the default behavior.
All this will result in a nice, unique string appended to our files, and we didn’t have to muck about with special file names or how to pass that info into Jekyll.
\n\n<link rel=\"stylesheet\" href=\"/gui/css/styles.css?version=362887da69\">\n
Looking good! Just rinse and repeat for the JavaScript task in Gulpfile.js
and we’re done!
This morning, Heydon Pickering posted a link to an article detailing how special typefaces for dyslexics basically don’t work as expected.
\n\n\n\nFonts “for dyslexics” is not inclusive design. More like target marketing. And it doesn't work. https://t.co/HS8JxwPpt6
— Heydon (@heydonworks) February 10, 2016
Intrigued by this, as I have seen a few of these special typefaces and always wondered how they would help people with dyslexia, I had to read on. What caught my eye next was the proposed alternative to relying on special typefaces alone.
\n\n\n\n\nIs there anything that can be done through type to make reading easier for dyslexics? Yes. Studies have shown that dyslexics read up to 27% faster when text lines are very short, up to 18 characters per line, compared to the 60-65 characters of standard text. Putting as much space as possible between letters helps dyslexics too.
\n
Sounds simple enough. With a bit of extra styling, this could be done easily, provided your front end code is up to snuff. Your front end code is up to snuff, right?
\n\nThis is precisely why I keep a journal online: doing cool experiments (besides incoherent rambling, of course). With a few quick additions to my stylesheets and some quick and dirty JavaScript hacking, I managed to deploy a solution for my experiment in no time. As per the article’s suggestion, I wanted to do a few simple things:
\n\nAs one would expect, this wouldn’t require more than a few lines of extra code in the stylesheets. In addition, I also added some JavaScript to enable users to toggle the functionality.
\n\nThe basic concept involves just setting a modifier class on the <body>
element. This would increase font-size
, line-height
and letter-spacing
. In addition, the container for my content, .landmark-content
as it is called, would have its width reduced in order to provide shorter lines of text.
/**\n * Dyslexic mode toggle\n */\n.dyslexic-mode {\n font-size: 1.4em;\n letter-spacing: .25em;\n line-height: 2;\n}\n.dyslexic-mode .landmark-content {\n max-width: 26em;\n}\n
I also added some quick styling for a <button>
to be placed inside my banner region, to allow toggling of my dyslexics mode.
/* Toggle button */\n.toggle-dyslexic-mode {\n font-size: .65em;\n margin-top: 0;\n position: absolute;\n top: 1.5em;\n right: 0;\n}\n
JavaScript certainly isn’t my strongest skill, but I manage to get by. All we need to do in order to toggle our dyslexics mode is to toggle the modifier class on the <body>
. To get a little progressive enhancement into the mix, I also added the toggle button with JavaScript. This of course means that if JavaScript is not available to the user, they can’t toggle the dyslexics mode. Then again, we won’t have a silly button in the banner region that does nothing.
/**\n * Toggle dyslexic mode\n */\nfunction dyslexicMode() {\n\n // Place button inside role=\"banner\"\n var toggleContainer = document.querySelector('[role=\"banner\"] .landmark-content');\n\n // Create toggle button\n toggleContainer.insertAdjacentHTML('beforeend', '<button type=\"button\" class=\"toggle-dyslexic-mode\" data-text-original=\"Enable dyslexic mode\" data-text-swap=\"Disable dyslexic mode\">Enable dyslexic mode</button>');\n\n // Cache button selector\n var dyslexicButton = document.querySelector('.toggle-dyslexic-mode');\n\n // Function to toggle class and swap text on button\n function toggleDyslexicMode() {\n // Toggle the class on <body>\n document.body.classList.toggle('dyslexic-mode');\n\n // Swap text on <button>\n if (dyslexicButton.getAttribute(\"data-text-swap\") == dyslexicButton.innerHTML) {\n dyslexicButton.innerHTML = dyslexicButton.getAttribute(\"data-text-original\");\n } else {\n dyslexicButton.setAttribute(\"data-text-original\", dyslexicButton.innerHTML);\n dyslexicButton.innerHTML = dyslexicButton.getAttribute(\"data-text-swap\");\n }\n }\n\n // Swap class & text on click\n dyslexicButton.addEventListener(\"click\", toggleDyslexicMode, false);\n}\n\ndyslexicMode();\n
Note that I chose to do this with vanilla JavaScript, without any jQuery, because I really need to kick that awful habit of mine. As I said, this is pretty quick and dirty, so browser support is basically Internet Explorer 10 and up. Doing this with jQuery would likely mean a few fewer lines of code (but then you’d also need to pull in the library). Since I employ the mustard cutting method for my journal, this code would never run in legacy browsers anyway.
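For the curious, a mustard-cutting test can be as small as checking for a couple of modern APIs before any enhancement code runs. Here is a sketch, written as a function of its dependencies so it can be exercised outside a browser; the exact checks are my own illustrative picks, not necessarily the ones this site uses:

```javascript
// "Cutting the mustard": gate enhancements behind a minimum capability
// test, so legacy browsers simply get the un-enhanced page.
function cutsTheMustard(doc, win) {
  return 'querySelector' in doc && 'addEventListener' in win;
}

// In the page itself you would write something like:
//   if (cutsTheMustard(document, window)) { dyslexicMode(); }
```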
\n\nRight now, users would need to toggle the function on each page visit. Nothing is being remembered, even within the same session. Hey, it’s a beta. Also, I’ve eyeballed the typographic tweaks and would need feedback from real people with dyslexia, in order to improve this or to even validate that this is something viable.
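Persisting the choice would only take a few extra lines. A sketch using localStorage; the storage key and function names are mine, and the functions take their dependencies as arguments purely to keep the sketch testable:

```javascript
// Sketch: remember the dyslexic-mode toggle across visits.
// 'dyslexic-mode' as a storage key is an arbitrary choice for this example.
var STORAGE_KEY = 'dyslexic-mode';

function saveDyslexicMode(storage, enabled) {
  storage.setItem(STORAGE_KEY, enabled ? 'on' : 'off');
}

function restoreDyslexicMode(storage, body) {
  if (storage.getItem(STORAGE_KEY) === 'on') {
    body.classList.add('dyslexic-mode');
  }
}

// In the page: restoreDyslexicMode(window.localStorage, document.body)
// on load, and saveDyslexicMode(...) from inside the toggle handler.
```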
\n\nIf you wish to give me feedback on this, feel free to hit me up on Twitter!
\n\nUpdate (2016-02-10): I’ve added two screenshots to showcase the difference between the two modes, should you have any issues toggling the looks of my own journal.
\n\n\n\n\n\nUpdate (2016-02-11): Kseso did an alternative solution using only HTML and CSS. While my current solution does use modern DOM APIs, it can pretty easily be adapted to support legacy browsers as well. Kseso’s solution relies on the :checked
pseudo-class, which limits the implementation possibilities a bit and won’t work in IE8 and below. Depending on your particular situation, this isn’t necessarily a problem.
Update (2016-11-11): As per this post, I’ve now disabled the toggle button on my own site and will be updating with a separate lab page to showcase the functionality.
\n" }, { "title": "Fixing blurry text on hardware accelerated elements", "date_published": "2016-02-04T00:00:00+00:00", "id": "https://frippz.se/2016/02/04/fixing-blurry-text-on-hardware-accelerated-elements/", "url": "https://frippz.se/2016/02/04/fixing-blurry-text-on-hardware-accelerated-elements/", "author": { "name": "Fredrik Frodlund" }, "summary": "A quick fix to prevent blurry text when applying translate3d on elements", "tags": [ "css3", "html5", "animation" ], "content_html": "A customer project I was working on had the need for animated bubbles with text in them. They were used as information overlays and needed to move around smoothly. To accomplish this, all you need to do is apply transform: translate3d(0,0,0);
to the element in question to enable hardware acceleration in most modern browsers. This, however, caused another issue; blurry text.
Doing a little bit of research revealed that for example Chrome and Safari transforms vector based text into textures when you apply translate3d
, which in turn risks causing blurry text on the elements in question. The fix seems suspiciously simple.
.element-with-blurry-text {\n filter: blur(0);\n}\n
This has appeared to have worked nicely in at least Safari 9 and Chrome 48. Some people have recommended to also add transform: translateY(0);
as well, but I have been unable to confirm that this has any effect whatsoever. Your mileage may vary.
As of OS X Yosemite, the built-in version of ZSH is 5.0.5. If you, like me, are a fan of the latest and greatest, you might be tempted to use the Homebrew version of ZSH instead of the default system one. As of the writing of this post, the latest version of ZSH is 5.1. Here’s how you install and set it as the default shell.
\n\nThis very short guide assumes that you’re already familiar with Homebrew and have it installed. Once that’s sorted, install ZSH.
\n\n$ brew install zsh\n
Edit /etc/shells
to add a new entry for the Homebrew ZSH.
$ sudo vim /etc/shells\n
At the end of the file add /usr/local/bin/zsh
, which is the path to the Homebrew binary for ZSH. Your /etc/shells
should look like this:
# List of acceptable shells for chpass(1).\n# Ftpd will not allow users to connect who are not using\n# one of these shells.\n\n/bin/bash\n/bin/csh\n/bin/ksh\n/bin/sh\n/bin/tcsh\n/bin/zsh\n/usr/local/bin/zsh\n
Now we need to set our Homebrew ZSH as the default shell.
\n\n$ chsh -s /usr/local/bin/zsh\n
Now open up a new terminal, and we’re done! Welcome to the bleeding edge of ZSH!
\n" }, { "title": "Muddying the waters of progressive enhancement", "date_published": "2015-06-30T00:00:00+00:00", "id": "https://frippz.se/2015/06/30/muddying-the-waters-of-progressive-enhancement/", "url": "https://frippz.se/2015/06/30/muddying-the-waters-of-progressive-enhancement/", "author": { "name": "Fredrik Frodlund" }, "summary": "Musings on how the term “progressive enhancement” are losing its meaning to some people, causing confusion.", "tags": [ "progressive enhancement", "javascript" ], "content_html": "I did some work on a project for a client yesterday, where I was working on some specific JavaScript based functionality. By some weird chance, the JavaScript didn’t execute due to some weird error and nothing was showing up in the browser console. Later on it was found to be Webpack that didn’t build the file, hence no JavaScript. Anyway, this little event prompted me to post the following on Twitter:
\n\nWhy is my JavaScript not doing its stuff?! Oh, it’s not executing at all…
See now kids why progressive enhancement is important? (^ω^)
— Fredrik Frodlund (@frippz) June 29, 2015
\n\nApparently, this just rubbed some people the wrong way.
\n\n@frippz PE is much more than "needs to work without JS" though.
— Anders Ekdahl (@andersekdahl) June 29, 2015
\n\nWhich is absolutely correct, which Christian Heilmann has pointed out very well. But you can’t always get everything you want into 140 characters. Then came the tweet that set off about an hour’s worth of what I’ve come to loathe; arguing over Twitter:
\n\n@frippz And you can certainly be doing PE but still require JS.
— Anders Ekdahl (@andersekdahl) June 29, 2015
\n\nNeedless to say, I disagreed with this. Not in full, mind you. You certainly can and should apply progressive enhancement to your JavaScript as well, but I really see little point in drawing your line there and call it a day. What ensued was a discussion on what progressive enhancement should mean. Or rather, what we thought it meant.
\n\nBack in 2008, Aaron Gustafson wrote for A List Apart on progressive enhancement.
\n\n\n\n\nGetting into the progressive enhancement mindset is quite simple: just think from the content out. The content forms the solid base on which you layer your style and interactivity. If you’re a candy fan, think of it as a Peanut M&M:
\n\n \n\nStart with your content peanut, marked up in rich, semantic (X)HTML. Coat that content with a layer of rich, creamy CSS. Finally, add JavaScript as the hard candy shell to make a wonderfully tasty treat (and keep it from melting in your hands).
\n
This is a great description of what progressive enhancement is, and even if this article is coming up on being seven years old, I see no reason why this definition should change. The key here is the rock solid foundation; the content. You can’t base this on JavaScript and call this your baseline. Take away the JavaScript (which still happens all the time), and you’re left with an empty page. Not very rock solid.
\n\nI do realise that we’re drifting into the deep waters of semantics and that while Anders do have a point of sorts, that progressive enhancement can certainly be applied to just JavaScript, the underlying idea of a rock solid foundation goes out the window. What’s worse to me is the confusion this causes.
\n\nImagine the following scenario, if you will. Developers express an intention to develop a web site using progressive enhancement. Jubilations all around!
\n\n\n\n\nDeveloper 1: “Alright! We’re going to start with a rock solid foundation of semantic HTML, then we’ll add styling using CSS to carefully…”
\n\nDeveloper 2: “Uh…”
\n\nDeveloper 1: “What?”
\n\nDeveloper 2: “That’s not what progressive enhancement is. We need to require JavaScript for this.”
\n
I don’t know. Maybe the core problem is the semantics and like Jason Garber says: maybe we should just call it Responsible Web Design. While I don’t feel that core functionality without JavaScript is some kind of “baggage”, maybe there’s a point to it. I just don’t feel that it is very responsible to scrape the icing off the layer cake and ignore the rest of the layers.
\n" }, { "title": "Maybe some developers are just plain lazy", "date_published": "2015-05-16T00:00:00+00:00", "id": "https://frippz.se/2015/05/16/maybe-some-developers-are-just-plain-lazy/", "url": "https://frippz.se/2015/05/16/maybe-some-developers-are-just-plain-lazy/", "author": { "name": "Fredrik Frodlund" }, "summary": "Some quick thoughts about why tooling has become such a big thing in modern web development and the cost it brings.", "tags": [ "tools", "musings", "rant" ], "content_html": "In the wake of the news that Facebook has put out a new product that allows iPhone users to read news articles without leaving Facebook, several prominent characters on the web have put in their two cents on the matter.
\n\nPeter-Paul Koch was, as always, spot on:
\n\n\n\n\nThe movement toward toolchains and ever more libraries to do ever less useful things has become hysterical, and with every day that passes I’m more happy with my 2006 decision to ignore tools and just carry on. Tools don’t solve problems any more, they have become the problem. There’s just too many of them and they all include an incredible amount of features that you don’t use on your site — but that users are still required to download and execute.
\n
The trend is hard to miss. There’s an insane amount of tools and frameworks out there and using them is more the rule than the exception. PPK asks himself the question of why this is:
\n\n\n\n\nWhy all these tools? I see two related reasons: emulating native, and the fact that people with a server-side background coming to JavaScript development take existing tools because they do not have the training to recognise their drawbacks. Thus, the average web site has become way overtooled, which exacts a price when it comes to speed.
\n
While I absolutely do not disagree with his conclusions, sometimes the answer seem to be even more simple, and depressing. Based on what I’ve seen throughout my career as a front end web developer, the reason often is that developers are just plain lazy. With the advent of libraries and frameworks such as jQuery, Twitter Bootstrap, and not to mention the torrent of CSS preprocessors such as LESS and SASS, developers have not only become disconnected from how to properly build something for the web on their own, some of them have also turned horribly lazy.
\n\nFirst of all, let’s be clear about a few things. Just because you use a framework such as jQuery does not automatically make you lazy. Neither does the use of even Twitter Bootstrap. They all have their uses, and when used properly, they can save a developer loads of time. The problems start to come when you as a developer start to default to dropping in whatever framework comes to mind just because you couldn’t be bothered to give the problem at hand a few extra rounds of brain time. I’ve seen projects where hapless developers have thrown in jQuery UI (and I really mean everything that said framework has to offer including the kitchen sink) just to get a few tabs in their UI.
\n\nThe same can be said for something like the CSS preprocessor SASS. While I personally are not very interested in their use and fail to see what good they are, they can be used responsibly, which of course is close to impossible since they open up for misuse at every turn. In either case, they can save a developer a lot of time, but often that comes at the cost overly bloated code instead. Not always, but very often.
\n\nNow, a lot of developers would surely scream bloody murder at me calling them lazy, citing insane product owners, psychotic project managers and ultra-tight deadlines as reasons for their use of all these frameworks. Maybe, maybe not. I believe, that just as frameworks and preprocessor tools open up for way too many ways to bloat you product, so does the aforementioned circumstances. They are all a source of stress for us as developers and it makes it damn hard not to fall into that trap of throwing tools at the problem.
\n\nThe very uncomfortable truth might just be that you lack the proper knowledge to effectively solve the problem at hand. Guess what, that’s totally fine. It happens all the time to most developers. No one is perfect and what really makes you a professional is what you choose to do when faced with such problems. Maybe take a step back, admit that you don’t have the right tools for the problem and that you need to acquire them before continuing.
\n\nThe thing is that the tools I’m talking about can’t be downloaded as a zip file from some web site.
\n" }, { "title": "Everyone uses JavaScript, right?", "date_published": "2015-04-27T00:00:00+00:00", "id": "https://frippz.se/2015/04/27/everyone-uses-javascript-right/", "url": "https://frippz.se/2015/04/27/everyone-uses-javascript-right/", "author": { "name": "Fredrik Frodlund" }, "summary": "Although some people would like to think otherwise, there are cases out there when JavaScript just isn’t available.", "tags": [ "javascript" ], "content_html": "Last week, a link came up in my Twitter feed that resonated very well with me, as is often the case when it comes to posts and articles championing the practice of progressive enhancement and responsible use of JavaScript. The article in question was titled “Everyone has JavaScript, right?”
\n\nThis morning, Aaron Gustafson linked to a comment on Reddit from a discussion thread regarding that page, one that resonated equally well with me. For the sake of posterity, I’ve chosen to quote the comment in its entirety here.
\n\n\n\n\n\n\n\nWhy is this difficult?
\nBecause it’s not a blog full of content - it’s a revolutionary interactive animated graphical UI paradigm which merely happens to deliver textual content to users.
\n\nThey aren’t really on your site to read your article or check what time their train leaves - they’re really there to marvel at your buttery-smooth, hardware-accelerated 60fps animations and 1337 client-side javascript skillz that mean you can browse the entire site without ever once touching the server after the first page-load… just as long as you don’t mind that first page-load being 3MB in size, crapping out on unreliable mobile connections and taking whole seconds between DOM-ready and the UI actually appearing.
\n\nBut it’s ok, because the ToDo app I wrote to test this approach performed pretty well with just me and my mum using it, and I don’t care whether Google indexes it or not or whether blind people can see it because fuck them - they should just get some eyes, amirite?
\n\nLikewise anyone who ever wants to consume my content on a device I haven’t explicitly allowed for (or that isn’t even invented yet) can just go do one. What is it about the word “web” that makes people think of interconnected nodes that all work across a common set of protocols and idioms and allow information to flow unimpeded from one place to another?
\n\nIdiot hippies - they can consume my content in the way I decide they should or they can fuck off, yo. Because I’m a professional and nothing says professional like choosing a technology because all the cool kids are currently going “squee!” over it, rather than because it’s a good solution that follows solid engineering practices and performs well in the specific problem space we’re working in.
\n\nBesides, if people bitch and whine about not being able to bookmark individual sub-pages I can just go out of my way to implement ass-backwards hacks like the hash-bang URL support (I know Google themselves advised against relying on it as anything but a hacky workaround, but what do they know, right? They only invented the technology), forcing the entirety of my routing into the Javascript layer for ever more.
\n\nBecause that’s what we want, right? To force more and more legacy code and functionality into the front-end code we serve to each and every user for the rest of time, because it’s literally impossible to ever route hash-bang URLs on the server? Sweet.
\n\nHell, having built my entire app on the client-side, if it turns out I actually need it to be statically accessible (not that that would indicate I’ve chosen my entire architecture completely, absolutely, 100% wrongly or anything) I can always just intercept the requests for an arbitrary subset of all the clients that might ever need static content, host a client on my server then run the client-side logic in the client on the server, extract the resulting static DOM and send it back to the actual client on the client-side.
\n\nThen the only problems left are looking myself in the eye in the mirror in the morning and ever again referring to myself as a “real engineer” without giggling.
\n\nShit’s easy, yo. I don’t know what all you old grandads are bitching about with your “separation of concerns” or “accessibility” or “declarative data”.
\n\nShit, I don’t even know what half of those words mean. But I still know you’re wrong, right?
\n\n/s
\n\n– User Shaper_pmp on Reddit discussion thread.
\n
I just love this!
\n" }, { "title": "Tags, elements and attributes", "date_published": "2015-02-26T00:00:00+00:00", "id": "https://frippz.se/2015/02/26/tags-elements-and-attributes/", "url": "https://frippz.se/2015/02/26/tags-elements-and-attributes/", "author": { "name": "Fredrik Frodlund" }, "summary": "A basic crash course in correct HTML terminology and why you should learn it.", "tags": [ "html" ], "content_html": "Boy, this never gets old, does it? Back in the day, when I was but a green, eager to learn, front end web developer, I more than once fell into the trap of referring to everything (almost) as a tag. A couple of years down the road, I know better. But some people never learn, it seems. That gives me the urge to vent.
\n\nAn old colleague and mentor of mine once wrote, almost ten years ago:
\n\n\n\n\nWhen talking (or writing) about HTML, it is common for many people to refer to just about everything as “tags” instead of using the proper terms: “tag”, “element”, and “attribute”. A lot of the time what the author really means can be figured out by looking at the context, but sometimes it can be confusing.
\n\nUsing the correct terminology is not very difficult. It will also make it easier for others to correctly interpret what you mean, not to mention lend more credibility to what you have to say.
\n\n– HTML tags vs. elements vs. attributes by Roger Johansson, 456 Berea Street
\n
I think that most people will get what you are trying to say, with little risk of confusion. It’s a fairly simple situation. My gripe with not using the proper terms is a lot simpler; you come off looking like a clueless twit!
\n\nI mean, come on, it’s really not rocket science, is it? (Or do we say rocket surgery now, since Steve Krug titled his wonderful book in that fashion?)
\n\nI guess the biggest problem for me is that each time I correct the culprit, I come off as the asshole. Ok, let’s just do some HTML 101, shall we?
\n\nIn general, an HTML element consists of a start tag and an end tag. Here’s a lovely heading for you.
\n\n<h1>This is a heading</h1>\n
Preferably, there should be some content between the start tag and the end tag.
\n\nTags are what make up the start and the end of an element. You might have come across some of them in the previous section, like, five seconds ago.
\n\n<p>\n
The above start tag denotes the beginning of a paragraph. Attributes are optional, like so.
\n\n<p class=\"whatever\">\n
In order to close your paragraph, you’ll need an end tag.
\n\n</p>\n
There are a few HTML elements that require no end tag. The <img>
element is one good example of those.
<img src=\"photo.jpg\" alt=\"\">\n
Noticed those thingies with an equals sign and some quotation marks? That’s what we in the business call an attribute. Attribute. Yes. Now, you might’ve heard some clueless nitwit refer to them as an alt tag. That’s complete gibberish, since there’s no such thing as an “alt tag”. If you hear someone use that term, you’re totally allowed to slap them with a rolled up newspaper.
\n\nMy former colleague is ever so courteous in his post on the subject:
\n\n\n\n\nYou may call this nitpicking, but I don’t think it is. Sure, most of the time people will understand what you mean even if you call everything a “tag”. But by using the correct terminology you reduce the risk of being misunderstood, and you will sound more professional, so you really have nothing to lose by learning the difference.
\n\n– HTML tags vs. elements vs. attributes by Roger Johansson, 456 Berea Street
\n
He’s right, of course. You really have nothing to lose by using the proper terms. But remember, you also limit the risk of coming off as a clueless git. 😉
\n" }, { "title": "On browser support and binary choices", "date_published": "2014-12-07T00:00:00+00:00", "id": "https://frippz.se/2014/12/07/on-browser-support-and-binary-choices/", "url": "https://frippz.se/2014/12/07/on-browser-support-and-binary-choices/", "author": { "name": "Fredrik Frodlund" }, "summary": "A post on why we shouldn’t just draw a hard line in the sand when it comes to browser support decisions.", "tags": [ "browser support", "progressive enhancement", "css" ], "content_html": "I have more often than not been faced with the following scenario; a client, a colleague or maybe a project manager will ask the old-as-dirt question:
\n\n“What browsers should be supported in this project?”
\n\nFor way too long I thought of this as a binary issue. You either supported a particular browser, or you didn’t. If you were a really bad apple, you took measures to make sure that the user of a legacy browser you decided shouldn’t be supported got a snarky message as well – “Sorry, your old browser is not supported by us.” – lovely, right?
\n\nAll this is of course a pretty bad practice and parallels can be drawn to a store owner telling potential customers to get lost just because they can’t climb the stairs to get into the store.
\n\nLooking at the browser landscape of today and the future, we can’t keep asking ourselves which browsers to test for and whether we should optimize for legacy browsers, because that just isn’t very practical.
\n\n\n\nBrad Frost wrote a great post called “Support Vs Optimization” back in 2011. You should absolutely read the whole thing, but if not, just read this:
\n\n\n\n\n“There is a difference between support and optimization” is a line I use regularly at work. For time and budget reasons, we can’t make the best experience ever for every connected device, but we have a responsibility to provide a decent experience to those who wish to interact with our products and services. As we move into the future, it’s going to be harder and harder to account for the plethora of connected devices, so we need to construct our experiences with more consideration.
\n
A lot of people will argue that supporting legacy browsers means putting in loads of hours adding fixes and hacks to make your web site work in something like, say, Internet Explorer 8 (or even the over-a-decade-old IE 6). But before you dismiss the notion of supporting legacy browsers altogether, consider this quote from the same article:
\n\n\n\n\nYou don’t have to treat these browsers as equals to iOS and Android and no one is recommending that we have to serve up a crappy WAP site to the best smartphones on the market. It’s just about being more considerate and giving these people who want to interact with your site a functional experience. That requires removing comfortable assumptions about support and accounting for different use cases. There are ways to support lesser platforms while still optimizing for the best of the best.
\n
To get you started, I’d like to give you a few examples of what you can do to achieve this.
\n\nI will not repeat the many arguments for progressive enhancement, when so many talented individuals have already done this in the past.
\n\nIf you use a method like “Cutting the mustard” to only serve up JavaScript to modern browsers, you will very likely save yourself a lot of pain.
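As a minimal sketch of what such a mustard cut can look like (the feature list below is an example baseline, not a standard; the BBC used a similar set, but you should pick features that match your own project):

```javascript
// "Cutting the mustard": a coarse feature test that gates the enhanced
// JavaScript so legacy browsers only ever get the core experience.
// The features checked here are an example baseline, not a standard.
function cutsTheMustard(global) {
  return Boolean(
    global.document &&
    'querySelector' in global.document &&
    'localStorage' in global &&
    'addEventListener' in global
  );
}

// In the browser you would run something like:
// if (cutsTheMustard(window)) { /* load the enhanced scripts here */ }
```

Browsers that fail the test simply never load the enhanced scripts, which is exactly the point: the basic, server-rendered experience keeps working without them.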
\n\nI’ll refrain from going any deeper into the progressive enhancement part in this post. If you consider yourself to be in the camp that says “But we’re making web apps and this progressive enhancement stuff does not apply to us”, I urge you to have a quick look at the following articles:
\n\nOk, moving on.
\n\nYou might ask yourself what mobile first has to do with legacy browsers. Internet Explorer before version 9 does not even understand media queries, right? Well, mobile first in this context could mean that you start with a very basic layout that is defined outside of any dimension based media queries.
\n\n@media all {\n /* Put very basic, mobile first layout here */\n}\n
Then you do more complex layouts with something like this:
\n\n@media screen and (min-width: 0) {\n /* Juicy, complex and cutting edge CSS goes here */\n}\n
While this might be considered a hack, it’s totally valid CSS. min-width: 0;
means all widths from 0 and up, but since IE 8 and older versions don’t understand the syntax, they’ll just ignore it. Also, keep in mind that the code snippets above are only the beginning. You of course need to add additional media queries for different breakpoints tailored to your design.
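A sketch of how additional breakpoints could stack on top of the previous snippets (the 40em value here is just an example; choose breakpoints based on where your particular design starts to break):

```css
/* Enhanced base layout, safely ignored by IE 8 and older */
@media screen and (min-width: 0) {
  /* Single-column layout with the fancy bits */
}

/* Example breakpoint; the value is an assumption, tune it to your design */
@media screen and (min-width: 40em) {
  /* Multi-column layout for wider viewports */
}
```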
A basic mobile first layout will, provided you’ve kept it simple, work quite well on legacy browsers. Chances are it will look just fine (or fine enough) without you needing to add any quirky hacks to your stylesheet. A great example of how this can look is the web site of Jake Archibald. Visiting his site in something very old (like IE6) and something quite modern (like Safari 8 on OS X) shows something interesting:
\n\n\n\n\n\nThe looks of that site in IE 6 won’t win any design awards (but then again, neither will Windows XP, right?) The most important thing is that it works. The content is there, but all the bells and whistles are absent. My guess is that the designer spent very little time optimizing anything for legacy browsers. However, any unfortunate soul using IE 6 on Windows XP will still be able to get the core experience; the content. This is what matters the most.
\n\nThis post will hopefully give you just a little bit of inspiration and maybe help you realize that supporting legacy browsers does not have to result in a lot of extra work. In return, you will reach a wider audience and most likely be happier for it.
\n" }, { "title": "On styling your anchors", "date_published": "2014-12-06T00:00:00+00:00", "id": "https://frippz.se/2014/12/06/on-styling-your-anchors/", "url": "https://frippz.se/2014/12/06/on-styling-your-anchors/", "author": { "name": "Fredrik Frodlund" }, "summary": "A short post on why you shouldn’t just mindlessly style away the outline on anchor elements.", "tags": [ "accessibility", "a11y", "html", "css" ], "content_html": "This problem really isn’t something new and this specific issue has been repeated on many accessibility oriented blogs over the years. But since I regularly see this mistake being made over and over again, I think it’s time for a revisit. I’m talking about why you shouldn’t style away the :focus
pseudo-class.
Pseudo-classes have been around in CSS since the first version, albeit in a more limited manner and without the associated semantic meaning.
\n\nFrom the Mozilla Developer Network article on pseudo-classes:
\n\n\n\n\nA CSS pseudo-class is a keyword added to selectors that specifies a special state of the element to be selected. For example
\n\n:hover
will apply a style when the user hovers over the element specified by the selector.Pseudo-classes, together with pseudo-elements, let you apply a style to an element not only in relation to the content of the document tree, but also in relation to external factors like the history of the navigator (
\n:visited
, for example), the status of its content (like:checked
on some form elements), or the position of the mouse (like:hover
which lets you know if the mouse is over an element or not).
Clear as a bell, right?
\n\nThe :focus
pseudo-class is often forgotten when developers start messing around with styling anchors on a website. However, it is just as important as :hover
, :active
and :visited
. While the :hover
pseudo-class matches when the user designates an element with a pointing device, the :focus
pseudo-class is applied when an element has received focus, either from the user selecting it with the use of a keyboard or by activating it with the mouse.
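A sketch of styling all of these pseudo-classes together; the usual advice is to keep the LVHA order (:link, :visited, :hover/:focus, :active) so that later states aren’t masked by earlier rules of equal specificity. The colors here are made-up examples, not recommendations:

```css
/* Order matters: later rules win over earlier ones of equal specificity */
a:link    { color: #06c; }
a:visited { color: #639; }
a:hover,
a:focus   { color: #04a; text-decoration: underline; }
a:active  { color: #c00; }
```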
In the context of keyboard navigation, the :focus pseudo-class is very important since it helps users who are navigating with a keyboard rather than a pointing device. All browsers worth mentioning have a built-in user agent stylesheet that sets a style for focused elements.
\n\nAs you can see, the Internet Explorer 8 focus outline looks a little… ugly. Fair enough. But it still provides an important visual cue for users navigating with their keyboards. The problem is that this outline is also present when you’re using a pointing device and it also has a habit of becoming stuck after you’ve clicked a link (especially if you are using JavaScript to catch the event). This has given rise to the following CSS rule on a number of websites:
\n\na {\n outline: none;\n}\n
This is, for obvious reasons, horrible. Suddenly, that ugly, ugly outline is gone and your designer is happy. But the accessibility just went out the window. To put it simply, don’t do it! If you must style away the outline, make sure you replace it with something at least equally clear.
\n\nThe reason I wrote this article is that I still see this transgression on an almost daily basis. I usually navigate with a pointing device (a mouse or a touchpad, for example), but when it comes to forms, I know I’m not alone in tabbing between fields and form controls such as buttons. Say for example that you are about to submit a comment on another blog and you tab out of the text area to press submit, but suddenly, no marker is visible anymore. To make it worse, you might have two buttons after that text area, one for submit and one for clearing the fields. Care to play a little Russian roulette with your carefully prepared comment? Didn’t think so.
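If the default outline really must go, one possible sketch of a replacement (the width and color here are made-up examples, not recommendations) is to swap it for a focus style that is at least as visible:

```css
/* Only remove the default outline when replacing it with something
   equally clear for keyboard users */
a:focus {
  outline: 3px solid #f90; /* example color, pick one that fits your design */
  outline-offset: 2px;
}
```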
\n" }, { "title": "Finally!", "date_published": "2014-12-05T00:00:00+00:00", "id": "https://frippz.se/2014/12/05/finally/", "url": "https://frippz.se/2014/12/05/finally/", "author": { "name": "Fredrik Frodlund" }, "summary": "The very first post on this blog!", "tags": [ ], "content_html": "It took me long enough, but after getting some time to finish learning the inner workings of Jekyll (which has been the tool of choice for this project), I welcome you to my personal journal!
\n\nI have to be honest. This entire project is actually created for one person, and one person alone; me. I’m using this web site as a place to collect my thoughts and experiments related to front end web development. But since it is a nice thing to share, I’ll of course make it all public. Should you ever find anything I write here of use, so much the better.
\n" } ] }