Today I Learned: Footnotes in feed readers

There’s a de facto convention for rendering footnotes in HTML that enables feed readers to give them special treatment – NetNewsWire and Feedbin, for example, display them in an inline popup when you select the footnote number. Simon Willison, referencing Chris Coyier, both with screenshots: I found this code in the NetNewsWire source (it’s MIT licensed) which runs against elements matching this CSS selector: sup > a[href*='#fn'], sup > div > a[href*='#fn'] So any link with an href attribute containing #fn that is a child of a <sup> (superscript) element. ...
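The convention boils down to a structural check: an anchor whose href contains #fn, sitting directly inside a <sup> (or inside a <div> inside a <sup>). A minimal sketch in Python's standard library – this is an illustration of the selector's logic, not NetNewsWire's actual (Swift) code:

```python
from html.parser import HTMLParser

class FootnoteFinder(HTMLParser):
    """Collect hrefs that match the footnote convention:
    sup > a[href*='#fn'] or sup > div > a[href*='#fn']."""

    def __init__(self):
        super().__init__()
        self.stack = []           # open tag names, so we can check the parent
        self.footnote_hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            parent = self.stack[-1] if self.stack else None
            grandparent = self.stack[-2] if len(self.stack) > 1 else None
            if "#fn" in href and (
                parent == "sup" or (parent == "div" and grandparent == "sup")
            ):
                self.footnote_hrefs.append(href)
        self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()

# Only the <sup>-wrapped link matches; the plain one is ignored.
finder = FootnoteFinder()
finder.feed(
    'Some claim.<sup id="fnref:1"><a href="#fn:1">1</a></sup> '
    'Plain <a href="#fn:2">link</a>.'
)
print(finder.footnote_hrefs)  # prints ['#fn:1']
```

Static site generators like Hugo and tools like Pandoc emit footnote markup in exactly this shape, which is why the heuristic works across so many blogs.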

 · 142 words · Claudine Chionh

IndieWebbing from the couch

I’m not going to pretend for a moment that building a personal website is easier than posting to a platform that someone else has already built. I know that updating content for a static site generator in a git repository from the command line introduces a whole lot of friction (and jargon) to the blogging/publishing process, when I could easily post elsewhere with a mobile app from the comfort of my couch or while commuting on a tram. But there are enough development and automation tools available for iOS that I can use my iPad or phone as a portable development lab. Here’s how I write and publish away from a computer. ...

 · 369 words · Claudine Chionh

Adding Open Library IDs to my Reading page

I have updated the book shortcode used on my Reading page to link to the Internet Archive’s Open Library, which provides crowd-sourced book metadata – you can also look for these books in your local library! I’ve used the Open Library identifier for a Work rather than a specific edition. I used the Open Library API to take the ISBN of a specific (physical or digital) book and return the Work ID; publishing and documenting that code might have to wait for another day. ...

 · 92 words · Claudine Chionh

Resisting linkrot

I say I’m an archivist but I’ve been rather blasé about archiving my own web presence. I can’t find an archive of my GeoCities site and the Wayback Machine only has a single capture from my university undergraduate page (I had already graduated a couple of years before that). But I started collecting my own domains around the early 2000s and the Wayback Machine is a reminder of the many iterations of my personal website, the different hand-coded templates, CMSes, and static site generators that I used, and all the text that I published and abandoned over the years. ...

 · 512 words · Claudine Chionh

Locking out AI bots with the Dark Visitors API

The Robots Exclusion Protocol was initially developed in the early years of the web to prevent small websites from being overwhelmed by search engine crawlers, and its use has expanded to exclude robots for a variety of reasons, most recently to keep artificial intelligence bots from feeding their large language models with web content. There are various hand-picked lists of known AI bots that have been circulating among web developers, and there’s also Dark Visitors, which maintains a comprehensive list of known bots, categorised by type. You can use this resource to generate your own robots.txt, either by copying the example file, or by using the free Dark Visitors API to customise the types you want to exclude. ...
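The API route can be sketched as a small build-time script. The endpoint, payload fields, and agent-type names below are from memory of the Dark Visitors documentation and may have changed – check darkvisitors.com before relying on them, and the token and output path are placeholders:

```python
import json
from urllib.request import Request, urlopen

API_URL = "https://api.darkvisitors.com/robots-txts"
ACCESS_TOKEN = "YOUR_PROJECT_TOKEN"  # placeholder: from your Dark Visitors project

def build_payload(agent_types: list[str], disallow: str = "/") -> bytes:
    """JSON body selecting which bot categories to exclude."""
    return json.dumps({"agent_types": agent_types, "disallow": disallow}).encode()

def fetch_robots_txt() -> str:
    """Ask the API for a robots.txt covering the chosen agent types."""
    req = Request(
        API_URL,
        data=build_payload(["AI Data Scraper", "AI Assistant", "AI Search Crawler"]),
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urlopen(req) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    # Path depends on your site layout; for Hugo, static/ is copied verbatim.
    with open("static/robots.txt", "w") as f:
        f.write(fetch_robots_txt())
```

Regenerating the file on each site build keeps the exclusion list current as new bots are added, which is the advantage of the API over copying the static example file once.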

 · 419 words · Claudine Chionh