Manually applying a snap package update

Snap is a convenient way to install containerised applications. Like all package management systems it has its flaws, but it sees widespread use (particularly on Ubuntu-derived distros).

There's a little-known feature of Snap that's started catching people out, though: Snap has the ability to force updates, and will push notifications about a forthcoming attempt to do so.

Pending update of "signal-desktop" snap

Although this feature was actually introduced back in 2019, it still isn't always particularly well received.


Misleading Notification

One concern is that the notification is quite misleading and doesn't really give a clear indication of what the user is supposed to do:

Pending update of "signal-desktop" snap

Close the app to avoid disruption (7 days left)

The call to action seems to suggest (particularly to those familiar with things like AWS degraded instance notifications) that you can avoid the disruption of a forced update by closing the app and re-opening it.

But, this isn't the case. On relaunch, the app will be running the same version and notifications will continue unabated.

It is, however, possible (desirable, even) to update (or, in snap parlance: refresh) the package/application manually rather than waiting for the scheduled update.

This documentation details the (simple) process to refresh a snap package on Linux.
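
As a quick preview, the core of the process is a single command (shown here with the package from the notification above):

  # List the snaps which have a pending update
  snap refresh --list

  # Manually refresh the affected package
  sudo snap refresh signal-desktop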

Read more…

Tracking and Alerting on LetsEncrypt Certificate Renewals With InfluxDB and Kapacitor

LetsEncrypt has been providing free SSL certificates since 2014, and has seen widespread usage.

With a 90 day lifetime, the certificates need renewing regularly, with the recommended approach being to automate renewal using certbot.

The relatively short lifetime of these certificates means there's also a fairly short window to notice and intervene if renewal fails (whether because you've hit LetsEncrypt's rate limits, because certbot has started failing to fire, or some other reason).

Service monitoring often includes a check that connects in and verifies certificate expiration dates, but there's usually a gap between when a certificate should have renewed and when it gets close enough to expiry to breach your alert threshold.

If we apply a defense-in-depth mindset, there should also be monitoring of the renewal process itself: not only does this provide an earlier opportunity to trigger an intervention, it also addresses the risk of reliance on a single health check (which might itself malfunction).

This post covers the process of configuring a post-deploy hook in certbot to write renewal information into InfluxDB so that alerts can be generated when an expected renewal is missed.
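
As a rough sketch of the idea (the InfluxDB URL, org, bucket and token below are placeholders, and the script would be registered via certbot's --deploy-hook option):

  #!/bin/bash
  # Runs after each successful renewal; certbot sets RENEWED_LINEAGE to the
  # directory holding the renewed certificate
  CERT_NAME=$(basename "$RENEWED_LINEAGE")

  # Record the renewal as a point in InfluxDB's line protocol
  curl -s -XPOST "https://influxdb.example.com:8086/api/v2/write?org=example-org&bucket=letsencrypt" \
    -H "Authorization: Token $INFLUX_TOKEN" \
    --data-binary "cert_renewals,cert_name=$CERT_NAME renewed=1i $(date +%s%N)"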

Read more…

Building a Topper to Extend My Desk (and Increase Leg Room)

I've never placed much importance on having a nice looking desk: it's just a bit of furniture that you pay no real attention to whilst it holds the stuff that you are paying attention to.

When we last moved, I switched from my original desk to using one that I'd previously been using as a workbench. The switch was purely on the basis that the workbench didn't have drawers built in, giving more room for me to move my legs around.

As a result, for the last couple of years, my desk has been an unimposing white thing. At 46cm deep, it has just enough space to hold my various bits and pieces:

Tightly packed desk

Until recently, this worked absolutely fine.

For reasons involving a motorcycle and diesel, I've got longstanding knee pain. Lately, it's been giving me more gyp than normal, so I decided to order a foam foot-rest to see whether that would help.

Unfortunately, doing so has revealed something I hadn't previously realised: the recess under my desk is perfectly sized for me. Adding the foot-rest raised my knees too high, so I needed to wheel my chair back a bit, leaving me unable to rest my wrists on the edge of the desk.

I didn't want to replace the desk entirely, so decided to try and make a topper that would extend the desk outward, allowing me to sit a little further back whilst still providing that all important wrist support.

This post details the process I followed to make my desk extender.

Read more…

tor-daemon telegraf plugin v0.2

Version: 0.2

Project Info

The tor-daemon plugin is an exec plugin for Telegraf allowing statistics to be captured from the Tor daemon on Onion services and relays.

Details on usage can be found at Monitoring the Tor daemon with Telegraf.
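
In outline, it's hooked into Telegraf with an exec input stanza along these lines (the install path here is just an example):

  [[inputs.exec]]
    # Wherever the plugin script has been placed
    commands = ["/usr/local/src/telegraf_plugins/tor-daemon.py"]
    timeout = "60s"
    data_format = "influx"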

Release Notes

Version 0.1 implements the basic functionality of the plugin

Release Revision

This version was released in commit c30a1bd

Release issue tracking

Issues assigned to this release can be viewed in GILS

Plugin

The plugin can be downloaded from GitHub or from here.

Rotating Docker Container Logs To Comply With Retention Policies

Docker's default configuration doesn't perform log rotation.

For busy and long running containers, this can lead to the filesystem being filled with old, uncompressed logging data (as well as making accidental docker logs $container invocations quite painful).

It is possible to configure Docker to rotate logs by editing daemon.json, but the rotation threshold options are fairly limited:

  • max-size: the size at which to rotate the current log file
  • max-file: the maximum number of rotated files to retain
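
For illustration, a daemon.json applying both options might look like the following (the threshold values are arbitrary):

  {
    "log-driver": "json-file",
    "log-opts": {
      "max-size": "100m",
      "max-file": "5"
    }
  }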

Whilst these options do help to reduce filesystem usage, being purely size based they fail to support a number of extremely common log rotation use-cases:

  • Log rotation at a specific time based interval (e.g. daily log rotation)
  • Maximum retention periods (to comply with GDPR retention policies etc)

Unfortunately, json-file isn't the only logging driver to suffer from this limitation: the local driver has the same restrictions. It looks like there's an implicit decision that anyone who wants to follow common rotation practices should just forward logs on to syslog, journald or some other logging infrastructure (such as Logstash). In practice, there are a variety of use-cases where this may be undesirable.

However, as json-file simply writes log lines into a file on disk, it's trivial to build a script to implement the rotation that we need.
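
As a sketch of the general approach (assuming Docker's default data root, and copying then truncating so that dockerd's open file handle remains valid):

  #!/bin/bash
  # Archive and truncate each container's json-file log
  SUFFIX=$(date +%Y%m%d)
  for logfile in /var/lib/docker/containers/*/*-json.log; do
      cp "$logfile" "$logfile.$SUFFIX" && truncate -s 0 "$logfile"
      gzip -f "$logfile.$SUFFIX"
  done

  # Apply a maximum retention period by removing archives older than 30 days
  find /var/lib/docker/containers/ -name "*-json.log.*.gz" -mtime +30 -delete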

This documentation details how to set up interval-based log rotation for Docker containers.

Read more…

Regularly refreshing Pi-Hole Regex Block List from external sources

Pi-Hole provides simple tooling for managing lists of ad domains to block, but sometimes simple blocklists don't provide enough coverage on their own.

Blocking Xiaomi's Tracking

The mobile phone manufacturer Xiaomi is a good example of why a more flexible blocking approach is sometimes called for.

Various views within the MIUI system UI contain tracking/ads, with a broad range of regionalised addresses used to support these data harvesting activities.

For example, Xiaomi phones sometimes contact the domain tracking.intl.miui.com, but there are also regionalised variations such as tracking.rus.miui.com and tracking.india.miui.com.

Once known, these domains are easy to block, but a purely reactive approach means that there will always be periods where data is collected unimpeded.

It's far preferable, then, to be able to predict what their other tracking domains might be. Unfortunately, the regionalisation of Xiaomi's services isn't particularly consistent:

  • There are services at fr.app.chat.global.xiaomi.net
  • But there are none at tracking.fr.miui.com
  • There are also no services at tracking.gb.miui.com but DNS lookups for it behave differently to those for tracking.fr.miui.com

This inconsistency makes effective blocking of Xiaomi's tracking domains via blocklists quite difficult: not only do we need to be able to enumerate all current domains, we're also reliant on Xiaomi not launching stalkerware services in a new region.


Enter Regex

Regular expressions (regex) provide a tool by which we can improve the effectiveness of our blocks.

Rather than needing to enumerate every variation of tracking.$foo.miui.com, we can instead provide a pattern to match against:

^tracking\..+\.miui\.com$

For those not familiar with regex, this breaks down as follows:

  • ^tracking\. : the queried name must begin with tracking. (the ^ signifies the start of the input)
  • .+ : match one or more of any character
  • \.miui\.com$ : the queried name must end with .miui.com (the $ signifies the end of the input)

As if this wasn't powerful enough, Pi-Hole also supports approximate matching, allowing things like stemming to be used.

For example, this allows us to trivially create a regular expression that'll accept TLD substitutions:

^tracking\..+\.miui\.(com){#3}$

This expression will match any of the following:

  • tracking.foo.miui.com
  • tracking.foo.miui.org
  • tracking.foo.miui.net

Managing Regex in Pi-Hole

So, why do we need an entire post for this?

Adding regex blocks to Pi-Hole individually is trivial, as they can be added through the web interface:

Adding a Regex Filter to Pi-Hole

However, adding a bulk list or linking a remotely maintained list provides a bit more of a challenge.

Older versions of Pi-Hole referenced a file called regex.list on disk, allowing easy automation of updates.

But support for that file was dropped when Pi-Hole version 5 was released last year, and regexes now need to be written into the gravity database.
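
The core of the new approach is writing entries into the domainlist table (where type 3 denotes a regex blacklist entry); a minimal sketch, with a placeholder list URL:

  #!/bin/bash
  # Fetch a remotely maintained list of regexes and load it into the gravity db
  curl -s "https://example.com/regexes.txt" | while read -r pattern; do
      [ -z "$pattern" ] && continue
      sqlite3 /etc/pihole/gravity.db \
        "INSERT OR IGNORE INTO domainlist (type, domain, enabled) VALUES (3, '$pattern', 1);"
  done

  # Have FTL reload its lists
  pihole restartdns reload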

This post details the process of automatically fetching and refreshing lists of regular expressions for Pi-Hole version 5 and later.

Read more…

Replacing My Adblock Lists

I started to curate my own adblocking scripts back in 2014, making them available in the directory /adblock/ on my site.

At the time of their creation the lists were poorly controlled and pretty sparsely documented:

Original Adblock list documentation

In 2018, I got my act together a bit: I moved the lists into a GitHub repo and implemented project management to track additions to the lists.

Whilst project management improved, the publishing of the lists continued to rely on some increasingly shonky bash scripts. Those scripts modified and compiled in third party lists, stripped duplicates and generated various output formats. They were never engineered so much as spawned.

Because the lists compiled in third party sources (which update on their own schedules), the compilation process needed to run at scheduled intervals: it couldn't simply be triggered when I made changes, because the lists were presented as near-complete alternatives to others.

Despite their awful hacky nature, those scripts managed to compile and update my block lists for nearly 8 years.

However, they've long been overdue for replacement.

This post serves as a record for the deprecation of my original adblock lists, as well as providing details of their replacement.

Read more…

How much more efficient is refilling a kettle than reboiling it?

Like many on this here Sceptred Isle, I use my kettle regularly throughout the day.

I live in a hard water area, and the prevailing wisdom is that you should refill rather than reboil your kettle in order to reduce the rate at which limescale builds up (and, by extension, reduce energy usage).

The logic is that during the first boil, the denser minerals move to the bottom of the kettle, so after you've made your cuppa, the adulterants in the water left in the kettle are much more concentrated, leading to an increased rate of scaling in each subsequent boil (and, it's been suggested, possible increased health risks).

From an energy use perspective, this is an issue: limescale adds mass to the inside of the kettle, so over time more energy is required to boil the same volume of water, because you're having to heat the limescale layer too (strictly speaking, if you're using the gauge on your kettle you'd actually be boiling a smaller volume of water, the limescale having displaced some measure of it).

Emptying and refilling reduces the rate of build-up but, if the kettle is used even semi-regularly, it comes at a cost: the residual warmth of the remaining water is lost, and the new water has to be brought to the boil from (tap) cold instead.

It's the cost of that temperature gap that I was interested in: I wanted to see how big a difference refilling made in energy usage (both per boil and over time).
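
As a rough back-of-envelope of that gap (the temperatures here are illustrative): heating water takes about 4.2 kJ per litre per °C, so

  Reboiling 1 litre of still-warm 40°C water: 1 x 4.2 x 60 = 252 kJ (~0.07 kWh)
  Boiling 1 litre of 10°C tap water: 1 x 4.2 x 90 = 378 kJ (~0.105 kWh)

In other words, refilling costs roughly 50% more energy for that boil; the question is how that trades off against the limescale savings over time.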

Over the course of a few days, I stuck to my usual routine (best summarised as: want tea, make tea) but used different approaches to kettle filling to see what the effect on energy consumption was.

Read more…

Building a serverless site availability monitoring platform with Telegraf, AWS Fargate and InfluxCloud

I use a free UptimeRobot account to help keep an eye on the availability of www.bentasker.co.uk.

Every 5 minutes, UptimeRobot places requests to my site (and its origin), reporting how long those requests take to complete and updating my status page if there are issues.

The free tier only tests from a single location, but is usually a good indicator of when things are going (or starting to go) wrong. I use my uptime-robot exec plugin for Telegraf to pull stats from UptimeRobot into InfluxDB for use in dashboards.

Because I test against my origin as well as the CDN, it's usually possible to tell (roughly) where an issue lies: if CDN response time increases, but origin doesn't, then the issue is likely on the CDN.

Earlier in the week, I saw a significant and sustained increase in the latency UptimeRobot was reporting for my site, with no corresponding change in origin response times.

UptimeRobot reports increase in latency fetching from my site

This suggested possible CDN issues, but the increase wasn't reflected in response time stats drawn from other sources:

  • There was no increase in the response time metrics recorded by my Privacy Friendly Analytics system
  • The Telegraf instance (checking the same endpoints) on my LAN wasn't reporting an increase

Given that pfanalytics wasn't screaming at me and that I couldn't manually reproduce the issue, I felt reasonably confident that whatever this was, it wasn't impacting real users.

But, I decided that it would be useful to have some other geographically distributed availability system that I could use for investigation and corroboration in future.

I chose to use Amazon's Elastic Container Service (ECS) with AWS Fargate to build my solution.

This post walks through the process of setting up a serverless solution which runs Telegraf in a Fargate cluster and writes availability data into a free InfluxDB Cloud account.
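
At its core, the Telegraf config pairs an http_response input with the InfluxDB v2 output, along these lines (the endpoint, org and bucket are placeholders):

  [[inputs.http_response]]
    # The endpoints to check
    urls = ["https://www.bentasker.co.uk/"]
    response_timeout = "5s"

  [[outputs.influxdb_v2]]
    # InfluxDB Cloud write details - placeholder values
    urls = ["https://eu-central-1-1.aws.cloud2.influxdata.com"]
    token = "$INFLUX_TOKEN"
    organization = "example-org"
    bucket = "website_monitoring"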

Read more…

tor-daemon telegraf plugin v0.1

Version: 0.1

Project Info

The tor-daemon plugin is an exec plugin for Telegraf allowing statistics to be captured from the Tor daemon on Onion services and relays.

Details on usage can be found at Monitoring the Tor daemon with Telegraf.

Release Notes

Version 0.1 implements the basic functionality of the plugin

Release Revision

This version was released in commit 8b60035

Release issue tracking

Issues assigned to this release can be viewed in GILS

Plugin

The plugin can be downloaded from GitHub or from here.