Using BlueSky Features As Disinformation Tools

Recently, whilst working on implementing automatic posting into BlueSky, I ran into an issue with link-preview cards not being displayed.

Posts are submitted into BlueSky using ATProtocol, which places the onus on the sender to generate and provide the preview card content so that a rich preview can be displayed alongside the post text.

In my other post, I described the need to do this as being a pain in the arse. However, there's more to it than that: having the ability to submit arbitrary card content is problematic because it can be used to facilitate disinformation campaigns.

BlueSky also uses facets, which allow the sender to turn text into arbitrary hyperlinks, something which presents its own set of issues.
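
To make the concern concrete, here's a minimal sketch (in Python, and not taken from my implementation) of the kind of record a sender is free to submit. The structure follows the documented app.bsky.feed.post lexicon, but the domain names and wording are invented for illustration: the visible link text, the facet's target URI and the preview card's title and description are all supplied by the sender, so none of them need to agree with each other, or with reality.

    from datetime import datetime, timezone

    text = "Full story at https://www.bbc.co.uk/news"

    record = {
        "$type": "app.bsky.feed.post",
        "text": text,
        "createdAt": datetime.now(timezone.utc).isoformat(),
        # Facet: the visible text reads bbc.co.uk, but the hyperlink points
        # wherever the sender chooses.
        "facets": [{
            "index": {
                "byteStart": text.index("https://"),
                "byteEnd": len(text.encode("utf-8")),
            },
            "features": [{
                "$type": "app.bsky.richtext.facet#link",
                "uri": "https://attacker.example/fake-story",
            }],
        }],
        # Link-preview card: URI, title and description are all sender
        # supplied, so the card can claim something the linked site never said.
        "embed": {
            "$type": "app.bsky.embed.external",
            "external": {
                "uri": "https://attacker.example/fake-story",
                "title": "BBC News - a headline the BBC never wrote",
                "description": "Sender-controlled text, rendered as if it were a site preview",
            },
        },
    }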

In this post, I'll explain why giving the sender control over these items is potentially harmful.

Note: I did email BlueSky detailing my concerns but, given that:

  • The ability to do this is publicly documented
  • It turns out that BlueSky have already been made aware of it and have defended the behaviour
  • Update: two weeks later, they still haven't replied at all

there didn't seem to be any value in delaying disclosure: it's better to ensure that there's awareness of the issue.

Read more…

Posting into BlueSky, Nostr and Threads from Python

At the beginning of this year, I wrote about how I was starting to play around with automating syndication of my content into various social networks in order to better pursue the approach known as POSSE. That earlier post provides quite a long explanation of why I prefer to write here rather than elsewhere, so I won't revisit that now.

The underlying concept, though, is simple: I publish content on www.bentasker.co.uk, something picks up on the change in my RSS feed and then posts the new content into social networks to help increase discoverability.
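
As a rough sketch of that flow (the feed URL and posting function below are illustrative stand-ins, not my actual implementation), the syndication job just needs to poll the feed, spot entries that it hasn't seen before and hand them off to the per-network posting code:

    import feedparser

    FEED_URL = "https://www.bentasker.co.uk/rss.xml"  # illustrative feed URL
    seen = set()  # in practice, previously-seen entries would be persisted between runs


    def post_to_networks(title, link):
        # Stand-in for the per-network posting functions (Nostr, Threads,
        # BlueSky) that the post itself describes.
        print(f"Would post: {title} -> {link}")


    def check_feed():
        feed = feedparser.parse(FEED_URL)
        for entry in feed.entries:
            if entry.link not in seen:
                seen.add(entry.link)
                post_to_networks(entry.title, entry.link)


    check_feed()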

In this post, I want to write about how I've implemented support for automatic posting into Nostr, Threads and BlueSky.

Read more…

Writing data from a Bip 3 Smartwatch into InfluxDB

I've done a fair bit of playing around with my Watchy since I bought it a couple of months back and, generally, I really like it.

Unfortunately, as cool as it is, it's proven just a little too limited for what I need: if nothing else, it turns out that I really need my watch to be waterproof because I sometimes stick my arm into an aquarium without thinking about what's on my wrist.

So, I started looking for a more suitable alternative.

I really wanted another open source watch, but nothing quite fit the bill: the Bangle.js 2 looks great, but (like the PineTime) isn't suitable for swimming (something I used to do regularly and want to get back in the habit of). After evaluating the (sadly quite small) Open Source market, I decided that I'd have to look at more proprietary options.

Ultimately I settled on an Amazfit Bip 3 Pro: it's decently waterproof and has a range of sensors (more than I need really). The Bip U Pro, which is basically a cheaper version of the same watch, was also an early contender until I saw the words "Amazon Alexa Built-in". Nope, not for me, thanks anyway.

An important factor in choosing the Bip was that it appeared that, instead of using the manufacturer's app, I could pair the watch with Gadgetbridge, affording control over the collected data rather than having it sent to some proprietary silo.

In this post, I'll talk about how I built a scheduled job to fetch the health-related data that my Bip 3 Pro smartwatch records and write it onwards into an InfluxDB database.
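
To give an idea of the shape of that job before you click through: it boils down to reading rows out of a Gadgetbridge database export and writing them into InfluxDB. The sketch below uses illustrative table/column names and connection details rather than Gadgetbridge's actual schema or my own setup:

    import sqlite3
    from datetime import datetime, timezone

    from influxdb_client import InfluxDBClient, Point
    from influxdb_client.client.write_api import SYNCHRONOUS

    # Illustrative: the real Gadgetbridge export uses device-specific tables
    conn = sqlite3.connect("gadgetbridge_export.db")
    cur = conn.execute("SELECT timestamp, heart_rate, steps FROM activity_samples")

    client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="home")
    write_api = client.write_api(write_options=SYNCHRONOUS)

    for ts, heart_rate, steps in cur:
        point = (
            Point("gadgetbridge")
            .tag("device", "bip3_pro")
            .field("heart_rate", int(heart_rate))
            .field("steps", int(steps))
            .time(datetime.fromtimestamp(ts, tz=timezone.utc))
        )
        write_api.write(bucket="health", record=point)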

Read more…

Evaluating the Break Even Period of Our Solar Battery

A little while ago, I wrote a post on Monitoring Solar Generation stats with InfluxDB, Telegraf and Soliscloud.

Since then, one of the things that I've been working on is a Grafana dashboard to track our path towards break-even: that is, the point at which the system has "saved" us enough to pay back the costs of purchase and installation.

As well as charting the break-even path of the system as a whole, the dashboard also calculates individual break-even for the battery. Because battery storage is an optional part of a solar install, I thought it'd be interesting to calculate what kind of difference it was making versus the cost of adding it.

I actually sort of wish that I hadn't, because the thing that's stood out to me is just how long the battery's break-even period actually is.

In this post, I'll talk about how I'm calculating amortisation stats, what I'm seeing, possible causes and what I think it all means.
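
The underlying amortisation sum is simple enough; the sketch below (with placeholder figures, not our actual costs or rates) shows its shape: the battery has broken even once the cumulative value of the energy it shifts away from peak import rates exceeds what it cost to add.

    # Placeholder figures - not our actual costs or rates
    battery_cost = 3000.00       # cost of adding battery storage (GBP)
    daily_kwh_shifted = 6.0      # energy stored cheaply/for free and used later (kWh/day)
    import_rate = 0.30           # import price avoided per shifted kWh (GBP)
    charge_rate = 0.10           # average price paid to charge the battery (GBP/kWh)

    daily_saving = daily_kwh_shifted * (import_rate - charge_rate)
    break_even_days = battery_cost / daily_saving

    print(f"Saving {daily_saving:.2f} GBP/day")
    print(f"Break even after {break_even_days / 365:.1f} years")

Even with fairly generous placeholder numbers, the answer comes out in years rather than months, which is the pattern the post digs into.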

Read more…

Flashing and Rooting a Samsung Galaxy S4 from Linux

I recently needed to update the OS on an old Samsung Galaxy S4 (GT-I9505) from the stock Android 7 to Android 11 (if you're interested, it was for this).

I decided to use LineageOS as the phone's new Android distro and also installed Magisk to root the phone so that I could install a custom CA certificate into the system root store.

This short post details the process of using a Linux laptop to flash LineageOS onto a Samsung Galaxy S4 before loading the Google Apps, rooting with Magisk and installing a custom CA certificate (you don't need to do that bit).

If you're running Windows or a Mac, the process should be much the same, but you'll need Windows/Mac versions of the tools instead.

Note that this process will wipe any existing data, so be sure to take a backup first.

You'll need your phone, a Linux laptop and a USB cable.

Read more…

Deploying Kubernetes Onto A Single Debian 12.1 Host

There are a few things that I've been wanting to play around with, some of which could do with being deployed into a Kubernetes cluster.

They're only small projects, so I didn't want to devote multiple bits of hardware to this experimentation, nor did I particularly want to mess around with setting up VMs.

Instead, I wanted to run Kubernetes on a single machine, allowing orchestration via kubectl and other tools without any unnecessary faffing about.

By sheer luck, I started doing this a few days after the release of Debian 12.1 (Bookworm), otherwise this'd probably be a post about running on Debian 11 (the steps for that, though, are basically identical).

In this post, I'll detail the process that I followed to install Kubernetes on Debian 12.1 as well as the subsequent steps to allow it to function as a single-node "cluster". If you're looking to install Kubernetes on multiple Debian 12.1 boxes, these instructions will also work for you - just skip the section "Single-Node Specific Changes" and use kubeadm join as you normally would.

Aside from a few steps, it's assumed that you're running through the doc as root, so run sudo -i first if necessary.

Read more…

Golang HTTP/2 connections can become blocked for extremely long periods of time

I've recently had cause to try and explain this issue to a few people, so I figured it was probably worth logging a synopsis in my Gitlab for ease of future reference. I had intended to create something more like this, but in the process of writing it up, it ended up turning into more of a blog post - so here we are.

There is an issue in Golang's HTTP/2 client which can lead to connections becoming blocked for a prolonged period of time. When this happens, subsequent requests fail because broken connections continue to be selected from the connection pool.

I originally logged this as upstream bug Go#59690, but because it was aimed at an audience familiar with both the code and the underlying fundamentals, the description there makes a few assumptions about the knowledge of the reader.

The intent of this blog post is to provide a broader, higher(ish)-level overview of the issue, as well as details of how to mitigate it until such time as it's permanently fixed in Go's net packages.

Copies of the repro code used in this post can be found in my article-scripts repo.

Read more…

Collecting Octopus Energy Pricing and Consumption with Telegraf

As an energy supplier, Octopus Energy are pretty much unique (at least within the UK energy market), not least because they expose an easily accessible API which allows customers to fetch consumption and pricing details.

Like many others, I'm on Octopus Agile, so being able to collect past and future pricing information is particularly useful: it enables us to try and shift load to when import rates are most favourable.

At times, this can be incredibly beneficial: for example, at the time of first writing, the rates were negative, so we were actually getting paid (albeit a small amount) for the energy that we were consuming.

Screenshot of Octopus's description of Plunge pricing. When supply outstrips demand, prices drop and occasionally go negative.

The next day's prices are published at around 4pm every day, allowing some manner of planning ahead (more plunge pricing tomorrow, yay!):

Screenshot of Octopus Agile prices for the previous and next 24 hours. Most of tomorrow is in negative prices....

For those who want to build automations, there's an excellent integration for HomeAssistant. However, I spend more of my time in Grafana/InfluxDB than in HomeAssistant, so I also wanted Telegraf to be able to fetch this information.

In this post, I'll detail how to set up my Octopus Energy exec plugin for Telegraf and will also provide some examples of how I've started using that data within Grafana.
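
The post covers my actual plugin but, to give a flavour of the exec approach: Telegraf simply runs a script and ingests whatever InfluxDB line protocol it prints (an inputs.exec block with data_format = "influx"). A heavily simplified, illustrative fetcher - with placeholder product and tariff codes that you'd swap for your own - might look like this:

    from datetime import datetime

    import requests

    # Placeholder product/tariff codes: substitute your own region's Agile tariff
    PRODUCT = "AGILE-FLEX-22-11-25"
    TARIFF = "E-1R-AGILE-FLEX-22-11-25-A"
    URL = (
        "https://api.octopus.energy/v1/products/"
        f"{PRODUCT}/electricity-tariffs/{TARIFF}/standard-unit-rates/"
    )


    def main():
        # Unit rates don't require authentication; consumption data does
        results = requests.get(URL, timeout=10).json()["results"]
        for rate in results:
            ts = datetime.fromisoformat(rate["valid_from"].replace("Z", "+00:00"))
            ns = int(ts.timestamp()) * 1_000_000_000
            # Print line protocol for Telegraf to collect
            print(f'octopus_pricing,tariff={TARIFF} rate_inc_vat={rate["value_inc_vat"]} {ns}')


    if __name__ == "__main__":
        main()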

Read more…

Running a Lemmy Instance using docker-compose

Recently, Reddit made changes to its API in an attempt to nobble third-party apps (apparently stemming from concern that its own lacklustre and ad-laden app could not compete on a level playing field). Reddit's management now seems to have moved on from lying about application developers to threatening the moderators protesting the changes.

Reddit's user- and app-hostile approach looks set to continue for some time and is already driving the growth of Reddit alternatives such as Squabbles.io, Lemmy and KBin.

Like many users, I've ended up creating new accounts in various places and now only really visit Reddit to look in on the drama or to see whether my third-party app is still working (which, at the time of writing, it is).

I originally thought that I'd end up primarily using KBin (because I preferred the interface). That changed, though, when the news broke that Boost will have a Lemmy-compatible adaptation: Boost's interface is probably the reason that I've managed to stay on Reddit for so long - the official app would have led to me drifting away years ago.

With Boost targeting Lemmy, I decided that adopting Lemmy was likely my best long-term option, and that I would look at running my own instance (much like I do with Mastodon).

The Lemmy documentation does contain a guide to installation using Docker but (IMO) it's a bit simple and lacking in examples.

In this post, I'll detail the process I followed to stand up a Docker-based Lemmy instance, including where (and why) I deviated from the official documentation.

Read more…

Upgrading a docker-compose based Mastodon server to gain today's security fixes

My previous post on how to run a Mastodon server using docker-compose includes a section detailing how to upgrade an instance built using that approach.

However, today's Mastodon releases (v4.1.3, v4.0.5 and v3.5.9) include important security fixes (especially the fix for CVE-2023-36460), so I thought it was worth a quick post detailing the process that I followed to upgrade my instance (mastodon.bentasker.co.uk) and get those fixes in place.

Read more…