Injecting Audio Sidetone using Pipewire or PulseAudio

My Razer headset has a built in microphone which started to fail recently: people started to note that they couldn't really hear me (a particular highlight being "We're having a hard time understanding you because your voice is so feeble").

Whilst it's possible to replace the headset's mic, they do get knocked a lot and I didn't really want to replace it with something that was just going to fail again, so I decided to get a boom mic instead.

It's not a particularly expensive one, but is a nice bit of kit. The only issue is that I'm never quite sure how I'm coming across on it. Am I too loud, am I too quiet? Am I disturbing the rest of the house?

So, I decided I wanted to enable Mic Monitoring (also known as sidetone or audible feedback): having the microphone's input play in my headphones, so that I can hear and moderate my own voice when I speak.

This post talks about enabling sidetone on a Linux box running Pipewire or PulseAudio (the commands are backwards compatible).
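As a quick taster (the full post covers picking the right devices and persisting the change), the core of it is a loopback module - a minimal sketch, assuming you just want the default source fed back into the default sink:

    # Feed the default mic into the default output with a low-latency buffer
    pactl load-module module-loopback latency_msec=25

    # Turn the sidetone off again
    pactl unload-module module-loopback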

Read more…

Deploying Kubernetes Onto A Single Debian 12.1 Host

There are a few things that I've been wanting to play around with, some of which could do with being deployed into a Kubernetes cluster.

They're only small projects, so I didn't want to devote multiple bits of hardware to this experimentation, nor did I particularly want to mess around with setting up VMs.

Instead, I wanted to run Kubernetes on a single machine, allowing orchestration via kubectl and other tools without any unnecessary faffing about.

By sheer luck, I started doing this a few days after the release of Debian 12.1 (Bookworm), otherwise this'd probably be a post about running on Debian 11 (the steps for that, though, are basically identical).

In this post, I'll detail the process that I followed to install Kubernetes on Debian 12.1 as well as the subsequent steps to allow it to function as a single node "cluster". If you're looking to install Kubernetes on multiple Debian 12.1 boxes these instructions will also work for you - just skip the section "Single-Node Specific Changes" and use kubeadm join as you normally would.

Aside from a few steps, it's assumed that you're running through the doc as root so run sudo -i first if necessary.
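The single-node specific changes largely boil down to allowing workloads to schedule on the control plane by removing its taint - sketched below, noting that the taint name differs between Kubernetes releases:

    # Allow pods to be scheduled on the control plane (single-node clusters only)
    kubectl taint nodes --all node-role.kubernetes.io/control-plane-

    # On older releases the taint was named "master" instead
    kubectl taint nodes --all node-role.kubernetes.io/master- 2>/dev/null || true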

Read more…

Set up and monitor a Tor Snowflake Proxy

The Tor Project's Snowflake is a system which helps users in censorious countries gain access to the wider internet (via Tor) despite blocking attempts by local ISPs or governments. Given current geopolitics, it's unsurprisingly seeing heavy usage from users in both Russia and Iran as citizens seek out non-state-originated information on events affecting those countries.

Snowflake is one of Tor's Pluggable Transports and aims to hide the fact that the user is using Tor by first connecting to an arbitrary endpoint (the Snowflake proxy) as if the user were just browsing the net.

Snowflake's design relies on proxies being run by volunteers: the more there are, the harder it is for governments to block them in order to restrict their citizens' access to information.

This documentation details the process of setting up a standalone Snowflake proxy and using Telegraf to collect basic statistics for monitoring purposes.
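To give a flavour of how little is involved: at the time of writing, the Tor Project publish a container image, so a standalone proxy can be as simple as the following (image name and flags as per their docs at the time, so do check before copying):

    docker run -d \
      --name snowflake-proxy \
      --network host \
      --restart unless-stopped \
      thetorproject/snowflake-proxy:latest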

Read more…

Set up a Signal Proxy on Ubuntu

Signal Messenger are currently calling for volunteers to run Signal TLS Proxies in order to help people in Iran re-establish private communications despite the government's current censorship attempts.

Signal's blog post gives simple instructions on how to get a proxy up and running on a system, but assumes that you're already running Docker.

The aim of this post is to go a step further and provide instructions on getting Docker and Signal Proxy onto a freshly installed Ubuntu system. If you're on Debian, the steps are almost identical, you just need to swap ubuntu for debian in the Docker install related commands.

If you're interested in providing support, it really is a matter of minutes to get it up and running.
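For a rough idea of the effort involved, Signal's own instructions (at the time of writing) amount to little more than the following - the post adds the Docker install and a few checks around it:

    git clone https://github.com/signalapp/Signal-TLS-Proxy.git
    cd Signal-TLS-Proxy
    ./init-certificate.sh        # requests a LetsEncrypt cert for your proxy's hostname
    docker-compose up --detach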

Read more…

Unable to SSH onto some systems after client upgrade: no matching key found and/or permission denied

After installing system updates, you may find that you're suddenly unable to SSH into systems that you could once reach without issue.

After updating to Ubuntu Jammy Jellyfish (22.04), I found I was no longer able to SSH onto my router

ssh 192.168.4.254
Unable to negotiate with 192.168.4.254 port 22: no matching host key type found. Their offer: ssh-rsa

I also found that I was no longer able to authenticate with a different box

ssh root@mikasa
root@mikasa: Permission denied (publickey).

Both of these issues occurred because OpenSSH had been updated to a version >= 8.2:

ii  openssh-client                                1:8.9p1-3                                   amd64        secure shell (SSH) client, for secure access to remote machines

SHA-1 signatures were deprecated in OpenSSH 8.2.

This post discusses how to verify and resolve this.
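As a spoiler, the workaround (where you can't update the remote end) is to re-enable the SHA-1 based algorithms for specific legacy hosts only - a sketch, using the router above as an example:

    # ~/.ssh/config
    Host 192.168.4.254
        # Accept the router's ssh-rsa host key again
        HostKeyAlgorithms +ssh-rsa
        # Allow ssh-rsa signatures for public key auth
        # (older clients call this PubkeyAcceptedKeyTypes)
        PubkeyAcceptedAlgorithms +ssh-rsa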

Read more…

Installing Firefox as a package instead of a Snap on Ubuntu 22.04 LTS

The snap packaging system is a convenient way to install containerised applications.

However, containerisation of applications has drawbacks, including an inability to communicate directly with other applications on the system (after all, that's kind of the point of process isolation).

In Ubuntu 22.04 (Jammy Jellyfish), the Firefox browser has been moved over to using a snap package rather than a .deb, and the package in the repos is now just a wrapper triggering install of the snap.

Whilst process isolation does bring some security benefit, it also means that password managers such as KeepassXC are rendered useless because the Firefox extension can no longer communicate with the database.

Broken comms

Frustratingly, this isn't a surprise: a lot of things broke when Canonical moved Chromium to snap a couple of years back. That browsers are being moved over without the underlying problems being fixed is troubling.

However, it is possible to remove the Firefox snap and install as a native package instead.

This documentation details the process of uninstalling the Firefox snap, adding a repository and giving its Firefox package priority over the one in the default repos.
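Broadly, that means removing the snap, adding a repository which carries a real .deb (the Mozilla Team PPA is one option) and pinning it above the Ubuntu archive - a hedged sketch, assuming the PPA route and running as root:

    snap remove firefox
    add-apt-repository ppa:mozillateam/ppa

    # Prefer the PPA's firefox over Ubuntu's transitional snap-wrapper package
    cat > /etc/apt/preferences.d/mozillateamppa <<'EOF'
    Package: firefox*
    Pin: release o=LP-PPA-mozillateam
    Pin-Priority: 501
    EOF

    apt update && apt install firefox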

Read more…

Enabling and monitoring the zswap compressed page cache on Linux

I use a Lenovo X1 Carbon for work, and generally speaking it's a lovely bit of kit.

Unfortunately, the laptop was only specced with 16GB of RAM at purchase time, and it turns out that Lenovo decided it was a good idea to solder RAM in, so it's not actually possible to add more.

As a result, I frequently find that I'm using swap.

This isn't as bad as it sounds: the laptop's NVMe storage is blazing fast, so it's often not immediately obvious that it's swapping (so much so, in fact, that I set up an ordered set of swap partitions so that it's more obvious when the system is approaching swap exhaustion).

Inevitably though, I reach the point where the system just doesn't have the resources that it needs, especially if I'm busy and multitasking.

The full fix for that, really, is a new laptop but there are things which can be done to mitigate the issue and improve performance a bit.

One of those is enabling a compressed in-memory page cache using the Linux kernel's zswap support (introduced in kernel version 3.11). zswap is more computationally expensive than RAM, but less expensive than swapping to disk.

This post details the process of enabling zswap in order to improve the performance of a Linux system. We'll also explore how to monitor its usage with Telegraf.
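As a preview, enabling it for the current boot and peeking at the stats is only a couple of commands (persisting it means adding zswap.enabled=1 to the kernel command line); run as root:

    # Enable zswap until the next reboot
    echo 1 > /sys/module/zswap/parameters/enabled

    # Check the current parameters
    grep -R . /sys/module/zswap/parameters/

    # Stats live in debugfs (pool size, stored pages, rejections etc)
    grep -R . /sys/kernel/debug/zswap/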

Read more…

Manually applying a snap package update

Snap is a convenient way to install containerised applications. Like all package management systems it has its flaws, but sees widespread use (in particular on Ubuntu-derived distros).

There's a little-known feature of Snap that's started catching people out though: Snap has the ability to force updates, and will push notifications about a forthcoming attempt to do so.

Pending update of "signal-desktop" snap

Although this feature was actually introduced back in 2019, it's still not particularly well received at times.


Misleading Notification

One concern is that the notification is quite misleading and doesn't really give a clear indication of what the user is supposed to do:

Pending update of "signal-desktop" snap

Close the app to avoid disruption (7 days left)

The call to action seems to suggest (particularly to those familiar with things like AWS degraded instance notifications) that you can avoid the disruption of a forced update by closing the app and re-opening it.

But, this isn't the case. On relaunch, the app will be running the same version and notifications will continue unabated.

It is, however, possible (desirable, even) to update (or, in snap parlance: refresh) the package/application manually rather than waiting for the scheduled update.

This documentation details the (simple) process to refresh a snap package on Linux.
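In essence, it's a one-liner per snap:

    # List snaps with updates pending
    snap refresh --list

    # Refresh just the one that's nagging you (close the app first)
    snap refresh signal-desktop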

Read more…

Tracking and Alerting on LetsEncrypt Certificate Renewals With InfluxDB and Kapacitor

LetsEncrypt has been providing free SSL certificates since 2014, and has seen widespread usage.

With a 90 day lifetime, the certificates need renewing regularly, the recommended way being to automate renewal using certbot.

The relatively short lifetime of these certificates means there's also a fairly short window to notice and intervene if renewal fails (whether because you've hit LetsEncrypt's rate limits, because certbot has started failing to fire, or some other reason).

Service monitoring often includes a check that connects in and checks certificate expiration dates, but there's usually a window between when a certificate should have renewed and when it gets close enough to expiry to breach your alert threshold.

If we apply a defense-in-depth mindset, there should also be monitoring of the renewal process itself: not only does this provide an earlier opportunity to trigger an intervention, it also addresses the risk of reliance on a single health check (which might itself malfunction).

This post covers the process of configuring a post-deploy hook in certbot to write renewal information into InfluxDB so that alerts can be generated when an expected renewal is missed.
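To give a feel for the shape of it, a deploy hook is just an executable dropped into certbot's renewal-hooks directory. The sketch below is illustrative only: the InfluxDB URL, database and measurement names are placeholders, and authentication is omitted.

    #!/bin/bash
    # /etc/letsencrypt/renewal-hooks/deploy/influxdb-renewal.sh (illustrative)
    # certbot exports RENEWED_DOMAINS to deploy hooks
    HOST=$(hostname -f)
    for domain in $RENEWED_DOMAINS
    do
        curl -s -XPOST "http://influxdb.example.com:8086/write?db=certs" \
             --data-binary "letsencrypt_renewals,host=$HOST,domain=$domain renewed=1i"
    done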

Read more…

Rotating Docker Container Logs To Comply With Retention Policies

Docker's default configuration doesn't perform log rotation.

For busy and long running containers, this can lead to the filesystem being filled with old, uncompressed logging data (as well as making accidental docker logs $container invocations quite painful).

It is possible to configure Docker to rotate logs by editing daemon.json (there's an example after the list below), but the rotation threshold options are fairly limited:

  • max-size: size at which to rotate
  • max-file: max number of rotated files
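For reference, a minimal daemon.json using those options might look like this (it only applies to containers created after the daemon restart):

    # Rotate at 50MB, keeping at most 5 files per container
    cat > /etc/docker/daemon.json <<'EOF'
    {
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "50m",
        "max-file": "5"
      }
    }
    EOF
    systemctl restart docker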

Whilst these options do help to reduce filesystem usage, being purely size based they fail to support a number of extremely common log rotation use-cases

  • Log rotation at a specific time based interval (e.g. daily log rotation)
  • Maximum retention periods (to comply with GDPR retention policies etc)

Unfortunately, json-file isn't the only logging driver to suffer from this limitation, the local driver has the same restrictions. It looks like there's an implicit decision that anyone who wants to follow common rotation practices should just forward logs onto syslog, journald or some other logging infrastructure (such as logstash). In practice, there are a variety of use-cases where this may be undesirable.

However, as json-file simply writes loglines into a logfile on disk, it's trivial to build a script to implement the rotation that we need.

This documentation details how to set up interval-based log rotation for Docker containers.

Read more…

Accessing Nextcloud files (and external storages) Without Syncing

The Nextcloud Desktop Sync Client does a fantastic job of syncing files between your desktop/laptop and Nextcloud's storage, but you don't always want everything synced down.

For example, we have some fairly sizeable volumes mounted as "External Storage" in Nextcloud. We wanted to be able to browse through those from desktops without having to sync >200GB of data down to each and every client.

This post details how to mount your Nextcloud instance as a remote drive, using WebDAV, so that files are only pulled over the network as and when they're opened.
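On Linux, one way of doing that (not necessarily the only route the post takes) is davfs2 - a sketch, run as root, with the URL and username as placeholders:

    apt install davfs2

    # Nextcloud's WebDAV endpoint lives under remote.php/dav/files/USERNAME/
    mkdir -p /mnt/nextcloud
    mount -t davfs https://nextcloud.example.com/remote.php/dav/files/USERNAME/ /mnt/nextcloud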

Read more…

Using multiple swap partitions in a specific order on Linux

It's possible, on Linux, to have multiple swap partitions and/or swapfiles so that your swapspace is spread across multiple physical devices.

It's also sometimes desirable to set an order of priority for these, so that paging uses the fastest underlying storage first.

This documentation details how to set up a mix of swap files and partitions and tell the kernel how to prioritise its swap sources on Linux.
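The mechanism itself is just the pri= option (higher values are used first) - a sketch with placeholder device names:

    # /etc/fstab - fast NVMe partition first, slower swapfile as overflow
    /dev/nvme0n1p3   none   swap   sw,pri=10   0 0
    /swapfile        none   swap   sw,pri=5    0 0

    # Or at runtime
    swapon -p 10 /dev/nvme0n1p3
    swapon --show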

Read more…

Running and monitoring a Minecraft server using Docker and Linux

Running a Minecraft server has always been fairly painless, with the biggest headache usually being getting the right version of Java up and running.

I wanted to find an even simpler route, though, and wanted something that gave me the ability to monitor the server (if only so I could fix stuff before getting complained at).

Although I came into this ready to build my own images, it turns out a bloke called Geoff has done a sterling job not only of dockerising Minecraft-server, but also creating a monitoring tool.

This documentation details how to tie that all together in order to use Docker to stand up a Minecraft Java edition server and monitor it using Telegraf to push monitoring data into InfluxDB or InfluxCloud. Technically, this should all work with running a Bedrock server too, but I've not tried that.

This document assumes you're running Ubuntu, but if you're not then it's only really the Docker install steps which are Ubuntu specific.
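To give an idea of how simple Geoff's image makes the server side of things, a minimal (unmonitored) run looks something like this - image name and paths as per its docs at the time, so treat as a sketch:

    docker run -d --name minecraft \
      -p 25565:25565 \
      -e EULA=TRUE \
      -v /opt/minecraft/data:/data \
      itzg/minecraft-server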

Read more…

Running multiple Tor daemons with Docker

Running a Tor relay helps give bandwidth back to the network, however it's not uncommon for new relay operators to be surprised at Tor's performance profile.

Tor is not multi-threaded, so operators arrive with multi-core machines and find that only a single core is actually being worked. However, it is possible to maximise use of multiple CPU cores by running multiple instances of the tor daemon - this documentation details how to do it using docker (although it's perfectly possible without containerisation too).
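Conceptually it's just one container (or daemon) per core, each with its own config and ports - the sketch below is illustrative only, with $TOR_IMAGE standing in for whatever image you build or pull:

    # Two relay instances on one host, each with its own torrc, DataDirectory and ORPort
    docker run -d --name tor1 -v /opt/tor/1:/etc/tor -p 9001:9001 $TOR_IMAGE
    docker run -d --name tor2 -v /opt/tor/2:/etc/tor -p 9002:9002 $TOR_IMAGE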

Read more…

Collecting Nextcloud User Quota Information With Telegraf

Collecting system level stats from Nextcloud with Telegraf is well documented, and very well supported.

However, I wanted to extract some additional information - current storage quota allocations and usage. Nextcloud allows you to apply a storage quota to each individual user, so I thought it'd be useful to be able to monitor for accounts that are getting close to their quota.

The information is a bit more buried within Nextcloud's APIs than the system level stats, and so can not be (as easily) consumed using inputs.http.

This post gives details of an exec plugin which can fetch quota usage, per user, and pass it into Telegraf in InfluxDB line protocol.
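The Telegraf side of it is simply an exec input pointed at a script (the script path below is a placeholder for wherever you save the plugin):

    [[inputs.exec]]
      # Script queries Nextcloud's user API and prints InfluxDB line protocol
      commands = ["/usr/local/bin/nextcloud_quota.sh"]
      data_format = "influx"
      interval = "15m"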

Read more…

Automounting a remote server over SSH with sshfs and autofs

Sometimes it can be pretty useful to access a remote system's filesystem from your own, and sometimes NFS isn't a great fit for the connectivity at hand.

It is, however, possible to use sshfs to mount a remote filesystem on pretty much any box that you can SSH to - giving quite a bit of convenience without the need to make changes at the remote end.

This documentation details how to create an auto-mounting SSHFS mount.
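The end result is a pair of autofs maps along these lines (hostnames and paths are placeholders; allow_other also needs user_allow_other enabling in /etc/fuse.conf):

    # /etc/auto.master
    /mnt/ssh   /etc/auto.sshfs   --timeout=120

    # /etc/auto.sshfs - "cd /mnt/ssh/remotebox" triggers the mount
    remotebox  -fstype=fuse,rw,allow_other  :sshfs\#user@remotebox.example.com\:/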

Read more…

Allowing your internal search engine to index Gitlab Issues, Commits and Wiki pages

I've basically lived my life inside issue tracking systems - it used to be JIRA, and I had built tooling allowing effective indexing/mirroring of JIRA issues, but then Atlassian decided to all but do away with on-prem users. So, like many others, I moved over to Gitlab instead.

As a product, Gitlab's great: it's got project lifecycle management, issue tracking, wiki support and even a container registry baked straight into it.

However, its indexability is basically non-existent - even if you have projects marked as public, significant parts of pages are fetched by javascript, so a crawler can't index things like issue references (no, I'm not kidding) unless it executes javascript. It's like they reverse search engine optimised...

It's not just on-prem Gitlab either, it also affects their cloud offering.

Search within Gitlab is excellent, but this is of little use if you have things spread across systems and want to search from a single central point (to be fair to Gitlab, their idea is that everything you do should be within their solution, but life rarely actually works that way).

This documentation details how to use my new tooling (rudimentary as it currently is) to expose an indexable version of your Gitlab life. Although this is focused on an on-prem install, this tool should work with their cloud offering too, as the APIs are the same (but I haven't tested against it).

 

Read more…

Kernel Modules missing on Rasberry Pi

This, almost certainly, was a mess of my own making, but as I didn't find any answers with web searches I thought it was worth documenting for anyone else who sets a similar time bomb for themselves.

I've got some Raspberry Pis which use NFS for their root partition. They used to be PXE booted, but at some point started failing to boot, so some time back I put an SD card back in for the /boot partition.

This, I suspect, was probably my undoing.

The Pi's have been working fine since, but I wanted to install Docker onto one of them. Although it installed, Docker failed to start, logging the following

Oct 09 22:45:43 redim-4-search-pi dockerd[3534]: failed to start daemon: Error initializing network controller: error obtaining controller instance: failed to
Oct 09 22:45:43 redim-4-search-pi dockerd[3534]: modprobe: FATAL: Module ip_tables not found in directory /lib/modules/4.19.75-v8+
Oct 09 22:45:43 redim-4-search-pi dockerd[3534]: iptables v1.6.0: can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
Oct 09 22:45:43 redim-4-search-pi dockerd[3534]: Perhaps iptables or your kernel needs to be upgraded.
Oct 09 22:45:43 redim-4-search-pi dockerd[3534]:  (exit status 3)
Oct 09 22:45:43 redim-4-search-pi systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE

On examination, there is no modules directory for the kernel version I'm currently running

root@redim-4-search-pi:~# uname -r
4.19.75-v8+
root@redim-4-search-pi:~# ls /lib/modules/
4.19.66+  4.19.66-v7+

This post details the steps I took to resolve this issue

 

Read more…

OpenWRT opens multiple OpenVPN client connections

This is a slightly obscure one, but when I was initially hit by it, I didn't find much about it when searching the net.

After setting up a new OpenVPN client config (i.e. the OpenWRT box is VPN'ing into somewhere, rather than acting as a VPN server itself) on an OpenWRT box, you might find that OpenWRT eventually crashes.

This documentation details the cause, at least in so far as it affected me

Read more…

Generating a vanity .onion address

Note: This documentation only applies to the older V2 Onions, for newer, please see Generating a vanity .onion address for Version 3 .onions.

Tor Hidden Services are accessed through a web address ending in .onion. Generally speaking these appear to be random strings of letters and numbers, though they're actually a representation of the public key generated when the operator created their hidden service.

It is possible, however, to attempt to generate a keypair which yields a desired vanity URL, though the process is essentially a brute-force of key combinations, so may take some time.

Read more…

Generating a Vanity Address for Version 3 Onions

Tor Hidden Services are accessed through a web address ending in .onion. Generally speaking these appear to be random strings of letters and numbers, though they're actually a representation of the public key generated when the operator created their hidden service.

Whilst it's possible to generate a V2 vanity .onion address with eschalot, V3 Onions use ed25519, requiring use of a different tool.

This documentation details how to generate a vanity .onion address for Version 3 Onions
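The commonly used tool for this is mkp224o (which may or may not be the route taken below) - basic usage is just a filter and an output directory, with "bent" here being an illustrative prefix:

    # Search for .onion addresses beginning "bent", writing keys into ./generated
    ./mkp224o -d generated bent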

Read more…

Automatically Mounting Secondary Encrypted Disk/Partition at Boot

There are a wide variety of use-cases for disk encryption, and the idea of automatically mounting an encrypted disk/partition without user intervention is anathema to many of those - anyone who can take physical possession of your system will have the disk auto-mount for them.

However, there is a very simple use-case which benefits from being able to automount a second encrypted disk.

If you're storing data unencrypted on a drive and it fails, you're now potentially left with something of an issue, particularly if you intend to RMA it (return it under warranty) - could the drive be fixed, allowing someone else to pull that data off the drive (bearing in mind the manufacturer may fix the drive and sell as refurbished)?

When you need to expand your storage, you hit a similar conundrum: do you trust disk wipes sufficiently to be willing to sell or pass the disk on (a particular concern with SSDs, where data may previously have been written to a now-bad block and so won't be overwritten by your wipe), or do you feel you have to physically destroy the disk, unnecessarily generating e-waste?

Using Full Disk Encryption (FDE) addresses both of these situations - the manufacturer might fix the disk, but without the key the data's just random bytes; the same goes for whoever buys your disk off eBay.

But, FDE can quickly become a major inconvenience at boot - your system will stop booting and ask you to provide the decryption passphrase. That's particularly problematic if you're talking about a headless system like a NAS, where you want things to come up working following a power cycle.

It's possible (trivial, even) to configure things so that the system uses a key stored on another disk (like your root filesystem or, if you prefer, a USB flash drive) so that the partition is automagically mounted.

This documentation details how to set up ecryptfs on a disk (or partition) and add it to /etc/fstab so that it automatically mounts at boot

All commands are run as root, so use sudo -i/su

Read more…

Configuring Unbound for Downstream DoT

Quite some time ago, I wrote some documentation on how to stand up a DNS-over-TLS server using a Nginx reverse streams proxy to terminate the SSL.

However, since then (back in 1.6.7, in fact) Unbound gained support for directly terminating TLS connections.

This documentation details the (simple) config changes necessary to configure Unbound to service DNS over TLS (RFC 7858) queries.

 

You will need to have a copy of Unbound >= 1.6.7 installed (and, of course, you should really be running the latest release)

Within the config file (normally /etc/unbound/unbound.conf) you'll need to add the following within the server block

    interface: 0.0.0.0@853
    interface: ::0@853
    tls-port: 853

    tls-service-key: [path to your SSL Key]
    tls-service-pem: [path to your SSL cert]
 
    access-control: 0.0.0.0/0 allow

You then simply need to restart Unbound and confirm that it's listening

systemctl restart unbound
netstat -lnp | grep 853

As we only want to accept TCP connections (to reduce the opportunity for us to be used in a reflection attack), we firewall UDP 853

iptables -I INPUT -p udp --dport 853 -j DROP
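To confirm it's actually answering over TLS from a client's perspective, kdig (from the knot-dnsutils package) can be used - substitute your server's IP and certificate hostname for the placeholders below:

    kdig @192.0.2.1 +tls-ca +tls-host=dns.example.com example.com A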

 

Read more…

Nginx logs two upstream statuses for one upstream

I'm a big fan (and user) of Nginx

Just occasionally, though, you'll find something that looks a little odd - despite having quite a simple underlying explanation.

This one falls firmly into that category.

When running NGinx using ngx_http_proxy_module (i.e. using proxy_pass), you may sometimes see two upstream status codes recorded (specifically in the variable upstream_status) despite only having a single upstream configured.

So, assuming a log_format of

'$remote_addr\t-\t$remote_user\t[$time_local]\t"$request"\t'
'$status\t$body_bytes_sent\t"$http_referer"\t'
'"$http_user_agent"\t"$http_x_forwarded_for"\t"$http_host"\t$up_host\t$upstream_status';

You may, for example, see a log line like this

1.2.3.4	-	-	[11/Jun/2020:17:26:01 +0000]	"GET /foo/bar/test/ HTTP/2.0"	200	60345109	"-"	"curl/7.68.0"	"-"	"testserver.invalid"	storage.googleapis.com	502, 200

Note the two comma-separated status codes at the end of the line: we observed two different upstream statuses (though only the 200 was passed downstream).

This documentation helps explain why this happens.

 

Read more…

Building a Raspberry Pi Based Music Kiosk

I used to use Google's Play Music to host and play our music collection.

However, years ago, I got annoyed with Google's lacklustre approach to shared collections, and odd approach to VMs. So, our collection migrated into a self-hosted copy of Subsonic.

Other than a few minor frustrations, I've never looked back.

I buy my music through whatever music service I want, download it onto the NFS share and Subsonic picks up on it following the next library scan - we can then stream it to our phones (using DSub), to the TV (via a Kodi plugin) or to a desktop (generally, using Jamstash). In the kitchen, I tend to use a bluetooth speaker with the tablet that I use to look up recipes.

However, we're planning on repurposing a room into a puzzle and playroom, so I wanted to put some dedicated music playback in there.

Sonos devices have Subsonic support, but (IMO) that's a lot of money for something that's not great quality, and potentially has an arbitrarily shortened lifetime.

So, I decided to build something myself using a Raspberry Pi, a touchscreen and Chromium in kiosk mode. To keep things simple, I've used the audio out jack on the Pi, but if over time I find the quality isn't what I hope, it should just be a case of connecting a USB soundcard in to resolve it.

There's no reason you shouldn't be able to follow almost exactly the same steps if you're using Ampache or even Google Play Music as your music source.
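The kiosk part itself ends up being little more than Chromium launched full screen against the web UI - something like the following in an autostart entry (the URL is a placeholder):

    chromium-browser --kiosk --noerrdialogs --disable-session-crashed-bubble https://music.example.com/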

Read more…

Resolving GFID mismatch problems in Gluster (RHGS) volumes

Gluster is a distributed filesystem. I'm not a massive fan of it, but most of the alternatives (like Ceph) suffer with their own set of issues, so it's no better or worse than the competition for the most part.

One issue that can sometimes occur is Gluster File ID (GFID) mismatch following a split-brain or similar failure.

When this occurs, running ls -l in a directory will generally lead to I/O errors and/or question marks in the output

ls -i
ls: cannot access ban-gai.rss: Input/output error
? 2-nguoi-choi.rss ? game.rss

If you look within the brick's log (normally under /var/log/glusterfs/bricks) you'll see lines reporting Gfid mismatch detected 

[2019-12-12 12:28:28.100417] E [MSGID: 108008] [afr-self-heal-common.c:392:afr_gfid_split_brain_source] 0-shared-replicate-0: Gfid mismatch detected for <gfid:31bcb959-efb4-46bf-b858-7f964f0c699d>/ban-gai.rss>, 1c7a16fe-3c6c-40ee-8bb4-cb4197b5035d on shared-client-4 and fbf516fe-a67e-4fd3-b17d-fe4cfe6637c3 on shared-client-1.
[2019-12-12 12:28:28.113998] W [fuse-resolve.c:61:fuse_resolve_entry_cbk] 0-fuse: 31bcb959-efb4-46bf-b858-7f964f0c699d/ban-gai.rss: failed to resolve (Stale file handle)

This documentation details how to resolve GFID mismatches

 

Read more…

Improving Nextcloud's Thumbnail Response Time

I quite like Nextcloud as a self-hosted alternative to Dropbox - it works well for making documents and password databases available between machines.

Where photos are concerned, the functionality it includes offers a lot of promise but is unfortunately let down by a major failing - a logical yet somewhat insane approach to thumbnail/preview generation.

The result is that tools like "Photo Gallery" are rendered unusable due to excessively long load times. It gets particularly slow if you switch to a new client device with a different resolution to whatever you were using previously (even switching between the Android app and browser view can be problematic).

There's a Nextcloud app called previewgenerator which can help with this a little by pre-generating thumbnails (rather than waiting for a client to request them), but its out-of-the-box config doesn't help much if, like me, your photos are exposed via the "External Storage" functionality rather than in your Nextcloud shared folder. Even when photos are in your shared folder, the app's default config will result in high CPU usage and extremely high storage use.

This documentation details how to tweak/tune things to get image previews loading quickly. It assumes you've already installed and configured Nextcloud. All commands (and crons) should be run/created as the user that Nextcloud runs as.
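To give a flavour of the sort of tweaking involved, the previewgenerator app is tuned via occ - the sizes below are illustrative rather than a recommendation:

    # Limit which preview sizes get pre-generated
    php occ config:app:set previewgenerator squareSizes --value="32 256"
    php occ config:app:set previewgenerator widthSizes  --value="256 384"
    php occ config:app:set previewgenerator heightSizes --value="256"

    # Initial bulk run, then periodic top-ups (e.g. from cron)
    php occ preview:generate-all
    php occ preview:pre-generate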

 

Read more…

Disk automatically unmounts immediately after mounting

When it happens, it's incredibly frustrating: you've had a disk replaced on a Linux box, the disk has shown up with a different name in /dev, so you edit /etc/fstab and then try to mount the disk.

The command runs, without error, but the disk isn't mounted, and doesn't appear in df

This documentation details the likely cause, and how to resolve it

If you look in dmesg, you might see something like the following

[  462.754500] XFS (sdc): Mounting V5 Filesystem
[  462.857216] XFS (sdc): Ending clean mount
[  462.871119] XFS (sdc): Unmounting Filesystem

Which, whilst it shows the disk is getting unmounted almost immediately, isn't otherwise very helpful. It doesn't tell us why.

However, if you look in syslog (e.g. /var/log/messages, journalctl or /var/log/syslog) you may well see this logged again with a couple of additional relevant lines

kernel: XFS (sde): Mounting V5 Filesystem
kernel: XFS (sde): Ending clean mount
systemd: Unit cache2.mount is bound to inactive unit dev-sdc.device. Stopping, too.
systemd: Unmounting /cache2...
kernel: XFS (sde): Unmounting Filesystem
systemd: Unmounted /cache2.

We can now see that the erstwhile init system - systemd - decided to unmount the filesystem

systemd: Unit cache2.mount is bound to inactive unit dev-sdc.device. Stopping, too.

The reason for this is that, at boot time, systemd-fstab-generator generates (in effect) a dynamic unit file for each mount. From the output above we can tell the disk used to be sdc but is now sde, despite fstab saying

/dev/sde      /cache2   xfs   defaults,nofail  0 0

When we issue the command

mount /cache2

SystemD picks up on the fact that it has an inactive unit file (inactive because the block device has gone away) which should be mounted to that path, decides there's a conflict - and that it knows better - and unmounts your mount again.

If you're in this position, then, you should be able to resolve it (for the time being) with a single command

systemctl daemon-reload

Keep in mind that if your disk moves back following a reboot, you'll be back to this point where SystemD decides you can't have wanted to mount your disk after all.

 

The SystemD project has a bug for this, raised in 2015 and seemingly still unresolved (it's certainly still attracting complaints at the time of writing). Rather worryingly, it suggests that the above will not always resolve the issue, and instead suggests the following "workaround"

Remove the line from /etc/fstab and mount it manually (or use a startup cronjob script).

 

Building a HLS Muxing Raspberry Pi Cluster

It was quite a long time ago now that I started HLS-Stream-Creator, and I've previously released an example of automating HLS-Stream-Creator so that it receives and processes workloads.

I never really expected that I'd actually have much practical use for HLS Stream Creator when I created it (it was a means of learning about HLS in advance of a 2nd interview), particularly as I wasn't generating/publishing any video at the time.

Over time, though, that's changed and my needs have grown from occasionally invoking the script to wanting the convenience of having a dedicated muxing pool so that I can simply submit a video and have it come out in HTTP Live Streaming (HLS) format (particularly useful now that I have videos.bentasker.co.uk).

Although I'm not going to focus on the Command and Control aspect (at its heart it's a simple REST API) in any depth, this documentation will detail the process I followed in order to have three Raspberry Pis PXE boot and run HLS-Stream-Creator as a service in order to receive arbitrary videos, calculate the output bitrates and then generate HLS output.

It's as much an opportunity to document the process I used to PXE boot Raspberry Pis with an NFS root filesystem.

Read more…

Building a DNS over TLS (DoT) server

WARNING: This article is outdated and has been superseded by Configuring Unbound for Downstream DoT.

Since Unbound 1.6.7 there's a better way than the one described here

I previously posted some documentation on how to build a DNS over HTTPS (DoH) Server running Pi-Hole and/or Unbound.

There's another standard available, however - RFC 7858 DNS over TLS (DoT)

DoT isn't as censorship resistant as DoH (as it's easier to block), but does provide you with additional privacy. It also has the advantage of being natively supported in Android Pie (9), so can be used to regain control of your queries without needing to run a dedicated app like Intra, with all the issues that might entail.

In this documentation we're going to trivially build a DoT server and place queries against it.

 

Read more…

Installing iRedMail on Debian (Jessie) 8

I've run my own mail server for quite some time, but it's started to reach the point where a refresh is probably in order.

Normally, I'd prefer to build from scratch, but I thought, this time, I'd have a look at some of the "off-the-shelf" solutions that now exist. Mailinabox was quickly discounted because there's no real configurability, which doesn't sit well with me (it does simplify installation, but makes long-term management that much harder).

iRedMail seems to have a reasonable following, and a scan of its website and documentation suggested that it is, at least, reasonably sane.

This documentation details the process I followed to install iRedMail on Debian 8 (Jessie). I used Jessie rather than Stretch (9) because that's what the VM I was repurposing was imaged with.

 

Read more…

Building and running your own DNS-over-HTTPS Server

There's been a fair bit of controversy over DNS-over-HTTPS (DoH) vs DNS-over-TLS (DoT), and some of those arguments still rage on.

But, DoH isn't currently going anywhere, and Firefox has directly implemented support (though it calls them Trusted Recursive Resolvers or TRR for short).

Although DoH offers some fairly serious advantages when out and about (preventing blocking or tampering of DNS lookups by network operators), when left with default configuration it does currently come with some new privacy concerns of its own. Do you really want all your DNS queries going via Cloudflare? Do you want them to be able to (roughly) tell when your mobile device is home, and when it's out and about (and potentially, also, who your employer is - if they own the netblock)? The same questions, of course, apply if you use Google's DNS too.

That, however, is addressable by running your own DNS-over-HTTPS server. This also has advantages if you're trying to do split-horizon DNS on your LAN, so I'll discuss that later too.

The primary purpose of this documentation is to detail how to set up your own DoH server on Linux. The main block of this documentation is concerned with getting a NGinx fronted DoH server backed by Unbound up and running, but will also discuss the steps needed to add Pi-Hole into the mix.

Unless otherwise noted, all commands are run as root

 

Read more…

OpenVPN, Network-Manager and max-routes

Network-manager, simply, sucks. But sometimes you have little choice but to use it.

Unfortunately, despite a bug having sat idle for some time, network-manager-openvpn doesn't support various OpenVPN client options, such as max-routes. If your OpenVPN server is pushing more than 100 routes, this is sufficient to prevent you from connecting at all.

This documentation details a way to work around that limitation. It's dirty and hacky but, so far, it's the only solution I've found.

Basically, we're going to move OpenVPN out of the way and replace it with a shell script that'll take the command it's given, and set max-routes (or any other custom option) appropriately.

First, we need to relocate openvpn, so

sudo -i
cd /usr/sbin
mv openvpn openvpn.real
touch openvpn
chmod +x openvpn

Then open openvpn in your text editor of choice, and enter the following

#!/bin/bash
#
# Wrapper around the real OpenVPN binary: extract the --remote host from the
# arguments NetworkManager passes in, look up any extra options configured for
# that host, and prepend them to the real command line.

oldopts=$@

# Normalise the arguments so that --remote can be found reliably
t=`getopt -o r: --long remote: -- $@ 2> /dev/null`
set -- $t
while [ $# -gt 0 ]
do
        case $1 in
        --remote) remote=`echo "$2" | sed "s/'//g"`; break;;
        --) break;;
        esac
        shift
done

# Fetch any additional options configured for this remote
newopts=$(egrep -e "^$remote" ~/.vpn_additional_opts | cut -d\| -f2)

# Hand off to the real binary, passing signals through so the connection
# can still be stopped cleanly
trap 'kill $PID 2> /dev/null || true' TERM INT HUP
/usr/sbin/openvpn.real $newopts $oldopts &
PID=$!
wait $PID

Now we just need to create a small config file in our home directory. The key for the file is the hostname/ip provided to NetworkManager (in this case, foo.example.com). So, as your user

echo "foo.example.com|--max-routes 200" > ~/.vpn_additional_opts

Everything after the | will be passed to openvpn, so if there are any other arguments you need that are not supported by network manager you can also add them there.

The new script will likely be overwritten when you next update openvpn, so post-upgrade there are a couple of steps you need to follow

sudo -i
cd /usr/sbin
mv openvpn openvpn.real
cp ~/path/to/backup/openvpn ./
chmod +x openvpn

 

Configuring LetsEncrypt on a CentOS 6 NGinx Reverse Proxy

For those who haven't come across it, LetsEncrypt allows you to obtain free DV SSL Certificates but requires a server side script to be run periodically in order to renew the certificates (for better or worse, a 90 day expiration period has been used).

Although the provided script has plugins to allow support for automatically generating SSL certs based on NGinx and Apache configurations, the script assumes that the server is the origin and that the relevant docroot is available for writing to.

In the case of a reverse proxy - this won't be the case. We want the certificate on the Reverse Proxy (being the endpoint the client connects to) but the websites files are hosted on another server.

This documentation details a simple way to work around that on a NGinx reverse proxy (it should be possible to adjust the config for Apache's mod_proxy if needed).
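The gist of the workaround is to serve the ACME challenge path from a directory local to the proxy, so that the webroot plugin can be used even though the site's files live elsewhere - a sketch, with paths and domain as placeholders (and noting that the client was still called letsencrypt-auto rather than certbot back then):

    # In the relevant server {} block on the reverse proxy
    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;
    }

    # Then issue/renew using the webroot plugin
    certbot certonly --webroot -w /var/www/letsencrypt -d www.example.com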

 

Read more…

Installing and Configuring KDump on Debian Jessie

Having kdump enabled on a server provides a number of benefits, not least that in the event of a kernel panic you can collect a core-dump to help investigations into the root cause. It may simply be bad luck, but my experience with Debian Jessie has been that JournalD is absolutely hopeless in the event of a kernel panic.

Pre SystemD we used to (sometimes) get a backtrace written out to a log, even a partial backtrace could help point investigations into a rough direction, but even with JournalD configured to pass through to rsyslogd those traces just don't seem to be appearing (which to be fair, might be because of the nature of the panic rather than the fault of journald).

This documentation details the steps required to install and configure KDump on Debian Jessie
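In broad strokes, the setup amounts to installing kdump-tools and reserving memory for the capture kernel - a sketch, run as root:

    apt-get install kdump-tools crash

    # Reserve memory for the capture kernel via the kernel command line, e.g. in
    # /etc/default/grub: GRUB_CMDLINE_LINUX_DEFAULT="... crashkernel=128M"
    update-grub && reboot

    # After the reboot, confirm it's armed
    kdump-config show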

Read more…

Building a Tor Hidden Service From Scratch - SELinux

On a system with SELinux, upon attempting to start Tor, you may see errors similar to the following

    [root@localhost tor]# service tor start
    Raising maximum number of filedescriptors (ulimit -n) to 16384.
    Starting tor: Apr 02 15:53:14.041 [notice] Tor v0.2.5.11 (git-83abe94c0ad5e92b) running on Linux with Libevent 1.4.13-stable, OpenSSL 1.0.1e-fips and Zlib 1.2.3.
    Apr 02 15:53:14.042 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
    Apr 02 15:53:14.042 [notice] Read configuration file "/etc/tor/tor-rpm-defaults-torrc".
    Apr 02 15:53:14.042 [notice] Read configuration file "/etc/tor/torrc".
    Apr 02 15:53:14.056 [notice] Opening Socks listener on 127.0.0.1:8080
    Apr 02 15:53:14.057 [warn] Could not bind to 127.0.0.1:8080: Permission denied
    Apr 02 15:53:14.058 [notice] Opening DNS listener on 127.0.0.1:54
    Apr 02 15:53:14.060 [warn] Could not bind to 127.0.0.1:54: Permission denied
    Apr 02 15:53:14.060 [notice] Opening Transparent pf/netfilter listener on 127.0.0.1:9040
    Apr 02 15:53:14.062 [warn] Could not bind to 127.0.0.1:9040: Permission denied
    Apr 02 15:53:14.062 [warn] Failed to parse/validate config: Failed to bind one of the listener ports.
    Apr 02 15:53:14.062 [err] Reading config failed--see warnings above.
    /usr/bin/torctl start: tor could not be started

Which is almost certainly the result of a selinux policy

    [root@localhost tor]# sestatus 
    SELinux status:                 enabled
    SELinuxfs mount:                /selinux
    Current mode:                   enforcing
    Mode from config file:          enforcing
    Policy version:                 24
    Policy from config file:        targeted

There should be a Tor type within SELinux so, rather than disabling SELinux completely, we'll just tell it to treat the Tor domain (tor_t) as permissive

    [root@localhost tor]# yum install policycoreutils-python
    [root@localhost tor]# semanage permissive -a tor_t

Alternatively, to make selinux enforce, but to instead allow tor to bind to non-reserved ports

    [root@localhost tor]# semanage permissive -d tor_t # Undo the change we made above
    [root@localhost tor]# setsebool -P tor_bind_all_unreserved_ports 1

The latter approach will not help if you've told Tor to bind to a reserved port (for example if DNS is set to bind to port 53). In the example output above, Tor had been configured to bind its DNS service to port 54, so simply allowing tor to bind to unreserved ports would be insufficient.

Note: The ports you configure for hidden services do not need to be taken into account, as Tor does not actually bind to these ports; it simply interprets traffic received via the Tor connection and acts appropriately. Once the policy change has been made, start Tor

    [root@localhost tor]# service tor start

 

Building a Tor Hidden Service From Scratch - Part 3 - General User Anonymity and Security

This is Part 3 of my Hidden Service From Scratch documentation. In Part One we designed and built our system, in Part Two we configured HTTP Hidden Service hosting.

In this documentation, we'll be looking more generally at user account and identity protection, as well as examining why you may need to maintain a certain level of paranoia even if your hidden service doesn't fall outside the law in your home country.

Introduction

This section will cover how to implement various mechanisms which may be of use in protecting from Spam/bots and user account compromise.

Many of the approaches used on the Clearnet are effective, but may lead to a small loss of privacy. Most Hidden Service users use Tor precisely because they care about privacy, and will expect the design decisions that you make to take that into consideration.

In the previous module, it was stated that using server-side scripting should be avoided where possible. For some sites, though, it simply isn't possible - forums for example simply don't work without some form of scripting.

Credential phishing is particularly prevalent on the darknet, so you may wish to implement 2 Factor Authentication so that your users can choose to use an additional level of protection. Mechanisms such as Google Authenticator and the Yubikey obviously aren't going to be particularly welcome, as they require information to be disclosed to a third party.

Similarly, you may wish to send emails to users, but care needs to be taken in order to do so without revealing the identity of your server.

In this short module, we'll be looking at

  • Bot/Compromise Protection
    • CAPTCHAs
    • 2 Factor Authentication (2FA)
  • Email
    • MTA -> Tor
    • HS -> Public Mailserver
  • Handling User Data in General
  • Opsec is essential
  • Backups
    • Push
    • Pull
    • Encryption

 

Bot/Compromise Protection

Read more…

Building a Tor Hidden Service From Scratch - Part 2 - HTTP and HTTPS

Despite some fairly negative media attention, not every Tor Hidden Service is (or needs to be) a hotbed of immorality. Some exist in order to allow those in restrictive countries to access things we might take for granted (like Christian materials).

Whilst I can't condone immoral activities, Tor is a tool, and any tool can be used or misused

This is part Two in a detailed walk through of the considerations and design steps that may need to be made when setting up a new Tor Hidden Service.

The steps provided are intended to take security/privacy seriously, but won't defend against a wealthy state-backed attacker.

In Part One we looked at the system design decisions that should be made, and configured a vanilla install ready for hosting hidden services.

Introduction

This section will cover the steps required to safely configure a Tor Hidden service on ports 80 and 443

  • Basic setup
    • Installing and Configuring NGinx
    • Enabling a static HTML Hidden Service
    • CGI Scripts
    • Installing and configuring PHP-FPM
  • HTTPS
    • Safely generating a self-signed certificate
    • Configuring NGinx to serve via HTTPS
  • Hidden Service Design Safety

 

Basic Setup

Read more…

Building a Tor Hidden Service From Scratch - Part 1 - Design and Setup

Despite some fairly negative media attention, not every Tor Hidden Service is (or needs to be) a hotbed of immorality. Some exist in order to allow those in restrictive countries to access things we might take for granted (like Christian materials).

Whilst I can't condone immoral activities, Tor is a tool, and any tool can be used or misused

This is part one in a detailed walk through of the considerations and design steps that may need to be made when setting up a new Tor Hidden Service.

The steps provided are intended to take security/privacy seriously, but won't defend against a wealthy state-backed attacker.

How much of it you'll need to implement will obviously depend on your own circumstances, and in some cases there may be additional steps you need to take

Introduction

This section will cover the basics of setting up a hidden service, including

  • Decisions you need to make regarding hosting and hidden service operation
  • Controlling/Securing SSH access via Clearnet address
  • Basic Opsec changes to the server
  • Installing and Configuring Tor
  • Enabling a Tor Hidden Service for routine SSH access
  • Configuring an SSH client to connect to that hidden service

 

Hosting Theory

Read more…

Copying a Linux Kernel From One System to Another

There may be occasions where, for testing purposes, you want to copy a kernel from one machine to another.

There are some fairly self-explanatory caveats:

  • The donor and target system must be running on the same architecture
  • The target machine shouldn't have any (important) hardware that's unsupported by your donor kernel

Obviously, you'll ideally want to make sure that the hardware is as close to identical as possible (otherwise your testing may be invalid), so the above should be considered a minimum.
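On a Debian-style layout, the mechanics are roughly as below (kernel version and hostnames are placeholders; the initramfs/bootloader tooling will differ on other distros):

    # On the donor
    KVER=4.19.0-8-amd64
    scp /boot/vmlinuz-$KVER /boot/config-$KVER /boot/initrd.img-$KVER target:/boot/
    rsync -a /lib/modules/$KVER/ target:/lib/modules/$KVER/

    # On the target: rebuild the initramfs and bootloader entries
    update-initramfs -u -k $KVER
    update-grub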

 

Read more…

sar Cheatsheet

sar can be an incredibly helpful utility when examining system performance, but if not used regularly it's easy to forget which flags to use.

This short post details a number of useful arguments to pass

 

Basic Output

sar

 

CPU Usage per Core

sar -P ALL

 

Memory Usage

sar -r

 

Swap Usage

sar -S

 

I/O

sar -b

 

I/O by Block Device

sar -d -p

 

Check Run Queue and Load Average

sar -q

 

Network Stats

sar -n DEV

Where DEV can be one of the following

  • DEV – Displays network devices vital statistics for eth0, eth1, etc.,
  • EDEV – Display network device failure statistics
  • NFS – Displays NFS client activities
  • NFSD – Displays NFS server activities
  • SOCK – Displays sockets in use for IPv4
  • IP – Displays IPv4 network traffic
  • EIP – Displays IPv4 network errors
  • ICMP – Displays ICMPv4 network traffic
  • EICMP – Displays ICMPv4 network errors
  • TCP – Displays TCPv4 network traffic
  • ETCP – Displays TCPv4 network errors
  • UDP – Displays UDPv4 network traffic
  • SOCK6, IP6, EIP6, ICMP6, UDP6 are for IPv6
  • ALL – This displays all of the above information. The output will be very long.

 

Historic Information

By default, sar will report on the current day, however previous days can be checked with

sar -f /var/log/sa/sa15 -n DEV

ls -l Shows Question Marks instead of Permissions

Occasionally, when running ls -l within a directory, you might find that the output shows question marks (?) instead of the usual permissions indicators:

ben@Queeg:~$ ls -l ~/test
ls: cannot access /var/www/html/Vx/Notes: Permission denied
total 0
d????????? ? ? ? ? ? Notes

 

This is because whilst the user has permission to read the directory, they don't have permission to stat the entries within it. At some point, the chmod command has likely been used to remove the executable bit.

ben@Queeg:~$ ls -ld ~/test
drw-rw-rw- 6 ben ben 4096 Nov 7 10:30 /home/ben/test

Fixing the issue is as simple as adding execute permissions

chmod +x ~/test

If you've accidentally removed the executable flag using a recursive chmod, you can re-instate it with the following command

find ~ -type d -exec chmod +x {} \;

Where ~ is the base path you want to start searching from (in this case /home/$USER). The -type argument ensures the change will only be applied to directories.

Avoiding BCC Leaks with Exim

This issue is, by no means, Joomla specific - but Joomla's mass mail functionality provides a good example of what can go wrong.

The expectation that most users have, is that the list of recipients BCC'd on an email will never be visible to any of those recipients.

Unfortunately, whether or not that's the case may well depend on the Mail Transport Agent (MTA) that you are using.

Those familiar with Joomla's Mass Mail feature will know that by default, recipients are BCC'd - unfortunately, if you're using Exim (which most CPanel servers, for example, are) then you may in fact find that those receiving your message can see exactly who it was sent to.

Whether or not this BCC Leak is visible to the recipients will depend on what mail client they use (assuming they're not in the habit of looking at the mail headers anyway....), but those using Google Apps/Google Mail will have the list clearly presented to them when viewing the mail.

 

The issue stems from the fact that the Exim developers appear to have adopted a simple philosophy - An MTA should never change mail headers, that's for the Mail User Agent (MUA) to do.

For the most part, I'd agree, but add the qualification - with the exception of BCC...

As a result of this do not touch mentality, if Exim receives the mail with the BCC headers intact, it will faithfully relay that mail onto the recipients' SMTP servers with the full header intact (something Mutt users discovered a little while back).

The issue being that if your Joomla site is configured to use either PHPMail or Sendmail, there isn't an MUA to speak of, and so Exim helpfully discloses your BCC recipient list.

 

Read more…

Installing Mailpile on CentOS 6

I've been meaning to play around with Mailpile since the beta was released back in September. Thanks to a bout of insomnia I finally found time, though it turns out that getting it up and running on CentOS 6 is initially something of a pain.

This documentation details the steps required to install and run Mailpile on CentOS 6

DISCLAIMER: For reasons I'll discuss in a separate post, at time of writing I'd only recommend following these steps if you want to test/play with Mailpile - personally, I don't feel at all comfortable with the idea of using Mailpile in production in its current state.

Read more…

Implementing Secure Password Storage with PHPCredlocker and a Raspberry Pi

Password storage can be a sensitive business, but no matter whether you're using PHPCredlocker or KeePassX, dedicated hardware is best. The more isolated your password storage solution, the less likely it is that unauthorised access can be obtained.

Of course, dedicated hardware can quickly become expensive. Whilst it might be ideal in terms of security, who can afford to Colo a server just to store their passwords? A VPS is a trade-off - anyone with access to the hypervisor could potentially grab your encryption keys from memory (or the back-end storage).

To try and reduce the cost, whilst maintaining the security ideal of having dedicated hardware, I set out to get PHPCredlocker running on a Raspberry Pi.

This documentation details how to build the system. A Raspberry Pi Model B+ was used, but the B should be fine too.

Read more…

Implementing Encrypted Incremental Backups with S3cmd

I've previously detailed how to use S3cmd to backup your data from a Linux machine. Unfortunately, because of the way that s3cmd works, if you want an incremental backup (i.e. using 'sync') you cannot use the built-in encryption.

In this documentation I'll be detailing a simple way to implement an encrypted incremental backup using s3cmd, as well as a workaround if you're unable to install GPG - instead using OpenSSL to encrypt the data. Obviously we'll also be exploring how to decrypt the data when the backups are required

It's assumed that you've already got s3cmd installed and configured to access your S3 account (see my earlier documentation if not).

Read more…

Unbound: Adding Custom DNS Records

When I wrote my post on configuring DNS, DHCP and NTP on a Raspberry Pi, I forgot to include information on how to add your own DNS records to Unbound (straightforward as it is). So in this post, I'll give a very brief overview.

All changes should be made in an unbound configuration file (probably /etc/unbound/unbound.conf, though you could also put them into a file in local.d, depending on your distribution - see below)
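As a preview, the directives involved are local-zone and local-data - for example (names and IPs are obviously placeholders):

    server:
        # Answer authoritatively for a private zone
        local-zone: "home.example." static
        local-data: "nas.home.example. IN A 192.168.1.10"
        local-data-ptr: "192.168.1.10 nas.home.example"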

 

Read more…

NGinx: Accidentally DoS'ing yourself

It turned out to be entirely self-inflicted, but I had a minor security panic recently. Whilst checking access logs I noticed (a lot of) entries similar to this

127.0.0.1 [01/Jun/2014:13:04:12 +0100] "GET /myadmin/scripts/setup.php HTTP/1.0" 500 193 "-" "ZmEu" "-" "127.0.0.1"

There were roughly 50 requests in the same second, although there were many more in later instances.

Generally an entry like that wouldn't be too big of a concern, automated scans aren't exactly a rare occurrence, but note the source IP - 127.0.0.1 - the requests were originating from my server!

I noticed the entries as a result of having received a HTTP 500 from my site (so looked at the logs to try and find the cause). There were also (again, a lot of) corresponding entries in the error log

2014/06/01 13:04:08 [alert] 19693#0: accept4() failed (24: Too many open files)

After investigation, it turned out not to be a compromise. This post details the cause of these entries.

 

Read more…

Usurping the BTHomeHub with a Raspberry Pi: Part 6 - Conclusion

Throughout this series of articles, we've been aiming to usurp the role of the BTHomeHub on our home network, leaving it to do nothing but act as an Internet Gateway and provide a basic NAT firewall. As we've seen, it can be stubborn and insist on trying to ignore 'off' settings.

In the previous five parts, we've configured our Raspberry Pi to perform many of the functions of the HomeHub, as well as a few extras that BT never saw fit to provide. So, now we're going to step back and look at the functionality we've got.

 

Read more…

Usurping the BTHomeHub with a Raspberry Pi: Part 5 - Inbound OpenVPN

In Part 4 we configured our Raspberry Pi router to maintain a number of OpenVPN tunnels and to route through them selectively. Now we'll look at the steps needed to allow connection to our LAN via OpenVPN. Although helpful, as the HomeHub doesn't provide VPN connectivity, this stage doesn't really count as Usurping the BTHomeHub.

The steps are almost completely identical to those performed when Installing OpenVPN on Debian. We're going to have to NAT connections, though, as the HomeHub is a little stupid and we can't add static routes to it (so if we're connected to the VPN and accessing the Internet, it won't know where to route the response packets).

What we'll do, though, is only NAT if the connection isn't to something on the LAN.
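As a rough illustration of the sort of rule involved (assuming a 192.168.1.0/24 LAN, a 10.8.0.0/24 VPN subnet and eth0 as the LAN-facing interface; adjust to your own addressing)

# Only masquerade VPN traffic that isn't destined for the LAN itself
iptables -t nat -A POSTROUTING -s 10.8.0.0/24 ! -d 192.168.1.0/24 -o eth0 -j MASQUERADE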

 

Read more…

OpenVPN on CentOS 6 (Updated) - With HMAC

I've previously documented how to install and configure OpenVPN on CentOS 6, but the steps appear to be outdated.

In this documentation, we'll (very quickly) detail how to configure OpenVPN on CentOS 6. We're also going to enable TLS Authentication so that OpenVPN won't even respond unless the connecting client provides the right pre-shared key.

You'll need the EPEL repos installed and enabled.
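By way of a taster, enabling TLS Authentication boils down to generating a shared secret and referencing it on both ends (the paths below are examples)

# Generate the shared HMAC key on the server
openvpn --genkey --secret /etc/openvpn/ta.key

# server.conf - the second argument is the key direction (0 on the server)
tls-auth /etc/openvpn/ta.key 0

# client config - use 1 on the client side
tls-auth ta.key 1

The ta.key file then needs to be copied to each client over a secure channel.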

 

Read more…

Recovering from corrupted InnoDB Pages

I recently encountered an issue with various InnoDB pages becoming corrupted on the database that plays host to my JIRA install. It was - to some extent - a mess of my own making for mixing production and development databases (or more precisely, for hosting that production database on a dev machine).

Lesson learnt, sure, but I still needed to address the issue so that I could get JIRA up and running again.

This documentation details the steps to follow - it won't resolve every case of corruption, but it resolved the issues I was seeing.
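The full post has the detail, but for context, the usual first port of call (not necessarily the only step needed) is MySQL's innodb_force_recovery setting, raised one level at a time until mysqld will start, at which point you dump what you can ('jiradb' below is a placeholder database name)

# /etc/my.cnf - start at 1 and only increase if mysqld still refuses to start
[mysqld]
innodb_force_recovery = 1

# Once the server is up, take a dump of the affected database
mysqldump jiradb > jiradb.sql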

 

Read more…

Usurping the BTHomeHub with a Raspberry Pi: Part 4 - Using a VPN to Tunnel Connections to Specific IPs

Content Filtering is becoming increasingly popular amongst politicians, ISPs and generally clueless do-gooders. The problem is, whatever you think of their motives, it's generally poorly implemented and interferes with the end-user's browsing experience, even when it's not supposed to (the image to the right appeared with filtering off!).

As we've been Usurping the BTHomeHub with a Raspberry Pi, we're going to take a brief break to implement some useful functionality that the HomeHub didn't provide.

In this part, we're going to configure our Raspberry Pi to connect to an OpenVPN server and route some of our traffic over the tunnel, depending on the destination IP (i.e. split tunnelling). This will allow us to easily bypass the troublesome content filtering, whilst not unnecessarily introducing latency to any connection that is (for the time being at least) unaffected by the filters.

Note: We'll be manually specifying the connections that are routed via VPN, so that we can 'whitelist' mistakes such as the EFF and Wikipedia, whilst still being 'protected' against other filtered pages.

Unless otherwise stated, all commands need to be run as root.
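To give a sense of where we're heading, once the tunnel is up, sending a single destination over it is just a static route (the address and interface name below are placeholders)

# Route traffic for one filtered IP via the VPN tunnel
ip route add 198.51.100.10/32 dev tun0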

 

Read more…

OpenVPN on Debian

Setting up OpenVPN on Debian is as straightforward as on CentOS, though some of the file locations differ slightly.

This documentation details how to install and configure OpenVPN on a Debian server.

 

The first thing we need to do is to get openvpn installed

apt-get install openvpn

Next we want to create a configuration file; we'll use and adapt the sample config file

cp /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz /etc/openvpn/
cd /etc/openvpn/
gunzip server.conf.gz

By default, the OpenVPN server will hand out IPs in the 10.8.0.0/24 subnet; if you want to change this, edit the config as follows (I'll change it to 10.14.0.0/24)

nano server.conf

# Find server 10.8.0.0 255.255.255.0 and change to
server 10.14.0.0 255.255.255.0

Save and exit (Ctrl + X, Y)

Next we want to create our keys and certificates, assuming we're still cd'd into /etc/openvpn

mkdir easy-rsa/keys -p
cd easy-rsa/
cp -r /usr/share/doc/openvpn/examples/easy-rsa/2.0/* ./

At the bottom of the file vars are some variables that we probably want to change - they set the defaults used in config generation

nano vars

# Set these to suit
export KEY_COUNTRY="UK"
export KEY_PROVINCE="SUFF"
export KEY_CITY="Ipswich"
export KEY_ORG="myserver"
export KEY_EMAIL="me@adomain.com"

Save and exit

Read more…

Usurping the BTHomeHub with a Raspberry Pi: Part 3 - Routing, Remote Administration and Utilities

In Part One we configured a RaspberryPi to act as a Wireless Access point, providing DHCP services to wireless clients. In Part Two we then configured our Pi to provide DHCP, DNS and NTP services to the entire LAN.

In this part, we'll be taking some more responsibility away from the BTHomeHub, as well as configuring a few conveniences, such as Remote administration and useful utilities, including

  • Wake On Lan
  • Network Troubleshooting Tools
  • Dynamic DNS Update Client (No-Ip.com)

 

Read more…

Usurping the BTHomeHub with a Raspberry Pi: Part 2 - DNS, DHCP and NTP

In Part One, we configured our Raspberry Pi to act as a wireless access point and bridged the wireless and wired interfaces so that WLAN clients were easily accessible from the LAN.

As part of that setup, we configured a DHCP server, however we haven't yet made it the DHCP server for the LAN - our tired old BTHomeHub is still the authoritative server for the network.

In this part, we'll be reconfiguring our DHCP server so that it takes responsibility for the entire LAN, configuring DNS services, and making our Pi the LAN's central NTP (Network Time Protocol) server.

Step by step, we'll be configuring our Raspberry Pi to take over nearly all of the duties performed by the BTHomeHub.

 

Read more…

Usurping the BTHomeHub with a Raspberry Pi: Part 1

I used to run a nice pfSense box as my router but, power bills being what they are, I reverted to using a BTHomeHub. Unfortunately, the BTHomeHub isn't particularly good: the WiFi signal sucks, its DNS server seems to daydream, and it occasionally forgets that it should be assigning some devices the same IP every time (or, more precisely, will give their IP away if they are not currently present).

We could, of course, replace the HomeHub with something a bit more up market, but where's the fun in that? In this post, we'll be starting down the route of using a Raspberry Pi to usurp some of the power the BTHomeHub currently holds over the LAN. Eventually, the HH will be acting as nothing but a dumb internet gateway, doing a little bit of NAT and not much else.

 

Read more…

Creating a Virtual Network Interface in Debian

There are times when you might want to assign more than one IP to a system, even if it only has a single physical NIC. This documentation details how to create a virtual network interface (known as aliasing) under Debian (see here for how to alias in CentOS 6).

We'll assume that your NIC is eth0, if not then simply use the name of your network interface.

To check, run

cat /etc/network/interfaces

You should see an entry similar to one of the following

auto eth0
iface eth0 inet dhcp

# OR
iface eth0 inet static
address 192.168.1.252
netmask 255.255.255.0
gateway 192.168.1.254

Taking the settings from above, we can create the virtual interface by doing the following

nano /etc/network/interfaces

# Add the following
auto eth0:1
allow-hotplug eth0:1
iface eth0:1 inet static
address 192.168.1.248
netmask 255.255.255.0
gateway 192.168.1.254

# Ctrl-X, Y to Save and exit

The configuration above creates a new interface - eth0:1 and sets a static IP of 192.168.1.248. To apply the changes, we simply need to restart networking

service networking restart

It's that simple. Now you should be able to see the interface

ifconfig
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:129 errors:0 dropped:0 overruns:0 frame:0
          TX packets:129 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:16247 (15.8 KiB)  TX bytes:16247 (15.8 KiB)

eth0      Link encap:Ethernet  HWaddr b8:27:eb:a2:0e:62
          inet addr:192.168.1.252  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:902097 errors:0 dropped:0 overruns:0 frame:0
          TX packets:432782 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:203759013 (194.3 MiB)  TX bytes:71615701 (68.2 MiB)

eth0:1    Link encap:Ethernet  HWaddr b8:27:eb:a2:0e:62
          inet addr:192.168.1.248  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

We can see the interface is up and running, and should be able to ping the IP that we've added
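For example, from another machine on the LAN

ping -c 3 192.168.1.248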

Read more…

Keeping Hitcounts accurate when using an NGinx Caching Proxy

In previous documentation, we've configured sites to use NGinx as a Reverse Caching Proxy, leading to hugely improved response times on popular content. We've also implemented a custom configuration so that we can refresh the cache periodically, ensuring that dynamic content (such as Twitter modules) updates.

One thing we haven't done as yet, though, is to address the issue of internal hitcounts. We've looked specifically at using NGinx with Joomla, and noted that a side effect would be inaccurate hitcounts displayed within Joomla (which remains true even when using the internal page caching).

In this documentation, we'll be implementing a small script to ensure that hits served from the cache are still recorded within Joomla (or Wordpress, Drupal - whatever you happen to be using), albeit a little while after they happen.

Most people won't need a realtime view of hits (if you do, you'll need a different implementation to the one we're going to develop), so we're going to have the hitcounts update hourly.

Put simply, the aim is to parse NGinx's access logs and place a cache bypassing request for each cache hit that we find - as we're running hourly we'll only process requests which occurred in the last hour.
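As a rough sketch of the idea (the log location, field positions and bypass header are assumptions; adjust them to match your own NGinx config, which needs to skip the cache when it sees the header)

#!/bin/bash
# Replay the last hour's cache HITs so that the CMS still records the hit
LOG=/var/log/nginx/access.log
SITE="https://www.example.com"

# Timestamps in the log look like [01/Jun/2014:13:04:12 +0100]
HOUR=$(date -d '1 hour ago' '+%d/%b/%Y:%H')

# Assumes a combined-style log format with the cache status appended,
# so the request path is field 7 - amend the awk statement if yours differs
grep "\[$HOUR" "$LOG" | grep ' HIT' | awk '{print $7}' | sort | uniq -c | \
while read -r count path
do
        # Place one cache-bypassing request per hit recorded
        for i in $(seq 1 "$count")
        do
                curl -s -o /dev/null -H "X-Bypass-Cache: 1" "${SITE}${path}"
        done
done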

 

Read more…

CentOS: Using NGinx as an SSL Reverse Proxy for Apache

A little while ago, I published a guide to configuring NGinx to act as a Reverse Proxy for Apache. Although it was briefly mentioned, we never went through the steps necessary to configure NGinx to work in this manner for SSL connections - we simply left Apache listening on port 443.

This documentation details how to go about configuring NGinx to handle your SSL stuff as well. It assumes you've already generated CSRs and got the SSL certificates to use (presumably because they're already being used by Apache).
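To give an idea of the end result (the certificate paths, hostname and Apache's port here are examples), the NGinx server block ends up looking something like

server {
    listen 443 ssl;
    server_name www.example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        # Apache now only needs to listen for plain HTTP on the loopback
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
    }
}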

 

Read more…

Transcoding files ready for HTTP Live Streaming on Linux

HTTP Live Streaming (HLS) is an IETF draft standard created by Apple Inc. Its use is pretty widespread: although it was primarily designed to allow video to be easily delivered to iOS devices, it works well with a wide range of clients. Later versions of Android support it, as do players like VLC.

Unfortunately, a lot of the tools for creating (and testing) HLS streams are created and released by Apple. Unless you develop for iOS devices, you probably lack a developer login!

It's actually pretty easy to set up at a basic level. In this documentation we'll be looking at what HLS is, and how to prepare video for transmission using HLS.
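By way of illustration (and not necessarily the route taken in the full post), more recent ffmpeg builds include an HLS muxer that handles the segmenting and playlist generation for you (the filenames here are examples)

# Transcode to H.264/AAC and cut into ~10 second MPEG-TS segments plus an m3u8 playlist
ffmpeg -i input.mp4 -c:v libx264 -c:a aac -f hls -hls_time 10 -hls_list_size 0 stream.m3u8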

 

Read more…

Configuring NGinx to act as a Reverse Proxy for PHPMyAdmin

In a previous post, I detailed how to Use NGinx to serve static files and Apache for dynamic as well as the minor tweaks you need to make to have it work nicely with Joomla.

One thing I didn't cover, though, is setting up PHPMyAdmin. This documentation isn't going to go into the detail of installing and configuring PHPMyAdmin as there's plenty of that available elsewhere on the web. What we will discuss, though, is the NGinx configuration changes you need to make to have the connection reverse proxied to Apache.

These steps only really apply if you've gone for a system-wide installation of PMA. If you've unpacked into a web-accessible directory then you probably don't need to make any changes!
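The gist of the change (Apache's port and the path are assumptions based on a typical setup) is a location block along these lines

# Pass anything under /phpmyadmin straight through to Apache
location /phpmyadmin {
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}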

Read more…

Checking for Outdated Joomla Extensions on your server

When you're managing Joomla sites it's reasonably easy to keep track of updates, especially if you use something like Watchful to help you. When you're running a server and only managing some (or none) of those sites, it becomes a little more difficult (especially on a busy shared hosting server).

It's quite easy to shrug and say 'Not my site, not my problem', but the simple fact is that it is. The second someone manages to compromise one of the sites you host, they're going to try and find a way to run arbitrary code; once they've done that, they'll try to run an auto-rooter. If they succeed, it's game over for everyone you host!

The extension that always comes to mind is the Joomla Content Editor (JCE), as it had a nasty vulnerability involving spoofed GIFs some time back. You'd hope that everyone would have updated by now, but there still seem to be a lot of sites running versions older than 2.1.1!

In this post, we'll be creating a script designed to automatically check every one of the sites you host for a version of JCE older than the latest. Adjusting it to check other extensions is easy, so long as that extension has an update stream.
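As a very rough sketch of the approach (the paths, the manifest location and the hard-coded 'latest' version are all assumptions; the real script reads the current version from JCE's update stream)

#!/bin/bash
# Flag any hosted site whose installed JCE version doesn't match the latest release
LATEST="2.3.4"   # placeholder - pull this from the update stream in practice

for manifest in /home/*/public_html/administrator/components/com_jce/jce.xml
do
        [ -f "$manifest" ] || continue
        # Pull the version number out of the extension manifest
        installed=$(sed -n 's/.*<version>\(.*\)<\/version>.*/\1/p' "$manifest" | head -n 1)
        if [ "$installed" != "$LATEST" ]
        then
                echo "JCE $installed (not $LATEST) found in $manifest"
        fi
done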

Read more…

Setting up Xen on Ubuntu 12.04

In order to be able to run some destructive testing on customers' systems, I needed to set up virtual servers. The hardware I have spare doesn't have virtualisation support, so KVM is out. Due to time constraints, my usual choice of CentOS is also out (RH have dropped support for Xen in RHEL 6 and I lack the time to risk delays).

So, I figured I'd use Ubuntu 12.04 (Precise Pangolin) for my Dom0.

The hardware is an old HP G3 with dual Xeon processors and 3GB RAM. It's never going to be much use for testing dedicated servers, but as a lot of VPS configurations are set to 1 core/ 1GB RAM it just about passes the mark.

This documentation details the steps I took to get Xen installed and set up - every step listed can be run via SSH (assuming you do a net install of the base system), but be aware that if something goes wrong you might need physical access to the system to resolve it.

 

Read more…

Enabling SRS on a CPanel Server

The default MTA on a CPanel server (Exim) has supported both the Sender Policy Framework (SPF) and the Sender Rewriting Scheme (SRS) for quite some time. Unfortunately, whilst CPanel provides configuration options allowing you to enable and configure SPF, the same cannot be said for SRS.

This can cause a major headache if you have set-up mail forwarders on your system. This documentation details how to go about configuring SRS.

Read more…

CentOS: Using NGinx to serve static files and Apache for dynamic

Apache is a great web server, but it has a pretty heavy memory footprint. That can become quite restrictive quite quickly, especially if you're on a system with limited resources (given how many people now run on a VPS, with the poor disk IO of those systems, it's all the more important - swapping is slow).

The way around it, is to configure your system to use NGinx as a reverse-proxy. Depending how many virtualhosts you have, you can make the changes almost completely transparently within about 10 minutes.

Pre-Requisites

First, we need to be able to install NGinx, which means setting up the EPEL repo (if you already have it enabled, skip this step)

CentOS 5.x

rpm -Uvh http://dl.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm

CentOS 6.x

rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

Now that the repo is installed, we need to install NGinx

yum install nginx
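Skipping ahead slightly, the end state we're working towards is a server block that serves static assets itself and hands everything else to Apache (the ports, paths and extensions here are examples, not necessarily those used in the full post)

server {
    listen 80;
    server_name www.example.com;
    root /var/www/html/example;

    # Serve static assets directly from disk
    location ~* \.(jpg|jpeg|png|gif|css|js|ico)$ {
        expires 30d;
    }

    # Everything else gets proxied to Apache (moved off to port 8080)
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}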

 

Read more…

Installing FFMpeg on CentOS 5

I'm not actually a huge fan of running things like ffmpeg on servers without good reason, but the popularity of extensions such as hwdMediaShare means that sometimes you have to install it.

Normally it'd be a simple yum install ffmpeg, but as it's not really server software it's not in the default repositories. This documentation explains the steps needed to install it (without compiling from source). It's CentOS 5 specific, but should apply to 6 as well, so long as you add the 5-specific repos.

First we need to add a couple of repositories

nano /etc/yum.repos.d/dag.repo

[dag]
name=DAG RPM Repository
baseurl=http://apt.sw.be/redhat/el$releasever/en/$basearch/dag
gpgcheck=1
enabled=1

Next we grab the RPM

rpm -Uhv http://apt.sw.be/redhat/el5/en/x86_64/rpmforge/RPMS//rpmforge-release-0.3.6-1.el5.rf.x86_64.rpm

For some, running yum install ffmpeg ffmpeg-devel will now be successful; for others there'll be a dependency issue with libedit. If you fall into the latter group, try the following steps

#Add the EPEL Repo
rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum install libedit ffmpeg ffmpeg-devel

That should be job done. Of course, if you are wanting to run HwdMediaShare you'll probably also want to run

yum install mencoder

Configuring Postfix to automatically forward mail for one address to another

There seem to be a number of people searching for how to do this, and from what I can see there's very little quick and easy documentation on the net. Say you've got a server hosting a website for example.com.

You want the server to accept mail for example.com but to automatically pass the mail onto a different address.

Assuming you're running Postfix, it's as simple as the steps below

First we make sure the virtual mappings file is enabled

nano /etc/postfix/main.cf 
# Scroll down until you find virtual_alias_maps

# Make sure it reads something like
virtual_alias_maps = hash:/etc/postfix/virtual
# We also need to make sure the domain is enabled
virtual_alias_domains=example.com

Save and exit, next we add the aliases to our mapping file

nano /etc/postfix/virtual 
# Forward mail for admin@example.com to jo.bloggs@hotmail.com
admin@example.com jo.bloggs@hotmail.com

Simple! And if we want to send to two different addresses at once, we just specify them

admin@example.com  jo.bloggs@hotmail.com jos.wife@hotmail.com

Finally, we just need to create a hash of the mapping file (later versions of Postfix don't actually require this)

postmap /etc/postfix/virtual
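Because we changed main.cf earlier, Postfix also needs to be told to pick up the new settings

postfix reload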

It's exactly the same principle as passing mail into a local user's mailbox.

Configuring Postfix to block outgoing mail to all but one domain

This is so simple to do, but I have to look it up every time I need it (not something that comes up regularly!).

When configuring a development server, you may find you have a need to ensure that emails will not be sent to any domain except those you explicitly permit (for example if you're using real-world data to do some testing, do you want to send all those users irrelevant emails?).

This documentation details how to configure Postfix on a Linux server to disregard any mail sent to domains that are not explicitly permitted.
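To give a flavour of one way this can be done (the domain names are examples, and this isn't necessarily the exact approach taken in the full post): refuse delivery by default via Postfix's error transport, and whitelist the permitted domains in a transport map

# /etc/postfix/main.cf
# Refuse delivery everywhere by default...
default_transport = error: outbound mail is disabled on this dev server
# ...except for domains listed in the transport map
transport_maps = hash:/etc/postfix/transport

# /etc/postfix/transport
example.com     smtp:

# Rebuild the map and reload Postfix
postmap /etc/postfix/transport
postfix reload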

 

Read more…

Finding the cause of high CPU utilisation on Linux

We've all had it happen from time to time: suddenly, sites on a server we manage appear to have slowed to a crawl. Having logged in, you take a look at the running processes to find the cause. Nine times out of ten it's something like a backup routine overrunning, so you kill the task and everything's OK again.

What do we do, though, if that's not the cause? It can sometimes be difficult to narrow down exactly what's responsible if the process list shows everything's currently OK. The slowdown may only kick in for a few seconds, but the perfectionist in us still needs to know the cause so we can mitigate it (if it's a particular page on a particular site, what happens if it suddenly gets a lot of hits?).

This documentation details ways to monitor usage without needing to spend all day looking at top. The main aim being to try and identify which process is responsible to then dig a little further.

 

I'm assuming you've already done the basic investigation (top, iostat, etc.). This is more about how to implement logging to try and pick up the longer-term issues.

 

The first thing to do is to set up a script based on one found on UnixForums, with a few tweaks to iron out a couple of minor bugs

#!/bin/bash
# Temp files used to hold the previous and current process snapshots
fa=/tmp/ps.a.$$
fb=/tmp/ps.b.$$
PS='aux'

# Capture the ps header (a plain assignment - piping into 'read' would run in a subshell)
zh=$(ps $PS | head -n 1)

# Prime the comparison with an initial snapshot
ps $PS | sort > "$fb"

while true
do
        sleep 60
        mv -f "$fb" "$fa"
        date
        ps $PS | sort > "$fb"
        echo "$zh"
        # Show only processes that changed between snapshots, ignoring those
        # with a trivial amount of CPU time (0:00 or 0:01)
        comm -13 "$fa" "$fb" | grep -v '0:0[01] '
        echo
done

The script takes a snapshot of the ps output, waits a minute and then takes another to compare against. It strips out any processes that have only used 0 - 1 seconds of CPU time (as they're not a major concern) and then prints details of the changed processes.

To be of real use we want to leave this running for quite some time, preferably logging to a file. So, assuming we saved the script as ps_monitor.sh, we run it in the background

./ps_monitor.sh > ps_log.log &

We'll then be building up a historic log, but probably don't want to have to read it all manually. Given the format it gives us, we can check for excessively high usage another way. Create a BASH script called parselog.sh

#!/bin/bash

# Get the filename to read from the arguments
FILENAME=$1
# Flag anything using more than this percentage of CPU
MAXLIM=30

echo "USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND"

while read -r a
do
        [ -z "$a" ] && continue

        # %CPU is the third column of 'ps aux' output; requiring a decimal number
        # also skips the date and header lines written by the monitoring script
        CPUUSAGE=$( echo "$a" | awk '{print $3}' )
        if ! echo "$CPUUSAGE" | grep -Eq '^[0-9]+\.[0-9]+$'
        then
                continue
        fi

        ABOVEMAX=$( echo "$CPUUSAGE > $MAXLIM" | bc )
        if [ "$ABOVEMAX" == "1" ]
        then
                # Squeeze repeated spaces so the columns sort cleanly
                echo "$a" | tr -s ' ' >> /tmp/pslogresults.$$
        fi
done < "$FILENAME"

# List the offenders, sorted by %CPU (highest first)
if [ -f /tmp/pslogresults.$$ ]
then
        sort -t" " -k 3 -rn /tmp/pslogresults.$$
        rm -f /tmp/pslogresults.$$
fi

To run the script, we simply invoke it (you've already made it executable, right?) and pass it the name of the file we directed our output to in the earlier command. So in this example we'd run

./parselog.sh ps_log.log

Although not pretty, this will output a list of all processes recorded as having a higher CPU utilisation than that specified in the parselog script. You may find (as I did) that one instance has a ridiculously high usage (90%), so you'll then be able to go and take a look.

Keep an eye out in the original log file for signs of infinite loops as well, but these two scripts can be very helpful in tracking down errant scripts that are hogging resources.

intel_do_flush_locked on Kubuntu 12

A lot of people have updated to the latest build of Kubuntu, but those with Intel graphics cards may find that KWin crashes regularly. This documentation details how to identify whether it's an issue with intel_do_flush_locked, what this means and how to resolve it until an official fix comes out.

 

Firstly, wait until KWin has crashed (your windows will lose their headers and your taskbar will disappear); based on the systems I've seen exhibiting this issue, it won't be long.

Now press Ctrl-Alt-F2 to switch to a virtual terminal.

Login using the same username/password you log into KDE with.

Run the following commands

export DISPLAY=':0.0'
kwin

You'll see some output. Now press Ctrl-Alt-F7 to switch back to KDE. You should have everything back. Do whatever you were doing until it crashes again.

Ctrl-Alt-F2 to switch back to the VT. Now look at the output, in most cases the last error will be

intel_do_flush_locked failed: Invalid Argument

 

What causes it

Basically, the call 'intel_do_flush_locked' isn't supported in earlier kernels (such as the one that ships with K/Ubuntu), so (in simple terms) the driver is trying to make a call that just doesn't exist.

 

Temporary Fix

To get around it, we need to disable OpenGL compositing, as most of these calls are coming through Mesa. So, on your KDE desktop:

  • K Menu
  • System
  • System Settings
  • Desktop Effects
  • Untick Enable Desktop Effects on Login
  • Choose Advanced tab and change Compositing type to XRender
  • Save/Apply
  • Log out and log back in (or restart your machine if you want)

The issue should have gone away. Hopefully a kernel update will be released soon that includes the missing call; until then, unfortunately, you'll have to cope with minimal or no desktop effects.

 

Creating an IPv6 Tunnel on Linux

RIPE, the European internet registry, has started heavily rationing IPv4 addresses, meaning that the day of IPv6-only connections is fast approaching. BT don't yet support IPv6 on their connections, but I need to be able to use IPv6 to help ensure that servers are correctly set up to handle IPv6-only traffic.

So, I need to create an IPv6 over IPv4 tunnel.

This documentation details the steps to do this using Hurricane Electric's (free) tunnelbroker service

 

Firstly, we need to sign up for a free account at http://tunnelbroker.net. Once your username and password have been emailed to you, log in and start the process of creating a tunnel. If you're on a BT connection behind a BTHomeHub, you're going to hit issues almost straight away!

The BTHomeHub doesn't respond to WAN-side ICMP echo requests (pings) and there's no way to configure it to do so. You'll need to put one of your machines into the DMZ so that Tunnelbroker can check your IP. To do so

  • Browse to BTHomehub.home
  • Click Settings
  • Enter your Administrator password
  • Click Advanced Settings
  • Click Advanced Settings again (why do BT make us click twice?)
  • Click Port Forwarding
  • Click DMZ
  • Set the radio to Yes and select the machine you're currently on
  • Click Apply

Now Tunnelbroker should be able to ping your machine, so continue with the set up process. Once the tunnel's created, make a note of the settings you are given and then run the following set of commands from a console

Change the addresses to those that you were given

# replace 216.66.80.26 with the Server IPv4 address you were given
# Replace 109.151.91.144 with the Client IPv4 address you were given
ip tunnel add he-ipv6 mode sit remote 216.66.80.26 local 109.151.91.144 ttl 255
ip link set he-ipv6 up

# Replace 2001:470:1f08:4cc::2/64 with the Client IPv6 address you were given
ip addr add 2001:470:1f08:4cc::2/64 dev he-ipv6
ip route add ::/0 dev he-ipv6
ip -f inet6 addr

In theory you should now be able to ping using an IPv6 connection

ping6 ipv6.google.com

Did it work? No it didn't for me either at first! (Network is unreachable).

I found two things: first, that my machine has to be in the DMZ as the HomeHub won't route protocol 41 (I'd taken it back out of the DMZ once the tunnel was created on Tunnelbroker), but also that the routes created above didn't seem to work.

So what I did was to remove all the routes

# I ran each of these twice

ip -6 route del 2001:470:1f08:4cc::/64
ip -6 route del default

Now I was able to use ifconfig to set things up (the commands above helpfully created the sit devices for us!). I did originally try re-adding the routes using ip but received errors

ifconfig sit0 up
ifconfig sit0 inet6 tunnel ::216.66.80.26
ifconfig sit1 up
ifconfig sit1 inet6 add 2001:470:1f08:4cc::2/64
route -A inet6 add ::/0 dev sit1

Following this I was able to run

ping6 ipv6.google.com

and get a response!

Browsing to http://www.whatsmyipv6.org/ also showed that I was connecting via IPv6.

Now I can go and add some AAAA records for websites, as I'm able to check that the sites/servers are actually working with IPv6 rather than having to click and pray.

 

Update: Once the tunnel is established you can move the machine out of the DMZ. It seems the HomeHub will route the packets if the state is ESTABLISHED. You'll need to move back into the DMZ if you want to re-connect (say after a reboot) but it's better than leaving a client in the unprotected area the whole time (though you are running a software firewall, aren't you!)

Creating a virtual Network Interface in CentOS 6

Sometimes you need to assign more than one IP to a server, even if it only has one NIC. To do so, you create a virtual (or aliased) interface, attached to the physical NIC.

This documentation details how to do this in CentOS5 / CentOS 6 (this also applies to CentOS7 if you're not using Network Manager).

We'll assume that your NIC is eth0; if not, simply substitute the name of your NIC

cd /etc/sysconfig/network-scripts

# Use the real NIC's details as a template
cp ifcfg-eth0 ifcfg-eth0:0

# Now we need to make a few small changes
nano ifcfg-eth0:0

# Change the file to contain the following (substitute your desired IP!)
DEVICE=eth0:0
BOOTPROTO=static
BROADCAST=192.168.1.255
IPADDR=192.168.1.99
NETMASK=255.255.255.0
NETWORK=192.168.1.0
ONBOOT=yes
TYPE=Ethernet

In essence, we're removing the HWADDR line, setting DEVICE to eth0:0 and changing the IP to suit our needs (if the physical NIC uses DHCP, we also need to change BOOTPROTO from dhcp to static). The next step is to restart the networking subsystem on the server.

/etc/init.d/network restart
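The alias should then show up alongside the physical NIC

ip addr show eth0
# or, with the older tooling
ifconfig eth0:0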

 

Read more…

OpenVPN on CentOS 6

Setting up OpenVPN is seldom complicated nowadays, but on CentOS it's far more straightforward than I've experienced on most other distros.

This documentation details how to install and configure OpenVPN on CentOS 6

Install and enable the EPEL repos and then run the following

yum install openvpn
cp /usr/share/doc/openvpn-*/sample/sample-config-files/server.conf /etc/openvpn/
nano /etc/openvpn/server.conf
# Change the 'local' line to the IP of your machine
# Exit and save (change anything else you don't want left at the default)

# Handle keys
mkdir -p /etc/openvpn/easy-rsa/keys
cd /etc/openvpn/easy-rsa
cp -rf /usr/share/openvpn/easy-rsa/2.0/* .
nano vars
# Change the relevant vars (country, state etc)
# Exit and save

# Load the vars we just set
source ./vars

# Now we're going to create a certificate authority
./clean-all
./build-ca

# Build the server certificate
# You can replace 'server' with a different name if you want to use one
./build-key-server server

# Build the Diffie-Hellman files (used for key exchange)
./build-dh

# Copy the keys to the openvpn dir (substitute the name you used above for 'server')
cd keys
cp ca.crt server.crt server.key dh1024.pem /etc/openvpn/
cd ..

# Build a key for a client
# Note: If you want to password protect the key, use build-key-pass instead
./build-key BensPC

# Set up NAT
iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o venet0 -j MASQUERADE

# Start the service
/etc/init.d/openvpn start

# If that worked without error, make boot automatically
chkconfig openvpn on

We're now ready to install on the clients, whether that be an Android phone, Windows or *NIX workstations, etc. To do so, simply copy the client certificate and key (in my example /etc/openvpn/easy-rsa/keys/BensPC.crt and BensPC.key), along with ca.crt, to the client and use them with OpenVPN.
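A minimal client config (assuming the server defaults above; vpn.example.com is a placeholder for your server's public address) looks something like

client
dev tun
proto udp
remote vpn.example.com 1194
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt
cert BensPC.crt
key BensPC.key
verb 3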

Don't forget to logon to your firewall/NAT Router and forward port 1194 to your OpenVPN server (if you left the defaults, the protocol will be UDP).

Virtualisation with Xen on CentOS 6.3

It's been a while since I've had to set up a virtualisation server, but today I needed to configure a brand new install of CentOS 6 to act as a virtual host. The hardware doesn't have virtualisation support (it's an old G3), so I had to use Xen so that paravirtualisation was available (which isn't currently supported by KVM). Oops: not so easy now that Xen isn't included by default, Red Hat having opted for KVM instead.

Despite that, getting things set up isn't that hard, although not nearly as easy as it was when you could just

yum install xen

 This documentation details the steps you'll need to follow.

So let's start with a few basic pre-requisites;

yum install sanlock presto libblkid

 

Read more…

ProFTPD not working with FileZilla (Plesk)

So you've got a nicely configured server, slightly tarnished by the presence of Plesk but everything seems to be running well. Suddenly, you've got users complaining that they can't access the server via FTP.

You're running ProFTPD (as Plesk kindly installed it for you) and can log in from the CLI FTP client (on Windows or Linux), but can't get in using FileZilla, FireFTP or Internet Explorer. FileZilla is probably giving the error "Cannot Retrieve Directory Listing" but will have authenticated correctly just before that. For some, FileZilla will hang just after the MLSD or LIST commands.

This documentation details how to resolve a common issue

Read more…

Resolving KDE issues with Dual Monitors

Quite a number of users are reporting issues with KDE 4 forgetting monitor settings when they log out. The end result is that whenever you log back in, you need to sit and reconfigure KDE to make proper use of a dual-headed system rather than cloning the primary screen onto your second.

Although the issue is apparently fixed in KDE 4.6, many aren't running this version yet.

This documentation shows a workaround that can be used in order to get KDE to remember the settings.

 

Read more…

Howto stop Ubuntu nagging you about Distribution Upgrades

It's a fairly common scenario: you've just got your system set up the way you like it and then there's a new release. Your update manager starts nagging that you should upgrade and, no matter how many times you dismiss it, it just won't get the message!

 

You can disable this using the method below, but don't stick on old versions for too long!

 

nano /etc/update-manager/release-upgrades
#change Prompt=normal to Prompt=never
Ctrl-X, Y

Job done! Just keep in mind that you'll actually have to remember to upgrade. Standard updates and bugfixes will still come down, but upgrades to the next version of the distribution won't.

Registering Existing MySQL Databases in Plesk

Many businesses use Plesk to manage their webserver, what happens though if you import databases from the command line rather than through Plesk? The databases will be valid, and will work but won't be visible in Plesk (meaning no PHPMyAdmin access).

There are a lot of solutions to this listed on the net, but all either seem to carry the potential for extended downtime or are quite labour-intensive. This document details an alternative method, which should hopefully keep downtime to less than a minute, but only applies to Plesk running with a MySQL server (usually on Linux or similar).

Read more…

Automatically clearing old emails using CPanel

Depending on the setup of your system, old emails can be a bane. If you forward a copy of all emails to a single account for retrieval by the MS Exchange POP3 connector, you'll experience issues when the other mailboxes become full (a lot of servers won't accept a wildcard redirect for email addresses!).

It's actually fairly simple to solve so let's take a look at a sample setup;

 

Mailboxes on Example.com

  • Ben
  • Postmaster
  • Brian
  • Bill

 

Mailboxes on Exchange

  • Ben
  • Brian
  • Bill

 

I've got forwarding rules set up so that all mail is forwarded from Ben, Brian and Bill to the Postmaster mailbox. The POP3 connector then downloads from the Postmaster account (and deletes the retrieved mail) so that it can be distributed to the correct mailboxes on the Exchange server.

The problem comes when mailboxes Ben, Bill and Brian on Example.com fill up and reach their respective quotas. At that point, mail will start bouncing back to the sender with "Mailbox full" messages. Not good!

Thankfully, it's actually reasonably simple to resolve.

  1. Log into Cpanel account for Example.com
  2. Choose Cron Jobs
  3. Create a new cron job (I selected to run weekly)
  4. As the command enter

find /home/example/mail/*/*/cur -name "*" -mtime +days -exec rm {} \;

But replace example with your cPanel username, and days with the oldest email you want to keep (I want 21 days, so simply enter 21).

Save the Cron job and leave it to it, as you're now done!

 

Note: You will receive error messages when a mailbox is already empty; the system will report that it's unable to delete an item because it is a directory. I find this a useful reminder that the job is executing, but if you find it annoying simply add "-type f" to the command so that it becomes:


find /home/example/mail/*/*/cur -type f -name "*" -mtime +days -exec rm {} \;

Couldn't really be much simpler once you know how to do it!


Archiving a large backup across multiple discs on Linux

 

Hopefully, we all back up our data, but what should we do once our data won't fit on our chosen media?

 

We have two options (as we obviously don't want to delete our data!)

  • Use a different backup medium
  • Split the backup across multiple volumes

Sometimes the former just isn't appropriate, not least because of the cost of hard drives vs optical media (i.e. CDs/DVDs).

This short tutorial will explain how to create a single backup archive and then split it across multiple CDs/DVDs.


In this tutorial, I want to back up to DVD (as my photo collection would require more than 40 CDs).

So from BASH;

 

tar -cf Pictures_backup.tar Pictures/

split -d -b 4480m Pictures_backup.tar


This will then give you multiple files to burn to disc (each beginning with x). Burn these files with your favourite CD burner.

To restore the files, copy them all to one directory and then run the following

 

cat x* > Pictures_backup.tar

tar xvf Pictures_backup.tar

 

Read more…

Howto Install the Realtek 8171 Wireless Driver in Linux

I was asked to install Linux on a brand new laptop today (yet another owner who doesn't like Win 7!); it was an Advent M100, which contains the Realtek Semiconductor 8171 wireless adaptor.

The card is detected by the kernel (on Kubuntu) but no drivers are available, so neither ifconfig nor NetworkManager finds the device.


It's been a while since I've had to deal with Linux not having drivers (nearly everything is supported now!) but there's no pain involved in resolving this one;

  • Go to www.realtek.com and locate drivers for the RTL8191SE-VA2.
  • Download the drivers onto your system (I plugged a CAT-5 into the laptop, but you could download on another system)
  • Put the tarball somewhere convenient (I tend to use ~/Programs/ )
  • Extract and install it:

    tar xvpzf rtl8192se-*.tar.gz

    cd rtl8192*

    # Next step needs to be as root, so either

    sudo -s

    # or

    su

    make && make install

  • The next time you reboot the system your wireless card will be detected.


Getting Started with Linux

I originally began the 'Getting Started with Linux' series in November 2006. Although a lot of the information is now outdated, thanks in part to work on user interfaces, I hope the series will still be able to answer some of the questions that those new to Linux often have.


Originally six sections were planned but, unfortunately, I never found the time to write the final two:

  1. Introduction Part One - Why use Linux?
  2. Introduction Part Two - Hardware
  3. Getting Started With Linux Part One
  4. Installing Software
  5. Configuring Additional Keys on your Keyboard
  6. Window Managers and Desktop Environments

I also made a 'revision' script available here.