Building a Tor Hidden Service From Scratch - Part 2 - HTTP and HTTPS

Despite some fairly negative media attention, not every Tor Hidden Service is (or needs to be) a hotbed of immorality. Some exist in order to allow those in restrictive countries to access things we might take for granted (like Christian materials).

Whilst I can't condone immoral activities, Tor is a tool, and any tool can be used or misused.

This is Part Two in a detailed walkthrough of the considerations and design steps that may need to be taken when setting up a new Tor Hidden Service.

The steps provided are intended to take security/privacy seriously, but won't defend against a wealthy state-backed attacker.

In Part One we looked at the system design decisions that should be made, and configured a vanilla install ready for hosting hidden services.


This section will cover the steps required to safely configure a Tor Hidden Service on ports 80 and 443.

  • Basic setup
    • Installing and Configuring NGinx
    • Enabling a static HTML Hidden Service
    • CGI Scripts
    • Installing and configuring PHP-FPM
    • Safely generating a self-signed certificate
    • Configuring NGinx to serve via HTTPS
  • Hidden Service Design Safety


Basic Setup

NGinx Installation & Configuration

On CentOS, getting a basic install of NGinx is as simple as adding the EPEL repository and then running an install.

  rpm -Uvh $U  # where $U is the URL of the epel-release RPM for your CentOS version
  yum install nginx
  service nginx stop
  chkconfig nginx on

The default configuration contains some small HTTP leaks, however, so changes need to be made to ensure that the server's true identity is not leaked.

To begin, we want to ensure that NGinx will never be accessible via Tor and the clearnet at the same time.

In part 1, we added some basic firewall rules, so we'll start by explicitly blocking external access to the relevant ports

  iptables -A INPUT -p tcp --dport 80 -j DROP
  iptables -A INPUT -p tcp --dport 443 -j DROP
  service iptables save

Next, we want to ensure that if the firewall rules are ever accidentally flushed, we won't automatically be exposed to detection.

To achieve this, we need to edit the default server block and explicitly bind it to the loopback adapter.

  nano /etc/nginx/conf.d/default.conf

Note: On Debian based systems, this will be /etc/nginx/sites-available/default

Within that file, you will see an NGinx Server block, the section we're interested in is the listen directive:

  server {
      listen       80 default_server;
      server_name  _;

Change this so that we explicitly bind to localhost

  server {
      listen       localhost:80 default_server;
      server_name  _;

Were we to start NGinx now, only connections made via the loopback adapter would succeed.

NGinx also includes some basic information about itself in response headers. To reduce the amount of information available to an adversary, we want to ensure that information isn't disclosed.

  nano /etc/nginx/nginx.conf

Within the http section, we want to set the following

  server_name_in_redirect off;
  server_tokens   off;
  port_in_redirect off;

The first specifies that the configured server name won't be used in any redirects which may be generated by NGinx.

The second removes version information from both the Server header and error pages.

The third will have no obvious effect if you've configured NGinx to listen on port 80. However, if you've configured it to listen on a different port to the one clients connect via (i.e. NGinx on port 8080, Tor configured to listen for connections on 80), this option ensures that NGinx will never include its configured port number when generating a redirect.
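As a sketch of that arrangement (8080 here is purely an example port), the torrc mapping would tell Tor to forward connections for the onion's port 80 to the local port NGinx listens on:

  # torrc: clients connect to the .onion on port 80,
  # Tor forwards to NGinx listening on 127.0.0.1:8080
  HiddenServicePort 80 127.0.0.1:8080

with the corresponding server block using "listen localhost:8080;". With port_in_redirect off, any redirect NGinx generates won't mention port 8080.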


Sanity Checking Our Changes

In theory, we should now be ready to configure Tor to expose a hidden service and then look at setting up our site. However, we first need to check what data NGinx might expose about itself

We need to run a number of tests, which are most easily achieved with the GET command

  yum install perl-libwww-perl

To begin with, we need to start Nginx

  service nginx start

Our first check examines normal behaviour

  GET -Ssed http://127.0.0.1/

Which should give something similar to the following

  GET --> 200 OK
  Connection: close
  Date: Sat, 28 Mar 2015 14:20:53 GMT
  Accept-Ranges: bytes
  Server: nginx
  Content-Length: 3698
  Content-Type: text/html
  Last-Modified: Tue, 11 Nov 2014 16:27:04 GMT
  Client-Date: Sat, 28 Mar 2015 14:20:53 GMT
  Client-Response-Num: 1
  Title: Test Page for the Nginx HTTP Server on EPEL

The main thing we're looking for is any header which could potentially be used to fingerprint the server (even if non-uniquely).

Due to the way Tor Hidden services work, you can safely ignore the Client-Peer header.

In the example above, there's nothing which may identify us; NGinx isn't even using the server's configured timezone in its responses.

The next thing to do is to perform the same check for 404s

  GET -Sse http://127.0.0.1/some-nonexistent-page

Check the content for signs of specific version strings, or anything else you feel could be used to fingerprint the server.

Although it's tempting to override the default error pages, doing so (at the default level) could potentially be used to help fingerprint the server if NGinx were ever accidentally exposed to the clearnet.


Building a Static Site

Unless you've used Shallot to generate a custom .onion, you'll need to configure Tor to serve the new hidden service before you can start setting it up.

Open /etc/tor/torrc and add the following to the bottom of the file

  HiddenServiceDir /var/lib/tor/myonion/
  HiddenServicePort 80 127.0.0.1:80

Save and exit, stop Nginx, and then reload tor

  service nginx stop
  service tor reload

(Use reload and not restart, the latter will break your SSH session if you've connected via your .onion).

We stopped Nginx because we don't want any accesses to the new Hidden Service descriptor to result in NGinx's default page being served.

Grab the hostname we've generated (and make a note of it)

  cat /var/lib/tor/myonion/hostname

Next, we need to create a docroot and tell NGinx where to serve from

  mkdir -p /usr/share/nginx/onions/myonion
  echo "Hello World" > /usr/share/nginx/onions/myonion/index.html

We'll create a new file to hold the configuration for our onion site

  cd /etc/nginx/conf.d
  cp default.conf onions.conf

If we now open onions.conf for editing, we want to change the server block to reflect the following

  server {
      listen       localhost:80;
      server_name  foo.onion; # Use whatever tor gave you

      root /usr/share/nginx/onions/myonion;

      location / {
          root   /usr/share/nginx/onions/myonion;
          index  index.html index.htm;
      }
  }

You can define an onion specific access log if you'd like, though the only interesting information it's likely to contain is what was requested and when. All readers will have a source IP of 127.0.0.1 (connections arrive via the local Tor daemon), and all Tor Browser Bundle users will have the same User-Agent.
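If you do decide to keep one, the log can be declared inside the onion's server block (the path here is just an example):

  # Per-onion access log; keep it somewhere only root can read
  access_log  /var/log/nginx/myonion.access.log;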

If we start NGinx now, our (extremely basic) static HTML site should be accessible over Tor.


CGI Scripts

When using any kind of server side scripting, the risk of information leakage increases greatly.

Whatever solution is being deployed (whether it's a CMS, a bug tracker or something else) the solution itself needs to be carefully checked for anything which may lead to leakage. These may include

  • Server Information pages
  • Experience 'improvement' functionality (essentially, call-home scripts)
  • Certain CAPTCHA implementations

The potential risks will vary with each application you deploy, so looking at the specific risks falls outside the scope of this training.

One thing that all have in common, though, is that some sort of application handler will need to be installed and configured in order to run the scripts. The potential risks will vary both with the languages (e.g. Perl, Python) and the handlers themselves.



Installing and Configuring PHP-FPM

To provide an example of the kind of thing you need to be checking your CGI handler for, we'll be installing and configuring PHP-FPM.

PHP is known to have a number of leaks, and there are a wide number of PHP applications which can be used, so it's probably a good place to start learning.

  yum install php-fpm

PHP by default will leak its major version number, and will attempt to 'correct' incorrect URLs using a 'closest match' approach. The former can be used for fingerprinting, whilst the latter is incredibly useful to an attacker attempting to exploit directory traversal vulnerabilities, amongst other things.

We disable them both by changing their value in php.ini

  sed -i 's/expose_php = On/expose_php = Off/' /etc/php.ini
  echo "cgi.fix_pathinfo=0" >> /etc/php.ini

Disabling expose_php also brings another (undocumented) benefit. Certain PHP versions contain a number of 'easter eggs' which can also be used to identify the version of PHP being run on the server.

We also want to be sure that error messages will not be displayed to visitors (this should be the default, but again, it's best to be sure).

  sed -i 's/display_errors = On/display_errors = Off/' /etc/php.ini

Failing to prevent the display of errors could lead to serious leakage - most errors will contain filepaths, however the exact information contained in an error/notice will depend on exactly what caused the issue.

In PHP > 5.4 Server timezone information is regularly included in PHP Notice errors
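If you still want error detail available to yourself, errors can be written to a local log rather than displayed - a php.ini sketch, where the log path is an assumption you should adjust to your layout:

  display_errors = Off
  log_errors = On
  error_log = /var/log/php-fpm/error.log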

Our next task is to tell NGinx to pass requests for PHP scripts through to PHP-FPM, so open /etc/nginx/conf.d/onions.conf for editing, and add the following to your server block

  location ~ \.php$ {
      root           /usr/share/nginx/onions/myonion;
      fastcgi_pass   127.0.0.1:9000;  # PHP-FPM's default listen address on CentOS
      fastcgi_index  index.php;
      fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
      include        fastcgi_params;
  }

Save and exit.

For the changes to take effect we need to reload NGinx (or start if you haven't already)

  service nginx reload

We can create a simple PHP file to test against

  echo '<?php echo "hello world";?>' > /usr/share/nginx/onions/myonion/test.php

And then use GET to see whether the request is correctly passed through (set the Host header to be the .onion you're setting up)

  GET -H "Host: foo.onion" -Sse http://127.0.0.1/test.php

You should see the headers (with a HTTP 200) followed by a blank line and the string "hello world".



Configuring a Tor Hidden Service to use HTTPS is fairly straightforward, although there are some pitfalls to be avoided - especially if you intend to run multiple hidden services.

In this section we'll be looking at generating a HTTPS certificate and then configuring NGinx to use it.


Safely Generating an SSL Certificate

There is a wealth of documentation available online on how to generate a self-signed (snakeoil) certificate, however most of this documentation includes a step which may lead to identification if certificates are generated for multiple hidden services.

Many of the tutorials involve creating your own Certificate Authority and using that to sign your new certificate. As details of the signing CA are embedded within the certificate, an adversary will be able to see that there is a common CA for all certificates that you generate.

This simple information leak allows an Adversary to identify that there is likely a common administrator for each of the affected Hidden Services.

In order to avoid this, we want to generate a certificate without creating a CA. Assuming you do not want to encrypt your private key, run the following

  openssl genrsa -out server.key 2048

If you do want to password protect the key (you'll need to enter the password whenever restarting NGinx)

  openssl genrsa -des3 -out server.key 2048

Next, we need to create a Certificate signing request

  openssl req -new -key server.key -out server.csr

Try to leave the questions at their defaults (for example, country code of XX), but for the common name you'll need to enter your .onion address.
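If you'd prefer to skip the interactive prompts, the key and CSR can also be generated non-interactively using -subj - a sketch, with foo.onion standing in for the hostname Tor generated:

```shell
# Generate an (unencrypted) private key, as above
openssl genrsa -out server.key 2048

# Build the CSR without prompting: only a generic country code
# and the .onion common name are supplied
openssl req -new -key server.key -out server.csr -subj "/C=XX/CN=foo.onion"
```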

Finally, we create the certificate

  openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt

We now want to sanity check the generated certificate to ensure it doesn't contain any unnecessary information.

  openssl x509 -in server.crt -text

Details for both the Issuer and the Subject should be non-descript.


Configuring NGinx to use HTTPS

Now that we've created a certificate, we need to tell NGinx to actually use it.

First we need to move our Certificate and private key into sane locations

  mv server.key /etc/pki/tls/private/myonion.key
  mv server.crt /etc/pki/tls/certs/myonion.crt

Next, we need to create an NGinx server block, configured to use HTTPS

The configuration will be more or less the same as for our HTTP onion - except that we need to enable SSL and set a few options related to that.

  nano /etc/nginx/conf.d/onions.conf

  server {
      listen       localhost:443; # We want to listen on port 443
      server_name  foo.onion;
      root /usr/share/nginx/onions/myonion;

      # Enable SSL and identify the certificate/key
      ssl                  on;
      ssl_certificate      /etc/pki/tls/certs/myonion.crt;
      ssl_certificate_key  /etc/pki/tls/private/myonion.key;

      ssl_session_timeout  5m;

      # Limit the types of encryption that can be used
      # (to reduce the possibility of downgrade attacks)
      ssl_prefer_server_ciphers On;
      ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

      # You may also wish to disable specific ciphers here

      # The rest of the config is identical to our HTTP config
      location ~ \.php$ {
          root           /usr/share/nginx/onions/myonion;
          fastcgi_pass   127.0.0.1:9000;  # PHP-FPM's default listen address on CentOS
          fastcgi_index  index.php;
          fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
          include        fastcgi_params;
      }

      location / {
          index  index.html index.htm;
      }

      # Here, we've left the error pages as the default, though it's
      # relatively safe to tweak them on a per-hidden-service basis
      # if you wish

      error_page  404              /404.html;
      location = /404.html {
          root   /usr/share/nginx/html;
      }

      # redirect server error pages to the static page /50x.html
      error_page   500 502 503 504  /50x.html;
      location = /50x.html {
          root   /usr/share/nginx/html;
      }
  }
If you save the configuration and reload NGinx, it should now be available on port 443

  service nginx reload

Lets test our connection

  echo | openssl s_client -connect 127.0.0.1:443

We should see our certificate details. This, however, is an issue in itself - we only want an onion's certificate to be supplied if the request uses that onion's hostname within the SNI request. Currently, any attempt to connect via SSL will return this certificate, regardless of hostname:

  echo | openssl s_client -servername in.correct -connect 127.0.0.1:443

To prevent this, we need to generate a generic certificate and configure NGinx to use that

  openssl genrsa -out server.key 2048
  openssl req -new -key server.key -out server.csr

Answer the questions, again using the defaults. When asked for a Common Name enter 'Default.Site'

Sign the CSR to generate a certificate

  openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt

  mv server.key /etc/pki/tls/private/default.key
  mv server.crt /etc/pki/tls/certs/default.crt

We simply need to configure the default server block so that it also supports SSL.

Add the following lines to the server block in /etc/nginx/conf.d/default.conf

  listen localhost:443 default_server ssl;
  ssl_certificate /etc/pki/tls/certs/default.crt;
  ssl_certificate_key /etc/pki/tls/private/default.key;

Reload NGinx's configuration and then try connecting with openssl again

  service nginx reload
  echo | openssl s_client -connect 127.0.0.1:443

You should now see the new certificate in use. When using SNI, though, you should get the certificate for your onion

  echo | openssl s_client -servername foo.onion -connect 127.0.0.1:443

Now, should anyone ever manage to establish a HTTPS session to your server, it won't automatically disclose the fact that you're hosting at least one .onion site.

One thing that may still happen, though, depending on the browser in use, is that the certificate may be used for connections to your .onion address.

It essentially boils down to whether or not the browser supports SNI. It's easy to replicate the behaviour with wget:

  wget --no-check-certificate --header="Host: foo.onion" https://127.0.0.1/

Because wget doesn't automatically parse the Host header (when passed in that manner) it won't use SNI, and so the server responds with the default certificate even though we asked for foo.onion.

It's therefore very important to ensure that HTTPS, in particular, is never available on both the clearnet and a Tor Hidden Service at the same time (and the certificate should be regenerated with a new private key if changing between the two) as the certificate can be used as definitive proof that the server is the same as the one hosting the Hidden Service.

For the same reason, you should never have two HTTPS hidden services on the same system - sending a non-SNI request to each is more than sufficient to prove that both hidden services have the same admin (and are likely on the same physical server).

If there is a need to have multiple HTTPS hidden services on the same physical machine, the safest means is to use containerisation (such as OpenVZ), allowing you to run completely distinct NGinx processes with different keys.


Configuring Tor to serve the HTTPS version

Configuring Tor is as simple as adding an additional port directive in /etc/tor/torrc. Open that file for editing, and find the relevant hidden service definition.

Just after the declaration for Port 80, add

	HiddenServicePort 443 127.0.0.1:443

And then reload Tor

	service tor reload

Depending on your needs, you can then optionally configure the port 80 version of the .onion to automatically redirect users to the https version - just as you might on the clearnet.
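A minimal sketch of such a redirect, replacing the static-site configuration in the HTTP onion's server block (foo.onion again stands in for your real hostname):

  server {
      listen       localhost:80;
      server_name  foo.onion;

      # Send everything to the HTTPS version of the onion
      return       301 https://foo.onion$request_uri;
  }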


Hidden Service Design Safety

As we've seen, setting up a hidden service is straightforward, but avoiding a number of potential pitfalls requires a bit of forethought and planning.

It's important to adopt the mindset that both mistakes and compromises do happen from time to time, so that you automatically put failsafes in place as you go.

For example, you may have noted that the firewall rules we added at the beginning of this module have very little effect - the Nginx server blocks are bound to the loopback interface, so even without those rules, NGinx shouldn't respond to connections from the clearnet.

The risk, of course, is that it's very easy to forget to include 'localhost' when writing a listen directive. The firewall rules are there to ensure that if that mistake is ever made, some protection remains in place.

Conversely, binding to the loopback interface ensures that if the firewall rules are ever accidentally flushed, NGinx still shouldn't be exposed to the wider world.

Particular care needs to be taken when using any kind of server side scripting.

If an adversary finds a way to execute arbitrary code on your system, they can very quickly identify the physical server that you're located on. Just as on the clearnet, there are mitigations you can put in place to reduce the attack surface, but the best defence is to ensure that server side scripting is used only where absolutely necessary.

As discussed in the previous module, in certain deployments, using HTTPS may provide your users with some additional security. However, it needs to be very carefully implemented to minimise the risk of information leakage.

When generating a default certificate, the values given need to be as generic as possible, to help ensure that a correlation cannot be found between multiple servers (as you may, one day, be running multiple services).


Part 3 - General User Anonymity and Security