Tommaso Gagliardoni's Homepage

Mathematics, Security, Philosophy and random ravings

Email: tommaso[AT)gagliardoni(DOT]net

All the contents of this website are released under a Creative Commons
Attribution-NonCommercial-ShareAlike 3.0 Unported License.

(Jul 31st, 2016) Trip to the US!

Hey, I'm heading off to the States! I will first go to Tacoma, WA, to attend ICITS, where I'll give a talk about our recent results on the Computational Security of Quantum Encryption (joint work with Gorjan Alagic, Anne Broadbent, Bill Fefferman, Christian Schaffner, and Michael St Jules). Then I'll move to Santa Barbara, CA, where I'm giving a talk at CRYPTO about our work on Semantic Security and Indistinguishability in the Quantum World (joint work with Andreas Huelsing and Christian Schaffner). And then a few days of vacation for some Californian desert adventure time!

(Mar 6th, 2016) Copenhagen

I'm in Copenhagen, visiting my colleague and friend Gorjan Alagic at Matthias Christandl's group of Quantum Information Theory QMATH at the University of Copenhagen. I should also visit Lars Knudsen's group at the Technical University of Denmark one of these days. Looking forward to some productive time!

(Dec 19th, 2015) Incoming Event: First CROSSING Winter School on Quantum Security

I want to point you to this winter school, which will be held in Darmstadt, Germany, from January 25th to 29th, 2016.

Marc Fischlin (my advisor) and I initially devised this school as a nice academic event to be hosted by CROSSING here at the Technical University. We made sure that the school would be free of charge to attend, while at the same time keeping the quality at a very high level.

Apparently, people liked the idea. Very renowned speakers accepted our invitation, and we were overwhelmed with requests for participation from all around the world. We had planned for around 30-35 participants, but received almost three times that number of applications. We worked very hard to find a suitable venue, and we have now managed to extend the list of participants to more than 70. This now includes not only students but also professors, professionals from private companies, military organizations, and so on. So, long story short: the whole thing turned out much bigger than we expected!

I am very excited: it's also a completely new experience for me to be part of the organization of such a big event, and there are a lot of things to do and to take care of. It's really hard work too, especially considering that we are trying to do in 6 months what usually takes 12. But I'm sure it's going to be a great success, thanks to the amazing speakers we will have. I'm looking forward to the end of January, but for now I wish you all happy winter holidays (I am finally enjoying some rest in my hometown in Italy!)

All the best!

(May 6th, 2015) Lugano, Switzerland: ICITS 2015 and visiting USI

I am in Lugano for ICITS 2015 and visiting the USI. I also gave a talk at Stefan Wolf's group about our work `Semantic Security and Indistinguishability in the Quantum World` (joint work with Andreas Hülsing and Christian Schaffner, full version available on ePrint).

(Apr 7th, 2014) Buenos Aires, Argentina: PKC'14

I was in Buenos Aires for the PKC'14 conference. Feel free to check out the slides of my talk `General Impossibility of Group Homomorphic Encryption in the Quantum World` (joint work with Frederik Armknecht, Stefan Katzenbeisser and Andreas Peter, full version available on ePrint).

(Dec 5th, 2013) Bangalore, India: Asiacrypt'13

I am currently in Bangalore for the Asiacrypt'13 conference. Feel free to check out the slides of my talk `The Fiat-Shamir Transformation in a Quantum World` (joint work with Özgür Dagdelen and Marc Fischlin, full version available on ePrint).

(Apr 29th, 2013) Eric Schmidt on Google Glass privacy: society will adapt

You may have heard about this Erik-guy: he's the big boss of some non-evil tech company everybody likes. The same guy who says his company is basically reading your mind, but it's OK because they do it with your permission, and in any case you could always change your name in the future to avoid association with your digital past. The same guy who gave us this pearl of wisdom:

"If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place"

except, maybe it's better if you don't try to out-creep him by publishing some of his (publicly available) personal data, or he may get pissed.

On a side note, you may have heard about Google Glasses: the new, revolutionary, ultrasmart, uberhipster thing for sexy geek people who love tech, freedom, democracy, and most importantly, sharing. The new, cool people everybody obviously wants to be like.

You may also have heard some nonsense Luddite criticism about these cool glasses having a small camera potentially turned on all the time, recording your life and the lives of the people around you 24/7. Criticism obviously devoid of any argument, made by old, uncool, paranoid haters who never leave their damp basements full of polluting Debian machines. Oh, and never get laid. But worry not, the Heric-guy has a proper response to such nonsense:

"criticisms are inevitably [sic] from people who are afraid of change or who have not figured out that there will be an adaptation of society"

Good. Now, the point is: you may also have heard about the Enric-guy's opinion about the use of commercial flying drones by civilians:

"It's got to be regulated. [...] It's one thing for governments, who have some legitimacy in what they're doing, but have other people doing it... It's not going to happen."

and then you may have naively wondered: "how come being against the idea of normal people flying their own cheap drones, thanks to the advancement of technology, is not just fear of change and failure to adapt to society?".

A lot of readers noticed the above fallacy and raged against it (see for example the comments in the Slashdot article). But I have the feeling that stopping at this observation is dangerously short-sighted. I'm not going to discuss the use of drones by civilians here; I admit I don't yet have strong opinions about it - while I definitely do about people wearing surveillance equipment 24/7: not in my house, anyway. My point is a different one:

suppose more and more people react against Schmitt's words, pointing at his opinion about private drones as proof of his hypocrisy. Then one day he pops up with an apology: "Yeah, sorry, you're right: maybe private drones are not so bad after all". Satisfied? Not much damage done to him, and the main argument of his detractors is dismissed. Uh... what was so bad about those glasses apart from that thing, anyway? Now you have to explain it all over again to me and all my lazy-minded friends on Facebook...

My suggestion: never lose sight of the big picture. Eerie Smith's argument is so laughable that I can understand people enjoying bashing him on that point. But to focus on this strategy is to accept candy from the enemy: it may be poisoned. Whatever you may think about private drones, the implications of a society full of dorks wearing those glasses are FAR more important. Call me paranoid, but I wouldn't be surprised if this were just a smart marketing move to defuse the much more serious criticisms of this dystopian vision of (even more) ubiquitous surveillance.

(Mar 17th, 2013) HowTo: OpenVPN with Elliptic Curve Cryptography (ECC) and SHA2 on Linux Debian

UPDATED VERSION: fixed the OpenSSL vulnerability found by Nadhem Alfardan and Kenny Paterson. Notice that the old version 1.0.1c and previous versions are vulnerable. Thanks to the anonymous reader who pointed that out (I had already attended a talk by Kenny about the issue in Darmstadt last year, but I was still waiting for a fix). If you are affected you should recompile OpenSSL (the new version) and then OpenVPN; everything should work as before. Please upgrade!

In this howto I will explain how to set up a typical OpenVPN infrastructure with an atypical cryptographic protocol involving Elliptic Curve Cryptography (ECC) and the more recent SHA2 cryptographic hash family on Linux Debian. The problem is that this setup is still not natively supported by OpenVPN, and some workarounds are necessary. Proof of this is the fact that you won't find much information on the internet about a similar setup, and what little there is tends to be either incomplete or not working. It took me a lot of effort to write this tutorial; I really hope it can help those of you in a similar situation.


We will first install the latest version of OpenSSL on a Debian system (everything also works fine for Ubuntu and similar derivatives). Then we will compile OpenVPN against the newly installed OpenSSL and add a manual patch, which makes it possible to use ECC authentication schemes with SHA2. We will then create a Certification Authority (CA), server and client certificates, and finally set up a VPN by enabling routing and NATting over a virtual interface on the server side.

Open problems

The protocol used for secure key exchange is still the traditional Diffie-Hellman (DH) and not the more secure ECDH (Elliptic Curve Diffie-Hellman). This cannot be fixed until OpenVPN natively supports ECDH (e.g., with an "ecdh" directive in server.conf instead of the "dh" directive).

Moreover, it is still not 100% clear whether SHA2 is used over both of OpenVPN's channels (control and data) or just over the control channel.


I will assume that the reader is familiar with the basic concepts of public-key cryptography, authentication and hash functions. The typical public-key encryption and signature schemes, such as RSA, Diffie-Hellman (DH) and DSA, do not offer the same advantages (both in terms of security and performance) as ECC does: ECC is faster (on average hardware) and needs smaller key sizes to obtain comparable security, hence saving bandwidth, etc.
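For a rough sense of scale, here are the commonly cited NIST security-strength equivalences (treat the figures as approximate guidance, not exact measurements):

```
Symmetric | RSA/DH modulus | ECC key size
  80 bit  |    1024 bit    |   160 bit
 112 bit  |    2048 bit    |   224 bit
 128 bit  |    3072 bit    |   256 bit
 256 bit  |   15360 bit    |   512 bit
```

Note how RSA/DH key sizes grow much faster than ECC key sizes as the target security level increases, which is exactly the bandwidth advantage mentioned above.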

Many ECC applications are unfortunately plagued by the draconian North American patent system (there is a lot of mess around some of these aspects, which I will not address here). I am based in Europe, so I am not going to take such considerations into account in this tutorial, but if you are a US/Canadian citizen maybe you should care; I'm not exactly sure which of the following steps, and to what extent, may be deemed inappropriate. Please check if in doubt.

The SHA2 family of cryptographic hash functions comes basically in three variants: SHA-256, SHA-384 and SHA-512, depending on the message digest size. They are all much more secure than the better-known MD5 and SHA1 (SHA-160), which are both to be considered insecure by today's standards. It is worth repeating: do not use SHA1 or MD5 for secure applications, ever.
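As a quick illustration of the different digest sizes, you can hash the same input with the three variants (using the sha*sum tools from GNU coreutils) and compare the output lengths: 64, 96 and 128 hex characters for SHA-256, SHA-384 and SHA-512 respectively.

```shell
# Print the SHA-256, SHA-384 and SHA-512 digests of the same short input.
# The hex output lengths (64, 96, 128 chars) reflect the 256/384/512-bit
# digest sizes.
printf 'hello' | sha256sum | cut -d' ' -f1
printf 'hello' | sha384sum | cut -d' ' -f1
printf 'hello' | sha512sum | cut -d' ' -f1
```

A longer digest makes brute-force collision searches exponentially harder, which is why SHA-512 is used for certificate signing later in this tutorial.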

The main problem with OpenVPN's most common setups is that they usually employ SHA1 as a digest. NIST has run a public competition for a new standard called SHA3 (similar to the competition for AES); the winner, Keccak, was selected in late 2012, but the standard has not been finalized yet.

I will present this tutorial for a Debian stable system, but it shouldn't be difficult to adapt it to other distributions. For simplicity, I assume that everything is done via the root account.

I will assume that there are three machines:
  • the secure working machine where the CA certificates are stored, which should ideally be disconnected from the network
  • a reasonably secure server machine, where the OpenVPN server daemon will run
  • a client machine, which wants to connect to the internet by routing all its traffic through the server machine
Also for simplicity, in this tutorial all the keys and certificates will be generated on the secure machine, and then moved to the client and server. Of course, the standard way would instead be to generate, e.g., the client key on the client and then submit it to the CA for certificate generation. I will simulate this process by using directories on the secure machine, and then copying or moving the certificates and keys as needed.

The issue

OpenVPN uses the encryption routines provided by OpenSSL. The ECC and SHA2 crypto primitives are relatively new and have only recently been added to OpenSSL's set of primitives. The problem is that a composite scheme using both has not yet been added to OpenVPN: with the latest stable release you can either choose an RSA setup with SHA2 as a message digest, or opt for an ECC setup with the standard SHA1, but there is no way to mix the two. After I asked around on both the OpenVPN forum and the OpenSSL mailing list, user janjust kindly provided a rough patch (see here) which solves the issue. The patch has been submitted but, AFAIK, not yet included in the official version, so we'll have to apply the fix manually; it is likely that it will be included starting from the next stable release.


First of all, open a terminal and start a root session with su. On Ubuntu and similar systems you don't have a root account by default, so either always prepend sudo to administrative commands, or run:

sudo su -

Another option would be to create a root account with:

sudo passwd root

now you can use su. Remember to close every root session you open with exit when you have finished.

Install the following packages on all of the machines involved:

apt-get install wipe build-essential

Keep in mind that we are going to build some big stuff, so it might take a lot of time on a slow machine.

Installing updated OpenSSL

The following must also be done on all the machines. First of all, compile and install the latest version of the LZO library; we will need it later:

cd /root
mkdir .ssl
cd .ssl
tar -zxf lzo-2.06.tar.gz
rm lzo-2.06.tar.gz
cd lzo-2.06
./configure
make
make test
make install
cd ..
rm -R lzo-2.06

Then we are going to install OpenSSL. Be aware that OpenSSL is a vital part of any Linux distribution, so it is not possible to simply uninstall the default version and compile the new one without running into a lot of problems. For this reason we are just going to install the latest version in a non-system directory, in this case /usr/local, but you will have to change it if for any reason it is not suitable for your situation. It would be much simpler if the latest release were available in the Debian repositories, but I'm not aware of any Debian repository carrying updated versions of OpenSSL.

The latest OpenSSL stable release at the moment is 1.0.1e.

tar -zxf openssl-1.0.1e.tar.gz
rm openssl-1.0.1e.tar.gz
cd openssl-1.0.1e
./config --prefix=/usr/local/openssl --openssldir=/usr/local/openssl
make
make test
make install
cd ..
rm -R openssl-1.0.1e

Now we are going to add an alias so that every time we run the openssl command, the freshly installed version will be called instead of the default system one. I will use the nano editor for this tutorial.

nano ../.bashrc

Add the following line at the bottom:

alias openssl='/usr/local/openssl/bin/openssl'

then activate the alias:

source ../.bashrc

Check that everything is OK:

openssl version

Should be something like:

OpenSSL 1.0.1e x Yyy zzz

Now we have to use a slightly modified version of the openssl.cnf configuration file. Put this file in the current directory (/root/.ssl), so we don't have to overwrite the default configuration file (though that would probably be a good idea anyway, since the one we will use is a more appropriate version tuned for better security). In case you wonder, the main modification is the addition of a [server] section to give the right permissions to the certificates generated for the server.
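For reference, a [server] extensions section of this kind typically looks something like the following sketch (the exact extension values here are assumptions based on common openssl.cnf conventions; check them against the actual file linked above):

```
[ server ]
basicConstraints = CA:FALSE
nsCertType       = server
keyUsage         = digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
```

The key point is extendedKeyUsage = serverAuth, which marks certificates signed with this section as valid for server authentication only.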

Installing and patching OpenVPN

This is also going to be done on all the machines.

The latest OpenVPN stable release at the moment is 2.2.1.

NOTICE: I'm still using an old version; maybe the patch has already been included in one of the latest versions, it would be nice to check.

tar -zxf openvpn-2.2.1.tar.gz
rm openvpn-2.2.1.tar.gz
cd openvpn-2.2.1

Now apply the following patch to the file ssl.c (you can do it manually: a plus sign at the beginning of a line means that the line must be added, a minus sign means that the line must be removed, and @@ is a marker for the line position in the file).
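For illustration only (these are placeholder lines, not the actual patch), a unified-diff hunk has this shape:

```
@@ -100,7 +100,7 @@
 unchanged context line
-old line to be removed
+new line to be added
 unchanged context line
```

Lines starting with a space are unchanged context that helps you locate the right spot in ssl.c; the @@ header gives the line numbers in the old and new versions of the file.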

We finally build OpenVPN by linking it to the newly installed OpenSSL version:

./configure --with-ssl-headers=/usr/local/openssl/include --with-ssl-lib=/usr/local/openssl/lib --enable-iproute2
make
make install
cd ..
rm -R openvpn-2.2.1

Check that everything is OK:

openvpn --version

must be something like:

OpenVPN 2.2.1 x86_64-unknown-linux-gnu [SSL] [LZO2] [EPOLL] [eurephia] built on ...

Create the configuration directory for OpenVPN in case it has not been created automatically:

mkdir /etc/openvpn

Optionally, reboot.

Creating a Certification Authority

Recall that in this tutorial we create the keys and certificates in three separate directories (one each for CA, server and client) and simply move the certificates around for simplicity; the "safe" way to do this is to use three separate machines and to keep the CA machine unplugged from the network.

cd /root/.ssl
mkdir CA
cd CA
mkdir newcerts
echo "01" > serial
touch index.txt
mkdir private
chmod 700 private
cd private

Now we are going to generate a public/private EC key pair. Pay special attention to the -conv_form compressed option: it produces a smaller curve-point representation which is also faster for point multiplication, but this representation is covered by a Canadian patent. I am using the Koblitz curve sect571k1 (571-bit binary field), but you can choose other curves; just run:

openssl ecparam -list_curves

to see a list of available curves.

openssl ecparam -genkey -name sect571k1 -text -conv_form compressed -out cakey_tmp.pem
openssl ec -in cakey_tmp.pem -out cakey.pem -aes256

Choose a passphrase to unlock the CA key. You will need this passphrase every time you want to generate a server or client certificate.

wipe -f cakey_tmp.pem
chmod 600 cakey.pem
cd ..

Now we are going to create the main CA certificate. This will be needed by every client and server to authenticate the CA. Edit the fields in the command below according to your needs.

openssl req -new -x509 -key private/cakey.pem -config ../openssl.cnf -extensions v3_ca -subj '/C=CaState(TwoLetters)/ST=CaState/L=CaCity/' -days 36500 -sha512 -out cacert.pem

Insert the CA passphrase to generate the certificate. You can see the certificate's attributes with:

openssl x509 -text -in cacert.pem
openssl x509 -fingerprint -noout -in cacert.pem -sha256
cd ..

Take note of the hash value and make sure it is publicly accessible to anybody who wants to check the authenticity of the cacert.pem in their possession.

Creating a server certificate

First of all we are going to create a Diffie-Hellman key-exchange parameter file. This might change in the future, when OpenVPN supports the ECDH key-exchange protocol. We are going to generate a quite large DH modulus, since this tutorial aims at the best possible security, but keep in mind that this will take a long time (several minutes, or possibly hours, on a 2.3 GHz quad-core machine). Also keep in mind that this parameter does not need to be secret, so you can share it and reuse it for as many other server installations as you want, as long as you don't mind that the servers could be linked to the same generating entity.

mkdir server
cd server
openssl dhparam -out dh.pem 4096

Now import the CA certificate and generate a secret TA key, which we'll need later to authenticate the control channel for additional security.

cp ../CA/cacert.pem ./
mkdir private
chmod 700 private
cd private
openvpn --genkey --secret ta.key

Now we are going to generate the server's private key for authentication. Unlike in the CA step, we are not going to encrypt it, because otherwise we would be prompted for a password every time a service on the server side needs to access the key at startup. E.g., it would be impossible to automatically restart our OpenVPN server without manually entering a passphrase, or without delegating this step to a script containing the passphrase in cleartext (which, for most practical purposes, is almost equivalent to having an unencrypted key). Considering that the services accessing this key in plaintext will be running almost all the time, I think it's not worthwhile to encrypt the server key, unless you REALLY trust the security of all those services, OR you think they won't need frequent restarts, OR you employ very strict process-isolation policies via virtual machines (e.g., on a Qubes environment or similar).

openssl ecparam -genkey -name sect571k1 -text -conv_form compressed -out serverkey.pem
chmod 600 serverkey.pem
cd ..

Now we need to generate a certificate request, send it to the CA, have it signed, and then send the certificate back to our server. As usual, in this tutorial these steps will be simulated.

openssl req -new -key private/serverkey.pem -config ../openssl.cnf -extensions server -subj '/C=ServerCountry(TwoLetters)/ST=ServerCountry/L=ServerCity/' -days 36500 -sha512 -out req.pem
openssl req -in req.pem -noout -text
cd ../CA
mv ../server/req.pem ./
openssl ca -config ../openssl.cnf -extensions server -policy policy_anything -out ../server/servercert.pem -md sha512 -cert cacert.pem -keyfile private/cakey.pem -infiles req.pem

Insert the CA passphrase to sign the server certificate.

rm req.pem
cd ../server
openssl x509 -in servercert.pem -noout -text -purpose
cd ..

Creating one or more client certificates

The following must be repeated for every client who wants to connect to the VPN, and at this point it should be self-explanatory:

mkdir client
cd client
cp ../CA/cacert.pem ./
mkdir private
chmod 700 private
cd private
cp ../../server/private/ta.key ./
openssl ecparam -genkey -name sect571k1 -text -conv_form compressed -out clientkey_temp.pem
openssl ec -in clientkey_temp.pem -out clientkey.pem -aes256

Insert a passphrase to protect the client key.

wipe -f clientkey_temp.pem
chmod 600 clientkey.pem
cd ..
openssl req -new -key private/clientkey.pem -config ../openssl.cnf -extensions client -subj '/C=ClientCountry(TwoLetters)/ST=ClientCountry/L=ClientCity/CN=ClientName/O=ClientOwnerName/emailAddress=client@e.mail' -days 36500 -sha512 -out req.pem

Reinsert the client passphrase.

openssl req -in req.pem -noout -text
cd ../CA
mv ../client/req.pem ./
openssl ca -config ../openssl.cnf -policy policy_anything -extensions client -out ../client/clientcert.pem -md sha512 -cert cacert.pem -keyfile private/cakey.pem -infiles req.pem

Insert the CA passphrase.

rm req.pem
cd ../client
openssl x509 -in clientcert.pem -noout -text -purpose
cd ..

Server configuration

This must be done on the server machine. Create the following two folders if they do not already exist:

mkdir /etc/openvpn
mkdir /etc/default

Copy all the content of the directory /root/.ssl/server on the trusted machine we used in the previous steps into the server's /etc/openvpn directory. Also add this file, but substitute xx.xx.xx.xx with your server's IP; yy.yy.yy.0 will be the VPN address pool (the server will create a virtual interface on yy.yy.yy.1). Also note that with this configuration you can override your clients' DHCP discoveries being sent through the VPN (i.e., the DHCP queries of your clients will remain inside their respective networks and will not travel through the VPN). The DNS option allows you to specify your own DNS servers for the VPN; in this case I have put the OpenDNS servers, but you can also choose others or comment out that option (then DNS queries through the VPN will use the OpenVPN server's own DNS).
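The actual configuration file is linked above; purely as a hedged sketch of what a server.conf matching this description might contain (directive values here are assumptions, not the exact linked file):

```
port 1194
proto udp
dev tun
ca /etc/openvpn/cacert.pem
cert /etc/openvpn/servercert.pem
key /etc/openvpn/private/serverkey.pem
dh /etc/openvpn/dh.pem
tls-auth /etc/openvpn/private/ta.key 0
server yy.yy.yy.0 255.255.255.0
push "redirect-gateway def1"
# OpenDNS resolvers; replace or comment out to use the server's own DNS
push "dhcp-option DNS 208.67.222.222"
push "dhcp-option DNS 208.67.220.220"
auth SHA512
keepalive 10 120
comp-lzo
persist-key
persist-tun
```

The auth directive selects the HMAC digest (SHA512 here, matching the goal of this tutorial), and the two push "dhcp-option DNS ..." lines are the DNS option discussed above.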

Now we must make sure that the OpenVPN server's interface is started whenever the server reboots. First of all, create a file called openvpn in /etc/default (if it is not already present) and make sure it contains the following line:


Then put this script file in /etc/init.d . Notice the line DAEMON=/usr/local/sbin/openvpn instead of DAEMON=/usr/sbin/openvpn : this is because we installed OpenVPN manually at that location. Now register the service so that it is started at boot:

update-rc.d openvpn defaults

Then open the /etc/sysctl.conf file and look for the following line (add it if it doesn't exist):

net.ipv4.ip_forward = x

Where x can be 0 or 1. Set x to 1 if it is 0. This basically turns your server into a router, meaning it will be able to route packets between different network interfaces. Editing /etc/sysctl.conf makes the change persistent across reboots; to apply it immediately without rebooting, run:

sysctl -p /etc/sysctl.conf

Check that everything is OK:

cat /proc/sys/net/ipv4/ip_forward

must be 1.

Now we need to enable NAT translation from the OpenVPN virtual interface to the server's network interface. To do this, edit the script at /etc/network/if-up.d/iptables (or create it if it doesn't exist, making it executable with chmod +x /etc/network/if-up.d/iptables ). Make sure that it begins with:


and then add the following lines:

iptables -t nat -A POSTROUTING -s yy.yy.yy.0/24 -o eth0 -j MASQUERADE
iptables -A INPUT -i tun+ -j ACCEPT
iptables -A OUTPUT -o tun+ -j ACCEPT
iptables -A FORWARD -i tun+ -j ACCEPT

where eth0 should be your primary network interface, the one bound to the xx.xx.xx.xx address (check with ifconfig if unsure).

Finally, restart networking and OpenVPN:

service networking restart
service openvpn restart

Now the OpenVPN service should automatically run at every reboot, and with ifconfig you should also see the new yy.yy.yy.1 interface.

Client configuration

This must be done on the client machine. Login as a normal user and create the following folder if it has not been already created:

mkdir ~/.openvpn

Copy all the content of the directory /root/.ssl/client on the trusted machine we used in the previous steps into the client's ~/.openvpn directory. Also add this file, but substitute xx.xx.xx.xx with your server's IP.
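Again, the actual file is linked above; as a hedged sketch (directive values are assumptions, not the exact linked file), a client.conf mirroring the server configuration might look like:

```
client
dev tun
proto udp
remote xx.xx.xx.xx 1194
ca ~/.openvpn/cacert.pem
cert ~/.openvpn/clientcert.pem
key ~/.openvpn/private/clientkey.pem
tls-auth ~/.openvpn/private/ta.key 1
auth SHA512
remote-cert-tls server
comp-lzo
nobind
persist-key
persist-tun
```

Note the tls-auth direction parameter: 1 on the client versus 0 on the server, and remote-cert-tls server, which makes the client refuse certificates not signed with the server extensions.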


From the client, you can connect to the new VPN with the command:

openvpn ~/.openvpn/client.conf

(you need to provide the password to unlock the client's secret key).

Revoking a client certificate

In order to revoke a certificate, we first have to update the CA's index and generate an updated Certificate Revocation List (CRL), then instruct all the servers who use that CA to use the crl-verify option, and finally upload the new updated CRL to those servers.

cd /root/.ssl/CA
cat index.txt
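As a purely illustrative example (serials, dates and subject names below are hypothetical), the output might look like:

```
V	230415120000Z		01	unknown	/C=XX/ST=SomeState/L=SomeCity
V	230415120000Z		02	unknown	/C=XX/CN=client1/O=Owner1
V	230415120000Z		03	unknown	/C=XX/CN=client2/O=Owner2
```

The first field is the status flag (V = valid, R = revoked, E = expired), followed by the expiry date, the revocation date (empty for valid certificates), the serial number, the certificate filename, and the subject.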

This gives us a list of the recorded certificates, each with a serial number in front of it. Let's assume we want to revoke client certificate number 03. Then check, to be sure:

openssl x509 -text -noout -in newcerts/03.pem

If it is correct we can revoke it:

openssl ca -config ../openssl.cnf -revoke newcerts/03.pem

(insert the CA's passphrase). We can confirm the revocation with:

cat index.txt

(the R stands for "Revoked"). Now we can generate an updated CRL. If this is the first time we revoke a certificate, we have to prepare a counter:

echo "01" > crlnumber

Then generate the CRL:

openssl ca -config ../openssl.cnf -gencrl -out crl.pem

(insert the CA's passphrase). Check the CRL with:

openssl crl -text -noout -in crl.pem

Now we have to copy the crl.pem file into the server directory of EVERY server which uses that CA. Usually it's sufficient to just replace the old crl.pem with the new one without restarting OpenVPN, because the CRL is checked whenever a host tries to connect. But since this is the first time, we also have to add the following line to server.conf:

crl-verify ./crl.pem

and then restart the service. It is also possible to un-revoke a certificate, but it is not advisable (there are other ways to temporarily block a user).


Feedback via email at tommaso[at)gagliardoni(dot)net is very welcome :)

(Feb 20th, 2013) HowTo: Remote LUKS unlock via SSH/dropbear with multiple ethernet interfaces on Linux Debian

In this howto I will explain how to set up a minimal SSH server on a Debian system with LUKS full-disk encryption (i.e., the root directory is also encrypted) and with multiple ethernet interfaces. The standard howtos always assume that there is just a single ethernet interface. The difference is not negligible, as we will see: at the stage where dropbear starts, the order in which the interfaces are brought up is random - udev is not loaded yet. I will not explain in detail the typical scenario where you want to use this setup, but basically the goal is to be able to reboot a remote machine whose disks are fully encrypted with LUKS. The solution is to install a minimal SSH server (dropbear) in the cleartext boot partition, connect to it, and enter the password to unlock the rest of the filesystem, continuing the normal boot procedure.

Remote LUKS: the standard procedure

You can find the standard howto in the Debian cryptsetup documentation:

zcat /usr/share/doc/cryptsetup/README.remote.gz

First of all install the needed tools:

apt-get install dropbear busybox wipe

Check that /etc/initramfs-tools/initramfs.conf contains BUSYBOX=y and does NOT contain DROPBEAR=n.

The host keys used for the initramfs are dropbear_dss_host_key and dropbear_rsa_host_key, both located in /etc/initramfs-tools/etc/dropbear . The RSA key is "too small" by default. If you feel paranoid, delete it and create a new one:

rm /etc/initramfs-tools/etc/dropbear/dropbear_rsa_host_key
dropbearkey -t rsa -s 4096 -f /etc/initramfs-tools/etc/dropbear/dropbear_rsa_host_key

Now you want to generate a user (root) private SSH key to connect to the server. The only catch is that it must first be created in Dropbear format:

dropbearkey -t rsa -s 4096 -f /etc/initramfs-tools/root/.ssh/id_rsa_dropbear

Then extract the public part:

dropbearkey -y -f /etc/initramfs-tools/root/.ssh/id_rsa_dropbear | grep "^ssh-rsa " > /etc/initramfs-tools/root/.ssh/

and include it in the initrd authorized_keys file:

cat /etc/initramfs-tools/root/.ssh/ >> /etc/initramfs-tools/root/.ssh/authorized_keys

At this point (assuming you use OpenSSH to connect) you want your private key converted to OpenSSH format:

/usr/lib/dropbear/dropbearconvert dropbear openssh /etc/initramfs-tools/root/.ssh/id_rsa_dropbear /etc/initramfs-tools/root/.ssh/id_rsa

and then we can delete the old Dropbear one, since we will use OpenSSH to connect:

wipe -f /etc/initramfs-tools/root/.ssh/id_rsa_dropbear

Now assume you want to use the interface eth0 (with DHCP enabled) to unlock your system. You have to add the following line to /etc/initramfs-tools/initramfs.conf :


Finally, recompile the server's initrd and reboot:

update-initramfs -u -k all

Let's assume your server is reachable and the passphrase to unlock your LUKS setup is "hello world" (without quotes). I am assuming you have a LUKS setup with a single partition, or in any case that you are asked for just ONE passphrase at boot (i.e., you use keyfiles stored in the first partition to unlock other partitions). Then, once your server has rebooted and is hanging at boot waiting for the passphrase, you can unlock it remotely with the following command:

ssh -o "UserKnownHostsFile=~/.ssh/a_new_file_known_hosts" -i "~/.ssh/id_rsa" root@xx.xx.xx.xx "echo -ne \"hello world\" >/lib/cryptsetup/passfifo"

So far, so good.

Multiple ethernet interfaces

Now the problem: let's assume you have TWO (physical) ethernet interfaces, eth0 and eth1. Here you will hit an issue: the order in which they are named at boot is random. This is because udev (which usually takes care of assigning the interfaces their correct names) has not started yet when Dropbear starts. So, sometimes the above solution will fail. One option could be to tell Dropbear to listen on ALL the interfaces, but 1) it seems that Dropbear has a hard time doing so, since it was created for embedded environments where you usually cannot afford multiple servers, and 2) it may not be desirable (maybe one of the interfaces is on a DMZ).

Luckily there is another, elegant solution: force some of the udev rules to be loaded in the initrd. Namely, open /usr/share/initramfs-tools/hooks/udev with an editor and look, around line 37, for:

for rules in 50-udev-default.rules 50-firmware.rules 60-persistent-storage.rules 80-drivers.rules 95-udev-late.rules; do

add 70-persistent-net.rules to the list, so it looks like this:

for rules in 50-udev-default.rules 50-firmware.rules 60-persistent-storage.rules 80-drivers.rules 95-udev-late.rules 70-persistent-net.rules; do

Save the file, then regenerate the initramfs and reboot:

update-initramfs -u -k all
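For reference, the hook edit above can also be scripted with a one-line sed substitution. The sketch below (my addition, assuming the Debian initramfs-tools layout quoted above) demonstrates it on a throwaway copy first; set HOOK to /usr/share/initramfs-tools/hooks/udev to apply it for real:

```shell
# Demonstrate the edit on a temporary copy of the hook's rules line;
# set HOOK=/usr/share/initramfs-tools/hooks/udev to apply it for real.
HOOK=$(mktemp)
printf 'for rules in 50-udev-default.rules 50-firmware.rules 60-persistent-storage.rules 80-drivers.rules 95-udev-late.rules; do\n' > "$HOOK"

# Append 70-persistent-net.rules to the list, right before "; do"
sed -i 's/95-udev-late\.rules; do/95-udev-late.rules 70-persistent-net.rules; do/' "$HOOK"
cat "$HOOK"
```

The same substitution applied to the real hook gives exactly the modified line shown above.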


(Aug 23rd, 2012) An overview of position-based quantum cryptography.



Imagine a secure cryptographic protocol where you can use your geographical position as a credential. For example, you could register your home location at your bank to secure your on-line banking account against connections not originating from your home PC. Or, as a military organization, you could send an encoded message that only troops deployed at a certain location could read. In general, this would link the security of the digital world to that of the physical world - the Holy Grail of on-line security. This is exactly what position-based cryptography (PBC) tries to achieve, and this goal has proven to be anything but trivial. In fact, in order for the model to be meaningful, the geographical position cannot be obtained from external measurement services, which would then become the weak link of the security chain. For instance, it would not make sense to rely on something like the GPS satellite system to obtain coordinates, since this would open a plethora of new attack scenarios. The position should instead depend on some intrinsic property of the systems involved, something as cheating-proof as possible. In practice, this means exploiting some physical law that no known attacker can tamper with - just as the GPS system relies on the constant speed of light to pinpoint positions.

Cryptosystems relying on the idea of exploiting the physical world are theoretically unbreakable as long as the physical law holds, since their security does not depend, in principle, on any computational assumption. They are, of course, prone to implementation issues like any other cryptosystem, but we are guaranteed that as long as we have understood the physics correctly, we must not fear any surprises.

Must we?

The gap between theory and practice has always proven unpredictably wide and, as often happens, the good and the bad guys can use the same tools. The same physical properties that allow us to create strong cryptograms also allow us to devise powerful new attacks. In this article we will give an overview of the rise, evolution, fall and resurrection of position-based cryptography augmented with the magic of quantum physics, keeping in mind that there is no final word in the everlasting arms race of security.

Classical PBC

A formalized model for PBC was first introduced - and proven to be insecure - in [CGMO09]. The idea of the original protocol was to achieve secure position verification via a distance bounding protocol based on relativistic assumptions (namely, the speed of light) and involving multiple verifiers to authenticate an honest prover.

Let us see an example in the 1-dimensional case. Consider a segment of space [0,1] and two verifiers, V_0 and V_1, sitting at positions 0 and 1 respectively. Then we have an honest prover P, standing at position 0.6, whose goal is to convince the two verifiers that he is located exactly at that position. We assume the following:
  • there is a fixed and publicly known Boolean function f:{0,1}^n X {0,1}^n -> {0,1} which is very easy to compute and takes virtually no time to be evaluated for reasonable values of n
  • the two verifiers V_0 and V_1 share a private channel which they can use to securely communicate with each other and synchronize their clocks
  • all the parties are able to communicate just at a fixed, bounded speed (i.e.: we assume that they can broadcast radio waves or laser pulses at the speed of light)
Under these assumptions we can design many different position verification protocols; here is an example:
  1. P initializes the protocol, sending a request to the verifiers to be authenticated at position 0.6
  2. V_0 generates a random n-bit string x and shares it with V_1 through the private channel, V_1 does the same with another random n-bit string y
  3. V_0 generates a random bit b and, if f(x,y) = 1, sends a copy of b to V_1 through the private channel
  4. V_0 sends x and b to P, and V_1 sends y to P; they time their messages so that x, y and b all arrive at position 0.6 at the same time
  5. P evaluates f(x,y): if the result is 0 then P routes back b to V_0, otherwise he routes b to V_1
  6. the verifiers accept the round if and only if V_f(x,y) receives the correct b at a time which is consistent with the claimed position 0.6 given the speed of light of the signal
  7. repeat the previous steps for k times (where k is a security parameter) for security amplification and authenticate P's position if and only if all the k rounds are passed.
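The amplification in step 7 can be made quantitative. As noted below, a lone malicious prover standing elsewhere passes each round with probability at most 1/2, and the k rounds are independent, so (a one-line sketch):

```latex
\Pr[\text{cheating prover passes all } k \text{ rounds}]
  \le \left(\tfrac{1}{2}\right)^{k} = 2^{-k}
```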
It is pretty clear that a single malicious prover standing at a position different from the claimed one cannot cheat with probability greater than 1/2 in each round. But it is easy for two colluding malicious provers, M_0 and M_1, standing, e.g., at positions 0.3 and 0.8 respectively, to trick the verifiers into believing that there is a single honest prover at position 0.6, namely:
  1. as soon as M_0 receives x and b from V_0, she forwards a copy of both to M_1; M_1 does the same by forwarding a copy of y to M_0
  2. as soon as M_0 receives y from M_1, she computes f(x,y) and, if 0, sends b to V_0, otherwise does nothing; at the same time, as soon as M_1 receives b and x from M_0, she sends b to V_1 if f(x,y) = 1, otherwise does nothing.
It is easy to check, by summing up the travel times of the malicious attackers' messages, that either V_0 or V_1 will receive the right b at the right time; hence the attackers can convince the verifiers that there is an honest prover at any given position between them.
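As a sanity check (my worked numbers, with the speed of light normalized to 1): V_0's message is sent at time T - 0.6 and V_1's at time T - 0.4, so that both reach position 0.6 at time T; summing the legs of the attackers' relays reproduces exactly the honest arrival times:

```latex
\begin{align*}
\text{honest } P:\ & b \text{ reaches } V_0 \text{ at } T+0.6
                     \text{ (or } V_1 \text{ at } T+0.4);\\
f(x,y)=0:\ & (T-0.4)+0.2+0.5+0.3 = T+0.6 \text{ at } V_0;\\
f(x,y)=1:\ & (T-0.6)+0.3+0.5+0.2 = T+0.4 \text{ at } V_1.
\end{align*}
```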

This argument (formally proved in [CGMO09]) extends to any distance bounding protocol, leading to the conclusion that secure classical PBC against multiple colluding adversaries is unachievable.

Quantum PBC

First ideas for quantum PBC (QPBC) appeared in the academic literature in 2010 [KMS10]. The idea is the following: since the above multi-adversary attack on classical PBC protocols relies on creating copies of some bit and keeping them before relaying them to the correct verifier, the attack fails if we use quantum bits (qubits) instead.

It is well known that an unknown quantum state cannot be copied. In quantum mechanics this is called the "No-Cloning Theorem". One way to understand it is via Landauer's Principle from thermodynamics, which states that erasing information embedded in a physical system costs energy. Since in a closed physical system the total energy must be conserved, and since in order to copy a qubit one first has to "free space" (erase another qubit) to copy the given state into, it turns out that it is impossible to copy a qubit in a reversible way. The other option is to have the system interact with another system (the observer) in order to measure the given qubit and then prepare a fresh qubit from scratch in the measured state. But by the postulates of quantum mechanics, the process of measurement destroys the original quantum state with very high probability; it is therefore impossible to make a perfect copy of an unknown qubit. This "magic" property of quantum mechanics has been exploited to design new, innovative, unconditionally secure cryptographic schemes, and it lies at the basis of quantum cryptography [BB84].

In QPBC, part of the information exchanged by the prover and the verifiers consists of some quantum state |phi> which must be measured at a certain step of the protocol and acts as a "guarantee" that that particular step can only be performed once per round. Under the same conditions as the protocol in the previous section, let us consider the following quantum variation:
  1. P initializes the protocol, sending a request to the verifiers to be authenticated at position 0.6
  2. V_0 generates a random n-bit string x and shares it with V_1 through the private channel, V_1 does the same with another random n-bit string y
  3. V_0 generates an entangled EPR pair |ab> = (|00>+|11>)/sqrt(2); if f(x,y) = 1, he sends the first half of the register, |a>, to V_1 through the private channel, otherwise he keeps this half for himself
  4. V_0 sends x and the other half of the EPR register, |b>, to P, and V_1 sends y to P; they time their messages so that x, y and |b> all arrive at position 0.6 at the same time
  5. P evaluates f(x,y): if the result is 0 then P routes back |b> to V_0, otherwise he routes |b> to V_1
  6. the verifiers accept the round if and only if V_f(x,y) receives the qubit |b> at a time which is consistent with the claimed position 0.6 given the speed of light of the signal and it is consistent with a Bell measurement performed together with the other half |a>
  7. repeat the previous steps for k times (where k is a security parameter) for security amplification and authenticate P's position if and only if all the k rounds are passed.
A Bell measurement over two qubits is in this case just a "test" of whether the two qubits have preserved their entanglement. Since measuring either of the two qubits destroys the entanglement, passing this test means that the qubit |b> has not been measured (and hence has not been replaced or partially copied). Under this condition it is clear that no coalition of malicious attackers can impersonate an honest P with good probability. In fact, the attack on the classical protocol relies on the fact that the bit b can be copied and distributed amongst the attackers without the verifiers noticing, which is impossible with the quantum state |b>. Notice that this protocol only needs a single EPR pair for each of the k rounds in order to achieve k bits of security, which is pretty efficient.
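To illustrate why the Bell test detects tampering, consider (my example; other attacks fail for similar reasons) a cheater who measures |b> in the computational basis and resends it. The pair collapses to |00> or |11>, and expanding in the Bell basis:

```latex
|00\rangle = \tfrac{1}{\sqrt{2}}\left(|\Phi^+\rangle + |\Phi^-\rangle\right),
\qquad
|\Phi^\pm\rangle = \tfrac{1}{\sqrt{2}}\left(|00\rangle \pm |11\rangle\right)
```

so the verifiers' Bell measurement returns the honest outcome |Phi+> with probability only 1/2 per round, and such a cheater survives all k rounds with probability 2^{-k}.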

Breaking QPBC

QPBC looks truly unbreakable so far, but unfortunately this does not take into account the fact that entanglement can also be used by malicious attackers to perform one of the weirdest feats of quantum information: distributed computation via teleportation. This is a consequence of the fact that entanglement between qubits is a non-classical interaction, very different from anything we have known in the last few centuries, such as electromagnetic or gravitational interactions. One of its consequences is that if two parties share an entangled EPR pair, then one of them can teleport an additional qubit to the other by sending two classical bits of information and destroying the EPR pair through a Bell measurement over the qubit and one of the halves of the EPR pair [NC00]. Let us now see how this can be used by malicious attackers M_0 and M_1 to break the QPBC protocol above in the case where n=1 and f(x,y) = x XOR y:
  1. M_0 and M_1 share three entangled EPR pairs, labelled 0, 1 and 2 respectively, and initialize the protocol
  2. M_0 receives x (a 1-bit string) and |b>, while M_1 receives the 1-bit string y
  3. M_0 uses the EPR pair labelled x to teleport |b> to M_1, and sends x and the two classical bits for teleportation to M_1 (notice that these three classical bits will travel at the speed of light)
  4. at the same time, M_1 teleports her half of the EPR pair labelled y using EPR channel 2, and sends y and her two classical teleportation bits to M_0 (again, these three bits will travel at the speed of light)
  5. when the classical bits arrive at their destinations (in a single round of simultaneous communication), both cheaters can compute f(x,y)
  6. if f(x,y) = 0, then it is easy to check that |b> has been teleported in M_0's EPR register 2 (and M_0 is able to recover it since she has also received M_1's teleportation bits); otherwise it is easy to check that |b> has been teleported in M_1's EPR register labelled x (and M_1 is able to recover it, since she has received both x and M_0's teleportation bits)
  7. in both cases, it is easy to check that, with a single round of simultaneous communication, the "correct" cheater has received |b> just in time to route it to the "correct" prover, hence breaking the security of the protocol.
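The teleportation steps above use the standard protocol [NC00]: with (z, x) denoting the two classical bits produced by the sender's Bell measurement, the receiver's half of the EPR pair |Phi+> = (|00>+|11>)/sqrt(2) collapses to X^x Z^z |psi>, and the Pauli correction recovers the state:

```latex
Z^{z} X^{x} \left( X^{x} Z^{z} \, |\psi\rangle \right) = |\psi\rangle
```

Until the two classical bits arrive (at light speed, at best), the receiver's qubit looks maximally mixed; this is why the cheaters still need one round of classical communication, and why relativity is not violated.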
This attack can be generalized to any n and any suitable f, the basic idea being the following: if the two attackers share an arbitrary amount of EPR pairs, then they can compute the function f "in parallel", as if they were using a single (quantum) computer made of a split (quantum) register physically separated in space.

This is an intrinsic property of quantum computing: non-classical correlations are not limited by the speed of light (in a sense which is still not completely understood). For example, it would be possible to have a two-qubit quantum computer where one of the qubits resides on Earth and the other is located on Mars, and still it would work exactly as if we had a single joint register. It is proven, however, that it is impossible to exploit this property to perform "miracles" - e.g., to violate Einstein's relativity principle. Namely, even if the two registers are entangled (and it is thus possible to perform operations on both simultaneously), it is impossible to access the information of a remote register without transmitting classical information. This is exactly what the classical teleportation bits are for; the interesting part is that the process of computation can be "parallelized" through the EPR channels and then completed with a single round of classical communication - which is enough to break our QPBC protocol.

Future directions

QPBC is thus proven to be insecure against multiple colluding attackers sharing an arbitrary amount of EPR pairs. But we must consider that EPR pairs are a costly resource (they are today, and there is no evidence that they will become cheap in the future). This has led to an interesting observation.

Let us forget for a second about entanglement-capable adversaries and let us consider the role of the parameter n in the QPBC protocol above. It has basically no importance, in the sense that the security of the protocol does not depend on it - it depends on the parameter k instead. This means that, in theory, there is nothing wrong with setting n = 1 and using a very simple function f, like the XOR example above. But if we consider the entanglement attack, while the same observation remains true for the honest prover and verifiers, n does make a difference for the malicious attackers: namely, the larger n, the more EPR pairs M_0 and M_1 need to share in order to compute f.

The QPBC protocol above is then interesting for the following reason: while it only requires a single EPR pair per round from the honest parties, it requires some larger amount of entangled resources from the malicious attackers, and this amount in general depends on n. The trade-off can be stated as: the more (cheap) classical computing resources the honest parties use, the more (costly) entangled resources the attackers need - which is a pretty nice feature.

This is not an absolute statement though, because it also depends on the function f. Namely, there exist easy-to-compute functions f for which the number of EPR pairs needed to compute f is small (such as logarithmic in n, or even constant). In order to exploit the aforementioned trade-off in real scenarios we hence have to find easy-to-compute (i.e., in P) functions for which the number of EPR pairs needed for evaluation grows as fast as possible in n.

The Garden-Hose Game is a computational model introduced exactly for this purpose: to model the EPR-related complexity of a function. It consists of two players (with respective inputs x and y), separated by some distance, who want to compute a Boolean function f(x,y) in the following way: they have some fixed pipes running between them, plus some pieces of garden hose which they can use to connect the ends of the pipes at their own location into a pattern dependent on their own input only (no communication allowed). There are no T-shaped (or more complex) pieces of hose, just one-to-one connections, and not all the pipes need to be connected. One of the players has a water source (a tap) connected to one of the loose ends of the pipes. The players must arrange their hose pieces in such a way that the water eventually exits on player 0's side if f(x,y) = 0, and on player 1's side otherwise. It is easy to see that the water will eventually exit from one side or the other, and that this model is closely related to the teleportation technique used to break QPBC. The Garden-Hose complexity GH(f) of a function f:{0,1}^n X {0,1}^n -> {0,1} is then defined as the minimum number of pipes the two players need to compute its output, for any input, in the above way.

Some complexity-theoretic results are known about bounds on GH(f), namely:
  • GH(f) <= 2^n + 1, for any Boolean function
  • if f is in L (computable in logarithmic space on a deterministic Turing Machine), then GH(f) is at most polynomial in n
  • if GH(f) is polynomial in n, then f is in L (up to local preprocessing)
  • if there exists an f in P such that GH(f) is super-polynomial in n, then P != L (notice that L is a subset of P, but it is currently unknown whether the equality holds)
  • there exist functions f such that GH(f) is exponential in n, but it is unknown whether they are also in P or not.
So, if we could find a function f which is in P but has exponential GH(f), that would be a perfectly good candidate for use in QPBC, making the above attack infeasible for reasonable values of n. But the existence of such a function would imply a major breakthrough in computational complexity. Up to now, only polynomial Garden-Hose complexity bounds are known for explicit functions in P, but the road is open to new, exciting directions.


  • [BB84] Bennett, Brassard: "Quantum cryptography: Public key distribution and coin tossing", Proceedings of the IEEE International Conference on Computers, Systems and Signal Processing, 1984

  • [BCF+11] Buhrman, Chandran, Fehr, Gelles, Goyal, Ostrovsky, Schaffner: "Position-based quantum cryptography: Impossibility and constructions", CRYPTO 2011

  • [BFSS11] Buhrman, Fehr, Schaffner, Speelman: "The Garden-Hose Game", arXiv:1109.2563v2

  • [CGMO09] Chandran, Goyal, Moriarty, Ostrovsky: "Position-based cryptography", CRYPTO 2009

  • [KMS10] Kent, Munro, Spiller: "Quantum Tagging: authenticating location via quantum information and relativistic signalling constraints", arXiv:1008.2147

  • [NC00] Nielsen, Chuang: "Quantum Computation and Quantum Information", Cambridge University Press, 2000.

(Jun 25th, 2012) Waterloo (ON), 12th Canadian Summer School on Quantum Information, 9th Canadian Student Conference on Quantum Information, 2nd AQuA Student Congress on Quantum Information and Computation.

I've just come back from this big event in Waterloo, Canada. The past two weeks have been quite intense, though I had the opportunity to visit some nice places (Niagara Falls is cool, but I prefer Toronto, especially by night). Waterloo is one of the world's leading places for quantum stuff, with its three main research centers: the University of Waterloo, the Perimeter Institute and the Institute for Quantum Computing. The university campus is really cool, with a lot of wild animals around (beavers, squirrels, groundhogs, bunnies, eagles... but beware of the damn aggressive geese and the small birds attacking people at random!), although everything is quite expensive (especially food, and by the way I didn't see many overweight people around...) and the accommodation was... embarrassingly uncomfortable. I've also given a talk about provable security in the quantum random oracle model. Nightlife in Waterloo is not very intense. Best places to visit: The Flying Dog (salsa place, young people on Thursday night) and the Revolution (disco club).

(Oct 1st, 2011) A general mathematical principle to achieve `perfect' social justice

Some random thoughts about the `fairness' of a punishment with respect to a crime. The principle `the harsher the crime, the harsher the punishment' should not be taken for granted, but should follow from a deeper principle. Also, I think that the principle according to which `a punishment only serves to re-habilitate the criminal' is somehow incomplete, since it can be abused: re-habilitation comes automatically once the criminal has repaid all the `damage' caused by his/her crime. Moreover, a punishment is not `fair' if the personal gain obtained in performing a crime is exactly balanced by the punishment; it should be harsher, since not every crime is prosecuted (not every criminal is caught, or else we wouldn't have criminals). I am not a jurist, so maybe I am saying something trivial, but IMHO the only general `fair' rule should be the following:

The punishment for a certain crime should be equal to the `social damage' caused by that crime, divided by the probability of that crime to be prosecuted.

  • `social damage' is assessed on an empirical basis by a competent, independent body, and should include the costs of prosecution, the moral and physical damages caused by the crime, its consequences and so on
  • `punishment' should always be commuted into appropriate forms, e.g., it is completely pointless to physically punish a murderer since this by no means `repays' the social damage caused by the murder; a murderer should instead, e.g., do useful, harsh, underpaid jobs for many years for the benefit of the victim's family, friends, colleagues and so on, until a `fair' balance (established by a judge) has been reached between the crime and the repaid social gain (in most cases a `fair' social gain, according to the victim's family, could only be reached with the murderer's execution, but a mathematical principle should not include emotionality in its formulation)
  • `probability' of a crime being prosecuted should be computed from historical records for known crimes, and from empirical considerations for emerging crimes, in collaboration with police forces or other bodies competent to perform such an estimate.
An example: a businessman is caught evading 100k EUR of taxes.
The total annual cost of the legal and bureaucratic bodies fighting tax evasion is, say, 500M EUR, but it must be weighted by the ratio between the amount we are considering here and the mean total tax evasion (let's say 10G EUR per year), so: 100k * 500M / 10G = 5k EUR.
In addition, the judge establishes that there is a further `mean social damage' (due, e.g., to emulation phenomena) of 2k EUR.
So: the total `social damage' amounts to 107k EUR. This must be divided by the probability of being caught evading taxes, let's say 15%. Then: 107k / 0.15 = 713k EUR (rounded down). This is the total amount the criminal has to repay as a punishment to make complete atonement, either through fines (provided he has enough money) or social works (which should then be paid much better than they are now), or a combination of the two. This looks like a lot, but it is the only amount that is statistically effective.
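The arithmetic above can be sketched in a few lines of shell (integer math; all figures are the hypothetical ones from the example):

```shell
# Hypothetical figures from the example above, in EUR.
evaded=100000                                         # taxes evaded
enforcement=$(( evaded * 500000000 / 10000000000 ))   # share of the 500M/year enforcement cost: 5000
extra=2000                                            # judge-assessed extra social damage
damage=$(( evaded + enforcement + extra ))            # total social damage: 107000

# Divide by the 15% prosecution probability (x100/15, rounded down).
punishment=$(( damage * 100 / 15 ))
echo "$punishment"   # prints 713333
```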
This principle is `fair', in the sense that it makes both the crime and the prosecution of the crime `statistically non-convenient'. It is also `rational', since there is no room for more or less `hated' crimes (usually subject to manipulation, emotional reactions, demagogy and so on), yet it is `elastic', since the definition of `social damage' can take many factors into account.

Note that a corollary of this principle is the following: once a criminal has completely atoned for his/her guilt, he/she is really `free'. All the damage has been repaid, in every possible rational and practical sense, so subsequent discrimination based on his/her police record should not be allowed. Also, the criminal's rehabilitation is automatic. Tests to see if he/she is still capable of crimes should not be necessary, because after the atonement we no longer have a criminal: we have a common citizen who has repaid society more than the damage caused. He/she should already be `rehabilitated' enough. Nor should recidivism ever be taken into account.

These are just my thoughts anyway, and a test run for the blog I will set up on this website if and when I have some time. Feedback appreciated via email :)

(May 27th, 2011) My thesis and slides : `Classical simulation of quantum circuits'

Here are my Master's thesis (in English) and the related slides (in Italian), should some visitor be interested.