All posts by Christer

PGP and Yubikey

I was trying to implement client-side encryption of files backed up to AWS S3 using Duplicity, with keys on my Yubikey Neo created on an air-gapped installation. It worked with local PGP keys, but I couldn’t get it to decrypt using my PGP key on the Yubikey.


Verify that you have the right PIN and that it hasn’t been blocked…


To rule out Duplicity, I performed a simple encryption:

$ echo "Hello" |gpg2 -e > test.enc
Current recipients:
rsa2048/2ABD**** 2017-07-17

$ gpg2 -d < test.enc
gpg: encrypted with 2048-bit RSA key, ID 2ABD****, created 2017-07-17
gpg: public key decryption failed: Card error
gpg: decryption failed: No secret key

Issuing this command prompted me for my PIN. However, the error didn’t make clear whether the PIN was wrong or blocked.

Troubleshooting gpg agent

First I killed the existing gpg-agent and started a new one in the console with debug logging:

$ ps aux | grep gpg-agent
$ kill 12345
$ gpg-agent --daemon --no-detach -v -v --debug-level advanced --homedir ~/.gnupg

which gave me this information:

gpg-agent[20808]: DBG: chan_5 -> INQUIRE PINENTRY_LAUNCHED 25484
gpg-agent[20808]: DBG: chan_5 <- END
gpg-agent[20808]: DBG: chan_6 -> [ 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ...(76 byte(s) skipped) ]
gpg-agent[20808]: DBG: chan_6 -> END
gpg-agent[20808]: DBG: chan_6 <- ERR 100663404 Card error
gpg-agent[20808]: smartcard decryption failed: Card error
gpg-agent[20808]: command 'PKDECRYPT' failed: Card error
gpg-agent[20808]: DBG: chan_5 -> ERR 100663404 Card error
gpg-agent[20808]: DBG: chan_5 <- [eof] gpg-agent[20808]: DBG: chan_6 -> RESTART
gpg-agent[20808]: DBG: chan_6 <- OK

This command 'PKDECRYPT' failed: Card error originally got me wandering off in the wrong direction, but let’s keep the story short(er).

I checked if my Yubikey actually has the key 2ABD****:

$ gpg2 --card-status

Reader ...........: Yubico Yubikey NEO OTP U2F CCID 01 00
Application ID ...: ****************
Version ..........: 2.0
Manufacturer .....: Yubico
Serial number ....: 01234567
Name of cardholder: Christer Barreholm
Language prefs ...: sv
Sex ..............: unspecified
URL of public key : **************
Login data .......: christer
Signature PIN ....: forced
Key attributes ...: rsa2048 rsa2048 rsa2048
Max. PIN lengths .: 127 127 127
PIN retry counter : 0 3 3
Signature counter : 1
Signature key ....: **** **** **** **** **** **** **** **** 7E86 ****
created ....: 2017-07-17 12:28:59
Encryption key....: **** **** **** **** **** **** **** **** 2ABD ****
created ....: 2017-07-17 12:29:13
Authentication key: **** **** **** **** **** **** **** **** 55FD ****
created ....: 2017-07-17 12:31:46
General key info..: sub rsa2048/7E866DD0 2017-07-17 Christer Barreholm
sec# rsa4096/A56F**** created: 2017-07-17 expires: 2018-01-16
ssb> rsa2048/7E86**** created: 2017-07-17 expires: 2018-01-16
card-no: 0006 01234567
ssb> rsa2048/2ABD**** created: 2017-07-17 expires: 2018-01-16
card-no: 0006 01234567
ssb> rsa2048/55FD**** created: 2017-07-17 expires: 2018-01-16
card-no: 0006 01234567

The key is there, but then I noticed PIN retry counter: 0 3 3 — the first counter is for the user PIN, so zero retries left means the PIN was blocked.

$ gpg2 --change-pin
gpg: OpenPGP card no. ******************************* detected

1 - change PIN
2 - unblock PIN
3 - change Admin PIN
4 - set the Reset Code
Q - quit

Your selection? 2
PIN unblocked and new PIN set.

$ gpg2 --card-status
PIN retry counter : 3 3 3

Looks better.

Another try to decrypt:

$ gpg2 -d < test.enc
gpg: encrypted with 2048-bit RSA key, ID 2ABD****, created 2017-07-17
Hello


Even longer story

I tried this back in July but eventually gave up. There were indications that the issue was related to stubbed keys in the keyring.
Below are some of the resources that led me in the wrong direction, but are still interesting:

Issues with primary key & subkeys on different smartcards
[Resolved] Trouble with GPG --card-status
YubiKey Guide

StackOverflowError in java.util.ArrayList due to subList()

We experienced a problem with a StackOverflowError when calling ArrayList.add(). The application added data to the end of a list to try different possible solutions. Attempts that lead to a dead end are then truncated from the list with a call to List.subList(). This might seem like a good way of reusing the data that is still valid.

The problem arises when we do something similar to:

List<Integer> list = new ArrayList<>();
// try solution x
list = list.subList(0, x);
// try solution y
list = list.subList(0, y);
// try solution z
list = list.subList(0, z);
// repeat for thousands of solutions

// -> StackOverflowError is thrown

The answer to why this is a problem is in the first words of the JavaDoc for List.subList(): “Returns a view of the portion of this list”. The method subList() returns a new object which keeps a reference back to the original list. So every time we modify the list, we have to update the parent, the parent’s parent, the parent’s parent’s parent, and so on. Enough calls to subList() gives us such a deep recursion within ArrayList that we get a StackOverflowError (at java.util.ArrayList$SubList.add(ArrayList.java:1005)).
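The view semantics are easy to demonstrate; a minimal sketch (class and variable names are my own, and List.of requires Java 9+):

```java
import java.util.ArrayList;
import java.util.List;

public class SubListView {
    public static void main(String[] args) {
        List<Integer> parent = new ArrayList<>(List.of(1, 2, 3, 4, 5));
        List<Integer> view = parent.subList(0, 3);

        // Adding through the view writes into the parent as well,
        // which is why every view keeps a reference back to it.
        view.add(99);
        System.out.println(parent); // [1, 2, 3, 99, 4, 5]
    }
}
```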

This is what the code looks like in ArrayList:

public void add(int index, E e) {
    parent.add(parentOffset + index, e);
    this.modCount = parent.modCount;
    this.size++;
}

Our solution is to instantiate a new List containing the sub-list elements, but with no reference back to the original list.
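That fix can be sketched as follows (a minimal illustration; names and values are my own, and List.of requires Java 9+):

```java
import java.util.ArrayList;
import java.util.List;

public class SubListCopyFix {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>(List.of(1, 2, 3, 4, 5));

        // Copying the view into a fresh ArrayList detaches it from the
        // original list, so repeated truncation never builds a chain of views.
        list = new ArrayList<>(list.subList(0, 3));
        list = new ArrayList<>(list.subList(0, 2));

        list.add(42); // plain ArrayList.add, no parent delegation
        System.out.println(list); // [1, 2, 42]
    }
}
```

The copy costs O(n) per truncation, but in return every list is independent and add() stays O(1) amortized no matter how many truncations precede it.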

Ubuntu 12.04 LTS and HWE

Ubuntu 12.04 LTS got something called the Hardware Enablement Stack (HWE) to support newer hardware. An LTS release is supported for 5 years, but there are different versions of the HWE, and only some of them are supported for the full LTS lifetime. More info here:
HWE End-of-life

To make this story short: I wanted to remain on 12.04 LTS, so I decided to upgrade the HWE. However, the upgrade failed due to a full /boot, leading to clean-up efforts. Therefore, make sure to check your disk space before starting the HWE upgrade:

$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/xyz-root 40G 28G 9,7G 75% /
udev 2,0G 4,0K 2,0G 1% /dev
tmpfs 396M 304K 396M 1% /run
none 5,0M 0 5,0M 0% /run/lock
none 2,0G 0 2,0G 0% /run/shm
/dev/vda1 228M 210M 5,9M 98% /boot

That is not enough space.

Run the following to identify old kernels to remove:
$ dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d'

I found e.g. that I had a 3.5.0-42 kernel that I could remove:

$ sudo apt-get purge linux-image-3.5.0-42-generic linux-headers-3.5.0-42 linux-headers-3.5.0-42-generic

$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/xyz-root 40G 27G 11G 73% /
udev 2,0G 12K 2,0G 1% /dev
tmpfs 396M 308K 396M 1% /run
none 5,0M 0 5,0M 0% /run/lock
none 2,0G 0 2,0G 0% /run/shm
/dev/vda1 228M 151M 65M 70% /boot

Enough space, so let’s upgrade:

sudo apt-get install linux-generic-lts-trusty linux-image-generic-lts-trusty

Reboot and get the greeting “Your Hardware Enablement Stack (HWE) is supported until April 2017.”

Nvidia driver on Ubuntu 13.10

I made the mistake of trying to switch from the Nvidia driver to the hybrid Bumblebee driver on Ubuntu 13.10, on my Lenovo T420s laptop with an NVIDIA NVS 4200M GPU (GF119). I ended up in a situation where I couldn’t get the Nvidia driver working again and had to switch back to the default Nouveau driver. I don’t remember the exact steps, but I did roughly the following:

sudo apt-get remove bumblebee nvidia-319-updates
sudo apt-get install xserver-xorg-video-nouveau
sudo dpkg-reconfigure xserver-xorg

This got me back to the Nouveau driver, but it didn’t support my external display on the DisplayPort, so I was eager to get the Nvidia driver working again.

sudo apt-get remove xserver-xorg-video-nouveau
sudo apt-get install nvidia-319
sudo dpkg-reconfigure xserver-xorg

But when I rebooted, it looked like the X server would start up, but it crashed and dropped me to a shell before the login screen. Checking the logs, I found that there was an issue loading the Nvidia kernel module. To make a long story short, I did the following to get the Nvidia driver working again. I’m not sure which step actually made the difference.

sudo vi /etc/modprobe.d/blacklist.conf


#Problem getting Nvidia to work
blacklist nouveau

I found some remnants of Bumblebee, which possibly caused some issues with kernel modules, since it blacklists Nvidia:

less /etc/modprobe.d/bumblebee.conf

So to get rid of it I did the following:
sudo apt-get purge bumblebee
Purge will delete configuration files, including bumblebee.conf.

I got to the login screen after a reboot, which was a step forward. However, trying to login failed and I got back to the login screen. Turns out that .Xauthority had incorrect permissions. Fix the ownership or just delete it.

sudo rm .Xauthority

Finally running with Nvidia driver again!

Check Java version in class files

Java classes can be compiled for different target platforms. You could e.g. compile with JDK 7 creating class files for Java 6. The target version is encoded in the beginning of the class files.

The major version of the different Java releases are:
Java 8 = 52 (0x34)
Java 7 = 51 (0x33)
Java 6 = 50 (0x32)
Java 5 = 49 (0x31)

So the class files for Java 6 start with

0xCA 0xFE 0xBA 0xBE 0x00 0x00 0x00 0x32

We can use the following command to find e.g. all Java 6 classes.

find . -name \*.class -exec grep -P "^\xca\xfe\xba\xbe\x00\x00\x00\x32" {} \;

Or better still, check just the first 8 bytes:
find . -name \*.class -exec sh -c 'head -c 8 {} | grep -q -P "^\xca\xfe\xba\xbe\x00\x00\x00\x32" && echo {}' \;

However, it might be more interesting to find out if any of the files we compile are NOT Java 6. This can happen when compiling with JDK 7 but forgetting to set Java 1.6 as the target.

find . -name \*.class -exec sh -c 'head -c 8 {} | grep -v -q -P "^\xca\xfe\xba\xbe\x00\x00\x00\x32" && echo "Java version is not 0x32 in file {}"' \;
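If you prefer doing the check from Java instead of the shell, here is a minimal sketch that reads the magic number and the major version of a single class file (the class name, argument handling, and output format are my own choices):

```java
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class ClassVersion {
    public static void main(String[] args) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
            int magic = in.readInt();            // should be 0xCAFEBABE
            int minor = in.readUnsignedShort();  // minor version
            int major = in.readUnsignedShort();  // 50 = Java 6, 51 = Java 7, 52 = Java 8
            if (magic != 0xCAFEBABE) {
                System.err.println(args[0] + " is not a class file");
            } else {
                // Major version 44 + n corresponds to Java n (e.g. 50 -> Java 6)
                System.out.println(args[0] + ": major " + major + " (Java " + (major - 44) + ")");
            }
        }
    }
}
```

Run it as java ClassVersion Foo.class; combining it with find covers a whole tree.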

Java encoding of source files

I got a problem with some Java source files which javac couldn’t compile, because the file encoding was incorrect. The source files included some Swedish UTF-8 characters.

unmappable character for encoding ASCII

So I checked what my ant environment looked like:

$ ant -diagnostics|grep encoding
file.encoding.pkg :
sun.jnu.encoding : ANSI_X3.4-1968
file.encoding : ANSI_X3.4-1968 : UnicodeLittle

Definitely not the UTF-8 encoding I desired.
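A quick way to see which default charset the JVM itself picks up is to print it (a minimal sketch; alternatively, setting the encoding attribute on ant’s <javac> task avoids depending on the locale at all):

```java
import java.nio.charset.Charset;

public class EncodingCheck {
    public static void main(String[] args) {
        // javac and ant fall back to these when no explicit -encoding is given
        System.out.println("file.encoding   : " + System.getProperty("file.encoding"));
        System.out.println("default charset : " + Charset.defaultCharset());
    }
}
```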

I checked my environment:

$ locale
locale: Cannot set LC_ALL to default locale: No such file or directory

And then:

$ locale -a

Notice that sv_SE.UTF-8 is missing from the output.

$ sudo locale-gen sv_SE.UTF-8

$ sudo dpkg-reconfigure locales

Check java file encoding again:

$ ant -diagnostics|grep encoding
file.encoding.pkg :
sun.jnu.encoding : UTF-8
file.encoding : UTF-8 : UnicodeLittle

I’m running Ubuntu 12.04 Server, and the previous command worked fine the other day. I guess some Ubuntu update broke my environment.