By looking at the source code, I figured out that GnuPG (gpg)
creates and verifies uid-to-key binding signatures as follows:
first, it repackages the UID into a 0xB4 byte, followed by a
four-byte big-endian length of the UID, and then the UID itself.
It then concatenates the public key packet with this blob and
calculates the signature over the result.
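To be concrete, here is a sketch (in Python, with a toy, made-up key packet) of the hash input as I understand gpg to build it; the real code of course also hashes the signature fields themselves, which I omit here:

```python
import hashlib

def gpg_uid_hash_input(pubkey_packet: bytes, uid: bytes) -> bytes:
    """Data gpg appears to hash for a uid binding signature:
    the public key packet, then 0xB4, a four-octet big-endian
    UID length, then the UID itself."""
    header = bytes([0xB4]) + len(uid).to_bytes(4, "big")
    return pubkey_packet + header + uid

# Toy example -- not a real public key packet:
data = gpg_uid_hash_input(b"\x99\x00\x03KEY", b"Alice <alice@example.org>")
digest = hashlib.sha1(data).hexdigest()
```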

If I do this, gpg will happily import the key. However, if I follow
RFC 2440 to the letter and simply calculate the signature over the
concatenation of the public key packet and the UID packet in whatever
(valid) form it is, gpg complains about an invalid self-signature.
What the hell?
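For comparison, this is the literal reading I tried, sketched the same way: hash the public key packet followed by the UID packet as stored, including its own packet header (here I use an old-format header with a one-octet length; a four-octet-length header would be equally valid per the RFC):

```python
def rfc_literal_hash_input(pubkey_packet: bytes, uid: bytes) -> bytes:
    """My literal reading of the RFC: concatenate the public key
    packet with the complete UID packet, header included.  An
    old-format user ID packet with a one-octet length happens to
    start with 0xB4 as well."""
    uid_packet = bytes([0xB4, len(uid)]) + uid
    return pubkey_packet + uid_packet
```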

First off, I don't understand the 0xB4 byte, which does NOT correspond
to an old-style UID packet header with a four-byte length. That would be
0xB6, which is actually how gpg saves the keys it generates (and is also
a waste of three bytes, if you ask me). Please take a look at the
hash_uid() function in sign.c.

Secondly, I don't understand why gpg doesn't use one-byte lengths where
the UID is short enough for that (which is the case for the overwhelming
majority of them).
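For reference, an old-format packet header byte is 0x80 | (tag << 2) | length_type (RFC 2440, section 4.2), so for the user ID tag (13) the arithmetic works out like this:

```python
TAG_USER_ID = 13

def old_format_header_byte(tag: int, length_type: int) -> int:
    # Old-format header: bit 7 set, bit 6 clear, bits 5-2 the
    # packet tag, bits 1-0 the length type.
    return 0x80 | (tag << 2) | length_type

one_octet = old_format_header_byte(TAG_USER_ID, 0)   # 0xB4
four_octet = old_format_header_byte(TAG_USER_ID, 2)  # 0xB6
```

In other words, as a packet header byte 0xB4 would announce a one-octet length, yet gpg pairs it with a four-octet length when hashing.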

Could someone please enlighten me: is this a typo that makes gpg
violate RFC 2440 and leaves a vast array of keys out there broken, or
am I missing something?

Thanks in advance!