Category Archives: Crypto

Test Your Server for the Poodle Vulnerability

The latest crypto issue to hit is the “POODLE” attack. A good explanation of POODLE can be found on the OpenSSL website.

The simplest way to test your server is an online scanner at .
Scanning my server produced the following report:
Scan results ( - Vulnerable

This server supports the SSL v3 protocol.

But how does one test a server that is not online or exposed to the world? OWASP’s website has some great information on how to extract a server’s SSL/TLS information.
I ran a slightly modified version of the test (without the certificate information dump) against the server in question. It’s a bit verbose, but let’s look at the output:
nmap --script ssl-enum-ciphers -p 443

Starting Nmap 6.40 ( ) at 2014-10-28 15:20 EDT
Nmap scan report for (
Host is up (0.037s latency).
Other addresses for (not scanned):
rDNS record for
443/tcp open https
| ssl-enum-ciphers:
|  SSLv3:
|  ciphers:
|   TLS_ECDHE_RSA_WITH_RC4_128_SHA - strong
|   TLS_RSA_WITH_AES_128_CBC_SHA - strong
|   TLS_RSA_WITH_AES_256_CBC_SHA - strong
|   TLS_RSA_WITH_RC4_128_MD5 - strong
|   TLS_RSA_WITH_RC4_128_SHA - strong
|  compressors:
|  TLSv1.0:
|  ciphers:
|   TLS_ECDHE_RSA_WITH_RC4_128_SHA - strong
|   TLS_RSA_WITH_AES_128_CBC_SHA - strong
|   TLS_RSA_WITH_AES_256_CBC_SHA - strong
|   TLS_RSA_WITH_RC4_128_MD5 - strong
|   TLS_RSA_WITH_RC4_128_SHA - strong
|  compressors:
|  TLSv1.1:
|  ciphers:
|   TLS_ECDHE_RSA_WITH_RC4_128_SHA - strong
|   TLS_RSA_WITH_AES_128_CBC_SHA - strong
|   TLS_RSA_WITH_AES_256_CBC_SHA - strong
|   TLS_RSA_WITH_RC4_128_MD5 - strong
|   TLS_RSA_WITH_RC4_128_SHA - strong
|  compressors:
|  TLSv1.2:
|  ciphers:
|   TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 - strong
|   TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 - strong
|   TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 - strong
|   TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 - strong
|   TLS_ECDHE_RSA_WITH_RC4_128_SHA - strong
|   TLS_RSA_WITH_AES_128_CBC_SHA - strong
|   TLS_RSA_WITH_AES_128_CBC_SHA256 - strong
|   TLS_RSA_WITH_AES_128_GCM_SHA256 - strong
|   TLS_RSA_WITH_AES_256_CBC_SHA - strong
|   TLS_RSA_WITH_AES_256_CBC_SHA256 - strong
|   TLS_RSA_WITH_AES_256_GCM_SHA384 - strong
|   TLS_RSA_WITH_RC4_128_MD5 - strong
|   TLS_RSA_WITH_RC4_128_SHA - strong
|  compressors:
|_  least strength: strong

Nmap done: 1 IP address (1 host up) scanned in 2.39 seconds
While it’s a very long list of supported ciphers, the culprits are the ones that offer a *_CBC_* block cipher under SSLv3. If you find any of those, your server is vulnerable.
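
As a rough illustration, that check can be automated. The sketch below (my own helper, not part of nmap) scans saved `ssl-enum-ciphers` output and flags CBC ciphers offered under SSLv3:

```ruby
# Flags the POODLE-relevant lines in saved nmap ssl-enum-ciphers output:
# CBC ciphers listed under the SSLv3 section.
def poodle_suspects(nmap_output)
  in_sslv3 = false
  nmap_output.each_line.with_object([]) do |line, suspects|
    # Track which protocol section we are in.
    in_sslv3 = true  if line =~ /^\|\s+SSLv3:/
    in_sslv3 = false if line =~ /^\|\s+TLSv1/
    # Any CBC cipher offered under SSLv3 marks the server as vulnerable.
    suspects << line[/TLS_\S+/] if in_sslv3 && line.include?('_CBC_')
  end
end

sample = <<~OUT
  |  SSLv3:
  |  ciphers:
  |   TLS_RSA_WITH_AES_128_CBC_SHA - strong
  |   TLS_RSA_WITH_RC4_128_SHA - strong
  |  TLSv1.0:
  |   TLS_RSA_WITH_AES_128_CBC_SHA - strong
OUT

puts poodle_suspects(sample).inspect  # => ["TLS_RSA_WITH_AES_128_CBC_SHA"]
```

Note that the same cipher under TLSv1.0 is not flagged; POODLE is specific to SSLv3’s padding handling.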

Crypto and Security Brain Teaser

Assuming this code works, what is wrong with the functionality from a security and crypto perspective?

#!/usr/bin/env ruby
# This program encrypts and decrypts messages at the command line.
# It runs setuid root, so that it can be used by users without giving
# them access to the (root-owned) secret encryption key.

require 'openssl'

cipher = OpenSSL::Cipher.new('aes-256-ecb')

case ARGV.shift
when 'encrypt'
  cipher.encrypt
when 'decrypt'
  cipher.decrypt
else
  puts "Usage: $0 [encrypt|decrypt] "
  exit 1
end

input  = File.open(ARGV.shift)
output = File.open(ARGV.shift, "w")

input.each_line do |l|
  output.write(cipher << l)
end

Here are a few hints…
I found 4 crypto-related problems and one security/privilege-escalation issue.
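
One more hint, in runnable form: ECB mode is deterministic, which the following sketch demonstrates (demo key only, not the program’s real key):

```ruby
require 'openssl'

# AES in ECB mode encrypts identical plaintext blocks to identical
# ciphertext blocks, so patterns in the input leak straight through
# into the ciphertext.
def ecb_encrypt(key, data)
  cipher = OpenSSL::Cipher.new('aes-256-ecb')
  cipher.encrypt
  cipher.key = key
  cipher.update(data) + cipher.final
end

key  = "\x00" * 32                 # demo key only
line = 'attack at dawn!!'          # exactly one 16-byte AES block
ct   = ecb_encrypt(key, line * 2)  # two identical plaintext blocks

# The two ciphertext blocks are byte-for-byte identical:
puts ct[0, 16] == ct[16, 16]  # => true
```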

More Bugs in OpenSSL – DTLS Packet Injection

Another round of OpenSSL vulnerabilities was published on June 5th, so I ended up spending a chunk of my weekend going over the diffs to make sure they did things right. They have made errors before with some of their fixes.
One of the vulnerabilities piqued my interest:
DTLS invalid fragment vulnerability (CVE-2014-0195)
A buffer overrun attack can be triggered by sending invalid DTLS fragments
to an OpenSSL DTLS client or server. This is potentially exploitable to
run arbitrary code on a vulnerable client or server.

There is a great writeup on why the code failed that can be found here.
So, I grabbed the latest and prior versions of the code and loaded up the diffs so I could see the impact.
Now, before I go too far, it is probably important that you familiarize yourself with the blog post “OpenSSL is written by monkeys.”
Digging through the code, it is easy to sympathize with that belief. From d1_both.c:
if (item == NULL)
    goto err;
    i = -1;    <<===== ?????????????????????????????
}
Yes, the above was fixed but this was in the code base how long???

Anyway, looking at the actual fix, the following message-length check was added. From d1_both.c:
unsigned long frag_len = msg_hdr->frag_len, max_len;
  frag = (hm_fragment*) item->data;
  if (frag->msg_header.msg_len != msg_hdr->msg_len)  <<===== THIS CHECK WAS ADDED
  {
    item = NULL;
    frag = NULL;
    goto err;
  }
}
Don’t even get me started on the convention used to declare these variables. Yes, this is still in the code! The new check validates that the total message length claimed by the incoming fragment matches the length claimed by the prior fragment.
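
To make the logic of the fix concrete, here is a rough Ruby sketch (names and structure mine, not OpenSSL’s) of what the reassembly check enforces:

```ruby
# Once a reassembly buffer has been allocated from the first fragment's
# claimed total message length, every later fragment for that message
# must claim the same total length -- otherwise its data could be
# written past the end of the allocated buffer.
Fragment = Struct.new(:msg_seq, :msg_len, :frag_off, :frag_len, :data)

def accept_fragment?(buffers, frag)
  existing = buffers[frag.msg_seq]
  # The fix (CVE-2014-0195): reject fragments whose claimed total
  # message length disagrees with the buffer already allocated.
  return false if existing && existing[:msg_len] != frag.msg_len
  buffers[frag.msg_seq] ||= { msg_len: frag.msg_len, data: "\x00" * frag.msg_len }
  true
end

buffers = {}
first = Fragment.new(1, 100, 0, 50, 'a' * 50)
evil  = Fragment.new(1, 4096, 50, 4046, 'b' * 4046)  # lies about the total length

puts accept_fragment?(buffers, first)  # => true
puts accept_fragment?(buffers, evil)   # => false (would have overrun the 100-byte buffer)
```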
Digging further into the code we can find some interesting gems:

i = s->method->ssl_read_bytes(s, SSL3_RT_HANDSHAKE, devnull, sizeof(devnull), 0);

Why??? Why would anyone program this??? sizeof(devnull)?? This is the code that monkey legends are made of. Yes, this is still in the code!

As for the exploit: an attacker can inject a DTLS packet fragment (it’s pretty trivial to inject DTLS packets) into an existing stream with a length larger than the prior fragment declared, causing a write of data across the heap. Since OpenSSL uses pointers to functions in structs for overloading all over the place (monkeys?), there is a slight chance it could be exploited to run arbitrary code. I say slight because the attacker would need to figure out where the malloc for the data packet aligned in the heap; with the way OpenSSL is constantly allocating memory, and with other incoming packets, this would be hard to exploit. It would be trivial to cause a system crash.

The monkeys reference is harsh on the OpenSSL folks, who have been nothing but courteous and helpful in all my interactions, but I must note that the programmer responsible for this bug also made the Heartbleed commit. At least it made for an interesting weekend...

Adobe Password Breach – Did They Really Do It Wrong?

A few months back, attackers got the email addresses and passwords of 130 million Adobe users. Adobe encrypted the passwords and was ridiculed for not hashing the values (considered best practice), but were they wrong?
As of late, I have been digging into password breaches, hashing, etc. I have been trying to grab as many of the breached password datasets as possible (please let me know if you have any) and came across the leaked Adobe passwords. Normally, passwords are hashed: a one-way transformation of the password is what gets stored on the system.

HashFunction(Password) -> PasswordHash

When the user authenticates, the provided password is hashed in the same manner and the hashed value is checked against the stored value for a match. Passwords are stored like this because if the hashed value is lost, there is no way to “decrypt” the data and get back to the original password. The first problem is that if two users have the same password, they will produce the same hashed value, so each user is given a unique salt value that is concatenated with the password before hashing.

HashFunction(concatenateString(Salt, Password)) -> PerUserUniquePasswordHash
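
A quick sketch of the effect of the salt (illustrative code, not any particular system’s implementation):

```ruby
require 'digest'
require 'securerandom'

# The same password hashed under two different salts yields two
# unrelated stored values, so equal passwords no longer stand out
# in a leaked database.
def salted_hash(salt, password)
  Digest::SHA256.hexdigest(salt + password)
end

salt_a = SecureRandom.hex(8)
salt_b = SecureRandom.hex(8)

puts salted_hash(salt_a, '123456') == salted_hash(salt_b, '123456')  # => false
```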

So, when lists of these hashes are breached, how do attackers get the passwords? Classically, the hashing mechanisms were a single pass of algorithms like MD5 or SHA. The attackers hash a large number (trillions) of possible password combinations to see if they can find a match. The classic hash functions are computationally simple, and computing power in Graphics Processing Units (GPUs), aka graphics cards, has reached the point that it is not very expensive to build a machine that can check 60+ billion hashes per second.
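
A minimal sketch of that brute-force loop (toy wordlist, single-pass unsalted MD5):

```ruby
require 'digest'

# Hash each candidate password and compare it against the leaked value.
# Real attackers do this billions of times per second on GPUs.
def crack(leaked_hash, candidates)
  candidates.find { |pw| Digest::MD5.hexdigest(pw) == leaked_hash }
end

leaked   = Digest::MD5.hexdigest('letmein')   # stand-in for a value from a dump
wordlist = %w[123456 password letmein qwerty]

puts crack(leaked, wordlist)  # => "letmein"
```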
So, to help thwart this problem, systems will iterate the hash function several thousand times, increasing the cost of each guess.

IterationCount*HashFunction(concatenateString(Salt, Password)) -> PerUserUniquePasswordHash
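
In practice this iteration is usually done with a construction like PBKDF2; a sketch with illustrative salt and iteration-count values:

```ruby
require 'openssl'

# PBKDF2 applies the underlying HMAC `iterations` times, multiplying
# the attacker's cost per guess by the iteration count.
def slow_hash(password, salt, iterations)
  OpenSSL::PKCS5.pbkdf2_hmac(password, salt, iterations, 32,
                             OpenSSL::Digest.new('SHA256'))
end

salt = 'per-user-salt'                       # illustrative value
fast = slow_hash('123456', salt, 1)
slow = slow_hash('123456', salt, 100_000)

puts fast == slow  # => false: the iteration count is baked into the result
```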

Another technique (e.g., scrypt) is to use algorithms that are computationally hard for GPUs, usually by requiring lots of memory – although scrypt parameters must be chosen carefully to keep it GPU-hard, but that is for another post. Even with these computationally hard algorithms, targeted attacks can still break passwords; again, that’s for another post.

So, back to the original premise! Adobe had a breach where the attackers gained access to and published the email addresses, passwords, and password hints for 130 million of their users. Adobe did something “bad”: they encrypted the passwords (as opposed to hashing them), so the values can all be reversed/decrypted. The security community ridiculed Adobe for storing the passwords in a reversible format. But was Adobe wrong?

Now, I need to point something out. Usually, when hashed password databases are leaked, researchers end up breaking 70%+ of them through various brute-force methods; thus far, almost no Adobe passwords have been broken. The only way to break Adobe’s leaked passwords outright is to figure out the key used to encrypt them which, thus far, is not publicly known. I say “almost” because some have been “broken” through data inference, but not by decrypting.
Let’s look at some of their mistakes:

1. Along with the passwords were also listed the password hints that, in many cases, contained the passwords themselves. These should have also been encrypted.

So it can be assumed that the encrypted value “EQ7fIpT7i/Q” corresponds to 123456; as a result, the person with the above .net password probably uses 123456 at other sites.

2. The individual passwords were not salted (in this case no IV) so users with the same password had the same encrypted value.

Raw data from the Adobe breach file:

Both of these users have the same hint of 123456 and the same encrypted value. A grep for ‘EQ7fIpT7i/Q’ (the encrypted string) piped into wc -l shows 1,911,938 matches.

3. Adobe was also using an eight-byte block cipher (probably DES), which encrypts the data in eight-byte blocks. Because they did not use a salt, the first eight bytes of the encrypted text for password and password123 would be the same.

Value          DES ECB Encrypted Value in Hex
password1234   0x8a65e0e80532b5fa bf67cab8afccfa27
P@55w0Rd1234   0xfdf058ce01785b90 bf67cab8afccfa27
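
The same effect can be reproduced with AES-128-ECB (DES is disabled in many modern OpenSSL builds, so this sketch uses AES’s 16-byte blocks and an illustrative key):

```ruby
require 'openssl'

# Under ECB, two plaintexts that share a block produce an identical
# ciphertext block at that position -- exactly the pattern visible
# in the Adobe dump above.
def ecb_blocks(key, plaintext)
  c = OpenSSL::Cipher.new('aes-128-ecb')
  c.encrypt
  c.key = key
  (c.update(plaintext) + c.final).scan(/.{16}/m)  # split into 16-byte blocks
end

key = 'sixteen byte key'                           # demo key only
a = ecb_blocks(key, 'password10000000' + '1234' * 4)
b = ecb_blocks(key, 'P@55w0Rd00000000' + '1234' * 4)

puts a[0] == b[0]  # => false: the first blocks differ
puts a[1] == b[1]  # => true: the shared second block encrypts identically
```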

Looking at all the Adobe password hints beginning with the string ‘password’ reveals some interesting patterns in the encrypted values. I added what I believe the cleartext values to be:

Count  Encrypted Value          Suspected Cleartext
  521  EQ7fIpT7i/Q              123456
  662  pIjmc+3/mLzioxG6CatHBw   pa55word
  726  STWrgIvDDp3ioxG6CatHBw   P@ssw0rd
  729  ygtKdMXm1tHioxG6CatHBw   drowssap
 1466  IbF1vGcYjCrioxG6CatHBw   passw0rd
 2726  L8qbAD3jl3jSPm/keox4fA   password1
 5890  L8qbAD3jl3jioxG6CatHBw   password
  102  g+/hUkh3HrbioxG6CatHBw   Password

So, the next question becomes: if the algorithm is using an eight-byte boundary, why do all these passwords of length eight have encrypted text ending in ‘ioxG6CatHBw’?

Doing a grep for password hints containing ‘1-8’ resulted in:

1484 j9p+HwtWWT/ioxG6CatHBw 12345678

I can only assume the password system was including a null or some other extra character. Searching for the string ‘ioxG6CatHBw’ turns up 36,045,481 occurrences, so it can be assumed that these passwords all have a string length of eight. Testing the extra-character theory further, a search for password hints containing the string ‘1-7’ results in 2,343 occurrences, with the majority having the encrypted string ‘dQi0asWPYvQ’. Digging further, there are 124,253 passwords with the exact string ‘dQi0asWPYvQ’; no passwords had it as a partial match. If the extra character is a null, then it would not be typeable.
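
If the scheme used PKCS#5-style padding rather than a single null character, the arithmetic would match the observations above; this sketch is my assumption, not something confirmed from the data:

```ruby
# With PKCS#5-style padding, an input that exactly fills a block gets
# a whole extra block of padding appended before encryption -- which
# would explain why every 8-character password picks up an identical
# trailing ciphertext block.
def padded_length(plaintext, block_size = 8)
  pad = block_size - (plaintext.length % block_size)  # always 1..block_size
  plaintext.length + pad
end

puts padded_length('1234567')   # => 8  (one block)
puts padded_length('12345678')  # => 16 (a full block forces a second, all-padding block)
```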

So they did not properly normalize their passwords; I am sure that made for great portability.

Most of Adobe’s problems were crypto-related – doing cryptography right is hard and they failed. Even with this failure, they still ended up with better protection than regular hashing would have provided, because the attackers did not get the symmetric key with the data. Could they have done better? Without a doubt; but had they followed industry best practices, many more of the passwords would have been broken by now. If the attackers had obtained the key along with the dataset, then it would have been trivial to get the cleartext values of all the passwords.

DTLS Packet Structure and Fields

I’ve been digging into the DTLS packet structure, looking for free bytes for some security-related ideas I have been playing with.
The following is the DTLS packet structure:
| Type | Version | Epoch | Sequence Number | Length | IV | Data | MAC | Padding |
Type (1 byte); Version (2 bytes)
Epoch (2 bytes) – incremented on each rekey
Sequence Number (6 bytes) – incremented per packet
Length (2 bytes) = IV + Data + MAC + Padding
IV = Initialization Vector – the randomizer/seed used by encryption
Encrypted section: Data, MAC, Padding
MAC uses the epoch and sequence number as its sequence number (similar to an IV)
Padding added to reach the cipher block size (16 for AES, 8 for DES)
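
The header fields above can be sketched as a 13-byte pack (field sizes per RFC 6347; the helper name is mine):

```ruby
# Packs the 13-byte DTLS record header: type (1), version (2),
# epoch (2), sequence number (6), length (2).
def dtls_record_header(type, version, epoch, seq_num, length)
  # No 6-byte pack directive exists, so pack the sequence number as
  # 8 bytes big-endian and drop the two high-order bytes.
  [type, version, epoch].pack('Cnn') +
    [seq_num].pack('Q>')[2, 6] +
    [length].pack('n')
end

hdr = dtls_record_header(22, 0xfefd, 0, 1, 47)  # handshake record, DTLS 1.2
puts hdr.bytesize  # => 13
```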
According to the RFCs, the type field supports the same types as the TLS version – this basically translates to:
SSL3_RT_CHANGE_CIPHER_SPEC 20 (x’14’)
SSL3_RT_ALERT 21 (x’15’)
SSL3_RT_HANDSHAKE 22 (x’16’)
SSL3_RT_APPLICATION_DATA 23 (x’17’)

Back to the DTLS packet structure… The encrypted portion of the packet is the Data, Mac and Padding fields. Hence, when a packet is decrypted, the padding is checked first, then the MAC for data authenticity. This is a design flaw in TLS/DTLS allowing for Padding Oracle Attack if CBC block mode encryption is used.