Cryptographic competitions


Features of various secret-key primitives

This web page lists features that have been advertised by, or requested from, designers of primitives for secret-key cryptography. The features are described in five categories:

  • Protection against a variety of attack goals. Simple example: a MAC protects against corruption but does not protect against espionage. Advanced example: every short MAC allows a noticeable probability of a successful forgery, but WMAC (2009 Black–Cochran) advertises a large reduction in the number of forgeries created by this success.
  • Protection against various dimensions of attacker resources. Simple example: some ciphers are safe against an attacker carrying out 2^64 bit operations but are not safe against an attacker carrying out 2^128 bit operations. Advanced example: many ciphers degrade rapidly in security as they handle more and more messages, in some cases becoming easily breakable around 2^40 messages, while other ciphers advertise safety for many more messages.
  • High performance in various metrics. Simple example: AES-128 runs in under 20 cycles/byte on an ARM Cortex A8. Advanced example: PRINCE encrypts 1Gbps using 3286 GEs with the UMC 90nm low-leakage Faraday library.
  • High performance in various dimensions of usage. Simple example: many authenticated ciphers reject forgeries more quickly than they verify and decrypt valid ciphertexts. Advanced example: some ciphers perform well even for extremely short inputs.
  • Support for cryptanalysis.

Designers setting goals for new primitives, and in particular for authenticated ciphers for the CAESAR competition, should keep in mind that the entries in this list vary widely in importance. There is no guarantee that the CAESAR selection committee will place any value upon any particular features; some features may even be controversial and viewed negatively. The list is organized thematically, not as any attempt to indicate importance.

It seems unlikely that a single design can simultaneously provide every feature (i.e., every qualitative feature and excellent results for every quantitative feature). Each designer is therefore expected to select some features as goals, and should be prepared to explain the merits of this selection. Of course, a design that sets its goals too narrowly might turn out to be of less interest to the community than a more ambitious design!

The following documents provide more perspective on the features of interest to various applications:

Attack goals

Each attack goal here corresponds to a potential feature of a secret-key primitive; the feature is that the primitive stops the attacker from reaching this goal.

Plaintext corruption, associated-data corruption, message-number corruption. An authenticated cipher produces as output an authenticated ciphertext. This ciphertext encrypts and authenticates a plaintext; it also authenticates, without encrypting or communicating, some associated data; it also authenticates a message number. The attacker's goal here is to forge a combination of plaintext, associated data, and message number that the receiver accepts but that the legitimate sender never encrypted. "INT-PTXT" (integrity of plaintexts) means protection against such attacks.

The attacker's chance of successfully forging at least one message (out of many forgery attempts) does not dictate the number of messages that the attacker successfully forges. One can separately consider, for each n, the probability that the attacker successfully forges at least n messages.

Ciphertext corruption. The attacker's goal here is to forge an authenticated ciphertext that the receiver accepts but that the legitimate sender never produced. "INT-CTXT" (integrity of ciphertexts) means protection against such attacks.

Integrity of plaintexts does not imply integrity of ciphertexts. Consider, for example, an authenticated cipher that pads a 112-bit MAC randomly to 128 bits. The extra 16 bits are malleable: an attacker can modify them to produce another ciphertext that will be accepted. This modification violates integrity of ciphertexts but does not violate integrity of plaintexts.

Integrity of plaintexts implies integrity of ciphertexts for any cipher that has only one possible ciphertext for each combination of plaintext, associated data, and message number. Conversely, integrity of ciphertexts always implies integrity of plaintexts.

Ciphertext prediction. The attacker's goal here is to distinguish authenticated ciphertexts from uniform random strings. Note that strong MACs are not necessarily hard to predict (i.e., strong MAC does not imply strong PRF); for example, zero-padding a 127-bit MAC to 128 bits preserves MAC strength but makes the final bit trivially predictable.

Replay and reordering. Replay means convincing a receiver to accept a plaintext more times than the number of times that the plaintext was generated by the sender. Reordering means convincing a receiver to accept plaintexts in an order different from the order in which the plaintexts were generated by the sender.

The standard defense against both replay and reordering is for the sender to use strictly increasing message numbers, and for the receiver to refuse any message whose message number is no larger than the largest message number of any previously verified message. This requires both the sender and the receiver to keep state. The receiver can tolerate some accidental network reordering of ciphertexts, for example by remembering the largest 5 verified message numbers.
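As an illustration, here is a minimal sketch (in Python, with an arbitrary window size and a deliberately simple data structure) of the windowed variant: the receiver remembers the few largest verified message numbers, rejects replays of those, and rejects anything that has fallen out of the window.

    # Sketch of receiver-side replay/reordering protection. The authenticator
    # is assumed to have been verified already; this code only decides whether
    # the message number is acceptable.
    WINDOW = 5    # how many of the largest verified message numbers to remember

    class ReplayFilter:
        def __init__(self):
            self.seen = set()     # verified message numbers still inside the window
            self.highest = -1     # largest verified message number so far

        def accept(self, msgnum):
            """Return True if msgnum is fresh, and record it as verified."""
            if msgnum in self.seen:
                return False                        # replay of a verified message
            if msgnum <= self.highest - WINDOW:
                return False                        # too old: fell out of the window
            self.seen.add(msgnum)
            self.highest = max(self.highest, msgnum)
            # forget message numbers that have fallen out of the window
            self.seen = {n for n in self.seen if n > self.highest - WINDOW}
            return True

With this sketch, messages numbered 1, 3, 2 are all accepted despite arriving out of order, while a second copy of any of them is rejected.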

Perhaps there are better ways to integrate replay protection and reordering protection into an authenticated cipher.

Sabotage. The attacker's goal here is denial of service, a violation of availability: preventing the sender's legitimate data from arriving at the receiver. Typical sabotage techniques include flooding a radio link with noise at the hardware level, flooding a network with useless packets at the software level, or flooding a CPU with expensive computations.

The standard defense against sabotaged ciphertexts is to acknowledge valid ciphertexts and retransmit unacknowledged ciphertexts. The only traditional availability evaluation for secret-key primitives is the performance of discarding forgeries. But perhaps there is more that secret-key cryptography can do to protect availability.

Plaintext espionage. The attacker's goal here is to figure out the user's secret message. Of course, the attacker can simply guess the message, and this guess has a good chance of succeeding if the space of messages, or of likely messages, is small; but perhaps the attacker can do significantly better by inspecting the user's ciphertext.

Message-number espionage. The attacker's goal here is to figure out a secret message number.

The traditional view is that message numbers are not secret. An authenticated cipher uses the message number as input, and authenticates the message number, but does not encrypt or communicate the message number. Adding secrecy to message numbers requires changing this data flow: the authenticated ciphertext expands to include (and perhaps to generate) the message number.

In many applications, message numbers actually have two parts: a session identifier that needs to be communicated only once, and an identifier for a message within a session. An authenticated cipher that includes message numbers in ciphertexts can save bandwidth by not repeating the session number.

Attack resources

Attackers vary in several different dimensions of resources. Each resource or combination of resources here corresponds to a potential cipher feature; the feature is that the cipher stops an attacker who has those resources.

Some of these resources are, at least in theory, controllable by the legitimate cryptographic users. Designers often simplify and accelerate designs by requiring users to control these resources (e.g., "switch keys after 2^20 messages"; "never reuse a message number under the same key"). Other designers argue that eliminating those requirements adds robustness.

Extensive computation. The key sizes that currently attract the most interest are 80 bits, 128 bits, and 256 bits. The most common questions are (1) whether 80-bit keys are adequate and (2) whether 128-bit keys are adequate.

The main arguments for smaller keys, such as 80-bit or even 64-bit keys, are that

  • the attack cost outweighs the economic benefit of breaking a key, so no sensible attacker will carry out the attack, so there is no actual loss of security; and
  • smaller keys provide better performance.

The main arguments for larger keys are that

  • the attacker's cost-benefit ratio is lowered by multiple-user attacks, multiple forgeries, etc.;
  • some attackers carry out feasible attacks even when those attacks are not economically rational; and
  • larger keys provide adequate performance.

Designers evaluating feasibility may find the following rough figures useful. The Earth's atmosphere receives 2^57 watts from the Sun. Current world power usage is 1/2^13 of this total. One computer center costing 2^30 dollars consumes 1/2^18 of current world power usage. Readily available chips (GPUs) perform 2^68 bit operations per year per watt, if the bit operations are collected into floating-point operations and are sufficiently parallelizable.
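As a rough illustration, the figures above can be combined into a back-of-the-envelope estimate of how many bit operations one such computer center performs per year (the variable names are purely illustrative):

    # Back-of-the-envelope arithmetic using the rough figures above.
    sun_watts = 2**57                        # solar power reaching Earth's atmosphere
    world_watts = sun_watts // 2**13         # current world power usage: about 2^44 W
    center_watts = world_watts // 2**18      # one 2^30-dollar computer center: about 2^26 W
    bitops_per_watt_year = 2**68             # GPU-style bit operations per watt-year

    bitops_per_year = center_watts * bitops_per_watt_year
    print("about 2^%d bit operations per year" % (bitops_per_year.bit_length() - 1))
    # prints: about 2^94 bit operations per year

Under these crude figures, such a center carries out roughly 2^94 bit operations per year, which is in the general vicinity of a brute-force search of an 80-bit key space (at a few thousand bit operations per key trial) but is nowhere near 2^128.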

Quantum computers. A scalable quantum computer running Grover's algorithm will be able to find a 128-bit key in only 2^64 simple quantum operations. A few ciphers (e.g., the Blum–Micali family of "provably secure" ciphers using modular exponentiation) will be much more heavily affected by quantum computation.

Many messages. Ciphers vary in how their security degrades as the number of legitimate messages increases.

Chosen plaintexts, chosen ciphertexts, chosen message numbers. Some ciphers degrade in security against active attackers who forge ciphertexts or who have some influence over plaintexts or message numbers. The standard view is that it is unacceptable to ask users to control this: all ciphers must be safe against chosen-plaintext attacks and against chosen-ciphertext attacks.

Many users. Ciphers vary in how their security degrades as the number of active keys increases.

Repeated message numbers. Many ciphers require the message numbers for each key to be nonces, and lose all security if the user ever repeats a message number (i.e., if the user uses an ntwice in place of a nonce; often an ntwice is called a "repeated nonce", which is a contradiction in terms). Some ciphers advertise higher security in this situation.

The standard argument for requiring nonces is that there is no way to hide repeated plaintexts that have repeated message numbers; users have to expect that a repeated plaintext with a repeated message number will produce a repeated ciphertext, leaking the plaintext repetition to the attacker. However, for many ciphers the consequences of an ntwice are much more severe, not just leaking whether the plaintexts are the same but (for example) allowing any number of selective forgeries; users are often surprised by this, even if it is documented.

There are several different suggestions for security targets when message numbers are repeated. The highest target is maintaining full security, as if message numbers had been nonces, except that each ntwice leaks whether the two plaintexts are the same. Reaching this target is often perceived to be a performance problem (for example, it requires the entire plaintext to be buffered), so some ciphers aim for lower security levels. For example, some authenticated ciphers leak the number of shared initial blocks of plaintext, and perhaps also the xor of the first non-shared block, but still do not allow forgeries; some authenticated ciphers allow forgeries under the repeated message number, but still do not allow forgeries under any subsequent message number.

Software side channels. Many implementations leak secret data through their own timing (typically via secret branches, secret memory addresses, or, on some CPUs, secret multiplication inputs) or through indirect effects on the timing of other software (typically via cache state or branch state). Every primitive can be implemented in a way that prevents these leaks on all common CPUs, but often this defense produces a drastic loss in performance, especially for ciphers that rely heavily on table access.
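To give a sense of the implementation style involved, here is a hedged sketch of two standard constant-time idioms, written in Python purely for readability; Python itself gives no timing guarantees, and a real defense would be written in C or assembly with attention to what the compiler and CPU actually do.

    # Illustrative only: the point is the structure (no secret-dependent branches,
    # no secret-dependent memory addresses), not the language.

    def ct_equal(a: bytes, b: bytes) -> bool:
        """Compare two equal-length tags without a secret-dependent early exit."""
        diff = 0
        for x, y in zip(a, b):
            diff |= x ^ y          # accumulate differences instead of branching
        return diff == 0

    def ct_select(bit: int, x: int, y: int) -> int:
        """Return x if bit is 1, else y, without branching on the secret bit."""
        mask = -bit & 0xFFFFFFFF   # all ones if bit == 1, all zeros if bit == 0
        return (x & mask) | (y & ~mask & 0xFFFFFFFF)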

Hardware side channels. Power consumption, electromagnetic radiation, etc. are typically hard to see from far away (unlike timing information, which is often easily visible from remote networks), but for a nearby attacker they are tremendously informative. Hundreds of papers have explored the cost of hardware side-channel attacks and the cost and effectiveness of various defenses for various ciphers.

Faults. Sometimes attackers can flip bits in a computation (for example, by firing a laser at a target chip), and deduce secret data from the resulting cipher output.

Thefts and monitors. Sometimes attackers steal a copy of a secret key, or plant a monitor inside a cryptographic device to watch the device's computations. There is no hope of protecting the confidentiality or integrity of future communication against a monitor that sees all of the user's secrets, but one can ask for a cipher to protect the confidentiality of past communication.

Systems that protect earlier messages against espionage, despite subsequent thefts, monitors, etc., often advertise forward secrecy. Forward secrecy requires that the cipher frequently switch keys (in a non-invertible way); this, in turn, requires key generation and key updates to be integrated into the cipher, rather than having the key as an external input.
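One common way to make the key switch non-invertible is a one-way "ratchet": derive the next key by hashing the current key, then erase the current key. A minimal sketch follows; the choice of SHA-256 and the domain-separation prefix are illustrative, not a requirement of any particular cipher.

    import hashlib

    def ratchet(key: bytes) -> bytes:
        """Derive the next key from the current key in a non-invertible way."""
        return hashlib.sha256(b"ratchet" + key).digest()

    # Sender and receiver both advance the key (say, after each message or each
    # time period) and erase the old key. A later theft of the current key does
    # not reveal earlier keys, because the hash cannot be inverted.
    k0 = bytes(32)        # placeholder initial key, for illustration only
    k1 = ratchet(k0)      # k0 can now be erased
    k2 = ratchet(k1)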

Performance metrics

Each metric here corresponds to a potential cipher feature; the feature is that the cipher performs well in the metric.

ASIC performance is typically measured in several ways:

  • Low energy (joules) per byte. Energy use is a continuing user-visible cost: it drains the battery, appears as a charge on the power bill, etc.
  • Low power (watts). The power used for cryptography, together with the power used for other operations carried out at the same time, must fit within the power supply. This limits the number of operations that can be carried out in parallel.
  • Low area (square micrometers; loosely predicted by gate equivalents). The circuit area used for cryptography, together with the area used for other operations, must fit within the chip space available. This also limits the number of operations that can be carried out in parallel.
  • High throughput (bytes per second). Applications vary in their throughput requirements.
  • Low latency (seconds; very loosely predicted by cycles). Often an application imposes upper limits on the total time between obtaining input and providing output.

If there are no limits on power or area then implementors can achieve arbitrarily high throughputs by parallelizing across blocks (e.g., for counter-mode ciphers) or by parallelizing across independent messages (for applications that have many messages to handle in parallel). It is therefore standard to compute the ratio of area to throughput, i.e., the product of area and time per byte (AT/byte). Simple forms of parallelization have negligible effect on this ratio.
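A small worked example, with purely hypothetical numbers, shows why simple parallelization leaves the ratio unchanged:

    A = 10_000             # hypothetical circuit area, in gate equivalents
    T = 100_000_000        # hypothetical throughput, in bytes per second
    at_per_byte = A / T    # area * (time per byte), i.e., area / throughput

    # Duplicating the circuit doubles both the area and the throughput,
    # so the AT/byte ratio is unchanged:
    assert (2 * A) / (2 * T) == at_per_byte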

The same measurements make sense for FPGAs. The physical FPGA area is consumed by several types of resources: slices (and, at a smaller scale, LUTs); block RAMs; and, on some FPGAs, multipliers. It is standard to count these resources instead of counting gates or actual area.

For software running on CPUs it is standard to simply count cycles per byte. The heuristic here is that an active CPU runs at a constant frequency (its maximum number of cycles per second) and consumes a constant amount of power, so the number of cycles predicts the number of seconds, the energy consumption, and the latency. Modern CPU-design trends have, however, produced an increasing dependence of frequency upon the computation being run (see, e.g., Turbo Boost and Turbo Core) and an even larger dependence of power upon the computation being run (e.g., memory access typically consumes more power than arithmetic). Presumably it will become increasingly common to measure actual energy consumption and throughput of software.
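For instance, under this constant-frequency, constant-power heuristic (with hypothetical figures), cycles/byte converts directly into throughput and energy per byte:

    cycles_per_byte = 20     # hypothetical figure, similar to the AES-128 number cited above
    frequency_hz = 1.0e9     # hypothetical 1 GHz core
    power_watts = 1.0        # hypothetical constant active power

    bytes_per_second = frequency_hz / cycles_per_byte    # 5.0e7 bytes/second
    joules_per_byte = power_watts / bytes_per_second     # 2.0e-8 joules/byte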

The metrics listed above are often highly platform-dependent: there are many different ASIC manufacturing technologies, many different FPGAs, and many different CPUs. Platforms should be expected to evolve over time as manufacturers continue improving the efficiency of their chips. Designs heavily optimized for today's platforms might turn out to be of less interest in a few years than designs that plan ahead for platforms of the future.

See eBACS, XBX, and ATHENa for much more information regarding performance of cryptography in software and hardware.

Another performance measurement for MACs and authenticated ciphers is bandwidth: an authenticator is transmitted as part of each message, and sometimes keys are transmitted. Of course, security puts a lower bound on authenticator length and key length.

Performance settings

There are several dimensions of user activity that affect the performance of a cipher under the metrics discussed above. Each combination corresponds to a potential cipher feature; the feature is that the cipher performs well when users are in that situation.

Authenticate only, or encrypt and authenticate? An authenticated cipher authenticates both plaintext and associated data, while it encrypts only the plaintext. The cost per byte of associated data is often significantly lower than the cost per byte of plaintext.

Send valid data, receive valid data, or receive invalid data? The cost to send data is not necessarily the same as the cost to receive data. The cost to receive a forgery is not necessarily the same as the cost to receive a valid message. For some MACs and many authenticated ciphers, a receiver can handle (and discard) a forgery more efficiently than it can handle (and accept) a valid message. For example, any "encrypt-then-MAC" authenticated cipher skips the cost of decryption for forgeries.
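To illustrate the encrypt-then-MAC point, here is a hedged sketch of the receiver side of a generic composition. HMAC-SHA256 is only an illustrative MAC choice, the decrypt argument stands in for whatever decryption function the cipher provides, and a real design would unambiguously encode the lengths of the concatenated fields.

    import hashlib, hmac

    def open_etm(mac_key, decrypt, nonce, associated_data, ciphertext, tag):
        """Receiver side of a generic encrypt-then-MAC composition:
        verify the tag first, decrypt only if verification succeeds."""
        expected = hmac.new(mac_key, nonce + associated_data + ciphertext,
                            hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            return None                        # forgery: discarded without any decryption work
        return decrypt(nonce, ciphertext)      # only valid ciphertexts pay for decryption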

For area measurements it is important to distinguish three different targets: an encryption/authentication circuit; a verification/decryption circuit; and a circuit that can dynamically select either encryption/authentication or verification/decryption.

Plaintext length and associated-data length. Performance depends, often heavily, on the plaintext length. Sometimes a cipher performs well for long plaintexts but poorly for short plaintexts, because it incurs heavy per-message overheads that are unnoticeable for long plaintexts. Similar comments apply to the associated-data length.

Beware that comparing overheads between two ciphers can be misleading. What the user wants to know is which cipher has better performance for the input length of interest; this has no logical connection to the question of which cipher has smaller performance differences between one input length and another input length.

Number of inputs. A cipher may be able to merge the processing of several independent inputs (if the application has several inputs to handle at once), handling them more efficiently than handling each one separately. Often this type of gain is limited by various requirements of relationships among the inputs: for example, having the same input length.

Number of times a key is used. Many ciphers can improve performance by storing precomputed "expanded keys". This incurs a cost in area, and it does not help for the first use of the key, but it reduces computation in subsequent uses of the key.

Ciphers with fast key expansion often advertise "key agility". Beware that comparing key agility between two ciphers, like comparing overhead, can be misleading; what the user wants to know is which cipher has better performance for the first use of the key.

General input scheduling. Some ciphers can reduce latency by performing computations that depend only on the key and nonce before associated data is available; and computations that depend only on the key, nonce, and associated data before plaintext is available. Another permutation has been suggested: computations that depend only on the key, nonce, and plaintext before associated data is available.

Scheduling within the plaintext, and scheduling within the ciphertext. In many applications, plaintext is received gradually, from left to right. Many ciphers can reduce latency by performing computations on earlier parts of the plaintext before later parts are available. Similar comments apply to ciphertext.

A related, more flexible, feature is "incrementality": some ciphers allow the output to be efficiently updated when any small part of the input is updated.

The latency benefit here should not be confused with an area benefit for ciphers that can handle a long plaintext in one pass without using a large buffer. Beware that security questions are raised by any authenticated cipher that handles a long ciphertext in one pass without using a large buffer: releasing unverified plaintext to applications often means releasing it to attackers and also requires an analysis of how the applications will react.

Intermediate tags. If a long plaintext is split into separate packets, each of which is separately authenticated (and encrypted), then a long forgery need not be buffered before it is rejected. Applying this split to any MAC (or authenticated cipher) produces a new MAC (or authenticated cipher) with different performance properties; perhaps the same type of fast rejection can be better achieved in another way.

Other operations. Sometimes a cipher can share area with other primitives on the same chip. For example, an integrated primitive for authenticated encryption and hashing might consume considerably less area than separate primitives for the same two tasks; this is a benefit for an application that needs both tasks (at different times). Ciphers that easily provide extra features such as hashing often advertise "flexibility".

A smaller and more common type of "flexibility" is having a family of several ciphers aimed at different applications. One can again ask how much area is consumed by multiple members of the same family. For example, AES is actually one cipher for 128-bit keys, another cipher for 192-bit keys, and another cipher for 256-bit keys; a circuit that can dynamically choose between AES-128, AES-192, and AES-256 is not much larger than a circuit supporting only AES-256.

DRAM, L2 cache, or L1 cache? Software often becomes much slower because the input, or the software itself, does not fit into L1 cache. Even when cryptographic operations fit into cache, a complete system consisting of networking, cryptography, etc. rarely fits into cache. The extra costs of cache misses are somewhat predictable from the total input size (independent of the cipher) and the total software size (dependent upon the cipher).

Support for cryptanalysis

Cryptanalysis is by far the most important tool for evaluating the security of secret-key cryptography. Designers have advertised several features as providing assistance to cryptanalysts. These features are generally difficult to evaluate objectively but are nevertheless of interest.

Simplicity. Ciphers that are simple and easy to understand are often better suited for cryptanalysis than complex ciphers.

Scalability. A typical cipher is structured as a series of similar rounds, naturally providing reduced-round targets for cryptanalysis. Some ciphers also provide reduced-word targets and other simplified versions.

Proofs. Some ciphers offer proofs that attacks meeting certain constraints are difficult, or as difficult as an easier-to-analyze problem. This suggests a division of cryptanalytic work into three parts: (1) study the easier-to-analyze problem; (2) find attacks that violate the constraints; (3) find errors in the proof. Sometimes this is useful guidance. Beware that these proofs are sometimes falsely advertised to users as "proofs of security", even though what they actually prove is something more limited.

Acknowledgments

This web page draws on discussions at the Symmetric Cryptography workshop in Dagstuhl in January 2012, the Directions in Authenticated Ciphers (DIAC) workshop in Stockholm in July 2012, and the Early Symmetric Cryptography (ESC) workshop in Mondorf in January 2013. The CAESAR secretary gratefully acknowledges suggestions from Jean-Philippe Aumasson, Eli Biham, Joan Daemen, Orr Dunkelman, Lars Knudsen, Kaisa Nyberg, David McGrew, Bart Preneel, and Greg Rose.


Version: This is version 2014.01.27 of the features.html web page.