All papers in 2009 (Page 6 of 638 results)

Last updated:  2009-03-27
A Hybrid RFID Protocol against Tracking Attacks
Jen-Chun Chang, Hsin-Lung Wu
We study the problem of how to construct an RFID mutual authentication protocol between tags and readers. Juels (in Journal on Selected Areas in Communications, 2006) posed an open question asking whether there exists a fully privacy-preserving RFID authentication protocol in which a successful authentication phase can be completed in sub-linear time $o(n)$, where $n$ is the number of tags participating in the RFID system. In this work, we answer this question positively in an amortized sense. Precisely, we design a fully privacy-preserving protocol in which the amortized cost of a successful authentication phase only requires time $O(n^{3/4})$. In addition, our protocol is a hybrid of the randomized hash lock scheme of Weis et al., the hash chain scheme of Ohkubo et al., and the efficient identification scheme of Dimitriou. We combine them delicately to obtain the desired authentication protocol.
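The hash-chain building block of Ohkubo et al. that this hybrid draws on can be sketched in a few lines. The toy Python below is only an illustration of the idea, not the paper's protocol; it assumes SHA-256 (with distinct prefixes) for the two hash functions, which the abstract does not specify. It shows why naive identification costs a linear scan over all tags' chains, the cost the paper's hybrid amortizes down to $O(n^{3/4})$.

```python
import hashlib

def H(s: bytes) -> bytes:          # state-update hash (one-way chain step)
    return hashlib.sha256(b"H" + s).digest()

def G(s: bytes) -> bytes:          # output hash sent over the air
    return hashlib.sha256(b"G" + s).digest()

class Tag:
    def __init__(self, seed: bytes):
        self.s = seed
    def respond(self) -> bytes:
        out = G(self.s)
        self.s = H(self.s)         # irreversible update: past outputs stay unlinkable
        return out

def identify(resp: bytes, seeds, max_epoch=16):
    # naive O(n * max_epoch) search over every tag's hash chain
    for tid, seed in enumerate(seeds):
        s = seed
        for _ in range(max_epoch):
            if G(s) == resp:
                return tid
            s = H(s)
    return None

seeds = [bytes([i]) * 16 for i in range(8)]   # toy system with n = 8 tags
tag3 = Tag(seeds[3])
tag3.respond()                                # tag moves along its chain
assert identify(tag3.respond(), seeds) == 3
```

An eavesdropper who sees only `G(...)` outputs cannot link two responses of the same tag, which is the privacy property; the price is the reader-side search that sub-linear constructions aim to reduce.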
Last updated:  2009-05-04
The Dark Side of Security by Obscurity and Cloning MiFare Classic Rail and Building Passes Anywhere, Anytime
Nicolas T. Courtois
MiFare Classic is the most popular contactless smart card, with about 200 million copies in circulation worldwide. At Esorics 2008, Dutch researchers showed that the underlying cipher Crypto-1 can be cracked in as little as 0.1 seconds if the attacker can access or eavesdrop on the RF communications with the (genuine) reader. We discovered that a MiFare Classic card can be cloned in a much more practical card-only scenario, where the attacker only needs to be in the proximity of the card for a number of minutes, therefore making usurpation of identity through pass cloning feasible at any moment and under any circumstances. For example, anybody sitting next to the victim on a train or on a plane is now able to clone his/her pass. Other researchers have also (independently from us) discovered this vulnerability; however, our attack requires fewer queries to the card and does not require any precomputation. In addition, we discovered that certain versions or clones of MiFare Classic are even weaker, and can be cloned in 1 second. The main security vulnerability that we need to address with regard to MiFare Classic is not about cryptography, RFID protocols or software vulnerabilities. It is a systemic one: we need to understand how vulnerable our economy is to sophisticated forms of electronic subversion, where potentially one smart card developer can intentionally (or not), but quite easily in fact, compromise the security of governments, businesses and financial institutions worldwide.
Last updated:  2009-03-30
How to Extract and Expand Randomness: A Summary and Explanation of Existing Results
Yvonne Cliff, Colin Boyd, Juan Gonzalez Nieto
We examine the use of randomness extraction and expansion in key agreement (KA) protocols to generate uniformly random keys in the standard model. Although existing works provide the basic theorems necessary, they lack details or examples of appropriate cryptographic primitives and/or parameter sizes. This has led to the large amount of min-entropy needed in the (non-uniform) shared secret being overlooked in proposals and efficiency comparisons of KA protocols. We therefore summarize existing work in the area and examine the security levels achieved with the use of various extractors and expanders for particular parameter sizes. The tables presented herein show that the shared secret needs a min-entropy of at least 292 bits (and even more with more realistic assumptions) to achieve an overall security level of 80 bits using the extractors and expanders we consider. The tables may be used to find the min-entropy required for various security levels and assumptions. We also find that when using the short exponent theorems of Gennaro et al., the short exponents may need to be much longer than they suggested.
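A back-of-envelope Leftover Hash Lemma calculation shows why the required min-entropy is far above the target security level. This sketch uses one common form of the lemma; the exact 292-bit figure in the abstract depends on the authors' specific extractor and expander choices, which this toy calculation does not reproduce.

```python
# Leftover Hash Lemma (one common form): extracting ell bits with a
# universal hash family from a source of min-entropy k gives statistical
# distance eps <= 2^{-(k - ell)/2}.  For eps <= 2^{-kappa} we therefore
# need k >= ell + 2*kappa bits of min-entropy.
def min_entropy_needed(ell_bits: int, kappa_bits: int) -> int:
    return ell_bits + 2 * kappa_bits

# Extracting a 128-bit key at 80-bit security already needs 288 bits of
# min-entropy -- the same ballpark as the paper's 292-bit figure.
print(min_entropy_needed(128, 80))  # → 288
```

The quadratic loss in the security parameter (the `2*kappa` term) is exactly what makes naive efficiency comparisons of KA protocols misleading.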
Last updated:  2009-06-03
Practical Key Recovery Attack against Secret-prefix Edon-R
Gaëtan Leurent
Edon-R is one of the fastest SHA-3 candidates. In this paper we study the security of Edon-R, and we show that using Edon-R as a MAC with the secret prefix construction is unsafe. We present a practical attack in the case of Edon-R256, which requires 32 queries, 2^30 computations, negligible memory, and a precomputation of 2^50. This does not directly contradict the security claims of Edon-R or the NIST requirements for SHA-3, but we believe it shows a strong weakness in the design.
Last updated:  2009-03-27
A First Order Recursive Construction of Boolean Function with Optimum Algebraic Immunity
Yindong Chen, Peizhong Lu
This paper proposes a first-order recursive construction of Boolean functions with optimum algebraic immunity. We also show that these Boolean functions are balanced and have good algebraic degree.
Last updated:  2009-03-30
Signature Schemes with Bounded Leakage Resilience
Jonathan Katz
A leakage-resilient cryptosystem remains secure even if arbitrary information about the secret key (or possibly other internal state information) is leaked to an adversary. We demonstrate the first constructions of leakage-resilient signature schemes that remain secure as long as a bounded amount of information, depending on the length $n$ of the secret key, is leaked. We show efficient schemes in the random oracle model that handle leakage of up to $(1/2-\epsilon) n$ bits of information about the signer's entire internal state. In the standard model, we show an inefficient scheme that can handle leakage of up to $(1-\epsilon) n$ bits of information about the secret key, and a one-time signature scheme tolerating arbitrary leakage of $n^{1-\epsilon}$ bits.
Last updated:  2009-03-30
A New Lattice for Implicit Factoring
Yanbin Pan, Yingpu Deng
In PKC 2009, Alexander May and Maike Ritzenhofen~\cite{MR} proposed an ingenious lattice-based method to factor the RSA modulus $N_1=p_1q_1$ with the help of an oracle which, when queried with $N_1$, outputs $N_2=p_2q_2$ such that $p_1$ and $p_2$ share their $t$ least significant bits, for sufficiently large $t$. They also showed that when asking $k-1$ queries for RSA moduli with $\alpha$-bit $q_i$, the bound on $t$ improves to $t\geq\frac{k}{k-1}\alpha$. In this paper, we propose a new lattice for implicit factoring in polynomial time, for which $t$ can be a little smaller than in \cite{MR}. We also give a method in which the bound on $t$ can be improved to $t\geq\frac{k}{k-1}(\alpha-1+\frac{1}{2}\log{\frac{k+3}{k}})$ with only one query. Moreover, we show that our method significantly reduces the running time of implicit factoring for balanced RSA moduli and makes it practical.
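The algebraic relation that both this paper's lattice and May-Ritzenhofen's exploit is easy to check numerically: if $p_1$ and $p_2$ share their $t$ least significant bits, then $N_1 q_1^{-1} \equiv N_2 q_2^{-1} \pmod{2^t}$, a linear relation in the unknowns. The toy Python below (small hand-rolled primes, no lattice reduction; the parameter sizes are illustrative only) verifies this relation.

```python
# Toy check of the linear relation mod 2^t behind implicit factoring.
from random import randrange

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2; r += 1
    for _ in range(20):                        # Miller-Rabin rounds
        a = randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def prime_with_lsbs(low: int, t: int, bits: int) -> int:
    # random prime of about `bits` bits whose t LSBs equal `low`
    while True:
        p = (randrange(1 << (bits - t)) << t) | low
        if is_prime(p):
            return p

t = 16
low = randrange(1 << t) | 1                    # shared odd LSB pattern
p1 = prime_with_lsbs(low, t, 48)
p2 = prime_with_lsbs(low, t, 48)               # p1, p2 share t LSBs
q1 = prime_with_lsbs(1, 1, 40)                 # unrelated odd primes
q2 = prime_with_lsbs(1, 1, 40)
N1, N2 = p1 * q1, p2 * q2
# The implicit hint is linear mod 2^t: N_i * q_i^{-1} = p_i = low (mod 2^t)
assert N1 * pow(q1, -1, 1 << t) % (1 << t) == N2 * pow(q2, -1, 1 << t) % (1 << t)
```

Recovering $q_1, q_2$ from $N_1, N_2$ alone then amounts to finding a short vector satisfying this modular relation, which is where the lattice constructions come in.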
Last updated:  2009-03-27
Key Predistribution Schemes in Distributed Wireless Sensor Network using Combinatorial Designs Revisited
Anupam Pattanayak, B. Majhi
A sensor node in a Wireless Sensor Network has very limited resources, such as processing capability, memory capacity, battery power, and communication capability. When the communication between any two sensor nodes needs to be secured, symmetric key cryptography is used because it requires fewer resources than public key cryptography. Keys are pre-distributed to each sensor node from a set of keys called the key pool before deployment of the sensor nodes. Combinatorial designs help greatly in determining how keys are drawn from the key pool for distribution to individual sensor nodes. We study various deterministic key predistribution techniques that are based on combinatorial designs.
Last updated:  2009-04-01
Constructions of Even-variable Boolean Function with Optimum Algebraic Immunity
Yindong Chen, Peizhong Lu
This paper proposes an improved construction of even-variable Boolean functions with optimum algebraic immunity. Compared with those in~\cite{Carl06}, our Boolean functions are more balanced. Specifically, for $k{=}2t{+}1$ $(t{>}1)$, the $2k$-variable Boolean function is balanced. Furthermore, we generalize it to a class of constructions, yielding many more constructions.
Last updated:  2009-06-16
Faster and Timing-Attack Resistant AES-GCM
Emilia Kasper, Peter Schwabe
We present a bitsliced implementation of AES encryption in counter mode for 64-bit Intel processors. Running at 7.59 cycles/byte on a Core~2, it is up to 25% faster than previous implementations, while simultaneously offering protection against timing attacks. In particular, it is the only cache-timing-attack resistant implementation offering competitive speeds for stream as well as for packet encryption: for 576-byte packets, we improve performance over previous bitsliced implementations by more than a factor of 2. We also report more than 30% improved speeds for lookup-table based Galois/Counter mode authentication, achieving 10.68 cycles/byte for authenticated encryption. Furthermore, we present the first constant-time implementation of AES-GCM that has a reasonable speed of $21.99$ cycles/byte, thus offering a full suite of timing-analysis resistant software for authenticated encryption.
Last updated:  2009-03-20
Attacks on a Lightweight Cipher Based on a Multiple Recursive Generator
Lu Xiao, Gregory G. Rose
At IEEE GLOBECOM 2008, a lightweight cipher based on a Multiple Recursive Generator (MRG) was proposed for use in resource-limited environments such as sensor nodes and RFID tags. This paper proposes two efficient attacks on this MRG cipher. A distinguishing attack is first introduced to identify the use of an MRG cipher that has a modulus suggested by its designers. It requires $2^{18}$ words of ciphertext and the most significant bit of each corresponding plaintext word. Then an efficient known-plaintext attack is proposed to reconstruct the cipher's current state and generate the subkeys used for all subsequent encryption. The known-plaintext attack, when targeted at the MRG ciphers optimized for efficiency, only requires $2k$ words of known plaintext and trivial computation, where $k$ is the MRG order. Even the ciphers based on complicated and inefficient MRGs can be attacked with low complexity, e.g., on the order of $2^{12}$ words of known plaintext for all MRG ciphers of order 47, regardless of which MRG modulus is used. These two attacks indicate that the examined MRG cipher structure is seriously flawed.
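The core weakness is that an order-$k$ MRG is linear: $x_n = a_1 x_{n-1} + \dots + a_k x_{n-k} \pmod m$, so $2k$ consecutive output words determine the coefficients (and hence all future output) by solving a $k \times k$ linear system mod $m$. The toy Python below demonstrates this on a small made-up MRG; the actual cipher's word sizes and key schedule differ, and this is only the linear-algebra core of the attack idea.

```python
# Recovering MRG coefficients from 2k consecutive output words.
m, k = 2**31 - 1, 3                     # small prime modulus, toy order

def mrg(coeffs, state, n):
    out = list(state)
    while len(out) < n:
        out.append(sum(c * x for c, x in zip(coeffs, out[-k:][::-1])) % m)
    return out

def solve_mod(A, b, p):                 # Gaussian elimination over GF(p)
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] % p)
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], -1, p)
        M[col] = [v * inv % p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                M[r] = [(vr - M[r][col] * vc) % p for vr, vc in zip(M[r], M[col])]
    return [row[-1] for row in M]

secret = [123456, 789, 424242]          # unknown coefficients a_1..a_k
stream = mrg(secret, [1, 2, 3], 10)     # attacker's known output words
A = [stream[i:i + k][::-1] for i in range(k)]   # rows [x_{n-1},...,x_{n-k}]
b = stream[k:2 * k]
recovered = solve_mod(A, b, m)
assert mrg(recovered, stream[:k], 10) == stream  # all future output predicted
```

Once the coefficients and the last $k$ words are known, every subsequent keystream word follows deterministically, which is why the paper's known-plaintext attack needs only trivial computation.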
Last updated:  2009-03-20
Side Channel Cube Attacks on Block Ciphers
Itai Dinur, Adi Shamir
In this paper we formalize the notion of {\it leakage attacks} on iterated block ciphers, in which the attacker can find (via physical probing, power measurement, or any other type of side channel) one bit of information about the intermediate state of the encryption after each round. Since bits computed during the early rounds can be typically represented by low degree multivariate polynomials, cube attacks seem to be an ideal generic key recovery technique in these situations. However, the original cube attack requires extremely clean data, whereas the information provided by side channel attacks can be quite noisy. To address this problem, we develop a new variant of cube attack which can tolerate considerable levels of noise (affecting more than 11\% of the leaked bits in practical scenarios). Finally, we demonstrate our approach by describing efficient leakage attacks on two of the best known block ciphers, AES (requiring about $2^{35}$ time for full key recovery) and SERPENT (requiring about $2^{18}$ time for full key recovery).
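The cube-attack mechanism the paper builds on can be shown on a toy polynomial (this example is illustrative and is not taken from AES or SERPENT): summing a low-degree function over all assignments of a "cube" of public variables cancels every term except a low-degree "superpoly" in the key bits.

```python
# Toy cube attack: one cube sum leaks one linear combination of key bits.
from itertools import product

def f(x1, x2, x3, k1, k2):              # made-up degree-3 round function
    return (x1 & x2 & k1) ^ (x1 & x2 & x3) ^ (x2 & k2) ^ k1 ^ (x3 & k2)

def cube_sum(k1, k2, x3=0):
    # sum f over the cube {x1, x2}; terms not containing x1*x2 cancel in GF(2)
    return sum(f(x1, x2, x3, k1, k2) for x1, x2 in product((0, 1), repeat=2)) % 2

# f = x1*x2*(k1 ^ x3) ^ ..., so the superpoly of cube {x1,x2} is k1 ^ x3:
for k1, k2 in product((0, 1), repeat=2):
    assert cube_sum(k1, k2, x3=0) == k1
    assert cube_sum(k1, k2, x3=1) == k1 ^ 1
```

In the leakage setting, each cube sum is computed from the leaked state bit across chosen plaintexts; the paper's contribution is making such sums robust when a fraction of those leaked bits are wrong.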
Last updated:  2009-04-02
Threshold Attribute-Based Signatures and Their Application to Anonymous Credential Systems
Siamak F Shahandashti, Reihaneh Safavi-Naini
Inspired by the recent developments in attribute-based encryption, in this paper we propose threshold attribute-based signatures (t-ABS). In a t-ABS, signers are associated with a set of attributes and verification of a signed document against a verification attribute set succeeds if the signer has a threshold number of (at least t) attributes in common with the verification attribute set. A t-ABS scheme enables a signature holder to prove possession of signatures by revealing only the relevant (to the verification attribute set) attributes of the signer, hence providing signer-attribute privacy for the signature holder. We define t-ABS schemes, formalize their security and propose two t-ABS schemes: a basic scheme that is selectively unforgeable and a second one that is existentially unforgeable, both provable in the standard model, assuming hardness of the computational Diffie-Hellman problem. We show that our basic t-ABS scheme can be augmented with two extra protocols that are used for efficiently issuing and verifying t-ABS signatures on committed values. We call the augmented scheme a threshold attribute-based c-signature scheme (t-ABCS). We show how a t-ABCS scheme can be used to realize a secure threshold attribute-based anonymous credential system (t-ABACS) providing signer-attribute privacy. We propose a security model for t-ABACS and give a concrete scheme using the t-ABCS scheme. Using the simulation paradigm, we prove that the credential system is secure if the t-ABCS scheme is secure.
Last updated:  2009-03-20
A Full Key Recovery Attack on HMAC-AURORA-512
Yu Sasaki
In this note, we present a full key recovery attack on HMAC-AURORA-512 when 512-bit secret keys are used and the MAC length is 512-bit long. Our attack requires $2^{257}$ queries and the off-line complexity is $2^{259}$ AURORA-512 operations, which is significantly less than the complexity of the exhaustive search for a 512-bit key. The attack can be carried out with a negligible amount of memory. Our attack can also recover the inner-key of HMAC-AURORA-384 with almost the same complexity as in HMAC-AURORA-512. This attack does not recover the outer-key of HMAC-AURORA-384, but universal forgery is possible by combining the inner-key recovery and 2nd-preimage attacks. Our attack exploits some weaknesses in the mode of operation.
Last updated:  2009-03-23
Practical Secure Evaluation of Semi-Private Functions
Annika Paus, Ahmad-Reza Sadeghi, Thomas Schneider
Two-party Secure Function Evaluation (SFE) is a very useful cryptographic tool which allows two parties to evaluate a function known to both parties on their private (secret) inputs. Some applications with sophisticated privacy needs require the function to be known only to one party and kept private (hidden) from the other one. However, existing solutions for SFE of private functions (PF-SFE) deploy Universal Circuits (UC) and are still very inefficient in practice. In this paper we bridge the gap between SFE and PF-SFE with SFE of what we call {\em semi-private functions} (SPF-SFE), i.e., one function out of a given class of functions is evaluated without revealing which one. We present a general framework for SPF-SFE allowing a fine-grained trade-off and tuning between SFE and PF-SFE covering both extremes. In our framework, semi-private functions can be composed from several privately programmable blocks (PPB) which can be programmed with one function out of a class of functions. The framework allows efficient and secure embedding of constants into the resulting circuit to improve performance. To demonstrate practicability of the framework we have implemented a compiler for SPF-SFE based on the Fairplay SFE framework. SPF-SFE is sufficient for many practically relevant privacy-preserving applications, such as privacy-preserving credit checking which can be implemented using our framework and compiler as described in the paper.
Last updated:  2009-03-20
On the Complexity of Integer Factorization
N. A. Carella
This note presents a deterministic integer factorization algorithm based on a system of polynomial equations. The main result establishes a new deterministic time complexity benchmark.
Last updated:  2009-08-04
Hardware Accelerator for the Tate Pairing in Characteristic Three Based on Karatsuba-Ofman Multipliers
Jean-Luc Beuchat, Jérémie Detrey, Nicolas Estibals, Eiji Okamoto, Francisco Rodríguez-Henríquez
This paper is devoted to the design of fast parallel accelerators for the cryptographic Tate pairing in characteristic three over supersingular elliptic curves. We propose here a novel hardware implementation of Miller's loop based on a pipelined Karatsuba-Ofman multiplier. Thanks to a careful selection of algorithms for computing the tower field arithmetic associated to the Tate pairing, we manage to keep the pipeline busy. We also describe the strategies we considered to design our parallel multiplier. They are included in a VHDL code generator allowing for the exploration of a wide range of operators. Then, we outline the architecture of a coprocessor for the Tate pairing over $\mathbb{F}_{3^m}$. However, a final exponentiation is still needed to obtain a unique value, which is desirable in most cryptographic protocols. We supplement our pairing accelerator with a coprocessor responsible for this task. An improved exponentiation algorithm allows us to save hardware resources. According to our place-and-route results on Xilinx FPGAs, our design improves both the computation time and the area-time trade-off compared to previously published coprocessors.
Last updated:  2009-07-22
Optimized Public Key Infrastructure -- A PKI to Support Efficient Document's Signatures
Martín Augusto Gagliotti Vigil, Ricardo Felipe Custódio, Nelson da Silva, Ricardo Moraes
Optimized Public Key Infrastructures are traditional PKIs in which end users may optimize the signatures of their documents, replacing the signer's validation data with Optimized Certificates (OCs). OCs carry the signer's identification and public key, but are issued for a specific instant in time, i.e., the fields notBefore and notAfter have the same value, so there is no reason to revoke them. The OC's certification path is supposed to be shorter and uses Micali's revocation scheme. Furthermore, OCs include the signed document's hash, so they also work as time-stamps. OCs are therefore useful for replacing a signed document's validation data with something smaller and easier to verify. Finally, when OCs become invalid due to cryptographic algorithm weaknesses or the limited validity periods of their certificate chains, they can easily be replaced by new ones, making this proposal suitable for efficient long-term archiving.
Last updated:  2009-03-15
On the Complexity of Khovratovich et.al's Preimage Attack on Edon-R
Danilo Gligoroski, Rune Steinsmo Ødegård
Based on the analysis made by van Oorschot and Wiener of the complexity of parallel memoryless collision search, we show that the memoryless meet-in-the-middle attack, which is one part of the whole preimage attack of Khovratovich et al. on the Edon-R hash function, has complexity greater than 2^n.
Last updated:  2009-03-15
A Continuous Fault Countermeasure for AES Providing a Constant Error Detection Rate
Marcel Medwed
Many implementations of cryptographic algorithms have been shown to be susceptible to fault attacks. For some of them, countermeasures against specific fault models have been proposed. However, for symmetric algorithms like AES, the main focus of available countermeasures lies on performance, so that their achieved error detection rates are rather low or not determinable at all. Even worse, those error detection rates only apply to specific parts of the cipher. In this paper we present a way to achieve a consistently higher error detection rate throughout the whole algorithm while assuming a much stronger adversary model than previous papers. Furthermore, we propose solutions for two very important, unsolved questions: first, how to do secure and efficient table lookups in redundant algebras; second, how to implement secure correctness checks to verify the result in a scenario where the adversary can manipulate comparisons. Our paper is therefore the first to construct a sound and continuous AES fault countermeasure with an attacker-independent minimum error detection rate.
Last updated:  2009-04-14
A2BE: Accountable Attribute-Based Encryption for Abuse Free Access Control
Jin Li, Kui Ren, Kwangjo Kim
As a recently proposed public key primitive, attribute-based encryption (ABE) (including Ciphertext-policy ABE (CP-ABE) and Key-policy ABE (KP-ABE)) is a highly promising tool for secure access control. In this paper, the issue of key abuse in ABE is formulated and addressed. Two kinds of key abuse problems are considered: i) illegal key sharing among colluding users and ii) misbehavior of the semi-trusted attribute authority, including illegal key (re-)distribution. Both problems are extremely important, as in an ABE-based access control system the attribute private keys directly imply users' privileges to the protected resources. To the best of our knowledge, such key abuse problems exist in all current ABE schemes, as the attribute private keys assigned to the users are never designed to be linked to any user-specific information except the commonly shared user attributes. To be concrete, we focus on the prevention of key abuse in CP-ABE in this paper \footnote{Our technique can easily be extended to KP-ABE as well.}. The notion of accountable CP-ABE (CP-A$^2$BE, in short) is first proposed to prevent illegal key sharing among colluding users. Accountability for the user is achieved by embedding additional user-specific information in the attribute private key issued to the user. To obtain accountability for the attribute authority as well, the notion of strong CP-A$^2$BE is proposed, allowing each attribute private key to be linked to the corresponding user's secret that is unknown to the attribute authority. We show how to construct such a strong CP-A$^2$BE and prove its security based on the computational Diffie-Hellman assumption. Finally, we show how to utilize the new technique to solve some open problems in previous accountable identity-based encryption schemes.
Last updated:  2009-03-14
Changing probabilities of differentials and linear sums via isomorphisms of ciphers
Alexander Rostovtsev
Ciphers $y=C(x, k)$ and $Y=C_{1}(X, K)$ are isomorphic if there exists an invertible map, computable in both directions, $y \leftrightarrow Y$, $x \leftrightarrow X$, $k \leftrightarrow K$. A cipher is vulnerable if and only if an isomorphic cipher is vulnerable. Instead of computing the key of a cipher, it is sufficient to find a suitable isomorphic cipher and compute its key. If $\varphi $ is an arbitrary substitution and $T$ is a round substitution, conjugation $T_{1}=\varphi T\varphi ^{ - 1}$ is a cipher isomorphism. Conjugate substitutions have the same cycle type. Conjugation can be composed with affine maps. Combining conjugation and affine equivalence, we can sometimes transform a non-linear special $S$-box into a conjugate affine substitution $S_{1}$. Usually, for given $S$, $S_{1}$ there are many different auxiliary substitutions $\varphi $. The conjugate diffusion map and XOR operation become non-linear, but by taking an appropriate $\varphi $ we can obtain large probabilities of differentials and linear sums of the diffusion map and XOR. For example, the AES substitution (finite field inversion) is approximately conjugate to a bit-changing substitution. That conjugate substitution has differentials and linear sums of probability 1. The corresponding byte substitution $\varphi $ defines a non-linear conjugate diffusion map and a non-linear conjugate of the XOR operation with the round key. Probabilities of differentials (biases of linear sums) of the byte substitution of the conjugate diffusion map are 8-12 times larger than the corresponding values of the original $S$-box. Probabilities of differentials of the conjugate XOR with a round key byte depend on the round key and can be 1 for some key bytes.
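The invariant the abstract starts from, that conjugate substitutions $T_1 = \varphi T \varphi^{-1}$ share the same cycle type, is easy to verify on a toy 4-bit S-box. The Python below is only a sanity check of that group-theoretic fact, not any part of the paper's attack.

```python
# Conjugate permutations T1 = phi . T . phi^{-1} have the same cycle type.
from random import sample

def compose(a, b):                  # (a . b)(x) = a[b[x]]
    return [a[x] for x in b]

def inverse(p):
    inv = [0] * len(p)
    for i, x in enumerate(p):
        inv[x] = i
    return inv

def cycle_type(p):
    seen, lengths = set(), []
    for i in range(len(p)):
        if i not in seen:
            j, n = i, 0
            while j not in seen:
                seen.add(j); j = p[j]; n += 1
            lengths.append(n)
    return sorted(lengths)

n = 16                              # toy 4-bit "S-box" size
T = sample(range(n), n)             # random substitution
phi = sample(range(n), n)           # arbitrary auxiliary substitution
T1 = compose(compose(phi, T), inverse(phi))
assert cycle_type(T1) == cycle_type(T)
```

Since cycle type is preserved, any cycle-structure-based property of a round substitution carries over to every conjugate cipher, which is what makes the isomorphism useful for moving an analysis to a more convenient representation.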
Last updated:  2009-05-22
Information Theoretically Secure Multi Party Set Intersection Re-Visited
Arpita Patra, Ashish Choudhary, C. Pandu Rangan
We re-visit the problem of secure multiparty set intersection in the information theoretic setting. In \cite{LiSetMPCACNS07}, Li et al. proposed a protocol for the multiparty set intersection problem with $n$ parties that provides information theoretic security when $t < \frac{n}{3}$ parties are corrupted by an active adversary having {\it unbounded computing power}. In \cite{LiSetMPCACNS07}, the authors claimed that their protocol takes six rounds of communication and communicates ${\cal O}(n^4m^2)$ field elements, where each party has a set containing $m$ field elements. However, we show that the round and communication complexity of the protocol in \cite{LiSetMPCACNS07} is much more than what is claimed. We then propose a {\it novel} information theoretically secure protocol for multiparty set intersection with $n > 3t$, which significantly improves the "actual" round and communication complexity (as shown in this paper) of the protocol given in \cite{LiSetMPCACNS07}. To design our protocol, we use several tools which are of independent interest.
Last updated:  2009-04-09
Scalable Compilers for Group Key Establishment : Two/Three Party to Group
S. Sree Vivek, S. Sharmila Deva Selvi, Deepanshu Shukla, C. Pandu Rangan
This work presents the first scalable, efficient and generic compilers to construct group key exchange (GKE) protocols from two/three-party key exchange (2-KE/3-KE) protocols. We propose three different compilers: the first is a 2-KE to GKE compiler (2-TGKE) for tree topology, the second is also for tree topology but from 3-KE to GKE (3-TGKE), and the third constructs a GKE from 3-KE for circular topology. Our compilers 2-TGKE and 3-TGKE are the first of their kind and are efficient due to the underlying tree topology. For the circular topology, we design a compiler called 3-CGKE. The 2-TGKE and 3-TGKE compilers require a total of $\mathcal{O}\left(n\lg n \right)$ communication, compared to the existing compiler for circular topology, whose communication cost is $\mathcal{O}\left(n^2 \right)$. By extending the compilers 2-TGKE and 3-TGKE using the techniques in \cite{DLB07}, scalable compilers for tree-based authenticated group key exchange protocols (2-TAGKE/3-TAGKE), which are secure against active adversaries, can be constructed. As an added advantage, our compilers can be used in a setting where there is an asymmetric distribution of computing power. Finally, we present a constant-round authenticated group key exchange (2-TAGKE) obtained by applying the Diffie-Hellman protocol and the technique in \cite{DLB07} to our compiler 2-TGKE. We prove the security of our compilers in the stronger Real or Random model and do not assume the existence of random oracles.
Last updated:  2009-03-12
Weakness of Key Predistribution Scheme Proposed by J. Dong et al.
Anupam Pattanayak, B. Majhi
A sensor node in a Wireless Sensor Network has very limited resources, such as processing capability, memory capacity, battery power, and communication capability. When the communication between any two sensor nodes needs to be secured, symmetric key cryptography is used because it requires fewer resources than public key cryptography. Keys are pre-distributed to each sensor node from a set of keys called the key pool before deployment of the sensor nodes. Combinatorial designs help greatly in determining how keys are drawn from the key pool for distribution to individual sensor nodes. J. Dong et al. proposed a key predistribution scheme based on orthogonal arrays. We present a weakness of this predistribution scheme.
Last updated:  2009-03-11
Attacks on AURORA-512 and the Double-Mix Merkle-Damgaard Transform
Niels Ferguson, Stefan Lucks
We analyse the Double-Mix Merkle-Damgaard construction (DMMD) used in the AURORA family of hash functions. We show that DMMD falls short of providing the expected level of security. Specifically, we are able to find 2nd pre-images for AURORA-512 in time 2^{291}, and collisions in time 2^{234.4}. A limited-memory variant finds collisions in time 2^{249}.
Last updated:  2009-03-11
A 2nd-Preimage Attack on AURORA-512
Yu Sasaki
In this note, we present a 2nd-preimage attack on AURORA-512, which is one of the candidates for SHA-3. Our attack can generate 2nd-preimages of any given message, in particular, the attack complexity becomes optimal when the message length is 9 blocks or more. In such a case, the attack complexity is approximately $2^{290}$ AURORA-512 operations, which is less than the brute force attack on AURORA-512, namely, $2^{512-\log_2{9}}\approx2^{508}$. Our attack exploits some weakness in the mode of operation.
Last updated:  2009-06-03
Short Chosen-Prefix Collisions for MD5 and the Creation of a Rogue CA Certificate
Marc Stevens, Alexander Sotirov, Jacob Appelbaum, Arjen Lenstra, David Molnar, Dag Arne Osvik, Benne de Weger
We present a refined chosen-prefix collision construction for MD5 that allowed creation of a rogue Certification Authority (CA) certificate, based on a collision with a regular end-user website certificate provided by a commercial CA. Compared to the previous construction from Eurocrypt 2007, this paper describes a more flexible family of differential paths and a new variable birthdaying search space. Combined with a time-memory trade-off, these improvements lead to just three pairs of near-collision blocks to generate the collision, enabling construction of RSA moduli that are sufficiently short to be accepted by current CAs. The entire construction is fast enough to allow for adequate prediction of certificate serial number and validity period: it can be made to require about 2^{49} MD5 compression function calls. Finally, we improve the complexity of identical-prefix collisions for MD5 to about 2^{16} MD5 compression function calls and use it to derive a practical single-block chosen-prefix collision construction of which an example is given.
Last updated:  2009-03-11
On the Security of Stream Cipher CryptMT v3
Haina Zhang, Xiaoyun Wang
CryptMT v3 is a stream cipher submitted to the eStream project, and has entered the third evaluation phase. No attack on it has been found so far. In this paper, we mainly discuss the security of the state initialization process of CryptMT v3. For the key and IV setup function $f_K$, we can construct a probabilistic testing algorithm $A^{f_K}$ with distinguishing probability 1, which indicates that for each key $K$, $f_K$ is not a PRF. However, we have not found any non-randomness in the keystream output.
Last updated:  2009-03-11
Cryptanalysis of Stream Cipher Grain Family
Haina Zhang, Xiaoyun Wang
Grain v1 is one of the 7 final candidates of the ECRYPT eStream project, and uses an 80-bit secret key. Grain-128 is a variant with a 128-bit secret key, and Grain v0 is the original version from the first evaluation phase. Firstly, we describe a distinguishing attack against the Grain family with weak Key-IVs. Utilizing the second Walsh spectra of the nonlinear functions, we show that there are $2^{64}$/$2^{64}$/$2^{96}$ weak Key-IVs among the total $2^{144}$/$2^{144}$/$2^{224}$ Key-IVs, and that distinguishing a weak Key-IV needs about $2^{12.6}$/$2^{44.2}$/$2^{86}$ keystream bits and $2^{15.8}$/$2^{47.5}$/$2^{104.2}$ operations for Grain v0, Grain v1 and Grain-128 respectively. Secondly, we apply algebraic attacks to the Grain family with a weak Key-IV, and can recover the secret key in about 2 seconds using 150 keystream bits for Grain v0 and Grain v1, and reveal the key of Grain-128 with about 100 keystream bits and $2^{93.8}$ operations. Furthermore, we discuss the period of the keystream with a weak Key-IV for any Grain-like structure, which can lead to a self-sliding attack.
Last updated:  2009-03-30
Further Results on Implicit Factoring in Polynomial Time
Santanu Sarkar, Subhamoy Maitra
In PKC 2009, May and Ritzenhofen presented interesting problems related to factoring large integers with some implicit hints. One of the problems is as follows. Consider $N_1 = p_1 q_1$ and $N_2 = p_2 q_2$, where $p_1, p_2, q_1, q_2$ are large primes. The primes $p_1, p_2$ are of the same bit-size with the constraint that a certain number of Least Significant Bits (LSBs) of $p_1, p_2$ are the same. Further, the primes $q_1, q_2$ are of the same bit-size without any constraint. May and Ritzenhofen proposed a strategy to factorize both $N_1, N_2$ in poly$(\log N)$ time ($N$ is an integer with the same bit-size as $N_1, N_2$) given the implicit information that $p_1, p_2$ share a certain number of LSBs. We explore the same problem with a different lattice-based strategy. In a general framework, our method works when implicit information is available about the Least Significant as well as the Most Significant Bits (MSBs). Given $q_1, q_2 \approx N^{\alpha}$, we show that one can factor $N_1, N_2$ simultaneously in poly$(\log N)$ time (under some assumption related to Gröbner Basis) when $p_1, p_2$ share a certain number of MSBs and/or LSBs. We also study the case when $p_1, p_2$ share some bits in the middle. Our strategy presents new and encouraging results in this direction. Moreover, some of the observations by May and Ritzenhofen are improved when we apply our ideas to the LSB case.
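The LSB version of the implicit hint can be checked numerically. Below is a toy sketch (with tiny, non-cryptographic parameters of our choosing) of the congruence that places the unknown vector $(q_1, q_2)$ in a lattice determined only by $N_1$, $N_2$ and the number $t$ of shared bits:

```python
# Toy check of the congruence behind implicit factoring with shared LSBs.
# All parameters are tiny and illustrative, not cryptographic.
# If p1 and p2 share their t least significant bits, then
#   q1*N2 - q2*N1 = q1*q2*(p2 - p1) is divisible by 2^t,
# so the unknown vector (q1, q2) lies in a lattice determined by N1, N2, t.

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

t = 8
shared = 0b10110101                     # arbitrary odd t-bit pattern
# two distinct primes agreeing on their t least significant bits
p1, p2 = [s for s in range(shared, 1 << 16, 1 << t) if is_prime(s)][:2]
q1, q2 = 1009, 2003                     # unconstrained primes
N1, N2 = p1 * q1, p2 * q2

assert (p1 - p2) % (1 << t) == 0
assert (q1 * N2 - q2 * N1) % (1 << t) == 0
print(p1, p2)                           # -> 181 1973
```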
Last updated:  2009-03-11
Compact E-Cash and Simulatable VRFs Revisited
Mira Belenkiy, Melissa Chase, Markulf Kohlweiss, Anna Lysyanskaya
Efficient non-interactive zero-knowledge proofs are a powerful tool for solving many cryptographic problems. We apply the recent Groth-Sahai (GS) proof system for pairing product equations (Eurocrypt 2008) to two related cryptographic problems: compact e-cash (Eurocrypt 2005) and simulatable verifiable random functions (CRYPTO 2007). We present the first efficient compact e-cash scheme that does not rely on a random oracle in its security proof. To this end we construct efficient GS proofs for signature possession, pseudorandomness and set membership. The GS proofs for pseudorandom functions give rise to a much cleaner and substantially faster construction of simulatable verifiable random functions (sVRF) under a weaker number-theoretic assumption. We obtain the first efficient fully simulatable sVRF with a polynomial-sized output domain (in the security parameter).
Last updated:  2009-03-11
A Collision Attack on AURORA-512
Yu Sasaki
In this note, we present a collision attack on AURORA-512, which is one of the candidates for SHA-3. The attack complexity is approximately $2^{236}$ AURORA-512 operations, which is less than the birthday bound of AURORA-512, namely, $2^{256}$. Our attack exploits some weakness in the mode of operation.
Last updated:  2012-05-30
Public-Key Cryptosystems Resilient to Key Leakage
Moni Naor, Gil Segev
Most of the work in the analysis of cryptographic schemes is concentrated in abstract adversarial models that do not capture {\em side-channel attacks}. Such attacks exploit various forms of unintended information leakage, which is inherent to almost all physical implementations. Inspired by recent side-channel attacks, especially the ``cold boot attacks'' of Halderman et al. (USENIX Security '08), Akavia, Goldwasser and Vaikuntanathan (TCC '09) formalized a realistic framework for modeling the security of encryption schemes against a wide class of side-channel attacks in which adversarially chosen functions of the secret key are leaked. In the setting of public-key encryption, Akavia et al. showed that Regev's lattice-based scheme (STOC '05) is resilient to any leakage of $L / \polylog(L)$ bits, where $L$ is the length of the secret key. In this paper we revisit the above-mentioned framework and our main results are as follows: -- We present a generic construction of a public-key encryption scheme that is resilient to key leakage from any {\em universal hash proof system}. The construction does not rely on additional computational assumptions, and the resulting scheme is as efficient as the underlying proof system. Existing constructions of such proof systems imply that our construction can be based on a variety of number-theoretic assumptions, including the decisional Diffie-Hellman assumption (and its progressively weaker $d$-Linear variants), the quadratic residuosity assumption, and Paillier's composite residuosity assumption. -- We construct a new hash proof system based on the decisional Diffie-Hellman assumption (and its $d$-Linear variants), and show that the resulting scheme is resilient to any leakage of $L(1 - o(1))$ bits. In addition, we prove that the recent scheme of Boneh et al. (CRYPTO '08), constructed to be a ``circular-secure'' encryption scheme, fits our generic approach and is also resilient to any leakage of $L(1 - o(1))$ bits. 
-- We extend the framework of key leakage to the setting of chosen-ciphertext attacks. On the theoretical side, we prove that the Naor-Yung paradigm is applicable in this setting as well, and obtain as a corollary encryption schemes that are CCA2-secure with any leakage of $L(1 - o(1))$ bits. On the practical side, we prove that variants of the Cramer-Shoup cryptosystem (along the lines of our generic construction) are CCA1-secure with any leakage of $L/4$ bits, and CCA2-secure with any leakage of $L/6$ bits.
Last updated:  2009-03-27
1024 - A High Security Software Oriented Block Cipher
Dieter Schmidt
A cryptographer with a weak algorithm gets feedback almost instantly from the open crypto community. But what about government cryptanalysis? Given the fact that there is a considerable amount of cryptanalysis behind closed doors, what can be done to leave COMINT deaf? The NSA, as the most closely examined SIGINT agency, has a workforce of 38,000 [2], among them several thousand cryptologists. The actual wiretapping is done by the Central Security Service with 25,000 women and men. Other industrialised states also have thousands of cryptologists on their payrolls. The block cipher 1024 is an attempt to make cryptanalysis more difficult, especially differential cryptanalysis (DC) and linear cryptanalysis. The assumption is that the increased security will also defeat other cryptanalytic attacks not yet known to the open crypto community. 1024 has a block size of 1024 bits and a key length of 2048 bits.
Last updated:  2009-11-27
Constructing pairing-friendly hyperelliptic curves using Weil restriction
David Mandell Freeman, Takakazu Satoh
A pairing-friendly curve is a curve over a finite field whose Jacobian has small embedding degree with respect to a large prime-order subgroup. In this paper we construct pairing-friendly genus 2 curves over finite fields $\mathbb{F}_q$ whose Jacobians are ordinary and simple, but not absolutely simple. We show that constructing such curves is equivalent to constructing elliptic curves over $\mathbb{F}_q$ that become pairing-friendly over a finite extension of $\mathbb{F}_q$. Our main proof technique is Weil restriction of elliptic curves. We describe adaptations of the Cocks-Pinch and Brezing-Weng methods that produce genus 2 curves with the desired properties. Our examples include a parametric family of genus 2 curves whose Jacobians have the smallest recorded $\rho$-value for simple, non-supersingular abelian surfaces.
Last updated:  2009-03-02
A Step Towards QC Blind Signatures
Raphael Overbeck
In this paper we propose a conversion from signature schemes connected to coding theory into blind signature schemes. We give formal security reductions to combinatorial problems not connected to number theory. This is the first blind signature scheme that cannot be broken by quantum computers via cryptanalyzing the underlying signature scheme with Shor’s algorithms. We thus present a step towards diversifying the computational assumptions on which blind signatures can be based. We achieve blind signatures by a different concept of blinding: instead of blinding the message, we blind the public key, such that generating a (blind) signature for the blinded key requires the interaction of the holder of the original secret key. To verify the blind signature, the connection between the original and the blinded key is proven by a static ZK proof. The major ingredient for our conversion is the PKP protocol by Shamir.
Last updated:  2012-09-23
Encryption Schemes Secure under Selective Opening Attack
Mihir Bellare, Scott Yilek
We provide the first public key encryption schemes proven secure against selective opening attack (SOA). This means that if an adversary obtains a number of ciphertexts and then corrupts some fraction of the senders, obtaining not only the corresponding messages but also the coins under which they were encrypted, then the security of the other messages is guaranteed. Whether or not schemes with this property exist has been open for many years. Our schemes are based on a primitive we call lossy encryption. Our schemes have short keys (public and secret keys of a fixed length suffice for encrypting an arbitrary number of messages), are stateless, are non-interactive, and their security does not rely on erasures. The schemes are without random oracles, proven secure under standard assumptions (DDH, Paillier’s DCR, QR, lattices), and even efficient. We are able to meet both an indistinguishability (IND-SOA-C) and a simulation-style, semantic-security (SS-SOA-C) definition.
Last updated:  2009-03-17
Computing the endomorphism ring of an ordinary elliptic curve over a finite field
Gaetan Bisson, Andrew V. Sutherland
We present two algorithms to compute the endomorphism ring of an ordinary elliptic curve E defined over a finite field F_q. Under suitable heuristic assumptions, both have subexponential complexity. We bound the complexity of the first algorithm in terms of log q, while our bound for the second algorithm depends primarily on log |D_E|, where D_E is the discriminant of the order isomorphic to End(E). As a byproduct, our method yields a short certificate that may be used to verify that the endomorphism ring is as claimed.
Last updated:  2009-03-02
A Single Initialization Server for Multi-Party Cryptography
Hugue Blier, Alain Tapp
We present information-theoretically secure bit commitment, zero-knowledge and multi-party computation based on the assistance of an initialization server. In the initialization phase, the players interact with the server to gather resources that are later used to perform useful protocols. This initialization phase does not depend on the input of the protocol it will later enable. Once the initialization is complete, the server’s assistance is no longer required. This paper improves on previous work as there is only one server and it does not need to be trusted. If the server is honest, the protocols are secure against any coalition of dishonest players. If all players are honest, then there is an exponentially small probability that both the initialization phase succeeds and that later the protocol fails. That is, the server cannot create a situation in the initialization phase that would lead honest players to accuse each other. The protocols are built in a modular fashion and achieve linear complexity for the players in terms of the security parameter, number of players and the size of the circuit.
Last updated:  2009-03-02
Attacking Cryptographic Schemes Based on "Perturbation Polynomials"
Martin Albrecht, Craig Gentry, Shai Halevi, Jonathan Katz
We show attacks on several cryptographic schemes that have recently been proposed for achieving various security goals in sensor networks. Roughly speaking, these schemes all use "perturbation polynomials" to add "noise" to polynomial-based systems that offer information-theoretic security, in an attempt to increase the resilience threshold while maintaining efficiency. We show that the heuristic security arguments given for these modified schemes do not hold, and that they can be completely broken once we allow even a slight extension of the parameters beyond those achieved by the underlying information-theoretic schemes. Our attacks apply to the key predistribution scheme of Zhang et al. (MobiHoc~2007), the access-control schemes of Subramanian et al. (PerCom~2007), and the authentication schemes of Zhang et~al. (INFOCOM~2008).
Last updated:  2009-03-02
Identification of Multiple Invalid Signatures in Pairing-based Batched Signatures
Brian J. Matt
This paper describes new methods in pairing-based signature schemes for identifying the invalid digital signatures in a batch after batch verification has failed. These methods efficiently identify non-trivial numbers of invalid signatures in batches of (potentially large) numbers of signatures. Our methods use “divide-and-conquer” search to identify the invalid signatures within a batch, but prune the search tree to substantially reduce the number of pairing computations required. The methods presented in this paper require computing on average O(w) products of pairings to identify w invalid signatures within a batch of size N, compared with the O(w(log2(N/w)+1)) [for w < N/2] that traditional divide-and-conquer methods require. Our methods avoid the problem of exponential growth in expected computational cost that affects earlier proposals which, on average, require computing O(w) products of pairings. We compare the expected performance of our batch verification methods with previously published divide-and-conquer and exponential-cost methods for Cha-Cheon identity-based signatures [6]. However, our methods also apply to a number of short signature schemes as well as to other identity-based signature schemes.
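For contrast, the traditional divide-and-conquer search that the paper improves on can be sketched as follows. This is a toy simulation, not the authors' method: `batch_ok` is a hypothetical stand-in for a (costly) batched pairing-product verification, modeled here over a list of booleans:

```python
# Minimal sketch of classical divide-and-conquer identification of invalid
# signatures in a failed batch: recursively split the batch and re-verify
# each half, descending only into halves that still fail.

def find_invalid(sigs, batch_ok):
    """Return indices of invalid signatures in `sigs` using binary splitting."""
    def search(lo, hi):
        if batch_ok(lo, hi):
            return []                    # whole sub-batch verifies: no invalids
        if hi - lo == 1:
            return [lo]                  # single failing signature isolated
        mid = (lo + hi) // 2
        return search(lo, mid) + search(mid, hi)
    return search(0, len(sigs))

# toy instance: signatures 3 and 7 are invalid
valid = [True] * 10
valid[3] = valid[7] = False

def batch_ok(lo, hi):
    return all(valid[lo:hi])             # stands in for one batched verification

print(find_invalid(valid, batch_ok))     # -> [3, 7]
```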
Last updated:  2009-03-02
A note on the security of MST3
M. I. Gonzalez Vasco, A. L. Perez del Pozo, P. Taborda Duarte
In this paper, we study the recently proposed encryption scheme MST3, focusing on a concrete instantiation using Suzuki 2-groups. In a passive scenario, we argue that the one-wayness of this scheme may not, as claimed, be proven without the assumption that factoring group elements with respect to random covers for a subset of the group is hard. As a result, we conclude that for the proposed Suzuki 2-group instantiation, impractical key sizes must be used in order to prevent more or less straightforward factorization attacks.
Last updated:  2009-03-02
Enhanced Privacy ID from Bilinear Pairing
Ernie Brickell, Jiangtao Li
Enhanced Privacy ID (EPID) is a cryptographic scheme that enables the remote authentication of a hardware device while preserving the privacy of the device. EPID can be seen as a direct anonymous attestation scheme with enhanced revocation capabilities. In EPID, a device can be revoked if the private key embedded in the hardware device has been extracted and published widely so that the revocation manager finds the corrupted private key. In addition, the revocation manager can revoke a device based on the signatures the device has signed, if the private key of the device is not known. In this paper, we introduce a new security notion of EPID including the formal definitions of anonymity and unforgeability with revocation. We also give a construction of an EPID scheme from bilinear pairing. Our EPID scheme is efficient and provably secure in the random oracle model under the strong Diffie-Hellman assumption and the decisional Diffie-Hellman assumption.
Last updated:  2009-03-17
On the Lower Bounds of the Second Order Nonlinearity of some Boolean Functions
Sugata Gangopadhyay, Sumanta Sarkar, Ruchi Telang
The $r$-th order nonlinearity of a Boolean function is an important cryptographic criterion in analyzing the security of stream as well as block ciphers. It is also important in coding theory as it is related to the covering radius of the Reed-Muller code $\mathcal{R}(r, n)$. In this paper we deduce the lower bounds of the second order nonlinearity of the two classes of Boolean functions of the form \begin{enumerate} \item $f_{\lambda}(x) = Tr_1^n(\lambda x^{d})$ with $d=2^{2r}+2^{r}+1$ and $\lambda \in \mathbb{F}_{2^{n}}$ where $n = 6r$. \item $f(x,y)=Tr_1^t(xy^{2^{i}+1})$ where $x,y \in \mathbb{F}_{2^{t}}, n = 2t, n \ge 6$ and $i$ is an integer such that $1\le i < t$, $\gcd(2^t-1, 2^i+1) = 1$. \end{enumerate} For some $\lambda$, the first class gives bent functions whereas Boolean functions of the second class are all bent, i.e., they achieve optimum first order nonlinearity.
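The first-order analogue of the quantity being bounded can be computed directly for small functions via the Walsh-Hadamard transform. This sketch uses a toy 3-variable function of our choosing (the paper's bounds concern the second-order nonlinearity, i.e. the distance to all quadratic rather than all affine functions):

```python
# Nonlinearity of a small Boolean function from its Walsh spectrum:
#   nl(f) = 2^(n-1) - max_a |W_f(a)| / 2.

def walsh_spectrum(truth_table):
    """Fast Walsh-Hadamard transform of a (+1/-1)-encoded truth table."""
    w = [1 - 2 * b for b in truth_table]   # bit 0 -> +1, bit 1 -> -1
    n = len(w)
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                a, b = w[j], w[j + step]
                w[j], w[j + step] = a + b, a - b
        step *= 2
    return w

def nonlinearity(truth_table):
    n_vars = len(truth_table).bit_length() - 1
    return 2 ** (n_vars - 1) - max(abs(v) for v in walsh_spectrum(truth_table)) // 2

# toy example: f(x) = x2*x1 XOR x0 on 3 variables
tt = [(x >> 2 & 1) * (x >> 1 & 1) ^ (x & 1) for x in range(8)]
print(nonlinearity(tt))   # -> 2
```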
Last updated:  2009-02-24
Cascade Encryption Revisited
Peter Gazi, Ueli Maurer
The security of cascade blockcipher encryption is an important and well-studied problem in theoretical cryptography with practical implications. It is well-known that double encryption improves the security only marginally, leaving triple encryption as the shortest reasonable cascade. In a recent paper, Bellare and Rogaway showed that in the ideal cipher model, triple encryption is significantly more secure than single and double encryption, stating the security of longer cascades as an open question. In this paper, we propose a new lemma on the indistinguishability of systems extending Maurer's theory of random systems. In addition to being of independent interest, it allows us to compactly rephrase Bellare and Rogaway's proof strategy in this framework, thus making the argument more abstract and hence easy to follow. As a result, this allows us to address the security of longer cascades as well as some errors in their paper. Our result implies that for blockciphers with smaller key space than message space (e.g. DES), longer cascades improve the security of the encryption up to a certain limit. This partially answers the open question mentioned above.
Last updated:  2009-02-24
Reducing RFID Reader Load with the Meet-in-the-Middle Strategy
Jung Hee Cheon, Jeongdae Hong, Gene Tsudik
In almost any RFID system, a reader needs to identify, and optionally authenticate, a multitude of tags. If each tag has a unique secret, identification and authentication are trivial, however, the reader (or a back-end server) needs to perform a brute-force search for each tag-reader interaction. In this paper, we suggest a simple, efficient and secure technique that reduces reader computation to $O(\sqrt N \cdot \log N)$. Our technique is based on the well-known ``meet-in-the-middle'' strategy used in the past to attack certain symmetric ciphers.
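The flavor of the idea can be sketched as follows. This is a hedged toy, not the authors' exact protocol: we assume tag $(i,j)$ holds a key pair drawn from two pools of size $\sqrt{N}$ and answers a challenge with one hash per half-key, so the reader recomputes only $2\sqrt{N}$ hashes instead of brute-forcing all $N$ tag secrets:

```python
# Meet-in-the-middle tag identification (illustrative sketch).
import hashlib
import math

def H(key, challenge):
    return hashlib.sha256(f"{key}:{challenge}".encode()).hexdigest()

N = 16
m = math.isqrt(N)                      # sqrt(N) half-keys per pool
pool_a = [f"a{i}" for i in range(m)]   # hypothetical key material
pool_b = [f"b{j}" for j in range(m)]

def tag_response(i, j, challenge):     # tag (i, j) holds (pool_a[i], pool_b[j])
    return H(pool_a[i], challenge), H(pool_b[j], challenge)

def identify(resp, challenge):
    ra, rb = resp
    ta = {H(a, challenge): i for i, a in enumerate(pool_a)}   # sqrt(N) hashes
    tb = {H(b, challenge): j for j, b in enumerate(pool_b)}   # sqrt(N) hashes
    return ta.get(ra), tb.get(rb)      # two table lookups recover the tag

c = "nonce-42"
print(identify(tag_response(2, 3, c), c))   # -> (2, 3)
```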
Last updated:  2009-02-24
Knapsack Cryptosystem on Elliptic Curves
Koichiro Noro, Kunikatsu Kobayashi
The LLL algorithm is strong algorithm that decrypts the additional type Knapsack cryptosystem. However, the LLL algorithm is not applicable in the addition in the group that rational points of elliptic curves on finite fields do. Therefore, we think the Knapsack cryptosystem constructed on elliptic curves. By using the pairing for the decryption, it is shown to be able to make the computational complexity of the decryption a polynomial time by making the decryption function by the pairing values.
Last updated:  2009-02-24
A Brief History of Provably-Secure Public-Key Encryption
Alexander W. Dent
Public-key encryption schemes are a useful and interesting field of cryptographic study. The ultimate goal for the cryptographer in the field of public-key encryption would be the production of a very efficient encryption scheme with a proof of security in a strong security model using a weak and reasonable computational assumption. This ultimate goal has yet to be reached. In this invited paper, we survey the major results that have been achieved in the quest to find such a scheme.
Last updated:  2009-05-07
A Provably Secure And Efficient Countermeasure Against Timing Attacks
Boris Köpf, Markus Dürmuth
We show that the amount of information about the key that an unknown-message attacker can extract from a deterministic side-channel is bounded from above by $|O| \log_2 (n+1)$ bits, where $n$ is the number of side-channel measurements and $O$ is the set of possible observations. We use this bound to derive a novel countermeasure against timing attacks, where the strength of the security guarantee can be freely traded for the resulting performance penalty. We give algorithms that efficiently and optimally adjust this trade-off for given constraints on the side-channel leakage or on the efficiency of the cryptosystem. Finally, we perform a case-study that shows that applying our countermeasure leads to implementations with minor performance overhead and formal security guarantees.
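The stated bound is easy to evaluate numerically; a minimal sketch with illustrative parameters of our choosing:

```python
# Evaluating the leakage bound |O| * log2(n+1) from the abstract:
# an upper bound on the key information (in bits) an unknown-message
# attacker can extract from a deterministic side channel.
import math

def leakage_bound_bits(num_observations, num_measurements):
    """|O| * log2(n+1), with |O| distinguishable observations and n measurements."""
    return num_observations * math.log2(num_measurements + 1)

# e.g. 16 distinguishable timing classes and 1023 measurements:
print(leakage_bound_bits(16, 1023))   # -> 160.0
```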
Last updated:  2012-03-22
Lossy Encryption: Constructions from General Assumptions and Efficient Selective Opening Chosen Ciphertext Security
Brett Hemenway, Benoit Libert, Rafail Ostrovsky, Damien Vergnaud
In this paper, we present new and general constructions of lossy encryption schemes. By applying results from Eurocrypt '09, we obtain new general constructions of cryptosystems secure against selective opening adversaries (SOA). Although it was recognized almost twenty years ago that SOA security was important, it was not until the recent breakthrough works of Hofheinz and Bellare, Hofheinz and Yilek that any progress was made on this fundamental problem. The Selective Opening problem is as follows: suppose an adversary receives $n$ commitments (or encryptions) of (possibly) correlated messages, and now the adversary can choose $n/2$ of the messages, and receive de-commitments (or decryptions and the randomness used to encrypt them). Do the unopened commitments (encryptions) remain secure? A protocol achieving this type of security is called secure against a selective opening adversary (SOA). This question arises naturally in the context of Byzantine Agreement and Secure Multiparty Computation, where an active adversary is able to eavesdrop on all the wires, and then choose a subset of players to corrupt. Unfortunately, the traditional definitions of security (IND-CPA, IND-CCA) do not guarantee security in this setting. In this paper: We formally define re-randomizable encryption and show that every re-randomizable encryption scheme gives rise to efficient encryptions secure against a selective opening adversary. (Very informally, an encryption is re-randomizable, if given any ciphertext, there is an efficient way to map it to an almost uniform re-encryption of the same underlying message). We define re-randomizable one-way functions and show that every re-randomizable one-way function family gives rise to efficient commitments secure against a selective opening adversary. We show that statistically-hiding 2-round Oblivious Transfer (OT) implies Lossy Encryption and so do smooth hash proof systems, as defined by Cramer-Shoup. 
Combining this with known results immediately shows that Private Information Retrieval and homomorphic encryption both imply Lossy Encryption, and thus Selective Opening Secure Public Key Encryption. Applying our constructions to well-known cryptosystems (such as Elgamal or Paillier), we obtain selective opening secure commitments and encryptions from the Decisional Diffie-Hellman (DDH), Decisional Composite Residuosity (DCR) and Quadratic Residuosity (QR) assumptions, that are either simpler or more efficient than existing constructions of Bellare, Hofheinz and Yilek. By applying our general results to the Paillier cryptosystem, we obtain the first cryptosystem to achieve simulation-based selective opening security from the DCR assumption. We provide (indistinguishability and simulation-based) definitions of adaptive chosen-ciphertext security (CCA2) in the selective opening setting and describe the first encryption schemes that provide security in the sense of this definition. In the indistinguishability-based model, we notably achieve short ciphertexts under standard number theoretic assumptions. In the simulation-based security chosen-ciphertext attack scenario, we handle non-adaptive (i.e., CCA1) adversaries and describe the first encryption scheme which is simultaneously CCA1 and SOA-secure.
Last updated:  2012-07-11
Unconditionally Secure Asynchronous Multiparty Computation with Quadratic Communication Per Multiplication Gate
Arpita Patra, Ashish Choudhary, C. Pandu Rangan
Secure multiparty computation (MPC) allows a set of $n$ parties to securely compute an agreed function, even if up to $t$ parties are under the control of an adversary. In this paper, we propose a new {\it Asynchronous secure multiparty computation} (AMPC) protocol that provides information theoretic security with $n = 4t+1$, where $t$ out of $n$ parties can be under the influence of a {\it Byzantine (active)} adversary ${\cal A}_t$ having {\it unbounded computing power}. Our protocol communicates ${\cal O}(n^2 \log|{\mathbb F}|)$ bits per multiplication and involves a negligible error probability of $2^{-\Omega(\kappa)}$, where $\kappa$ is the error parameter and ${\mathbb F}$ is the field over which the computation is carried out. The best known information theoretically secure AMPC with $n=4t+1$ communicates ${\cal O}(n^3 \log|{\mathbb F}|)$ bits per multiplication and does not involve any error probability in computation. Though a negligible error probability is involved, our AMPC protocol provides the best communication complexity among all the known AMPC protocols providing information theoretic security. Moreover, the communication complexity of our AMPC is the same as the communication complexity of the best known AMPC protocol with {\it cryptographic assumptions}. As a tool for our AMPC protocol, we propose a new method of efficiently generating {\it $d$-sharing} of multiple secrets concurrently in the asynchronous setting, which is of independent interest, where $t \leq d \leq 2t$. In the literature, though there are protocols for generating $t$-sharing and $2t$-sharing separately, there is no generic protocol for generating {\it $d$-sharing} for the range $t \leq d \leq 2t$. Moreover, our protocol provides better communication complexity than the existing methods for generating $2t$-sharing.
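The $d$-sharing primitive can be sketched with plain Shamir sharing over a prime field. This is a synchronous, single-secret toy with illustrative parameters; the paper's contribution is generating many such sharings concurrently in the asynchronous setting:

```python
# d-sharing sketch: the secret is the constant term of a random degree-d
# polynomial, party i receives the evaluation at i, and any d+1 shares
# reconstruct the secret by Lagrange interpolation at 0.
import random

P = 2**31 - 1                     # prime field modulus (illustrative)

def share(secret, d, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(d)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):          # Lagrange interpolation at x = 0
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

t, d, n = 2, 3, 9                 # t <= d <= 2t, with n = 4t + 1 parties
shares = share(123456, d, n)
print(reconstruct(shares[: d + 1]))   # -> 123456
```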
Last updated:  2010-09-27
Point Compression for Koblitz Elliptic Curves
P. N. J. Eagle, Steven D. Galbraith, John Ong
Elliptic curves over finite fields have applications in public key cryptography. A Koblitz curve is an elliptic curve $E$ over $\mathbb{F}_2$; the group $E(\mathbb{F}_{2^n})$ has convenient features for efficient implementation of elliptic curve cryptography. Wiener and Zuccherato and Gallant, Lambert and Vanstone showed that one can accelerate the Pollard rho algorithm for the discrete logarithm problem on Koblitz curves. This implies that when using Koblitz curves, one has a lower security per bit than when using general elliptic curves defined over the same field. Hence for a fixed security level, systems using Koblitz curves require slightly more bandwidth. We present a method to reduce this bandwidth when a normal basis representation for $\mathbb{F}_{2^n}$ is used. Our method is appropriate for applications such as Diffie-Hellman key exchange or Elgamal encryption. We show that, with a low probability of failure, our method gives the expected bandwidth for a given security level.
Last updated:  2009-02-24
UC-Secure Source Routing Protocol
Tao Feng, Xian Guo, Jianfeng Ma, Xinghua Li
Multi-path routing provides a reliability guarantee for mobile ad hoc networks. A new method is proposed for analyzing the security of multi-path routing protocols within the framework of Universally Composable (UC) security. Based on a topological model in which adversarial nodes exist, the concept of a plausible route is extended and the definition of a plausible-route set is presented. The plausible-route set is used to describe multi-path routing for ad hoc networks, and a formal security definition based on UC-RP is given. A provably secure Multiple Node-Disjoint Paths source routing protocol (SMNDP) is proposed, addressing the security faults of MNDP in the active adversary model. The new approach shows that the security of SMNDP can be reduced to the security of the message authentication code and the digital signature scheme. SMNDP implements the correctness of the route discovery process, the authentication of node identifiers, and the integrity of route information.
Last updated:  2009-02-24
Simulation without the Artificial Abort: Simplified Proof and Improved Concrete Security for Waters' IBE Scheme
Mihir Bellare, Thomas Ristenpart
Waters' variant of the Boneh-Boyen IBE scheme is attractive because of its efficiency, applications, and security attributes, but suffers from a relatively complex proof with poor concrete security. This is due in part to the proof's ``artificial abort'' step, which has since been inherited by numerous derivative works. It has often been asked whether this step is necessary. We show that it is not, providing a new proof that eliminates this step. The new proof is not only simpler than the original one but offers better concrete security for important ranges of the parameters. As a result, one can securely use smaller groups, resulting in significant efficiency improvements.
Last updated:  2009-02-24
Multi-authority attribute based encryption with honest-but-curious central authority
Vladimir Bozovic, Daniel Socek, Rainer Steinwandt, Viktoria I. Villanyi
An attribute based encryption scheme capable of handling multiple authorities was recently proposed by Chase. The scheme is built upon a single-authority attribute based encryption scheme presented earlier by Sahai and Waters. Chase’s construction uses a trusted central authority that is inherently capable of decrypting arbitrary ciphertexts created within the system. We present a multi-authority attribute based encryption scheme in which only the set of recipients defined by the encrypting party can decrypt a corresponding ciphertext. The central authority is viewed as “honest-but-curious”: on the one hand it honestly follows the protocol, and on the other hand it is curious to decrypt arbitrary ciphertexts thus violating the intent of the encrypting party. The proposed scheme, which like its predecessors relies on the Bilinear Diffie-Hellman assumption, has a complexity comparable to that of Chase’s scheme. We prove that our scheme is secure in the selective ID model and can tolerate an honest-but-curious central authority.
Last updated:  2009-12-02
The Case for Quantum Key Distribution
Douglas Stebila, Michele Mosca, Norbert Lütkenhaus
Quantum key distribution (QKD) promises secure key agreement by using quantum mechanical systems. We argue that QKD will be an important part of future cryptographic infrastructures. It can provide long-term confidentiality for encrypted information without reliance on computational assumptions. Although QKD still requires authentication to prevent man-in-the-middle attacks, it can make use of either information-theoretically secure symmetric key authentication or computationally secure public key authentication: even when using public key authentication, we argue that QKD still offers stronger security than classical key agreement.
Last updated:  2009-04-27
Ensuring Data Storage Security in Cloud Computing
Cong Wang, Qian Wang, Kui Ren, Wenjing Lou
Cloud Computing has been envisioned as the next-generation architecture of IT enterprise. In contrast to traditional solutions, where the IT services are under proper physical, logical and personnel controls, Cloud Computing moves the application software and databases to large data centers, where the management of the data and services may not be fully trustworthy. This unique attribute, however, poses many new security challenges which have not been well understood. In this article, we focus on cloud data storage security, which has always been an important aspect of quality of service. To ensure the correctness of users' data in the cloud, we propose an effective and flexible distributed scheme with two salient features, in contrast to its predecessors. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves the integration of storage correctness assurance and data error localization, i.e., the identification of misbehaving server(s). Unlike most prior works, the new scheme further supports secure and efficient dynamic operations on data blocks, including data update, delete and append. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attack, and even server colluding attacks.
Last updated:  2009-10-26
CoSP: A General Framework For Computational Soundness Proofs
Michael Backes, Dennis Hofheinz, Dominique Unruh
We describe CoSP, a general framework for conducting computational soundness proofs of symbolic models and for embedding these proofs into formal calculi. CoSP considers arbitrary equational theories and computational implementations, and it abstracts away many details that are not crucial for proving computational soundness, such as message scheduling, corruption models, and even the internal structure of a protocol. CoSP enables soundness results, in the sense of preservation of trace properties, to be proven in a conceptually modular and generic way: proving x cryptographic primitives sound for y calculi only requires x+y proofs (instead of x*y proofs without this framework), and the process of embedding calculi is conceptually decoupled from computational soundness proofs of cryptographic primitives. We exemplify the usefulness of CoSP by proving the first computational soundness result for the full-fledged applied pi-calculus under active attacks. Concretely, we embed the applied pi-calculus into CoSP and give a sound implementation of public-key encryption and digital signatures.
Last updated:  2009-11-09
From Dolev-Yao to Strong Adaptive Corruption: Analyzing Security in the Presence of Compromising Adversaries
David Basin, Cas Cremers
We formalize a hierarchy of adversary models for security protocol analysis, ranging from a Dolev-Yao style adversary to more powerful adversaries who can reveal different parts of principals' states during protocol execution. We define our hierarchy by a modular operational semantics describing adversarial capabilities. We use this to formalize various, practically-relevant notions of key and state compromise. Our semantics can be used as a basis for protocol analysis tools. As an example, we extend an existing symbolic protocol-verification tool with our adversary models. The result is the first tool that systematically supports notions such as weak perfect forward secrecy, key compromise impersonation, and adversaries capable of so-called strong corruptions and state-reveal queries. As further applications, we use our model hierarchy to relate different adversarial notions, gaining new insights on their relative strengths, and we use our tool to find new attacks on protocols.
Last updated:  2009-02-16
Attacks on the DECT authentication mechanisms
Stefan Lucks, Andreas Schuler, Erik Tews, Ralf-Philipp Weinmann, Matthias Wenzel
Digital Enhanced Cordless Telecommunications (DECT) is a standard for connecting cordless telephones to a fixed telecommunications network over a short range. The cryptographic algorithms used in DECT are not publicly available. In this paper we reveal one of the two algorithms used by DECT, the DECT Standard Authentication Algorithm (DSAA). We give a very detailed security analysis of the DSAA including some very effective attacks on the building blocks used for DSAA as well as a common implementation error that can practically lead to a total break of DECT security. We also present a low cost attack on the DECT protocol, which allows an attacker to impersonate a base station and therefore listen to and reroute all phone calls made by a handset.
Last updated:  2009-02-16
On the Security of Iterated Hashing based on Forgery-resistant Compression Functions
Charles Bouillaguet, Orr Dunkelman, Pierre-Alain Fouque, Antoine Joux
In this paper we re-examine the security notions suggested for hash functions, with an emphasis on the delicate notion of second preimage resistance. We start by showing that, in the random oracle model, both Merkle-Damgaard and HAIFA achieve second preimage resistance beyond the birthday bound, and actually up to the level of known generic attacks, hence demonstrating the optimality of HAIFA in this respect. We then try to distill a more elementary requirement out of the compression function to get some insight on the properties it should have to guarantee the second preimage resistance of its iteration. We show that if the (keyed) compression function is a secure FIL-MAC then the Merkle-Damgaard mode of iteration (or HAIFA) still maintains the same level of second preimage resistance. We conclude by showing that this ``new'' assumption (or security notion) implies the recently introduced Preimage-Awareness while ensuring all other classical security notions for hash functions.
Last updated:  2009-02-16
Construction of large families of pseudorandom subsets using elliptic curves
Zhixiong Chen, Chenhuang Wu
Recently, Dartyge and Sárközy investigated the measures, i.e., the well distribution measure and the correlation measure of order $k$, of pseudorandomness of subsets of the set $\{1, 2,\ldots, N\}$, and they presented several constructive examples for subsets with strong pseudorandom properties when $N$ is a prime number. In this article, we present a construction of pseudorandom subsets using elliptic curves over finite fields and estimate the pseudorandom measures. Character sums play an important role in the proofs.
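The pseudorandomness measures mentioned above can be computed directly for toy examples. The sketch below brute-forces one common formulation of the correlation measure of order $k$ for a $\pm1$ sequence (the subset version used by Dartyge and Sárközy is analogous); it is illustrative only and exponentially slow:

```python
from itertools import combinations

def correlation_measure(e, k):
    """Correlation measure of order k of a +/-1 sequence e, by exhaustive
    search over all lag tuples D = (d_1 < ... < d_k) and window lengths M."""
    N = len(e)
    best = 0
    for D in combinations(range(N), k):
        for M in range(1, N - D[-1] + 1):
            s = 0
            for n in range(M):
                t = 1
                for d in D:
                    t *= e[n + d]  # product e_{n+d_1} * ... * e_{n+d_k}
                s += t
            best = max(best, abs(s))
    return best

# both a constant and an alternating sequence correlate badly:
print(correlation_measure([1] * 8, 2))       # 7
print(correlation_measure([1, -1] * 4, 2))   # 7
```

The alternating sequence looks balanced but has maximal order-2 correlation, which is exactly why these finer measures, rather than mere balance, are used to certify pseudorandomness.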
Last updated:  2010-07-29
Security of Practical Cryptosystems Using Merkle-Damgard Hash Function in the Ideal Cipher Model
Yusuke Naito, Kazuki Yoneyama, Lei Wang, Kazuo Ohta
Since the Merkle-Damgård (MD) type hash functions are differentiable from ROs even when compression functions are modeled by ideal primitives, there is no guarantee as to the security of cryptosystems when ROs are instantiated with structural hash functions. In this paper, we study the security of the instantiated cryptosystems where the hash functions have the well-known structure of Merkle-Damgård construction with Stam's type-II compression function (denoted MD-TypeII) in the Ideal Cipher Model (ICM). Note that since the Type-II scheme includes the Davies-Meyer compression function, SHA-256 and SHA-1 have the MD-TypeII structure. We show that OAEP, RSA-KEM, PSEC-KEM, ECIES-KEM and many other encryption schemes are secure when using the MD-TypeII hash function. In order to show this, we customize the indifferentiability framework of Maurer, Renner and Holenstein. We call the customized framework ``indifferentiability with condition''. In this framework, for some condition $\alpha$ that cryptosystem $C$ satisfies, if hash function $H$ is indifferentiable from RO under condition $\alpha$, $C$ is secure when RO is instantiated with $H$. We note that the above schemes satisfy a ``prefix-free'' condition. We show that the MD-TypeII hash function is indifferentiable from RO under this condition. When the output length of RO is incompatible with that of the hash function, the output size is expanded by Key Derivation Functions (KDFs). Since a KDF is specified as MGF1 in RSA's PKCS $\#$1 V2.1, its security analysis is important in practice. We show that KDFs using the MD-TypeII hash function (KDF-MD-TypeII) are indifferentiable from ROs under this prefix-free condition. Therefore, we can conclude that the above practical encryption schemes are secure even when ROs are instantiated with (KDF-)MD-TypeII hash functions.
Dodis, Ristenpart and Shrimpton showed that FDH, PSS, Fiat-Shamir, and so on are secure when RO is instantiated with the MD-TypeII hash function in the ICM; their analyses use a different approach from ours, called indifferentiability from public-use RO (pub-RO). They showed that the above cryptosystems are secure in the pub-RO model and that the MD-TypeII hash function is indifferentiable from pub-RO. Since their analyses did not consider the structure of KDFs, there might exist some attack using a KDF's structure. We show that KDFs using pub-RO (KDF-pub-RO) are differentiable from pub-RO. Thus, we cannot trivially extend the result of Dodis et al. to the indifferentiability of KDF-MD-TypeII hash functions. We propose a new oracle called private interface leak RO (privleak-RO). We show that KDF-pub-ROs are indifferentiable from privleak-ROs and that the above cryptosystems are secure in the privleak-RO model. Therefore, by combining the result of Dodis et al. with our result, we can conclude that the above cryptosystems are secure when ROs are instantiated with KDF-MD-TypeII hash functions. Since OAEP, RSA-KEM, PSEC-KEM, ECIES-KEM and many other encryption schemes are insecure in the pub-RO (privleak-RO) model, we cannot confirm the security of these encryption schemes from the approach of Dodis et al. Therefore, the result of Dodis et al. can be supplemented with our result. Consequently, from the two results we can confirm the security of almost all practical cryptosystems when ROs are instantiated with (KDF-)MD-TypeII hash functions.
Last updated:  2009-05-29
Computational Oblivious Transfer and Interactive Hashing
Kirill Morozov, George Savvides
We use interactive hashing to achieve the most efficient OT protocol to date based solely on the assumption that trapdoor permutations (TDP) exist. Our protocol can be seen as the following (simple) modification of either of the two famous OT constructions: 1) In the one by Even et al (1985), a receiver must send a random domain element to a sender through IH; 2) In the one by Ostrovsky et al (1993), the players should use TDP instead of one-way permutation. A similar approach is employed to achieve oblivious transfer based on the security of the McEliece cryptosystem. In this second protocol, the receiver inputs a public key into IH, while privately keeping the corresponding secret key. Two different versions of IH are used: the computationally secure one in the first protocol, and the information-theoretically secure one in the second.
Last updated:  2009-02-16
Automatic Approach of Provable Security and its Application for OAEP+
GU Chun-Xiang, Guang Yan, ZHU Yue-Fei
Provable security is an important criterion for analyzing the security of cryptographic protocols. However, writing and verifying proofs by hand is prone to errors. This paper introduces the game-based approach to writing security proofs and techniques for automating it. It advocates an automatic security proof approach based on process calculus, and presents the initial game and observational equivalences of OAEP+.
Last updated:  2009-02-16
Implementing cryptographic pairings: a magma tutorial
Luis J Dominguez Perez, Ezekiel J Kachisa, Michael Scott
In this paper we show an efficient implementation of the Tate, ate, and R-ate pairings in Magma. This is demonstrated using the KSS curves with embedding degree k=18.
Last updated:  2009-02-16
Secret sharing on trees: problem solved
Laszlo Csirmaz, Gabor Tardos
We determine the worst case information rate for all secret sharing schemes based on trees. It is the inverse of $2-1/c$, where $c$ is the size of the maximal core in the tree. A {\it core} is a connected subset of the vertices so that every vertex in the core has a neighbor outside the core. The upper bound comes from an application of the entropy method, while the lower bound is achieved by a construction using Stinson's decomposition theorem. It is shown that $2-1/c$ is also the {\it fractional cover number} of the tree where the edges of the tree are covered by stars, and the vertex cover should be minimized. We also give an $O(n^2)$ algorithm which finds an optimal cover on any tree, and thus a perfect secret sharing scheme with optimal rate.
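The core notion above is easy to compute by brute force on small trees. The sketch below (illustrative only, exponential in the number of vertices) finds the maximal core size $c$ and the resulting worst-case rate $1/(2-1/c)$:

```python
from itertools import combinations

def is_core(tree, sub):
    """A core is a connected vertex set in which every vertex
    has at least one neighbour outside the set."""
    sub = set(sub)
    start = next(iter(sub))
    seen, stack = {start}, [start]      # BFS/DFS restricted to sub
    while stack:
        v = stack.pop()
        for w in tree[v]:
            if w in sub and w not in seen:
                seen.add(w)
                stack.append(w)
    if seen != sub:                     # not connected
        return False
    return all(any(w not in sub for w in tree[v]) for v in sub)

def max_core(tree):
    verts = list(tree)
    best = 0
    for r in range(1, len(verts) + 1):
        for sub in combinations(verts, r):
            if is_core(tree, sub):
                best = max(best, r)
    return best

def worst_case_rate(tree):
    c = max_core(tree)
    return c / (2 * c - 1)              # inverse of 2 - 1/c

# adjacency lists: a path on 4 vertices and a star with 3 leaves
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
star3 = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(worst_case_rate(path4))  # 2/3
print(worst_case_rate(star3))  # 1.0 (stars are ideal)
```

For the path, the middle pair of vertices forms a core of size 2, giving the familiar rate 2/3; for a star, no core larger than a single vertex exists, so the rate is 1.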
Last updated:  2009-11-13
Low Complexity Cubing and Cube Root Computation over $\F_{3^m}$ in Polynomial Basis
Omran Ahmadi, Francisco Rodríguez-Henriquez
We present low complexity formulae for the computation of cubing and cube root over $\F_{3^m}$ constructed using special classes of irreducible trinomials, tetranomials and pentanomials. We show that for all those special classes of polynomials, field cubing and field cube root operation have the same computational complexity when implemented in hardware or software platforms. As one of the main applications of these two field arithmetic operations lies in pairing-based cryptography, we also give in this paper a selection of irreducible polynomials that lead to low cost field cubing and field cube root computations for supersingular elliptic curves defined over $\F_{3^m}$, where $m$ is a prime number in the pairing-based cryptographic range of interest, namely, $m\in [47, 541]$.
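Why field cubing is cheap in characteristic 3 can be seen in a toy computation: since $3 \mid \binom{3}{1}, \binom{3}{2}$ and $a^3 = a$ in $\F_3$, cubing is $\F_3$-linear, i.e. $(\sum a_i x^i)^3 = \sum a_i x^{3i}$, so only the reduction modulo the defining polynomial costs anything. The sketch below (plain Python, with $f = x^5 + x + 2$ as an assumed modulus; the identity holds whether or not $f$ is irreducible) checks this against naive cubing:

```python
def polymul3(a, b):
    # multiply coefficient lists over F_3 (index = degree)
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % 3
    return r

def polymod3(a, f):
    # reduce a modulo a monic polynomial f over F_3
    a = a[:]
    df = len(f) - 1
    for i in range(len(a) - 1, df - 1, -1):
        c = a[i]
        if c:
            for j in range(df + 1):
                a[i - df + j] = (a[i - df + j] - c * f[j]) % 3
    return a[:df]

def cube_naive(a, f):
    # two full multiplications, then reduction
    return polymod3(polymul3(polymul3(a, a), a), f)

def cube_frobenius(a, f):
    # (sum a_i x^i)^3 = sum a_i x^{3i} in characteristic 3: just
    # spread the coefficients to degrees 3i and reduce mod f
    spread = [0] * (3 * (len(a) - 1) + 1)
    for i, ai in enumerate(a):
        spread[3 * i] = ai
    return polymod3(spread, f)

f = [2, 1, 0, 0, 0, 1]   # x^5 + x + 2, monic
a = [1, 2, 0, 1, 2]
print(cube_naive(a, f) == cube_frobenius(a, f))  # True
```

The linear route does no multiplications at all; for the special trinomials, tetranomials and pentanomials of the paper, the reduction step itself collapses to a handful of additions.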
Last updated:  2013-03-29
Optimistic Fair Exchange with Multiple Arbiters
Alptekin Kupcu, Anna Lysyanskaya
Fair exchange is one of the most fundamental problems in secure distributed computation. Alice has something that Bob wants, and Bob has something that Alice wants. A fair exchange protocol would guarantee that, even if one of them maliciously deviates from the protocol, either both of them get the desired content, or neither of them do. It is known that no two-party protocol can guarantee fairness in general; therefore the presence of a trusted arbiter is necessary. In optimistic fair exchange, the arbiter only gets involved in case of faults, but needs to be trusted. To reduce the trust put in the arbiter, it is natural to consider employing multiple arbiters. Expensive techniques like Byzantine agreement or secure multi-party computation with $\Omega(n^2)$ communication can be applied to distribute arbiters in a non-autonomous way. Yet we are interested in efficient protocols that can be achieved by keeping the arbiters autonomous (non-communicating), especially for p2p settings in which the arbiters do not even know each other. Avoine and Vaudenay employ multiple autonomous arbiters in their optimistic fair exchange protocol which uses global timeout mechanisms; all arbiters have access to -loosely- synchronized clocks. They left two open questions regarding the use of distributed autonomous arbiters: (1) Can an optimistic fair exchange protocol without timeouts provide fairness (since it is hard to achieve synchronization in a p2p setting) when employing multiple autonomous arbiters? (2) Can any other optimistic fair exchange protocol with timeouts achieve better bounds on the number of honest arbiters required? In this paper, we answer both questions negatively. To answer these questions, we define a general class of optimistic fair exchange protocols with multiple arbiters, called "distributed arbiter fair exchange" (DAFE) protocols.
Informally, in a DAFE protocol, if a participant fails to send a correctly formed message, the other party must contact some subset of the arbiters and get correctly formed responses from them. The arbiters do not communicate with each other, but only with Alice and Bob. We prove that no DAFE protocol can meaningfully exist.
Last updated:  2009-02-10
Overview of Turbo-Code Reconstruction Techniques
Johann Barbier, Eric Filiol
In this paper we analyze different techniques to blindly recover the parameters of turbo-code encoders with only the knowledge of a noisy eavesdropped binary stream. We compare different strategies to reconstruct convolutional encoders, particularly when both are recursive systematic coders. We explain the intrinsic indetermination due to these techniques but also how to overcome such an issue in the case of turbo-code encoders. Moreover, we generalize Burel et al.'s particular reconstruction of the second (n,1)-encoder to (n,k)-encoders.
Last updated:  2009-02-10
On fractional correlation immunity of majority functions
Chuan-Kun Wu
Correlation immunity is known as an important cryptographic measure of a Boolean function with respect to its resistance against correlation attacks. This paper generalizes the concept of correlation immunity to a fractional value, called fractional correlation immunity, which is a fraction between 0 and 1; a correlation immune function is the extreme case where the fractional correlation immunity is 1. However, when a function is not correlation immune in the traditional sense, it may still have a nonzero fractional correlation immunity, which also indicates the resistance of the function against correlation attacks. This paper first shows how this generalized concept of fractional correlation immunity is a reasonable measure of the resistance against correlation attacks, then studies the fractional correlation immunity of a special class of Boolean functions, i.e. majority functions, of which the subset of symmetric ones has been proved to have the highest algebraic immunity. This paper shows that all the majority functions, including the symmetric ones and the non-symmetric ones, are not correlation immune. However, their fractional correlation immunity approaches 1 as the number of variables grows. This means that this class of functions also has good resistance against correlation attacks, although they are not correlation immune in the traditional sense.
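The claim that majority functions are not correlation immune, yet correlate less and less with any single input as the number of variables grows, can be checked by brute force for small $n$. A minimal sketch (illustrative only; it measures plain first-order correlation, not the paper's fractional measure):

```python
from itertools import product

def majority(bits):
    # n odd: output 1 iff more than half the inputs are 1
    return int(sum(bits) > len(bits) // 2)

def max_first_order_correlation(n):
    """Largest |correlation| between majority and any single input variable,
    by brute force over all 2^n inputs; a nonzero value means the function
    is not correlation immune (of order 1)."""
    best = 0.0
    for i in range(n):
        s = sum((-1) ** (majority(x) ^ x[i])
                for x in product((0, 1), repeat=n))
        best = max(best, abs(s) / 2 ** n)
    return best

for n in (3, 5, 7):
    print(n, max_first_order_correlation(n))
# 3 0.5, 5 0.375, 7 0.3125 -- always positive, but shrinking with n
```

The correlation stays strictly positive (so the functions are never correlation immune) while decaying as $n$ grows, which is the qualitative behaviour the fractional measure captures.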
Last updated:  2009-05-22
Adaptive Preimage Resistance and Permutation-based Hash Functions
Jooyoung Lee, Je Hong Park
In this paper, we introduce a new notion of security, called \emph{adaptive preimage resistance}. We prove that a compression function that is collision resistant and adaptive preimage resistant can be combined with a public random function to yield a hash function that is indifferentiable from a random oracle. Specifically, we analyze adaptive preimage resistance of $2n$-bit to $n$-bit compression functions that use three calls to $n$-bit public random permutations. This analysis also provides a simpler proof of their collision resistance and preimage resistance than the one provided by Rogaway and Steinberger. By using such compression functions as building blocks, we obtain permutation-based pseudorandom oracles that outperform the Sponge construction and the MD6 compression function both in terms of security and efficiency.
Last updated:  2009-02-10
Foundations of Non-Malleable Hash and One-Way Functions
Alexandra Boldyreva, David Cash, Marc Fischlin, Bogdan Warinschi
Non-malleability is an interesting and useful property which ensures that a cryptographic protocol preserves the independence of the underlying values: given for example an encryption Enc(m) of some unknown message m, it should be hard to transform this ciphertext into some encryption Enc(m*) of a related message m*. This notion has been studied extensively for primitives like encryption, commitments and zero-knowledge. Non-malleability of one-way functions and hash functions has surfaced as a crucial property in several recent results, but it has not undergone a comprehensive treatment so far. In this paper we initiate the study of such non-malleable functions. We start with the design of an appropriate security definition. We then show that non-malleability for hash and one-way functions can be achieved, via a theoretical construction that uses perfectly one-way hash functions and simulation-sound non-interactive zero-knowledge proofs of knowledge (NIZKPoK). We also discuss the complexity of non-malleable hash and one-way functions. Specifically, we give a black-box based separation of non-malleable functions from one-way permutations (which our construction bypasses due to the 'non-black-box' NIZKPoK). We exemplify the usefulness of our definition in cryptographic applications by showing that non-malleability is necessary and sufficient to securely replace one of the two random oracles in the IND-CCA encryption scheme by Bellare and Rogaway, and to improve the security of client-server puzzles.
Last updated:  2009-02-12
On the Data Complexity of Statistical Attacks Against Block Ciphers (full version)
Céline Blondeau, Benoît Gérard
Many attacks on iterated block ciphers rely on statistical considerations using plaintext/ciphertext pairs to distinguish some part of the cipher from a random permutation. We provide here a simple formula for estimating the amount of plaintext/ciphertext pairs which is needed for such distinguishers and which applies to a lot of different scenarios (linear cryptanalysis, differential-linear cryptanalysis, differential/truncated differential/impossible differential cryptanalysis). The asymptotic data complexities of all these attacks are then derived. Moreover, we give an efficient algorithm for computing the data complexity accurately.
Last updated:  2009-02-16
CCZ-equivalence and Boolean functions
Lilya Budaghyan, Claude Carlet
We further study CCZ-equivalence of $(n,m)$-functions. We prove that for Boolean functions (that is, for $m=1$), CCZ-equivalence coincides with EA-equivalence. On the contrary, we show that for $(n,m)$-functions, CCZ-equivalence is strictly more general than EA-equivalence when $n\ge5$ and $m$ is greater than or equal to the smallest positive divisor of $n$ different from 1. Our result on Boolean functions allows us to study the natural generalization of CCZ-equivalence corresponding to the CCZ-equivalence of the indicators of the graphs of the functions. We show that it coincides with CCZ-equivalence.
Last updated:  2010-02-11
On Deterministic Polynomial-Time Equivalence of Computing the CRT-RSA Secret Keys and Factoring
Subhamoy Maitra, Santanu Sarkar
Let $N = pq$ be the product of two large primes. Consider CRT-RSA with the public encryption exponent $e$ and private decryption exponents $d_p, d_q$. It is well known that given any one of $d_p$ or $d_q$ (or both) one can factorize $N$ in probabilistic poly$(\log N)$ time with success probability almost equal to 1. Though this serves all the practical purposes, from a theoretical point of view, this is not a deterministic polynomial time algorithm. In this paper, we present a lattice based deterministic poly$(\log N)$ time algorithm that uses both $d_p, d_q$ (in addition to the public information $e, N$) to factorize $N$ for certain ranges of $d_p, d_q$. We would like to stress that proving the equivalence for all values of $d_p, d_q$ may be a nontrivial task.
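The well-known probabilistic reduction mentioned above is short enough to sketch: since $e \cdot d_p \equiv 1 \pmod{p-1}$, Fermat's little theorem gives $a^{e d_p - 1} \equiv 1 \pmod p$ for any $a$ coprime to $p$, so a gcd with $N$ reveals $p$ with overwhelming probability. The parameters below are toy values chosen only for illustration:

```python
import math
import random

def factor_with_dp(N, e, dp, tries=32):
    """Probabilistic factoring of N = p*q given the CRT exponent dp.
    Since e*dp = 1 (mod p-1), a^(e*dp - 1) = 1 (mod p), so
    gcd(a^(e*dp - 1) - 1, N) is a multiple of p for almost every a."""
    k = e * dp - 1
    for _ in range(tries):
        a = random.randrange(2, N - 1)
        g = math.gcd(pow(a, k, N) - 1, N)
        if 1 < g < N:
            return g
    return None

# toy parameters (far too small for real use)
p, q, e = 1000003, 1000033, 65537
N = p * q
dp = pow(e, -1, p - 1)
print(factor_with_dp(N, e, dp))  # a nontrivial factor of N
```

A random trial only fails when the chosen $a$ also satisfies $a^{e d_p - 1} \equiv 1 \pmod q$, which happens for a negligible fraction of bases, hence the small retry loop.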
Last updated:  2009-02-06
Security Enhancement of Various MPKCs by 2-layer Nonlinear Piece In Hand Method
Shigeo Tsujii, Kohtaro Tadaki, Ryou Fujita, Masahito Gotaishi, Toshinobu Kaneko
Following the last proposal of the nonlinear Piece in Hand method, which has 3-layer structure, 2-layer nonlinear Piece in Hand method is proposed. Both of them aim at enhancing the security of existing and future multivariate public key cryptosystems. The new nonlinear Piece in Hand is compared with the 3-layer method and PMI+, which was proposed by Ding, et al.
Last updated:  2009-09-22
Comparing Two Pairing-Based Aggregate Signature Schemes
Sanjit Chatterjee, Darrel Hankerson, Edward Knapp, Alfred Menezes
In 2003, Boneh, Gentry, Lynn and Shacham (BGLS) devised the first provably-secure aggregate signature scheme. Their scheme uses bilinear pairings and their security proof is in the random oracle model. The first pairing-based aggregate signature scheme which has a security proof that does not make the random oracle assumption was proposed in 2006 by Lu, Ostrovsky, Sahai, Shacham and Waters (LOSSW). In this paper, we compare the security and efficiency of the BGLS and LOSSW schemes when asymmetric pairings derived from Barreto-Naehrig (BN) elliptic curves are employed.
Last updated:  2009-02-06
On the impossibility of graph secret sharing
Laszlo Csirmaz
A {\it perfect secret sharing scheme} based on a graph $G$ is a randomized distribution of a secret among the vertices of the graph so that the secret can be recovered from the information assigned to vertices at the endpoints of any edge, while the total information assigned to an independent set of vertices is independent (in statistical sense) of the secret itself. The efficiency of a scheme is measured by the amount of information the most heavily loaded vertex receives divided by the amount of information in the secret itself. The (worst case) {\it information ratio} of $G$ is the infimum of this number. We calculate the best lower bound on the information ratio for an infinite family of graphs the celebrated entropy method can give. We argue that all existing constructions for secret sharing schemes are special cases of the generalized vector space construction. We give direct constructions of this type for the first two members of the family, and show that for the other members no construction exists which would match the bound yielded by the entropy method.
Last updated:  2010-10-11
On Generalization of Cheon's Algorithm
Takakazu Satoh
We consider a generalization of Cheon's algorithm on the strong Diffie-Hellman problem. More specifically, we consider the circumstance that p^k-1 has a small divisor for k>=3, where p is the order of the group on which we consider the strong Diffie-Hellman problem. It seems that our algorithm is only effective for k=1, 2, that is, the cases already covered by Cheon's original algorithm.
Last updated:  2009-02-06
Anonymity in Shared Symmetric Key Primitives
Gregory M. Zaverucha, Douglas R. Stinson
We provide a stronger definition of anonymity in the context of shared symmetric key primitives, and show that existing schemes do not provide this level of anonymity. A new scheme is presented to share symmetric key operations amongst a set of participants according to a $(t,n)$-threshold access structure. We quantify the amount of information the output of the shared operation provides about the group of participants which collaborated to produce it.
Last updated:  2009-07-14
Designing an ASIP for Cryptographic Pairings over Barreto-Naehrig Curves
David Kammler, Diandian Zhang, Peter Schwabe, Hanno Scharwaechter, Markus Langenberg, Dominik Auras, Gerd Ascheid, Rainer Leupers, Rudolf Mathar, Heinrich Meyr
This paper presents a design-space exploration of an application-specific instruction-set processor (ASIP) for the computation of various cryptographic pairings over Barreto-Naehrig curves (BN curves). Cryptographic pairings are based on elliptic curves over finite fields--in the case of BN curves a field $\mathbb{F}_p$ of large prime order $p$. Efficient arithmetic in these fields is crucial for fast computation of pairings. Moreover, computation of cryptographic pairings is much more complex than elliptic-curve cryptography (ECC) in general. Therefore, we facilitate programming of the proposed ASIP by providing a C compiler. In order to speed up $\mathbb{F}_p$-arithmetic, a RISC core is extended with additional functional units. The critical path delay of these units is adjusted to the base architecture in order to maintain the operating frequency. Independently from that adjustment, these units are scalable allowing for a trade-off between execution time and area consumption. Because the resulting speedup can be limited by the memory throughput, utilization of multiple data memories is proposed. However, developing a C compiler for multiple memories is a challenging task. Therefore, we introduce an enhanced memory system enabling multiple concurrent memory accesses while remaining totally transparent to the C compiler. The proposed design needs 15.8 ms for the computation of the Optimal-Ate pairing over a 256-bit BN curve at 338 MHz implemented with a 130 nm standard cell library. The processor core consumes 97 kGates making it suitable for the use in embedded systems.
Last updated:  2009-08-11
Universally Composable Symmetric Encryption
Ralf Kuesters, Max Tuengerthal
For most basic cryptographic tasks, such as public key encryption, digital signatures, authentication, key exchange, and many other more sophisticated tasks, ideal functionalities have been formulated in the simulation-based security approach, along with their realizations. Surprisingly, however, no such functionality exists for symmetric encryption, except for a more abstract Dolev-Yao style functionality. In this paper, we fill this gap. We propose two functionalities for symmetric encryption, an unauthenticated and an authenticated version, and show that they can be implemented based on standard cryptographic assumptions for symmetric encryption schemes, namely IND-CCA security and authenticated encryption, respectively. We also illustrate the usefulness of our functionalities in applications, both in simulation-based and game-based security settings.
Last updated:  2009-02-04
On the Security of Tandem-DM
Ewan Fleischmann, Michael Gorski, Stefan Lucks
We provide the first proof of security for Tandem-DM, one of the oldest and most well-known constructions for turning a blockcipher with n-bit blocklength and 2n-bit keylength into a 2n-bit cryptographic hash function. We prove that, when Tandem-DM is instantiated with AES-256, i.e. blocklength 128 bits and keylength 256 bits, any adversary that asks fewer than 2^{120.4} queries cannot find a collision with success probability greater than 1/2. We also prove a bound for preimage resistance of Tandem-DM. Interestingly, as there is only one practical construction known (FSE'06, Hirose) turning such an (n,2n)-bit blockcipher into a 2n-bit compression function that has provably birthday-type collision resistance, Tandem-DM is one out of two structures that possess this desirable feature.
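The Tandem-DM construction itself is short: two chained encryptions, W = E(H||M, G), G' = G xor W, H' = H xor E(M||W, H). A minimal sketch follows, with a toy 4-round Feistel permutation standing in for the (n, 2n)-bit block cipher (the security proof assumes an ideal cipher such as AES-256, not this toy; all names and sizes here are illustrative):

```python
import hashlib

N = 8  # toy block size in bytes (n = 64 bits), key size 2N

def _round(key, half, rnd):
    # keyed round function for the toy Feistel cipher
    return hashlib.sha256(key + bytes([rnd]) + half).digest()[:N // 2]

def E(key, block):
    """Toy 4-round Feistel network: a keyed permutation with N-byte
    blocks and 2N-byte keys, standing in for an (n, 2n) block cipher."""
    L, R = block[:N // 2], block[N // 2:]
    for rnd in range(4):
        L, R = R, bytes(a ^ b for a, b in zip(L, _round(key, R, rnd)))
    return L + R

def _xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def tandem_dm(G, H, M):
    """Tandem-DM compression: maps the 2n-bit chaining value (G, H) and
    an n-bit message block M to a new 2n-bit chaining value."""
    W = E(H + M, G)            # first encryption, key H || M
    G2 = _xor(G, W)
    H2 = _xor(H, E(M + W, H))  # second encryption reuses W in its key
    return G2, H2

g, h = tandem_dm(b"G" * N, b"H" * N, b"M" * N)
print(g.hex(), h.hex())
```

The "tandem" in the name refers to the second encryption keying itself with the output W of the first, which is what ties the two n-bit halves of the state together.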
Last updated:  2009-02-03
New commutative semifields defined by PN multinomials
Lilya Budaghyan, Tor Helleseth
We introduce infinite families of perfect nonlinear Dembowski-Ostrom multinomials over $F_{p^{2k}}$ where $p$ is any odd prime. We prove that for $k$ odd and $p\ne3$ these PN functions define new commutative semifields (in part by studying the nuclei of these semifields). This implies that these functions are CCZ-inequivalent to all previously known PN mappings.
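Perfect nonlinearity can be checked by brute force on small examples. The sketch below works over a prime field $F_p$ rather than the fields $F_{p^{2k}}$ treated above, and uses the classical PN function $x \mapsto x^2$ purely as an illustration of the defining property:

```python
def is_perfect_nonlinear(f, p):
    """Brute-force PN test over the prime field F_p: f is perfect
    nonlinear iff the derivative x -> f(x + a) - f(x) is a bijection
    of F_p for every nonzero a."""
    for a in range(1, p):
        derivative_image = {(f((x + a) % p) - f(x)) % p for x in range(p)}
        if len(derivative_image) != p:
            return False
    return True

print(is_perfect_nonlinear(lambda x: x * x % 7, 7))       # True
print(is_perfect_nonlinear(lambda x: pow(x, 3, 7), 7))    # False
print(is_perfect_nonlinear(lambda x: x, 7))               # False
```

For $f(x) = x^2$ the derivative is $2ax + a^2$, a bijection whenever $p$ is odd, which is why PN (planar) functions exist only in odd characteristic; a linear function fails because its derivative is constant.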
Last updated:  2009-05-01
ON THE SECURITY OF TWO RING SIGNCRYPTION SCHEMES
S. Sree Vivek, S. Sharmila Deva Selvi, C. Pandu Rangan
Ring signcryption is a cryptographic primitive that allows a user to send a message in a confidential, authentic and anonymous way, i.e. the recipient of the message is convinced that the message is valid and that it comes from one of the ring members, but does not know the actual sender. In this paper, we show attacks on the ring signcryption schemes by Li et al. \cite{FHY08} and Chung et al. \cite{ChungWLC06}. We demonstrate an anonymity attack and a confidentiality attack on the scheme by Li et al. \cite{FHY08} and a confidentiality attack on the scheme by Chung et al. \cite{ChungWLC06}.
Last updated:  2009-12-05
Enhanced Target Collision Resistant Hash Functions Revisited
Mohammad Reza Reyhanitabar, Willy Susilo, Yi Mu
Enhanced Target Collision Resistance (eTCR) property for a hash function was put forth by Halevi and Krawczyk in Crypto 2006, in conjunction with the randomized hashing mode that is used to realize such a hash function family. eTCR is a strengthened variant of the well-known TCR (or UOWHF) property for a hash function family (i.e. a dedicated-key hash function). The contributions of this paper are twofold. First, we compare the new eTCR property with the well-known collision resistance (CR) property, where both properties are considered for a dedicated-key hash function. We show there is a separation between the two notions, that is, in general, eTCR property cannot be claimed to be weaker (or stronger) than CR property for any arbitrary dedicated-key hash function. Second, we consider the problem of eTCR property preserving domain extension. We study several domain extension methods for this purpose, including (Plain, Strengthened, and Prefix-free) Merkle-Damgård, Randomized Hashing (considered in dedicated-key hash setting), Shoup, Enveloped Shoup, XOR Linear Hash (XLH), and Linear Hash (LH) methods. Interestingly, we show that the only eTCR preserving method is a nested variant of LH which has a drawback of having high key expansion factor. Therefore, it is interesting to design a new and efficient eTCR preserving domain extension in the standard model.
Last updated:  2009-01-30
On the Portability of Generalized Schnorr Proofs
Jan Camenisch, Aggelos Kiayias, Moti Yung
The notion of Zero Knowledge Proofs (of knowledge) [ZKP] is central to cryptography; it provides a set of security properties that have proved indispensable in concrete protocol design. These properties are defined for any given input and also for any auxiliary verifier private state, as they are aimed at any use of the protocol as a subroutine in a bigger application. Often, however, moving the theoretical notion to practical designs has been quite problematic. This is due to the fact that the most efficient protocols fail to provide the above ZKP properties {\em for all} possible inputs and verifier states. This situation has created various problems for protocol designers, who have either introduced imperfect protocols with mistakes or missing security arguments, or have been forced to use much less efficient protocols in order to achieve the required properties. In this work we address this issue by introducing the notion of ``protocol portability,'' a property that identifies input and verifier state distributions under which a protocol becomes a ZKP when called as a subroutine in a sequential execution of a larger application. We then concentrate on the very efficient and heavily employed ``Generalized Schnorr Proofs'' (GSP) and identify the portability of such protocols. We also point to previous protocol weaknesses and errors that have been made in numerous applications throughout the years, due to the employment of GSP instances without the notion of portability (primarily in the case of unknown order groups). This demonstrates that cryptographic application designers who care about efficiency need to consider our notion carefully. We provide a compact specification language for GSP protocols that protocol designers can employ.
Our specification language is consistent with the ad-hoc notation that is currently widely used and it offers automatic derivation of the proof protocol while dictating its portability (i.e., the proper initial state and inputs) and its security guarantees. Thus, our language specifications can be used modularly in designs and proofs. This assures that the protocol implementation can indeed be used as a subroutine that is ZKP in its context. Finally, as a second alternative to designers wishing to use GSPs, we present a modification of GSP protocols that is unconditionally portable (i.e., ZKP) and is still quite efficient. Our constructions are the first such protocols proven secure in the standard model (while the previously known efficient constructions relied on the Random Oracle model).
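For intuition, the basic Schnorr proof of knowledge of a discrete logarithm, the building block that Generalized Schnorr Proofs extend (including to unknown-order groups), runs commit–challenge–respond. A minimal sketch with toy-sized, illustrative parameters (the group, generator, and values below are assumptions for demonstration, not the paper's protocol):

```python
# Toy Schnorr proof of knowledge of x with y = g^x mod p.
# Parameters are illustrative only: subgroup of order q = 11 in Z_23^*.
import random

p, q, g = 23, 11, 4          # g = 4 has order 11 mod 23
x = random.randrange(1, q)   # prover's secret
y = pow(g, x, p)             # public value y = g^x mod p

# Commit - challenge - respond (Fiat-Shamir would hash (g, y, t) instead).
r = random.randrange(q)
t = pow(g, r, p)             # prover's commitment
c = random.randrange(q)      # verifier's challenge
s = (r + c * x) % q          # prover's response

# Verification equation: g^s == t * y^c (mod p).
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```

The verification holds identically because g^s = g^(r + c*x) = g^r * (g^x)^c; the portability issues studied in the paper arise when such equations are run over groups of unknown order.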
Last updated:  2009-06-19
Extensions of the Cube Attack based on Low Degree Annihilators
Aileen Zhang, Chu-Wee Lim, Khoongming Khoo, Wei Lei, Josef Pieprzyk
At Crypto 2008, Shamir introduced a new algebraic attack called the cube attack, which allows us to solve black-box polynomials if we are able to tweak the inputs by varying an initialization vector. In a stream cipher setting where the filter function is known, we can extend it to the cube attack with annihilators: By applying the cube attack to Boolean functions for which we can find low-degree multiples (equivalently annihilators), the attack complexity can be improved. When the size of the filter function is smaller than the LFSR, we can improve the attack complexity further by considering a sliding window version of the cube attack with annihilators. Finally, we extend the cube attack to vectorial Boolean functions by finding implicit relations with low-degree polynomials.
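A low-degree annihilator of a Boolean function f is a nonzero g with f(x)·g(x) = 0 for all x. A brute-force toy search (illustrative only; the filter function and degree bound are arbitrary choices, not from the paper) shows the idea for a 3-variable AND:

```python
# Brute-force search for degree-1 annihilators of f(x1,x2,x3) = x1 & x2 & x3,
# i.e. nonzero g with f(x) * g(x) = 0 for all x in GF(2)^3.
from itertools import product

def f(x1, x2, x3):           # example filter function: AND of all inputs
    return x1 & x2 & x3

annihilators = []
for a0, a1, a2, a3 in product((0, 1), repeat=4):
    if (a0, a1, a2, a3) == (0, 0, 0, 0):
        continue             # g must be nonzero
    g = lambda x1, x2, x3: a0 ^ (a1 & x1) ^ (a2 & x2) ^ (a3 & x3)
    if all(f(*x) * g(*x) == 0 for x in product((0, 1), repeat=3)):
        annihilators.append((a0, a1, a2, a3))

print(annihilators)  # includes (1, 1, 0, 0), i.e. g = 1 + x1
```

Since f = 1 only at (1,1,1), any degree-1 g vanishing there annihilates f; such low-degree multiples are what reduce the cube attack's complexity.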
Last updated:  2009-07-30
A Trade-Off Between Collision Probability and Key Size in Universal Hashing Using Polynomials
Palash Sarkar
Let $\mathbb{F}$ be a finite field and suppose that a single element of $\mathbb{F}$ is used as an authenticator (or tag). Further, suppose that any message consists of at most $L$ elements of $\mathbb{F}$. For this setting, usual polynomial based universal hashing achieves a collision bound of $(L-1)/|\mathbb{F}|$ using a single element of $\mathbb{F}$ as the key. The well-known multi-linear hashing achieves a collision bound of $1/|\mathbb{F}|$ using $L$ elements of $\mathbb{F}$ as the key. In this work, we present a new universal hash function which achieves a collision bound of $m\lceil\log_m L\rceil/|\mathbb{F}|$, $m\geq 2$, using $1+\lceil\log_m L\rceil$ elements of $\mathbb{F}$ as the key. This provides a new trade-off between key size and collision probability for universal hash functions.
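The classic single-key baseline mentioned above can be sketched in a few lines (an illustrative sketch of the standard polynomial hash, not the paper's new construction; the small prime and messages are arbitrary choices):

```python
# Polynomial universal hashing over GF(p): hash(m, k) evaluates
# m_1*k^(L-1) + ... + m_L at the key k.  Two distinct length-L messages
# collide for at most L-1 keys, giving the (L-1)/|F| collision bound.
def poly_hash(msg, key, p):
    """Horner evaluation of the message polynomial at `key`, mod p."""
    acc = 0
    for m in msg:
        acc = (acc * key + m) % p
    return acc

# Exhaustive check over a small field: two distinct length-3 messages
# collide for at most L-1 = 2 of the 97 possible keys.
p = 97
a, b = [1, 2, 3], [4, 5, 6]
collisions = sum(poly_hash(a, k, p) == poly_hash(b, k, p) for k in range(p))
assert collisions <= 2
```

The bound follows because the difference of the two message polynomials is a nonzero polynomial of degree at most L-1 and so has at most L-1 roots.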
Last updated:  2009-02-03
On Approximating Addition by Exclusive OR
Palash Sarkar
Let $X^{(1)},X^{(2)},\ldots,X^{(n)}$ be independent and uniformly distributed over the non-negative integers $\{0,1,\ldots,2^i-1\}$; $S^{(n)}=X^{(1)}+X^{(2)}+\cdots+X^{(n)}$ and $L^{(n)}=X^{(1)}\oplus X^{(2)}\oplus \cdots\oplus X^{(n)}$. Denote the $i$-th bits of $S^{(n)}$ and $L^{(n)}$ by $S^{(n)}_i$ and $L^{(n)}_i$ respectively. We show that as $i\rightarrow \infty$, $\Pr[S^{(n)}_i=L^{(n)}_i]\rightarrow \gamma^{(n)}= \frac{1}{2}+\frac{2^{n+1}(2^{n+1}-1)}{2(n+1)}\times\frac{b_{n+1}}{n!}$, where $b_n$ is the $n$-th Bernoulli number. As a consequence, $\gamma^{(2r)}=1/2$ for every $r$; and we show that $\gamma^{(2r+1)}\rightarrow 1/2$ as $r\rightarrow \infty$. For small values of $r$, $\gamma^{(2r+1)}$ is significantly different from $1/2$; for example $\gamma^{(3)}=1/3$ and $\gamma^{(5)}=17/30$. The behaviour of $\gamma^{(n)}$ for even and odd values of $n$ was earlier shown by Staffelbach and Meier without actually obtaining the formula mentioned above. For every fixed $n\geq 2$, we give a simple method for computing $\Pr[S^{(n)}_i=L^{(n)}_i]$ for each $i\geq 0$. The expression involving Bernoulli numbers is arrived at via the Eulerian number triangle which in turn arises in relation to the stationary distribution of a Markov chain formed by the carry values.
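The limiting value $\gamma^{(3)} = 1/3$ quoted above is easy to check empirically (a Monte Carlo sketch, not the paper's derivation; the bit width, bit position and sample count are arbitrary choices):

```python
# Estimate Pr[S_i = L_i] for n = 3 uniform summands at a high bit
# position i, where the limit gamma^(3) = 1/3 should be approached.
import random

random.seed(1)
k, i, trials = 32, 20, 200_000
hits = 0
for _ in range(trials):
    xs = [random.randrange(2**k) for _ in range(3)]
    s = sum(xs)                  # integer addition (with carries)
    l = xs[0] ^ xs[1] ^ xs[2]    # bitwise XOR (carry-free addition)
    hits += ((s >> i) & 1) == ((l >> i) & 1)

print(hits / trials)  # close to 1/3
```

The two quantities differ exactly where carries flip bit i, and the carry chain mixes quickly, so moderate i already sits near the limit.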
Last updated:  2009-01-29
Traceability Codes
Simon R. Blackburn, Tuvi Etzion, Siaw-Lynn Ng
Traceability codes are combinatorial objects introduced by Chor, Fiat and Naor in 1994 to be used in traitor tracing schemes to protect digital content. A $k$-traceability code is used in a scheme to trace the origin of digital content under the assumption that no more than $k$ users collude. It is well known that an error correcting code of high minimum distance is a traceability code. When does this `error correcting construction' produce good traceability codes? The paper explores this question. The paper shows (using probabilistic techniques) that whenever $k$ and $q$ are fixed integers such that $k\geq 2$ and $q\geq k^2-\lceil k/2\rceil+1$, or such that $k=2$ and $q=3$, there exist infinite families of $q$-ary $k$-traceability codes of constant rate. These parameters are of interest since the error correcting construction cannot be used to construct $k$-traceability codes of constant rate for these parameters: suitable error correcting codes do not exist because of the Plotkin bound. This answers a question of Barg and Kabatiansky from 2004. Let $\ell$ be a fixed positive integer. The paper shows that there exists a constant $c$, depending only on $\ell$, such that a $q$-ary $2$-traceability code of length $\ell$ contains at most $cq^{\lceil \ell/4\rceil}$ codewords. When $q$ is a sufficiently large prime power, a suitable Reed--Solomon code may be used to construct a $2$-traceability code containing $q^{\lceil \ell/4\rceil}$ codewords. So this result may be interpreted as implying that the error correcting construction produces good $q$-ary $2$-traceability codes of length $\ell$ when $q$ is large when compared with $\ell$.
Last updated:  2009-01-29
Efficient Protocols for Set Intersection and Pattern Matching with Security Against Malicious and Covert Adversaries
Carmit Hazay, Yehuda Lindell
In this paper we construct efficient secure protocols for \emph{set intersection} and \emph{pattern matching}. Our protocols for securely computing the set intersection functionality are based on secure pseudorandom function evaluations, in contrast to previous protocols that are based on polynomials. We also use secure pseudorandom function evaluation to achieve secure pattern matching; in this case, we utilize specific properties of the Naor-Reingold pseudorandom function in order to achieve high efficiency. Our results are presented in two adversary models. Our protocol for secure pattern matching and one of our protocols for set intersection achieve security against \emph{malicious adversaries} under a relaxed definition where one corruption case is simulatable and for the other only privacy (formalized through indistinguishability) is guaranteed. We also present a protocol for set intersection that is fully simulatable in the model of covert adversaries. Loosely speaking, this means that a malicious adversary can cheat, but will then be caught with good probability.
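The PRF idea behind such protocols can be illustrated in the clear (illustrative only: the actual protocol evaluates the PRF *obliviously* so neither party learns the other's inputs; naive key sharing as below is not secure, and the names and sets are made up):

```python
# If both parties could evaluate the same keyed PRF F_k on their inputs,
# intersecting the PRF outputs reveals exactly the common elements.
import hashlib, hmac, os

key = os.urandom(16)                 # in the protocol: held by one party,
                                     # evaluated obliviously for the other
def prf(x):
    return hmac.new(key, x.encode(), hashlib.sha256).digest()

set1 = {"alice", "bob", "carol"}
set2 = {"bob", "dave", "carol"}

tags1 = {prf(x): x for x in set1}    # party 1's tagged inputs
tags2 = {prf(x) for x in set2}       # party 2's tags
intersection = {tags1[t] for t in tags1.keys() & tags2}
print(sorted(intersection))  # ['bob', 'carol']
```

Because the PRF outputs look random, non-matching tags reveal nothing about the underlying elements; the cryptographic work in the paper is in computing the tags without exposing the key.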
Last updated:  2009-01-29
Un-Trusted-HB: Security Vulnerabilities of Trusted-HB
Dmitry Frumkin, Adi Shamir
With increased use of passive RFID tags, the need for secure lightweight identification protocols arose. HB+ is one such protocol, which was proven secure in the detection-based model, but shown breakable by man-in-the-middle attacks. Trusted-HB is a variant of HB+, specifically designed to resist man-in-the-middle attacks. In this paper, we discuss several weaknesses of Trusted-HB, show that the formal security proof provided by its designers is incorrect, and demonstrate how to break it in realistic scenarios.
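For reference, one HB+ authentication session (the protocol that Trusted-HB builds on) can be sketched as follows; the key length, round count, and noise rate below are illustrative assumptions:

```python
# One HB+ session: tag and reader share secrets x, y.  Per round the tag
# sends a blinding vector b, the reader sends a challenge a, and the tag
# replies z = <a,x> XOR <b,y> XOR noise, where noise is 1 with prob. ETA.
import random

random.seed(7)
K, ROUNDS, ETA = 32, 100, 0.125       # key length, rounds, noise rate

def dot(u, v):
    """Inner product of two K-bit vectors over GF(2)."""
    return bin(u & v).count("1") & 1

x = random.getrandbits(K)             # shared secrets
y = random.getrandbits(K)

errors = 0
for _ in range(ROUNDS):
    b = random.getrandbits(K)         # tag's blinding vector
    a = random.getrandbits(K)         # reader's challenge
    noise = random.random() < ETA
    z = dot(a, x) ^ dot(b, y) ^ noise     # tag's response
    errors += z != dot(a, x) ^ dot(b, y)  # reader tallies mismatches

print(errors)  # reader accepts if errors stay below a threshold ~ ETA*ROUNDS
```

The deliberate noise is what ties security to the LPN problem; the attacks discussed in the paper target how Trusted-HB authenticates this exchange, not the LPN core itself.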
Last updated:  2009-01-29
Image Encryption by Pixel Property Separation
Karthik Chandrashekar Iyer, Aravinda Subramanya
Pixels in an image are essentially defined by two properties: position and colour. Pixel Property Separation, a radically different approach to symmetric-key image encryption, separates these properties to disturb the semantics of the image. The scheme operates in two orthogonal stages, each requiring an encryption key. The first stage creates the Position Vector, an ordered set of Pixel Position Information controlled by a set of plaintext-dependent Random Permutations. A bitmap flagging the presence of all the 24-bit colours is generated. The second stage randomly positions the image width and height within the ciphertext and finally applies a byte transposition on the ciphertext bytes. The complete set of image properties, including width, height and pixel position-colour correlation, is obscured, resulting in a practically unbreakable encryption. The orthogonality of the stages acts as an anti-catalyst for cryptanalysis: the information retrieved from compromising one stage is independent of the other and cannot be used to derive it. Classical cryptanalytic techniques demand a huge number of attempts, most of which fail to generate valid encryption information. Plaintext attacks are rendered ineffective due to the dependency of the Random Permutations on the plaintext. Linear and differential cryptanalysis are highly inefficient due to high diffusion and confusion. Although the paper describes the algorithm as applied to images, its applicability is not limited to images; the cryptographic strength and the efficiency of operation are independent of the nature of the plaintext.
Last updated:  2009-04-27
On CCZ-equivalence and its use in secondary constructions of bent functions
Lilya Budaghyan, Claude Carlet
We prove that, for bent vectorial functions, CCZ-equivalence coincides with EA-equivalence. However, we show that CCZ-equivalence can be used for constructing bent functions which are new up to CCZ-equivalence. Using this approach we construct classes of nonquadratic bent Boolean and bent vectorial functions.
Last updated:  2009-01-29
Proofs of Retrievability via Hardness Amplification
Yevgeniy Dodis, Salil Vadhan, Daniel Wichs
Proofs of Retrievability (PoR), introduced by Juels and Kaliski, allow the client to store a file $F$ on an untrusted server, and later run an efficient audit protocol in which the server proves that it (still) possesses the client's data. Constructions of PoR schemes attempt to minimize the client and server storage, the communication complexity of an audit, and even the number of file-blocks accessed by the server during the audit. In this work, we identify several different variants of the problem (such as bounded-use vs. unbounded-use, knowledge-soundness vs. information-soundness), and give nearly optimal PoR schemes for each of these variants. Our constructions either improve (and generalize) the prior PoR constructions, or give the first known PoR schemes with the required properties. In particular, we \begin{itemize} \item Formally prove the security of an (optimized) variant of the bounded-use scheme of Juels and Kaliski~\cite{JuelsK07}, without making any simplifying assumptions on the behavior of the adversary. \item Build the first unbounded-use PoR scheme where the communication complexity is linear in the security parameter and which does not rely on Random Oracles, resolving an open question of Shacham and Waters~\cite{ShachamW08}. \item Build the first bounded-use scheme with {\em information-theoretic} security. \end{itemize} The main insight of our work comes from a simple connection between PoR schemes and the notion of {\em hardness amplification}, extensively studied in complexity theory. In particular, our improvements come from first abstracting a purely information-theoretic notion of {\em PoR codes}, and then building nearly optimal PoR codes using state-of-the-art tools from coding and complexity theory.
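A bounded-use audit in the Juels-Kaliski style can be sketched as follows (a toy illustration, not the paper's optimized scheme; block sizes, counts, and the MAC choice are assumptions):

```python
# Toy bounded-use PoR audit: before outsourcing, the client MACs a few
# randomly chosen file blocks per audit; each audit spends one
# precomputed (challenge, tag) pair and cannot be reused.
import hashlib, hmac, os, random

BLOCK = 32
key = os.urandom(16)
file_blocks = [os.urandom(BLOCK) for _ in range(100)]   # outsourced file

def mac(data):
    return hmac.new(key, data, hashlib.sha256).digest()

# Client-side precomputation: 10 audits, each challenging 5 blocks.
audits = []
for _ in range(10):
    idxs = random.sample(range(100), 5)
    tag = mac(b"".join(file_blocks[i] for i in idxs))
    audits.append((idxs, tag))

# One audit: server returns the challenged blocks, client checks the MAC.
idxs, tag = audits.pop()
response = b"".join(file_blocks[i] for i in idxs)       # honest server
assert hmac.compare_digest(mac(response), tag)
```

A server that has lost a noticeable fraction of blocks fails a random challenge with good probability; turning that intuition into extraction of the whole file is exactly where the paper's hardness-amplification connection enters.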
Last updated:  2009-01-25
How to Prove the Security of Practical Cryptosystems with Merkle-Damgård Hashing by Adopting Indifferentiability
Yusuke Naito, Kazuki Yoneyama, Lei Wang, Kazuo Ohta
In this paper, we show that major cryptosystems such as FDH, OAEP, and RSA-KEM are secure under a hash function $MD^h$ with Merkle-Damgård (MD) construction that uses a random oracle compression function $h$. First, we propose two new ideal primitives called Traceable Random Oracle ($\mathcal{TRO}$) and Extension Attack Simulatable Random Oracle ($\mathcal{ERO}$) which are weaker than a random oracle ($\mathcal{RO}$). Second, we show that $MD^h$ is indifferentiable from $\mathcal{LRO}$, $\mathcal{TRO}$ and $\mathcal{ERO}$, where $\mathcal{LRO}$ is Leaky Random Oracle proposed by Yoneyama et al. This result means that if a cryptosystem is secure in these models, then the cryptosystem is secure under $MD^h$ following the indifferentiability theory proposed by Maurer et al. Finally, we prove that OAEP is secure in the $\mathcal{TRO}$ model and RSA-KEM is secure in the $\mathcal{ERO}$ model. Since it is also known that FDH is secure in the $\mathcal{LRO}$ model, as a result, major cryptosystems, FDH, OAEP and RSA-KEM, are secure under $MD^h$, though $MD^h$ is not indifferentiable from $\mathcal{RO}$.
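The $MD^h$ construction being analyzed is the standard iterated design; a minimal sketch (with SHA-256 standing in for the idealized compression function $h$, purely for illustration) of plain MD with strengthening padding:

```python
# Merkle-Damgard: pad the message, split into blocks, and iterate the
# compression function h from a fixed IV.  Here h is modelled with
# SHA-256; the paper treats h as a random oracle.
import hashlib

BLOCK = 64                  # message-block size in bytes
IV = b"\x00" * 32           # fixed initial chaining value

def h(chain, block):
    """Stand-in compression function h(chaining value, message block)."""
    return hashlib.sha256(chain + block).digest()

def md_hash(msg):
    """Plain MD with strengthening: append 0x80, zeros, 64-bit length."""
    msg += (b"\x80" + b"\x00" * ((-len(msg) - 9) % BLOCK)
            + (len(msg) * 8).to_bytes(8, "big"))
    chain = IV
    for i in range(0, len(msg), BLOCK):
        chain = h(chain, msg[i:i + BLOCK])
    return chain
```

Because the final chaining value is output directly, $MD^h$ admits length-extension behaviour and is not indifferentiable from a monolithic random oracle, which is precisely why the weaker ideal primitives ($\mathcal{LRO}$, $\mathcal{TRO}$, $\mathcal{ERO}$) are introduced.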
Last updated:  2009-01-25
Key Insulation and Intrusion Resilience Over a Public Channel
Mihir Bellare, Shanshan Duan, Adriana Palacio
Key insulation (KI) and intrusion resilience (IR) are methods to protect a user's key against exposure by utilizing periodic communications with an auxiliary helper. But existing work assumes a secure channel between user and helper. If we want to realize KI or IR in practice, we must realize this secure channel. This paper looks at the question of how to do this when the communication is over what we are more likely to have in practice, namely a public channel such as the Internet or a wireless network. We explain why this problem is not trivial, introduce models and definitions that capture the desired security in a public channel setting, and provide a complete (and surprising) answer to the question of when KI and IR are possible over a public channel. The information we provide is important to guide practitioners with regard to the usage of KI and IR and also to guide future research in this area.