All papers in 2010 (660 results)
Security Evaluation of MISTY Structure with SPN Round Function
This paper studies the security of the MISTY structure with an SPN round function. We study the lower bound on the number of active S-boxes for differential and linear characteristics of such block cipher constructions. Previous results show that the differential bound is consistent with the case of the Feistel structure with an SPN round function, yet the situation changes when considering the linear bound. We carefully revisit this issue and prove that the same bound can in fact be obtained for linear characteristics. Combined with the previous result, this demonstrates a similar practical security level for both the Feistel and MISTY structures. We also discuss the resistance of the MISTY structure with an SPN round function against other kinds of cryptanalysis, including integral cryptanalysis and impossible differential cryptanalysis. We confirm the existence of 6-round integral distinguishers when the linear transformation of the round function employs a binary matrix (i.e., each matrix element is either 0 or 1), and briefly describe how to characterize 5/6/7-round impossible differentials through the matrix-based method.
Identification of Multiple Invalid Pairing-based Signatures in Constrained Batches
This paper describes a new method in pairing-based signature schemes for identifying the invalid digital signatures in a batch after batch verification has failed. The method identifies non-trivial numbers, $w$, of invalid signatures in constrained-size batches of $N$ signatures more efficiently than previously published methods, and does not require that the verifier possess detailed knowledge of $w$. Our method uses ``divide-and-conquer'' search to identify the invalid signatures within a batch, pruning the search tree to reduce the number of pairing computations required. The method prunes the search tree more rapidly than previously published techniques and thereby provides performance gains for batch sizes of interest.
We are motivated by wireless systems where the verifier seeks to conserve computations or a related resource, such as energy, by using large batches. However, the batch size is constrained by how long the verifier can delay batch verification while accumulating signatures to verify.
We compare the expected performance of our method (for a number of different signature schemes at varying security levels) for varying batch sizes and numbers of invalid signatures against earlier methods. We find that our new method provides the best performance for constrained batches, whenever the number of invalid signatures is less than half the batch size. We include recently published methods based on techniques from the group-testing literature in our analysis. Our new method consistently outperforms these group-testing based methods, and substantially reduces the cost ($ > 50\%$) when $w \le N/4$.
Practical Affiliation-Hiding Authentication from Improved Polynomial Interpolation
Among the plethora of privacy-friendly authentication techniques, affiliation-hiding (AH) protocols are valuable for their ability to hide not only the identities of communicating users behind their affiliations (memberships in groups), but also these affiliations from non-members. These qualities become increasingly important in our highly computerized, user-centric information society, where privacy is an elusive good. Little work on the practical aspects of AH schemes, pursuing optimized implementations and deployment, has been done so far, and the main question a practitioner might ask --- whether affiliation-hiding schemes are truly practical today --- has remained largely unanswered.
Improving upon recent advances in the area of AH protocols, in particular on pioneering results in the multi-affiliation setting, we can give an affirmative answer to this question. To this end, we propose numerous algorithmic optimizations to a recent AH scheme leading to a remarkable performance gain. Our results are demonstrated not only at theoretical level, but we also offer implementations, performance measurements, and comparisons.
At the same time, our improvements advance the area of efficient polynomial interpolation in finite fields, which is one of our building blocks.
ABC - A New Framework for Block Ciphers
We suggest a new framework for block ciphers named Advanced Block Cipher, or ABC for short. An ABC has additional non-secret parameters that ensure that each call to the underlying block cipher uses a different pseudo-random permutation. It therefore ensures that attacks requiring more than one block encrypted under the same secret permutation do not apply. In particular, this framework protects against dictionary attacks and differential and linear attacks, and eliminates weaknesses of the ECB and CBC modes. This new framework shares a common structure with HAIFA, and can share the same logic with HAIFA compression functions. We analyze the security of several modes of operation for ABC block ciphers, and suggest a few instances of ABCs.
On small secret key attack against RSA with high bits known prime factor
It is well known that if the higher half of the bits of a prime factor is known, or if the secret key is small enough, then the RSA cryptosystem is broken (e.g. [Coppersmith, J. Cryptology, 1997] and [Boneh-Durfee, Eurocrypt'99]). Recently, Sarkar-Maitra-Sarkar [Cryptology ePrint Archive, 2008/315] proposed attacks against RSA under the conditions that the higher bits of a prime factor are known and the secret key is small. In the present paper, we improve their attacks to be effective for larger secret keys.
A Note on Constant-Round Zero-Knowledge Proofs of Knowledge
In this note, we show the existence of \emph{constant-round} computational zero-knowledge \emph{proofs of knowledge} for all $\NP$. The existence of constant-round zero-knowledge proofs was proven by Goldreich and Kahan (Journal of Cryptology, 1996), and the existence of constant-round zero-knowledge \emph{arguments} of knowledge was proven by Feige and Shamir (CRYPTO 1989). However, the existence of constant-round zero-knowledge proofs of knowledge for all $\NP$ is folklore, to the best of our knowledge, since no proof of this fact has been published.
On the Affine Equivalence and Nonlinearity Preserving Bijective Mappings
It is well known that affine equivalence relations keep nonlinearity invariant for all Boolean functions. The set of all Boolean functions, $\mathcal{F}_n$, over $\bbbf_2^n$, is naturally regarded as the $2^n$-dimensional vector space $\bbbf_2^{2^n}$. Thus, while analyzing the transformations acting on $\mathcal{F}_n$, the group $S_{2^{2^n}}$ of all bijective mappings from $\bbbf_2^{2^n}$ onto itself should be considered. As shown in \cite{ser,ser:dog,ser:dog:2}, there exist non-affine bijective transformations that preserve nonlinearity. In this paper, we first prove that the group of affine equivalence relations is isomorphic to the automorphism group of Sylvester Hadamard matrices. Then, we show that new nonlinearity preserving non-affine bijective mappings also exist. Moreover, we propose that the automorphism group of nonlinearity classes should be studied as a subgroup of $S_{2^{2^n}}$, since it contains transformations which are not affine equivalence relations.
Completeness Theorems with Constructive Proofs for Finite Deterministic 2-Party Functions (full version)
In this paper we present simple but comprehensive combinatorial criteria for completeness of finite deterministic 2-party functions with respect to information-theoretic security. We give a general protocol construction for efficient and statistically secure reduction of oblivious transfer to any finite deterministic 2-party function that fulfills our criteria. For the resulting protocols we prove universal composability. Our results are tight in the sense that our criteria still are necessary for any finite deterministic 2-party function to allow for implementation of oblivious transfer with statistical privacy and correctness.
We unify and generalize results of Joe Kilian (1991, 2000) in two ways. Firstly, we show that his completeness criteria also hold in the UC framework. Secondly, and this is our main contribution, our criteria also cover a wide class of primitives that are not subject to previous criteria. We show that there are non-trivial examples of finite deterministic 2-party functions that are neither symmetric nor asymmetric and therefore have not been covered by existing completeness criteria so far.
As a corollary of our work, every finite deterministic 2-party function is either complete or can be considered equivalent to a non-complete symmetric 2-party function---this assertion holds true with respect to active adversaries as well as passive adversaries. Thereby known results on non-complete symmetric 2-party functions are strengthened.
Cubic groups
It is shown that the existence of cubic groups may suggest the existence of a one-way function.
Active Domain Expansion for Normal Narrow-pipe Hash Functions
Recently, several reports in the Cryptology ePrint Archive showed that for a normal iterative hash function the entropy and codomain reduce greatly, and drew the following conclusions: narrow-pipe hash functions cannot resist this reduction (but wide-pipe hash functions can), and generic collision attacks on narrow-pipe hash functions would be faster than the birthday paradox. These observations and conclusions rely on the case of active-domain reduction, which leaves an empty set with approximate probability $e^{-1}$ in each iteration. However, we can thwart these conclusions by means of Active Domain Expansion, which keeps or recovers the entropy, through some amendments applicable to any normal narrow-pipe hash function; some hash modes, such as the LAB Mode, can realize it even more simply. In this paper we introduce Active Domain Expansion, which consists of a Surjection Round and the sum block $\Sigma M_{i}$. The key idea is to define a sum block $\Sigma M_{i}$ to replace the normal message block $M_{i}$ as the input to the compression function. $\Sigma M_{i}$ is the sum of the first $i$ ``encoded blocks''. Since the Surjection Round has the same purpose and its form is part of Active Domain Expansion, it is not a central topic of this paper. Besides, we can redefine the last block of additional bits. With these changes, a normal narrow-pipe hash function can resist the reduction completely.
On the Impossibility of Instantiating PSS in the Standard Model
In this paper we consider the problem of securely instantiating the Probabilistic Signature Scheme (PSS) in the standard model. PSS, proposed by Bellare and Rogaway \cite{BellareR96}, is a widely deployed randomized signature scheme, provably secure (\emph{unforgeable under adaptively chosen message attacks}) in the Random Oracle Model. \\
Our main result is a black-box impossibility result showing that one cannot prove unforgeability of PSS against chosen message attacks using black-box techniques, even assuming the existence of \emph{ideal trapdoor permutations} (a strong abstraction of trapdoor permutations which inherits all security properties of a random permutation, introduced by Kiltz and Pietrzak at Eurocrypt 2009) or \emph{lossy trapdoor permutations} \cite{PeikertW08}. Moreover, we show that \emph{one-wayness}, the most common security property of a trapdoor permutation, does not suffice to prove even the weakest security criterion, namely \emph{unforgeability under zero message attack}. Our negative results can easily be extended to any randomized signature scheme where one can recover the random string from a valid signature.
Cryptanalysis of the RSA Subgroup Assumption from TCC 2005
At TCC 2005, Groth underlined the usefulness of working in small RSA subgroups of hidden order. In assessing the security of the relevant hard problems, however, the best attack considered for a subgroup of size $2^{2k}$ had a complexity of $O(2^k)$. Accordingly, $k = 100$ bits was suggested as a concrete parameter.
This paper exhibits an attack with a complexity of roughly $2^{k/2}$ operations, suggesting that Groth's original choice of parameters was overly aggressive. It also discusses the practicality of this new attack and various implementation issues.
Stronger difficulty notions for client puzzles and denial-of-service-resistant protocols
Client puzzles are meant to act as a defense against denial of service (DoS) attacks by requiring a client to solve some moderately hard problem before being granted access to a resource. However, recent client puzzle difficulty definitions (Stebila and Ustaoglu, 2009; Chen et al., 2009) do not ensure that solving n puzzles is n times harder than solving one puzzle. Motivated by examples of puzzles where this is the case, we present stronger definitions of difficulty for client puzzles that are meaningful in the context of adversaries with more computational power than required to solve a single puzzle.
A protocol using strong client puzzles may still not be secure against DoS attacks if the puzzles are not used in a secure manner. We describe a security model for analyzing the DoS resistance of any protocol in the context of client puzzles and give a generic technique for combining any protocol with a strong client puzzle to obtain a DoS-resistant protocol.
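As a rough illustration of the client-puzzle idea discussed above (our toy sketch, not one of the constructions analyzed in the paper), here is a minimal hash-preimage puzzle in Python, assuming SHA-256 as the underlying hash; solving costs roughly 2^difficulty_bits expected hashes while verification costs one, which is exactly the kind of per-puzzle cost that difficulty definitions try to pin down:

```python
import hashlib
import os

def make_puzzle(difficulty_bits: int):
    """Server: issue a fresh random challenge with a difficulty target."""
    return os.urandom(16), difficulty_bits

def solve_puzzle(challenge: bytes, difficulty_bits: int) -> int:
    """Client: find a nonce so that SHA-256(challenge || nonce) starts with
    `difficulty_bits` zero bits; expected work ~ 2^difficulty_bits hashes."""
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0:
            return nonce
        nonce += 1

def verify_puzzle(challenge: bytes, difficulty_bits: int, nonce: int) -> bool:
    """Server: verification costs a single hash."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0

chal, d = make_puzzle(16)
assert verify_puzzle(chal, d, solve_puzzle(chal, d))
```

The stronger difficulty notions in the paper ask, informally, that solving many such puzzles cannot be amortized below the per-puzzle cost.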
Uniqueness is a Different Story: Impossibility of Verifiable Random Functions from Trapdoor Permutations
Verifiable random functions (VRFs), first proposed by Micali, Rabin, and
Vadhan (FOCS 99), are pseudorandom functions with the additional property that
the owner of the seed $\vsk$ can issue publicly-verifiable proofs for the
statements ``$f({\vsk},x)=y$'', for any input $x$. Moreover, the output of
VRFs is guaranteed to be unique, which means that $y=f({\vsk},x)$ is the only
value that can be proven to be the image of $x$. Due to their properties, VRFs are a
fascinating primitive that has found several theoretical and practical
applications. However, despite their popularity, constructing VRFs seems to be
a challenging task. Indeed only a few constructions based on specific
number-theoretic problems are known and basing a scheme on a general
assumption is still an open problem. Towards this direction, Brakerski,
Goldwasser, Rothblum, and Vaikuntanathan (TCC 2009) recently showed that
verifiable random functions cannot be constructed from one-way permutations in
a black-box way.
In this paper we put forward the study of the relationship between VRFs and
well-established cryptographic primitives. As our main result, we prove that
VRFs cannot be based on the existence of trapdoor permutations (TDPs) in a
black-box manner.
Our result sheds light on the nature of VRFs and can be considered
interesting for at least two reasons:
- First, given the separation result of Brakerski \emph{et al.}, one may
think that VRFs naturally belong to the ``public-key world'', and
thus it is interesting to figure out their relationship with other public-key
primitives. In this sense, our result shows that VRFs are much stronger
because it implies separations of VRFs from most of the primitives in this
world: basically everything that is implied by TDPs is strictly weaker. These
primitives include e.g., public-key encryption (even CCA secure), oblivious
transfer, and key-agreement.
- Second, the notion of VRFs is closely related to two other primitives: weak
verifiable random functions (weak-VRFs) and verifiable pseudorandom generators
(VPRGs). Weak-VRFs, defined by Brakerski \emph{et al.}, are a relaxation of
VRFs in which the pseudorandomness holds only with respect to random inputs.
VPRGs, introduced by Dwork and Naor (FOCS 2000), are pseudorandom generators
that allow the owner of the seed to prove the correctness of subsets of the
generated bits. It is well known that ``regular'' PRFs can be constructed
from pseudorandom generators and from weak-PRFs in a black-box way. It was
thus an open problem (already recognized by Dwork and Naor in their paper)
whether similar approaches could be applicable to the ``verifiable'' analogues
of such primitives. Since weak-VRFs and VPRGs are implied by TDPs, we give a
negative answer to this problem showing that the case of verifiable random
functions is essentially different.
Improved Nguyen-Vidick Heuristic Sieve Algorithm for Shortest Vector Problem
In this paper, we present an improvement of the Nguyen-Vidick heuristic sieve algorithm for the shortest vector problem in general lattices, whose time complexity is $2^{0.3836n}$ polynomial computations and whose space complexity is $2^{0.2557n}$. In the new algorithm, we introduce a new two-level sieve technique in place of the previous one-level sieve, and complete the complexity estimation by calculating the irregular spherical cap covering.
Statistical Analysis of Second Order Differential Power Analysis
Second Order Differential Power Analysis (2O-DPA) is a powerful side-channel attack that allows an attacker to bypass the widely used masking countermeasure. To thwart 2O-DPA, higher order masking may be employed, but it implies a non-negligible overhead. In this context, there is a need to know how efficient a 2O-DPA can be, in order to evaluate the resistance of an implementation that uses first order masking and, possibly, some hardware countermeasures. Different methods of mounting a practical 2O-DPA attack have been proposed in the literature. However, it is not yet clear which of these methods is the most efficient. In this paper, we give a formal description of the higher order DPA attacks that are mounted against software implementations. We then introduce a framework in which the attack efficiencies may be compared. The attacks we focus on involve the combining of several leakage signals and the computation of correlation coefficients to discriminate the wrong key hypotheses. In the second part of this paper, we pay particular attention to 2O-DPA that involves the product combining or the absolute difference combining. We study them under the assumption that the device leaks the Hamming weight of the processed data together with an independent Gaussian noise. After showing a way to improve the product combining, we argue that in this model the product combining is more efficient not only than absolute difference combining, but also than all the other combining techniques proposed in the literature.
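The combining step can be illustrated with a small simulation. The following Python sketch is an assumption-laden toy (first-order Boolean masking of an XOR whitening, no S-box; real attacks target an S-box output and the key, mask and noise values here are hypothetical), using the abstract's leakage model of Hamming weight plus independent Gaussian noise, with centered product combining and a correlation over all key guesses:

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma = 10000, 0.5
hw = np.array([bin(v).count("1") for v in range(256)])  # Hamming weight table

secret = 0x3C
plaintexts = rng.integers(0, 256, N)
masks = rng.integers(0, 256, N)
# A first-order masked device leaks the Hamming weight of each share plus noise.
l1 = hw[plaintexts ^ secret ^ masks] + rng.normal(0, sigma, N)
l2 = hw[masks] + rng.normal(0, sigma, N)

# Second-order combining: centered product of the two leakage samples.
combined = (l1 - l1.mean()) * (l2 - l2.mean())

# Correlate the combined signal against HW predictions for every key guess.
scores = [abs(np.corrcoef(combined, hw[plaintexts ^ k])[0, 1]) for k in range(256)]
print("best guess:", hex(int(np.argmax(scores))), "true key:", hex(secret))
```

In this model the centered product has mean proportional to the Hamming weight of the unmasked value, which is why the correlation peaks at the correct key.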
A Timed Logic for Modeling and Reasoning about Security Protocols
Many logical methods are usually considered suitable to express the static properties of security protocols while unsuitable to model dynamic processes or properties. However, a security protocol itself is in fact a dynamic process over time, and sometimes it is important to be able to express time-dependent security properties of protocols. In this paper, we present a new timed logic based on predicate modal logic, in which time is explicitly expressed in parameters of predicates or modal operators. This makes it possible to model an agent's actions, knowledge and beliefs at different and exact time points, which enables us to model both protocols and their properties, especially time-dependent properties. We formalize semantics of the presented logic, and prove its soundness.
We also present a modeling scheme for formalizing protocols and security properties of authentication and secrecy under the logic. The scheme provides a flexible and succinct framework to reason about security protocols, and essentially enhances the power of logical methods for protocol analysis. As a case study, we then analyze a timed-release protocol using this framework, and discover a new vulnerability that did not appear previously in the literature. We provide a further example to show additional advantages of the modeling scheme in the new logic.
A Practical Platform for Cube-Attack-like Cryptanalyses
Recently, various cryptanalysis methods related to the Cube Attack have attracted a lot of interest. We designed a practical platform to perform such cryptanalytic attacks. We also developed a web-based application at \url{http://cube-attack.appspot.com/}, which is open to the public for simple testing and verification. In this paper, we focus on linearity testing and try to verify the data provided in several papers. Some interesting results produced in our work indicate that certain improper assumptions were made in these papers.
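As context for the linearity testing mentioned above, a common probabilistic check on a superpoly is a BLR-style affineness test; the following minimal Python sketch is our illustration, not necessarily the platform's exact test:

```python
import secrets

def blr_test(f, n_vars: int, trials: int = 128) -> bool:
    """BLR-style probabilistic test: accept iff f looks affine over GF(2),
    i.e. f(x) ^ f(y) ^ f(x ^ y) ^ f(0) == 0 for many random x, y."""
    for _ in range(trials):
        x = secrets.randbits(n_vars)
        y = secrets.randbits(n_vars)
        if f(x) ^ f(y) ^ f(x ^ y) ^ f(0):
            return False
    return True

# Toy superpolys in the key bits: an affine one passes, a quadratic one fails.
print(blr_test(lambda k: (k & 1) ^ ((k >> 2) & 1) ^ 1, n_vars=8))  # True
print(blr_test(lambda k: (k & 1) & ((k >> 1) & 1), n_vars=8))      # False
```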
Construct MD5 Collisions Using Just A Single Block Of Message
So far, all the differential attacks on MD5 have been constructed through the multi-block collision method. Can collisions for MD5 be found using just a single block of message (i.e. 512 bits)? This has been an open problem since the first 2-block collision attack was given. Today, in the last month (Dec.) of 2010, we have to make public a result of our 1-block collision attacks on MD5 in Table 1 below, which was actually obtained at the beginning of 2010; for security reasons, the techniques are not allowed to be disclosed at the moment. Here, we are issuing a challenge to the cryptology community: whoever first gives a new, different 1-block collision attack on MD5 will win 10,000 US dollars (about 50,000 RMB in Chinese Yuan) as a reward for his or her excellent work. This call for challenge will end on Jan 1st, 2013. This announcement's first affiliated unit will be responsible for this reward when a new, different 1-block collision attack is received and verified.
More Insights on Blockcipher-Based Hash Functions
In this paper we give more insights on the security of blockcipher-based hash functions. We give a very simple criterion for building a large class of secure Single-Block-Length (SBL) or double call Double-Block-Length (DBL) compression functions based on $(kn, n)$ blockciphers, where $kn$ is the key length, $n$ is the block length, and $k$ is an integer.
This criterion is simpler than previous ones in the literature. From the criterion we derive many results and reach a general conclusion on this class of blockcipher-based hash functions, solving the open problem left by Hirose. Our results show that to build a secure double call DBL compression function, $k \geq m+1$ is required, where $m$ is the number of message blocks. Thus, we can only build rate-1/2 secure double call DBL blockcipher-based compression functions if $k = 2$.
Finally, we point out flaws in Stam's theorem about supercharged functions, give a revision of this theorem, and add another condition for the security of supercharged compression functions.
A new algorithm for computing Groebner bases
Buchberger's algorithm for computing Groebner bases was introduced in 1965, and subsequently there have been extensive efforts in improving its efficiency. Major algorithms include F4 (Faugère 1999), XL (Courtois et al. 2000) and F5 (Faugère 2002). F5 is believed to be the fastest algorithm known in the literature. Most recently, Gao, Guan and Volny (2010) introduced an incremental algorithm (G2V) that is simpler and several times faster than F5. In this paper, a new algorithm is presented that can avoid the incremental nature of F5 and G2V. It matches Buchberger's algorithm in simplicity and yet is more flexible. More precisely, given a list of polynomials, the new algorithm computes simultaneously a Groebner basis for the ideal generated by the polynomials and a Groebner basis for the leading terms of the syzygy module of the given list of polynomials. For any term order for the ideal, one may vary signature orders (i.e. the term orders for the syzygy module). Under one signature order, the new algorithm specializes to the G2V, and under another signature order, the new algorithm is several times faster than G2V, as indicated by computer experiments on benchmark examples.
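As context (this is not the paper's new algorithm, which tracks signatures and syzygies), Groebner basis computation and ideal-membership testing can be experimented with via sympy's built-in Buchberger-based routine:

```python
from sympy import groebner, symbols

x, y, z = symbols("x y z")
# A Groebner basis for a toy ideal under graded reverse lexicographic order.
G = groebner([x**2 + y*z - 1, x*y - z, y**2 + x*z], x, y, z, order="grevlex")
print(G)
# Ideal membership: the remainder of division by the basis is 0 iff the
# polynomial lies in the ideal.
print(G.reduce(x**3 + y**3)[1])
```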
Short collusion-secure fingerprint codes against three pirates
In this article, we propose a new construction of probabilistic collusion-secure fingerprint codes against up to three pirates and give a theoretical security evaluation. Our pirate tracing algorithm combines a scoring method analogous to Tardos codes (J. ACM, 2008) with an extension of the parent search techniques of some preceding 2-secure codes. Numerical examples show that our code lengths are significantly shorter than (about 30% to 40% of) the shortest known c-secure codes by Nuida et al. (Des. Codes Cryptogr., 2009) with c = 3. A preliminary proposal for improving the efficiency of our tracing algorithm is also given.
Last updated: 2011-12-10
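To illustrate the Tardos-style scoring ingredient mentioned in the abstract above (a generic sketch with hypothetical parameters, not the paper's code or its parent-search extension), the following Python snippet generates a biased binary code, simulates three colluders under the marking assumption, and scores users:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, code_len, t = 50, 3000, 0.01

# Tardos-style biases p_i and the binary code matrix X.
r = rng.uniform(np.arcsin(np.sqrt(t)), np.arcsin(np.sqrt(1 - t)), code_len)
p = np.sin(r) ** 2
X = (rng.random((n_users, code_len)) < p).astype(int)

# Three pirates forge a word y under the marking assumption (majority vote).
pirates = [3, 17, 41]
y = (X[pirates].sum(axis=0) >= 2).astype(int)

# Tardos scoring: reward agreement on positions where y_i = 1.
U = np.where(X == 1, np.sqrt((1 - p) / p), -np.sqrt(p / (1 - p)))
scores = (U * y).sum(axis=1)
print("top suspects:", np.argsort(scores)[-3:])  # typically the three pirates
```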
Enumerating Results of Homogeneous Rotation over $GF(p)$
In this paper, we consider the open problem of counting homogeneous rotation
symmetric Boolean functions over $GF(p)$. By using the
Inclusion--Exclusion Principle, we obtain a formula that exactly
enumerates this class of functions. As a consequence, the known
formula of \cite[Theorem 9]{Maitra} in the Boolean case is simplified.
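For context on the counting problem, here is a small Python sketch that counts all rotation symmetric functions from $GF(p)^n$ to $GF(p)$ via Burnside's lemma; note this counts the full class, not the homogeneous subclass the paper enumerates by inclusion-exclusion:

```python
from sympy import divisors, totient

def necklaces(n: int, p: int) -> int:
    """Number of rotation orbits of length-n words over a p-letter alphabet
    (Burnside's lemma): (1/n) * sum over d | n of phi(d) * p^(n/d)."""
    return sum(totient(d) * p ** (n // d) for d in divisors(n)) // n

def rotation_symmetric_count(n: int, p: int) -> int:
    """All rotation symmetric functions GF(p)^n -> GF(p): one free output
    value per rotation orbit of GF(p)^n."""
    return p ** necklaces(n, p)

print(necklaces(4, 2))                 # 6 binary necklaces of length 4
print(rotation_symmetric_count(4, 2))  # 2^6 = 64 rotation symmetric functions
```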
One-Pass HMQV and Asymmetric Key-Wrapping
Consider the task of asymmetric key-wrapping, where a key-management server encrypts a cryptographic key under the public key of a client. When used in storage and access-control systems, it is often the case that the server has no knowledge about the client (beyond its public key) and no means of coordinating with it. For example, a wrapped key used to encrypt a backup tape may be needed many years after wrapping, when the server is no longer available, key-wrapping standards have changed, and even the security requirements of the client might have changed. Hence we need a flexible mechanism that seamlessly supports different options depending on what the original server was using and the current standards and requirements.
We show that one-pass HMQV (which we call HOMQV) is a perfect fit for this type of application in terms of security, efficiency and flexibility. It offers server authentication if the server has its own public key, and degenerates down to the standardized DHIES encryption scheme if the server does not have a public key. The performance difference between the unauthenticated DHIES and the authenticated HOMQV is minimal (essentially free for the server and only half an exponentiation for the client). We provide a formal analysis of the protocol's security showing many desirable properties such as sender's forward secrecy and resilience to compromise of ephemeral data. When adding a DEM part (as needed for key-wrapping) it yields a secure signcryption scheme (equivalently, a UC-secure messaging protocol).
The combination of security, flexibility, and efficiency makes HOMQV a very desirable protocol for asymmetric key wrapping, one that we believe should be incorporated into implementations and standards.
Breaking An Identity-Based Encryption Scheme based on DHIES
We present collusion attacks against a recently proposed IBE scheme of Chen et al. from ASIACCS 2010. The attacks recover the master secret key of the scheme and thereby invalidate the existing security analysis of this scheme. The attacks are flexible, allowing, for example, the amount of computation needed to be traded-off against the size of the collusion.
Differential Fault Analysis of AES using a Single Multiple-Byte Fault
In this paper we present an improved fault attack on the Advanced Encryption
Standard (AES), building on a recently published differential fault analysis
of AES that requires a single fault to recover the secret key being used.
That attack requires that exactly one byte entering the eighth round is
corrupted. We show that the attack is still possible when more than one byte
has been affected. Experimental results are described in which a fault is
injected using a glitch in the clock, demonstrating that this attack is
practical.
Last updated: 2010-12-14
An Efficient and Information Theoretically Secure Rational Secret Sharing Scheme based on Symmetric Bivariate Polynomials
The design of rational cryptographic protocols is a recently created research area at the intersection of cryptography and game theory. In this paper, we propose a new $m$-out-of-$n$ rational secret sharing scheme requiring neither the involvement of the dealer (except during the initial share distribution) nor a trusted mediator. Our protocol leads to a Nash equilibrium surviving the iterated deletion of weakly dominated strategies for $m \geq 4$. Our construction is information theoretically secure and is immune against backward induction attacks. Contrary to Kol and Naor, who used a specific cryptographic primitive in their TCC'08 paper (namely, meaningful/meaningless encryption), the immunity of our scheme is based on the use of bivariate polynomials and one-time pads. To the best of our knowledge, this is the first time that such polynomials have been used for rational secret sharing. Our scheme is efficient and does not require any physical assumptions such as envelopes or ballot boxes. As with most existing rational protocols, our construction requires simultaneous broadcast channels. However, our proposed scheme does not require any computational assumption and provides information theoretic security.
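The bivariate-polynomial ingredient can be sketched as follows (a toy fragment under simplified assumptions, not the full rational protocol): a symmetric $f(x,y)$ with $f(0,0)$ equal to the secret gives party $i$ the univariate share $g_i(y)=f(i,y)$, and symmetry yields the pairwise-consistent values $g_i(j)=g_j(i)$ usable for cross-checks between parties.

```python
import secrets

P = 2**127 - 1  # a Mersenne prime; all arithmetic is over GF(P)

def sym_bivariate(secret: int, t: int):
    """Random symmetric polynomial f(x, y), degree < t in each variable,
    with f(0, 0) = secret (coefficient matrix c[i][j] = c[j][i])."""
    c = [[0] * t for _ in range(t)]
    for i in range(t):
        for j in range(i, t):
            c[i][j] = c[j][i] = secrets.randbelow(P)
    c[0][0] = secret % P
    return c

def eval_f(c, x, y):
    t = len(c)
    return sum(c[i][j] * pow(x, i, P) * pow(y, j, P)
               for i in range(t) for j in range(t)) % P

# Party i holds g_i(y) = f(i, y); symmetry gives g_i(j) = g_j(i).
c = sym_bivariate(123456789, t=3)
assert eval_f(c, 2, 5) == eval_f(c, 5, 2)
```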
ROTIV: RFID Ownership Transfer with Issuer Verification
RFID tags travel between partner sites in a supply chain. For privacy
reasons, each partner “owns” the tags present at his site, i.e.,
the owner is the only entity able to authenticate his tags. However,
when passing tags on to the next partner in the supply chain,
ownership of the old partner is “transferred” to the new partner.
In this paper, we propose ROTIV, a protocol that allows for secure
ownership transfer against some malicious owners. Furthermore,
ROTIV offers issuer verification to prevent malicious partners
from injecting fake tags not originally issued by some trusted
party. As part of ownership, ROTIV provides constant-time,
privacy-preserving authentication. ROTIV's main idea is to combine
an HMAC-based authentication with tag key and state updates
during ownership transfer. To assure privacy, ROTIV implements
tag state re-encryption techniques and key update techniques, performed
on the reader. ROTIV is designed for lightweight tags –
tags are only required to evaluate a hash function.
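A toy sketch of the two mechanisms named above, keyed-hash tag authentication plus a state update at ownership transfer (our simplified illustration, not the ROTIV protocol itself, which also involves re-encryption on the reader side):

```python
import hashlib
import hmac
import os

def tag_respond(state: bytes, challenge: bytes) -> bytes:
    """Tag side: a single keyed-hash evaluation, matching the constraint
    that tags only need to evaluate a hash function."""
    return hmac.new(state, challenge, hashlib.sha256).digest()

def transfer_ownership(state: bytes) -> bytes:
    """New owner re-keys the tag with fresh randomness, so the previous
    owner can no longer authenticate (or trace) it."""
    return hashlib.sha256(state + os.urandom(16)).digest()

state = os.urandom(32)
challenge = os.urandom(16)
old_response = tag_respond(state, challenge)
state = transfer_ownership(state)
assert tag_respond(state, challenge) != old_response
```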
Low Data Complexity Attacks on AES
The majority of current attacks on reduced-round variants of
block ciphers seek to maximize the number of rounds that can be
broken, using less data than the entire codebook and
less time than exhaustive key search. In this paper, we pursue
a different approach, restricting the data available
to the adversary to a few plaintext/ciphertext pairs.
We show that consideration of such attacks (which received little
attention in recent years) serves an important role in assessing
the security of block ciphers and of other
cryptographic primitives based on block ciphers.
In particular, we show that these
attacks can be leveraged into more complex attacks, either on the
block cipher itself or on other primitives (e.g., stream ciphers,
MACs, or hash functions) that use a small number of rounds of the
block cipher as one of their components.
As a case study, we consider the AES --- the most widely used
block cipher, whose round function is used in various
cryptographic primitives. We present attacks on up to four rounds
of AES that require at most 10 known/chosen plaintexts. We then
apply these attacks to cryptanalyze a variant of the stream cipher
LEX, and to mount a new known plaintext attack on 6-round AES.
Efficient and provably-secure certificateless signature scheme without bilinear pairings
Many certificateless signature schemes using bilinear pairings have been proposed. However, the relative computation cost of a pairing is approximately twenty times that of a scalar multiplication over an elliptic curve group. In order to improve the performance, we propose a certificateless signature scheme without bilinear pairings. Since the running time is greatly reduced, our scheme is more practical than the previous related schemes for practical applications.
Black-box property of Cryptographic Hash Functions
We define a new black-box property for cryptographic hash function families $H:\{0,1\}^K\times\{0,1\}^*\rightarrow\{0,1\}^y$ which guarantees that for a randomly chosen hash function $H_K$ from the family, everything ``non-trivial'' that we are able to compute having access to the key $K$ can be computed with oracle access to $H_K$ alone. If a hash function family is pseudo-random and has the black-box property, then a randomly chosen hash function $H_K$ from the family is resistant to all non-trivial types of attack. We also show that the HMAC domain extension transform is Prf-BB preserving, i.e. if a compression function $f$ is pseudo-random and has the black-box property (Prf-BB for short) then $\mathrm{HMAC}^f$ is Prf-BB. On the other hand, we show that the Merkle-Damg\aa rd construction is not Prf-BB preserving. Finally, we show that every pseudo-random oracle preserving domain extension transform is Prf-BB preserving and vice versa. Hence, Prf-BB seems to be an all-in-one property for cryptographic hash function families, which guarantees their ``total'' security.
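For concreteness, the HMAC domain extension transform discussed above can be written directly in terms of the underlying hash (RFC 2104, with SHA-256 and its 64-byte block size standing in for the iterated compression function $f$); this is the standard construction, included here only as an executable reference:

```python
import hashlib
import hmac

def hmac_sha256(key: bytes, msg: bytes) -> bytes:
    """HMAC per RFC 2104: H((K ^ opad) || H((K ^ ipad) || M))."""
    if len(key) > 64:                       # block size of SHA-256 is 64 bytes
        key = hashlib.sha256(key).digest()
    key = key.ljust(64, b"\x00")
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5C for b in key)
    inner = hashlib.sha256(ipad + msg).digest()
    return hashlib.sha256(opad + inner).digest()

# Sanity check against the standard library implementation.
assert hmac_sha256(b"key", b"msg") == hmac.new(b"key", b"msg", hashlib.sha256).digest()
```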
Division Polynomials for Alternate Models of Elliptic Curves
In this paper we find division polynomials for Huff curves, Jacobi quartics, and Jacobi intersections. These curves are alternate models for elliptic curves to the more common Weierstrass curve. Division polynomials for Weierstrass curves are well known, and the division polynomials we find are analogues for these alternate models. Using the division polynomials, we show recursive formulas for the $n$-th multiple of a point on each curve. As an application, we prove a type of mean-value theorem for Huff curves, Jacobi quartics and Jacobi intersections.
On the Security of Hash Functions Employing Blockcipher Postprocessing
Analyzing desired generic properties of hash functions is an important current area in cryptography. For example, at Eurocrypt 2009, Dodis, Ristenpart and Shrimpton [7] introduced the elegant notion of ``Preimage Awareness'' (PrA) of a hash function $H^P$, and they showed that a PrA hash function followed by an output transformation modeled as a FIL (fixed input length) random oracle is PRO (pseudorandom oracle), i.e. indifferentiable from a VIL (variable input length) random oracle. We observe that in recent hash function design practice (e.g. the SHA-3 candidates) most output transformations are based on permutation(s) or blockcipher(s), which are not PRO. Thus, a natural question is how the notion of PrA can be employed directly with these more prevalent types of output transformations. We consider the Davies-Meyer type output transformation $OT(x) := E(x) \oplus x$, where $E$ is an ideal permutation. We prove that $OT(H^P(\cdot))$ is PRO if $H^P$ is PrA, preimage resistant and computable message aware (a related but not redundant notion, needed in the analysis, that we introduce in the paper). A similar result is also obtained for 12 PGV output transformations. We also observe that some popular double block length output transformations cannot be employed as output transformations.
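A toy rendering of the Davies-Meyer type output transformation $OT(x) = E(x) \oplus x$, with a small Feistel network standing in for the ideal permutation $E$ (the paper's analysis is in the ideal-permutation model; this is only an executable illustration):

```python
import hashlib

MASK64 = (1 << 64) - 1

def _round(half: int, i: int) -> int:
    """Round function: 64 bits derived from a hash of the round index and input."""
    data = bytes([i]) + half.to_bytes(8, "big")
    return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

def E(x: int) -> int:
    """Toy 128-bit permutation: a 4-round Feistel network (bijective by design)."""
    L, R = x >> 64, x & MASK64
    for i in range(4):
        L, R = R, L ^ _round(R, i)
    return (L << 64) | R

def OT(x: int) -> int:
    """Davies-Meyer type output transformation OT(x) = E(x) xor x."""
    return E(x) ^ x

print(hex(OT(0x0123456789abcdef0011223344556677)))
```

The feed-forward $\oplus\, x$ is what makes the transformation non-invertible even though $E$ itself is a permutation.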
State convergence and keyspace reduction of the Mixer stream cipher
This paper presents an analysis of the stream cipher Mixer, a bit-based cipher with structural components similar to the well-known Grain cipher and the LILI family of keystream generators. Mixer uses a 128-bit key and 64-bit IV to initialise a 217-bit internal state. The analysis is focused on the initialisation function of Mixer and shows that there exist multiple key-IV pairs which, after initialisation, produce the same initial state, and consequently will generate the same keystream. Furthermore, if the number of iterations of the state update function performed during initialisation is increased, then the number of distinct initial states that can be obtained decreases. It is also shown that there exist some distinct initial states which produce the same keystream, resulting in a further reduction of the effective key space.
Secure and Efficient Protocols for Iris and Fingerprint Identification
Recent advances in biometric recognition and the increasing use of biometric
data prompt significant privacy challenges associated with the possible
misuse, loss, or theft of biometric data. Biometric matching is often
performed by two mutually suspicious parties, one of which holds one
biometric image while the other owns a possibly large biometric collection.
Due to privacy and liability considerations, neither party is willing to
share its data. This gives rise to the need to develop secure computation
techniques over biometric data where no information is revealed to the
parties except the outcome of the comparison or search. To address the
problem, in this work we develop and implement the first privacy-preserving
identification protocol for iris codes. We also design and implement a
secure protocol for fingerprint identification based on FingerCodes with a
substantial improvement in the performance compared to existing solutions.
We show that new techniques and optimizations employed in this work allow us
to achieve particularly efficient protocols suitable for large data sets and
obtain notable performance gain compared to the state-of-the-art prior work.
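For reference, the cleartext computation that such an iris-identification protocol evaluates securely is the masked fractional Hamming distance between iris codes; a plain (non-private) Python version with hypothetical parameters:

```python
import numpy as np

def iris_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between binary iris codes, counting only
    bit positions marked valid (e.g. not occluded) in both masks."""
    valid = mask_a & mask_b
    disagreements = (code_a ^ code_b) & valid
    return disagreements.sum() / valid.sum()

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 2048, dtype=np.uint8)          # hypothetical 2048-bit code
b = a ^ (rng.random(2048) < 0.05).astype(np.uint8)    # genuine pair, 5% bit noise
m = np.ones(2048, dtype=np.uint8)
print(iris_distance(a, b, m, m))  # ~0.05; impostor pairs cluster around 0.5
```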
Public-Key Encryption with Fuzzy Keyword Search: A Provably Secure Scheme under Keyword Guessing Attack
A lot of interest has recently been drawn to public-key encryption
with keyword search (PEKS), which keeps public-key encrypted
documents amenable to secure keyword search. However, PEKS resists
keyword guessing attacks only by assuming that the size of the
keyword space is beyond the polynomial level. This assumption is
unrealistic in practice, and PEKS schemes are insecure under keyword
guessing attacks. As we observe, the key to defending against such
attacks is to avoid giving adversaries access to the exact search
trapdoor. Accordingly, we compromise the exactness of the search
trapdoor by mapping at least two different keywords to the same fuzzy
search trapdoor. We propose a novel concept called public-key
encryption with fuzzy keyword search (PEFKS), by which the untrusted
server only obtains the fuzzy search trapdoor instead of the exact
search trapdoor, and define its semantic security under chosen
keyword attack (SS-CKA) and the indistinguishability of keywords
under non-adaptively chosen keywords and keyword guessing attack
(IK-NCK-KGA). For keyword spaces with and without uniform
distribution, we respectively present two universal transformations
from anonymous identity-based encryption to PEFKS, and prove their
SS-CKA and IK-NCK-KGA security. To our knowledge, PEFKS is the first
scheme to resist keyword guessing attacks when the keyword space is
no larger than polynomial size.
Attacking and fixing Helios: An analysis of ballot secrecy
Helios 2.0 is an open-source web-based end-to-end verifiable electronic voting system, suitable for use in low-coercion environments. In this article, we analyse ballot secrecy in Helios and discover a vulnerability which allows an adversary to compromise the privacy of voters. The vulnerability exploits the absence of ballot independence in Helios and works by replaying a voter's ballot or a variant of it; the replayed ballot magnifies the voter's contribution to the election outcome, and this magnification can be used to violate privacy. We demonstrate the practicality of the attack by violating a voter's privacy in a mock election using the software implementation of Helios. Moreover, the feasibility of the attack is considered in the context of French legislative elections and, based upon our findings, we believe it constitutes a real threat to ballot secrecy. We present a fix and show that our solution satisfies a formal definition of ballot secrecy using the applied pi calculus. Furthermore, we present similar vulnerabilities in other electronic voting protocols -- namely, the schemes by Lee et al., Sako & Kilian, and Schoenmakers -- which do not assure ballot independence. Finally, we argue that independence and privacy properties are unrelated, and that non-malleability is stronger than independence.
No-leak authentication by the Sherlock Holmes method
We propose a class of authentication schemes that are literally zero-knowledge, as compared to what is formally defined as ``zero-knowledge'' in the cryptographic literature. We call this ``no-leak'' authentication to distinguish it from the established ``zero-knowledge'' concept. The ``no-leak'' condition implies ``zero-knowledge'' (even ``perfect zero-knowledge''), but it is actually stronger, as we illustrate by examples.
The principal idea behind our schemes is: the verifier challenges the prover with questions that he (the verifier) already knows answers to; therefore, even a computationally unbounded verifier who follows the protocol cannot possibly learn anything new during any number of authentication sessions. This is therefore also true for a computationally unbounded passive adversary.
Cryptanalysis of Skein
Several attacks on Skein.
A new result on the distinctness of primitive sequences over Z(pq) modulo 2
Let Z/(pq) be the integer residue ring modulo pq, where p and q are odd primes. This paper studies the distinctness problem of modulo 2 reductions of two primitive sequences over Z/(pq), previously studied by H.J. Chen and W.F. Qi in 2009. First, it is shown that almost every element in Z/(pq) occurs in a primitive sequence of order n > 2 over Z/(pq). Then, based on this element distribution property of primitive sequences over Z/(pq), previous results are greatly improved and the set of primitive sequences over Z/(pq) that are known to be distinct modulo 2 is further enlarged.
Generic Compilers for Authenticated Key Exchange (Full Version)
So far, all solutions proposed for {\em authenticated key agreement} combine key agreement and authentication into a single cryptographic protocol. However, in many important application scenarios, key agreement and entity authentication are clearly separated protocols. This fact enables efficient attacks on the naive combination of these protocols. In this paper, we propose new compilers for two-party key agreement and authentication, which are provably secure in the standard Bellare-Rogaway model. The constructions are generic: key agreement is executed first and results (without intervention of the adversary) in a secret session key on both sides. This key (or a derived key) is handed over, together with a transcript of all key exchange messages, to the authentication protocol, where it is combined with the random challenge(s) exchanged during authentication.
Last updated: 2010-12-10
Identity-based Digital Signature Scheme Without Bilinear Pairings
Many identity-based digital signature schemes using bilinear pairings have been proposed. However, the relative computation cost of a pairing is approximately twenty times that of a scalar multiplication over an elliptic curve group. In order to save both running time and signature size, in this letter we propose an identity-based signature scheme without bilinear pairings. Since both the running time and the signature size are greatly reduced, our scheme is more practical than the previous related schemes for practical applications.
Further Observations on Certificate-Based Encryption and its Generic Construction from Certificateless Public Key Encryption
Certificate-based encryption (CBE) is a new asymmetric encryption paradigm which was introduced to solve the certificate management problem in the traditional public key infrastructure (PKI). It combines public key encryption (PKE) and identity-based encryption (IBE) while preserving some of their most attractive features. CBE provides an efficient implicit certificate mechanism which eliminates third-party queries and simplifies the certificate revocation problem in the traditional PKI. It also solves the key escrow problem and the key distribution problem inherent in IBE. In this paper, we introduce the key replacement attack and the malicious-but-passive certifier attack into CBE, and define a class of new security models for CBE at different security levels according to the power of the adversaries against CBE. Our new security models are more elaborate and stronger than existing ones. Then, we propose a generic construction of CBE from certificateless public key encryption and prove its security under the proposed security models in the standard model. We also show a concrete conversion using the proposed generic construction.
A Forgery Attack on the Candidate LTE Integrity Algorithm 128-EIA3
In this note we show that the message authentication code 128-EIA3, considered for adoption as one of the integrity algorithms of the emerging mobile standard LTE, is vulnerable to a simple existential forgery attack. This attack allows, given any message and the associated MAC value under an unknown integrity key and an initial vector, the prediction of the MAC value of a related message under the same key and the same initial vector with success probability 1/2.
Computing Discrete Logarithms in an Interval
The discrete logarithm problem in an interval of size $N$ in a group $G$ is: given $g, h \in G$ and an integer $N$, find an integer $0 \le n \le N$, if it exists, such that $h = g^n$. Previously the best low-storage algorithm to solve this problem was the van Oorschot and Wiener version of the Pollard kangaroo method. The heuristic average case running time of this method is $(2 + o(1)) \sqrt{N}$ group operations.
We present two new low-storage algorithms for the discrete logarithm problem in an interval of size $N$. The first algorithm is based on the Pollard kangaroo method, but uses 4 kangaroos instead of the usual two. We explain why this algorithm has heuristic average case expected running time of $(1.715 + o(1)) \sqrt{N}$ group operations. The second algorithm is based on the Gaudry-Schost algorithm and the ideas of our first algorithm. We explain why this algorithm has heuristic average case expected running time of $(1.661 + o(1)) \sqrt{N}$ group operations. We give experimental results that show that the methods do work close to that predicted by the theoretical analysis.
This is a revised version of the published paper, containing a corrected proof of Theorem 6 (the statement of Theorem 6 is unchanged). We thank Ravi Montenegro for pointing out the errors.
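A compact (and heuristic) Python sketch of the trap-based kangaroo idea underlying these algorithms, in a toy group $\mathbb{Z}_p^*$; this illustrates the classic two-kangaroo method, not the four-kangaroo or Gaudry-Schost variants analyzed above, and the group and parameters are hypothetical:

```python
import math

def kangaroo(g, h, p, N, tweak=0):
    """One run of a trap-based kangaroo walk for h = g^n (mod p), 0 <= n <= N.
    Heuristic, ~2*sqrt(N) group operations; may fail and return None."""
    k = max(N.bit_length() // 2 + 2, 2)
    jumps = [1 << i for i in range(k)]         # jump sizes, mean roughly sqrt(N)/4
    f = lambda y: jumps[(y + tweak) % k]       # deterministic pseudorandom jumps

    # Tame kangaroo: start at g^N, walk ~sqrt(N) steps, leave a trap at the end.
    x, d = pow(g, N, p), 0
    for _ in range(math.isqrt(N) + 1):
        j = f(x)
        x, d = x * pow(g, j, p) % p, d + j
    trap, trap_dist = x, d

    # Wild kangaroo: start at h; once it lands on any tame footprint it
    # follows the same deterministic walk and must reach the trap.
    x, d = h % p, 0
    while d <= N + trap_dist:
        if x == trap:
            return N + trap_dist - d
        j = f(x)
        x, d = x * pow(g, j, p) % p, d + j
    return None

def dlog_interval(g, h, p, N):
    for tweak in range(10):                    # retry on the rare failed run
        n = kangaroo(g, h, p, N, tweak)
        if n is not None and pow(g, n, p) == h % p:
            return n

p = 2**61 - 1                                  # a Mersenne prime (toy group Z_p^*)
g, n = 3, 123456789
print(dlog_interval(g, pow(g, n, p), p, N=2**30))  # 123456789
```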
A non-uniform birthday problem with applications to discrete logarithms
We consider a generalisation of the birthday problem that arises in the analysis of algorithms for certain variants of the discrete logarithm problem in groups. More precisely, we consider sampling coloured balls and placing them in urns, such that the distribution of assigning balls to urns depends on the colour of the ball. We determine the expected number of trials until two balls of different colours are placed in the same urn. As an aside we present an amusing ``paradox'' about birthdays.
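The setting can be illustrated by direct simulation (a toy experiment with hypothetical distributions, not the paper's analysis): balls of two colours are placed in urns, each colour with its own urn distribution, and we count placements until two differently coloured balls share an urn.

```python
import random

def trials_until_bicolour_collision(n_urns, red_weights, blue_weights):
    """Alternately drop red and blue balls; the ball's colour determines its
    urn distribution. Return the number of balls placed when two balls of
    different colours first share an urn."""
    red_urns, blue_urns = set(), set()
    t = 0
    while True:
        t += 1
        if t % 2:
            u = random.choices(range(n_urns), weights=red_weights)[0]
            if u in blue_urns:
                return t
            red_urns.add(u)
        else:
            u = random.choices(range(n_urns), weights=blue_weights)[0]
            if u in red_urns:
                return t
            blue_urns.add(u)

n = 365
uniform = [1] * n
skewed = [2 if i < n // 2 else 1 for i in range(n)]   # hypothetical bias
reps = 500
est = sum(trials_until_bicolour_collision(n, uniform, skewed) for _ in range(reps)) / reps
print("mean balls until a two-colour collision:", est)
```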
Using Equivalence Classes to Accelerate Solving the Discrete Logarithm Problem in a Short Interval
The Pollard kangaroo method solves the discrete logarithm problem (DLP) in an interval of size $N$ with heuristic average case expected running time approximately $2 \sqrt{N}$ group operations. A recent variant of the kangaroo method, requiring one or two inversions in the group, solves the problem in approximately $1.71 \sqrt{N}$ group operations. It is well known that the Pollard rho method can be sped up by using equivalence classes (such as orbits of points under an efficiently computed group homomorphism), but such ideas have not been used for the DLP in an interval.
Indeed, it seems impossible to implement the standard kangaroo method with equivalence classes.
The main result of the paper is to give an algorithm, building on work of Gaudry and Schost, to solve the DLP in an interval of size $N$ with heuristic average case expected running time of close to $1.36\sqrt{N}$ group operations for groups with fast inversion.
In practice the algorithm is not quite this fast, due to problems with pseudorandom walks going outside the boundaries of the search space, and due to the overhead of handling fruitless cycles. We present some experimental results.
This is the full version (with some minor corrections and updates) of the paper which was published in P. Q. Nguyen and D. Pointcheval (eds.), PKC 2010, Springer LNCS 6056 (2010) 368-383.
An Evaluation of Hash Functions on a Power Analysis Resistant Processor Architecture
Cryptographic hash functions are omnipresent components in security-critical software and devices; they support, for example, digital signature and data authenticity schemes, mechanisms for key derivation, pseudo-random number generation and so on. A criterion for candidate hash functions in the SHA-3 contest is resistance against side-channel analysis, which is a major concern for mobile devices as well. This paper explores the implementation of said candidates on a variant of the Power-Trust platform; our results highlight that this platform represents a flexible solution to power analysis attacks, implying only a modest performance overhead.
Better Key Sizes (and Attacks) for LWE-Based Encryption
We analyze the concrete security and key sizes of theoretically sound
lattice-based encryption schemes based on the ``learning with errors''
(LWE) problem. Our main contributions are: (1)~a new lattice attack
on LWE that combines basis reduction with an enumeration algorithm
admitting a time/success tradeoff, which performs better than the
simple distinguishing attack considered in prior analyses;
(2)~concrete parameters and security estimates for an LWE-based
cryptosystem that is more compact and efficient than the well-known
schemes from the literature. Our new key sizes are up to $10$ times
smaller than prior examples, while providing even stronger concrete
security levels.
Last updated: 2011-01-06
Cryptanalysis of Hummingbird-1
Hummingbird-1 is a lightweight encryption and message authentication primitive published in RISC '09 and WLC '10. Hummingbird-1 utilizes a 256-bit secret key and a 64-bit IV. We report a chosen-IV, chosen-message attack that can recover the full secret key with a few million chosen messages processed under two related IVs. The attack requires at most $2^{64}$ off-line computational effort. The attack has been implemented and demonstrated to work against a real-life implementation of Hummingbird-1. By attacking the differentially weak E component, the overall attack complexity can be reduced by a significant factor. Our cryptanalysis is based on a differential divide-and-conquer method with some novel techniques that are uniquely applicable to ciphers of this type.
Statistical Analysis of Reduced Round Compression Functions of SHA-3 Second Round Candidates
The National Institute of Standards and Technology announced a competition in 2008, the winner of which will be acknowledged as the new hash standard SHA-3. There are 14 second round candidates, selected from among 51 first round algorithms. In this paper, we apply statistical analysis to the second round candidate algorithms using two different methods, and observe how conservative the algorithms are in terms of randomness. The first method evaluates 256-bit outputs, obtained from reduced round versions of the algorithms, through statistical randomness tests. The second method evaluates the randomness of the reduced round compression functions based on certain cryptographic properties. This analysis gives a rough idea of the security factor of the compression functions.
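As an example of the first method's ingredients, here is the NIST SP 800-22 frequency (monobit) test applied to concatenated 256-bit outputs; SHA-256 over a counter stands in here for a reduced-round candidate, which is an assumption of this sketch:

```python
import hashlib
import math

def monobit_pvalue(bits: str) -> float:
    """NIST SP 800-22 frequency (monobit) test; small p-values (< 0.01)
    indicate the bit string is unlikely to be uniformly random."""
    n = len(bits)
    s = sum(1 if b == "1" else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * n))

# 64 concatenated 256-bit outputs over a counter input.
bits = "".join(
    format(int.from_bytes(hashlib.sha256(i.to_bytes(4, "big")).digest(), "big"), "0256b")
    for i in range(64)
)
print("monobit p-value:", monobit_pvalue(bits))
```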
Separating Succinct Non-Interactive Arguments From All Falsifiable Assumptions
In this paper, we study succinct computationally sound proofs (arguments) for NP, whose communication complexity is polylogarithmic in the instance and witness sizes. The seminal works of Kilian '92 and Micali '94 show that such arguments can be constructed under standard cryptographic hardness assumptions with four rounds of interaction, and that they can be made non-interactive in the random-oracle model. The latter construction also gives us some evidence that succinct non-interactive arguments (SNARGs) may exist in the standard model with a common reference string (CRS), by replacing the oracle with a sufficiently complicated hash function whose description goes in the CRS. However, we currently do not know of any construction of SNARGs with a formal proof of security under any simple cryptographic assumption.
In this work, we give a broad black-box separation result, showing that black-box reductions cannot be used to prove the security of any SNARG construction based on any falsifiable cryptographic assumption. This includes essentially all common assumptions used in cryptography (one-way functions, trapdoor permutations, DDH, RSA, LWE etc.). More generally, we say that an assumption is falsifiable if it can be modeled as an interactive game between an adversary and an efficient challenger that can efficiently decide if the adversary won the game. This is similar, in spirit, to the notion of falsifiability of Naor '03, and captures the fact that we can efficiently check if an adversarial strategy breaks the assumption.
Our separation result also extends to designated verifier SNARGs, where the verifier needs a trapdoor associated with the CRS to verify arguments, and slightly succinct SNARGs, whose size is only required to be sublinear in the statement and witness size.
The Round Complexity of General VSS
The round complexity of verifiable secret sharing (VSS) schemes has been studied extensively for threshold adversaries. In particular, Fitzi et al. showed an efficient 3-round VSS for $n \geq 3t+1$ \cite{FitziVSSTCC06}, where an infinitely powerful adversary can corrupt $t$ (or fewer) parties out of $n$ parties. This paper shows that for non-threshold adversaries:
- Two-round VSS is possible iff the underlying adversary structure satisfies the ${\cal Q}^4$ condition;
- Three-round VSS is possible iff the underlying adversary structure satisfies the ${\cal Q}^3$ condition.
Further, as a special case of our three-round protocol, we obtain a more efficient 3-round VSS than the VSS of Fitzi et al. for $n = 3t+1$. More precisely, the communication complexity of the reconstruction phase is reduced from ${\cal O}(n^3)$ to ${\cal O}(n^2)$. We finally point out a flaw in the reconstruction phase of the VSS of Fitzi et al., and show how to fix it.
A New Model of Binary Elliptic Curves with Fast Arithmetic
This paper presents a new model of ordinary elliptic curves with fast arithmetic over fields of characteristic two. In addition, we propose two isomorphism maps between the new curves and Weierstrass curves. We give a new explicit addition law for the new binary curves and prove that it corresponds to the usual addition law on Weierstrass curves. We also present fast unified addition formulae and doubling formulae for these curves. The unified addition formulae cost $12M+2D$, where $M$ is the cost of a field multiplication and $D$ is the cost of multiplying by a curve parameter. These formulae are more efficient than others in the literature. Finally, we present explicit formulae for $w$-coordinate differential addition. In a basic step of the Montgomery ladder, a projective differential addition and doubling cost $5M$ and $1M+1D$ respectively, and a mixed $w$-coordinate differential addition costs $4M$.
How to Improve Rebound Attacks
Rebound attacks are a state-of-the-art analysis method for hash functions. These cryptanalysis methods are based on a well-chosen differential path and have been applied to several hash functions from the SHA-3 competition, providing the best known analysis in these cases. In this paper we study rebound attacks in detail and find, for a large number of cases, that the complexities of existing attacks can be improved.
This is done by identifying problems that optimally adapt to the cryptanalytic situation, and by using better algorithms to find solutions for the differential path. Our improvements affect one particular operation that appears in most rebound attacks and
which is often the bottleneck of the attacks. This operation, which varies depending on the attack, can be roughly described as {\em merging} large lists. As a result, we introduce new general purpose algorithms for enabling further rebound analysis to be as performant as possible.
We illustrate our new algorithms on real hash functions.
More precisely, we demonstrate how to reduce the complexities of the best known analysis on four SHA-3 candidates: JH, Gr\o{}stl, ECHO and {\sc Lane} and on the best known rebound analysis on the SHA-3 candidate Luffa.
Weakness of two ID-based remote mutual authentication with key agreement protocols for mobile devices
Recently, Yoon et al. and Wu proposed two improved remote mutual authentication and key agreement schemes for mobile devices based on elliptic curve cryptosystems. In this paper, we show that the protocol of Yoon et al. fails to provide perfect forward secrecy and fails to achieve explicit key confirmation. We also point out that Wu's scheme loses efficiency by using double secret keys and a private/public key pair, and is vulnerable to password guessing and forgery attacks.
A Closer Look at Keyboard Acoustic Emanations: Random Passwords, Typing Styles and Decoding Techniques
We take a closer look at keyboard acoustic emanations, specifically for the purpose of eavesdropping on random passwords. In this scenario, dictionary and HMM language models are not applicable; the attacker can only utilize the raw acoustic information that has been recorded. We investigate several existing signal processing techniques for this purpose, and introduce a novel technique, time-frequency decoding, that improves the detection accuracy compared to previous techniques. We also carefully examine the effect of typing style, a crucial variable largely ignored by prior research, on the detection accuracy. Our results show that when the same typing style (hunt and peck) is used for both training and decoding, the best-case success rate for correctly detecting the typed key is 64% per character. The results also show that switching to touch typing during the decoding stage reduces the success rate, but using the time-frequency technique we can still achieve a success rate of around 40% per character.
Our work takes the keyboard acoustic attack one step further, bringing it closer to a full-fledged vulnerability under realistic scenarios (different typing styles and random passwords). Our results suggest that while the performance of these attacks degrades under such conditions, the time-frequency technique still makes it possible to considerably reduce the exhaustive-search complexity of retrieving a random password.
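As a rough illustration of what a time-frequency decoding pipeline in this setting could look like, here is a Python sketch; the window sizes, band pooling and nearest-template decoder are our own illustrative choices, not the authors' implementation.

    import numpy as np

    def tf_features(segment, win=256, hop=128, bands=16):
        # Coarse time-frequency fingerprint of one keystroke: STFT magnitudes
        # pooled into a few frequency bands per window (illustrative choices).
        # 'segment' is a fixed-length 1-D numpy array of audio samples.
        frames = [segment[i:i + win] * np.hanning(win)
                  for i in range(0, len(segment) - win, hop)]
        spec = np.abs(np.fft.rfft(frames, axis=1))[:, 1:]   # drop DC -> 128 bins
        pooled = spec.reshape(len(frames), bands, -1).mean(axis=2)
        return pooled.ravel() / (np.linalg.norm(pooled) + 1e-9)

    def decode_key(segment, templates):
        # Nearest-template decoding; 'templates' maps key labels to feature
        # vectors obtained from labeled training keystrokes.
        f = tf_features(segment)
        return min(templates, key=lambda k: np.linalg.norm(f - templates[k]))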
On Functional Decomposition of Multivariate Polynomials with Differentiation and Homogenization
In this paper, we give a theoretical analysis of algorithms that compute functional decompositions of multivariate polynomials based on differentiation and homogenization, as proposed by Ye, Dai and Lam (1999) and by Faugère and Perret (2006, 2008, 2009). We show that a degree-proper functional decomposition for a set of randomly decomposable quartic homogeneous polynomials can be computed by the algorithm with high probability. This solves a conjecture proposed by Ye, Dai and Lam (1999). We also conjecture that the decomposition of a set of polynomials can be computed from that of its homogenization with high probability. Finally, we prove that the right decomposition factors for a set of polynomials can be computed from its right decomposition factor space. Combining these results, we prove that the algorithm computes a degree-proper decomposition for a set of randomly decomposable quartic polynomials with probability one when the base field has characteristic zero, and with probability close to one when the base field is a finite field of sufficiently large odd order, under the assumption that the conjecture is correct.
Cryptanalysis of Dual CRT-RSA
Several schemes under the framework of Dual RSA have been proposed by Sun et al. (IEEE-IT, August 2007). We concentrate here on the Dual CRT-RSA scheme and present certain ranges of parameters for which it is insecure. As a corollary of our work, we prove that the Dual Generalized Rebalanced-RSA (Scheme III of Sun et al.) can be efficiently broken for a significant region where the scheme was claimed to be secure.
An Improved Algebraic Attack on Hamsi-256
Hamsi is one of the $14$ second-stage candidates in NIST's SHA-3 competition. The only previous attack on this hash function was a very marginal attack on its 256-bit version published by Thomas Fuhr at Asiacrypt $2010$, which is better than generic attacks only for very short messages of fewer than $100$ 32-bit blocks, and is only $26$ times faster than a straightforward exhaustive search. In this paper we describe a different algebraic attack which is less marginal: it is better than the best known generic attack for all practical message sizes (up to $4$ gigabytes), and it outperforms exhaustive search by a factor of at least $512$. The attack is based on the observation that in order to discard a possible second preimage, it suffices to show that one of its hashed output bits is wrong. Since the output bits of the compression function of Hamsi-256 can be described by low-degree polynomials, it is actually faster to compute a small number of output bits by a fast polynomial evaluation technique than via the official algorithm.
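The filtering idea can be seen in miniature in the following Python sketch (a toy ANF, not Hamsi's actual output-bit polynomials): when one output bit is a known low-degree polynomial of the input bits, a single cheap evaluation discards roughly half of the wrong second-preimage candidates.

    import random

    # Hypothetical degree-3 ANF for one output bit: each monomial is a tuple
    # of input-bit indices, the empty tuple being the constant 1.
    anf = [(0, 3, 7), (1, 2), (5,), ()]

    def output_bit(x):
        # Evaluate the ANF over GF(2) at the bit vector x.
        return sum(all(x[i] for i in mono) for mono in anf) & 1

    target = 1
    candidates = [[random.randint(0, 1) for _ in range(8)] for _ in range(1000)]
    survivors = [c for c in candidates if output_bit(c) == target]
    # Roughly half the candidates are discarded at the cost of one cheap
    # polynomial evaluation each; only the survivors need full hashing.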
Fast Endomorphism for any Genus 2 Hyperelliptic Curve over a Finite Field of Even Characteristic
In EUROCRYPT 2009, Galbraith, Lin and Scott constructed an efficiently computable endomorphism for a large family of elliptic curves defined over finite fields of large characteristic. They demonstrated that the endomorphism can be used to accelerate scalar multiplication in the elliptic curve cryptosystem based on these curves. In this paper we extend the method to any genus 2 hyperelliptic curve defined over a finite field of even characteristic. We propose an efficient algorithm to generate a random genus 2 hyperelliptic curve and its quadratic twist equipped with a fast endomorphism on the Jacobian. We also analyze the operation count of the resulting scalar multiplication.
Exact, Efficient and Information-Theoretically Secure Voting with an Arbitrary Number of Cheaters
We present three voting protocols with unconditional privacy and correctness, without assuming any bound on the number of corrupt participants. All protocols have polynomial complexity and require private channels and a simultaneous broadcast channel. Unlike previously proposed protocols in this model, the protocols that we present deterministically output the exact tally. Our first protocol is a basic voting scheme which allows voters to interact in order to compute the tally. Privacy of the ballot is unconditional in the sense that regardless of the behavior of the dishonest participants nothing can be learned through the protocol that could not be learned in an ideal realisation. Unfortunately, a single dishonest participant can make the protocol abort, in which case the dishonest participants can nevertheless learn the outcome of the tally. Our second protocol introduces voting authorities which improves the communication complexity by limiting interaction to be only between voters and authorities and among the authorities themselves; the simultaneous broadcast is also limited to the authorities. In the second protocol, as long as a single authority is honest, the privacy is unconditional, however, a single corrupt authority or a single corrupt voter can cause the protocol to abort. Our final protocol provides a safeguard against corrupt voters by enabling a verification technique to allow the authorities to revoke incorrect votes without aborting the protocol. Finally, we discuss the implementation of a simultaneous broadcast channel with the use of temporary computational assumptions, yielding versions of our protocols that achieve everlasting security.
Secure Multiparty Computation with Partial Fairness
A protocol for computing a functionality is secure if an adversary in this protocol cannot cause more harm than in an ideal computation where parties give their inputs to a trusted party which returns the output of the functionality to all parties. In particular, in the ideal model such computation is fair: all parties get the output. Cleve (STOC 1986) proved that, in general, fairness is not possible without an honest majority. To overcome this impossibility, Gordon and Katz (Eurocrypt 2010) suggested a relaxed definition, 1/p-secure computation, which guarantees partial fairness. For two parties, they construct 1/p-secure protocols for functionalities for which the size of either the domain or the range is polynomial (in the security parameter). Gordon and Katz ask whether their results can be extended to multiparty protocols.
We study 1/p-secure protocols in the multiparty setting for general functionalities. Our main result is constructions of 1/p-secure protocols when the number of parties is constant, provided that fewer than 2/3 of the parties are corrupt. Our protocols require that either (1) the functionality is deterministic and the size of the domain is polynomial (in the security parameter), or (2) the functionality may be randomized and the size of the range is polynomial. If the size of the domain is constant and the functionality is deterministic, then our protocol is efficient even when the number of parties is O(log log n) (where n is the security parameter). On the negative side, we show that when the number of parties is super-constant, 1/p-secure protocols are not possible when the size of the domain is polynomial.
A Broadcast Attack against NTRU Using Ding's Algorithm
Very recently, Ding proposed an ingenious algorithm to solve the LWE problem with bounded errors in polynomial time. We find that it can easily be used to give a broadcast attack against NTRU, the most efficient lattice-based public-key cryptosystem known to date.
A New Class of Bent--Negabent Boolean Functions
In this paper we develop a technique for constructing bent--negabent Boolean functions by using complete mapping polynomials. Using this technique we demonstrate that for each $\ell \ge 2$ there exist bent--negabent functions on $n = 12\ell$ variables with algebraic degree $\frac{n}{4}+1 = 3\ell + 1$. It is also demonstrated that there exist bent--negabent functions on $8$ variables with algebraic degrees $2$, $3$ and $4$.
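For reference, the standard definition used here (our recap, not from the abstract): a polynomial $P \in \mathbb{F}_q[x]$ is a complete mapping polynomial if both $x \mapsto P(x)$ and $x \mapsto P(x) + x$ are permutations of $\mathbb{F}_q$.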
Solving Systems of Multivariate Quadratic Equations over Finite Fields or: From Relinearization to MutantXL
In this article we investigate algorithms for solving non-linear multivariate equations over finite fields and the relations between them. For non-binary fields, computing the Gröbner basis of the corresponding ideal is usually the best choice in this context. One class of algorithms is based on Buchberger's algorithm; today's best algorithms, like F_4 and F_5, belong to this class. Another strategy to solve such systems, called eXtended Linearization (XL), was introduced at Eurocrypt 2000. In the past, both strategies were treated as different ideas, and there was a heated discussion over which of them to prefer. Since Ars et al. proved in 2004 that XL is a redundant version of F_4, the latter seemed to be the winner. But that was not the end of the line, as piece by piece the idea emerged that both classes are only different views on the same problem. We even think that they are just different time-memory optimizations. One indication of this can be found in the PhD thesis of Albrecht, who introduced MatrixF_5, an F_5 version of XL. A second indication can be found in the PhD thesis of Mohamed, who introduced a memory-friendly version of XL using Wiedemann's algorithm. We give further evidence by providing a theoretical analysis of MutantXL. We show that MutantXL solves at the same degree of regularity as its competitors F_4 and F_5 for most instances. Thereby we also confirm recent results of Albrecht, who showed that MutantXL is a redundant version of F_4, i.e. it never solves below the degree of regularity. We show that MutantXL must, compared to WiedemannXL, pay for its gain in efficiency with memory.
To enhance the understanding of the whole XL family of algorithms, we give a full overview from Relinearization via XL to MutantXL and provide some additional theoretical insights.
Attribute-Based Signatures
We introduce {\em Attribute-Based Signatures (ABS)}, a versatile primitive that allows a party to sign a message with fine-grained control over identifying information. In ABS, a signer, who possesses a set of attributes from the authority, can sign a message with a predicate that is satisfied by his attributes. The signature reveals no more than the fact that a single user with some set of attributes satisfying the predicate has attested to the message. In particular, the signature hides the attributes used to satisfy the predicate and any identifying information about the signer (that could link multiple signatures as being from the same signer). Furthermore, users cannot collude to pool their attributes together.
We give a general framework for constructing ABS schemes, and then show several practical instantiations based on groups with bilinear pairing operations, under standard assumptions. Further, we give a construction which is secure even against a malicious attribute authority, but the security for this scheme is proven in the generic group model. We describe several practical problems that motivated this work, and how ABS can be used to solve them. Also, we show how our techniques allow us to extend Groth-Sahai NIZK proofs to be simulation-extractable and identity-based with low overhead.
Cache Games - Bringing Access Based Cache Attacks on AES to Practice
Side channel attacks on cryptographic systems are attacks exploiting information gained from physical implementations rather than utilizing theoretical weaknesses of a scheme. In particular, during the last years, major achievements were made for the class of access-driven cache-attacks. The source of information leakage for such attacks are the locations of memory accesses performed by a victim process.
In this paper we analyze the case of AES and present an attack which is capable of recovering the full secret key in almost real time for AES-128, requiring only a very limited number of observed encryptions. Unlike most other attacks, ours neither needs to know the ciphertext, nor any information about the plaintext (such as its distribution). Moreover, for the first time we also show how the plaintext can be recovered without having access to the ciphertext. Further, our spy process can be run under an unprivileged user account. It is the first working attack on implementations using compressed tables, where it is no longer possible to find out the beginning of AES rounds, a cornerstone of all efficient previous attacks. All results of our attack have been demonstrated by a fully working implementation, and do not rely solely on theoretical considerations or simulations.
A contribution of probably independent interest is a denial-of-service attack on the scheduler of current Linux systems (CFS), which allows memory accesses to be monitored with unprecedentedly high precision. Finally, we give some generalizations of our attack, and suggest some possible countermeasures which would render it impossible.
Differential Attack on Five Rounds of the SC2000 Block Cipher
The SC2000 block cipher has a 128-bit block size and a user key of 128, 192 or 256 bits, and employs a total of 6.5 rounds when a 128-bit user key is used. It is a CRYPTREC-recommended e-government cipher. In this paper we describe two 4.75-round differential characteristics of SC2000 with probability $2^{-126}$ and seventy-six 4.75-round differential characteristics with probability $2^{-127}$. Finally, we present a differential cryptanalysis attack on a 5-round reduced version of SC2000 used with a 128-bit key; the attack requires $2^{125.68}$ chosen plaintexts and has a time complexity of $2^{125.75}$ 5-round SC2000 encryptions. It suggests for the first time that the safety margin of SC2000 with a 128-bit key decreases below one and a half rounds.
Better Key Sizes (and Attacks) for LWE-Based Encryption
We analyze the concrete security and associated key sizes for theoretically sound lattice-based encryption schemes based on the ``learning with errors'' (LWE) problem. Our main contributions are (1)~a new, detailed model and experimental analysis of how basis-reduction and post-reduction attacks perform on the specific family of random lattices arising from the use of LWE, and (2)~concrete parameters and security estimates for an LWE-based cryptosystem that is more compact and efficient than the better-known schemes from the literature. For security levels exceeding that of a $128$-bit symmetric cipher, our new key sizes are at least $10$ times smaller than prior recommendations.
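To fix notation, the following toy Regev-style LWE encryption sketch in Python shows the scheme shape such parameter analyses apply to; the parameters below are purely illustrative and bear no relation to the concrete key sizes derived in the paper.

    import numpy as np
    rng = np.random.default_rng(0)

    n, m, q = 16, 64, 4093            # toy parameters, nothing like real ones

    # Key generation: A is public, s is the secret, e is a small error vector.
    A = rng.integers(0, q, (m, n))
    s = rng.integers(0, q, n)
    e = rng.integers(-2, 3, m)
    b = (A @ s + e) % q               # public key (A, b)

    def encrypt(bit):
        r = rng.integers(0, 2, m)     # random subset-sum over the rows
        return (r @ A % q, (r @ b + bit * (q // 2)) % q)

    def decrypt(u, v):
        d = (v - u @ s) % q           # = bit*(q/2) + small error mod q
        return int(min(d, q - d) > q // 4)

    u, v = encrypt(1)
    assert decrypt(u, v) == 1

Decryption works because $v - \langle u, s \rangle \equiv \langle r, e \rangle + \mathrm{bit} \cdot \lfloor q/2 \rfloor \pmod{q}$ and $|\langle r, e \rangle| < q/4$ for these parameters.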
Bonsai Trees, or How to Delegate a Lattice Basis
We introduce a new \emph{lattice-based} cryptographic structure called
a \emph{bonsai tree}, and use it to resolve some important open
problems in the area. Applications of bonsai trees include:
\begin{itemize}
\item An efficient, stateless `hash-and-sign' signature scheme in the
\emph{standard model} (i.e., no random oracles), and
\item The first \emph{hierarchical} identity-based encryption (HIBE)
scheme (also in the standard model) that does not rely on bilinear
pairings.
\end{itemize}
Interestingly, the abstract properties of bonsai trees seem to have no
known realization in conventional number-theoretic cryptography.
Beyond the Limits of DPA: Combined Side-Channel Collision Attacks
The fundamental problem of extracting the highest possible amount of key-related information using the lowest possible number of measurements is central to side-channel attacks against embedded implementations of cryptographic algorithms. To address it, this work proposes a novel framework enhancing side-channel collision attacks with divide-and-conquer attacks such as differential power analysis (DPA). An information-theoretical metric is introduced for the evaluation of collision detection efficiency. Improved methods of dimension reduction for side-channel traces are developed based on a statistical model of Euclidean distance.
The theoretical and experimental results of this work confirm that DPA-combined collision attacks are superior to both DPA-only and collision-only attacks. The new methods of dimension reduction lead to further complexity improvements. All attacks are treated for the case of AES-128 and are practically validated on a widespread 8-bit RISC microcontroller whose architecture is similar to that of many smart cards.
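A caricature of the collision-detection step, in Python (our sketch; the paper's statistical model of Euclidean distance and its dimension-reduction methods are considerably more refined):

    import numpy as np

    def collision_score(trace_a, trace_b):
        # Smaller Euclidean distance between (dimension-reduced) trace
        # segments suggests the two S-box computations processed the
        # same intermediate value.
        return np.linalg.norm(trace_a - trace_b)

    def detect_collisions(segments, threshold):
        # Return index pairs whose segments are close enough to call a
        # collision; segments[i] holds the samples covering S-box i.
        hits = []
        for i in range(len(segments)):
            for j in range(i + 1, len(segments)):
                if collision_score(segments[i], segments[j]) < threshold:
                    hits.append((i, j))
        return hits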
Higher-order differential properties of Keccak and Luffa
In this paper, we identify higher-order differential and zero-sum properties in the full Keccak-f permutation, in the Luffa v1 hash function, and in components of the Luffa v2 algorithm. These structural properties rely on a new bound on the degree of iterated permutations with a nonlinear layer composed of parallel applications of smaller balanced Sboxes. These techniques yield zero-sum partitions of size $2^{1590}$ for the full Keccak-f permutation and several observations on the Luffa hash family. We first show that Luffa v1 applied to one-block messages is a function of 255 variables with degree at most 251. This observation leads to the construction of a higher-order differential distinguisher for the full Luffa v1 hash function, similar to the one presented by Watanabe et al. on a reduced version. We show that similar techniques can be used to find all-zero higher-order differentials in the Luffa v2 compression function, but the additional blank round destroys this property in the hash function.
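The structural principle behind zero-sums can be demonstrated in a few lines of Python (a toy degree-2 map, not Keccak or Luffa): a function of algebraic degree $d$ XOR-sums to zero over every affine subspace of dimension greater than $d$.

    import itertools, random

    def f(x):                       # degree-2 Boolean map on 8 bits -> 4 bits
        b = [(x >> i) & 1 for i in range(8)]
        return (b[0] & b[1]) | ((b[2] & b[3]) << 1) | ((b[4] ^ b[5]) << 2) | \
               ((b[6] & b[7] ^ b[0]) << 3)

    basis = [0x01, 0x04, 0x10]      # a 3-dimensional cube (3 > deg f = 2)
    offset = random.randrange(256)  # any coset of the cube works
    acc = 0
    for coeffs in itertools.product([0, 1], repeat=3):
        point = offset
        for c, v in zip(coeffs, basis):
            if c:
                point ^= v
        acc ^= f(point)
    assert acc == 0                 # holds for every offset, by the degree bound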
Improved Collisions for Reduced ECHO-256
In this work, we present a collision attack on 5 out of 8 rounds of the ECHO-256 hash function with a complexity of $2^{112}$ in time and $2^{85.3}$ in memory. We further show that the merge inbound phase can still be solved in the case of hash function attacks on ECHO. As correctly observed by Jean et al., the merge inbound phase of previous hash function attacks succeeds only with a probability of $2^{-128}$. The main reason for this behavior is the low rank of the linear SuperMixColumns transformation. However, since there is enough freedom in ECHO, we can solve the resulting linear equations with a complexity much lower than $2^{128}$. On the other hand, this low rank of the linear SuperMixColumns transformation also allows us to extend the collision attack on the reduced hash function from 4 to 5 rounds. Additionally, we present a collision attack on 6 rounds of the compression function of ECHO-256 and show that a subspace distinguisher is still possible for 7 out of 8 rounds of the compression function. Both compression function attacks have a complexity of $2^{160}$ with memory requirements of $2^{128}$ and chosen salt.
Group Message Authentication
Group signatures are a powerful primitive with many practical applications, allowing a group of parties to share a signature functionality while protecting the anonymity of the signer. However, despite intensive research in past years, there is still no fully satisfactory implementation of group signatures in the plain model. The schemes proposed so far are either too inefficient to be used in practice, or their security is based on rather strong, non-standard assumptions.
We observe that for some applications the full power of group signatures is not necessary. For example, a group signature can be verified by any third party, while in many applications such a universal verifiability is not needed or even not desired. Motivated by this observation, we propose a notion of \emph{group message authentication}, which can be viewed as a relaxation of group signatures. Group message authentication enjoys the group-oriented features of group signatures, while dropping some of the features which are not needed in many real-life scenarios. An example application of group message authentication is an implementation of an \emph{anonymous} credit card.
We present a generic implementation of group message authentication, and also propose an efficient concrete implementation based on standard assumptions, namely strong RSA and DDH.
Enhanced FPGA Implementation of the Hummingbird Cryptographic Algorithm
Hummingbird is a novel ultra-lightweight cryptographic algorithm aimed at resource-constrained devices. In this work, an enhanced hardware implementation of the Hummingbird cryptographic algorithm for the low-cost Spartan-3 FPGA family is described. The enhancement is due to the introduction of a coprocessor approach. Note that all Virtex and Spartan FPGAs contain many embedded memory blocks, and this work explores the use of these functional blocks. The intrinsic serialism of the algorithm is exploited so that each step performs just one operation on the data. We compare our performance results with other reported FPGA implementations of lightweight cryptographic algorithms. To the best of the authors' knowledge, this work presents the smallest and most efficient FPGA implementation of the Hummingbird cryptographic algorithm.
Smaller decoding exponents: ball-collision decoding
Very few public-key cryptosystems are known that can encrypt and decrypt in time b^(2+o(1)) with conjectured security level 2^b against conventional computers and quantum computers. The oldest of these systems is the classic McEliece code-based cryptosystem.
The best attacks known against this system are generic decoding attacks that treat McEliece’s hidden binary Goppa codes as random linear codes. A standard conjecture is that the best possible w-error-decoding attacks against random linear codes of dimension k and length n take time 2^((alpha(R,W)+o(1))n) if k/n goes to R and w/n goes to W as n goes to infinity.
Before this paper, the best upper bound known on the exponent alpha(R, W) was the exponent of an attack introduced by Stern in 1989. This paper introduces “ball-collision decoding” and shows that it has a smaller exponent for each (R, W): the speedup from Stern’s algorithm to ball-collision decoding is exponential in n.
VMCrypt - Modular Software Architecture for Scalable Secure Computation
Garbled circuits play a key role in secure computation. Unlike previous work, which focused mainly on efficiency and automation aspects of secure computation, in this paper we focus on software modularity and scalability, considering very large circuits. Our main contribution is a virtual machine that dynamically loads hardware descriptions into memory and destructs them as soon as they are done computing. Our software also introduces a new technique for parallel evaluation of garbled circuits. The software is designed in a completely modular fashion, allowing developers to integrate garbled circuits through an API (Application Programming Interface) without having to modify the base code. We measure the performance of this architecture on several circuits with hundreds of millions of gates. To the best of our knowledge, these are the largest scalable secure computations done to date.
Improved Preimage Attack on One-block MD4
We propose an improved preimage attack on one-block MD4 with a time complexity of $2^{94.98}$ MD4 compression function operations, as compared to $2^{107}$ in \cite{AokiS-sac08}. We revisit the attack procedure of \cite{AokiS-sac08} and formulate the complexity of a preimage attack on one-block MD4. With the help of this complexity formula, we attain the result mainly through the following two aspects. First, we compute two more steps backward to obtain two more chaining values for comparison during the meet-in-the-middle attack. Second, we find two more neutral words in one independent chunk, and then propose the multi-neutral-word partial-fixing technique to gain more message freedom and skip ten steps for partial fixing, as compared to four steps previously. We also use the initial-structure technique and apply the same ideas to improve the pseudo-preimage and preimage attacks on Extended MD4, with improvement factors of $2^{25.2}$ and $2^{12.6}$ respectively, as compared to the previous attacks in \cite{SasakiA-acisp09}.
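The meet-in-the-middle skeleton that such preimage attacks refine can be shown in toy form (an invertible toy step function stands in for the MD4 step; the tiny chunk sizes are illustrative):

    import random

    MASK = 0xFFFF

    def step(h, w):                    # invertible toy "compression step"
        return ((h + w) * 0x9E37 + 1) & MASK

    def unstep(h, w):                  # its inverse, enabling backward computation
        return ((h - 1) * pow(0x9E37, -1, MASK + 1) - w) & MASK

    iv = 0x1234
    w1_true, w2_true = 0xBE, random.randrange(256)
    target = step(step(iv, w1_true), w2_true)        # two-step toy "hash"

    forward = {step(iv, w1): w1 for w1 in range(256)}    # chunk 1: forward table
    for w2 in range(256):                                # chunk 2: backward scan
        mid = unstep(target, w2)
        if mid in forward:                               # chaining values meet
            w1 = forward[mid]
            assert step(step(iv, w1), w2) == target      # a genuine preimage
            break

The neutral words of the real attack play the role of the independent chunks here: the two message halves can be varied separately, so matching on the middle chaining value costs far less than searching the full message space.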
Secret Key Leakage from Public Key Perturbation of DLP-based Cryptosystems
Finding efficient countermeasures to protect cryptosystems against fault attacks is challenging, as flaws are constantly discovered in designs. Even elements that do not seem critical, such as public keys, must be protected. Starting from known attacks against RSA, we develop a new attack on DLP-based cryptosystems, built additionally on a lattice analysis that recovers DSA secret keys from partially known nonces. Based on a realistic fault model, our attack only requires 16 faulty signatures to recover a 160-bit DSA secret key within a few minutes on a standard PC. These results significantly improve on previous public-element fault attacks in the context of DLP-based cryptosystems.
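For context, the standard DSA relation underlying the lattice stage (our recap): each signature $(r, s)$ on a message $m$ satisfies

    s \equiv k^{-1}\left(H(m) + x\,r\right) \pmod{q}, \qquad \text{hence} \qquad k \equiv s^{-1}H(m) + s^{-1}r\,x \pmod{q},

so every signature with a partially known nonce $k$ yields an approximate linear relation in the secret key $x$, an instance of the hidden number problem that lattice reduction solves once enough faulty signatures have been collected.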
Fast Algorithm to solve a family of SIS problem with $l_\infty$ norm
In this paper, we present a new algorithm for the small integer solution (SIS) problem in the case where the solution is bounded by an integer $\beta$ in the $l_\infty$ norm, which we call the bounded SIS (BSIS) problem. If the difference between the row dimension $n$ and the column dimension $m$ of the corresponding matrix is relatively small with respect to $m$, we can solve the problem easily, with complexity polynomial in $m$.
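For reference (standard definition, our recap): given $A \in \mathbb{Z}_q^{n \times m}$, the SIS problem asks for a nonzero $z \in \mathbb{Z}^m$ with

    A z \equiv 0 \pmod{q} \quad \text{and} \quad \|z\|_\infty \leq \beta,

which with the $l_\infty$ bound $\beta$ promised to hold is exactly the BSIS setting considered above.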
The Cube Attack on Stream Cipher Trivium and Quadraticity Tests
In 2008, I. Dinur and A. Shamir presented a new type of algebraic attack on symmetric ciphers named the cube attack. The method has been applied to reduced variants of the stream ciphers Trivium and Grain-128, to reduced variants of the block ciphers Serpent and CTC, and to a reduced version of the keyed hash function MD6. Independently, a very similar attack named AIDA was introduced by M. Vielhaber. In this paper we develop quadraticity tests within the cube attack and apply them to a variant of the stream cipher Trivium reduced to 709 initialization rounds. Using this method we obtain the full 80-bit secret key, thereby eliminating the brute-force search over some secret key bits that occurred in previous cube attacks.
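The core of such a quadraticity test is simple to state and implement (a generic sketch, not the authors' exact test): a Boolean function has algebraic degree at most 2 iff all of its third-order derivatives vanish, which is checked probabilistically by XOR-summing the function over random 3-dimensional affine cubes.

    import itertools, random
    random.seed(12345)

    def third_derivative_zero(f, n, trials=100):
        # f has degree <= 2 iff the XOR of f over every affine cube
        # x + span{a, b, c} is 0; random trials catch degree-3 terms.
        for _ in range(trials):
            x, a, b, c = (random.getrandbits(n) for _ in range(4))
            acc = 0
            for s in itertools.product([0, 1], repeat=3):
                point = x
                for bit, d in zip(s, (a, b, c)):
                    if bit:
                        point ^= d
                acc ^= f(point)
            if acc:
                return False
        return True

    quadratic = lambda x: ((x & 1) & (x >> 1)) ^ (x >> 2) & 1   # x0*x1 + x2
    cubic     = lambda x: (x & 1) & ((x >> 1) & 1) & ((x >> 2) & 1)  # x0*x1*x2
    assert third_derivative_zero(quadratic, 3)
    assert not third_derivative_zero(cubic, 3)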
Construction of Highly Nonlinear Resilient Boolean Functions Satisfying Strict Avalanche Criterion
A method is proposed to construct resilient Boolean functions on $n$ variables ($n$ even) satisfying the strict avalanche criterion (SAC) with nonlinearity $>2^{n-1}-2^{n/2}$. A large class of cryptographic Boolean functions not known earlier is obtained.
L1 - An Intermediate Language for Mixed-Protocol Secure Computation
Secure Computation (SC) enables secure distributed computation of arbitrary functions of private inputs. It has many useful applications, e.g. benchmarking or auctions. Several general protocols for SC have been proposed, and they have recently been implemented in a number of compilers and frameworks. These compilers or frameworks implement one general SC protocol and then require the programmer to implement the function he wants the protocol to compute. Performance remains a challenge for this approach, and it was realized early on that special protocols for important problems can deliver superior performance.
In this paper we propose a new intermediate language (L1) for optimizing SC compilers which enables the efficient implementation of special protocols, potentially mixing several general SC protocols. We show by three case studies -- one for computation of the median, one for weighted average, one for division -- that special protocols and mixed-protocol implementations in our language L1 can lead to superior performance. Moreover, we show that only a combined view of algorithm \emph{and} cryptographic protocol can discover the SCs with the best run-time performance.
Discrete Logarithms, Diffie-Hellman, and Reductions
We consider the One-Prime-Not-p and All-Primes-But-p variants of the Discrete Logarithm (DL) problem in a group of prime order p. We give reductions to the Diffie-Hellman (DH) problem that do not depend on any unproved conjectures about smooth or prime numbers in short intervals. We show that the One-Prime-Not-p-DL problem reduces to DH in time roughly L_p(1/2); the All-Primes-But-p-DL problem reduces to DH in time roughly L_p(2/5); and the All-Primes-But-p-DL problem reduces to the DH plus Integer Factorization problems in polynomial time. We also prove that under the Riemann Hypothesis, with e*log p queries to a yes-or-no oracle one can reduce DL to DH in time roughly L_p(1/2); and under a conjecture about smooth numbers, with e*log p queries to a yes-or-no oracle one can reduce DL to DH in polynomial time.
Efficient Hashing using the AES Instruction Set
In this work, we provide a software benchmark for a large range of 256-bit blockcipher-based hash functions. We instantiate the underlying blockcipher with AES, which allows us to exploit the recent AES instruction set (AES-NI). Since AES itself only outputs 128 bits, we consider double-block-length constructions, as well as (single-block-length) constructions based on RIJNDAEL-256. Although we primarily target architectures supporting AES-NI, our framework has much broader applications by estimating the performance of these hash functions on any (micro-)architecture given AES-benchmark results. As far as we are aware, this is the first comprehensive performance comparison of multi-block-length hash functions in software.
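As one concrete example of the double-block-length constructions in scope, here is Hirose's scheme as commonly stated (our recollection of a standard construction, not code from the paper), instantiated with AES-256 via pycryptodome:

    # Hirose's double-block-length compression function: AES-256 encrypts an
    # n = 128-bit block under a 2n-bit key, producing a 2n-bit chaining value.
    from Crypto.Cipher import AES   # pip install pycryptodome

    C = b'\x01' + b'\x00' * 15      # a nonzero constant required by the scheme

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def hirose(g, h, m):
        # One compression step on 16-byte chaining halves g, h and a
        # 16-byte message block m; key = h || m (32 bytes, i.e. AES-256).
        E = AES.new(h + m, AES.MODE_ECB).encrypt
        return xor(E(g), g), xor(E(xor(g, C)), xor(g, C))

    g, h = b'\x00' * 16, b'\xff' * 16
    g, h = hirose(g, h, b'message block 16')

Each compression step costs two AES-256 calls under the same key, which is precisely the kind of operation count that AES-NI benchmark results translate into cycle estimates.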
A Discrete Logarithm Attack on Elliptic Curves
We give an improved index calculus attack for a large class of elliptic curves. Our algorithm works by efficiently transferring the group structure of an elliptic curve to a weaker group. The running time of our attack poses a significant and realistic threat to the security of the elliptic curves in this class. As a consequence of our construction, we will also derive entirely new point counting algorithms. These algorithms set new run-time complexity records. We discuss implementations of these algorithms and give examples.
Cryptanalysis of PRESENT-like ciphers with secret S-boxes
At Eurocrypt 2001, Biryukov and Shamir investigated the security of AES-like ciphers where the substitutions and affine transformations are all key-dependent and successfully cryptanalysed two and a half rounds. This paper considers PRESENT-like ciphers in a similar manner.
We focus on the settings where the S-boxes are key dependent, and repeated for every round. We break one particular variant which was proposed in 2009 with practical complexity in a chosen plaintext/chosen ciphertext scenario. Extrapolating these results suggests that up to 28 rounds of such ciphers can be broken. Furthermore, we outline how our attack strategy can be applied to an extreme case where the S-boxes are chosen uniformly at random for each round and where the bit permutation is secret as well.
On permutation polynomials EA-equivalent to the inverse function over $GF(2^n)$
It is proved that there does not exist a linearized polynomial $L(x)\in\mathbb{F}_{2^n}[x]$ such that $x^{-1}+L(x)$ is a permutation on $\mathbb{F}_{2^n}$ when $n\geq5$, which was proposed as a conjecture in \cite{li}. As a consequence, a permutation is EA-equivalent to the inverse function over $\mathbb{F}_{2^n}$ if and only if it is affine equivalent to it when $n\geq 5$.
Cryptanalysis of splay tree based encryption
We present a chosen-plaintext attack on KIST, a recently proposed encryption scheme based on splay trees. Our attack recovers a 128-bit key with approximately 2^28 bit operations and fewer than 2^19 chosen-plaintext queries.
Single Core Implementation of Blue Midnight Wish Hash Function on VIRTEX 5 Platform
This paper presents the design and analysis of an area-efficient implementation of the SHA-3 candidate Blue Midnight Wish hash function with digest sizes of 256 and 512 bits on an FPGA platform. The core functionality with finalization (without the padding stage) of BMW on a Xilinx Virtex-5 FPGA requires 51 slices for BMW-256 and 105 slices for BMW-512. Both BMW versions require two blocks of memory: one memory block to store the intermediate values and hash constants, and the other to store the instruction controls. The proposed implementation achieves a throughput of 68.71 Mbps for BMW-256 and 112.18 Mbps for BMW-512.
Breaking Grain-128 with Dynamic Cube Attacks
We present a new variant of cube attacks called a \emph{dynamic cube attack}. Whereas standard cube attacks \cite{4} find the key by solving a system of linear equations in the key bits, the new attack recovers the secret key by exploiting distinguishers obtained from cube testers. Dynamic cube attacks can create lower-degree representations of the given cipher, which makes it possible to attack schemes that resist all previously known attacks. In this paper we concentrate on the well-known stream cipher Grain-128 \cite{6}, on which the best known key recovery attack \cite{15} can recover only $2$ key bits when the number of initialization rounds is decreased from $256$ to $213$. Our first attack runs in practical time complexity and recovers the full 128-bit key when the number of initialization rounds in Grain-128 is reduced to $207$. Our second attack breaks a Grain-128 variant with $250$ initialization rounds and is faster than exhaustive search by a factor of about $2^{28}$. Finally, we present an attack on the full version of Grain-128 which can recover the full key, but only when it belongs to a large subset comprising a fraction $2^{-10}$ of the possible keys. This attack is faster than exhaustive search over the $2^{118}$ possible keys by a factor of about $2^{15}$. All of our key recovery attacks are the best known so far, and their correctness was experimentally verified rather than extrapolated from smaller variants of the cipher. This is the first time that a cube attack has been shown to be effective against the full version of a well-known cipher which resisted all previous attacks.
Practical Near-Collisions and Collisions on Round-Reduced ECHO-256 Compression Function
In this paper, we present new results on the second-round SHA-3 candidate ECHO. We describe a method to construct a collision in the compression function of ECHO-256 reduced to four rounds in 2^52 operations on AES columns, without significant memory requirements. Our attack uses the most recent analyses of ECHO, in particular the SuperSBox and SuperMixColumns layers, to efficiently use the available degrees of freedom. We also show why some of these results are flawed and propose a solution to fix them. Our work improves the time and memory complexity of previously known techniques by using the available degrees of freedom more precisely. Finally, we validate our work with an implementation leading to near-collisions in 2^36 operations.
Efficient Two-Move Blind Signatures in the Common Reference String Model
Blind signatures provide a mechanism for achieving privacy and anonymity whereby a user gets the signer to sign a message of his choice without the signer learning the content of the message, nor linking message/signature request pairs when he sees the final signature. In this paper, we construct a blind signature that requires minimal interaction (two moves) between the user and the signer, and which results in a signature that is valid with respect to a standard (i.e. non-blind) signature scheme. The signature request protocol is akin to the classic blind-unblind methodology used for RSA blind signatures in the random oracle model, whilst the output signature is a standard Camenisch-Lysyanskaya signature in bilinear groups. The scheme is secure in the common reference string model, assuming a discrete-logarithm-related assumption in bilinear groups, namely a new variant of the LRSW assumption. We provide evidence for the hardness of our new variant of LRSW by showing it is intractable in the generic group model.
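For readers unfamiliar with the classic blind/unblind flow mentioned above, here it is for plain RSA in toy form (textbook scheme, illustrative parameters, no hashing or padding):

    import math, random

    p, q = 999983, 1000003            # toy primes; real moduli are ~2048 bits
    n, e = p * q, 65537
    d = pow(e, -1, (p - 1) * (q - 1))

    m = 424242                        # message representative in Z_n
    while True:                       # blinding factor must be invertible mod n
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            break

    blinded = m * pow(r, e, n) % n    # user -> signer
    s_blind = pow(blinded, d, n)      # signer signs without seeing m
    s = s_blind * pow(r, -1, n) % n   # user unblinds
    assert pow(s, e, n) == m          # s is an ordinary RSA signature on m

The unblinding works because $(m r^e)^d \equiv m^d r \pmod{n}$, so multiplying by $r^{-1}$ leaves exactly the standard signature $m^d$, mirroring the two-move structure the paper achieves for Camenisch-Lysyanskaya signatures.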
On Dillon's Class H of Bent Functions, Niho Bent Functions and o-Polynomials
One of the classes of bent Boolean functions introduced by John Dillon in his thesis is family $H$. While this class corresponds to a nice original construction of bent functions in bivariate form, Dillon could exhibit in it only functions which already belonged to the well-known Maiorana-McFarland class. We first notice that $H$ can be extended to a slightly larger class that we denote by $\mathcal{H}$. We observe that the bent functions constructed via Niho power functions, of which four examples are known, due to Dobbertin et al. and to Leander-Kholosha, are the univariate form of the functions of class $\mathcal{H}$. Their restrictions to the vector spaces $u\mathbb{F}_{2^{n/2}}$, $u\in\mathbb{F}_{2^n}^*$, are linear. We also characterize the bent functions whose restrictions to the $u\mathbb{F}_{2^{n/2}}$'s are affine. We answer the open question raised by Dobbertin et al. in JCT A 2006 on whether the duals of the Niho bent functions introduced in that paper are Niho bent as well, by explicitly calculating the dual of one of these functions. We observe that this Niho function also belongs to the Maiorana-McFarland class, which brings us back to the problem of knowing whether $H$ (or $\mathcal{H}$) is a subclass of the completed Maiorana-McFarland class. We then show that the condition for a function in bivariate form to belong to class $\mathcal{H}$ is equivalent to the fact that a polynomial directly related to its definition is an o-polynomial, and we deduce eight new cases of bent functions in $\mathcal{H}$ which are potentially new bent functions and most probably not affine equivalent to Maiorana-McFarland functions.
Blockcipher-based Double-length Hash Functions for Pseudorandom Oracles
The notion of a PRO (pseudorandom oracle) is an important security notion for hash functions, because a PRO hash function inherits all properties of a random oracle up to the PRO bound (e.g., security against generic attacks, collision resistance, preimage resistance and so on). In this paper, we propose a new blockcipher-based double-length hash function that is a PRO. Our hash function uses a single block cipher, which encrypts an $n$-bit string using a $2n$-bit key, and maps an input of arbitrary length to a $2n$-bit output. Since many block ciphers support a $2n$-bit key (e.g. AES supports a $256$-bit key), the assumption of a block cipher with a $2n$-bit key is acceptable. We prove that our hash function is a PRO up to $\order(2^n)$ query complexity as long as the block cipher is an ideal cipher. To our knowledge, this is the first double-length hash function based on a single (practical-size) block cipher with birthday-type PRO security.
Self-Protecting Electronic Medical Records Using Attribute-Based Encryption
We provide a design and implementation of self-protecting electronic medical records (EMRs) using attribute-based encryption. Our system allows healthcare organizations to export EMRs to storage locations outside of their trust boundary, including mobile devices, Regional Health Information Organizations (RHIOs), and cloud systems such as Google Health. In contrast to some previous approaches to this problem, our solution is designed to maintain EMR availability even when providers are offline, i.e., where network connectivity is not available (for example, during a natural disaster). To balance the needs of emergency care and patient privacy, our system is designed to provide for fine-grained encryption and is able to protect individual items within an EMR, where each encrypted item may have its own access control policy. To validate our architecture, we implemented a prototype system using a new dual-policy attribute-based encryption library that we developed. Our implementation, which includes an iPhone app for storing and managing EMRs offline, allows for flexible and automatic policy generation. An evaluation of our design shows that our ABE library performs well, has acceptable storage requirements, and is practical and usable on modern smartphones.
Cryptographic Randomness Testing of Block Ciphers and Hash Functions
One of the most basic properties expected from block ciphers and hash functions is passing statistical randomness tests, as they are expected to behave like random mappings. Previously, testing of AES candidate block ciphers was done by concatenating the outputs of the algorithms obtained from various input types. In this work, a more convenient method, namely cryptographic randomness testing, is introduced. A package of statistical tests is designed based on certain cryptographic properties of block ciphers and hash functions to evaluate their randomness. The package is applied to the AES finalists, and produces more precise results than those obtained in similar applications.
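As a minimal illustration of the kind of statistical test such a package builds on, here is the generic monobit frequency test in Python (the paper's tests are tailored to cryptographic properties of ciphers and hash functions; this sketch only conveys the general mechanism):

    import math

    def monobit_pvalue(bits):
        # p-value for the hypothesis that 'bits' are fair coin flips
        # (the NIST SP 800-22 frequency test in its simplest form).
        n = len(bits)
        s = abs(2 * sum(bits) - n)            # distance from perfect balance
        return math.erfc(s / math.sqrt(2 * n))

    stream = [0, 1] * 500                     # perfectly balanced toy input
    assert monobit_pvalue(stream) > 0.01      # passes; biased data would not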
Fully Secure Functional Encryption with General Relations from the Decisional Linear Assumption
This paper presents a fully secure functional encryption scheme for a wide class of relations, which are specified by non-monotone access structures combined with inner-product relations. The security is proven under a standard assumption, the decisional linear (DLIN) assumption, in the standard model. The proposed functional encryption scheme covers, as special cases, (1) key-policy and ciphertext-policy attribute-based encryption with non-monotone access structures, and (2) (hierarchical) predicate encryption with inner-product relations and functional encryption with non-zero inner-product relations.
How to Leak on Key Updates
In the continual memory leakage model, security against attackers who can repeatedly obtain leakage is achieved by periodically updating the secret key. This is an appealing model which captures a wide class of side-channel attacks, but all previous constructions in this model provide only a very minimal amount of leakage tolerance \emph{during secret key updates}. Since key updates may happen frequently, improving security guarantees against attackers who obtain leakage during these updates is an important problem. In this work, we present the first cryptographic primitives which are secure against a super-logarithmic amount of leakage during secret key updates. We present signature and public key encryption schemes in the standard model which can tolerate a constant fraction of the secret key to be leaked between updates as well as \emph{a constant fraction of the secret key and update randomness} to be leaked during updates. Our signature scheme also allows us to leak a constant fraction of the entire secret state during signing. Before this work, it was unknown how to tolerate super-logarithmic leakage during updates even in the random oracle model. We rely on subgroup decision assumptions in composite order bilinear groups.