All papers in 2013 (Page 6 of 881 results)
Last updated: 2013-08-17
Breaking the Even-Mansour Hash Function: Collision and Preimage Attacks on JH and Grøstl
The Even-Mansour structure and the chopMD mode are two widely-used strategies in hash function design. They are adopted by many hash functions, including two SHA-3 finalists, the JH hash function and the Grøstl hash function. The Even-Mansour structure combined with the chopMD mode is supposed to enhance the security of hash functions against collision and preimage attacks, but our results show that this goal cannot be achieved with an unbalanced compression function. In this paper, we show generic attacks on Even-Mansour hash functions, including both collision and preimage attacks, which expose structural flaws of this design. All these attacks can be applied to specific hash functions based on the Even-Mansour structure. We achieve the first collision and (2nd-)preimage attacks on full JH and Grøstl, respectively. For the JH hash function, we achieve collision and (2nd-)preimage attacks on the full JH compression function with a time gain of $2^{10.22}$. After a simple modification of the padding rules, we obtain full-round collision and (2nd-)preimage attacks on the modified JH hash function with a time gain of $2^{10.22}$. For the Grøstl hash function, we obtain both collision and (2nd-)preimage attacks on the full Grøstl hash function with a limited time gain of $2^{0.58}$.
Comments on Three Multi-Server Authentication Protocols
Recently, Tsai et al., Liao et al., and Li et al. each proposed a multi-server authentication protocol. They claimed their protocols are secure and can withstand various attacks. However, we found security loopholes in each of their schemes: both Tsai et al.'s and Liao et al.'s schemes suffer from a server spoofing attack by an insider server, Li et al.'s scheme suffers from the lost smart card password-guessing attack, and Liao et al.'s scheme is additionally vulnerable to an off-line password-guessing attack. In this paper, we first review each scheme and then show the attacks on it. Then, based on Li et al.'s scheme, we propose a novel one and examine its security with respect to several security features. Our analysis shows that our protocol outperforms Li et al.'s scheme with respect to the lost smart card password-guessing attack.
Keywords: multi-server, password authentication protocol
Delegatable Pseudorandom Functions and Applications
We put forth the problem of delegating the evaluation of a
pseudorandom function (PRF) to an untrusted proxy. A {\em delegatable PRF}, or DPRF for short, is a new primitive that enables a proxy to evaluate a PRF on a strict subset of its domain using a trapdoor derived from the DPRF secret-key. PRF delegation is
\emph{policy-based}: the trapdoor is constructed with respect to a
certain policy that determines the subset of input values which the
proxy is allowed to compute. Interesting DPRFs should achieve
\emph{low-bandwidth delegation}: Enabling the proxy to compute the PRF values that conform to the policy should be more efficient than simply providing the proxy with the sequence of all such values precomputed.
The main challenge in constructing DPRFs is in maintaining the
pseudorandomness of unknown values in the face of an attacker that
adaptively controls proxy servers. A DPRF may be optionally equipped
with an additional property we call \emph{policy privacy}, where any
two delegation predicates remain indistinguishable in the view of a
DPRF-querying proxy: Achieving this raises new design challenges as
policy privacy and efficiency are seemingly conflicting goals.
For the important class of policies described as (1-dimensional)
\emph{ranges}, we devise two DPRF constructions and rigorously prove
their security. Built upon the well-known tree-based GGM PRF
family~\cite{GGM86}, our constructions are generic and feature only
logarithmic delegation size in the number of values conforming to the
policy predicate. At only a constant-factor efficiency reduction, we
show that our second construction is also policy private. As we
finally describe, their new security and efficiency properties render
our delegated PRF schemes particularly useful in numerous security
applications, including RFID, symmetric searchable encryption, and
broadcast encryption.
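To make the delegation idea concrete, the following Python sketch shows a GGM-style tree PRF in which a range policy is delegated by handing out the seeds of the maximal subtrees covering the range, so the trapdoor contains only logarithmically many seeds. It uses SHA-256 as an illustrative stand-in for the PRG and conveys only the general approach; the paper's actual constructions (and the policy-privacy machinery) are more involved.

    import hashlib

    def prg(seed: bytes):
        # length-doubling PRG built from SHA-256 (illustrative stand-in)
        return (hashlib.sha256(seed + b"L").digest(),
                hashlib.sha256(seed + b"R").digest())

    def ggm_eval(key: bytes, x: int, n: int) -> bytes:
        # standard GGM PRF: walk the binary tree along the n bits of x, MSB first
        node = key
        for i in reversed(range(n)):
            node = prg(node)[(x >> i) & 1]
        return node

    def delegate_range(key: bytes, lo: int, hi: int, n: int):
        # trapdoor = seeds of the maximal subtrees covering [lo, hi]
        trapdoor = []
        def walk(node, prefix, depth):
            first = prefix << (n - depth)            # smallest leaf below this node
            last = first + (1 << (n - depth)) - 1    # largest leaf below this node
            if last < lo or first > hi:              # disjoint from the policy range
                return
            if lo <= first and last <= hi:           # fully inside: release the seed
                trapdoor.append((prefix, depth, node))
                return
            left, right = prg(node)
            walk(left, prefix << 1, depth + 1)
            walk(right, (prefix << 1) | 1, depth + 1)
        walk(key, 0, 0)
        return trapdoor

    def proxy_eval(trapdoor, x: int, n: int) -> bytes:
        # the proxy finishes the GGM walk from the seed of the covering subtree
        for prefix, depth, node in trapdoor:
            if (x >> (n - depth)) == prefix:
                for i in reversed(range(n - depth)):
                    node = prg(node)[(x >> i) & 1]
                return node
        raise PermissionError("x is outside the delegated range")

    key, n = b"\x00" * 32, 16
    trapdoor = delegate_range(key, 1000, 5000, n)
    assert proxy_eval(trapdoor, 2024, n) == ggm_eval(key, 2024, n)

Every released seed sits entirely inside the delegated range, so the proxy learns nothing about PRF values outside it.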
A note on quantum related-key attacks
In a basic related-key attack against a block cipher, the adversary has access to encryptions under keys that differ from the target key by bit-flips. In this short note we show that such attacks are quite powerful for a quantum adversary: if (i) the secret key is uniquely determined by a small number of plaintext-ciphertext pairs, (ii) the block cipher can be evaluated efficiently, and (iii) a superposition of related keys can be queried, then the key can be extracted efficiently.
An Algebraic Framework for Diffie-Hellman Assumptions
We put forward a new algebraic framework to generalize and
analyze Diffie-Hellman-like decisional assumptions, which allows
us to argue about security and applications by considering only algebraic properties.
Our $D_{\ell,k}$-MDDH assumption states that it is hard to decide whether
a vector in $G^\ell$ is linearly dependent on the columns of some matrix in $G^{\ell\times k}$ sampled according to distribution $D_{\ell,k}$.
It covers known assumptions such as DDH, $2$-Lin (the linear assumption), and $k$-Lin (the $k$-linear assumption).
Using our algebraic viewpoint, we can relate the generic hardness of our assumptions in $m$-linear groups to the irreducibility of certain polynomials which describe the output of $D_{\ell,k}$.
We use the hardness results to find new distributions for which the $D_{\ell,k}$-MDDH assumption holds generically in $m$-linear groups.
In particular, our new assumptions $2$-SCasc and $2$-ILin are generically hard in bilinear groups and, compared to $2$-Lin, have a shorter description size, which is a relevant parameter for efficiency in many applications.
These results support using our new assumptions as natural replacements for the $2$-Lin assumption, which has already been used in a large number of applications.
To illustrate the conceptual advantages of our algebraic framework, we construct several fundamental primitives based on any MDDH assumption. In particular, we can give many instantiations of a primitive in a compact way, including public-key encryption, hash-proof systems, pseudo-random functions, and Groth-Sahai NIZK and NIWI proofs.
As an independent contribution we give more efficient NIZK and NIWI proofs for membership in a subgroup of $G^\ell$, for validity of ciphertexts, and for equality of plaintexts. The results imply very significant efficiency improvements for a large number of schemes, most notably Naor-Yung-type constructions.
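As a worked illustration of the framework (using the matrix distributions as we read them from the abstract's examples, so treat the exact shapes as our interpretation), the $D_{\ell,k}$-MDDH problem for $\ell = k+1$ is to distinguish $([\mathbf{A}], [\mathbf{A}\vec{w}])$ from $([\mathbf{A}], [\vec{u}])$ for uniform $\vec{w} \in \mathbb{Z}_q^{k}$ and $\vec{u} \in \mathbb{Z}_q^{k+1}$; for $k = 2$, the $2$-Lin and $2$-SCasc assumptions correspond to sampling

$$ \mathcal{L}_2:\ \mathbf{A} = \begin{pmatrix} a_1 & 0 \\ 0 & a_2 \\ 1 & 1 \end{pmatrix}, \qquad \mathcal{SC}_2:\ \mathbf{A} = \begin{pmatrix} a & 0 \\ 1 & a \\ 0 & 1 \end{pmatrix}, $$

so $2$-SCasc is described by a single random group element $a$ where $2$-Lin needs two, illustrating the shorter description size mentioned above.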
An Accurate Probabilistic Reliability Model for Silicon PUFs
The power of an accurate model for describing a physical process or designing a physical system is beyond doubt. The currently used reliability model for physically unclonable functions (PUFs) assumes an equally likely error for every evaluation of every PUF response bit. This limits an accurate description since experiments show that certain responses are more error-prone than others, but this fixed error rate model only captures average case behavior. We introduce a new PUF reliability model taking this observed heterogeneous nature of PUF cells into account. An extensive experimental validation demonstrates that the new predicted distributions describe the empirically observed data statistics almost perfectly, even considering sensitivity to operational temperature. This allows studying PUF reliability behavior in full detail, including average and worst case probabilities, and is an invaluable tool for designing more efficient and better adapted PUFs and PUF-based systems.
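The gap between the fixed-error-rate model and a heterogeneous one is easy to see in simulation. The Python sketch below draws per-cell error probabilities from a Beta distribution purely as an illustrative stand-in (the paper derives and experimentally validates its own distribution); both models have the same average error rate, but only the heterogeneous one produces the error-prone cells observed in practice.

    import numpy as np

    rng = np.random.default_rng(1)
    n_cells, n_evals = 10000, 100

    # fixed-error-rate model: every evaluation of every cell errs with prob. 5%
    fixed = rng.random((n_cells, n_evals)) < 0.05

    # heterogeneous model: each cell has its own error probability
    # (Beta(0.2, 3.8) also has mean 5%, but a heavy spread across cells)
    p_cell = rng.beta(0.2, 3.8, size=n_cells)
    hetero = rng.random((n_cells, n_evals)) < p_cell[:, None]

    print(fixed.mean(), hetero.mean())            # average case: both about 0.05
    print((fixed.mean(axis=1) > 0.25).mean(),     # worst case: essentially no bad cell
          (hetero.mean(axis=1) > 0.25).mean())    # vs. a noticeable fraction of cells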
NaCl on 8-Bit AVR Microcontrollers
This paper presents the first results of the Networking and Cryptography library (NaCl) on the 8-bit AVR family of microcontrollers. We show that NaCl, which has so far been optimized mainly for different desktop and server platforms, is feasible on resource-constrained devices while being very fast and memory-efficient. Our implementation shows that encryption using Salsa20 requires 268 cycles/byte, authentication using Poly1305 needs 195 cycles/byte, a Curve25519 scalar multiplication needs 22,791,579 cycles, signing of data using Ed25519 needs 23,216,241 cycles, and verification can be done within 32,634,713 cycles. All implemented primitives provide at least 128-bit security, run in constant time, do not use secret-data-dependent branch conditions, and are released into the public domain (no usage restrictions).
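For readers unfamiliar with NaCl, the primitives benchmarked above are exposed through a deliberately small API. A quick desktop illustration using the PyNaCl bindings (our choice for the sketch; the paper's implementation is C for AVR):

    import nacl.secret
    import nacl.signing
    import nacl.utils

    # Salsa20 encryption + Poly1305 authentication (crypto_secretbox)
    key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)
    box = nacl.secret.SecretBox(key)
    ct = box.encrypt(b"attack at dawn")   # a random nonce is generated and prepended
    assert box.decrypt(ct) == b"attack at dawn"

    # Ed25519 signing and verification (crypto_sign)
    sk = nacl.signing.SigningKey.generate()
    signed = sk.sign(b"attack at dawn")
    sk.verify_key.verify(signed)          # raises BadSignatureError on forgery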
A Secure and Efficient Elliptic Curve Based Authentication and Key Agreement Protocol Suitable for WSN
Authentication and key agreement protocols play an important role in wireless sensor communication networks. Recently, Xue et al. suggested a key agreement protocol for WSNs; in this paper, we show that their protocol has several security flaws. We also introduce an enhanced authentication and key agreement protocol for WSNs that satisfies all the security requirements.
Injective Encoding to Elliptic Curves
For a number of elliptic curve-based cryptographic protocols, it is useful and sometimes necessary to be able to encode a message (a bit string) as a point on an elliptic curve in such a way that the message can be efficiently and uniquely recovered from the point. This is for example the case if one wants to instantiate CPA-secure ElGamal encryption directly in the group of points of an elliptic curve. More practically relevant settings include Lindell's UC commitment scheme (EUROCRYPT 2011) or structure-preserving primitives.
It turns out that constructing such an encoding function is not easy in general, especially if one wishes to encode points whose length is large relative to the size of the curve. There is a probabilistic, ``folklore'' method for doing so, but it only provably works for messages of length less than half the size of the curve.
In this paper, we investigate several approaches to injective encoding to elliptic curves, and in particular, we propose a new, essentially optimal geometric construction for a large class of curves, including Edwards curves; the resulting algorithm is also quite efficient, requiring only one exponentiation in the base field and simple arithmetic operations (however, the curves for which the map can be constructed have a point of order two, which may be a limiting factor for possible applications). The new approach is based on the existence of a covering curve of genus 2 for which a bijective encoding is known.
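For contrast with the new construction, the probabilistic ``folklore'' method mentioned above is easy to sketch in Python: pad the message with random bits until the padded value is a valid x-coordinate of the curve $y^2 = x^3 + ax + b$ over $\mathbb{F}_p$. The square-root step below assumes $p \equiv 3 \pmod 4$ for simplicity.

    import secrets

    def encode(m: int, a: int, b: int, p: int, pad_bits: int):
        # m must be short enough that m * 2^pad_bits < p; the method is only
        # provably injective/efficient for messages well below half the size of p
        for _ in range(1000):                   # each try succeeds w.p. about 1/2
            r = secrets.randbelow(1 << pad_bits)
            x = (m << pad_bits) | r
            rhs = (x * x * x + a * x + b) % p
            y = pow(rhs, (p + 1) // 4, p)       # square root, valid for p = 3 mod 4
            if y * y % p == rhs:                # x is a valid x-coordinate: done
                return (x, y)
        raise RuntimeError("no valid padding found")

    def decode(point, pad_bits: int) -> int:
        return point[0] >> pad_bits             # recovery is unique and efficient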
Practical Bootstrapping in Quasilinear Time
Gentry's ``bootstrapping'' technique (STOC 2009) constructs a fully
homomorphic encryption (FHE) scheme from a ``somewhat homomorphic''
one that is powerful enough to evaluate its own decryption function.
To date, it remains the only known way of obtaining unbounded FHE.
Unfortunately, bootstrapping is computationally very expensive,
despite the great deal of effort that has been spent on improving its
efficiency. The current state of the art, due to Gentry, Halevi, and
Smart (PKC 2012), is able to bootstrap ``packed'' ciphertexts (which
encrypt up to a linear number of bits) in time only \emph{quasilinear}
$\tilde{O}(\lambda) = \lambda \cdot \log^{O(1)} \lambda$ in the security
parameter. While this performance is \emph{asymptotically} optimal up
to logarithmic factors, the practical import is less clear: the
procedure composes multiple layers of expensive and complex
operations, to the point where it appears very difficult to implement,
and its concrete runtime appears worse than those of prior methods
(all of which have quadratic or larger asymptotic runtimes).
In this work we give \emph{simple}, \emph{practical}, and entirely
\emph{algebraic} algorithms for bootstrapping in quasilinear time, for
both ``packed'' and ``non-packed'' ciphertexts. Our methods are easy
to implement (especially in the non-packed case), and we believe that
they will be substantially more efficient in practice than all prior
realizations of bootstrapping. One of our main techniques is a
substantial enhancement of the ``ring-switching'' procedure of Gentry
et al.~(SCN 2012), which we extend to support switching between two
rings where neither is a subring of the other. Using this procedure,
we give a natural method for homomorphically evaluating a broad class
of structured linear transformations, including one that lets us
evaluate the decryption function efficiently.
Domain-Polymorphic Programming of Privacy-Preserving Applications
Secure Multiparty Computation (SMC) is seen as one of the main enablers for secure outsourcing of computation. Currently, there are many different SMC techniques (garbled circuits, secret sharing, homomorphic encryption, etc.) and none of them is clearly superior to others in terms of efficiency, security guarantees, ease of implementation, etc. For maximum efficiency, and for obeying the trust policies, a privacy-preserving application may wish to use several different SMC techniques for different operations it performs. A straightforward implementation of this application may result in a program that
(i) contains a lot of duplicated code, differing only in the used SMC technique;
(ii) is difficult to maintain, if policies or SMC implementations change; and
(iii) is difficult to reuse in similar applications using different SMC techniques.
In this paper, we propose a programming language with associated compilation techniques for simple orchestration of multiple SMC techniques and multiple protection domains. It is a simple imperative language with function calls where the types of data items are annotated with protection domains and where the function declarations may be domain-polymorphic. This allows most of the program code working with private data to be written in a SMC-technique-agnostic manner. It also allows rapid deployment of new SMC techniques and implementations in existing applications. We have implemented the compiler for the language, integrated it with an existing SMC framework, and are currently using it for new privacy-preserving applications.
Leakage-Resilient Symmetric Cryptography Under Empirically Verifiable Assumptions
Leakage-resilient cryptography aims at formally proving the security of cryptographic implementations against large classes of side-channel adversaries. One important challenge for such an approach to be relevant is to adequately connect the formal models used in the proofs with the practice of side-channel attacks. It raises the fundamental problem of finding reasonable restrictions of the leakage functions that can be empirically verified by evaluation laboratories. In this paper, we first argue that the previous ``bounded leakage'' requirements used in leakage-resilient cryptography are hard to fulfill by hardware engineers. We then introduce a new, more realistic and empirically verifiable assumption of simulatable leakage, under which security proofs in the standard model can be obtained. We finally illustrate our claims by analyzing the physical security of an efficient pseudorandom generator (for which security could only be proven under a random oracle based assumption so far). These positive results come at the cost of (algorithm-level) specialization, as our new assumption is specifically defined for block ciphers. Nevertheless, since block ciphers are the main building block of many leakage-resilient
cryptographic primitives, our results also open the way towards more realistic constructions and proofs for other pseudorandom objects.
Block Ciphers that are Easier to Mask: How Far Can we Go?
The design and analysis of lightweight block ciphers has been a very active research area over the last couple of years, with many innovative proposals trying to optimize different performance figures. However, since these block ciphers are dedicated to low-cost embedded devices, their implementation is also a typical target for side-channel adversaries. As preventing such attacks with countermeasures usually implies significant performance overheads, a natural open problem is to propose new algorithms for which physical security is considered as an optimization criterion, hence allowing better performance again. We tackle this problem by studying how much we can tweak standard block ciphers such as the AES Rijndael in order to allow efficient masking (which is one of the most frequently considered solutions to improve security against side-channel attacks). For this purpose, we first investigate alternative S-boxes and round structures. We show that both approaches can be used separately in order to limit the total number of non-linear operations in the block cipher, hence allowing more efficient masking. We then combine these ideas into a concrete instance of a block cipher called Zorro. We further provide a detailed security analysis of this new cipher taking its design specificities into account, leading us to exploit innovative techniques borrowed from hash function cryptanalysis (that are sometimes of independent interest). Eventually, we conclude the paper by evaluating the efficiency of masked Zorro implementations on an 8-bit microcontroller, and exhibit their interesting performance figures.
Security in $O(2^n)$ for the Xor of Two Random Permutations -- Proof with the Standard $H$ Technique --
Xoring two permutations is a very simple way to construct pseudorandom functions from pseudorandom permutations. In~\cite{P08a}, it is proved that we have security against CPA-2 attacks when $m \ll O(2^n)$, where $m$ is the number of queries and $n$ is the number of bits of the inputs
and outputs of the bijections. In this paper, we will obtain similar (but slightly different) results by using the
``standard H technique'' instead of the ``$H_{\sigma}$ technique''. It will be interesting to
compare the two techniques, their similarities and the differences between the proofs and the
results.
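Concretely, the construction under study turns two independent random permutations $P_1, P_2$ of $\{0,1\}^n$ into the function

$$ F(x) = P_1(x) \oplus P_2(x), $$

and the result cited above says that no CPA-2 adversary can distinguish $F$ from a uniformly random function unless its number of queries $m$ approaches $O(2^n)$.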
On the Security of TLS-DH and TLS-RSA in the Standard Model
TLS is the most important cryptographic protocol on the Internet. At CRYPTO 2012, Jager et al. presented the first proof of the unmodified TLS protocol with ephemeral Diffie-Hellman key exchange (TLS-DHE) for mutual authentication. Since TLS cannot be proven secure under the classical definition of authenticated key exchange (AKE), they introduced a new security model called authenticated and confidential channel establishment (ACCE) that captures the security properties expected from TLS in practice. We extend this result in two ways. First, we show that the cryptographic cores of the remaining ciphersuites, RSA encrypted key transport (TLS-RSA) and static Diffie-Hellman (TLS-DH), can be proven secure for mutual authentication in an extended ACCE model that also allows the adversary to register new public keys. In our security analysis we show that if TLS-RSA is instantiated with a CCA-secure public key cryptosystem, and TLS-DH is used in scenarios where a) the knowledge of secret key assumption holds or b) the adversary may not register new public keys at all, both ciphersuites can be proven secure in the standard model under standard security assumptions. Next, we present new and strong definitions of ACCE (and AKE) for server-only authentication which fit well into the general framework of Bellare-Rogaway-style models. We show that all three ciphersuite families remain secure in this server-only setting. Our work identifies which primitives need to be exchanged in the TLS handshake to obtain strong security results under standard security assumptions (in the standard model), and may thus help guide future revisions of the TLS standard and make improvements to TLS's extensibility pay off.
Structural Evaluation of AES and Chosen-Key Distinguisher of 9-round AES-128
While the symmetric-key cryptography community now has good
experience in how to build a secure and efficient fixed permutation,
it remains an open problem how to design a key-schedule for block
ciphers, as shown by the numerous candidates broken in the related-key
model or in a hash function setting. Provable security against
differential and linear cryptanalysis in the related-key scenario is
an important step towards a better understanding of its construction.
Using a structural analysis, we show that the full AES-128 cannot be
proven secure unless the exact coefficients of the MDS matrix and the
S-Box differential properties are taken into account since its
structure is vulnerable to a related-key differential attack. We then
exhibit a chosen-key distinguisher for AES-128 reduced to 9 rounds,
which solves an open problem of the symmetric community. We obtain
these results by revisiting algorithmic theory and graph-based ideas
to compute all the best differential characteristics in SPN ciphers,
with a special focus on AES-like ciphers subject to related-keys. We
use a variant of Dijkstra's algorithm to efficiently find the most
efficient related-key attacks on SPN ciphers with an algorithm linear
in the number of rounds.
Efficient eCK-secure Authenticated Key Exchange Protocols in the Standard Model
The extended Canetti–Krawczyk (eCK) security models are widely used to provide security arguments for authenticated key exchange protocols that capture leakage of various kinds of secret information, like the long-term private key and session-specific secret state. In this paper, we study the open problem of constructing eCK-secure AKE protocols without random oracles and without the NAXOS-like trick. We first give a generic construction, GC-KKN, satisfying those requirements, which relies on standard cryptographic primitives and follows the guideline of efficiency. Second, we propose a concrete protocol, which is the first eCK-secure protocol in the standard model under both standard assumptions and the post-specified peer setting. Both proposed schemes can be implemented more efficiently with a secure device than previous eCK-secure protocols in the standard model, where the secure device would normally be used to store the long-term private key and to implement those algorithms of the protocol that must be resilient to state leakage.
On the Achievability of Simulation-Based Security for Functional Encryption
Recently, there has been rapid progress in the area of functional encryption (FE), in which a receiver with secret-key Sk_y can compute from an encryption of x the value F(x,y) for some functionality F. Two central open questions that remain are: (1) Can we construct FE secure under an indistinguishability-based (IND) security notion for general circuits? (2) To what extent can we achieve a simulation-based (SIM) security notion for FE? Indeed, it was previously shown that IND-security for FE is too weak for some functionalities [Boneh et al. -- TCC'11, O'Neill -- ePrint '10], but that there exist strong impossibility results for SIM-security [Boneh et al. -- TCC'11, Agrawal et al. -- ePrint 2012].
Our work establishes a connection between these questions by giving a compiler that transforms any IND-secure FE scheme for general circuits into one that is SIM-secure for general circuits.
1) In the random oracle model, our resulting scheme is SIM-secure for an unbounded number of ciphertexts and key-derivation queries. We achieve this result by starting from an IND-secure FE scheme for general circuits with random oracle gates.
2) In the standard model, our resulting scheme is secure for a bounded number of ciphertexts and non-adaptive key-derivation queries (i.e., those made before seeing the challenge ciphertexts), but an unbounded number of adaptive key-derivation queries. These parameters match the known impossibility results for SIM-secure FE and improve upon the parameters achieved by Gorbunov et al. [CRYPTO'12].
The techniques for our compiler are inspired by constructions of non-committing encryption [Nielsen -- CRYPTO '02] and the celebrated Feige-Lapidot-Shamir paradigm [FOCS'90] for obtaining zero-knowledge proof systems from witness-indistinguishable proof systems.
Our compiler in the standard model requires an IND-secure FE scheme for general circuits, it leaves open the question of whether we can obtain SIM-secure FE for special cases of interest under weaker assumptions. To this end, we next show that our approach leads to a direct construction of SIM-secure hidden vector encryption (an important special case of FE that generalizes anonymous identity-based encryption). The scheme, which is set in composite order bilinear groups under subgroup decision assumptions, achieves security for a bounded number of ciphertexts but unbounded number of both non-adaptive and adaptive key-derivation queries, again matching the known impossibility results. In particular, to our knowledge this is the first construction of SIM-secure FE (for any non-trivial functionality) in the standard model handling an unbounded number of adaptive key-derivation queries.
Finally, we revisit the negative results for SIM-secure FE. We observe that the known results leave open the possibility of achieving SIM-security for various natural formulations of security (such as non-black-box simulation for non-adaptive adversaries). We settle these questions in the negative, thus providing essentially a full picture of the (un)achievability of SIM-security.
A New Class of Public Key Cryptosystems Constructed Based on Reed-Solomon Codes, K(XII)SE(1)PKC -- Along with a presentation of K(XII)SE(1)PKC over the extension field extensively used for various present-day storage and transmission systems --
In this paper, we present a new class of public key cryptosystems based on Reed-Solomon codes, a member of the code-based PKCs (CBPKC), referred to as K(XII)SE(1)PKC. We show that K(XII)SE(1)PKC can be made secure against various attacks. In particular, we present a member of K(XII)SE(1)PKC constructed based on the Reed-Solomon code over the extension field, which is extensively used in present-day storage systems and various digital transmission systems. In sharp contrast with conventional CBPKCs that use Goppa codes, in K(XII)SE(1)PKC we need not care about the security of the primitive polynomial that generates the Reed-Solomon code. The probabilistic scheme presented in this paper would yield a brand-new technique in the field of CBPKC.
A Fast Implementation of the Optimal Ate Pairing over BN curve on Intel Haswell Processor
We present an efficient implementation of the optimal ate pairing on a Barreto-Naehrig curve over a 254-bit prime field on the Intel Haswell processor. Our library is able to compute the optimal ate pairing over a 254-bit prime field in just 1.17 million clock cycles on a single core of an Intel Core i7-4700MQ (2.4GHz) processor with TurboBoost technology disabled.
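For intuition, at the stated clock frequency this cycle count corresponds to well under a millisecond per pairing:

$$ \frac{1.17 \times 10^{6}\ \text{cycles}}{2.4 \times 10^{9}\ \text{cycles/s}} \approx 0.49\ \text{ms}. $$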
Linearly Homomorphic Structure-Preserving Signatures and Their Applications
Structure-preserving signatures (SPS) are signature schemes where messages, signatures and public keys all consist of elements of a group over which a bilinear map is efficiently computable. This property makes them useful in cryptographic protocols as they nicely compose with other algebraic tools (like the celebrated Groth-Sahai proof systems). In this paper, we consider SPS systems with homomorphic properties and suggest applications that have not been provided before (in particular, not by employing ordinary SPS). We build linearly homomorphic structure-preserving signatures under simple assumptions and show that the primitive makes it possible to verify the calculations performed by a server on outsourced encrypted data (i.e., combining secure computation and authenticated computation to allow reliable and secure cloud storage and computation, while freeing the client from retaining cleartext storage). Then, we give a generic construction of non-malleable (and actually simulation-sound) commitment from any linearly homomorphic SPS. This notably provides the first constant-size non-malleable commitment to group elements.
Achieving the limits of the noisy-storage model using entanglement sampling
A natural measure for the amount of quantum information that a physical system $E$ holds about another system $A = A_1,...,A_n$ is given by the min-entropy $H_{\min}(A|E)$. Specifically, the min-entropy measures the amount of entanglement between $E$ and $A$, and is the relevant measure when analyzing a wide variety of problems ranging from randomness extraction in quantum cryptography, decoupling used in channel coding, to physical processes such as thermalization or the thermodynamic work cost (or gain) of erasing a quantum system. As such, it is a central question to determine the behaviour of the min-entropy after some process M is applied to the system $A$. Here we introduce a new generic tool relating the resulting min-entropy to the original one, and apply it to several settings of interest, including sampling of subsystems and measuring in a randomly chosen basis.
The results on random measurements yield new high-order entropic uncertainty relations with which we prove the optimality of cryptographic schemes in the bounded quantum storage model. This is an abridged version of the paper; the full version containing all proofs and further applications can be found in \cite{DFW13}.
A heuristic for finding compatible differential paths with application to HAS-160
The question of compatibility of differential paths plays a central role in second order collision attacks on hash functions. In this context, attacks typically proceed by starting from the middle and constructing the middle-steps quartet in which the two paths are enforced on the respective faces of the quartet structure. Finding paths that can fit in such a quartet structure has been a major challenge and the currently known compatible paths extend over a suboptimal number of steps for hash functions such as SHA-2 and HAS-160. In this paper, we investigate a heuristic that searches for compatible differential paths. The application of the heuristic in case of HAS-160 yields a practical second order collision over all of the function steps, which is the first practical result that covers all of the HAS-160 steps. An example of a colliding quartet is provided.
Counter-cryptanalysis
We introduce \emph{counter-cryptanalysis} as a new paradigm for strengthening weak cryptographic primitives against cryptanalytic attacks.
Redesigning a weak primitive to more strongly resist cryptanalytic techniques will unavoidably break backwards compatibility.
Instead, counter-cryptanalysis exploits unavoidable anomalies introduced by cryptanalytic attacks to detect and block
cryptanalytic attacks while maintaining full backwards compatibility.
Counter-cryptanalysis in principle enables the continued secure use of weak cryptographic primitives.
Furthermore, we present the first example of counter-cryptanalysis, namely the efficient detection whether any given single message has been constructed -- together with an \emph{unknown} sibling message -- using a cryptanalytic collision attack on MD5 or SHA-1.
An immediate application is in digital signature verification software to ensure that an (older) MD5 or SHA-1 based digital signature is not a forgery using a collision attack.
This would certainly be desirable for two reasons.
Firstly, it might still be possible to generate malicious forgeries using collision attacks as too many parties still sign using MD5 (or SHA-1) based signature schemes.
Secondly, any such forgeries are currently accepted nearly everywhere due to the ubiquitous support of MD5 and SHA-1 based signature schemes.
Despite the academic push to use more secure hash functions over the last decade, these two real-world arguments (arguably) will remain valid for many more years.
Only due to counter-cryptanalysis were we able to discover that Flame,
a highly advanced malware for cyberwarfare uncovered in May 2012,
employed an as-yet-unknown variant of our chosen-prefix collision attack on MD5 \cite{DBLP:conf/eurocrypt/StevensLW07,DBLP:conf/crypto/StevensSALMOW09}.
In this paper we dissect the revealed cryptanalytic details and work towards the reconstruction of the algorithms underlying Flame's new variant attack.
Finally, we make a preliminary comparison between Flame's attack and our chosen-prefix collision attack.
The LOCAL attack: Cryptanalysis of the authenticated encryption scheme ALE
We show how to produce a forged (ciphertext, tag) pair for the scheme ALE with data and time complexity of $2^{102}$ ALE encryptions of short messages and the same number of authentication attempts.
We use a differential attack based on a local collision, which exploits the availability of extracted state bytes to the adversary. Our approach allows for a time-data complexity tradeoff, with an extreme case of a forgery produced after $2^{119}$ attempts and based on a single authenticated message. Our attack is further turned into a state recovery and a universal forgery attack with a time complexity of $2^{120}$ verification attempts using only a single authenticated 48-byte message.
Verifying Computations with State (Extended Version)
When a client outsources a job to a third party (e.g., the cloud), how
can the client check the result, without reexecuting the computation?
Recent work in _proof-based verifiable computation_ has made
significant progress on this problem by incorporating deep results from
complexity theory and cryptography into built systems. However, these
systems work within a stateless model: they exclude computations that
interact with RAM or a disk, or for which the client does not have the
full input.
This paper describes Pantry, a built system that overcomes these
limitations. Pantry composes proof-based verifiable computation with
untrusted storage: the client expresses its computation in terms of
digests that attest to state, and verifiably outsources _that_
computation. Using Pantry, we extend verifiability to MapReduce jobs,
simple database queries, and interactions with private state. Thus,
Pantry takes another step toward practical proof-based verifiable
computation for realistic applications.
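The core idea of expressing computation over digests can be conveyed with a toy content-addressed store in Python. This is a deliberately minimal sketch of the invariant (a block's name is its hash, so the client can check anything the storage returns); Pantry's actual contribution is compiling such checks into the constraints of a proof system.

    import hashlib

    def digest(block: bytes) -> str:
        return hashlib.sha256(block).hexdigest()

    class UntrustedStore:
        # storage the client does not trust: blocks are named by their hash
        def __init__(self):
            self._blocks = {}

        def put(self, block: bytes) -> str:
            d = digest(block)
            self._blocks[d] = block
            return d                      # the client retains only this digest

        def get(self, d: str) -> bytes:
            block = self._blocks[d]
            if digest(block) != d:        # any tampering is caught here
                raise ValueError("storage returned a corrupted block")
            return block

    store = UntrustedStore()
    d = store.put(b"client state")
    assert store.get(d) == b"client state"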
New Attacks against Transformation-Based Privacy-Preserving Linear Programming
In this paper we demonstrate a number of attacks against proposed protocols for privacy-preserving linear programming, based on publishing and solving a transformed version of the problem instance. Our attacks exploit the geometric structure of the problem, which has
mostly been overlooked in previous analyses and is largely preserved by the proposed transformations. The attacks are efficient in practice and cast serious doubt on the viability of transformation-based approaches in general.
Programmable Hash Functions in the Multilinear Setting
We adapt the concept of a programmable hash function (PHF, Crypto 2008) to a setting in which a multilinear map is available. This enables new PHFs with previously unachieved parameters.
To demonstrate their usefulness, we show how our (standard-model) PHFs can replace random oracles in several well-known cryptographic constructions. Namely, we obtain standard-model versions of the Boneh-Franklin identity-based encryption scheme, the Boneh-Lynn-Shacham signature scheme, and the Sakai-Ohgishi-Kasahara identity-based non-interactive key exchange (ID-NIKE) scheme. The ID-NIKE scheme is the first scheme of its kind in the standard model.
Our abstraction also allows us to derive hierarchical versions of the above schemes in settings with multilinear maps. This in particular yields simple and efficient hierarchical generalizations of the BF, BLS, and SOK schemes. In the case of hierarchical ID-NIKE, ours is the first such scheme with full security, in either the random oracle model or the standard model.
While our constructions are formulated with respect to a generic multilinear map, we also outline the necessary adaptations required for the recent ``noisy'' multilinear map candidate due to Garg, Gentry, and Halevi.
Profiling DPA: Efficacy and efficiency trade-offs
Linear regression-based methods have been proposed as efficient means of characterising device leakage in the training phases of profiled side-channel attacks. Empirical comparisons between these and the `classical' approach to template building have confirmed the reduction in profiling complexity to achieve the same attack-phase success, but have focused on a narrow range of leakage scenarios which are especially favourable to simple (i.e.\ efficiently estimated) model specifications. In this contribution we evaluate---from a theoretic perspective as much as possible---the performance of linear regression-based templating in a variety of realistic leakage scenarios as the complexity of the model specification varies. We are particularly interested in complexity trade-offs between the number of training samples needed for profiling and the number of attack samples needed for successful DPA: over-simplified models will be cheaper to estimate but DPA using such a degraded model will require more data to recover the key. However, they can still offer substantial improvements over non-profiling strategies relying on the Hamming weight power model, and so represent a meaningful middle-ground between `no' prior information and `full' prior information.
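As a rough illustration of the profiling step (a simplified sketch; real attacks fit measured traces, and the paper's subject is precisely how the choice of model specification below trades profiling cost against attack cost), linear regression-based templating estimates per-bit leakage coefficients by ordinary least squares:

    import numpy as np

    def design_matrix(z, nbits=8):
        # simple model specification: a constant column plus one column per bit
        X = np.empty((len(z), nbits + 1))
        X[:, 0] = 1.0
        for b in range(nbits):
            X[:, b + 1] = (z >> b) & 1
        return X

    rng = np.random.default_rng(0)
    z = rng.integers(0, 256, 5000)           # known intermediate values (training)
    beta = rng.normal(1.0, 0.3, 9)           # the device's true, unknown weights
    traces = design_matrix(z) @ beta + rng.normal(0.0, 0.5, 5000)  # simulated leakage

    # profiling phase: fit the leakage model to the training set
    beta_hat, *_ = np.linalg.lstsq(design_matrix(z), traces, rcond=None)
    print(np.round(beta - beta_hat, 2))      # per-coefficient estimation error

In the attack phase, the fitted model predicts the leakage of key-dependent intermediates and those predictions are matched against new traces; a coarser design matrix is cheaper to fit but degrades the predictions.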
Constrained Pseudorandom Functions and Their Applications
We put forward a new notion of pseudorandom functions (PRFs) we call
constrained PRFs. In a standard PRF there is a master key $k$ that enables one to evaluate the function at all points in the domain of the
function. In a constrained PRF it is possible to derive constrained keys $k_S$ from the master key $k$. A constrained key $k_S$ enables the
evaluation of the PRF at a certain subset $S$ of the domain and
nowhere else. We present a formal framework for this concept and show
that constrained PRFs can be used to construct powerful primitives such as identity-based key exchange and an optimal private broadcast
encryption system. We then construct constrained PRFs for several natural set systems needed for these applications. We conclude with several open problems relating to this new concept.
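In symbols (our paraphrase of the notion as described above), a constrained PRF provides, besides the master key $k$, a constraining algorithm that outputs keys $k_S$ for sets $S$ from the supported set system, such that

$$ F(k_S, x) = F(k, x) \ \ \text{for all } x \in S, \qquad F(k, x) \ \text{pseudorandom for } x \notin \textstyle\bigcup_i S_i, $$

where pseudorandomness must hold even against an adversary that adaptively obtains constrained keys $k_{S_1}, k_{S_2}, \ldots$.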
Time-Optimal Interactive Proofs for Circuit Evaluation
Several research teams have recently been working toward the development of practical general-purpose protocols for verifiable computation. These protocols enable a computationally weak verifier to offload computations to a powerful but untrusted prover while providing the verifier with a guarantee that the prover performed the requested computations correctly. Despite substantial progress, existing implementations require further improvements before they become practical for most settings. The main bottleneck is typically the extra effort required by the prover to return an answer with a guarantee of correctness, compared to returning an answer with no guarantee.
We describe a refinement of a powerful interactive proof protocol due to Goldwasser, Kalai, and Rothblum. Cormode, Mitzenmacher, and Thaler show how to implement the prover in this protocol in time $O(S \log S)$, where $S$ is the size of an arithmetic circuit computing the function of interest. Our refinements apply to circuits with sufficiently ``regular'' wiring patterns; for these circuits, we bring the runtime of the prover down to $O(S)$. That is, our prover can evaluate the circuit with a guarantee of correctness, with only a constant-factor blowup in work compared to evaluating the circuit with no guarantee.
We argue that our refinements capture a large class of circuits, and we complement our theoretical results with experiments on problems such as matrix multiplication and determining the number of distinct elements in a data stream. Experimentally, our refinements yield a 200x speedup for the prover over the implementation of Cormode et al., and our prover is less than 10x slower than a C++ program that simply evaluates the circuit. Along the way, we describe a special-purpose protocol for matrix multiplication that is of interest in its own right.
Our final contribution is the design of an interactive proof protocol targeted at general data parallel computation. Compared to prior work, this protocol can more efficiently verify complicated computations as long as that computation is applied independently to many different pieces of data.
Ideal-Cipher (Ir)reducibility for Blockcipher-Based Hash Functions
Preneel et al.~(Crypto 1993) assessed 64 possible ways to construct a compression function out of a blockcipher. They conjectured that 12 out of these 64 so-called PGV constructions achieve optimal security bounds for collision resistance and preimage resistance. This was proven by Black et al.~(Journal of Cryptology, 2010), if one assumes that the blockcipher is ideal. This result, however, does not apply to ``non-ideal'' blockciphers such as AES. To alleviate this problem, we revisit the PGV constructions in light of the recently proposed idea of random-oracle reducibility (Baecher and Fischlin, Crypto 2011). We say that the blockcipher in one of the 12 secure PGV constructions reduces to the one in another construction, if \emph{any} secure instantiation of the cipher, ideal or not, for one construction also makes the other secure. This notion allows us to relate the underlying assumptions on blockciphers in different constructions, and show that the requirements on the blockcipher for one case are not more demanding than those for the other. It turns out that this approach divides the 12 secure constructions into two groups of equal size, where within each group a blockcipher making one construction secure also makes all others secure. Across the groups this is provably not the case, showing that the sets of ``good'' blockciphers for each group are qualitatively distinct. We also relate the ideal ciphers in the PGV constructions with those in double-block-length hash functions such as Tandem-DM, Abreast-DM, and Hirose-DM. Here, our results show that, besides achieving better bounds, the double-block-length hash functions rely on weaker assumptions on the blockciphers to achieve collision and everywhere preimage resistance.
A Dynamic Tradeoff Between Active and Passive Corruptions in Secure Multi-Party Computation
At STOC '87, Goldreich et al.~presented two protocols for secure multi-party computation (MPC) among $n$ parties: The first protocol provides \emph{passive} security against $t<n$ corrupted parties. The second protocol provides even \emph{active} security, but only against $t<n/2$ corrupted parties. Although these protocols provide security against the provably highest possible number of corruptions, each of them has its limitation: The first protocol is rendered completely insecure in presence of a single active corruption, and the second protocol is rendered completely insecure in presence of $\lceil n/2 \rceil$ passive corruptions.
At Crypto 2006, Ishai et al.~combined these two protocols into a single protocol which provides passive security against $t<n$ corruptions and active security against $t<n/2$ corruptions. This protocol unifies the security guarantees of the passive world and the active world (``best of both worlds''). However, the corruption threshold $t<n$ can be tolerated only when \emph{all} corruptions are passive. With a single active corruption, the threshold is reduced to $t<n/2$.
As our main result, we introduce a \emph{dynamic tradeoff} between active and passive corruptions: We present a protocol which provides security against $t<n$ passive corruptions, against $t<n/2$ active corruptions, \emph{and everything in between}. In particular, our protocol provides full security against $k$ active corruptions, as long as less than $n-k$ parties are corrupted in total, for any unknown $k$.
The main technical contribution is a new secret sharing scheme that, in the reconstruction phase, releases secrecy \emph{gradually}. This allows us to construct non-robust MPC protocols which, in case of an abort, still provide some level of secrecy. Furthermore, using similar techniques, we also construct protocols for reactive MPC with hybrid security, i.e., different thresholds for secrecy, correctness, robustness, and fairness. Intuitively, the more corrupted parties, the less security is guaranteed.
Multi-file proofs of retrievability for cloud storage auditing
Cloud storage allows clients to store a large amount of data with the help of storage service providers (SSPs). Proof-of-retrievability (POR) protocols allow one server to prove to a verifier the availability of data stored by some client. Shacham et al. presented POR protocols based on homomorphic authenticators and proved the security of their schemes under a stronger security model, which requires the existence of an extractor to retrieve the original file by receiving the program of a successful prover. When using their POR protocol with public verifiability to verify the availability of multiple files separately, the number of pairing operations computed by a verifier is linear in the number of files. To reduce this heavy burden on the verifier, we introduce a notion called multi-proof-of-retrievability (MPOR), allowing one verifier to verify the availability of multiple files stored by a server in one pass. We also design an MPOR protocol with public verifiability by extending the work of Shacham et al. The advantage of our MPOR scheme is that the computational overhead of the verifier is constant, independent of the number of files. Nevertheless, the soundness of our MPOR protocol is proved under a relatively weak security notion. In particular, analysis of our MPOR protocol shows that each file can be extracted in expected polynomial time under certain restrictions on the size of processed files.
STES: A Stream Cipher Based Low Cost Scheme for Securing Stored Data
The problem of securing data present on USB memories and SD cards has not been adequately addressed in the cryptography literature. While the formal notion of a tweakable enciphering scheme (TES) is well accepted as the proper primitive for secure data storage, the real challenge is to design a low cost TES which can perform at the data rates of the targeted memory devices. In this work, we provide the first answer to this problem. Our solution, called STES, combines a stream cipher with an XOR universal hash function. The security
of STES is rigorously analyzed in the usual manner of provable security approach. By carefully defining appropriate variants of the multi-linear hash function and the pseudo-dot product based
hash function we obtain controllable trade-offs between area and throughput. We combine the hash function with the recent hardware oriented stream ciphers, namely Mickey, Grain and Trivium. Our implementations are targeted towards two low cost FPGAs -- Xilinx Spartan~3 and Lattice ICE40. Simulation results demonstrate
that the speed of encryption/decryption matches the data rates of different USB and SD memories. We believe that our work opens up the possibility of actually putting FPGAs within controllers of such memories to perform low-level in-place encryption.
Using Bleichenbacher's Solution to the Hidden Number Problem to Attack Nonce Leaks in 384-Bit ECDSA
In this paper we describe an attack against nonce leaks in 384-bit ECDSA using an FFT-based attack due to Bleichenbacher. The signatures were computed by a modern smart card. We extracted the low-order bits of each nonce using a template-based power analysis attack against the modular inversion of the nonce. We also developed a BKZ-based method for the range reduction phase of the attack, as it was impractical to collect enough signatures for the collision searches originally used by Bleichenbacher. We confirmed our attack by extracting the entire signing key using a 5-bit nonce leak from 4000 signatures.
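Recall why nonce leaks reduce to a hidden number problem: an ECDSA signature $(r, s)$ on message hash $h$, produced with nonce $k$ and signing key $d$ modulo the group order $q$, satisfies

$$ s = k^{-1}(h + r d) \bmod q, \qquad \text{hence} \qquad k = s^{-1} h + s^{-1} r d \bmod q. $$

When the low $\ell$ bits $b$ of $k$ leak, writing $k = 2^{\ell} k' + b$ makes each signature a linear relation in the secret $d$ with only the small unknown $k'$, which is the biased setting that Bleichenbacher's FFT-based technique (and the BKZ-based range reduction described above) exploits.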
Analysis and Improvement of the Generic Higher-Order Masking Scheme of FSE 2012
Masking is a well-known technique used to protect block cipher implementations from side-channel attacks. Higher-order side-channel attacks (e.g. the higher-order DPA attack) on widely used block ciphers like AES have motivated the design of efficient higher-order masking schemes. Indeed, it is known that as the masking order increases, the difficulty of a side-channel attack increases exponentially. However, the main problem in higher-order masking is to design an efficient and secure technique for S-box computations in block cipher implementations. At FSE 2012, Carlet et al. proposed a generic masking scheme that can be applied to any S-box at any order. This is the first generic scheme for efficient software implementations. Analysis of the running time, or \textit{masking complexity}, of this scheme is related to a variant of the well-known problem of efficient exponentiation (\textit{addition chain}), and evaluation of polynomials.
In this paper we investigate optimal methods for exponentiation
in $\mathbb{F}_{2^{n}}$ by studying a variant of addition chain,
which we call \textit{cyclotomic-class addition chain}, or \textit{CC-addition chain}. Among several interesting properties, we prove lower bounds on min-length CC-addition
chains. We define the notion of an $\mathbb{F}_{2^{n}}$-polynomial chain, and use it to count the number of \textit{non-linear} multiplications required while evaluating polynomials over $\mathbb{F}_{2^{n}}$. We also give a lower bound on the length of such a chain for any polynomial. As a consequence, we show that a lower bound for the masking complexity of the DES S-boxes is three, and that of the PRESENT S-box is two. We disprove a claim previously made by Carlet et al. regarding min-length CC-addition chains. Finally, we give a polynomial evaluation method, which results in an improved masking scheme (compared to the technique of Carlet et al.) for the DES S-boxes. As an illustration we apply this method to several other S-boxes and show significant improvement for them.
Limits of provable security for homomorphic encryption
We show that public-key bit encryption schemes which support weak (i.e., compact) homomorphic evaluation of any sufficiently "sensitive" collection of functions cannot be proved message indistinguishable beyond AM intersect coAM via general (adaptive) reductions, and beyond statistical zero-knowledge via reductions of constant query complexity. Examples of sensitive collections include parities, majorities, and the class consisting of all AND and OR functions.
Our techniques also give a method for converting a strong (i.e., distribution-preserving) homomorphic evaluator for essentially any boolean function (except the trivial ones, the NOT function, and the AND and OR functions) into a rerandomization algorithm: This is a procedure that converts a ciphertext into another ciphertext which is statistically close to being independent and identically distributed with the original one. Our transformation preserves negligible statistical error.
Quantum one-time programs
A one-time program is a hypothetical device by which a user may evaluate a circuit on exactly one input of his choice, before the device self-destructs. One-time programs cannot be achieved by software alone, as any software can be copied and re-run. However, it is known that every circuit can be compiled into a one-time program using a very basic hypothetical hardware device called a one-time memory. At first glance it may seem that quantum information, which cannot be copied, might also allow for one-time programs. But it is not hard to see that this intuition is false: one-time programs for classical or quantum circuits based solely on quantum information do not exist, even with computational assumptions.
This observation raises the question, "what assumptions are required to achieve one-time programs for quantum circuits?" Our main result is that any quantum circuit can be compiled into a one-time program assuming only the same basic one-time memory devices used for classical circuits. Moreover, these quantum one-time programs achieve statistical universal composability (UC-security) against any malicious user. Our construction employs methods for computation on authenticated quantum data, and we present a new quantum authentication scheme called the trap scheme for this purpose. As a corollary, we establish UC-security of a recent protocol for delegated quantum computation.
Attribute-Based Encryption for a Subclass of Circuits with Bounded Depth from Lattices
In this work, we present two Key-Policy Attribute-Based Encryption (ABE) schemes for some subclass of circuits based on the Learning with Errors (LWE) assumption. Our constructions are selectively secure in the standard model. More specifically, our first construction supports a subclass of circuits with polynomially bounded depth. We call this subclass the OR-restricted circuits, which means that for any input $x$, if $f(x)=0$ then for all the OR gates in $f$, at least one of its incoming wires will evaluate to $0$. The second one is a Key-Policy ABE scheme for shallow circuits whose depth is bounded by $O(\log\log\lambda)$, where $\lambda$ is the security parameter.
Trapdoor Smooth Projective Hash Functions
Katz and Vaikuntanathan recently improved smooth projective hash functions in order to build one-round password-authenticated key exchange protocols (PAKE). To achieve security in the UC framework they allowed the simulator to extract the hashing key, which required simulation-sound non-interactive zero-knowledge proofs that are unfortunately inefficient.
We improve the way the latter extractability is obtained by introducing the notion of trapdoor smooth projective hash functions (TSPHFs). A TSPHF is an SPHF with a trapdoor, which may not allow recovering the complete hashing key, but still allows computing the hash value, which is enough for an application to PAKE with UC-security against static corruptions. We additionally show that TSPHFs yield zero-knowledge proofs in two flows, with straight-line extractability.
Besides those quite interesting applications of TSPHF, we also show how to generically build them on languages of ciphertexts, using any ElGamal-like encryption. Our concrete instantiations lead to efficient one-round UC-secure PAKE, extractable zero-knowledge arguments, and verifiable encryption of Waters signatures. In the case of the PAKE, our construction is the most efficient one-round UC-secure PAKE to date.
Homomorphic Encryption from Learning with Errors: Conceptually-Simpler, Asymptotically-Faster, Attribute-Based
We describe a comparatively simple fully homomorphic encryption (FHE) scheme based on the learning with errors (LWE) problem. In previous LWE-based FHE schemes, multiplication is a complicated and expensive step involving "relinearization". In this work, we propose a new technique for building FHE schemes that we call the "approximate eigenvector" method. In our scheme, for the most part, homomorphic addition and multiplication are just matrix addition and multiplication. This makes our scheme both asymptotically faster and (we believe) easier to understand.
In previous schemes, the homomorphic evaluator needs to obtain the user's "evaluation key", which consists of a chain of encrypted secret keys. Our scheme has no evaluation key. The evaluator can do homomorphic operations without knowing the user's public key at all, except for some basic parameters. This fact helps us construct the first identity-based FHE scheme. Using similar techniques, we show how to compile a recent attribute-based encryption scheme for circuits by Gorbunov et al. into an attribute-based FHE scheme that permits data encrypted under the same index to be processed homomorphically.
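The ``approximate eigenvector'' method can be summarized in one line: a ciphertext is a matrix $C$ that has the secret key $\vec{s}$ as an approximate eigenvector with the message $m$ as eigenvalue, i.e. $C\vec{s} = m\vec{s} + \vec{e}$ for a small error $\vec{e}$. Homomorphic operations are then literally matrix operations:

$$ (C_1 + C_2)\vec{s} = (m_1 + m_2)\vec{s} + (\vec{e}_1 + \vec{e}_2), \qquad C_1 C_2 \vec{s} = m_1 m_2 \vec{s} + m_2 \vec{e}_1 + C_1 \vec{e}_2. $$

Keeping the cross term $C_1 \vec{e}_2$ small is what the scheme's bit-decomposition machinery is for; this sketch only conveys the eigenvector view.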
On the Security of the TLS Protocol: A Systematic Analysis
TLS is the most widely-used cryptographic protocol on the Internet. It comprises the TLS Handshake Protocol, responsible for authentication and key establishment, and the TLS Record Protocol, which takes care of subsequent use of those keys to protect bulk data. TLS has proved remarkably stubborn to analysis using the tools of modern cryptography. This is due in part to its complexity and its flexibility. In this paper, we present the most complete analysis to date of the TLS Handshake protocol and its application to data encryption (in the Record Protocol). We show how to extract a key-encapsulation mechanism (KEM) from the TLS Handshake Protocol, and how the security of the entire TLS protocol follows from security properties of this KEM when composed with a secure authenticated encryption scheme in the Record Protocol. The security notion we achieve is a variant of the ACCE notion recently introduced by Jager et al. (Crypto ’12). Our approach enables us to analyse multiple different key establishment methods in a modular fashion, including the first proof of the most common deployment mode that is based on RSA PKCS #1v1.5 encryption, as well as Diffie-Hellman modes. Our results can be applied to settings where mutual authentication is provided and to the more common situation where only server authentication is applied.
Security Analysis of Pseudo-Random Number Generators with Input: /dev/random is not Robust
A pseudo-random number generator (PRNG) is a deterministic algorithm that produces numbers whose distribution is indistinguishable from uniform. A formal security model for PRNGs with input was proposed in 2005 by Barak and Halevi (BH). This model involves an internal state that is refreshed with a (potentially biased) external random source, and a cryptographic function that outputs random numbers from the continually refreshed internal state. In this work we extend the BH model with a new security property capturing how the PRNG should accumulate the entropy of the input data into the internal state after a state compromise. This property states that a good PRNG should eventually be able to recover from compromise even if the entropy is injected into the system at a very slow pace, and it expresses the expected real-life behavior of existing PRNG designs.
Unfortunately, we show that neither the model nor the specific PRNG construction proposed by Barak and Halevi meets this new property, despite meeting a weaker robustness notion introduced by BH. On the practical side, we also give a precise assessment of the security of the two Linux PRNGs, /dev/random and /dev/urandom. In particular, we show several attacks proving that these PRNGs are not robust according to our definition and do not accumulate entropy properly. These attacks exploit vulnerabilities in the entropy estimator and the internal mixing function of the Linux PRNGs. They show that the Linux PRNG does not satisfy the "robustness" notion of security, although it remains unclear whether they lead to actual exploitable vulnerabilities in practice. Finally, we propose a simple and very efficient PRNG construction that is provably robust in our new and stronger adversarial model. We therefore recommend using this construction whenever a PRNG with input is used for cryptography.
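As a toy illustration of the recovery property (a generic hash-based accumulator, not the construction proposed in the paper): even if the attacker fully knows the current state, repeatedly folding in inputs that each carry as little as one unknown bit eventually makes the state unpredictable, because the attacker's uncertainty compounds across refreshes.

    import hashlib, secrets

    def refresh(state: bytes, inp: bytes) -> bytes:
        # Generic accumulation step: fold the input into the state with a hash.
        return hashlib.sha256(state + inp).digest()

    state = b"\x00" * 32                  # post-compromise: attacker knows the state
    for _ in range(80):                   # 80 refreshes, one fresh bit each
        state = refresh(state, bytes([secrets.randbits(1)]))
    # Predicting the state now requires guessing all 2^80 input sequences,
    # even though each individual refresh carried almost no entropy.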
Attribute-Based Encryption for Circuits
In an attribute-based encryption (ABE) scheme, a ciphertext is associated with an L-bit public index IND and a message m, and a secret key is associated with a Boolean predicate P. The secret key allows one to decrypt the ciphertext and learn m iff P(IND)=1. Moreover, the scheme should be secure against collusions of users: given secret keys for polynomially many predicates, an adversary learns nothing about the message if none of the secret keys can individually decrypt the ciphertext.
We present attribute-based encryption schemes for circuits of arbitrary polynomial size, where the public parameters and the ciphertext grow linearly with the depth of the circuit. Our construction is secure under the standard learning with errors (LWE) assumption. Previous constructions of attribute-based encryption were for Boolean formulas, captured by the complexity class NC1. In the course of our construction, we present a new framework for constructing ABE schemes. As a by-product of our framework, we obtain ABE schemes for polynomial-size branching programs, corresponding to the complexity class LOGSPACE, under quantitatively better assumptions.
Last updated: 2013-06-03
A Novel Technique in Linear Cryptanalysis
In this paper, we propose a novel technique, the cube-linear attack, obtained by combining the cube attack and linear cryptanalysis in order to handle probabilistic polynomials and extract additional secret information. Based on different ways of combining the two attacks, two cube-linear schemes are discussed. The cube-linear attack can thus be used as an unusual trick in linear cryptanalysis, one that has not been considered in previous linear cryptanalysis. As a new contribution to linear cryptanalysis, it allows a reduction in the amount of data required for a successful attack in specific circumstances. Applying our method to reduced-round Trivium, as an example, we obtain better linear cryptanalysis results. More importantly, we believe that the technique introduced in this paper can be extended to other ciphers.
Parallel and Dynamic Searchable Symmetric Encryption
Searchable symmetric encryption (SSE) enables a client to outsource a collection of encrypted documents in the cloud and retain the ability to perform keyword searches without revealing information about the contents of the documents and queries. Although efficient SSE constructions are known, previous solutions are highly sequential. This is mainly due to the fact that, currently, the only method for achieving sub-linear time search is the inverted index approach (Curtmola, Garay, Kamara and Ostrovsky, CCS ’06), which requires the search algorithm to access a sequence of memory locations, each of which is unpredictable and stored at the previous location in the sequence.
Motivated by advances in multi-core architectures, we present a new method for constructing sub-linear SSE schemes. Our approach is highly parallelizable and dynamic. With roughly a logarithmic number of cores in place, searches for a keyword w in our scheme execute in o(r) parallel time, where r is the number of documents containing keyword w (with more cores, this bound can go down to O(log n), i.e., independent of the result size r). Such time complexity outperforms the optimal \theta(r) sequential search time - a similar bound holds for the updates.
Our scheme also achieves the following important properties: (a) it enjoys a strong notion of security, namely security against adaptive chosen-keyword attacks; (b) compared to existing sub-linear dynamic SSE schemes (e.g., Kamara, Papamanthou, Roeder, CCS ’12), updates in our scheme do not leak any information, apart from information that can be inferred from previous search tokens; (c) it can be implemented efficiently in external memory (with logarithmic I/O overhead). Our technique is simple and uses a red-black tree data structure; its security is proven in the random oracle model.
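A plaintext sketch of the underlying tree idea (the actual scheme keeps encrypted per-keyword information at each node; this toy only illustrates why search parallelizes): each internal node of a binary tree over the documents records which keywords occur in its subtree, so a search only descends into subtrees containing the keyword, and independent subtrees can be searched on different cores.

    from concurrent.futures import ThreadPoolExecutor

    class Node:
        def __init__(self, doc=None, kws=None, left=None, right=None):
            self.doc, self.left, self.right = doc, left, right
            self.kw = set(kws) if kws is not None else left.kw | right.kw

    def build(docs):  # docs: list of (doc_id, keywords), length a power of two
        nodes = [Node(doc=i, kws=ws) for i, ws in docs]
        while len(nodes) > 1:
            nodes = [Node(left=a, right=b) for a, b in zip(nodes[::2], nodes[1::2])]
        return nodes[0]

    def search(node, w, pool):
        if w not in node.kw:
            return []
        if node.doc is not None:
            return [node.doc]
        left = pool.submit(search, node.left, w, pool)   # search subtrees in parallel
        return search(node.right, w, pool) + left.result()

    docs = [(i, {"alice"} if i % 2 else {"bob"}) for i in range(8)]
    with ThreadPoolExecutor() as pool:
        print(sorted(search(build(docs), "alice", pool)))   # [1, 3, 5, 7]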
Protecting PUF Error Correction by Codeword Masking
One of the main applications of Physical Unclonable Functions~(PUFs) is unique key generation. While the advantages of PUF-based key extraction and embedding have been shown in several papers, physical attacks on them have so far gained only little interest. In this work, we demonstrate the feasibility of a differential power analysis attack on the error correction module of a secure sketch. This attack can also be applied to code-offset fuzzy extractors because they build upon secure sketches. We propose a codeword masking scheme to protect key generation algorithms used for PUFs. Our proposed countermeasure enables masking of linear Error-Correcting Codes~(ECCs) without impact on their error correction capabilities while keeping the overhead low. This is achieved by random masking codewords, which can be efficiently generated by the ECC's encoding function. Furthermore, it allows one to consistently protect the PUF-based key generation process and to provide the masked key and its mask to a subsequent crypto module which implements masking as well. We demonstrate the practical protection of our codeword masking scheme by attacking a masked secure sketch implementation. We emphasize that, besides protecting code-offset algorithms, the proposed masking scheme can also be applied to index-based syndrome coding and other security-critical error correction modules.
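The key observation in toy form (a 3-repetition code here, whereas real PUF designs use stronger linear codes): since the code is linear, XORing a noisy codeword with a random masking codeword yields another noisy codeword, so errors can be corrected on the masked value, and the mask is removed only after decoding.

    import secrets

    # Toy linear ECC: 3x repetition on bytes (corrects 1 flipped bit per triple).
    def encode(data: bytes) -> bytes:
        return bytes(b for b in data for _ in range(3))

    def decode(cw: bytes) -> bytes:
        out = bytearray()
        for i in range(0, len(cw), 3):
            a, b, c = cw[i:i + 3]
            out.append((a & b) | (a & c) | (b & c))   # bitwise majority vote
        return bytes(out)

    def xor(x, y):
        return bytes(a ^ b for a, b in zip(x, y))

    key = b"\xA5\x5A"
    noisy = bytearray(encode(key)); noisy[0] ^= 0x01     # PUF-like bit error

    mask_cw = encode(secrets.token_bytes(2))   # random codeword: an encoded random mask
    masked = xor(noisy, mask_cw)               # still a noisy codeword, by linearity
    corrected = decode(masked)                 # error correction never sees the key
    assert xor(corrected, decode(mask_cw)) == key        # unmask after decoding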
Double-authentication-preventing signatures
Digital signatures are often used by trusted authorities to make unique bindings between a subject and a digital object; for example, certificate authorities certify that a public key belongs to a domain name, and time-stamping authorities certify that a certain piece of information existed at a certain time. Traditional digital signature schemes, however, impose no uniqueness conditions, so a trusted authority could make multiple certifications for the same subject but different objects, be it intentionally, by accident, or following a (legal or illegal) coercion. We propose the notion of a double-authentication-preventing signature, in which a value to be signed is split into two parts: a subject and a message. If a signer ever signs two different messages for the same subject, enough information is revealed to allow anyone to compute valid signatures on behalf of the signer. This double-signature forgeability property discourages signers from misbehaving---a form of self-enforcement---and would give binding authorities like CAs some cryptographic arguments to resist legal coercion. We give a generic construction using a new type of trapdoor function with extractability properties, which we show can be instantiated using the group of sign-agnostic quadratic residues modulo a Blum integer.
A method for obtaining lower bounds on the higher order nonlinearity of Boolean functions
Obtaining the exact value of, or a high lower bound on, the $r$-th order nonlinearity of a Boolean function is a very complicated problem (especially if $r > 1$). In a number of papers, lower bounds on the $r$-th order nonlinearity of a Boolean function via its algebraic immunity were obtained for different $r$. These bounds are rather high for functions with maximum or near-maximum possible algebraic immunity. In this paper we prove a theorem that yields a rather high lower bound on the $r$-th order nonlinearity for many functions with small algebraic immunity.
New Constructions and Applications of Trapdoor DDH Groups
Trapdoor Decisional Diffie-Hellman (TDDH) groups, introduced by Dent and Galbraith (ANTS 2006), are groups where the DDH problem is hard unless one is in possession of a secret trapdoor which enables solving it efficiently. Despite their intuitively appealing properties, they have so far found very few cryptographic applications. Moreover, of the two constructions of such groups proposed by Dent and Galbraith, only a single one, based on hidden pairings, remains unbroken.
In this paper, we extend the set of trapdoor DDH groups by giving a construction based on composite residuosity. We also introduce a more restrictive variant of these groups that we name \emph{static} trapdoor DDH groups, where the trapdoor only enables solving the DDH problem with respect to a fixed pair $(G,G^x)$ of group elements. We give two constructions for such groups whose security relies respectively on the RSA and the factoring assumptions. Then, we show that static trapdoor DDH groups yield elementary constructions of convertible undeniable signature schemes allowing delegatable verification. Using our constructions of static trapdoor DDH groups from the RSA or the factoring assumption, we obtain slightly simpler variants of the undeniable signature schemes of Gennaro, Rabin, and Krawczyk (J. Cryptology, 2000) and of Galbraith and Mao (CT-RSA 2003), respectively. These new schemes are conceptually more satisfying since they can strictly be viewed as instantiations, in an adequate group, of the original undeniable signature scheme of Chaum and van Antwerpen (CRYPTO~'89).
Trapdoor Privacy in Asymmetric Searchable Encryption Schemes
Asymmetric searchable encryption allows searches to be carried out over ciphertexts, through delegation, and by means of trapdoors issued by the owner of the data. Public Key Encryption with Keyword Search (PEKS) is a primitive with such functionality that provides delegation of exact-match searches. As it is important that ciphertexts preserve data privacy, it is also important that trapdoors do not expose the user's search criteria. The difficulty of formalizing a security model for trapdoor privacy lies in the verification functionality, which gives the adversary the power of verifying whether a trapdoor encodes a particular keyword. In this paper, we provide a broader view on what can be achieved regarding trapdoor privacy in asymmetric searchable encryption schemes, and bridge the gap between previous definitions, which give limited privacy guarantees in practice against search patterns. We propose the notion of Strong Search Pattern Privacy for PEKS and construct a scheme that achieves this security notion.
Protocol Variants and Electronic Identification
It is important to be able to evaluate information security systems involving humans. We propose an approach in which we consider the system as a cryptographic protocol, and users are modeled as ordinary players. To model the fact that users make mistakes that affect security, we introduce protocol variants that model mistakes or combinations of mistakes. By analysing the base protocol and its variants, and at the same time considering how likely each variant is, we get a reasonable estimate of the real security of the system.
Our work takes the form of a case study of four Norwegian federated identity systems, as well as two proposals for improved systems. The four systems span a good mix of various types of federated identity systems.
Towards Finding Optimal Differential Characteristics for ARX: Application to Salsa20
An increasing number of cryptographic primitives are built using the ARX operations: addition modulo $2^n$, bit rotation and XOR. Because of their very fast performance in software, ARX ciphers are becoming increasingly common. However, there is currently no rigorous understanding of the security of ARX ciphers against one of the most common attacks in symmetric-key cryptography: differential cryptanalysis. In this paper, we introduce a tool to search for optimal differential characteristics for ARX ciphers. Our technique is very easy to use, as it only involves writing out simple equations for every addition, rotation and XOR operation in the cipher, and applying an off-the-shelf SAT solver. As is commonly done for ARX ciphers, our analysis assumes that the probability of a characteristic can be computed by multiplying the probabilities of each operation, and that the probability of the best characteristic is a good estimate for the probability of the corresponding differential. Using extensive experiments for Salsa20, we find that these assumptions are not always valid. To overcome these issues, we propose a method to accurately estimate the probability of ARX differentials.
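As a small illustration of the per-operation probabilities that such analyses multiply together, the XOR-differential probability of addition modulo $2^n$ can be computed exactly by brute force for tiny word sizes (the paper's tool instead encodes the ARX operations as equations for a SAT solver, which scales to real ciphers):

    def xdp_add(alpha, beta, gamma, n):
        """Exact probability that input XOR differences (alpha, beta) propagate
        to output difference gamma through addition mod 2^n (brute force)."""
        mask, count = (1 << n) - 1, 0
        for x in range(1 << n):
            for y in range(1 << n):
                if (((x ^ alpha) + (y ^ beta)) ^ (x + y)) & mask == gamma:
                    count += 1
        return count / 4**n

    # A difference in the most significant bit always passes; lower-bit
    # differences propagate only probabilistically:
    print(xdp_add(0x8, 0x8, 0x0, 4))   # 1.0
    print(xdp_add(0x1, 0x0, 0x1, 4))   # 0.5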
A Lightweight Hash Function Resisting Birthday Attack and Meet-in-the-middle Attack
To examine the integrity and authenticity of an IP address efficiently and economically, this paper proposes a new non-Merkle-Damgård structured (non-MDS) hash function called JUNA that is based on a multivariate permutation problem and an anomalous subset product problem, for which no subexponential-time solutions have been found so far. JUNA includes an initialization algorithm and a compression algorithm, and converts a short message of n bits, regarded as a single block, into a digest of m bits, where 80 <= m <= 232 and 80 <= m <= n <= 4096. The analysis and proof show that the new hash is one-way, weakly collision-free, and strongly collision-free, and that its security against existing attacks such as the birthday attack and the meet-in-the-middle attack is O(2 ^ m). Moreover, a detailed proof that the new hash function is resistant to the birthday attack is given. Compared with the Chaum-Heijst-Pfitzmann hash based on a discrete logarithm problem, the new hash is lightweight, and thus it opens a door to the convenient use of lightweight digital signing schemes.
Key-Versatile Signatures and Applications: RKA, KDM and Joint Enc/Sig
This paper introduces key-versatile signatures. Key-versatile signatures allow us to sign with keys already in use for another purpose, without changing the keys and without impacting the security of the original purpose. This allows us to obtain advances across a collection of challenging domains including joint Enc/Sig, security against related-key attack (RKA) and security for key-dependent messages (KDM). Specifically we can: (1) add signing capability to existing encryption capability with zero overhead in the size of the public key; (2) obtain RKA-secure signatures from any RKA-secure one-way function, yielding new RKA-secure signature schemes; (3) add integrity to encryption while maintaining KDM-security.
Elligator: Elliptic-curve points indistinguishable from uniform random strings
Censorship-circumvention tools are in an arms race against censors. The censors study all traffic passing into and out of their controlled sphere, and try to disable censorship-circumvention tools without completely shutting down the Internet. Tools aim to shape their traffic patterns to match unblocked programs, so that simple traffic profiling cannot identify the tools within a reasonable number of traces; the censors respond by deploying firewalls with increasingly sophisticated deep-packet inspection.
Cryptography hides patterns in user data but does not evade censorship if the censor can recognize patterns in the cryptography itself. In particular, elliptic-curve cryptography often transmits points on known elliptic curves, and those points are easily distinguishable from uniform random strings of bits.
This paper introduces high-security high-speed elliptic-curve systems in which elliptic-curve points are encoded so as to be indistinguishable from uniform random strings. At a lower level, this paper introduces a new bijection between strings and about half of all curve points; this bijection is applicable to every odd-characteristic elliptic curve with a point of order 2, except for curves of j-invariant 1728. This paper also presents guidelines to construct, and two examples of, secure curves suitable for these encodings.
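As a sketch of that lower-level map, here is the forward direction of the construction instantiated for Curve25519 ($y^2 = x^3 + 486662 x^2 + x$ over $\F_p$ with $p = 2^{255}-19$ and non-square $u = 2$), sending a field element to the x-coordinate of a curve point; the full encoding also needs the inverse map and careful bit-level details, omitted here:

    p = 2**255 - 19
    A = 486662      # Curve25519: y^2 = x^3 + A*x^2 + x
    u = 2           # a fixed non-square in GF(p)

    def is_square(v):
        return v == 0 or pow(v, (p - 1) // 2, p) == 1

    def f(x):
        return (x*x*x + A*x*x + x) % p

    def elligator2(r):
        """Map a field element r to the x-coordinate of a point on the curve."""
        assert (1 + u*r*r) % p != 0
        v = (-A * pow(1 + u*r*r, p - 2, p)) % p    # v = -A / (1 + u*r^2)
        # Exactly one of v and -v-A gives a square f(.), i.e. lies on the curve.
        return v if is_square(f(v)) else (-v - A) % p

    for r in range(1, 100):
        assert is_square(f(elligator2(r)))   # the output always lands on the curve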
Sieve-in-the-Middle: Improved MITM Attacks (Full Version)
This paper presents a new generic technique, named sieve-in-the-middle, which improves meet-in-the-middle attacks in the sense that it provides an attack on a higher number of rounds. Instead of selecting the key candidates by searching for a collision in an intermediate state which can be computed forwards and backwards, we here look for the existence of valid transitions through some middle sbox. Combining this technique with short bicliques allows one to freely add one or two more rounds with the same time complexity. Moreover, when the key size of the cipher is larger than its block size, we show how to build the bicliques by an improved technique which does not require any additional data (contrary to previous biclique attacks). These techniques apply to PRESENT, DES, PRINCE and AES, improving the previously known results on these four ciphers. In particular, our attack on PRINCE applies to 8 rounds (out of 12), instead of 6 in the previous cryptanalyses. Some results are also given for theoretically estimating the sieving probability provided by some subsets of the input and output bits of a given sbox.
Encryption Schemes with Post-Challenge Auxiliary Inputs
In this paper, we tackle the open problem of proposing a leakage-resilience encryption model that can capture leakage from both the secret key owner and the encryptor, in the auxiliary input model. Existing models usually do not allow adversaries to query more leakage information after seeing the challenge ciphertext of the security games. On one hand, side-channel attacks on the random factor (selected by the encryptor) have already been shown to be feasible, so leakage from the encryptor should not be overlooked. On the other hand, the technical challenge in allowing queries from the adversary after it sees the ciphertext is to avoid a trivial attack on the system, since the adversary can then embed the decryption function as the leakage function (note that we consider the auxiliary input model, in which the leakage is modeled as computationally hard-to-invert functions). We solve this problem by defining the post-challenge auxiliary input model, in which the family of leakage functions must be defined before the adversary is given the public key. Thus the adversary cannot embed the decryption function as a leakage function after seeing the challenge ciphertext, while it is still allowed to make challenge-dependent queries. This model is able to capture a wider class of real-world side-channel attacks.
To realize our model, we propose a generic transformation from the auxiliary input model to our new post-challenge auxiliary input model for both public key encryption (PKE) and identity-based encryption (IBE). Furthermore, we extend Canetti et al.'s technique, that converts CPA-secure IBE to CCA-secure PKE, into the leakage-resilient setting. More precisely, we construct a CCA-secure PKE in the post-challenge auxiliary input model, by using strong one-time signatures and strong extractor with hard-to-invert auxiliary inputs, together with a CPA-secure IBE in the auxiliary input model. Moreover, we extend our results to signatures, to obtain fully leakage-resilient signatures with auxiliary inputs using standard signatures and strong extractor with hard-to-invert auxiliary inputs. It is more efficient than the existing fully leakage-resilient signature schemes.
BLAKE2: simpler, smaller, fast as MD5
We present the hash function BLAKE2, an improved version of the SHA-3 finalist BLAKE optimized for speed in software. Target applications include cloud storage, intrusion detection, or version control systems. BLAKE2 comes in two main flavors: BLAKE2b is optimized for 64-bit platforms, and BLAKE2s for smaller architectures. On 64-bit platforms, BLAKE2 is often faster than MD5, yet provides security similar to that of SHA-3: up to 256-bit collision resistance, immunity to length extension, indifferentiability from a random oracle, etc. We specify parallel versions BLAKE2bp and BLAKE2sp that are up to 4 and 8 times faster, by taking advantage of SIMD and/or multiple cores. BLAKE2 reduces the RAM requirements of BLAKE down to 168 bytes, making it smaller than any of the five SHA-3 finalists, and 32% smaller than BLAKE. Finally, BLAKE2 provides comprehensive support for tree-hashing as well as keyed hashing (be it in sequential or tree mode).
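As a usage illustration (relying on the BLAKE2 support that the hashlib module of recent Python versions provides, not on anything from the paper itself):

    import hashlib

    # Plain hashing with a truncated 32-byte digest.
    print(hashlib.blake2b(b"hello world", digest_size=32).hexdigest())

    # Keyed hashing: BLAKE2 acts as a MAC directly, no HMAC construction needed.
    print(hashlib.blake2b(b"message", key=b"secret key", digest_size=16).hexdigest())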
Generic Constructions of Secure-Channel Free Searchable Encryption with Adaptive Security
For searching keywords against encrypted data, the public key encryption scheme with keyword search (PEKS) and its extension, called secure-channel free PEKS (SCF-PEKS), have been proposed.
In SCF-PEKS, a receiver makes a trapdoor for a keyword, and uploads it on a server. A sender computes an encrypted keyword, and sends it to the server. The server executes the searching procedure (called the test algorithm, which takes as inputs an encrypted keyword, trapdoor, and secret key of the server).
In this paper, we extend the security of SCF-PEKS, calling it adaptive SCF-PEKS, wherein an adversary (modeled as a ``malicious-but-legitimate'' receiver) is allowed to issue test queries \emph{adaptively}, and show that adaptive SCF-PEKS can be generically constructed from anonymous identity-based encryption (anonymous IBE) only. That is, for constructing adaptive SCF-PEKS we do not require any additional cryptographic primitive compared to the Abdalla et al. PEKS construction (J. Cryptology 2008), even though adaptive SCF-PEKS requires additional functionalities. Note that our generic construction needs to apply the KEM/DEM framework (a.k.a. hybrid encryption), where KEM stands for key encapsulation mechanism and DEM stands for data encapsulation mechanism. We also show that there is a class of anonymous IBE that can be applied for constructing adaptive SCF-PEKS without using hybrid encryption, and propose an adaptive SCF-PEKS construction based on this IBE. Although our second construction is not fully generic, it is more efficient than the first, since we can exclude the DEM part. Finally, we instantiate an adaptive SCF-PEKS scheme (via our second construction) that achieves a similar level of efficiency for the costs of the test procedure and encryption, compared to the (non-adaptively secure) SCF-PEKS scheme by Fang et al. (CANS2009).
Instantaneous Frequency Analysis
This paper investigates the use of instantaneous frequency (IF) instead of power amplitude and power spectrum in side-channel analysis. In contrast to the constant frequency used in the Fourier transform, instantaneous frequency reflects local phase differences and allows the detection of frequency variations. These variations reflect the processed binary data and are hence cryptanalytically useful. IF exploits the fact that after larger power drops more time is required to restore the power back to its nominal value. While our experiments reveal that IF does not bring specific benefits over usual power attacks when applied to unprotected designs, IF yields much better results in the presence of amplitude-modification countermeasures.
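A standard way to estimate instantaneous frequency from a sampled trace is via the analytic signal: apply the Hilbert transform, unwrap the phase, and differentiate. A generic signal-processing sketch, with a synthetic trace standing in for a real power measurement:

    import numpy as np
    from scipy.signal import hilbert

    fs = 1_000_000                     # sample rate in Hz (assumed)
    t = np.arange(4096) / fs
    # Synthetic "trace": a carrier whose frequency dips briefly, as data-dependent
    # activity might cause in a real measurement.
    freq = 50_000 - 5_000 * ((t > 1e-3) & (t < 2e-3))
    trace = np.sin(2 * np.pi * np.cumsum(freq) / fs)

    analytic = hilbert(trace)                      # analytic signal
    phase = np.unwrap(np.angle(analytic))          # instantaneous phase
    inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency in Hz
    print(inst_freq[100], inst_freq[1500])         # roughly 50 kHz vs 45 kHz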
On the use of continued fractions for stream ciphers
In this paper, we present a new approach to stream ciphers. This method draws its strength from public key algorithms such as RSA and from the continued fraction expansion of certain irrational numbers to produce a pseudo-random stream. Although the encryption scheme proposed in this paper is based on a hard mathematical problem, it is fast in practice.
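To illustrate the kind of stream meant here (a toy sketch of the general idea, not the authors' scheme, and certainly not secure on its own): the continued fraction expansion of $\sqrt{N}$ for a non-square integer $N$ can be generated exactly with integer arithmetic, and its partial quotients used as keystream material.

    from math import isqrt

    def cf_sqrt_stream(N):
        """Partial quotients of the continued fraction of sqrt(N), N non-square."""
        a0 = isqrt(N)
        m, d, a = 0, 1, a0
        while True:
            yield a
            m = d * a - m
            d = (N - m * m) // d      # division is always exact here
            a = (a0 + m) // d

    stream = cf_sqrt_stream(7)        # sqrt(7) = [2; 1, 1, 1, 4, 1, 1, 1, 4, ...]
    print([next(stream) for _ in range(9)])

    # Toy keystream use (insecure): XOR message bytes with partial quotients.
    ks = cf_sqrt_stream(2**61 - 1)    # a larger non-square "key"
    ct = bytes(b ^ (next(ks) & 0xFF) for b in b"attack at dawn")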
Fully-Anonymous Functional Proxy-Re-Encryption
In this paper, we introduce a general notion of functional proxy-re-encryption (F-PRE), in which a wide class of functional encryption (FE) is combined with a proxy-re-encryption (PRE) mechanism. The PRE system should reveal {\em minimal} information to a proxy; in particular, hiding the parameters of the re-encryption keys and of the original ciphertexts which it manipulates is highly desirable. We first formulate such a {\em fully-anonymous} security notion of F-PRE, including the usual payload-hiding properties. We then propose the first fully-anonymous inner-product PRE (IP-PRE) scheme, whose security is proven under the DLIN assumption and the existence of a strongly unforgeable one-time signature scheme in the standard model. Also, we propose the first ciphertext-policy F-PRE scheme with the access structures of Okamoto-Takashima (CRYPTO 2010), which also has an anonymity property for re-encryption keys as well as payload-hiding for original and re-encrypted ciphertexts. Its security is proven under the same assumptions as the above IP-PRE scheme in the standard model. For these results, we develop novel {\em blind delegation} and {\em subspace insulation for re-enc key basis} techniques on the dual system encryption (DSE) paradigm and the dual pairing vector spaces (DPVS) approach. These techniques seem difficult to realize with a {\em composite-order} bilinear group DSE approach.
Anon-Pass: Practical Anonymous Subscriptions
We present the design, security proof, and implementation of an anonymous subscription service. Users register for the service by providing some form of identity, which might or might not be linked to a real-world identity such as a credit card, a web login, or a public key. A user logs on to the system by presenting a credential derived from information received at registration. Each credential allows only a single login in any authentication window, or epoch. Logins are anonymous in the sense that the service cannot distinguish which user is logging in any better than random guessing. This implies unlinkability of a user across different logins.
We find that a central tension in an anonymous subscription service is the service provider’s desire for a long epoch (to reduce server-side computation) versus users’ desire for a short epoch (so they can repeatedly “re-anonymize” their sessions). We balance this tension by having short epochs, but adding an efficient operation for clients who do not need unlinkability to cheaply re-authenticate themselves for the next time period.
We measure performance of a research prototype of our protocol that allows an independent service to offer anonymous access to existing services. We implement a music service, an Android-based subway-pass application, and a web proxy, and show that adding anonymity adds minimal client latency and only requires 33 KB of server memory per active user.
Certified computer-aided cryptography: efficient provably secure machine code from high-level implementations
We present a computer-aided framework for proving concrete security bounds for cryptographic machine code implementations. The front-end of the framework is an interactive verification tool that extends the EasyCrypt framework to reason about relational properties of C-like programs extended with idealised probabilistic operations in the style of code-based security proofs. The framework also incorporates an extension of the CompCert certified compiler to support trusted libraries providing complex arithmetic calculations or instantiating idealised components such as sampling operations. This certified compiler allows us to carry to executable code the security guarantees established at the high level, and is also instrumented to detect when compilation may interfere with side-channel countermeasures deployed in source code.
We demonstrate the applicability of the framework with the RSA-OAEP encryption scheme, as standardized in PKCS#1 v2.1. The outcome is a rigorous analysis of the advantage of an adversary to break the security of assembly implementations of the algorithms specified by the standard. The example also provides two contributions of independent interest: it is the first application of computer-aided cryptographic tools to real-world security, and the first application of CompCert to cryptographic software.
Hybrid Approach for the Fast Verification for Improved Versions of the UOV and Rainbow Signature Schemes
Multivariate cryptography is one of the main candidates to guarantee the security of communication in the post-quantum era. Especially in the area of digital signatures, multivariate cryptography offers a wide range of practical schemes. In \cite{PB12} and \cite{PB13} Petzoldt et al. showed a way to speed up the verification process of improved variants of the UOV and Rainbow signature schemes. In this paper we show how we can do even better by a slight variation of their algorithms.
Keyed Side-Channel Based Hashing for IP Protection using Wavelets
The protection of intellectual property (IP) is a challenging task, especially in secured embedded systems, where program code suspected of being plagiarized cannot simply be read out for further inspection. This is even more aggravated if the original source code was modified to prevent comparisons of any kind. For instance, watermarks that are hidden in the code are at risk of being rendered useless if the attacker has full access to the original code and some knowledge about the watermark. The unlicensed use of patented algorithms is a further problem that belongs to IP plagiarism as well. A recent work presented a framework based on perceptual hashing to detect intellectual property in hardware and software designs. In this work we extend this framework to detect IP plagiarism in embedded systems, so that it can reliably match contents even in the presence of attacks. To this end, we propose an adapted signal feature extraction method, the wavelet transform, to form a keyed side-channel hash function.
Pairing Inversion via Non-degenerate Auxiliary Pairings
The security of pairing-based cryptosystems is closely related to the difficulty of the pairing inversion problem (PI). In this paper, we discuss the difficulty of pairing inversion on the generalized ate pairings of Vercauteren.
First, we provide a simpler approach for PI by generalizing and simplifying Kanayama-Okamoto’s approach; our approach involves modifications of exponentiation inversion (EI) and Miller inversion (MI) via an “auxiliary” pairing. Then we provide a complexity analysis of the modified MI, showing that the complexity depends on the sum-norm of the integer vector defining the auxiliary pairing.
Next, we observe that degenerate auxiliary pairings are expected to make the modified EI harder. We provide a sufficient condition on the integer vector, in terms of its max norm, so that the corresponding auxiliary pairing is non-degenerate.
Finally, we define an infinite set of curve parameters, which includes those of typical pairing friendly curves, and we show that, within those parameters, PI of arbitrarily given generalized ate pairing can be reduced to modified EI in polynomial time.
Families of fast elliptic curves from Q-curves
We construct new families of elliptic curves over \(\FF_{p^2}\) with efficiently computable endomorphisms, which can be used to accelerate elliptic curve-based cryptosystems in the same way as Gallant--Lambert--Vanstone (GLV) and Galbraith--Lin--Scott (GLS) endomorphisms.
Our construction is based on reducing \(\QQ\)-curves---curves over quadratic number fields without complex multiplication, but with isogenies to their Galois conjugates---modulo inert primes.
As a first application of the general theory we construct, for every \(p > 3\), two one-parameter families of elliptic curves over \(\FF_{p^2}\) equipped with endomorphisms that are faster than doubling.
Like GLS (which appears as a degenerate case of our construction), we offer the advantage over GLV of selecting from a much wider range of curves, and thus finding secure group orders when \(p\) is fixed. Unlike GLS, we also offer the possibility of constructing twist-secure curves.
Among our examples are prime-order curves equipped with fast endomorphisms, with almost-prime-order twists, over \(\FF_{p^2}\) for \(p = 2^{127}-1\) and \(p = 2^{255}-19\).
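Endomorphism-accelerated scalar multiplication rests on the GLV decomposition step: write $k = k_1 + k_2\lambda \pmod n$ with $k_1, k_2$ of roughly half the bit length of $n$, found via short vectors of the lattice $\{(x,y) : x + y\lambda \equiv 0 \pmod n\}$. A self-contained sketch of the classical method with toy parameters (not curve constants from this paper):

    def glv_split(k, lam, n):
        """Return (k1, k2) with k1 + k2*lam = k (mod n) and k1, k2 short."""
        # Partial extended Euclid on (n, lam): every pair (r, t) below satisfies
        # r = t*lam (mod n), so (r, -t) lies in the lattice {x + y*lam = 0 (mod n)}.
        r0, t0, r1, t1 = n, 0, lam, 1
        while r1 * r1 >= n:
            q = r0 // r1
            r0, t0, r1, t1 = r1, t1, r0 - q * r1, t0 - q * t1
        v1, v2 = (r1, -t1), (r0, -t0)        # two short lattice vectors
        # Babai rounding: subtract the lattice point nearest to (k, 0).
        det = v1[0] * v2[1] - v1[1] * v2[0]
        b1 = round(k * v2[1] / det)
        b2 = round(-k * v1[1] / det)
        return k - b1 * v1[0] - b2 * v2[0], -b1 * v1[1] - b2 * v2[1]

    n = 2**61 - 1                     # toy (prime) group order
    lam = 1234567890123456789 % n     # toy endomorphism eigenvalue
    k = 2**60 + 12345
    k1, k2 = glv_split(k, lam, n)
    assert (k1 + k2 * lam - k) % n == 0
    print(k1.bit_length(), k2.bit_length())   # roughly half the length of k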
Four-dimensional GLV via the Weil restriction
The Gallant-Lambert-Vanstone (GLV) algorithm uses efficiently computable endomorphisms to accelerate the computation of scalar multiplication of points on an abelian variety. Freeman and Satoh proposed for cryptographic use two families of genus 2 curves defined over $\F_{p}$ which have the property that the corresponding Jacobians are $(2,2)$-isogenous over an extension field to a product of elliptic curves defined over $\F_{p^2}$. We exploit the relationship between the endomorphism rings of isogenous abelian varieties to exhibit efficiently computable endomorphisms on both the genus 2 Jacobian and the elliptic curve. This leads to a four dimensional GLV method on Freeman and Satoh's Jacobians and on two new families of elliptic curves defined over $\F_{p^2}$.
MinimaLT: Minimal-latency Networking Through Better Security
Minimal Latency Tunneling (MinimaLT) is a new network protocol that provides ubiquitous encryption for maximal confidentiality, including protecting packet headers. MinimaLT provides server and user authentication, extensive Denial-of-Service protections, privacy-preserving IP mobility, and fast key erasure. We describe the protocol, demonstrate its performance relative to TLS and unencrypted TCP/IP, and analyze its protections, including its resilience against DoS attacks. By exploiting the properties of its cryptographic protections, MinimaLT is able to eliminate three-way handshakes and thus create connections faster than unencrypted TCP/IP.
Bypassing Passkey Authentication in Bluetooth Low Energy
This memo describes a new cryptographic weakness of the passkey-based pairing of Bluetooth Low Energy (also known as Bluetooth Smart). The vulnerability discussed here extends the set of possible attack scenarios that were already elaborated by Mike Ryan at Shmoocon 2013.
Instead of a passive sniffing attack on pairing secrets, we show how an active fraudulent Responder can gracefully bypass passkey authentication, even when it is based on a one-time generated PIN.
Lattice-Based Group Signatures with Logarithmic Signature Size
Group signatures are cryptographic primitives where users can anonymously sign messages in the name of a population they belong to. Gordon et al. (Asiacrypt 2010) suggested the first realization of group signatures based on lattice assumptions in the random oracle model. A significant drawback of their scheme is its linear signature size in the cardinality $N$ of the group. A recent extension proposed by Camenisch et al. (SCN 2012) suffers from the same overhead. In this paper, we describe the first lattice-based group signature schemes where the signature and public key sizes are essentially logarithmic in $N$ (for any fixed security level). Our basic construction only satisfies a relaxed definition of anonymity (just like the Gordon et al. system) but readily extends into a fully anonymous group signature (i.e., that resists adversaries equipped with a signature opening oracle). We prove the security of our schemes in the random oracle model under the SIS and LWE assumptions.
Maliciously Circuit-Private FHE
We present a framework for transforming FHE (fully homomorphic encryption) schemes with no circuit privacy requirements into maliciously circuit-private FHE. That is, even if both a maliciously formed public key and ciphertext are used, encrypted outputs only reveal the evaluation of the circuit on some well-formed input $x^*$.
Previous literature on FHE only considered semi-honest circuit privacy. Circuit-private FHE schemes have direct applications to computing on encrypted data. In that setting, one party (a receiver) holding an input $x$ wishes to learn the evaluation of a circuit $C$ held by another party (a sender). The goal is to make the receiver's work sublinear in (and ideally independent of) $|C|$, using a 2-message protocol.
The framework uses techniques akin to Gentry's bootstrapping and conditional disclosure of secrets (CDS [AIR01]), combining a non-circuit-private FHE scheme with a homomorphic encryption (HE) scheme for a smaller class of circuits which is maliciously circuit-private. We devise the first known circuit-private FHE by instantiating our framework with various (standard) FHE schemes from the literature. The transformation technique may be of independent interest and may have various additional applications.
Solving a $6120$-bit DLP on a Desktop Computer
In this paper we show how some recent ideas regarding the discrete logarithm problem (DLP) in finite fields of small characteristic may be applied to compute logarithms in some very large fields extremely efficiently. By combining the polynomial time relation generation from the authors' CRYPTO 2013 paper, an improved degree two elimination technique, and an analogue of Joux's recent small-degree elimination method, we solved a DLP in the record-sized finite field of $2^{6120}$ elements, using just a single core-month. Relative to the previous record set by Joux in the field of $2^{4080}$ elements, this represents a $50\%$ increase in the bitlength, using just $5\%$ of the core-hours. We also show that for the fields considered, the parameters for Joux's $L_Q(1/4 + o(1))$ algorithm may be optimised to produce an $L_Q(1/4)$ algorithm.
Towards Fresh Re-Keying with Leakage-Resilient PRFs: Cipher Design Principles and Analysis
Leakage-resilient cryptography aims at developing new algorithms for which physical security against side-channel attacks can be formally analyzed. Following the work of Dziembowski and Pietrzak at FOCS 2008, several symmetric cryptographic primitives have been investigated in this setting. Most of them can be instantiated with a block cipher as underlying component. Such an approach naturally raises the question whether certain block ciphers are better suited for this purpose. In order to answer this question, we consider a leakage-resilient re-keying function, and evaluate its security at different abstraction levels. That is, we study possible attacks exploiting specific features of the algorithmic description, hardware architecture and physical implementation of this construction. These evaluations lead to two main outcomes. First, we complement previous works on leakage-resilient cryptography and further specify the conditions under which they actually provide physical security. Second, we take advantage of our analysis to extract new design principles for block ciphers to be used in leakage-resilient primitives. While our investigations focus on side-channel attacks in the first place, we hope these new design principles will trigger the interest of symmetric cryptographers to design new block ciphers combining good properties for secure implementations and security against black box (mathematical) cryptanalysis.
Secure PRNG Seeding on Commercial Off-the-Shelf Microcontrollers
The generation of high quality random numbers is crucial to many cryptographic applications, including cryptographic protocols and the generation of secret keys, nonces, and salts. Their values must contain enough randomness to be unpredictable to attackers. Pseudo-random number generators require initial data with high entropy as a seed to produce a large stream of high quality random data. Yet, despite the importance of randomness, proper high quality random number generation is often ignored. Embedded devices in particular often suffer from weak random number generators. In this work, we focus on identifying and evaluating SRAM in commercial off-the-shelf microcontrollers as an entropy source for PRNG seeding. We measure and evaluate the SRAM start-up patterns of two popular types of microcontrollers, a STMicroelectronics STM32F100R8 and a Microchip PIC16F1825. We also present an efficient software-only architecture for secure PRNG seeding. After analyzing over 1 000 000 measurements in total, we conclude that of these two devices, the PIC16F1825 cannot be used to securely seed a PRNG. The STM32F100R8, however, has the ability to generate very strong seeds from the noise in its SRAM start-up pattern. These seeds can then be used to ensure a PRNG generates high quality data.
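The software-only seeding flow can be sketched generically (a plain hash as conditioner and a placeholder for the SRAM read; a real implementation must capture the SRAM before the runtime initializes it and should use a vetted extractor and DRBG):

    import hashlib

    def seed_from_sram(sram_dump: bytes) -> bytes:
        # Condition the noisy start-up pattern into a 32-byte seed: the raw
        # bytes are biased, and hashing concentrates their entropy.
        return hashlib.sha256(sram_dump).digest()

    def prng_stream(seed: bytes, nblocks: int) -> bytes:
        # Minimal hash-counter PRNG sketch, for illustration only.
        return b"".join(hashlib.sha256(seed + i.to_bytes(4, "big")).digest()
                        for i in range(nblocks))

    # In practice sram_dump is read from uninitialized RAM at power-up;
    # here a placeholder stands in for it.
    print(prng_stream(seed_from_sram(bytes(1024)), 2).hex())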
Theory of masking with codewords in hardware: low-weight $d$th-order correlation-immune Boolean functions
In hardware, substitution boxes for block ciphers can be stored already masked in the implementation. The masks must be chosen under two constraints: their number is determined by the implementation area, and their properties should make it possible to thwart high-order zero-offset attacks of the highest possible degree. First, we show that this problem translates into a known trade-off on Boolean functions, namely finding correlation-immune functions of lowest weight. For instance, this allows us to prove that a byte-oriented block cipher such as AES can be protected with only $16$ mask values against zero-offset correlation power attacks of orders $1$, $2$ and $3$. Second, we study $d$th-order correlation-immune Boolean functions $\F_2^n \to \F_2$ of low weight and exhibit such functions of minimal weight found by a satisfiability modulo theories tool. In particular, we give the minimal weight for $n \leq 10$. Some of these results were not known previously, such as the minimal weight for $(n=9, d=4)$ and $(n=10, d \in \{4,5,6\})$. These results set new bounds for the minimal number of lines of binary orthogonal arrays. In particular, we point out that the minimal weight $w_{n,d}$ of a $d$th-order correlation-immune function might not be increasing with the number of variables $n$.
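For very small parameters, the minimal weight can be found by exhaustive search over supports, using the characterization that $f$ is $d$th-order correlation-immune iff $\sum_{x \in \mathrm{supp}(f)} (-1)^{\omega \cdot x} = 0$ for every mask $\omega$ with $1 \leq \mathrm{wt}(\omega) \leq d$ (this enumeration explodes quickly, which is why results up to $n = 10$ call for an SMT tool instead):

    from itertools import combinations

    def is_ci(support, n, d):
        # Sum of (-1)^(w.x) over the support must vanish for all masks w
        # of Hamming weight 1..d.
        for wt in range(1, d + 1):
            for pos in combinations(range(n), wt):
                w = sum(1 << i for i in pos)
                if sum(1 - 2 * (bin(w & x).count("1") & 1) for x in support) != 0:
                    return False
        return True

    def minimal_ci_weight(n, d):
        for weight in range(1, 2**n + 1):
            for support in combinations(range(2**n), weight):
                if is_ci(support, n, d):
                    return weight

    print(minimal_ci_weight(4, 1))   # 2, e.g. the support {0000, 1111}
    print(minimal_ci_weight(4, 2))   # the support is a strength-2 orthogonal array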
Cryptanalysis of Grigoriev-Shpilrain Physical Asymmetric Scheme With Capacitors
A few days ago, Grigoriev and Shpilrain proposed to build a system for the transmission of information without a shared secret, essentially a sort of public key cryptosystem, based on properties of physical systems.
In this paper we show that their second scheme, based on capacitors, is insecure and extremely easy to break in practice.
Impossible Differential-Linear Cryptanalysis of Reduced-Round CLEFIA-128
CLEFIA is a 128-bit block cipher proposed by Sony Corporation in 2007. Our paper introduces a new chosen-text attack on iterated cryptosystems, the impossible differential-linear attack. The attack is efficient for $16$-round CLEFIA with whitening keys. In the paper, we construct a $13$-round impossible differential-linear distinguisher. Based on the distinguisher, we present an effective attack on 16-round CLEFIA-$128$ with data complexity of $2^{122.73}$, recovering $96$-bit subkeys in total. Our attack can also be applied to CLEFIA-192 and CLEFIA-$256$.
A Profitable Sub-Prime Loan: Obtaining the Advantages of Composite Order in Prime-Order Bilinear Groups
Composite-order bilinear groups provide many structural features that are useful for both constructing cryptographic primitives and enabling security reductions. Despite these convenient features, however, composite-order bilinear groups are less desirable than prime-order bilinear groups for reasons of both efficiency and security. A recent line of work has therefore focused on translating these structural features from the composite-order to the prime-order setting; much of this work focused on two such features, projecting and canceling, in isolation, but a result due to Seo and Cheon showed that both features can be obtained simultaneously in the prime-order setting.
In this paper, we reinterpret the construction of Seo and Cheon in the context of dual pairing vector spaces (which provide canceling as well as useful parameter hiding features) to obtain a unified framework that simulates all of these composite-order features in the prime-order setting. We demonstrate the strength of this framework by providing two applications: one that adds dual pairing vector spaces to the existing projection in the Boneh-Goh-Nissim encryption scheme to obtain leakage resilience, and another that adds the concept of projecting to the existing dual pairing vector spaces in an IND-CPA secure IBE scheme to "boost" its security to IND-CCA1. Our leakage-resilient BGN application is of independent interest, and it is not clear how to achieve it from pure composite-order techniques without mixing in additional vector space tools. Both applications rely solely on the Symmetric External Diffie Hellman assumption (SXDH).
Computing class polynomials for abelian surfaces
We describe a quasi-linear algorithm for computing Igusa class polynomials of Jacobians of genus 2 curves via complex floating-point approximations of their roots. After providing an explicit treatment of the computations in quartic CM fields and their Galois closures, we pursue an approach due to Dupont for evaluating ϑ-constants in quasi-linear time using Newton iterations on the Borchardt mean. We report on experiments with our implementation and present an example with class number 17608.
Does My Device Leak Information? An a priori Statistical Power Analysis of Leakage Detection Tests
The development of a leakage detection testing methodology for the side-channel resistance of cryptographic devices is an issue that has received recent focus from standardisation bodies such as NIST. Statistical techniques such as hypothesis and significance testing appear to be ideally suited for this purpose. In this work we evaluate the candidacy of three such detection tests: a \emph{t}-test proposed by Cryptography Research Inc., and two mutual information-based tests, one in which data is treated as continuous and one as discrete. Our evaluation investigates three particular areas: statistical power, the effectiveness of multiplicity corrections, and computational complexity. To facilitate a fair comparison we conduct a novel \emph{a priori} statistical power analysis of the three tests in the context of side-channel analysis, finding surprisingly that the continuous mutual information and \emph{t}-tests exhibit similar levels of power. We also show how the inherently parallel nature of the continuous mutual information test can be leveraged to reduce a large computational cost to insignificant levels. To complement the \emph{a priori} statistical power analysis we include two real-world case studies of the tests applied to software and hardware implementations of the AES.
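The t-test in question is typically instantiated as Welch's two-sample statistic computed per sample point over two trace sets (e.g., fixed versus random plaintexts), flagging leakage when $|t|$ crosses a threshold, commonly $4.5$. A minimal sketch with synthetic traces (the injected leak and the threshold are illustrative assumptions):

    import numpy as np

    def welch_t(a, b):
        # Welch's t-statistic per sample point (unequal variances and sizes).
        ma, mb = a.mean(axis=0), b.mean(axis=0)
        va, vb = a.var(axis=0, ddof=1), b.var(axis=0, ddof=1)
        return (ma - mb) / np.sqrt(va / len(a) + vb / len(b))

    rng = np.random.default_rng(1)
    fixed_tr = rng.normal(0, 1, (5000, 500))
    random_tr = rng.normal(0, 1, (5000, 500))
    fixed_tr[:, 250] += 0.15            # inject a small leak at sample point 250

    t = welch_t(fixed_tr, random_tr)
    print(np.where(np.abs(t) > 4.5)[0])   # only point 250 should be flagged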
Improvement and Efficient Implementation of a Lattice-based Signature Scheme
Lattice-based signature schemes constitute an interesting alternative to RSA and discrete logarithm based systems which may become insecure in the future, for example due to the possibility of quantum attacks. A particularly interesting scheme in this context is the GPV signature scheme [GPV08] combined with the trapdoor construction from Micciancio and Peikert [MP12], as it admits strong security proofs and is believed to be very efficient in practice. This paper confirms this belief and shows how to improve the GPV scheme in terms of space and running time, and presents an implementation of the optimized scheme. A ring variant of this scheme is also introduced which leads to a more efficient construction. Experimental results show that GPV with the new trapdoor construction is competitive with the signature schemes that are currently used in practice.
Universally Composable Symbolic Analysis for Two-Party Protocols based on Homomorphic Encryption
We consider a class of two-party function evaluation protocols in which the parties are allowed to use ideal functionalities as well as a set of powerful primitives, namely commitments, homomorphic encryption, and certain zero-knowledge proofs. We illustrate that with these it is possible to capture protocols for oblivious transfer, coin-flipping, and the generation of multiplication triples.
We show how any protocol in our class can be compiled to a symbolic representation expressed as a process in an abstract process calculus, and prove a general computational soundness theorem implying that if the protocol realises a given ideal functionality in the symbolic setting, then the original version also realises the ideal functionality in the standard computational UC setting. In other words, the theorem allows us to transfer a proof in the abstract symbolic setting to a proof in the standard UC model.
Finally, we show that the symbolic interpretation is simple enough in a number of cases for the symbolic proof to be partly automated using the ProVerif tool.
Survey and Benchmark of Lightweight Block Ciphers for Wireless Sensor Networks
For security applications in wireless sensor networks (WSNs), choosing the best algorithms in terms of energy efficiency and small memory requirements is a real challenge because sensor networks must be autonomous. In \cite{EisenbarthGGHIKKNPRSO12,LawDH06}, the authors benchmarked some block ciphers on a dedicated platform and deduced the best candidates to use in the context of small embedded platforms.
This article proposes to study, on a dedicated sensor platform, most of the recent lightweight block ciphers as well as some conventional block ciphers. First, we describe the design of the chosen block ciphers with a security summary, and we then present some implementation tests performed on our platform.
Synchronous Sampling and Clock Recovery of Internal Oscillators for Side Channel Analysis
Measuring power consumption for side-channel analysis typically uses an oscilloscope, which measures the data relative to an internal sample clock. By synchronizing the sampling clock to the clock of the target device, the sample rate requirements are considerably relaxed; the attack will succeed with a much lower sample rate.
This work measures the performance of a synchronous sampling system attacking a modern microcontroller running a software AES implementation. This attack is characterized under four conditions: with a stable crystal-oscillator based clock, with a clock that is randomly varied between 3.9 MHz - 13 MHz, with an internal oscillator that is randomly varied between 7.2 MHz - 8.1 MHz, and with an internal oscillator that has slight random variation due to natural `drift' in the oscillator.
Traces captured with the synchronous sampling technique can be processed with a standard Differential Power Analysis (DPA) style attack in all four cases, whereas when an oscilloscope is used only the stable oscillator setup is successful. This work also develops the hardware to recover the internal clock of a device which does not have an externally available clock. It is possible to implement this scheme in software only, allowing it to work with existing oscilloscope-based test environments.
A Toolkit for Ring-LWE Cryptography
Recent advances in lattice cryptography, mainly stemming from the development of ring-based primitives such as ring-LWE, have made it possible to design cryptographic schemes whose efficiency is competitive with that of more traditional number-theoretic ones, along with entirely new applications like fully homomorphic encryption. Unfortunately, realizing the full potential of ring-based cryptography has so far been hindered by a lack of practical algorithms and analytical tools for working in this context. As a result, most previous works have focused on very special classes of rings such as power-of-two cyclotomics, which significantly restricts the possible applications.
We bridge this gap by introducing a toolkit of fast, modular algorithms and analytical techniques that can be used in a wide variety of ring-based cryptographic applications, particularly those built around ring-LWE. Our techniques yield applications that work in \emph{arbitrary} cyclotomic rings, with \emph{no loss} in their underlying worst-case hardness guarantees, and very little loss in computational efficiency, relative to power-of-two cyclotomics. To demonstrate the toolkit's applicability, we develop two illustrative applications: a public-key cryptosystem and a ``somewhat homomorphic'' symmetric encryption scheme. Both apply to arbitrary cyclotomics, have tight parameters, and very efficient implementations.
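To fix ideas, here is a toy symmetric ring-LWE encryption in the simplest power-of-two cyclotomic setting that the toolkit generalizes beyond: arithmetic in Z_q[x]/(x^n+1), with encryption b = a*s + 2e + m (parameters are illustrative and far too small to be secure):

    import numpy as np

    n, q = 16, 12289
    rng = np.random.default_rng(0)

    def mul(a, b):
        # Multiplication in Z_q[x]/(x^n + 1): negacyclic convolution.
        c = np.zeros(n, dtype=np.int64)
        for i in range(n):
            for j in range(n):
                if i + j < n:
                    c[i + j] += a[i] * b[j]
                else:
                    c[i + j - n] -= a[i] * b[j]
        return c % q

    def small():                      # "small" noise polynomial
        return rng.integers(-2, 3, n)

    s = small() % q                   # secret key

    def enc(m):                       # m: vector of message bits
        a = rng.integers(0, q, n)
        return a, (mul(a, s) + 2 * small() + m) % q

    def dec(a, b):
        v = (b - mul(a, s)) % q
        v[v > q // 2] -= q            # centered representative of 2e + m
        return v % 2

    m = rng.integers(0, 2, n)
    a, b = enc(m)
    assert np.array_equal(dec(a, b), m)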
A Leakage Resilient MAC
We put forward the first practical message authentication code (MAC) which is provably secure against continuous leakage under the Only Computation Leaks Information (OCLI) assumption. Within the context of continuous leakage, we introduce a novel modular proof technique: while most previous schemes are proven secure directly in the face of leakage, we reduce the (leakage) security of our scheme to its non-leakage security. This modularity, while known in other contexts, has two advantages: it makes it clearer which parts of the proof rely on which assumptions (i.e. whether a given assumption is needed for the leakage or the non-leakage security) and it also means that, if the security of the non-leakage version is improved, the security in the face of leakage is improved ‘for free’. We conclude the paper by discussing implementations; one on a popular core for embedded systems (the ARM Cortex-M4) and one on a high end processor (Intel i7), and investigate some performance and security aspects.
Security ranking among assumptions within the Uber assumption framework
Over the past decade bilinear maps have been used to build a large variety of cryptosystems. In parallel to new functionalities, we have also seen the emergence of many security assumptions. This leads to the general question of comparing two such assumptions. Boneh, Boyen and Goh introduced the Uber assumption as an attempt to offer a general framework for security assessment. Their idea is to propose a generic security assumption that can be specialized to suit the needs of any proof of protocols involving bilinear pairings. Even though the Uber assumption has only been stated in the bilinear setting, it can easily be restated to deal with ordinary Diffie-Hellman groups and to assess other types of protocols.
In this article, we explore some particular instances of the Uber assumption, namely the n-CDH assumption, the nth-CDH assumption and the Q-CDH assumption. We analyse the relationships between those assumptions, more precisely from a security point of view. Our analysis does not rely on any special property of the considered group(s) and does not use the generic group model.
Massive Group Message Authentication with Revocable Anonymity
We present and implement schemes for authenticating messages from a group of users to a recipient, with revocable anonymity and a massive (very high) message rate. Our implementations present a trade-off between the efficiency and the security required: from online group managers that participate in every message sent to offline managers, and from assuming a trusted group manager and a trusted recipient to securing against both entities. All implementations have the {\em traceability} feature, allowing one to distributively and efficiently trace all messages that originated from a specific group member without violating the anonymity of other members. In addition, our schemes are efficient and practical.
Secure Second Price Auctions with a Rational Auctioneer
We present novel security requirements for second price auctions and a
simple, efficient and practical protocol that provably maintains these
requirements. Novel requirements are needed because commonly used requirements,
such as the indistinguishability-based secrecy requirement of encryption schemes
presented by \cite{goldwasser1982pep}, do not fit properly in the second price
auctions context. Additionally, the presented protocol uses a trustworthy
supervisor that checks if the auctioneer deviated from the protocol and fines
him accordingly. By making sure that the auctioneer's expected utility when
deviating from the protocol is lower than his expected utility when abiding by
it, we ascertain that a {\em rational} auctioneer will abide by the
protocol. This allows the supervisor to optimize by performing
(computationally-intensive) inspections of the auctioneer with only low
probability.
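To make the incentive argument concrete, here is a minimal back-of-the-envelope sketch (the symbols $p$, $F$ and $G$ are ours, not the paper's). Suppose the supervisor inspects each auction independently with probability $p$, a detected deviation costs the auctioneer a fine $F$, and an undetected deviation gains him at most $G$. A risk-neutral auctioneer is then deterred whenever
\[
(1-p)\,G - p\,F < 0 \quad\Longleftrightarrow\quad p > \frac{G}{F+G},
\]
so, for example, with $F = 99G$ an inspection probability just above $0.01$ already suffices.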
Key Classification Attack on Block Ciphers
In this paper, a security analysis of block ciphers with key length greater than block length is proposed. For a well-designed block cipher with key length $k$ and block length $n$ such that $k > n$, and for any plaintext-ciphertext pair $(P, C)$, there are approximately $2^{k-n}$ keys which map $P$ to $C$. For a given block cipher, if there is an efficient algorithm that can classify such keys, we propose an algorithm that is able to recover the secret key with complexity $O(\max\{2^n, 2^{k-n}\})$. We apply this method to the 2-round block cipher KASUMI.
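The counting argument is easy to check on a toy cipher (our own example, unrelated to KASUMI): with block length $n = 8$ and key length $k = 12$, one expects about $2^{k-n} = 16$ keys consistent with any fixed plaintext/ciphertext pair.

# Toy check of the 2^{k-n} key-counting argument (illustrative only;
# this ad-hoc cipher is ours, not KASUMI).
n, k = 8, 12                      # block length and key length in bits

def toy_encrypt(key, pt):
    a = key & 0xFF                # low 8 key bits: XOR whitening
    b = key >> 8                  # high 4 key bits: modular addition
    return ((pt ^ a) + b) % 256

P = 0x3A
C = toy_encrypt(0x5C7, P)
consistent = [K for K in range(2 ** k) if toy_encrypt(K, P) == C]
print(len(consistent))            # exactly 16 == 2^(k-n) for this toy cipher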
The failure of McEliece PKC based on Reed-Muller codes
This paper describes a new algorithm for breaking the McEliece cryptosystem built on the binary Reed-Muller code $RM(r, m)$; the algorithm recovers the private key from the public key. The algorithm has complexity $O(n^d + n^4\log_2 n)$ bit operations, where $n=2^m$ and $d=\text{GCD}(r,m-1)$. When $\text{GCD}(r,m-1)$ is bounded, the attack has polynomial complexity. Practical implementation results show that a McEliece cryptosystem based on a code of length $n=65536$ bits can be broken in less than 7 hours on a personal computer.
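As a quick sanity check of these figures (our arithmetic, not the paper's): $n = 65536$ gives $m = 16$, so $d = \text{GCD}(r, 15) \in \{1, 3, 5, 15\}$, and the stated bit-operation count can be evaluated directly:

# Rough evaluation of the claimed O(n^d + n^4 log2 n) bit-operation count
# for RM(r, m) with m = 16, i.e. n = 2^16 = 65536 (our arithmetic).
from math import gcd, log2

m = 16
n = 2 ** m
for r in (2, 3, 5, 6):            # sample choices of the RM order r
    d = gcd(r, m - 1)             # m - 1 = 15, so d is in {1, 3, 5, 15}
    ops = n ** d + n ** 4 * log2(n)
    print(f"r={r}: d={d}, ~2^{log2(ops):.1f} bit operations")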
Salvaging Indifferentiability in a Multi-stage Setting
The indifferentiability framework by Maurer, Renner and Holenstein (MRH; TCC 2004) formalizes
a sufficient condition to safely replace a random oracle by a construction based on a
(hopefully) weaker assumption such as an ideal cipher. Indeed, many indifferentiable hash functions
have been constructed and could since be used in place of random oracles. Unfortunately,
Ristenpart, Shacham,
and Shrimpton (RSS; Eurocrypt 2011) discovered that for a large class of
security notions,
the MRH composition theorem actually does not apply. To bridge the gap
they suggested a stronger notion called reset indifferentiability and established a generalized
version of the MRH composition theorem. However, as recent works by Demay et al.~(Eurocrypt 2013) and
Baecher et al.~(Asiacrypt 2013) brought to light, reset indifferentiability is not achievable, thereby re-opening the quest for
a notion that is sufficient for multi-stage games and achievable at the same time.
We present a condition on multi-stage games that we call \emph{unsplittability}. We show that
if a game is unsplittable for a hash construction then the MRH composition theorem can be salvaged. Unsplittability
captures a restricted yet broad class of games together with a set of practical hash constructions including HMAC, NMAC
and several Merkle-Damgård variants. We show unsplittability for the chosen distribution attack (CDA) game of Bellare et al.~(Asiacrypt 2009), a multi-stage game capturing the security of deterministic encryption schemes; for message-locked encryption (Bellare et al.; Eurocrypt 2013), a related primitive that allows for secure deduplication; for universal computational extractors (UCE) (Bellare et al., Crypto 2013), a recently introduced standard-model assumption to replace random oracles; as well as for the proof-of-storage game given by Ristenpart et al.~as a counterexample to the general applicability of the indifferentiability framework.
A Novel Proof on Weil Pairing
In this paper we prove a basic property of the Weil pairing which helps in evaluating its value. We show that the Weil pairing value as
computed from the definition is equivalent to the ratio formula based on the Miller function. We prove a novel theorem (Theorem 2) and use it
to establish the equivalence. We further validate our claims with actual random examples.
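For context, the ratio formula in question is usually stated as follows (standard notation, not necessarily the paper's). For $m$-torsion points $P$ and $Q$, pick divisors $D_P \sim (P) - (O)$ and $D_Q \sim (Q) - (O)$ with disjoint supports, and let $f_P$, $f_Q$ be Miller functions with $\mathrm{div}(f_P) = m D_P$ and $\mathrm{div}(f_Q) = m D_Q$; then
\[
e_m(P, Q) = \frac{f_P(D_Q)}{f_Q(D_P)},
\]
up to a sign convention $(-1)^m$ that varies between references.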
A Secure Paper-Based Electronic Voting With No Encryption
We present a paper-based voting method that attempts to achieve voter privacy, universal verifiability and election integrity with only paper ballots and without using any cryptographic method. The voting procedure is easy: the voter only selects his intention on the screen of an electronic device, and the rest of the voting procedure is carried out by the device. The voter gets a receipt that can be used to verify that his vote has been counted in the final tally as he intended; however, the receipt cannot help the voter reveal who he voted for. Vote selling or coercion is also not possible, even with the voter's cooperation. A ballot in our voting method has two sides, one positive and one negative. Ballots are prepared for voting in prepackaged form (e.g. 5 ballots per package). Some bubbles of each ballot are prefilled at random. The numbers of positive and negative filled bubbles are equal to each other, and are equal for each candidate within a package. For example, if every package has 30 filled bubbles and there are three candidates, there would be 10 filled bubbles for each candidate in a package, half of them positive and half negative.
The OneBallot voting procedure is as follows: the voter puts the ballot inside an electronic device and chooses his candidate on the device screen. The device then prints another ballot exactly like the original one, with one difference: it either fills one positive bubble or unfills one negative bubble for the selected candidate. The first action could be done on the original ballot, but the second one inevitably requires printing a new ballot. The device then makes a copy of the new ballot as the voter's receipt and transfers the original ballot to the ballot box. After the election, a copy of all ballots is published on a public board (e.g. a website).
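A toy simulation can check the balancing property behind the tally (our own simplification; it assumes every ballot of every package is cast and that the marked ballots are what gets tallied): since each package starts with equal positive and negative marks per candidate, the net count of positive minus negative marks equals the number of votes for each candidate.

# Toy simulation (ours, simplifying the scheme described above).
import random

CANDIDATES = 3
PER_PACKAGE = 5        # ballots per package
MARKS_EACH = 2         # prefilled positive (= negative) marks per candidate per package

def make_package(rng):
    """A ballot is a list of [pos, neg] mark counts per candidate. Marks are
    spread over the package's ballots at random, but per candidate the
    package totals are balanced: MARKS_EACH positive and MARKS_EACH negative."""
    ballots = [[[0, 0] for _ in range(CANDIDATES)] for _ in range(PER_PACKAGE)]
    for c in range(CANDIDATES):
        for sign in (0, 1):                      # 0 = positive, 1 = negative
            for _ in range(MARKS_EACH):
                ballots[rng.randrange(PER_PACKAGE)][c][sign] += 1
    return ballots

def vote(ballot, c, rng):
    """Fill one positive bubble, or unfill one negative one, for candidate c."""
    if ballot[c][1] > 0 and rng.random() < 0.5:
        ballot[c][1] -= 1
    else:
        ballot[c][0] += 1

rng = random.Random(1)
cast, true_votes = [], [0] * CANDIDATES
for _ in range(40):                              # 40 packages, all fully used
    for ballot in make_package(rng):
        c = rng.randrange(CANDIDATES)
        vote(ballot, c, rng)
        true_votes[c] += 1
        cast.append(ballot)

tally = [sum(b[c][0] - b[c][1] for b in cast) for c in range(CANDIDATES)]
print(tally == true_votes, tally)                # net marks == actual votes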
Function-Private Identity-Based Encryption: Hiding the Function in Functional Encryption
We put forward a new notion, function privacy, in identity-based encryption and, more generally, in functional encryption. Intuitively, our notion asks that decryption keys reveal essentially no information on their corresponding identities, beyond the absolute minimum necessary. This is motivated by the need for providing predicate privacy in public-key searchable encryption. Formalizing such a notion, however, is not straightforward as given a decryption key it is always possible to learn some information on its corresponding identity by testing whether it correctly decrypts ciphertexts that are encrypted for specific identities.
In light of such an inherent difficulty, any meaningful notion of function privacy must be based on the minimal assumption that, from the adversary's point of view, identities that correspond to its given decryption keys are sampled from somewhat unpredictable distributions. We show that this assumption is in fact sufficient for obtaining a strong and realistic notion of function privacy. Loosely speaking, our framework requires that a decryption key corresponding to an identity sampled from any sufficiently unpredictable distribution is indistinguishable from a decryption key corresponding to an independently and uniformly sampled identity.
Within our framework we develop an approach for designing function-private identity-based encryption schemes, leading to constructions that are based on standard assumptions in bilinear groups (DBDH, DLIN) and lattices (LWE). In addition to function privacy, our schemes are also anonymous, and thus yield the first public-key searchable encryption schemes that are provably keyword private: a search key sk_w enables one to identify encryptions of an underlying keyword w, while not revealing any additional information about w beyond the minimum necessary, as long as the keyword w is sufficiently unpredictable.
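To illustrate the flavor of the real-or-random requirement, here is a simplified harness (ours; the keygen stub is only an interface placeholder, not one of the paper's constructions, and the game omits the paper's full generality):

# Simplified real-or-random function-privacy experiment (illustrative only).
import hashlib, secrets

MSK = secrets.token_bytes(32)          # master secret key (stub)

def keygen(identity: bytes) -> bytes:
    # Placeholder for IBE.KeyGen(msk, id): not an actual IBE scheme,
    # just enough interface for the game below.
    return hashlib.sha256(MSK + identity).digest()

def fp_experiment(adversary_sampler, bit: int) -> bytes:
    # bit = 0: identity drawn from the adversary's chosen (but sufficiently
    # unpredictable) distribution; bit = 1: identity uniform.
    identity = adversary_sampler() if bit == 0 else secrets.token_bytes(32)
    return keygen(identity)

# An unpredictable yet structured sampler (last byte forced to zero).
sampler = lambda: secrets.token_bytes(31) + b"\x00"
key = fp_experiment(sampler, bit=secrets.randbelow(2))
# Function privacy asks that no efficient adversary, seeing `key`, can tell
# which value of `bit` was used, as long as its sampler has high min-entropy.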
Three Snakes in One Hole: The First Systematic Hardware Accelerator Design for SOSEMANUK with Optional Serpent and SNOW 2.0 Modes
With increasing usage of hardware accelerators in modern heterogeneous System-on-Chips (SoCs), the distinction between hardware and software is no longer rigid. The domain of cryptography is no exception, and efficient hardware designs of so-called software ciphers are becoming increasingly popular. In this paper, we propose for the first time an efficient hardware accelerator design for SOSEMANUK, one of the finalists of the eSTREAM stream cipher competition in the software category. Since SOSEMANUK combines the design principles of the block cipher Serpent and the stream cipher SNOW 2.0, we make our design flexible enough to accommodate the option of independent execution of Serpent and SNOW 2.0. In the process, we identify interesting design points and explore different levels of optimization. We perform a detailed experimental evaluation of the performance figures of each design point. The best throughput achieved by the combined design is 67.84 Gbps for SOSEMANUK, 33.92 Gbps for SNOW 2.0 and 2.12 Gbps for Serpent. Our design outperforms all existing hardware (as well as software) designs of Serpent, SNOW 2.0 and SOSEMANUK, along with those of all other eSTREAM candidates.
System-on-Chips (SoCs), the distinction between hardware and software is no longer rigid. The domain of cryptography is no exception and efficient hardware design of so-called software ciphers are becoming increasingly popular. In this paper, for the first time we propose an efficient hardware accelerator design for SOSEMANUK, one of the finalists of the eSTREAM stream cipher competition in the software category. Since SOSEMANUK combines the design principles of the block cipher Serpent and the stream cipher SNOW 2.0, we make our design flexible to accommodate the option for independent execution of Serpent and SNOW 2.0. In the process, we identify interesting design points and explore different levels of optimizations. We perform a detailed experimental evaluation for the performance figures of each design point. The best throughput achieved by the combined design is 67.84 Gbps for SOSEMANUK, 33.92 Gbps for SNOW 2.0 and 2.12 Gbps for Serpent. Our design outperforms all existing hardware (as well as software) designs of Serpent, SNOW 2.0 and SOSEMANUK, along with those of all other eSTREAM candidates.