All papers in 2016 (Page 9 of 1195 results)
Efficient Beyond-Birthday-Bound-Secure Deterministic Authenticated Encryption with Minimal Stretch
Block-cipher-based authenticated encryption has received considerable attention from the ongoing CAESAR competition. While the focus of CAESAR resides primarily on nonce-based authenticated encryption, Deterministic Authenticated Encryption (DAE) is used in domains such as key wrap, where the available message entropy motivates omitting the overhead for nonces. Since the highest possible security is desirable when protecting keys, beyond-birthday-bound (BBB) security is a valuable goal for DAE. In the past, significant effort had to be invested into designing BBB-secure AE schemes from conventional block ciphers, at the cost of reduced efficiency and more intricate security proofs.
This work proposes Deterministic Counter in Tweak (DCT), a BBB-secure DAE scheme inspired by the Counter-in-Tweak encryption scheme by Peyrin and Seurin. Our design combines a fast $\epsilon$-almost-XOR-universal family of hash functions, for $\epsilon$ close to $2^{-2n}$, with a single call to a $2n$-bit SPRP, and a BBB-secure encryption scheme. First, we describe our construction generically with three independent keys, one for each component. Next, we present an efficient instantiation which (1) requires only a single key, (2) provides software efficiency by encrypting at less than two cycles per byte on current x64 processors, and (3) produces only the minimal $\tau$-bit stretch for $\tau$-bit authenticity. We leave open two minor aspects for future work: our current generic construction is defined for messages of at least $2n-\tau$ bits, and the verification algorithm requires the inverse of the used $2n$-bit SPRP and the encryption scheme.
Strengthening the Known-Key Security Notion for Block Ciphers
We reconsider the formalization of known-key attacks against ideal primitive-based block ciphers. This was previously tackled by Andreeva, Bogdanov, and Mennink (FSE 2013), who introduced the notion of known-key indifferentiability. Our starting point is the observation, previously made by Cogliati and Seurin (EUROCRYPT 2015), that this notion, which considers only a single known key available to the attacker, is too weak in some settings to fully capture what one might expect from a block cipher informally deemed resistant to known-key attacks. Hence, we introduce a stronger variant of known-key indifferentiability, where the adversary is given multiple known keys to ``play'' with, the informal goal being that the block cipher construction must behave as an independent random permutation for each of these known keys. Our main result is that the 9-round iterated Even-Mansour construction (with the trivial key-schedule, i.e., the same round key xored between permutations) achieves our new ``multiple'' known-keys indifferentiability notion, which contrasts with the previous result of Andreeva et al. that one single round is sufficient when only a single known key is considered. We also show that the 3-round iterated Even-Mansour construction achieves the weaker notion of multiple known-keys sequential indifferentiability, which implies in particular that it is correlation intractable with respect to relations involving any (polynomial) number of known keys.
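For reference, the $r$-round iterated Even-Mansour construction with the trivial key schedule mentioned above can be written as follows (standard formulation, not specific to this paper):
\[
y_0 = x, \qquad y_i = P_i(y_{i-1} \oplus k) \ \ \text{for } 1 \le i \le r, \qquad \mathrm{EM}_k(x) = y_r \oplus k,
\]
where $P_1,\ldots,P_r$ are public random permutations of $\{0,1\}^n$ and the same $n$-bit key $k$ is xored before every permutation call and once more after the last one.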
De Bruijn Sequences, Adjacency Graphs and Cyclotomy
We study the problem of constructing De Bruijn sequences by joining cycles of linear feedback shift registers (LFSRs) with reducible characteristic polynomials. The main difficulty in joining cycles is to find the location of conjugate pairs between cycles; the distribution of conjugate pairs among cycles is described by the adjacency graph. Let $l(x)$ be a characteristic polynomial, and let $l(x)=l_1(x)l_2(x)\cdots l_r(x)$ be a decomposition of $l(x)$ into pairwise co-prime factors. Firstly, we show a connection between the adjacency graph of $\mathrm{FSR}(l(x))$ and the association graphs of $\mathrm{FSR}(l_i(x))$, $1\leq i\leq r$. By this connection, the problem of determining the adjacency graph of $\mathrm{FSR}(l(x))$ is decomposed into the problem of determining the association graphs of $\mathrm{FSR}(l_i(x))$, $1\leq i\leq r$, which is much easier to handle. Then, we study the association graphs of LFSRs with irreducible characteristic polynomials and give a relationship between these association graphs and the cyclotomic numbers over finite fields. Finally, as an application of these results, we explicitly determine the adjacency graphs of some LFSRs and show that our results cover the previous ones.
Towards a Further Understanding of Bit-Based Division Property
At EUROCRYPT 2015, Todo proposed the division property. Since then, much research on the division property has followed. Inspired by the bit-based division property on SIMON introduced by Todo and Morii at FSE 2016,
we give a further understanding of the bit-based division property and come up with a new method to reconsider the \textbf{Substitution} rule given by Todo.
By integrating the method of the division property with the concrete Boolean function expressions of the S-box,
this new idea can help us trace the propagation of the division property at the bit level and avoid the tedious, direct application of the original propagation rules.
Benefiting from this fact, the method can be applied to find integral distinguishers for bit-oriented block ciphers other than SIMON.
Since this method replaces the \textbf{Substitution} rules with a subtle propagation table, we call it table-aided bit-based division property.
In order to verify our new method, we apply it to find integral distinguishers for CipherFour.
The experimental results indicate that the table-aided bit-based division property is indeed a valid and efficient tool to search for integral distinguishers for some bit-oriented block ciphers.
To handle the huge memory complexity of this new method, we apply the early reduce technique, which was proposed by Zhang and Wu at INDOCRYPT 2015.
With the help of the early reduce technique, an 8-round higher-order integral distinguisher for RECTANGLE can be constructed, which attains one more round than the previous one proposed by the designers. For PRESENT, we can find new 5-round and 6-round integral distinguishers.
As to SPONGENT-88, a new 14-round zero-sum distinguisher with data complexity $2^{80}$ can be found by combining our new method with previous techniques.
The table-aided bit-based division property can also be applied to find integral distinguishers for some word-oriented block ciphers, like TWINE and LBlock. Although we do not find any new integral distinguishers for these two ciphers, we believe that considering the S-box at the bit level is of great importance even for a word-oriented block cipher.
A Digital Signature Scheme Based on Random Split of St-Gen Codes
Recently we proposed a method for a random split of Staircase-Generator codes (St-Gen codes) to counter the weaknesses found in the previous constructions of public key schemes using St-Gen codes. The initial proposal for the random split addressed only the encryption scheme, and we left the problem of how to apply the random splitting on the signature scheme open. In this work we solve that open problem and describe a digital signature scheme based on random split of St-Gen codes.
Obfuscation without the Vulnerabilities of Multilinear Maps
Indistinguishability obfuscation is a central primitive in cryptography. Security of existing multilinear maps constructions on which current obfuscation candidates are based is poorly understood. In a few words, multilinear maps allow for checking if an arbitrary bounded degree polynomial on hidden values evaluates to zero or not. All known attacks on multilinear maps depend on the information revealed on computations that result in encodings of zero. This includes the recent annihilation attacks of Miles, Sahai and Zhandry [EPRINT 2016/147] on obfuscation candidates as a special case.
Building on a modification of the Garg, Gentry and Halevi [EUROCRYPT 2013] multilinear maps (GGH for short), we present a new obfuscation candidate that is resilient to these vulnerabilities. Specifically, in our construction the results of all computations yielding a zero provably hide all the secret system parameters. This is the first obfuscation candidate that weakens the security needed from the zero-test.
Formally, we prove security of our construction in a weakening of the idealized graded encoding model that accounts for all known vulnerabilities on GGH maps.
A Quasipolynomial Reduction for Generalized Selective Decryption on Trees
Generalized Selective Decryption (GSD), introduced by Panjwani [TCC’07], is a game for a symmetric encryption scheme Enc that captures the difficulty of proving adaptive security of certain protocols, most notably the Logical Key Hierarchy (LKH) multicast encryption protocol. In the GSD game there are $n$ keys $k_1,\ldots,k_n$, which the adversary may adaptively corrupt (learn); moreover, it can ask for encryptions $\mathrm{Enc}_{k_i}(k_j)$ of keys under other keys. The adversary’s task is to distinguish keys (which it cannot trivially compute) from random. Proving the hardness of GSD assuming only IND-CPA security of Enc is surprisingly hard. Using “complexity leveraging” loses a factor exponential in $n$, which makes the proof practically meaningless.
We can think of the GSD game as building a graph on $n$ vertices, where we add an edge $i \to j$ when the adversary asks for an encryption of $k_j$ under $k_i$. If restricted to graphs of depth $l$, Panjwani gave a reduction that loses only a factor exponential in $l$ (not $n$). To date, this is the only non-trivial result known for GSD.
In this paper we give almost-polynomial reductions for large classes of graphs. Most importantly, we prove the security of the GSD game restricted to trees losing only a quasi-polynomial factor $n^{3 \log n + 5}$. Trees are an important special case capturing real-world protocols like the LKH protocol. Our new bound improves upon Panjwani’s on some LKH variants proposed in the literature where the underlying tree is not balanced. Our proof builds on ideas from the “nested hybrids” technique recently introduced by Fuchsbauer et al. [Asiacrypt’14] for proving the adaptive security of constrained PRFs.
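For readers who want the game pinned down, the following minimal Python sketch models the GSD oracle interface described above; the key length, the bookkeeping, and the injected `enc` function are illustrative assumptions rather than anything taken from the paper.

```python
import os
from typing import Set, Tuple


class GSDGame:
    """Toy model of the Generalized Selective Decryption game.

    The adversary adaptively builds a key graph: an edge i -> j is recorded
    whenever it asks for Enc_{k_i}(k_j).  Corrupting a key reveals it; a
    challenge key must not be reachable from any corrupted key.
    """

    def __init__(self, n: int, enc) -> None:
        self.keys = [os.urandom(16) for _ in range(n)]      # k_1, ..., k_n
        self.enc = enc                                       # enc(key, msg) -> ciphertext
        self.edges: Set[Tuple[int, int]] = set()
        self.corrupted: Set[int] = set()

    def encrypt(self, i: int, j: int) -> bytes:
        """Oracle Enc_{k_i}(k_j); records the edge i -> j in the key graph."""
        self.edges.add((i, j))
        return self.enc(self.keys[i], self.keys[j])

    def corrupt(self, i: int) -> bytes:
        self.corrupted.add(i)
        return self.keys[i]

    def _reachable(self, start: Set[int]) -> Set[int]:
        seen, frontier = set(start), list(start)
        while frontier:
            u = frontier.pop()
            for a, b in self.edges:
                if a == u and b not in seen:
                    seen.add(b)
                    frontier.append(b)
        return seen

    def challenge(self, j: int) -> bytes:
        """Return k_j or a uniformly random string; j must not be trivially computable."""
        assert j not in self._reachable(self.corrupted), "trivial win excluded"
        return self.keys[j] if os.urandom(1)[0] & 1 else os.urandom(16)
```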
Tightly-Secure Authenticated Key Exchange without NAXOS' approach based on Decision Linear Problem
Designing a secure Authenticated Key Exchange (AKE) protocol without the NAXOS approach remains an open problem. The NAXOS approach \cite{4} is used to protect the session even if the adversary somehow obtains the ephemeral secret key. Using the NAXOS approach causes two main drawbacks: (i) the static secret key is used in computing the exponent of the ephemeral public key, which risks leaking it; (ii) the random oracle is used heavily, both when computing the exponent of the ephemeral public key and in the session key derivation. In this paper, we present another AKE-secure protocol without the NAXOS approach, based on the decision linear assumption in the random oracle model. We establish security using the game-sequence tool, which gives a tight security reduction for our protocol.
Game-Based Cryptanalysis of a Lightweight CRC-Based Authentication Protocol for EPC Tags
The term "Internet of Things (IoT)" expresses a huge network of smart and connected objects which can interact with other devices without our interposition. Radio frequency identification (RFID) is a great technology and an interesting candidate to provide communications for IoT networks, but numerous security and privacy issues need to be considered. In this paper, we analyze the security and the privacy of a new RFID authentication protocol proposed by Shi et al. in 2014. We prove that although Shi et al. have tried to present a secure and untraceable authentication protocol, their protocol still suffers from several security and privacy weaknesses which make it vulnerable to various security and privacy attacks. We present our privacy analysis based on a well-known formal privacy model which is presented by Ouafi and Phan in 2008. Moreover, to stop such attacks on the protocol and increase the performance of Shi et al.’s scheme, we present some modifications and propound an improved version of the protocol. Finally, the security and the privacy of the proposed protocol were analyzed against various attacks.
Canary Numbers: Design for Light-weight Online Testability of True Random Number Generators
We introduce the concept of canary numbers, to be used in health tests for true random number generators. Health tests are essential components of true random number generators because they are used to detect defects and failures of the entropy source. These tests need to be lightweight, low-latency and highly reliable. The proposed solution uses canary numbers, an extra, lower-quality output of the entropy source. This enables early-warning attack detection before the output of the generator is compromised. We illustrate the idea with two case studies of true random number generators implemented on a Xilinx Spartan-6 FPGA.
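Purely as an illustration of how a canary-based early warning could be wired up (the window size, threshold, and choice of a repetition-count style test below are our own assumptions, not parameters from the paper), a lightweight health test can watch the lower-quality canary stream and raise an alarm before the main output is consumed:

```python
from collections import deque


class CanaryMonitor:
    """Toy health test on a low-quality 'canary' side output of an entropy source.

    Alarms when the canary stream shows an implausibly long run of identical
    samples inside a sliding window, as an early warning that the entropy
    source may be failing or under attack.
    """

    def __init__(self, window: int = 1024, max_run: int = 8) -> None:
        self.window = deque(maxlen=window)
        self.max_run = max_run

    def feed(self, canary_byte: int) -> bool:
        """Feed one canary sample; returns True while the source looks healthy."""
        self.window.append(canary_byte)
        items = list(self.window)
        longest = run = 1
        for prev, cur in zip(items, items[1:]):
            run = run + 1 if cur == prev else 1
            longest = max(longest, run)
        return longest < self.max_run
```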
A note on Ring-LWE security in the case of Fully Homomorphic Encryption
Evaluating the practical security of Ring-LWE based cryptography has recently attracted considerable effort. Indeed, some differences from the standard LWE problem enable new attacks. In this paper we discuss the security of Ring-LWE as found in Fully Homomorphic Encryption (FHE) schemes. These schemes require parameters of very special shapes, which an attacker might use to their advantage. First we present the specificities of this case and recall state-of-the-art attacks, then we derive a new special-purpose attack. Our experiments show that this attack performs unexpectedly well and confirm that the security of such special parameter sets needs to be studied carefully.
Attacks against Filter Generators Exploiting Monomial Mappings
Filter generators are vulnerable to several attacks which have led to well-known design criteria on the Boolean filtering function. However, Rønjom and Cid have observed that a change of the primitive root defining the LFSR leads to several equivalent generators. They usually offer different security levels since they involve filtering functions of the form $F(x^k)$ where $k$ is coprime to $2^n-1$ and $n$ denotes the LFSR length. It is proved here that this monomial equivalence does not affect the resistance of the generator against algebraic attacks, while it usually impacts the resistance to correlation attacks. Most importantly, a more efficient attack can often be mounted by considering non-bijective monomial mappings. In this setting, a divide-and-conquer strategy applies, based on a search within a multiplicative subgroup of $\mathbb{F}_{2^n}^*$. Moreover, if the LFSR length $n$ is not a prime, a fast correlation attack involving a shorter LFSR can be performed.
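For context, the object being attacked, a plain filter generator, fits in a few lines of Python; the register length, taps, and Boolean filtering function below are arbitrary toy choices, not parameters from the paper:

```python
def filter_generator(state, feedback_taps, filter_taps, f, nbits):
    """Toy LFSR-based filter generator.

    state         : list of 0/1 values (the LFSR register)
    feedback_taps : positions xored together to form the LFSR feedback bit
    filter_taps   : positions fed into the Boolean filtering function f
    f             : Boolean function mapping a tuple of bits to 0 or 1
    nbits         : number of keystream bits to produce
    """
    state = list(state)
    keystream = []
    for _ in range(nbits):
        keystream.append(f(tuple(state[i] for i in filter_taps)))
        fb = 0
        for t in feedback_taps:
            fb ^= state[t]
        state = state[1:] + [fb]          # shift register and insert the feedback bit
    return keystream


# Toy usage: a 5-bit LFSR with a (cryptographically weak) filtering function.
ks = filter_generator([1, 0, 1, 1, 0], [0, 2], [0, 1, 3],
                      lambda x: x[0] ^ (x[1] & x[2]), 16)
```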
Reducing the Key Size of the SRP Encryption Scheme - Extended version
Multivariate Public Key Cryptography (MPKC) is one of the main candidates for secure communication in a post-quantum era. Recently, Yasuda and Sakurai proposed in [24] a new multivariate encryption scheme called SRP, which offers efficient decryption and a small blow-up factor between plaintext and ciphertext, and resists all known attacks against multivariate schemes. However, similar to other MPKC schemes, the key sizes of SRP are quite large. In this paper we propose a technique to reduce the key size of the SRP scheme, which enables us to reduce the size of the public key by up to 54%. Furthermore, we can use the additional structure in the public key polynomials to speed up the encryption process of the scheme by up to 50%. We show by experiments that our modifications do not weaken the security of the scheme.
Faster elliptic-curve discrete logarithms on FPGAs
This paper accelerates FPGA computations of discrete logarithms
on elliptic curves over binary fields. As a toy example,
this paper successfully attacks the SECG standard curve sect113r2,
a binary elliptic curve that was not removed from the SECG standard until 2010 and was not disabled in OpenSSL until June 2015.
This is a new size record for completed ECDL computations,
using a prime order very slightly larger than the previous record holder. More importantly, this paper uses FPGAs much more efficiently,
saving a factor close to 3/2 in the size of each high-speed ECDL core. This paper squeezes 3 cores into a low-cost Spartan-6 FPGA
and many more cores into larger FPGAs. The paper also
benchmarks many smaller-size attacks to demonstrate reliability of the estimates, and covers a much larger curve over a 127-bit field to demonstrate scalability.
FHE Circuit Privacy Almost For Free
Circuit privacy is an important property for many applications of fully homomorphic encryption. Prior approaches for achieving circuit privacy rely on superpolynomial noise flooding or on bootstrapping. In this work, we present a conceptually different approach to circuit privacy based on a novel characterization of the noise distribution. In particular, we show that a variant of the GSW FHE for branching programs already achieves circuit privacy; this immediately yields a circuit-private FHE for NC$^1$ circuits under the standard LWE assumption with polynomial modulus-to-noise ratio. Our analysis relies on a variant of the discrete Gaussian leftover hash lemma which states that $e^t \mathbf{G}^{-1}(v) + \text{small noise}$ does not depend on $v$. We believe that this result is of independent interest.
Parallel Implementation of BDD enumeration for LWE
One of the most attractive problems for post-quantum secure cryptographic schemes is the LWE problem. Besides combinatorial and algebraic attacks, LWE can be solved by a lattice-based Bounded Distance Decoding (BDD) approach. We provide the first parallel implementation of an enumeration-based BDD algorithm that employs the Lindner-Peikert and Linear Length pruning strategies. We ran our algorithm on a large variety of LWE parameters, from which we derive the following interesting results. First, our parallel enumeration achieves almost perfect speed-up, which allows us to provide for the first time practical cryptanalytic results on standard LWE parameters of meaningful size. Second, we conclude that lattice-based attacks perform better than recent advanced BKW-type algorithms even for small noise, while requiring far fewer samples. Third, we experimentally show weaknesses for a binary matrix LWE proposal of Galbraith.
Two More Efficient Variants of the J-PAKE Protocol
Recently, the password-authenticated key exchange protocol J-PAKE of Hao and Ryan (Workshop on Security Protocols 2008) was formally proven secure in the algebraic adversary model by Abdalla et al. (IEEE S&P 2015). In this paper, we propose and examine two variants of J-PAKE - which we call RO-J-PAKE and CRS-J-PAKE - that each use two fewer zero-knowledge proofs than the original protocol. We show that they are provably secure following a similar strategy to that of Abdalla et al. We also study their efficiency compared to J-PAKE's, taking into account how the groups are chosen; namely, we treat the cases of subgroups of finite fields and of elliptic curves. Our work reveals that, for subgroups of finite fields, CRS-J-PAKE is indeed more efficient than J-PAKE, while RO-J-PAKE is much less efficient. On the other hand, when instantiated with elliptic curves, both RO-J-PAKE and CRS-J-PAKE are more efficient than J-PAKE, with CRS-J-PAKE being the best of the three. We illustrate this experimentally, making use of recent research by Brier et al. (CRYPTO 2010). Regardless of implementation, we note that RO-J-PAKE enjoys a looser security reduction than both J-PAKE and CRS-J-PAKE. CRS-J-PAKE has the tightest security proof, but relies on an additional trust assumption at setup time.
We believe our results can be useful to anyone interested in implementing J-PAKE, as perhaps either of these two new protocols may also be options, depending on the deployment context.
Using semidirect product of (semi)groups in public key cryptography
In this survey, we describe a general key exchange protocol based on semidirect product of (semi)groups (more specifically, on extensions of (semi)groups by automorphisms), and then focus on practical instances of this general idea. This protocol can be based on any group or semigroup, in particular on any non-commutative group. One of its special cases is the standard Diffie-Hellman protocol, which is based on a cyclic group. However, when this protocol is used with a non-commutative (semi)group, it acquires several useful features that make it compare favorably to the Diffie-Hellman protocol. The focus then shifts to selecting an optimal platform (semi)group, in terms of security and efficiency. We show, in particular, that one can get a variety of new security assumptions by varying an automorphism used for a (semi)group extension.
Differential Cryptanalysis of Salsa and ChaCha -- An Evaluation with a Hybrid Model
While \textsf{Salsa} and \textsf{ChaCha} are well-known software-oriented stream ciphers, there have been few significant results against them since the work of Aumasson et al. at FSE 2008. The basic model of their attack was to introduce differences in the IV bits, obtain biases after a few forward rounds, and to look at the Probabilistic Neutral Bits (PNBs) while reverting back. In this paper we first consider the biases in the forward rounds and estimate an upper bound on the number of rounds up to which such biases can be observed. For this, we propose a hybrid model (under certain assumptions), where initially the nonlinear rounds as proposed by the designer are considered, and then we employ their linearized counterpart. The effect of reverting the rounds with the idea of PNBs is also considered. Based on the assumptions and analysis, we conclude that 12 rounds of \textsf{Salsa} and \textsf{ChaCha} should be considered sufficient for 256-bit keys under the current best known attack models.
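As a concrete reference point for the objects discussed above, the standard ChaCha round function is reproduced below, together with a toy experiment measuring the bias of a single output-difference bit for a one-bit input difference after a few rounds. The experiment runs on fully random 16-word states rather than the proper constant/key/counter/nonce layout, and is only meant to illustrate the kind of forward-round bias the attack model relies on, not the paper's hybrid model.

```python
import random

MASK = 0xFFFFFFFF


def rotl(x, n):
    return ((x << n) | (x >> (32 - n))) & MASK


def quarter_round(s, a, b, c, d):
    """Standard ChaCha quarter-round, acting in place on the 16-word state s."""
    s[a] = (s[a] + s[b]) & MASK; s[d] = rotl(s[d] ^ s[a], 16)
    s[c] = (s[c] + s[d]) & MASK; s[b] = rotl(s[b] ^ s[c], 12)
    s[a] = (s[a] + s[b]) & MASK; s[d] = rotl(s[d] ^ s[a], 8)
    s[c] = (s[c] + s[d]) & MASK; s[b] = rotl(s[b] ^ s[c], 7)


def double_round(s):
    """One column round followed by one diagonal round (two ChaCha rounds)."""
    for a, b, c, d in [(0, 4, 8, 12), (1, 5, 9, 13), (2, 6, 10, 14), (3, 7, 11, 15),
                       (0, 5, 10, 15), (1, 6, 11, 12), (2, 7, 8, 13), (3, 4, 9, 14)]:
        quarter_round(s, a, b, c, d)


def bias(double_rounds, in_word, in_bit, out_word, out_bit, trials=20000):
    """Empirical bias of one output-difference bit for a single-bit input difference."""
    count = 0
    for _ in range(trials):
        s = [random.getrandbits(32) for _ in range(16)]
        t = list(s)
        t[in_word] ^= 1 << in_bit
        for _ in range(double_rounds):
            double_round(s)
            double_round(t)
        count += ((s[out_word] ^ t[out_word]) >> out_bit) & 1
    return count / trials - 0.5


# Example with arbitrarily chosen bit positions; after enough rounds the bias
# disappears into statistical noise.
print(bias(2, 13, 13, 1, 0))
```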
A Systematic Analysis of the Juniper Dual EC Incident
In December 2015, Juniper Networks announced that unknown attackers had added
unauthorized code to ScreenOS, the operating system for their NetScreen VPN routers. This code created two vulnerabilities: an authentication bypass that enabled remote administrative access, and a second vulnerability that allowed passive decryption of VPN traffic. Reverse engineering of ScreenOS binaries revealed that the first of these vulnerabilities was a conventional back door in the SSH password checker. The second is far more intriguing: a change to the Q parameter used by the Dual EC pseudorandom number generator. It is widely known that Dual EC has the unfortunate property that an attacker with the ability to choose Q can, from a small sample of the generator's output, predict all future outputs. In a 2013 public statement, Juniper noted the use of Dual EC but claimed that ScreenOS included countermeasures that neutralized this form of attack.
In this work, we report the results of a thorough independent analysis of the
ScreenOS randomness subsystem, as well as its interaction with the IKE VPN key
establishment protocol. Due to apparent flaws in the code, Juniper's countermeasures against a Dual EC attack are never executed. Moreover, by comparing sequential
versions of ScreenOS, we identify a cluster of additional changes that were introduced concurrently with the inclusion of Dual EC in a single 2008 release. Taken as a whole, these changes render the ScreenOS system vulnerable to passive exploitation by an attacker who selects Q. We demonstrate this by installing our own parameters, and showing that it is possible to passively decrypt a single IKE handshake and its associated VPN traffic in isolation without observing any other network traffic.
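For background on why control of Q is so powerful (this is the standard description of the Dual EC weakness, not specific to ScreenOS): with internal state $s_i$ and curve points $P$ and $Q$, the generator computes
\[
s_{i+1} = x(s_i \cdot P), \qquad r_i = \mathrm{trunc}\bigl(x(s_i \cdot Q)\bigr),
\]
where trunc drops a small number of leading bits of the $x$-coordinate. An attacker who knows $d$ with $P = d \cdot Q$ can take one output $r_i$, enumerate the small number of candidate points $R$ whose truncated $x$-coordinate equals $r_i$, and compute
\[
x(d \cdot R) = x(d\, s_i \cdot Q) = x(s_i \cdot P) = s_{i+1},
\]
thereby recovering the internal state and predicting all subsequent outputs.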
Can PPAD Hardness be Based on Standard Cryptographic Assumptions?
We consider the question of whether PPAD hardness can be based on standard cryptographic assumptions, such as the existence of one-way functions or public-key encryption. This question is particularly well-motivated in light of new devastating attacks on obfuscation candidates and their underlying building blocks, which are currently the only known source for PPAD hardness.
Central in the study of obfuscation-based PPAD hardness is the sink-of-verifiable-line (SVL) problem, an intermediate step in constructing instances of the PPAD-complete problem source-or-sink. Within the framework of black-box reductions we prove the following results:
-- Average-case PPAD hardness (and even SVL hardness) does not imply any form of cryptographic hardness (not even one-way functions). Moreover, even when assuming the existence of one-way functions, average-case PPAD hardness (and, again, even SVL hardness) does not imply any public-key primitive. Thus, strong cryptographic assumptions (such as obfuscation-related ones) are not essential for average-case PPAD hardness.
-- Average-case SVL hardness cannot be based either on standard cryptographic assumptions or on average-case PPAD hardness. In particular, average-case SVL hardness is not essential for average-case PPAD hardness.
-- Any attempt for basing the average-case hardness of the PPAD-complete problem source-or-sink on standard cryptographic assumptions must result in instances with a nearly-exponential number of solutions. This stands in striking contrast to the obfuscation-based approach, which results in instances having a unique solution.
Taken together, our results imply that it may still be possible to base PPAD hardness on standard cryptographic assumptions, but any such black-box attempt must significantly deviate from the obfuscation-based approach: It cannot go through the SVL problem, and it must result in source-or-sink instances with a nearly-exponential number of solutions.
Analysis of SHA-512/224 and SHA-512/256
In 2012, NIST standardized SHA-512/224 and SHA-512/256, two truncated variants of SHA-512, in FIPS 180-4. These two hash functions are faster than SHA-224 and SHA-256 on 64-bit platforms, while maintaining the same hash size and claimed security level. So far, no third-party analysis of SHA-512/224 or SHA-512/256 has been published. In this work, we examine the collision resistance of step-reduced versions of SHA-512/224 and SHA-512/256 by using differential cryptanalysis in combination with sophisticated search tools. We are able to generate practical examples of free-start collisions for 44-step SHA-512/224 and 43-step SHA-512/256. Thus, the truncation performed by these variants on their larger state allows us to attack several more rounds compared to the untruncated family members. In addition, we improve upon the best published collisions for 24-step SHA-512 and present practical collisions for 27 steps of SHA-512/224, SHA-512/256, and SHA-512.
Adaptive partitioning
We present a new strategy for partitioning proofs, and use it to obtain new tightly secure encryption schemes. Specifically, we provide the following two conceptual contributions:
- A new strategy for tight security reductions that leads to compact public keys and ciphertexts.
- A relaxed definition of non-interactive proof systems for non-linear (``OR-type'') languages. Our definition is strong enough to act as a central tool in our new strategy to obtain tight security, and is achievable both in pairing-friendly and DCR groups.
We apply these concepts in a generic construction of a tightly secure public-key encryption scheme. When instantiated in different concrete
settings, we obtain the following:
- A public-key encryption scheme whose chosen-ciphertext security can be tightly reduced to the DLIN assumption in a pairing-friendly group.
Ciphertexts, public keys, and system parameters contain 6, 24, and 2 group elements, respectively. This improves heavily upon a recent scheme of Gay et al. (Eurocrypt 2016) in terms of public key size, at the cost of using a symmetric pairing.
- The first public-key encryption scheme that is tightly chosen-ciphertext secure under the DCR assumption. While the scheme is not very practical (ciphertexts carry 29 group elements), it enjoys constant-size parameters, public keys, and ciphertexts.
NIZKs with an Untrusted CRS: Security in the Face of Parameter Subversion
Motivated by the subversion of ``trusted'' public parameters in mass-surveillance activities, this paper studies the security of NIZKs in the presence of a maliciously chosen common reference string. We provide definitions for subversion soundness, subversion witness indistinguishability and subversion zero knowledge. We then provide both negative and positive results, showing that certain combinations of goals are unachievable but giving protocols to achieve other combinations.
A Cryptographic Analysis of UMTS/LTE AKA
Secure communications between mobile subscribers and their associated operator networks require mutual authentication and key derivation protocols. The 3GPP standard provides the AKA protocol for just this purpose. Its structure is generic, to be instantiated with a set of seven cryptographic algorithms. The currently-used proposal instantiates these by means of a set of AES-based algorithms called MILENAGE; as an alternative, the ETSI SAGE committee submitted the TUAK algorithms, which rely on a truncation of the internal permutation of Keccak.
In this paper, we provide a formal security analysis of the AKA protocol in its complete three-party setting. We formulate requirements with respect to both Man-in-the-Middle (MiM) adversaries, i.e. key-indistinguishability and impersonation security, and to local untrusted serving networks, denoted “servers”, namely state-confidentiality and soundness. We prove that the unmodified AKA protocol attains these properties as long as servers cannot be corrupted. Furthermore, adding a unique server identifier suffices to guarantee all the security statements even in the presence of corrupted servers. We use a modular proof approach: the first step is to prove the security of (modified and unmodified) AKA with generic cryptographic algorithms that can be represented as a unitary pseudorandom function –PRF– keyed either with the client’s secret key or with the operator key. A second step proceeds to show that TUAK and MILENAGE guarantee this type of pseudorandomness, though the guarantee for MILENAGE requires a stronger assumption. Our paper provides (to our knowledge) the first complete, rigorous analysis of the original AKA protocol and these two instantiations. We stress that such an analysis is important for any protocol deployed in real-life scenarios.
Malleability of the blockchain’s entropy
Trustworthy generation of public random numbers is necessary for the security of many cryptographic applications. It has been suggested to use the inherent unpredictability of blockchains as a source of public randomness. Entropy from the Bitcoin blockchain in particular has been used in lotteries and has been suggested for a number of other applications ranging from smart contracts to election auditing. In this article, we analyse this idea and show how an adversary could manipulate these random numbers, even with limited computational power and financial budget.
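To make the manipulation concrete, here is a deliberately simple back-of-the-envelope model (our own toy model, not the paper's analysis): a miner holding a fraction $\alpha$ of the hash power discards any block it finds whose derived beacon bit is unfavourable, paying one block reward per discarded block.

```python
def withholding_bias(alpha: float, reward: float):
    """Toy model of beacon manipulation by block withholding.

    alpha  : adversary's fraction of the total hash power
    reward : value of one discarded block (cost per 're-roll')

    The beacon bit is modelled as the uniform parity of the winning block's
    hash.  Each time a block is found, the adversary found it with probability
    alpha; if its bit is unfavourable the adversary discards it and the race
    continues.  Solving p = alpha/2 + (alpha/2)*p + (1 - alpha)/2 gives the
    success probability, and the number of discarded blocks is geometric.
    """
    p_favourable = 1.0 / (2.0 - alpha)          # an honest beacon gives exactly 1/2
    expected_discards = alpha / (2.0 - alpha)
    return p_favourable, expected_discards, expected_discards * reward


# Example: 10% of the hash power, block reward worth 12.5 units:
# success probability ~0.526 instead of 0.5, at an expected cost of ~0.66 units.
print(withholding_bias(0.10, 12.5))
```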
Efficient Multi-Point Local Decoding of Reed-Muller Codes via Interleaved Codex
Reed-Muller codes are among the most important classes of locally correctable codes. Currently, local decoding of Reed-Muller codes is based on decoding on lines or quadratic curves to recover one single coordinate. To recover multiple coordinates simultaneously, the naive way is to repeat the local decoding for recovery of a single coordinate. This decoding algorithm might be more expensive, i.e., require higher query complexity.
In this paper, we focus on Reed-Muller codes in the usual parameter regime, namely, the total degree of evaluation polynomials is $d=\Theta({q})$, where $q$ is the code alphabet size (in fact, $d$ can be as big as $q/4$ in our setting).
By introducing a novel variation of codex, i.e., interleaved codex (the concept of codex has been used for arithmetic secret sharing \cite{C11,CCX12}), we are able to locally recover an arbitrarily large number $k$ of coordinates of a Reed-Muller code simultaneously, with error probability $\exp(-\Omega(k))$, at the cost of querying merely $O(q^2k)$ coordinates. It turns out that our local decoding of Reed-Muller codes shows (perhaps surprisingly) that accessing $k$ locations is in fact cheaper than repeating the procedure for accessing a single location $k$ times. Precisely speaking,
to get the same success probability by repeating the local decoding algorithm of a single coordinate, one has to query $\Omega(qk^2)$ coordinates. Thus, the query complexity of our local decoding is smaller for $k=\Omega(q)$. If we impose the same query complexity constraint on both algorithms, our local decoding algorithm yields a smaller error probability when $k=\Omega(q^q)$.
In addition, our local decoding is efficient, i.e., the decoding complexity is $\mathrm{poly}(k,q)$. Construction of an interleaved codex is based on concatenation of a codex with a multiplication friendly pair, while the main tool to realize codex is based on algebraic function fields (or more precisely, algebraic geometry codes).
Foundations of Fully Dynamic Group Signatures
Group signatures allow members of a group to anonymously sign on behalf of the group. Membership is administered by a designated group manager. The group manager can also reveal the identity of a signer if and when needed to enforce accountability and deter abuse.
For group signatures to be applicable in practice, they need to support fully dynamic groups, i.e., users may join and leave at any time.
Existing security definitions for fully dynamic group signatures are informal, have shortcomings, and are mutually incompatible. We fill the gap by providing a formal, rigorous security model for fully dynamic group signatures. Our model is general and not tailored towards a specific design paradigm; it can therefore, as we show, be used to argue about the security of different existing constructions following different design paradigms. Our definitions are stringent and, when possible, incorporate protection against maliciously chosen keys. We consider both the case where the group management and the tracing of signatures are administered by the same authority, i.e. a single group manager, and the case where those roles are administered by two separate authorities, i.e. a group manager and an opening authority. We also show that a specialization of our model captures existing models for static and partially dynamic schemes.
In the process, we identify a subtle gap in the security achieved by group signatures using revocation lists. We show that in such schemes new members achieve a slightly weaker notion of traceability. The flexibility of our security model allows us to capture such a relaxation of traceability.
An Analysis of OpenSSL's Random Number Generator
In this work we demonstrate various weaknesses of the random number generator (RNG)
in the OpenSSL cryptographic library.
We show how OpenSSL's RNG, when knowingly in a low-entropy state, can leak low-entropy secrets in its output that were never intentionally fed to the RNG by client code, thus posing vulnerabilities even when the client application respects the low-entropy state in the given usage scenario.
Turning to the core cryptographic functionality of the RNG, we show how OpenSSL's functionality for adding entropy to the RNG state fails to act as an effective mixing function. If an initial low-entropy state of the RNG is falsely presumed to have 256 bits of entropy based on wrong entropy estimates, attempts to recover from this state succeed only in the long term but fail in the short term.
As a result, the entropy level of generated cryptographic keys can be limited to
80 bits, even though thousands of bits of entropy might have been fed to the RNG state
previously. In the same scenario, we demonstrate an attack recovering the RNG
state from later output with an off-line effort between $2^{82}$ and $2^{84}$ hash evaluations,
for seeds with an entropy level $n$ above 160 bits. We also show
that seed data with an entropy of $160$ bits, fed into the RNG, under certain
circumstances, might be recovered from
its output with an effort of $2^{82}$ hash evaluations.
These results are highly relevant for embedded systems that fail to provide sufficient
entropy through their operating system RNG at boot time and rely on subsequent reseeding of
the OpenSSL RNG.
Furthermore, we identify a
design flaw that limits the entropy of the RNG's output to 240 bits in the general case
even for an initially correctly seeded RNG, despite the fact that a security level of
256 bits is intended.
\(\mu\)Kummer: efficient hyperelliptic signatures and key exchange on microcontrollers
We describe the design and implementation of efficient signature and key-exchange schemes for the AVR~ATmega and ARM Cortex~M0 microcontrollers, targeting the 128-bit security level. Our algorithms are based on an efficient Montgomery ladder scalar multiplication on the Kummer surface of Gaudry and Schost's genus-2 hyperelliptic curve, combined with the Jacobian point recovery technique of Chung, Costello, and Smith. Our results are the first to show the feasibility of software-only hyperelliptic cryptography on constrained platforms, and represent a significant improvement on the elliptic-curve state-of-the-art for both key exchange and signatures on these architectures. Notably, our key-exchange scalar-multiplication software runs in under 9520k cycles on the ATmega and under 2640k cycles on the Cortex M0, improving on the current speed records by 32% and 75% respectively.
Fast Modular Arithmetic on the Kalray MPPA-256 Processor for an Energy-Efficient Implementation of ECM
The Kalray MPPA-256 processor is based on a recent low-energy manycore architecture. In this article, we investigate its performance in multiprecision arithmetic for number-theoretic applications. We have developed a library for modular arithmetic that takes full advantage of the particularities of this architecture. This is in turn used in an implementation of the ECM, an algorithm for integer factorization using elliptic curves. For parameters corresponding to a cryptanalytic context, our implementation compares well to state-of-the-art implementations on GPU, while using much less energy.
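For readers unfamiliar with ECM, the following unoptimized Python sketch shows the principle of stage 1 in affine short-Weierstrass form: a factor of $N$ appears exactly when a point addition needs to invert a denominator sharing a factor with $N$. Real implementations, including the one benchmarked in this paper, use projective or Montgomery-form arithmetic and proper prime-power bounds; none of the constants below are taken from the paper.

```python
import math
import random


class FactorFound(Exception):
    """Raised when a failed modular inversion reveals a factor."""
    def __init__(self, g):
        self.g = g


def ecm_stage1(N, B1=1000, curves=30):
    """Toy stage 1 of Lenstra's ECM over y^2 = x^3 + a*x + b (mod N)."""

    def add(P, Q, a):
        """Affine point addition modulo N; raises FactorFound on a lucky gcd."""
        if P is None:
            return Q
        if Q is None:
            return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % N == 0:
            return None                                    # point at infinity
        num, den = ((3 * x1 * x1 + a), 2 * y1) if P == Q else ((y2 - y1), (x2 - x1))
        g = math.gcd(den % N, N)
        if 1 < g < N:
            raise FactorFound(g)                           # the whole point of ECM
        if g == N:
            return None                                    # degenerate case; give up on this point
        lam = num * pow(den, -1, N) % N
        x3 = (lam * lam - x1 - x2) % N
        return (x3, (lam * (x1 - x3) - y1) % N)

    for _ in range(curves):
        x, y, a = (random.randrange(N) for _ in range(3))  # random curve through (x, y)
        P = (x, y)
        try:
            for k in range(2, B1):                         # crude stand-in for prime powers <= B1
                Q, R = None, P
                while k:
                    if k & 1:
                        Q = add(Q, R, a)
                    R = add(R, R, a)
                    k >>= 1
                if Q is None:                              # k*P hit infinity without a factor
                    break
                P = Q
        except FactorFound as e:
            return e.g
    return None


# Example: factor a small semiprime (may take a few seconds).
print(ecm_stage1(2003 * 3019))
```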
Cryptographic Analysis of the 3GPP AKA Protocol
Secure communications between mobile subscribers and their associated operator networks require mutual authentication and key derivation protocols. The 3GPP standard provides the AKA protocol for just this purpose. Its structure is generic, to be instantiated with a set of seven cryptographic algorithms. The currently-used proposal instantiates these by means of a set of AES-based algorithms called Milenage; as an alternative, the ETSI SAGE committee submitted the TUAK algorithms, which rely on a truncation of the internal permutation of Keccak.
In this paper, we provide a formal security analysis of the AKA protocol in its complete three-party setting. We formulate requirements with respect to both Man-in-the-Middle (MiM) adversaries, i.e. key-indistinguishability and impersonation security, and to local untrusted serving networks, denoted "servers", namely state-confidentiality and soundness. We prove that the unmodified AKA protocol attains these properties as long as servers cannot be corrupted. Furthermore, adding a unique server identifier suffices to guarantee all the security statements even in the presence of corrupted servers. We use a modular proof approach: the first step is to prove the security of (modified and unmodified) AKA with generic cryptographic algorithms that can be represented as a unitary pseudorandom function --PRF-- keyed either with the client's secret key or with the operator key. A second step proceeds to show that TUAK and Milenage guarantee this type of pseudorandomness, though the guarantee for Milenage requires a stronger assumption. Our paper provides (to our knowledge) the first complete, rigorous analysis of the original AKA protocol and these two instantiations. We stress that such an analysis is important for any protocol deployed in real-life scenarios.
Legally Fair Contract Signing Without Keystones
In two-party computation, achieving both fairness and guaranteed
output delivery is well known to be impossible. Despite this limitation,
many approaches provide solutions of practical interest by weakening
somewhat the fairness requirement. Such approaches fall roughly in three
categories: “gradual release” schemes assume that the aggrieved party
can eventually reconstruct the missing information; “optimistic schemes”
assume a trusted third party arbitrator that can restore fairness in case of
litigation; and “concurrent” or “legally fair” schemes in which a breach of
fairness is compensated by the aggrieved party having a digitally signed
cheque from the other party (called the keystone).
In this paper we describe and analyse a new contract signing paradigm
that doesn’t require keystones to achieve legal fairness, and give a concrete
construction based on Schnorr signatures which is compatible with
standard Schnorr signatures and provably secure.
An Empirical Study towards Refining the AKS Primality Testing Algorithm
The AKS (Agrawal-Kayal-Saxena) algorithm is the first deterministic polynomial-time primality-proving algorithm; its asymptotic run time complexity is $O(\log^{12+\epsilon} n)$, where $\epsilon > 0$. Despite this theoretical breakthrough, the algorithm serves no practical use in conventional cryptologic applications, as existing probabilistic primality tests like ECPP, in conjunction with conditional usage of sub-exponential-time deterministic tests, are found to have better practical running time. Later, the authors of the AKS test improved the algorithm so that it runs in $O(\log^{10.5+\epsilon} n)$ time. A variant of the AKS test was demonstrated by Carl Pomerance and H. W. Lenstra, which runs in almost half the number of operations required in AKS; this algorithm is also impractical. Attempts were made to implement the AKS algorithm efficiently, but apart from slight performance improvements targeting specific machine architectures, these attempts mainly highlighted the limitations of the algorithm. In this paper we present our analysis and observations on the AKS algorithm based on empirical results and statistics of certain parameters which control the asymptotic running time of the algorithm. From this analysis we refine AKS so that it runs in $O(\log^{4+\epsilon} n)$ time.
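To ground the discussion, here is a compact, unoptimized Python rendering of the textbook AKS test (the original algorithm's structure: perfect-power check, search for a suitable $r$, trial gcds, and the polynomial congruences $(X+a)^n \equiv X^n + a \pmod{X^r-1,\, n}$). It illustrates where the work goes, namely the size of $r$ and the number of congruences, but it is not the refined variant proposed in this paper.

```python
from math import gcd, isqrt, log2


def is_perfect_power(n: int) -> bool:
    """True if n = a^b for integers a, b >= 2 (float-based; adequate for small n)."""
    for b in range(2, n.bit_length() + 1):
        a = round(n ** (1.0 / b))
        if a >= 2 and any((a + d) ** b == n for d in (-1, 0, 1)):
            return True
    return False


def poly_pow(base, e, r, n):
    """(base(X))^e modulo (X^r - 1, n); polynomials as coefficient lists of length r."""
    def mul(p, q):
        out = [0] * r
        for i, pi in enumerate(p):
            if pi:
                for j, qj in enumerate(q):
                    out[(i + j) % r] = (out[(i + j) % r] + pi * qj) % n
        return out
    acc = [1] + [0] * (r - 1)
    while e:
        if e & 1:
            acc = mul(acc, base)
        base = mul(base, base)
        e >>= 1
    return acc


def aks_is_prime(n: int) -> bool:
    """Textbook AKS primality test, kept deliberately simple."""
    if n < 2 or is_perfect_power(n):
        return False
    max_k = int(log2(n) ** 2)
    r = 2                                     # smallest r with ord_r(n) > log2(n)^2
    while True:
        if gcd(r, n) == 1:
            x, ok = 1, True
            for _ in range(max_k):
                x = (x * n) % r
                if x == 1:
                    ok = False
                    break
            if ok:
                break
        r += 1
    for a in range(2, min(r, n - 1) + 1):     # any small factor up to r?
        if 1 < gcd(a, n) < n:
            return False
    if n <= r:
        return True
    phi_r = sum(1 for a in range(1, r) if gcd(a, r) == 1)
    bound = isqrt(phi_r) * int(log2(n)) + 1
    for a in range(1, bound + 1):             # check (X + a)^n == X^n + a mod (X^r - 1, n)
        lhs = poly_pow([a % n, 1] + [0] * (r - 2), n, r, n)
        rhs = [0] * r
        rhs[n % r] = 1
        rhs[0] = (rhs[0] + a) % n
        if lhs != rhs:
            return False
    return True


print(aks_is_prime(97), aks_is_prime(91))     # True False
```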
Functional Encryption for Bounded Collusions, Revisited
We provide a new construction of functional encryption (FE) for circuits in the bounded collusion model. In this model, security of the scheme is guaranteed as long as the number of colluding adversaries can be a-priori bounded by some polynomial q. Our construction supports arithmetic circuits, as opposed to Boolean circuits, which have been the focus of all prior work. The ciphertext of our scheme is sublinear in the circuit size for the circuit class NC$^1$ when based on Ring-LWE, and for circuits of any constant depth when based on standard LWE. This gives the first constructions of arithmetic reusable garbled circuits.
Additionally, our construction achieves several desirable features:
• Our construction for reusable garbled circuits for NC$^1$ achieves the optimal “full” simulation-based security.
• When generalised to handle Q queries for any fixed polynomial Q, our ciphertext size grows additively with $Q^2$. Such query dependence on ciphertext size has otherwise only been achieved in a weaker security game [Agr17].
• The ciphertext of our scheme can be divided into a succinct data dependent component and a non-succinct data independent component. This makes it well suited for optimization in an online-offline model that allows a majority of the computation to be performed in an offline phase, before the data becomes available.
Security of our scheme may be based on the Learning With Errors assumption (LWE) or its ring variant (Ring-LWE). To achieve our result, we provide new public key and ciphertext evaluation algorithms in the context of functional encryption. These algorithms are general, and may find application elsewhere.
Another Look at Tightness II: Practical Issues in Cryptography
How to deal with large tightness gaps in security proofs is a vexing issue in cryptography. Even when analyzing protocols that are of practical importance, leading researchers often fail to treat this question with the seriousness that it deserves. We discuss nontightness in connection with complexity leveraging, HMAC, lattice-based cryptography, identity-based encryption, and hybrid encryption.
Less is More - Dimensionality Reduction from a Theoretical Perspective
Reducing the dimensionality of the measurements is an important problem in side-channel analysis. It makes it possible to capture multi-dimensional leakage as a single compressed sample, and therefore also helps to reduce the computational complexity. The other side of the coin with dimensionality reduction is that it may at the same time reduce the efficiency of the attack, in terms of success probability.
In this paper, we carry out a mathematical analysis of dimensionality reduction. We show that optimal attacks remain optimal after a first pass of preprocessing, which takes the form of a linear projection of the samples. We then investigate the state-of-the-art dimensionality reduction techniques, and find that asymptotically, the optimal strategy coincides with the linear discriminant analysis.
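As a small, self-contained illustration of the kind of linear projection the analysis ends up recommending (the synthetic traces and class labels below are made up for the example; this is not the paper's evaluation):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic example: 5000 "traces" of 200 time samples each, labelled by a
# sensitive intermediate value (here, nine Hamming-weight classes).
rng = np.random.default_rng(0)
labels = rng.integers(0, 9, size=5000)
leakage = np.outer(labels, np.linspace(0.0, 1.0, 200))        # deterministic leakage
traces = leakage + rng.normal(scale=2.0, size=(5000, 200))    # plus Gaussian noise

# Linear discriminant analysis compresses every 200-sample trace into a few
# discriminative components before the actual key-recovery attack is run.
lda = LinearDiscriminantAnalysis(n_components=3)
compressed = lda.fit_transform(traces, labels)
print(compressed.shape)                                        # (5000, 3)
```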
The Ring of Gyges: Investigating the Future of Criminal Smart Contracts
Thanks to their anonymity (pseudonymity) and elimination of trusted intermediaries, cryptocurrencies such as Bitcoin have created or stimulated growth in many businesses and communities. Unfortunately, some of these are criminal, e.g., money laundering, illicit marketplaces, and ransomware.
Next-generation cryptocurrencies such as Ethereum will include rich scripting languages in support of {\em smart contracts}, programs that autonomously intermediate transactions. In this paper, we explore the risk of smart contracts fueling new criminal ecosystems. Specifically, we show how what we call {\em criminal smart contracts} (CSCs) can facilitate leakage of confidential information, theft of cryptographic keys, and various real-world crimes (murder, arson, terrorism).
We show that CSCs for leakage of secrets (à la Wikileaks) are efficiently realizable in existing scripting languages such as that in Ethereum. We show that CSCs for theft of cryptographic keys can be achieved using primitives, such as Succinct Non-interactive ARguments of Knowledge (SNARKs), that are already expressible in these languages and for which efficient supporting language extensions are anticipated. We show similarly that authenticated data feeds, an emerging feature of smart contract systems, can facilitate CSCs for real-world crimes (e.g., property crimes).
Our results highlight the urgency of creating policy and technical safeguards against CSCs in order to realize the promise of smart contracts for beneficial goals.
State Management for Hash-Based Signatures
The unavoidable transition to post-quantum cryptography
requires dependable quantum-safe digital signature schemes. Hash-based
signatures are well-understood and promising candidates, and the object
of current standardization efforts. In the scope of this standardization
process, the most commonly raised concern is statefulness, due to the use
of one-time signature schemes. While the theory of hash-based signatures
is mature, a discussion of the system security issues arising from the
concrete management of their state has been lacking. In this paper,
we analyze state management in N-time hash-based signature schemes,
considering both security and performance, and categorize the security
issues that can occur due to state synchronization failures. We describe
a state reservation approach that loosens the coupling between volatile
and nonvolatile storage, and show that it can be naturally realized in a
hierarchical signature scheme. To protect against unintentional copying
of the private key state, we consider a hybrid stateless/stateful scheme,
which provides a graceful security degradation in the face of unintentional
copying, at the cost of increased signature size. Compared to a completely
stateless scheme, the hybrid approach realizes the essential benefits, with
smaller signatures and faster signing.
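A minimal sketch of the state-reservation idea described above, with an invented persistence interface (`nvm.read`/`nvm.write`) and batch size; the point is only that nonvolatile storage is updated once per batch, and that after a crash signing resumes at the reserved bound, so indices may be skipped but are never reused.

```python
class ReservedStateSigner:
    """Sketch of state reservation for an N-time hash-based signature key."""

    def __init__(self, nvm, reserve: int = 64):
        self.nvm = nvm                       # persistent storage with .read() / .write(int)
        self.reserve = reserve
        self.next_index = nvm.read()         # first index that is guaranteed unused
        self.reserved_until = self.next_index

    def _next_ots_index(self) -> int:
        if self.next_index >= self.reserved_until:
            self.reserved_until = self.next_index + self.reserve
            self.nvm.write(self.reserved_until)   # slow path: one nonvolatile write per batch
        idx = self.next_index
        self.next_index += 1                      # fast path: volatile memory only
        return idx

    def sign(self, message: bytes, ots_sign):
        idx = self._next_ots_index()
        return idx, ots_sign(idx, message)        # one-time signature with key number idx
```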
More Efficient Constructions for Inner-Product Encryption
We propose new constructions for inner product encryption -- $\mathcal{IPE}_1$ and $\mathcal{IPE}_2$, both secure under the eXternal Diffie-Hellman assumption (SXDH) in asymmetric pairing groups.
The first scheme has constant-size ciphertexts whereas the second one is weakly attribute hiding. $\mathcal{IPE}_2$ is derived from the identity-based encryption scheme of Jutla and Roy (Asiacrypt 2013),
which was extended from tag-based quasi-adaptive non-interactive zero-knowledge (QA-NIZK) proofs for linear subspaces of vector spaces over bilinear groups. The verifier common reference string (CRS) in these tag-based systems is split into two parts, which are combined during verification. We consider an alternate form of the tag-based QA-NIZK proof with a single verifier CRS that already includes a tag, different from the one defining the language. The verification succeeds as long as the two tags are unequal. Essentially, we embed a two-equation revocation mechanism in the verification.
The new QA-NIZK proof system leads to $\mathcal{IPE}_1$, a constant-sized ciphertext IPE scheme with very short ciphertexts.
Both the IPE schemes are obtained by applying the $n$-equation revocation technique of Attrapadung and Libert (PKC 2010) to the corresponding identity based encryption schemes and proved
secure under SXDH assumption. As an application, we show how our schemes can be specialised to obtain the first fully secure identity-based broadcast encryption based on SXDH with a trade-off
among the public parameters, ciphertext and key sizes, all of them being sub-linear in the maximum number of recipients of a broadcast.
Fruit-v2: Ultra-Lightweight Stream Cipher with Shorter Internal State
A few lightweight stream ciphers were introduced for hardware applications in the eSTREAM project. At FSE 2015, Sprout was proposed, presenting a new design idea: stream ciphers with a shorter internal state that use the secret key not only in the initialization but also in the keystream generation. Unfortunately, Sprout is insecure. Because Grain-v1 is the lightest cipher in the portfolio of the eSTREAM project, we introduce Fruit-v2 as a successor of Grain-v1 and Sprout. We demonstrate that Fruit-v2 is safe and ultra-lightweight. The size of the LFSR and NFSR in Fruit-v2 is only 80 bits (for an 80-bit security level), whereas resistance to the classical time-memory-data trade-off attack normally requires the internal state size to be at least twice the security level. To satisfy this rule and to design a concrete cipher, we used some new design ideas. We argue that Fruit-v2 can be more resistant than Grain-v1 to some attacks, such as the classical time-memory-data trade-off. The main objective of this work is to show how a secret key can be exploited in a design to achieve a smaller area. Many stream ciphers could be redesigned with this idea to achieve a significantly smaller area.
A Pairing-Free, One Round Identity Based Authenticated Key Exchange Protocol Secure Against Memory-Scrapers
Security of a key exchange protocol is formally established through an abstract game between a challenger and an adversary. In this game the adversary can obtain various pieces of information, modeled by giving the adversary access to appropriate oracle queries. Empowered with all this information, the adversary tries to break the protocol. This is modeled by a test query that asks the adversary to distinguish the session key of a fresh session from a random session key; guessing correctly lets the adversary win the game. In this traditional model of security the adversary sees nothing apart from the input/output behaviour of the algorithms. However, in the recent past an adversary could obtain additional information beyond what these black-box models of computation allow, thanks to the availability of powerful malware. This data exfiltration due to attacks by Memory-Scraper/RAM-Scraper-type malware is an emerging threat. In order to realistically capture these advanced classes of threats posed by such malware, we propose a new security model for identity-based authenticated key exchange (ID-AKE) which we call the Identity-based Strong Extended Canetti-Krawczyk (ID-seCK) model. Our security model captures leakage of intermediate values through appropriate oracle queries given to the adversary. Following this, we propose a round-optimal (i.e., single round) ID-AKE protocol for the two-party setting. Our design assumes a hybrid system equipped with a bare minimal Trusted Platform Module (TPM) that can only perform group exponentiations. One of the major advantages of our construction is that it does not involve any pairing operations, works in prime order groups, and has a tight security reduction to the Gap Diffie-Hellman (GDH) problem under our new ID-seCK model. Our scheme also has the capability to handle active adversaries, while most of the previous ID-AKE protocols are secure only against passive adversaries. The security of our protocol is proved in the Random Oracle (RO) model.
General Bounds for Small Inverse Problems and Its Applications to Multi-Prime RSA
In 1999, Boneh and Durfee introduced the {\em small inverse problem}, which solves the bivariate modular equation $x(N+y)\equiv 1 \pmod{e}$. The absolute values of the solutions $x$ and $y$ are bounded above by $X=N^{\delta}$ and $Y=N^{\beta}$, respectively. They solved the problem for $\beta=1/2$ in the context of small secret exponent attacks on RSA and proposed a polynomial time algorithm that works when $\delta<(7-2\sqrt{7})/6\approx 0.284$. In the same work, the bound was further improved to $\delta<1-1/\sqrt{2}\approx 0.292$. Thus far, the small inverse problem has also been analyzed for an arbitrary $\beta$. Generalizations of Boneh and Durfee's lattices to obtain the stronger bound yielded the bound $\delta<1-\sqrt{\beta}$. However, the algorithm works only when $\beta\geq 1/4$. When $0<\beta<1/4$, there have been several works whose authors claimed their results are the best.
In this paper, we revisit the problem for an arbitrary $\beta$. At first, we summarize the previous results for $0<\beta<1/4$. We reveal that some of these results are not valid and show that Weger's algorithms provide the best bounds. Next, we propose an improved algorithm to solve the problem for $0<\beta<1/4$. Our algorithm works when $\delta<1-2(\sqrt{\beta(3+4\beta)}-\beta)/3$. Our algorithm construction is based on the combination of Boneh and Durfee's two forms of lattices and is more natural compared with previous works. As a cryptographic application, we introduce small secret exponent attacks on Multi-Prime RSA with small prime differences.
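For orientation, the original $\beta=1/2$ setting arises from small secret exponent RSA as follows (the standard Boneh-Durfee derivation, stated up to the sign convention of the equation above): from $ed \equiv 1 \pmod{\varphi(N)}$ and $\varphi(N)=N+1-p-q$ there is an integer $k$ with
\[
ed = 1 + k\,(N + 1 - p - q) \;\Longrightarrow\; k\,(N + y) \equiv -1 \pmod{e}, \qquad y = 1 - p - q,
\]
so taking $x=k$ gives the bivariate equation above, with $|x|<N^{\delta}$ for $d\approx N^{\delta}$ and $e\approx N$, and $|y|\approx p+q\approx N^{1/2}$, i.e. $\beta=1/2$.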
Closing the Gap in RFC 7748: Implementing Curve448 in Hardware
Uncategorized
Uncategorized
With the evidence of compromised cryptographic standards in the context of elliptic curves, the IETF TLS working group has issued a request to the IETF Crypto Forum Research Group (CFRG) to recommend new elliptic curves that leave no doubt regarding their rigidity or potential backdoors. This initiative has recently published RFC 7748, proposing two elliptic curves, known as Curve25519 and Curve448, for use with the next generation of TLS. This choice of elliptic curves was already picked up by the IETF working group curdle for adoption in further security protocols, such as DNSSEC. Hence it can be expected that these two curves will become predominant in the Internet and will form one basis for future secure communication.
Unfortunately, both curves were solely designed and optimized for pure
software implementation; their implementation in hardware or their physical protection against side-channel attacks were not considered at any time. However, for Curve25519 it has been shown recently that efficient implementations in hardware along with side-channel protection are possible. In this work we aim to close this gap and demonstrate that, fortunately, the second curve can be efficiently implemented in hardware as well. More precisely, we demonstrate that the high-security Curve448 can be implemented on a Xilinx XC7Z7020 at moderate cost of just 963 logic slices and 30 DSP slices, and performs a scalar multiplication in 2.5 ms.
How (Not) to Instantiate Ring-LWE
The \emph{learning with errors over rings} (Ring-LWE) problem---or
more accurately, family of problems---has emerged as a promising
foundation for cryptography due to its practical efficiency,
conjectured quantum resistance, and provable \emph{worst-case
hardness}: breaking certain instantiations of Ring-LWE is at least
as hard as quantumly approximating the Shortest Vector Problem on
\emph{any} ideal lattice in the ring.
Despite this hardness guarantee, several recent works have shown that
certain instantiations of Ring-LWE can be broken by relatively simple
attacks. While the affected instantiations are not supported by
worst-case hardness theorems (and were not ever proposed for
cryptographic purposes), this state of affairs raises natural
questions about what other instantiations might be vulnerable, and in
particular whether certain classes of rings are inherently unsafe for
Ring-LWE.
This work comprehensively reviews the known attacks on Ring-LWE and
vulnerable instantiations. We give a new, unified exposition which
reveals an elementary geometric reason why the attacks work, and
provide rigorous analysis to explain certain phenomena that were
previously only exhibited by experiments. In all cases, the
insecurity of an instantiation is due to the fact that the error
distribution is insufficiently ``well spread'' relative to the ring.
In particular, the insecure instantiations use the so-called
\emph{non-dual} form of Ring-LWE, together with \emph{spherical} error
distributions that are much narrower and of a very different shape
than the ones supported by hardness proofs.
On the positive side, we show that any Ring-LWE instantiation which
satisfies (or only almost satisfies) the hypotheses of the
``worst-case hardness of search'' theorem is \emph{provably immune} to
broad generalizations of the above-described attacks: the running time
divided by advantage is at least exponential in the degree of the
ring. This holds for the ring of integers in \emph{any} number field,
so the rings themselves are not the source of insecurity in the
vulnerable instantiations. Moreover, the hypotheses of the worst-case
hardness theorem are \emph{nearly minimal} ones which provide these
immunity guarantees.
Probabilistic Termination and Composability of Cryptographic Protocols
When analyzing the round complexity of multi-party protocols, one often overlooks the fact that underlying resources, such as a broadcast channel, can by themselves be expensive to implement. For example, it is well known that it is impossible to implement a broadcast channel by a (deterministic) protocol in a sub-linear (in the number of corrupted parties) number of rounds.
The seminal works of Rabin and Ben-Or from the early 80's demonstrated that limitations as the above can be overcome by using randomization and allowing parties to terminate at different rounds, igniting the study of protocols over point-to-point channels with probabilistic termination and expected constant round complexity. However, absent a rigorous simulation-based definition, the suggested protocols are proven secure in a property-based manner or via ad hoc simulation-based frameworks, therefore guaranteeing limited, if any, composability.
In this work, we put forth the first simulation-based treatment of multi-party cryptographic protocols with probabilistic termination. We define secure multi-party computation (MPC) with probabilistic termination in the UC framework and prove a universal composition theorem for probabilistic-termination protocols. Our theorem allows compiling a protocol that uses deterministic-termination hybrids into one that uses expected-constant-round protocols for emulating these hybrids, while preserving the expected round complexity of the calling protocol.
We showcase our definitions and compiler by providing the first composable protocols (with simulation-based security proofs) for the following primitives, relying on point-to-point channels: (1) expected-constant-round perfect Byzantine agreement, (2) expected-constant-round perfect parallel broadcast, and (3) perfectly secure MPC with round complexity independent of the number of parties.
Note on Impossible Differential Attacks
While impossible differential cryptanalysis is a well-known and popular cryptanalytic method, errors in the analysis are often discovered and many papers in the literature present flaws. Aiming to remedy this, Boura \textit{et al.} presented at ASIACRYPT'14 a generic vision of impossible differential attacks with the aim of simplifying and helping the construction and verification of this type of cryptanalysis. In particular, they gave generic complexity analysis formulas for mounting such attacks and developed new ideas for optimizing them.
In this paper we carefully study this generic formula and show impossible differential attacks for which the real time complexity is much higher than estimated by it. In particular, we show that the impossible differential attack against 25-round TWINE-128, presented at FSE'15 by Biryukov \textit{et al.}, actually has a complexity higher than the natural bound of exhaustive search.
A Note on Non-Perfect Secret Sharing
By using a recently introduced framework for non-perfect secret sharing, several known results on perfect secret sharing are generalized to non-perfect secret sharing schemes with constant increment, in which the amount of information provided by adding a single share to a set is either zero or some constant value. Specifically, we discuss ideal secret sharing schemes, constructions of efficient linear secret sharing schemes, and the search for lower bounds on the length of the shares. Similarly to perfect secret sharing, matroids and polymatroids are very useful to analyze these questions.
Cryptanalysis of Searchable Anonymous Attribute Based Encryption
Ciphertext-Policy Attribute-Based Encryption (CP-ABE) is a public key primitive in which a user is only able to decrypt a ciphertext if the attributes associated with the secret key match the access policy attached to the ciphertext. CP-ABE provides both confidentiality and access control for data stored in a public cloud. Anonymous CP-ABE is an adaptation of ABE where, in addition to data confidentiality and access control, receiver anonymity is also provided. Recently, Koo et al. (2013) proposed a scheme in the area of anonymous CP-ABE. We found security flaws in Koo et al.'s scheme: it does not support anonymity of the intended receiver, which was the main security claim in [8]. In this paper, we show how one can identify the receiver in Koo et al.'s scheme.
Last updated: 2017-10-05
New Framework for Secure Server-Designation Public Key Encryption with Keyword Search
Recently, a new framework, called secure server-designation public key encryption with keyword search (SPEKS), was introduced to improve the security of dPEKS (which suffers from the on-line keyword guessing attack) by defining a new security model, ‘original ciphertext indistinguishability’. In this paper, we note that an off-line keyword guessing attack can be launched by a malicious server to find the keyword used for generating the trapdoor, which was not considered in the related work; SPEKS can suffer from this kind of attack. Moreover, the security model defined for TD-IND in SPEKS is incomplete. Owing to these weaknesses, we enhance the existing security models for trapdoor indistinguishability by defining two new security models. Finally, we propose a new framework.
Provably Secure Password Reset Protocol: Model, Definition, and Generic Construction
Many online services adopt password-based user authentication because of its usability. However, several problems have been pointed out, and one well-known problem is that a user who forgets his/her password cannot log in to the service. To solve this problem, most online services support a mechanism with which a user can reset a password. In this paper, we give a provable security treatment of password reset protocols. We formalize a model and security definitions, and propose a generic construction based on a pseudorandom function and public key encryption. In addition, we implement a prototype of our protocol to evaluate its efficiency.
Encoding Rational Numbers for FHE-based Applications
This work addresses a basic problem of security systems that operate on very sensitive information, such as healthcare data. Specifically, we are interested in the problem of privately handling medical data represented by rational numbers.
Considering the complicated computations on encrypted medical data,
one of the natural and powerful tools for ensuring privacy of the data is fully homomorphic encryption (FHE). However, because the plaintext domain of known FHE schemes is restricted to a set of quite small integers, it is not easy to obtain algorithms for encrypted rational numbers that are efficient in terms of space and computation costs. Our observation is that this inefficiency can be alleviated by using a different representation of rational numbers instead of the naive expression. For example, the naive decimal representation considerably restricts the choice of parameters in employing an FHE scheme, particularly the plaintext size.
The starting point of our technique in this work is to encode rational numbers using continued fractions. Because continued fractions enable us to represent
rational numbers as a sequence of integers, we can use a plaintext space of small size while preserving the same quality of precision. However, this encoding technique requires performing very complex arithmetic operations, such as division and modular reduction. Theoretically, FHE allows the evaluation of any function, including modular reduction on encrypted data, but this requires constructing a Boolean circuit of very high degree. Hence, we primarily focus on developing an approach to solve this efficiency problem using homomorphic operations of small degree.
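As a plain, non-encrypted illustration of the encoding idea (independent of any particular FHE scheme), the following sketch converts a rational into its continued-fraction coefficients and back; each coefficient is a small integer, which is what permits a small plaintext space.

```python
# Sketch: encode a rational as continued-fraction coefficients and decode it
# back. This only illustrates the representation itself, not any homomorphic
# operation on encrypted coefficients.
from fractions import Fraction

def to_continued_fraction(x: Fraction):
    """Return the (finite) continued-fraction expansion [a0; a1, a2, ...] of x."""
    coeffs = []
    num, den = x.numerator, x.denominator
    while den != 0:
        a, r = divmod(num, den)
        coeffs.append(a)
        num, den = den, r
    return coeffs

def from_continued_fraction(coeffs):
    """Rebuild the rational from its continued-fraction coefficients."""
    value = Fraction(coeffs[-1])
    for a in reversed(coeffs[:-1]):
        value = a + 1 / value
    return value

x = Fraction(355, 113)           # a rational "measurement", roughly 3.14159...
cf = to_continued_fraction(x)    # [3, 7, 16] -- a short list of small integers
assert from_continued_fraction(cf) == x
print(cf)
```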
On the complexity of constructing pseudorandom functions (especially when they don't exist)
We study the complexity of black-box constructions of pseudorandom functions (PRF) from one-way functions (OWF) that are secure against non-uniform adversaries. We show that if OWF do not exist, then given as an oracle any (inefficient) hard-to-invert function, one can compute a PRF in polynomial time with only k(n) oracle queries, for any k(n) = \omega(1) (e.g. k(n) = log^* n). Combining this with the fact that OWF imply PRF, we show that unconditionally there exists a (pathological) construction of PRF from OWF making at most k(n) queries. This result shows a limitation of a certain class of techniques for proving efficiency lower bounds on the construction of PRF from OWF. Our result builds on the work of Reingold, Trevisan, and Vadhan (TCC ’04), who show that when OWF do not exist there is a pseudorandom generator (PRG) construction that makes only one oracle query to the hard-to-invert function. Our proof combines theirs with the Nisan-Wigderson generator (JCSS ’94), and with a recent technique by Berman and Haitner (TCC ’12).
Working in the same context (i.e. when OWF do not exist), we also construct a poly-time PRG with arbitrary polynomial stretch that makes non-adaptive queries to an (inefficient) one-bit-stretch oracle PRG. This contrasts with the well-known adaptive stretch-increasing construction due to Goldreich and Micali.
Both above constructions simply apply an affine function (parity or its complement) to the query answers. We complement this by showing that if the post-processing is restricted to only taking projections then non-adaptive constructions of PRF, or even linear-stretch PRG, can be ruled out.
On the Selective Opening Security of Practical Public-Key Encryption Schemes
We show that two well-known and widely employed public-key encryption schemes -- RSA Optimal Asymmetric Encryption Padding (RSA-OAEP) and the Diffie-Hellman Integrated Encryption Scheme (DHIES), the latter instantiated with a one-time pad -- are secure under (the strong, simulation-based notion of) selective opening security against chosen-ciphertext attacks in the random oracle model.
Both schemes are obtained via known generic transformations that transform relatively weak primitives (with security in the sense of one-wayness) to IND-CCA secure encryption schemes.
We also show a similar result for the well-known Fujisaki-Okamoto transformation that can generically turn a one-way secure public key encryption system and a one-time pad into an IND-CCA-secure public-key encryption system. We prove that selective opening security comes for free in these transformations.
Both DHIES and RSA-OAEP are important building blocks in several standards for public key encryption and key exchange protocols. The Fujisaki-Okamoto transformation is very versatile and has successfully been utilised to build efficient lattice-based cryptosystems.
The considered schemes are the first practical cryptosystems that meet the strong notion of simulation-based selective opening (SIM-SO-CCA) security.
Semantically Secure Anonymity: Foundations of Re-encryption
The notion of universal re-encryption is an established primitive
used in the design of many anonymity protocols. It allows anyone
to randomize a ciphertext without changing its size, without first
decrypting it, and without knowing who the receiver is (i.e., not
knowing the public key used to create it).
By design it prevents the randomized ciphertext from being
correlated with the original ciphertext.
We revisit and analyze the security
foundation of universal re-encryption and show a subtlety in it,
namely, that it does not require that the encryption function
achieve key anonymity. Recall that the encryption function is
different from the re-encryption function.
We demonstrate this subtlety by constructing a cryptosystem that satisfies the
established definition of a universal cryptosystem but that has an encryption
function that does not achieve key anonymity, thereby instantiating the gap in
the definition of security of universal re-encryption. We note that the
gap in the definition carries over to a set of applications
that rely on universal re-encryption, applications in the original
paper on universal re-encryption and also follow-on work.
This shows that the original definition needs to be corrected
and it shows that it had a knock-on
effect that negatively impacted security in later work.
We then introduce a new definition that includes
the properties that are needed for a re-encryption cryptosystem to achieve
key anonymity in both the encryption function and the re-encryption
function, building on Goldwasser and Micali's "semantic security" and
the original "key anonymity" notion of Bellare, Boldyreva, Desai, and Pointcheval.
Omitting any of the properties in our definition leads to a problem.
We also introduce a new generalization of the Decision
Diffie-Hellman (DDH) random self-reduction and use it, in turn, to prove
that the original ElGamal-based universal cryptosystem of Golle et al.
is secure under our revised security definition.
We apply our new DDH reduction
technique to give the first proof in the standard model that ElGamal-based
incomparable public keys achieve key anonymity under DDH.
As a new application, we present a secure Forward-Anonymous Batch Mix.
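For context, the following is a minimal toy sketch of ElGamal-based universal re-encryption in the style of Golle et al., over a tiny prime-order subgroup; the parameters are purely illustrative and the sketch deliberately ignores the key-anonymity subtleties this paper analyzes.

```python
# Toy sketch of ElGamal-based universal re-encryption in the style of Golle et al.
# Parameters are tiny and purely illustrative; nothing here is secure.
import secrets

P = 1019     # safe prime: P = 2*Q + 1
Q = 509      # prime order of the subgroup of quadratic residues mod P
G = 4        # generator of that order-Q subgroup

def rand_exp():
    return secrets.randbelow(Q - 1) + 1

def keygen():
    x = rand_exp()
    return x, pow(G, x, P)                      # secret key x, public key y = G^x

def encrypt(m, y):
    k0, k1 = rand_exp(), rand_exp()
    return (m * pow(y, k0, P) % P, pow(G, k0, P),   # ElGamal encryption of m
            pow(y, k1, P), pow(G, k1, P))           # ElGamal encryption of 1

def reencrypt(ct):
    a0, b0, a1, b1 = ct
    k0, k1 = rand_exp(), rand_exp()
    return (a0 * pow(a1, k0, P) % P, b0 * pow(b1, k0, P) % P,   # re-randomize the m part
            pow(a1, k1, P), pow(b1, k1, P))                     # re-randomize the 1 part

def decrypt(ct, x):
    a0, b0, a1, b1 = ct
    if pow(b1, x, P) != a1:                       # second pair must decrypt to 1
        return None
    return a0 * pow(pow(b0, x, P), P - 2, P) % P  # m = a0 / b0^x

x, y = keygen()
m = pow(G, 123, P)                                # message encoded as a group element
ct = reencrypt(encrypt(m, y))                     # re-encryption needs no public key
assert decrypt(ct, x) == m
```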
Non-Malleable Extractors and Codes, with their Many Tampered Extensions
Randomness extractors and error correcting codes are fundamental objects in computer science. Recently, there have been several natural generalizations of these objects, in the context and study of tamper resilient cryptography. These are \emph{seeded non-malleable extractors}, introduced by Dodis and Wichs \cite{DW09}; \emph{seedless non-malleable extractors}, introduced by Cheraghchi and Guruswami \cite{CG14b}; and \emph{non-malleable codes}, introduced by Dziembowski, Pietrzak and Wichs \cite{DPW10}. Besides being interesting on their own, they also have important applications in cryptography, e.g., privacy amplification with an active adversary, explicit non-malleable codes, etc., and often have unexpected connections to their non-tampered analogues \cite{Li12b} \cite{CZ15}.
However, the known constructions lag far behind their non-tampered counterparts. Indeed, the best known seeded non-malleable extractor requires min-entropy rate at least $0.49$ \cite{Li12b}, while explicit constructions of non-malleable two-source extractors were not known even when both sources have full min-entropy; this was left as an open problem in \cite{CG14b}.
In this paper we make progress towards solving the above problems and other related generalizations. Our contributions are as follows.
\begin{itemize}
\item We construct an explicit seeded non-malleable extractor for min-entropy $k \geq \log^2 n$. This dramatically improves all previous results and gives a simpler 2-round privacy amplification protocol with optimal entropy loss, matching the best known result in \cite{Li15b}. In fact, we construct more general seeded non-malleable extractors (that can handle multiple adversaries) which were used in the recent construction of explicit two-source extractors for polylogarithmic min-entropy \cite{CZ15}.
\item We construct the first explicit non-malleable two-source extractor for min-entropy $k \geq n-n^{\Omega(1)}$, with output size $n^{\Omega(1)}$ and error $2^{-n^{\Omega(1)}}$, thus resolving the open question in \cite{CG14b}.
\item We motivate and initiate the study of two natural generalizations of seedless non-malleable extractors and non-malleable codes, where the sources or the codeword may be tampered many times. For this, we construct the first explicit non-malleable two-source extractor with tampering degree $t$ up to $n^{\Omega(1)}$.
By using the connection in \cite{CG14b} and providing efficient sampling algorithms, we obtain the first explicit non-malleable codes with tampering degree $t$ up to $n^{\Omega(1)}$, relative rate $n^{\Omega(1)}/n$, and error $2^{-n^{\Omega(1)}}$. We call these stronger notions \emph{one-many} and \emph{many-many} non-malleable codes. This provides a stronger information theoretic analogue of a primitive known as continuous non-malleable codes.
\end{itemize}
Our basic technique used in all of our constructions can be seen as inspired, in part, by the techniques previously used to construct \emph{cryptographic non-malleable commitments}.
Lattice-Based Fully Dynamic Multi-Key FHE with Short Ciphertexts
We present a multi-key fully homomorphic encryption scheme that supports an unbounded number of homomorphic operations for an unbounded number of parties. Namely, it allows performing arbitrarily many computational steps on inputs encrypted by an a-priori unbounded (polynomial) number of parties. Inputs from new parties can be introduced into the computation dynamically, so the final set of parties need not be known ahead of time. Furthermore, the length of the ciphertexts, as well as the space complexity of an atomic homomorphic operation, grows only linearly with the current number of parties.
Prior works either supported only an a-priori bounded number of parties (Lopez-Alt, Tromer and Vaikuntanathan, STOC '12), or only supported single-hop evaluation where all inputs need to be known before the computation starts (Clear and McGoldrick, Crypto '15; Mukherjee and Wichs, Eurocrypt '16). In all aforementioned works, the ciphertext length grew at least quadratically with the number of parties.
Technically, our starting point is the LWE-based approach of previous works. Our result is achieved via a careful use of Gentry's bootstrapping technique, tailored to the specific scheme. Our hardness assumption is that the scheme of Mukherjee and Wichs is circular secure (and thus bootstrappable). A leveled scheme can be achieved under standard LWE.
Mixed Integer Programming Models for Finite Automaton and Its Application to Additive Differential Patterns of Exclusive-Or
Uncategorized
Uncategorized
Inspired by Fu et al.'s work on modeling the exclusive-or differential property of modular addition as a mixed-integer programming problem, we propose a method with which any finite automaton can be formulated as a mixed-integer programming model. Using this method, we show how to construct a mixed-integer programming model whose feasible region is the set of all differential patterns $(\alpha, \beta, \gamma)$ such that ${\rm adp}^\oplus(\alpha, \beta \rightarrow \gamma) = {\rm Pr}_{x,y}[((x + \alpha) \oplus (y + \beta))-(x \oplus y) = \gamma] > 0$. We expect that this may be useful in automatic differential analysis with additive differences.
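To make the quantity being modeled concrete, the following exhaustive Python sketch computes ${\rm adp}^\oplus(\alpha, \beta \rightarrow \gamma)$ for a toy word size; it is unrelated to the MIP model itself and only illustrates the definition.

```python
# Exhaustive computation of adp^xor(alpha, beta -> gamma) over w-bit words:
# the probability that inputs with additive differences alpha, beta produce
# an XOR output with additive difference gamma.
def adp_xor(alpha, beta, gamma, w=8):
    mask = (1 << w) - 1
    count = 0
    for x in range(1 << w):
        for y in range(1 << w):
            diff = ((((x + alpha) & mask) ^ ((y + beta) & mask)) - (x ^ y)) & mask
            if diff == gamma:
                count += 1
    return count / (1 << (2 * w))

print(adp_xor(0, 0, 0))   # the trivial differential holds with probability 1
print(adp_xor(1, 1, 0))   # a non-trivial differential with probability < 1
```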
State recovery of RC4 and Spritz Revisited
We provide an improved complexity analysis of backtracking-based state recovery
attacks on RC4 and Spritz. Comparing new estimates with known results on Spritz,
our analysis shows significantly lower complexity estimates for the simple
state recovery attack as well as for the special state recovery attack.
We validated the estimates by performing experiments for selected feasible parameters.
We also propose a prefix check optimization for the simple state recovery attack on Spritz. We believe that the simple state recovery attack with this optimization and the so-called ``change order'' optimization inspired by Knudsen et al.'s attack on RC4 currently constitutes the best state recovery attack on Spritz (when no special state is observed).
No Bot Expects the DeepCAPTCHA! Introducing Immutable Adversarial Examples with Applications to CAPTCHA
Recent advances in Deep Learning (DL) allow for solving complex AI problems that used to be very hard. While this progress has advanced many fields, it is considered to be bad news for CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart), the security of which is based on the hardness of learning problems.
In this paper we introduce DeepCAPTCHA, a new and secure CAPTCHA scheme based on adversarial examples, an inherent limitation of current Deep Learning networks. These adversarial examples are constructed inputs, computed by adding a small and specific perturbation called adversarial noise to correctly classified items, causing the targeted DL network to misclassify them. We show that plain adversarial noise is insufficient to achieve secure CAPTCHA schemes, which leads us to introduce immutable adversarial noise -- adversarial noise that is resistant to removal attempts.
We implement a proof-of-concept system, and its analysis shows that the scheme offers high security and good usability compared to the best existing CAPTCHAs.
Complete characterization of generalized bent and 2^k-bent Boolean functions
In this paper we investigate properties of generalized bent Boolean functions and $2^k$-bent (i.e., negabent, octabent, hexadecabent, etc.) Boolean functions in a uniform framework. We generalize the work of Stănică et al., present necessary and sufficient conditions for generalized bent Boolean functions and $2^k$-bent Boolean functions in terms of classical bent functions, and completely characterize these functions in a combinatorial form. The results of this paper further show that all generalized bent Boolean functions are regular.
Probability that the k-gcd of products of positive integers is B-friable
Uncategorized
Uncategorized
In 1849, Dirichlet~\cite{D49} proved that the probability that two positive integers are relatively prime is 1/\zeta(2). Later, this was generalized to the case where positive integers have no nontrivial $k$th power common divisor.
In this paper, we further generalize this result: the probability that the gcd of m products of n positive integers is B-friable is \prod_{p>B}\left[1-\left\{1-\left(1-\frac{1}{p}\right)^{n}\right\}^{m}\right] for m \geq 2. We show that it is lower bounded by \frac{1}{\zeta(s)} for some s>1 if B>n^{\frac{m}{m-1}}, which completes the heuristic proof in the cryptanalysis of cryptographic multilinear maps by Cheon et al.~\cite{CHLRS15}.
We extend this result to the case of $k$-gcd: the probability is \prod_{p>B}\left[1-\left\{1-\left(1-\frac{1}{p}\right)^{n}\left(1+\frac{{}_{n}H_{1}}{p}+\cdots+\frac{{}_{n}H_{k-1}}{p^{k-1}}\right)\right\}^{m}\right], where {}_{n}H_{i} = \binom{n+i-1}{i}.
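To make the first formula concrete, the following sketch numerically evaluates a truncated version of the product over primes p > B (the factors for larger primes are extremely close to 1, so truncation gives a quick approximation); the cutoff and parameters are illustrative only.

```python
# Truncated numerical evaluation of  prod_{p > B} [ 1 - (1 - (1 - 1/p)^n)^m ].
def primes_between(lo, hi):
    """Primes p with lo < p < hi, via a simple sieve (illustration only)."""
    sieve = bytearray([1]) * hi
    sieve[:2] = b"\x00\x00"
    for i in range(2, int(hi ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, hi, i)))
    return [p for p in range(lo + 1, hi) if sieve[p]]

def friable_gcd_probability(B, n, m, cutoff=10**6):
    """Truncated product over primes B < p < cutoff."""
    prob = 1.0
    for p in primes_between(B, cutoff):
        prob *= 1 - (1 - (1 - 1 / p) ** n) ** m
    return prob

# Probability that the gcd of m = 2 products of n = 3 positive integers is
# 100-friable (truncated product; illustrative parameters only).
print(friable_gcd_probability(B=100, n=3, m=2))
```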
Proof of Space from Stacked Expanders
Recently, proof of space (PoS) has been suggested as a more egalitarian alternative to the traditional hash-based proof of work.
In PoS, a prover proves to a verifier that it has dedicated some specified amount of space.
A closely related notion is memory-hard functions (MHF), functions that require a lot of memory/space to compute.
While making promising progress, existing PoS and MHF have several problems.
First, there are large gaps between the desired space-hardness and what can be proven.
Second, it has been pointed out that PoS and MHF should require a lot of space not just at some point, but throughout the entire computation/protocol;
few proposals considered this issue.
Third, the two existing PoS constructions are both based on a class of graphs called superconcentrators, which are either hard to construct or add a logarithmic factor overhead to efficiency.
In this paper, we construct PoS from stacked expander graphs.
Our constructions are simpler, more efficient and have tighter provable space-hardness than prior works.
Our results also apply to a recent MHF called Balloon hash.
We show Balloon hash has tighter space-hardness than previously believed and consistent space-hardness throughout its computation.
Micropayments for Decentralized Currencies
Electronic financial transactions in the US, even those enabled by Bitcoin, have relatively high transaction costs. As a result, it becomes infeasible to make \emph{micropayments}, i.e. payments that are pennies or fractions of a penny.
To circumvent the cost of recording all transactions, Wheeler (1996) and Rivest (1997) suggested the notion of a \emph{probabilistic payment}, that is, one implements payments that have \emph{expected} value on the order of micro-pennies by running an appropriately biased lottery for a larger payment. While there have been quite a few proposed solutions to such lottery-based micropayment schemes, all these solutions rely on a trusted third party to coordinate the transactions; furthermore, to implement these systems in today's economy would require a global change to how either banks or electronic payment companies (e.g., Visa and Mastercard) handle transactions.
We put forth a new lottery-based micropayment scheme for any ledger-based transaction system, that can be used today without any change to the current infrastructure. We implement our scheme in a sample web application and show how a single server can handle thousands of micropayment requests per second. We analyze how the scheme can work at Internet scale.
TRVote: A New, Trustworthy and Robust Electronic Voting System
We propose a new Direct-Recording Electronic (DRE)-based voting system that we call TRVote. The reliability of TRVote is ensured during the vote generation phase, where the random challenges are generated by the voters instead of the random number generator of the machine. Namely, the challenges of voters are utilized to prevent and detect malicious behavior of a corrupted voting machine. Due to the unpredictability of the challenges, the voting machine cannot cheat voters without being detected. TRVote provides two kinds of verification: "cast-as-intended" is ensured during the vote generation phase, and "recorded-as-cast" is ensured through a secure Web Bulletin Board (WBB). More concretely, voters can verify that their votes are cast as intended via a zero-knowledge proof on a printed receipt using QR codes. After the election, the central server broadcasts all receipts in a secure WBB where the voters (or, perhaps, proxies) can check whether their receipts appear correctly. In order to implement the voting process, the proposed voting machine has simple components such as mechanical switches, a touchscreen, and a printer. In this system, each candidate is represented by a mechanical switch embedded in the voting machine. The machine has a flexible structure in the sense that the mechanical switches can be placed and removed as plug-ins depending on the number of candidates, which makes it possible to support an arbitrary number of candidates. We show that the proposed system is robust and guarantees the privacy of voters. We further show that it is universally verifiable and secure against coercion.
NaCl's Crypto_Box in Hardware
This paper presents a low-resource hardware implementation of the widely used crypto_box function of the Networking and Cryptography library (NaCl). It supports the X25519 Diffie-Hellman key exchange using Curve25519, the Salsa20 stream cipher, and the Poly1305 message authenticator. Our targeted application is a secure communication between devices in the Internet of Things (IoT) and Internet servers. Such devices are highly resource-constrained and require carefully optimized hardware implementations. We propose the first solution that enables 128-bit-secure public-key authenticated encryption on passively-powered IoT devices like WISP nodes. From a cryptographic point of view we thus make a first step to turn these devices into fully-fledged participants of Internet communication. Our crypto processor needs a silicon area of 14.6 kGEs and less than 40 uW of power at 1MHz for a 130nm low-leakage CMOS process technology.
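For readers unfamiliar with the crypto_box operation implemented here, the following sketch shows its software counterpart through the PyNaCl binding (assuming the pynacl package is available); the hardware design targets the same X25519/Salsa20/Poly1305 combination.

```python
# Sketch of the crypto_box operation in software via PyNaCl (assumes the
# `pynacl` package is installed); the paper's processor implements the same
# Curve25519/Salsa20/Poly1305 combination in hardware.
from nacl.public import PrivateKey, Box

# Each party generates an X25519 key pair.
sk_device = PrivateKey.generate()
sk_server = PrivateKey.generate()

# crypto_box: authenticated public-key encryption from device to server.
box_send = Box(sk_device, sk_server.public_key)
ciphertext = box_send.encrypt(b"sensor reading: 23.4 C")   # nonce handled internally

# The server opens the box with its secret key and the device's public key.
box_recv = Box(sk_server, sk_device.public_key)
assert box_recv.decrypt(ciphertext) == b"sensor reading: 23.4 C"
```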
A modified block Lanczos algorithm with fewer vectors
The block Lanczos algorithm proposed by Peter Montgomery is an efficient means to tackle the sparse linear algebra problem which arises in the context of the number field sieve factoring algorithm and its predecessors. We present here a modified version of the algorithm, which incorporates several improvements: we discuss how to efficiently handle homogeneous systems and how to reduce the number of vectors stored in the course of the computation. We also provide heuristic justification for the success probability of our modified algorithm.
While the overall complexity and expected number of steps of the block Lanczos algorithm are not changed by the modifications presented in this article, we expect them to be useful for implementations of the block Lanczos algorithm where the storage of auxiliary vectors sometimes has a non-negligible cost.
Constructing genus 3 hyperelliptic Jacobians with CM
Uncategorized
Uncategorized
Given a sextic CM field K, we give an explicit method for finding all genus 3 hyperelliptic curves defined over the complex numbers whose Jacobians are simple and have complex multiplication by the maximal order of this field, via an approximation of their Rosenhain invariants. Building on the work of Weng, we give an algorithm which works in complete generality, for any sextic CM field K, and computes minimal polynomials of the Rosenhain invariants for any period matrix of the Jacobian. This algorithm can be used to generate genus 3 hyperelliptic curves over a finite field F_p with a given zeta function by finding roots of the Rosenhain minimal polynomials modulo p.
Proxy Re-Encryption Schemes with Key Privacy from LWE
Proxy re-encryption (PRE) is a cryptographic primitive in which a proxy can transform Alice's ciphertexts into ones decryptable by Bob. Key-private PRE specifies an additional level of security,
requiring that proxy keys leak no information about the identities of Alice and Bob. In this paper, we build two key-private PRE schemes: (1) we propose a CPA-secure key-private PRE scheme in the standard model, and (2) we then transform it into a CCA-secure scheme in the random oracle model. Both schemes enjoy the following properties: both are uni-directional, and the CPA-secure one is a multi-hop scheme.
In addition, the security of our schemes is based on the hardness of the standard Learning-With-Errors (LWE) problem, whose hardness is itself based on worst-case lattice problems that are conjectured immune to quantum cryptanalysis, or ``post-quantum''. We implement the CPA-secure scheme and point out that, among many applications, it suffices for the practical task of key rotation over encrypted data.
Square Attack on 7-Round Kiasu-BC
Kiasu-BC is a tweakable block cipher presented within the TWEAKEY framework at AsiaCrypt 2014. Kiasu-BC is almost identical to AES-128, the only difference to AES-128 is the tweak addition, where the 64-bit tweak is xored to the first two rows of every round-key. The security analysis of the designers focuses primarily on related-key related-tweak differential characteristics and meet-in-the-middle attacks. For other attacks, they conclude that the security level of Kiasu-BC is similar to AES-128. In this work, we provide the first third-party analysis of Kiasu-BC. We show that we can mount Square attacks on up to 7-round Kiasu-BC with a complexity of about $2^{48.5}$ encryptions, which improves upon the best published 7-round attacks for AES-128. Furthermore, we show that such attacks are applicable to the round-reduced OCB3-like mode of the CAESAR candidate Kiasu. To be specific, we show a key-recovery attack on 7-round Kiasu$\neq$ with a complexity of about $2^{82}$ encryptions.
Optimized quantization in Zero Leakage Helper Data Systems
Uncategorized
Uncategorized
Helper Data Systems are a cryptographic primitive that allows for the reproducible extraction of secrets from noisy measurements. Redundancy data called Helper Data makes it possible to do error correction while leaking little or nothing ("Zero Leakage") about the extracted secret string.
We study the case of non-discrete measurement outcomes, in which a quantization step is required. Recently, de Groot et al. described a generic method to perform the quantization in a Zero Leakage manner. We extend their work and show how the quantization intervals should be set to maximize the amount of extracted secret key material when noise is taken into account.
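As background, one basic ingredient of Zero Leakage quantization is the use of equiprobable quantization intervals; the sketch below illustrates this for a Gaussian-distributed measurement using its quantile function (the noise-aware interval optimization contributed by this paper is not reproduced, and SciPy is assumed to be available).

```python
# Sketch: quantize a Gaussian-distributed enrollment measurement into M
# equiprobable intervals, a basic ingredient of zero-leakage quantization.
import numpy as np
from scipy.stats import norm

M = 4                                          # number of quantization symbols
boundaries = norm.ppf(np.arange(1, M) / M)     # interval edges at the i/M quantiles

def quantize(x):
    """Map a measurement to a symbol in {0, ..., M-1} via equiprobable intervals."""
    return int(np.searchsorted(boundaries, x))

samples = np.random.default_rng(2).normal(size=100_000)
symbols = np.array([quantize(x) for x in samples])
print(np.bincount(symbols, minlength=M) / symbols.size)   # each symbol occurs ~ 1/M
```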
Interactive Oracle Proofs with Constant Rate and Query Complexity
We study *interactive oracle proofs* (IOPs) [BCS16,RRR16], which combine aspects of probabilistically checkable proofs (PCPs) and interactive proofs (IPs). We present IOP constructions and techniques that enable us to obtain tradeoffs in proof length versus query complexity that are not known to be achievable via PCPs or IPs alone. Our main results are:
1. Circuit satisfiability has 3-round IOPs with linear proof length (counted in bits) and constant query complexity.
2. Reed--Solomon codes have 2-round IOPs of proximity with linear proof length and constant query complexity.
3. Tensor product codes have 1-round IOPs of proximity with sublinear proof length and constant query complexity.
For all the above, known PCP constructions give *quasilinear* proof length and constant query complexity [BS08,Din07]. Also, for circuit satisfiability, [BKKMS13] obtain PCPs with linear proof length but *sublinear* (and super-constant) query complexity. As in [BKKMS13], we rely on algebraic-geometry codes to obtain our first result; but, unlike that work, our use of such codes is much "lighter" because we do not rely on any automorphisms of the code.
We obtain our results by proving and combining "IOP-analogues" of tools underlying numerous IPs and PCPs:
> Interactive proof composition. Proof composition [AS98] is used to reduce the query complexity of PCP verifiers, at the cost of increasing proof length by an additive factor that is exponential in the verifier's randomness complexity. We prove a composition theorem for IOPs where this additive factor is linear.
> Sublinear sumcheck. The sumcheck protocol [LFKN92] is an IP that enables the verifier to check the sum of values of a low-degree multi-variate polynomial on an exponentially-large hypercube, but the verifier's running time depends linearly on the bound on individual degrees. We prove a sumcheck protocol for IOPs where this dependence is sublinear (e.g., polylogarithmic).
Our work demonstrates that even constant-round IOPs are more efficient than known PCPs and IPs.
A Family of Scalable Polynomial Multiplier Architectures for Ring-LWE Based Cryptosystems
Many lattice-based cryptosystems are based on the Ring learning with errors (Ring-LWE) problem.
The most critical and computationally intensive operation of these Ring-LWE-based cryptosystems is polynomial multiplication over rings.
In this paper, we exploit the number theoretic transform (NTT) to build a family of scalable polynomial multiplier architectures,
which provide designers with a trade-off choice of speed vs. area.
Our polynomial multipliers can compute the product of two degree-$n$ polynomials in about $(1.5n\lg n + 1.5n)/b$ clock cycles,
where $b$ is the number of butterfly operators.
In addition, we exploit the cancellation lemma to reduce the required ROM storage.
The experimental results on a Spartan-6 FPGA show that the proposed polynomial multiplier architectures achieve a speedup of 3 times on average and consume fewer block RAMs and slices than the compact design. Compared with the state-of-the-art high-speed design, the proposed hardware architectures save up to 46.64\% of the clock cycles and improve the utilization rate of the main data processing units by 42.27\%. Meanwhile, our designs save up to 29.41\% of the block RAMs.
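As a rough illustration of the stated cycle-count estimate, the snippet below evaluates $(1.5n\lg n + 1.5n)/b$ for a few parameter choices (the values of n and b are illustrative, not taken from the paper).

```python
# Evaluate the cycle-count estimate (1.5*n*lg(n) + 1.5*n) / b from the text
# for a few illustrative parameter choices.
from math import log2

def ntt_multiplier_cycles(n, b):
    """Estimated clock cycles to multiply two degree-n polynomials with b butterflies."""
    return (1.5 * n * log2(n) + 1.5 * n) / b

for n in (256, 512, 1024):
    for b in (1, 2, 4):
        print(f"n={n:5d} b={b}: ~{ntt_multiplier_cycles(n, b):8.0f} cycles")
```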
On the Security of PUF Protocols under Bad PUFs and PUFs-inside-PUFs Attacks
Uncategorized
Uncategorized
We continue investigations on the use of so-called Strong PUFs as a cryptographic primitive in realistic attack models, in particular in the “Bad/Malicious PUF Model”. We obtain the following results:
– Bad PUFs and Simplification: As a minor contribution, we simplify a recent OT-protocol for malicious PUFs by Dachman-Soled et al. [4] from CRYPTO 2014. We can achieve the same security properties under the same assumptions, but use only one PUF instead of two.
– PUFs-inside-PUFs, Part I: We propose the new, realistic adversarial models of PUF modifications and PUFs-inside-PUFs attacks, and show that the earlier protocol of Dachman-Soled et al. [4] is vulnerable to PUFs-inside-PUFs attacks (which lie outside the original framework of [4]).
– PUFs-inside-PUFs, Part II: We construct a new PUF-based OT-protocol,
which is secure against PUFs-inside-PUFs attacks if the used bad PUFs are stateless. Our protocol introduces the technique of interleaved challenges.
– PUFs-inside-PUFs, Part III: In this context, we illustrate why the use of interactive hashing in our new protocol appears necessary, and why a first protocol attempt without interactive hashing fails.
Algebraic Decomposition for Probing Security
Uncategorized
Uncategorized
The probing security model is widely used to prove the side-channel security of cryptographic implementations protected by masking. A common approach to securing nonlinear functions in this model is to represent them as polynomials over a binary field and to secure their nonlinear multiplications with a method introduced by Ishai, Sahai and Wagner at Crypto 2003. Several schemes based on this approach have been published, leading to the recent proposal of Coron, Roy and Vivek, which is currently the best known method when no particular assumption is made on the algebraic structure of the function. In the present paper, we revisit this idea by trading nonlinear multiplications for low-degree functions. Specifically, we introduce an algebraic decomposition approach in which a nonlinear function is represented as a sequence of functions with low algebraic degrees. We therefore focus on the probing-secure evaluation of such low-degree functions and introduce three novel methods to tackle this particular issue. The paper concludes with a comparative analysis of the proposals, which shows that our algebraic decomposition method outperforms the method of Coron, Roy and Vivek in several realistic contexts.
On Metrics to Quantify the Inter-Device Uniqueness of PUFs
Physically Unclonable Functions (PUFs) have been an emerging topic in hardware security and trust in recent years, and many different kinds of PUFs have been presented in the literature. An important criterion is always the diversity of PUF responses for different devices, called inter-device uniqueness. A very popular uniqueness metric consists of calculating the pairwise Hamming distance between the response bit-strings of all devices, assuming that all response bits are uncorrelated. Such correlations, however, should be taken into account when a statement about inter-device uniqueness is made. We therefore propose a novel correlation metric to fulfil this requirement. Furthermore, we show that the Hamming distance metric is actually redundant when the also popular bit-aliasing metric is applied at the same time.
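To make the metrics under discussion concrete, here is a small NumPy sketch of the two standard quantities mentioned above, pairwise Hamming-distance uniqueness and per-bit bit-aliasing, computed on a toy response matrix (the correlation metric proposed in the paper is not reproduced here).

```python
# Toy illustration of two standard PUF inter-device metrics mentioned above:
# pairwise Hamming-distance uniqueness and per-bit bit-aliasing.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(8, 128))   # 8 devices, 128 response bits each

def uniqueness(resp):
    """Average pairwise fractional Hamming distance between devices (ideal: 0.5)."""
    dists = [np.mean(a != b) for a, b in combinations(resp, 2)]
    return float(np.mean(dists))

def bit_aliasing(resp):
    """Per-bit mean across devices (ideal: 0.5 for every bit position)."""
    return resp.mean(axis=0)

print("uniqueness:", uniqueness(responses))
print("bit-aliasing range:", bit_aliasing(responses).min(), bit_aliasing(responses).max())
```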
High-precision Secure Computation of Satellite Collision Probabilities
The costs of designing, building, launching and maintaining satellites make satellite operators extremely motivated to protect their on-orbit assets. Unfortunately, privacy concerns present a serious barrier to coordination between different operators. One obstacle to improving safety arises because operators view the trajectories of their satellites as private, and refuse to share this private information with other operators. Without data-sharing, preventing collisions between satellites becomes a challenging task.
A 2014 report from the RAND Corporation proposed using cryptographic tools from the domain of secure Multiparty Computation (MPC) to allow satellite operators to calculate collision probabilities (conjunction analyses) without sharing private information about the trajectories of their satellites.
In this work, we report on the design and implementation of a powerful new MPC framework for high-precision arithmetic on real-valued variables in a two-party setting where, unlike previous works, there is no honest majority, and where the players are not assumed to be semi-honest. We show how to apply this new solution in the domain of securely computing conjunction analyses. Our solution extends existing protocols, in particular the integer-based Goldreich-Micali-Wigderson (GMW) protocol, whereby we combine and optimize GMW with Garbled Circuits (GC). We prove security of our protocol in the two-party, semi-honest setting, assuming only the existence of one-way functions and Oblivious Transfer (the OT-hybrid model). The protocol allows a pair of satellite operators to compute the probability that their satellites will collide without sharing their underlying private orbital information. The techniques developed in this paper could have a wide impact on general secure numerical analysis computations. We also show how to strengthen our construction with standard arithmetic message-authentication codes (MACs) to enforce honest behavior beyond the semi-honest setting.
Computing a conjunction analysis requires numerically estimating a complex double integral to a high degree of precision. The complexity of the calculation, and the possibility of numeric instability presents many challenges for MPC protocols which typically model calculations as simple (integer) arithmetic or binary circuits.
Our secure numerical integration routines are extremely stable and efficient, and our secure conjunction analysis protocol takes only a few minutes to run on a commodity laptop.
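To convey the kind of plaintext computation being secured, the following sketch numerically integrates a two-dimensional Gaussian over a disk of combined hard-body radius, a simplified stand-in for a conjunction-analysis integrand (all values and the exact model are illustrative assumptions, not taken from the paper; SciPy is assumed to be available).

```python
# Plaintext toy version of the kind of double integral a conjunction analysis
# evaluates: the probability mass of a 2-D Gaussian (relative-position
# uncertainty) that falls inside a disk of combined hard-body radius.
# Simplified stand-in with illustrative numbers; not the paper's exact model.
import numpy as np
from scipy.integrate import dblquad

R = 0.020               # combined hard-body radius (km)      -- illustrative
mx, mz = 0.1, 0.05      # predicted miss-distance components (km)
sx, sz = 0.05, 0.03     # position uncertainties (km)

def density(z, x):
    """Bivariate Gaussian density of the relative position, centered at the miss distance."""
    return np.exp(-((x - mx) ** 2) / (2 * sx ** 2)
                  - ((z - mz) ** 2) / (2 * sz ** 2)) / (2 * np.pi * sx * sz)

prob, err = dblquad(density, -R, R,
                    lambda x: -np.sqrt(R ** 2 - x ** 2),
                    lambda x: np.sqrt(R ** 2 - x ** 2))
print(f"collision probability ~ {prob:.3e} (quadrature error ~ {err:.1e})")
```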
Generic Construction of Certificateless Signcryption Scheme
Uncategorized
Uncategorized
Confidentiality and message authentication are the most important security goals, and they can be achieved simultaneously by a signcryption scheme. Signcryption is a cryptographic technique that performs both the functions of digital signature and public key encryption in a single logical step, at a significantly lower cost than the conventional signature-then-encryption approach. The paper proposes an efficient Certificateless Signcryption Scheme (CLSC) in the random oracle model based on bilinear maps. It is provably secure under the assumptions that the k-CAA, Inv-CDH, q-BDHI and CDH problems are intractable.
Semi-Adaptive Security and Bundling Functionalities Made Generic and Easy
Semi-adaptive security is a notion of security that lies between selective and adaptive security for Attribute-Based Encryption (ABE) and Functional Encryption (FE) systems. In the semi-adaptive model the attacker is forced to disclose the challenge messages before it makes any key queries, but is allowed to see the public parameters.
We show how to generically transform any selectively secure ABE or FE scheme into one that is semi-adaptively secure with the only additional assumption being public key encryption, which is already naturally included in almost any scheme of interest. Our technique utilizes a fairly simple application of garbled circuits where instead of encrypting directly, the encryptor creates a garbled circuit that takes as input the public parameters and outputs a ciphertext in the underlying selective scheme. Essentially, the encryption algorithm encrypts without knowing the `real' public parameters. This allows one to delay giving out the underlying selective parameters until a private key is issued, which connects the semi-adaptive to selective security. The methods used to achieve this result suggest that the moral gap between selective and semi-adaptive security is in general much smaller than that between semi-adaptive and full security.
Finally, we show how to extend the above idea to generically bundle a family of functionalities under one set of public parameters. For example, suppose we had an inner product predicate encryption scheme where the length of the vectors is specified at setup and therefore fixed by the public parameters. Using our transformation, one could create a system where, for a single set of public parameters, the vector length is not a priori bounded, but instead is specified by the encryption algorithm. The resulting ciphertext would be compatible with any private key generated to work on the same input length.
A Note on Black-Box Separations for Indistinguishability Obfuscation
Mahmoody et al. (TCC 2016-A) showed that basing indistinguishability obfuscation (IO) on a wide range of primitives in a black-box way is \emph{as hard as} basing public-key cryptography on one-way functions. The list included any primitive $P$ that could be realized relative to random trapdoor permutation or degree-$O(1)$ graded encoding oracle models in a secure way against computationally unbounded polynomial-query attackers.
In this work, relying on the recent result of Brakerski, Brzuska, and Fleischhacker (ePrint 2016/226), in which they ruled out statistically secure approximately correct IO, we show that there are no fully black-box constructions of IO from any of the primitives listed above, assuming the existence of one-way functions and $NP \not\subseteq coAM$.
At a technical level, we provide an alternative lemma to the Borel-Cantelli lemma that is useful for deriving black-box separations. In particular, using this lemma we show that attacks in idealized models that succeed with only a \emph{constant} advantage over the trivial bound are indeed sufficient for deriving fully black-box separations from primitives that exist in such idealized models unconditionally.
Flattening NTRU for Evaluation Key Free Homomorphic Encryption
Uncategorized
Uncategorized
We propose a new FHE scheme {\sf F-NTRU} that adopts the flattening technique proposed in GSW to derive an NTRU based scheme that (similar to GSW) does not require evaluation keys or key switching. Our scheme eliminates the decision small polynomial ratio (DSPR) assumption but relies only on the standard R-LWE assumption. It uses wide key distributions, and hence is immune to the Subfield Lattice Attack. In practice, our scheme achieves competitive timings compared to the existing schemes. We are able to compute a homomorphic multiplication in $24.4$~msec and $34.3$~msec for $5$ and $30$ levels, respectively, without amortization. Furthermore, our scheme features small ciphertexts, e.g. $1152$~KB for $30$ levels, and eliminates the need for storing and managing costly evaluation keys.
In addition, we present a slightly modified version of F-NTRU that is capable of supporting integer operations with a very large message space, along with noise analysis for all cases. The assurance gained by using wide key distributions, along with the message space flexibility of the scheme, i.e., bits, binary polynomials, and integers with a large message space, allows the use of the proposed scheme in a wide array of applications.
Blind Source Separation from Single Measurements using Singular Spectrum Analysis
Singular Spectrum Analysis (SSA) is a powerful data decomposition/recomposition technique that can be used to reduce the noise in time series. Compared to existing solutions aiming at similar purposes, such as frequency-based filtering, it benefits from easier-to-exploit intuitions, applicability in contexts where low sampling rates make standard frequency analyses challenging, and the (theoretical) possibility to separate a signal source from a noisy source even if both run at the same frequency. In this paper, we first describe how to apply SSA in the context of side-channel analysis, and then validate its interest in three different scenarios. Namely, we consider unprotected software, masked software, and unprotected hardware block cipher implementations. Our experiments confirm significant noise reductions in all three cases, leading to accordingly improved success rates. They also highlight the stronger impact of SSA in more challenging scenarios, e.g. masked implementations (because the impact of noise increases exponentially with the number of shares in this case), or noisy hardware implementations (because of the established connection between the amount of noise and the attacks' success rate in this case). Since noise is a fundamental ingredient for most countermeasures against side-channel attacks, we conclude that SSA can be an important element in the toolbox of evaluation laboratories, in order to efficiently preprocess their measurements in a black-box manner.
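For readers who want to see the underlying decomposition/recomposition step, here is a compact NumPy sketch of basic SSA denoising (embed into a Hankel trajectory matrix, truncate the SVD, reconstruct by diagonal averaging); the window length and component count below are arbitrary illustrative choices.

```python
# Minimal SSA denoising sketch: embed the trace into a Hankel (trajectory)
# matrix, keep the leading SVD components, and reconstruct by averaging
# over anti-diagonals. Window length and component count are illustrative.
import numpy as np

def ssa_denoise(x, window, n_components):
    n = len(x)
    k = n - window + 1
    # Trajectory (Hankel) matrix: column j is x[j : j+window].
    traj = np.column_stack([x[j:j + window] for j in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    # Rank-n_components reconstruction of the trajectory matrix.
    approx = (u[:, :n_components] * s[:n_components]) @ vt[:n_components]
    # Diagonal averaging: average entries lying on each anti-diagonal i + j = t.
    recon = np.zeros(n)
    counts = np.zeros(n)
    for i in range(window):
        recon[i:i + k] += approx[i]
        counts[i:i + k] += 1
    return recon / counts

# Toy example: a periodic "leakage" component buried in Gaussian noise.
t = np.arange(1000)
trace = np.sin(2 * np.pi * t / 50) + np.random.default_rng(1).normal(0, 1, t.size)
denoised = ssa_denoise(trace, window=100, n_components=2)
```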
Fiat-Shamir for Highly Sound Protocols is Instantiable
The Fiat-Shamir (FS) transformation (Fiat and Shamir, Crypto '86) is a popular paradigm for constructing very efficient non-interactive zero-knowledge (NIZK) arguments and signature schemes using a hash function, starting from any three-move interactive protocol satisfying certain properties. Despite its wide-spread applicability both in theory and in practice, the known positive results for proving security of the FS paradigm are in the random oracle model, i.e., they assume that the hash function is modelled as an external random function accessible to all parties. On the other hand, a sequence of negative results shows that for certain classes of interactive protocols, the FS transform cannot be instantiated in the standard model.
We initiate the study of complementary positive results, namely, studying classes of interactive protocols where the FS transform *does* have standard-model instantiations. In particular, we show that for a class of "highly sound" protocols that we define, instantiating the FS transform via a $q$-wise independent hash function yields NIZK arguments and secure signature schemes. In the case of NIZK, we obtain a weaker "$q$-bounded" zero-knowledge flavor where the simulator works for all adversaries asking an a-priori bounded number of queries $q$; in the case of signatures, we obtain the weaker notion of random-message unforgeability against $q$-bounded random message attacks.
Our main idea is that when the protocol is highly sound, then instead of using random-oracle programming, one can use complexity leveraging. The question is whether such highly sound protocols exist and if so, which protocols lie in this class. We answer this question in the affirmative in the common reference string (CRS) model and under strong assumptions. Namely, assuming indistinguishability obfuscation and puncturable pseudorandom functions we construct a compiler that transforms any 3-move interactive protocol with instance-independent commitments and simulators (a property satisfied by the Lapidot-Shamir protocol, Crypto '90) into a compiled protocol in the CRS model that is highly sound. We also present a second compiler, in order to be able to start from a larger class of protocols, which only requires instance-independent commitments (a property for example satisfied by the classical protocol for quadratic residuosity due to Blum, Crypto '81). For the second compiler we require dual-mode commitments.
We hope that our work inspires more research on classes of (efficient) 3-move protocols where Fiat-Shamir is (efficiently) instantiable.
Refinements of the k-tree Algorithm for the Generalized Birthday Problem
We study two open problems proposed by Wagner in his seminal work on the generalized birthday problem. First, with the use of multicollisions, we improve Wagner's $3$-tree algorithm. The new 3-tree only slightly outperforms Wagner's 3-tree; however, in some applications this suffices, and as a proof of concept, we apply the new algorithm to slightly reduce the security of two CAESAR proposals.
Next, with the use of multiple collisions based on Hellman's table, we give improvements to the best known time-memory tradeoffs for the k-tree. As a result, we obtain a new tradeoff curve T^2 \cdot M^{\lg k -1} = k \cdot N. For instance, when k=4, the tradeoff has the form T^2 M = 4 \cdot N.
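As a quick numerical illustration of the tradeoff curve T^2 \cdot M^{\lg k - 1} = k \cdot N, the snippet below solves for T given M (the values of k and N are illustrative only).

```python
# Solve the tradeoff curve T^2 * M^(lg k - 1) = k * N for T, for a few memory
# choices. The k and N values below are purely illustrative.
from math import log2, sqrt

def time_from_memory(k, N, M):
    return sqrt(k * N / M ** (log2(k) - 1))

k, N = 4, 2 ** 64
for logM in (16, 24, 32):
    T = time_from_memory(k, N, 2 ** logM)
    print(f"M = 2^{logM}: T ~ 2^{log2(T):.1f}")
```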
Fast Correlation Attacks over Extension Fields, Large-unit Linear Approximation and Cryptanalysis of SNOW 2.0
Several improvements of fast correlation attacks have been proposed during the past two decades, with a regrettable lack of a better generalization and adaptation to the concrete primitives involved, especially to modern stream ciphers based on word-based LFSRs. In this paper, we develop some necessary cryptanalytic tools to bridge this gap. First, a formal framework for fast correlation attacks over extension fields is constructed, under which the theoretical predictions of the computational complexities for both the offline and the online/decoding phase can be reliably derived. Our decoding algorithm makes use of the Fast Walsh Transform (FWT) to get a better performance. Second, an efficient algorithm to compute the large-unit distribution of a broad class of functions is proposed, which makes it possible to find better linear approximations than the bitwise ones with low complexity in symmetric-key primitives. Last, we apply our methods to SNOW 2.0, an ISO/IEC 18033-4 standard stream cipher, which results in significantly reduced complexities, all below $2^{164.15}$. This attack is more than $2^{49}$ times better than the best published result at Asiacrypt 2008. Our results have been verified by experiments on a small-scale version of SNOW 2.0.
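The decoding phase relies on the Fast Walsh Transform; for reference, here is a standard in-place Walsh-Hadamard transform sketch (the generic primitive only, not the paper's decoder).

```python
# Standard in-place fast Walsh-Hadamard transform over a length-2^k array.
# This is only the generic primitive the decoding phase builds on, not the
# paper's decoder itself.
def fwht(a):
    """In-place Walsh-Hadamard transform; len(a) must be a power of two."""
    n = len(a)
    assert n & (n - 1) == 0
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

# Example: the transform of a delta function is the constant sequence.
print(fwht([1, 0, 0, 0, 0, 0, 0, 0]))   # -> [1, 1, 1, 1, 1, 1, 1, 1]
```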
Coded-BKW: Solving LWE Using Lattice Codes
In this paper we propose a new algorithm for solving the Learning With Errors (LWE) problem based on the steps of the famous Blum-Kalai-Wasserman (BKW) algorithm. The new idea is to introduce an additional procedure that maps subvectors into codewords of a lattice code, thereby increasing the number of positions that can be cancelled in each BKW step. The procedure introduces an additional noise term, but it is shown that by using a sequence of lattice codes with different rates the noise can be kept small. The developed theory shows that the new approach compares favorably to previous methods. It performs particularly well for the binary-LWE case, i.e., when the secret vector is sampled from $\{0,1\}^*$.
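To make the "cancel positions per step" idea concrete, here is a toy Python sketch of one plain BKW reduction step; the comments note where Coded-BKW would differ. The modulus, dimension, and block width are illustrative assumptions, not parameters from the paper.

import random

q, n, b = 97, 8, 2        # toy modulus, dimension and block width (assumptions)

def bkw_step(samples, start):
    """One plain BKW reduction step: collide samples that agree on coordinates
    [start, start+b) and subtract them, cancelling those b positions while
    roughly doubling the noise. Coded-BKW replaces the exact match by mapping
    the block to the nearest codeword of a lattice code, so more positions can
    be cancelled per step at the cost of a small extra coding noise."""
    buckets, reduced = {}, []
    for a, c in samples:
        key = tuple(a[start:start + b])
        if key in buckets:
            a2, c2 = buckets[key]
            reduced.append(([(x - y) % q for x, y in zip(a, a2)], (c - c2) % q))
        else:
            buckets[key] = (a, c)
    return reduced

# Illustrative use on random (non-LWE) samples, just to show the shapes involved.
samples = [([random.randrange(q) for _ in range(n)], random.randrange(q))
           for _ in range(200)]
print(len(bkw_step(samples, start=0)))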
Privately Outsourcing Exponentiation to a Single Server: Cryptanalysis and Optimal Constructions
We address the problem of speeding up group computations in cryptography using a single untrusted computational resource. We analyze the security of an efficient protocol for securely outsourcing multi-exponentiations proposed at ESORICS 2014. We show that this scheme does not achieve the claimed security guarantees, and we present several practical polynomial-time attacks on the delegation protocol which allow the untrusted helper to recover part (or all) of the device's secret inputs. We then provide simple constructions for outsourcing group exponentiations in different settings (e.g., public/secret, fixed/variable bases and public/secret exponents). Finally, we prove that our attacks on the ESORICS 2014 protocol are unavoidable if one wants to use a single untrusted computational resource and to limit the computational cost of the limited device to a constant number of (generic) group operations. In particular, we show that our constructions are actually optimal.
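As background on what privately outsourcing an exponentiation means, here is a textbook Python sketch of privacy-only delegation of a fixed-base, secret-exponent exponentiation using a precomputed blinding pair; it is not one of the paper's constructions, gives no verifiability against a cheating server, and uses toy group parameters.

import secrets

# Toy parameters (illustrative only): arithmetic in Z_p^*, exponents mod p-1.
p = (1 << 127) - 1          # a Mersenne prime
g = 3
order = p - 1

def precompute():
    """Offline phase: the device stores one random pair (r, g^r)."""
    r = secrets.randbelow(order)
    return r, pow(g, r, p)

def delegate_fixed_base_exp(a, pair, server):
    """Privacy-only delegation of g^a for a secret exponent a: the server only
    sees u = a - r mod (p-1), which is uniform and independent of a."""
    r, gr = pair
    u = (a - r) % order
    return (server(u) * gr) % p

honest_server = lambda u: pow(g, u, p)
a = secrets.randbelow(order)
assert delegate_fixed_base_exp(a, precompute(), honest_server) == pow(g, a, p)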
Strongly Leakage-Resilient Authenticated Key Exchange
Authenticated Key Exchange (AKE) protocols have been widely deployed in many real-world applications for securing communication channels. In this paper, we make the following contributions. First, we revisit the security modelling of leakage-resilient AKE protocols, and show that the existing models either impose some unnatural restrictions or do not sufficiently capture leakage attacks in reality. We then introduce a new strong yet meaningful security model, named the challenge-dependent leakage-resilient eCK (CLR-eCK) model, to capture challenge-dependent leakage attacks on both the long-term secret key and the ephemeral secret key (i.e., randomness). Second, we propose a general framework for constructing one-round CLR-eCK-secure AKE protocols based on smooth projective hash functions (SPHFs). This framework ensures that the session key is private and authentic even if the adversary learns a large fraction of both the long-term secret key and the ephemeral secret key, and hence provides a stronger security guarantee than existing AKE protocols, which become insecure if the adversary can perform leakage attacks during the execution of a session. Finally, we present a practical instantiation of the general framework based on the Decisional Diffie-Hellman assumption without random oracles. Our results show that the instantiation is efficient in terms of communication and computation overhead and captures more general leakage attacks.
Non-Malleable Codes for Bounded Depth, Bounded Fan-in Circuits
We show how to construct efficient, unconditionally secure non-malleable codes for bounded output locality. In particular, our scheme is resilient against tampering functions such that any output bit depends on at most $n^{\delta}$ bits, where $n$ is the total number of bits in a codeword and $0 \leq \delta < 1$ is a constant. Notably, this tampering class includes $\mathsf{NC}^0$.
A Formal Treatment of Backdoored Pseudorandom Generators
We provide a formal treatment of backdoored pseudorandom generators (PRGs). Here a saboteur chooses a PRG instance for which she knows a trapdoor that allows prediction of future (and possibly past) generator outputs. This topic was formally studied by Vazirani and Vazirani, but only in a limited form and not in the context of subverting cryptographic protocols. The latter has become increasingly important due to revelations about NIST's backdoored Dual EC PRG and new results about its practical exploitability using a trapdoor.
We show that backdoored PRGs are equivalent to public-key encryption schemes with pseudorandom ciphertexts. We use this equivalence to build backdoored PRGs that avoid a well-known drawback of the Dual EC PRG, namely biases in outputs that an attacker can exploit without the trapdoor. Our results also yield a number of new constructions and an explanatory framework for why there are no reported observations in the wild of backdoored PRGs using only symmetric primitives.
We also investigate folklore suggestions for countermeasures to backdoored PRGs, which we call {\em immunizers}. We show that simply hashing PRG outputs is not an effective immunizer against an attacker that knows the hash function in use. Salting the hash, however, does yield a secure immunizer, a fact we prove using a surprisingly subtle proof in the random oracle model. We also give a proof in the standard model under the assumption that the hash function is a universal computational extractor (a recent notion introduced by Bellare, Hoang, and Keelveedhi).
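A minimal Python sketch of the salted-hash immunizer idea discussed above, with SHA-256 standing in for the hash function; this only illustrates the syntax of the countermeasure, not the paper's formal model or proofs.

import hashlib, os

def immunize(prg_output_blocks, salt):
    """Salted-hash immunizer sketch: hash each PRG output block together with a
    public random salt chosen after the (possibly backdoored) PRG was fixed.
    SHA-256 and the block/counter encoding are stand-in assumptions."""
    for i, block in enumerate(prg_output_blocks):
        yield hashlib.sha256(salt + i.to_bytes(8, "big") + block).digest()

salt = os.urandom(16)                                  # public, but unpredictable to the saboteur
suspect_blocks = [os.urandom(32) for _ in range(4)]    # stand-in for PRG outputs
clean_blocks = list(immunize(suspect_blocks, salt))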
Certificateless Aggregate Short Signature Scheme
An aggregate signature scheme aggregates multiple signatures into a single compact signature, a short string that can convince any verifier participating in the scheme. Aggregate signature schemes are very useful for real-world cryptographic applications such as secure routing and database outsourcing, where the signatures on several distinct messages generated by many distinct users need to be compact. In this paper, we present an aggregate signature scheme based on Certificateless Public Key Cryptography (CL-PKC). The scheme is provably secure and achieves both strong security and a short signature length. We prove that the scheme is existentially unforgeable under adaptive chosen-message attack, assuming the hardness of the computational Diffie-Hellman (CDH) problem.
A Fast Attribute Based Encryption
Our new Access Control Encryption (ACE) is an implementation of CP-ABE used as part of a key-delivery mechanism for an encrypted database, with a focus on improving performance. In ACE, the access policies are arbitrary predicates over the set of object attributes. Efficiency gains are most pronounced when the DNF representations of policies are compact. In ACE, within the life span of the keys, each user has to perform very few ABE decryptions, regardless of the number of policies accessible to her. Keys to most objects are then computed using only symmetric-key decryptions.
ACE is not the first scheme to use symmetric-key cryptography to reduce the number of CP-ABE operations when access policies form a multi-level partially ordered set. However, in addition to this significant saving, ACE also takes advantage of overlaps among the clauses of different policies, further reducing computational complexity.
Let $R$ denote the number of user roles, $N$ the number of object access policies, and $k$ the ratio between the cost of a CP-ABE encryption and a symmetric-key encryption (for 10 attributes, $k$ is about a million), and write $N = cR$. The gain factor of ACE over a competing hybrid system is $kc/(k+c)$. Usually $c \gg 1$, but in some systems it may happen that $c < 1$.
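A small Python helper evaluating the quoted gain factor kc/(k+c); the numbers below are illustrative assumptions, not measurements from the paper.

def ace_gain_factor(k, c):
    """Gain factor of ACE over a competing hybrid system, per the formula above:
    k is the CP-ABE/symmetric cost ratio and c = N/R."""
    return (k * c) / (k + c)

print(ace_gain_factor(k=1_000_000, c=100))   # roughly c when 1 << c << k
print(ace_gain_factor(k=1_000_000, c=0.5))   # a gain below 1 is possible when c < 1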
ACE is composed of two subsystems encrypting the same messages: a CP-ABE scheme and a symmetric-key encryption scheme. We prove that ACE is secure under a new Uniform Security Game that we propose and justify, assuming that its building blocks, namely CP-ABE and block ciphers, are secure. We require that CP-ABE be secure under the Selective Set Model, and that the block cipher be secure under Multi-User CPA, which we define.
We present Policy Encryption (PE), which can replace CP-ABE as a component of ACE. In many cases, PE is more efficient than CP-ABE. However, PE does not prevent collusions; instead, it limits them. PE is useful in those cases where owners can compartmentalize objects and subjects, so that within each compartment the owners can tolerate collusions. PE prevents inter-compartmental collusions. PE also has the following appealing properties: it relies on an older, and hence more reliable, intractability assumption, the Computational Diffie-Hellman assumption, whereas CP-ABE relies on the newer Bilinear Diffie-Hellman assumption; it uses off-the-shelf standard crypto building blocks with one small modification, with proven security; and for a small number of compartments, PE is much faster than CP-ABE. PE and CP-ABE can coexist in the same system, where ABE is used in high-security compartments.
We apply ACE to a practical financial example, the Consolidated Audit Trail (CAT), which is expected to become the largest repository of financial data in the world.
From Obfuscation to the Security of Fiat-Shamir for Proofs
The Fiat-Shamir paradigm [CRYPTO'86] is a heuristic for converting three-round identification schemes into signature schemes, and more generally, for collapsing rounds in constant-round public-coin interactive protocols. This heuristic is very popular both in theory and in practice, and its security has been the focus of extensive study.
In particular, this paradigm was shown to be secure in the so-called Random Oracle Model. However, in the plain model, mainly negative results were shown. In particular, this heuristic was shown to be insecure when applied to computationally sound proofs (also known as arguments). Moreover, recently it was shown that even in the restricted setting where the heuristic is applied to interactive proofs (as opposed to arguments), its soundness cannot be proven via a black-box reduction to any so-called falsifiable assumption.
In this work, we give a positive result for the security of this paradigm in the plain model. Specifically, we construct a hash function for which the Fiat-Shamir paradigm is secure when applied to proofs (as opposed to arguments), assuming the existence of a sub-exponentially secure indistinguishability obfuscator, the existence of an exponentially secure input-hiding obfuscator for the class of multi-bit point functions, and the existence of a sub-exponentially secure one-way function.
While the hash function we construct is far from practical, we believe that this is a first step towards instantiations that are both more efficient and provably secure. In addition, we show that this result resolves a long-lasting open problem in the study of zero-knowledge proofs: it implies that there does not exist a public-coin constant-round zero-knowledge proof with negligible soundness (under the assumptions stated above).
A Polynomial-Time Attack on the BBCRS Scheme
The BBCRS scheme is a variant of the McEliece public-key encryption scheme where the hiding phase is performed by taking the inverse of a matrix of the form $T+R$, where $T$ is a sparse matrix with average row/column weight equal to a very small quantity $m$, usually $m<2$, and $R$ is a matrix of small rank $z \geq 1$. The rationale of this new transformation is the reintroduction of families of codes, like generalized Reed-Solomon codes, that are famously known for representing insecure choices. We present a key-recovery attack when $z=1$ and $m$ is chosen between $1$ and $1+R+O(1/\sqrt{n})$, where $R$ denotes the code rate. This attack has complexity $O(n^6)$ and breaks all the parameters suggested in the literature.
Constrained Pseudorandom Functions for Unconstrained Inputs
A constrained pseudorandom function (PRF) behaves like a standard PRF, but with the added feature that the (master) secret key holder, having secret key K, can produce a constrained key, K{f}, that allows evaluation of the PRF on all inputs satisfying the constraint f. Most existing constrained PRF constructions can handle only bounded-length inputs. In a recent work, Abusalah et al. [AFP14] constructed a constrained PRF scheme where constraints can be represented as Turing machines with unbounded inputs. Their proof of security, however, requires risky "knowledge type" assumptions such as public-coin differing-inputs obfuscation for circuits and SNARKs.
In this work, we construct a constrained PRF scheme for Turing machines with unbounded inputs under weaker assumptions, namely, the existence of indistinguishability obfuscation for circuits (and DDH).
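For readers unfamiliar with the basic notion, here is a minimal Python sketch of the classical GGM-style prefix-constrained PRF for bounded-length inputs; it illustrates what a constrained key is and is explicitly not the paper's Turing-machine construction. The use of SHA-256 as a length-doubling PRG is an illustrative assumption.

import hashlib

def G(seed, bit):
    """Toy PRG step: derive the left/right GGM child of a node (SHA-256 stand-in)."""
    return hashlib.sha256(seed + bytes([bit])).digest()

def ggm_eval(key, bits):
    """GGM PRF on a fixed-length bit string: walk the tree from the root."""
    s = key
    for bit in bits:
        s = G(s, bit)
    return s

def constrain_prefix(key, prefix):
    """Constrained key for the predicate 'input starts with prefix':
    simply the GGM node reached by that prefix."""
    return ggm_eval(key, prefix)

def eval_constrained(ckey, prefix, bits):
    """Evaluate with the constrained key; only works on inputs satisfying the constraint."""
    assert bits[:len(prefix)] == prefix
    return ggm_eval(ckey, bits[len(prefix):])

key = b"\x00" * 32
x = [1, 0, 1, 1]
assert ggm_eval(key, x) == eval_constrained(constrain_prefix(key, [1, 0]), [1, 0], x)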
Flush, Gauss, and Reload -- A Cache Attack on the BLISS Lattice-Based Signature Scheme
We present the first side-channel attack on a lattice-based signature scheme, using the FLUSH+RELOAD cache attack. The attack targets the discrete Gaussian sampler, an important step in the Bimodal Lattice Signature Scheme (BLISS). After observing only 450 signatures with a perfect side channel, an attacker is able to extract the secret BLISS key in less than 2 minutes, with a success probability of 0.96. Similar results are achieved in a proof-of-concept implementation using the FLUSH+RELOAD technique with fewer than 3500 signatures.
We show how to attack sampling from a discrete Gaussian using CDT or rejection sampling, by showing potential information leakage via cache memory. For both sampling methods, a strategy is given to use this additional information, finalize the attack, and extract the secret key.
We provide experimental evidence for the idealized perfect side-channel attacks and the FLUSH+RELOAD attack on two recent CPUs.
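A toy Python sketch of CDT-based discrete Gaussian sampling, included only to show why the table accesses are secret-dependent and hence observable through FLUSH+RELOAD; the table values are illustrative and not BLISS parameters.

import random, bisect

# Toy cumulative distribution table (CDT) for a small discrete Gaussian;
# the real BLISS sampler uses a much larger, higher-precision table.
CDT = [0.5, 0.74, 0.89, 0.96, 0.99, 1.0]   # illustrative values of P(|x| <= i)

def cdt_sample():
    """CDT sampling: draw u uniform in [0,1) and binary-search the table.
    The table entries touched depend on the sampled value, which is exactly
    what a cache attacker can observe."""
    u = random.random()
    mag = bisect.bisect_right(CDT, u)      # secret-dependent table accesses
    sign = random.choice((-1, 1))
    return sign * mag

print([cdt_sample() for _ in range(10)])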
Efficient Design Strategies Based on the AES Round Function
We show several constructions based on the AES round function that can be used as building blocks for MACs and authenticated encryption schemes. They are found by searching the space of all secure constructions following an efficient design strategy that has been shown to be among the best of all those considered.
We implement the constructions on the latest Intel processors. Our benchmarks show that on Intel Skylake the smallest construction runs at 0.188 c/B, while the fastest runs at only 0.125 c/B, i.e., five times faster than AES-128.
Reverse-Engineering of the Cryptanalytic Attack Used in the Flame Super-Malware
In May 2012, a highly advanced espionage malware dubbed Flame was found targeting the Middle East. As it turned out, it used a forged signature to infect Windows machines by MITM-ing Windows Update. Using counter-cryptanalysis, Stevens found that the forged signature was made possible by a chosen-prefix attack on MD5 \cite{DBLP:conf/crypto/Stevens13}. He uncovered some details that prove that this attack differs from collision attacks in the public literature, yet many questions about techniques and complexity remained unanswered.
In this paper, we demonstrate that significantly more information can be deduced from the example collision: these details are actually sufficient to reconstruct the collision attack to a great extent using some weak logical assumptions. In particular, we contribute an analysis of the differential path family for each of the four near-collision blocks, the procedure that eliminates the chaining value differences, and a complexity analysis of the near-collision block attacks and the associated birthday search for various parameter choices. Furthermore, we were able to prove a lower bound on the attack's complexity.
This reverse-engineering of a non-academic cryptanalytic attack exploited in the real world seems to be without precedent. As it allegedly was developed by some nation-state(s), we discuss potential insights to their cryptanalytic knowledge and capabilities.
A Unified Metric for Quantifying Information Leakage of Cryptographic Devices under Power Analysis Attacks
To design effective countermeasures for cryptosystems against side-channel power analysis attacks, the evaluation of the system leakage has to be lightweight and often carried out at an early stage, e.g., on the cryptographic algorithm or source code. When real implementations and power leakage measurements are not available, security evaluation has to rely on metrics for the information leakage of algorithms. In this work, we propose such a general and unified metric, the information leakage amount (ILA). ILA has several distinct advantages over existing metrics. It unifies the measure of information leakage across various attacks: first-order and higher-order DPA and CPA attacks. It works on algorithms with no masking protection or with perfect/imperfect masking countermeasures. It is explicitly connected to the success rates of attacks, the ultimate security metric for physical implementations. Therefore, we believe ILA is an accurate indicator of the side-channel security level of the physical system, and can be used effectively and efficiently during the countermeasure design stage for choosing the best countermeasure.
How to Sequentialize Independent Parallel Attacks?
We assume a scenario where an attacker can mount several independent attacks on a single CPU. Each attack can be run several times in independent ways. Each attack succeeds after a given number of steps with some known probability.
A natural question is what the optimal strategy is for running the steps of the attacks in sequence.
In this paper, we develop a formalism to tackle this problem.
When the number of attacks is infinite, we show that there is a magic number of steps m such that the optimal strategy is to run an attack for m steps and to try again with another attack until one succeeds. We also study the case of a finite number of attacks.
We describe this problem when the attacks are exhaustive key searches, but the result is more general.
We apply our result to the learning parity with noise (LPN) problem and to the password search problem. Although the optimal $m$ decreases as the distribution becomes more biased, we observe a phase transition in all cases: the decrease is very abrupt, from an $m$ corresponding to exhaustive search on a single target down to $m=1$, which corresponds to running a single step of the attack on each target. For all practical biased examples, we show that the best strategy is to use $m=1$. For LPN, this means guessing that the noise vector is $0$ and solving for the secret by Gaussian elimination, which is actually better than all variants of the Blum-Kalai-Wasserman (BKW) algorithm.
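A small Python sketch of how one might numerically find the magic number m for a given single-attack success distribution; the distribution used below is an illustrative assumption, not one of the paper's case studies.

def expected_cost(p, m):
    """Expected total number of steps when each fresh, independent attack is
    run for at most m steps and then abandoned; p[i] is the probability that a
    single attack succeeds exactly at step i+1."""
    Pm = sum(p[:m])                        # success probability within m steps
    if Pm == 0:
        return float("inf")
    per_trial = sum((i + 1) * p[i] for i in range(m)) + m * (1 - Pm)
    return per_trial / Pm

def optimal_m(p):
    """Brute-force the magic number of steps m minimizing the expected cost."""
    return min(range(1, len(p) + 1), key=lambda m: expected_cost(p, m))

# Illustrative biased distribution (assumption): success at step i with
# probability proportional to 2**-i, truncated and normalized.
weights = [2.0 ** -(i + 1) for i in range(20)]
total = sum(weights)
p = [w / total for w in weights]
print(optimal_m(p))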