All papers in 2015 (Page 11 of 1255 results)
Cryptanalysis of Three Certificate-Based Authenticated Key Agreement Protocols and a Secure Construction
Certificate-based cryptography is a new public-key cryptographic paradigm that has very appealing features, namely it simplifies the certificate management problem in traditional public key cryptography while eliminating the key escrow problem in identity-based cryptography. So far, three authenticated key agreement (AKA) protocols in the setting of certificate-based cryptography have been proposed in the literature. Unfortunately, none of them are secure under the public key replacement (PKR) attack. In this paper, we first present a security model for certificate-based AKA protocols that covers PKR attacks. We then examine the three existing certificate-based AKA protocols and present concrete attacks against each of them. To overcome the weaknesses in these protocols, we propose a new certificate-based AKA protocol and rigorously prove its security in the random oracle model. Performance comparison shows that the proposed protocol outperforms all the previous certificate-based AKA protocols.
A comprehensive analysis of game-based ballot privacy definitions
We critically survey game-based security definitions for the privacy of voting schemes. In addition to known limitations, we unveil several previously unnoticed shortcomings. Surprisingly, the conclusion of our study is that none of the existing definitions is satisfactory: they either provide only weak guarantees, or can be applied only to a limited class of schemes, or both.
Based on our findings, we propose a new game-based definition of privacy which we call BPRIV. We also identify a new property which we call {\em strong consistency}, needed to express that tallying does not leak sensitive information. We validate our security notions by showing that BPRIV, strong consistency (and an additional simple property called strong correctness) for a voting scheme imply its security in a simulation-based sense. This result also yields a proof technique for proving entropy-based notions of privacy which offer the strongest security guarantees but are hard to prove directly: first prove your scheme BPRIV, strongly consistent (and correct), then study the entropy-based privacy of the result function of the election, which is a much easier task.
Tornado Attack on RC4 with Applications to WEP and WPA
In this paper, we construct several tools for building and manipulating pools of statistical correlations in the analysis of RC4. We develop a theory to analyze these correlations in an optimized manner. We leverage this theory to mount several attacks on the IEEE 802.11 wireless communication protocols WEP and WPA. Based on several partial temporary key recovery attacks, we recover the full 128-bit temporary key of WPA by using $2^{42}$ packets. It works with complexity $2^{96}$. Then, we describe a distinguisher for WPA with complexity $2^{42}$ and advantage 0.5 which uses $2^{42}$ packets. Moreover, we report extremely fast and optimized active and passive attacks against WEP. This was achieved through an extensive amount of theoretical and experimental analysis (capturing WiFi packets), refinement and optimization of all the formerly known attacks and methodologies against RC4. Our theory is supported and verified by a patch on top of Aircrack-ng. Our new attack improves its success probability drastically. Our active attack, based on ARP injection, requires 22500 packets to gain a success probability of 50\% against a 104-bit WEP key, using Aircrack-ng in non-interactive mode. It runs in less than 5 seconds on an off-the-shelf PC. Using the same number of packets, Aircrack-ng yields around a 3\% success rate. Furthermore, we describe very fast passive-only attacks by eavesdropping on TCP/IPv4 packets in a WiFi communication. Our passive attack requires 27500 packets. This is much less than the number of packets Aircrack-ng requires in active mode (around 37500), which is a significant improvement. We believe that our analysis brings further insight into the security of RC4.
Stability and Linearization of Multi-valued Nonlinear Feedback Shift Registers
In this paper, we study the stability and linearization of multi-valued nonlinear feedback shift registers (NFSRs), which are considered as logical networks. First, the linearization of multi-valued NFSRs is discussed, that is, finding their state transition matrices by viewing them as logical networks via a semi-tensor product approach. For a multi-valued NFSR, the new state transition matrix, which can be computed directly from the truth table of its feedback function, is more explicit. Second, based on this linearization theory, we investigate the stability of multi-valued NFSRs and provide some necessary and sufficient conditions for globally (locally) stable multi-valued NFSRs. Finally, some examples are presented to show the effectiveness of the proposed results.
Linearization of Multi-valued Nonlinear Feedback Shift Registers
The linearization of nonlinear feedback shift registers (NFSRs) is to find their state transition matrices. In this paper, we investigate the linearization of multi-valued NFSRs by considering them as logical networks via a semi-tensor product approach. A new state transition matrix is found for a multi-valued NFSR, which can be computed directly from the truth table of its feedback function and is easier to compute and more explicit. First, a linear representation of a multi-valued NFSR is given, based on which several necessary and sufficient conditions for nonsingularity are derived. Then, some properties of the state transition matrices are provided, which are helpful for the theoretical analysis of NFSRs. Finally, we give properties of a maximum-length multi-valued NFSR and the linear representation of the general structure of an n-bit shift register with updating functions.
How to Construct UC-Secure Searchable Symmetric Encryption Scheme
A searchable symmetric encryption (SSE) scheme allows a client to store a set of encrypted files on an untrusted server in such a way that he can efficiently retrieve some of the encrypted files containing (or indexed by) specific keywords keeping the keywords and the files secret.
In this paper, we first extend the model of SSE schemes to that of verifiable SSE schemes, and formulate the UC security. We then prove its weak equivalence with privacy and reliability. Finally we show an efficient verifiable SSE scheme which is UC-secure.
Design and Analysis of Information-Theoretically Secure Authentication Codes with Non-Uniformly Random Keys
The authentication code (A-code) is one of the most fundamental cryptographic protocols in information-theoretic cryptography: it provides information-theoretic integrity or authenticity, i.e., it prevents information from being altered or substituted by an adversary with unbounded computational power. In addition, it has a wide range of applications such as multiparty computation and quantum key distribution protocols. The traditional A-code theory states that a good A-code is one which meets a lower bound on the size of secret keys with equality, i.e., an A-code satisfying |K|=\epsilon^{-2}, where |K| is the cardinality of the set of secret keys and \epsilon is the success probability of the adversary's attacks. However, good A-codes require that secret keys be uniformly distributed. Therefore, if only a non-uniformly random key is available, we cannot realize a good A-code by using it as a secret key. Two natural questions then arise: what is a good A-code with non-uniformly random keys, and how can we design such a good A-code? To answer these questions, in this paper we analyze A-codes with non-uniformly random keys and establish the principle that guides the design of such good A-codes.
Specifically, the contribution of this paper is as follows. We first derive a new lower bound on the entropy of secret keys, described in terms of Rényi entropy. Next, we define a good A-code with non-uniformly random keys as one that meets this bound with equality; it is characterized by the min-entropy (a special case of Rényi entropy). Furthermore, we introduce a classification methodology for A-codes which are realizable from a biased key source. This classification is performed with a mathematical tool, namely a group action on the set of authentication matrices. Through this analysis, we can understand what kinds of A-codes are actually constructible.
Finally, we show how to construct good A-codes for 1-bit messages from von Neumann sources. We also show that our construction methodology is superior to applying von Neumann extractors followed by the traditional optimal A-code constructions. Although the case of 1-bit messages is restricted, it is simple, and we believe that the general case will develop from this simple case.
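For orientation, the classical bound referred to above can be written out explicitly; this is the standard A-code bound in our own notation, not necessarily the paper's exact statement:

    \[
      P_I \cdot P_S \;\ge\; \frac{1}{|K|},
      \qquad
      \epsilon := \max(P_I, P_S)
      \;\Longrightarrow\;
      |K| \;\ge\; \epsilon^{-2},
    \]

where $P_I$ and $P_S$ are the impersonation and substitution success probabilities. Achieving $\epsilon = 2^{-t}$ thus requires at least $2t$ bits of uniformly random key, and a classically "good" A-code meets this with equality; the question raised above is what replaces this bound when the key is only non-uniformly random.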
Improved (Hierarchical) Inner-Product Encryption from Lattices
Inner-product encryption (IPE) provides fine-grained access control and has attractive applications. Agrawal, Freeman, and Vaikuntanathan~(Asiacrypt 2011) proposed the first IPE scheme from lattices by twisting the identity-based encryption (IBE) scheme by Agrawal, Boneh, and Boyen~(Eurocrypt 2010). Their IPE scheme supports inner-product predicates over $R^{\mu}$, where the ring is $R = \mathbb{Z}_q$. Several applications require the ring $R$ to be exponentially large and, thus, they set $q = 2^{O(n)}$ to implement such applications. This choice results in the AFV IPE scheme with public parameters of size $O(\mu n^2 \lg^3{q}) = O(\mu n^5)$ and ciphertexts of size $O(\mu n \lg^3{q}) = O(\mu n^4)$, where $n$ is the security parameter. Hence, this makes the scheme impractical, as they noted.
We address this efficiency issue by ``untwisting'' their twist and providing another twist. Our scheme supports inner-product predicates over $R^\mu$ where $R = \mathrm{GF}(q^n)$ instead of $\mathbb{Z}_q$. Our scheme has public parameters of size $O(\mu n^2 \lg^2{q})$ and ciphertexts of size $O(\mu n \lg^2{q})$. Since the cardinality of $\mathrm{GF}(q^n)$ is inherently exponential in $n$, we have no need to set $q$ as the exponential size for applications.
As side contributions, we extend our IPE scheme to a hierarchical IPE (HIPE) scheme and propose a fuzzy IBE scheme from IPE. Our HIPE scheme is more efficient than that developed by Abdalla, De Caro, and Mochetti (Latincrypt 2012). Our fuzzy IBE is secure under a much weaker assumption than that employed by Agrawal et al.~(PKC 2012), who constructed the first lattice-based fuzzy IBE scheme.
Verifiably Encrypted Signatures with Short Keys based on the Decisional Linear Problem and Obfuscation for Encrypted VES
Verifiably encrypted signatures (VES) are signatures encrypted by a public key of a trusted third party and we can verify their validity without decryption. This paper proposes a new VES scheme which is secure under the decisional linear (DLIN) assumption in the standard model. We also propose new obfuscators for encrypted signatures (ES) and encrypted VES (EVES) which are secure under the DLIN assumption.
All previous efficient VES schemes in the standard model are either secure under standard assumptions (such as the computational Diffie-Hellman assumption) with large verification (or secret) keys or secure under \emph{(non-standard) dynamic $q$-type assumptions} (such as the $q$-strong Diffie-Hellman extraction assumption) with short verification keys. Our construction is the first efficient VES scheme with short verification (and secret) keys secure under \emph{a standard assumption (DLIN)}.
As by-products, we construct new obfuscators for ES/EVES based on our new VES scheme. They are more efficient than previous obfuscators with respect to the public key size. Previous obfuscators for EVES are secure under a non-standard assumption and use zero-knowledge (ZK) proof systems with the Fiat-Shamir heuristic to obtain non-interactive ZK, i.e., their security is considered in the random oracle model. Thus, our construction also has an advantage with respect to assumptions and security models. Our new obfuscator for ES is obtained from our new obfuscator for EVES.
Subgroup security in pairing-based cryptography
Pairings are typically implemented using ordinary pairing-friendly elliptic curves. The two input groups of the pairing function are groups of elliptic curve points, while the target group lies in the multiplicative group of a large finite field. At moderate levels of security, at least two of the three pairing groups are necessarily proper subgroups of a much larger composite-order group, which makes pairing implementations potentially susceptible to small-subgroup attacks.
To minimize the chances of such attacks, or the effort required to thwart them, we put forward a property for ordinary pairing-friendly curves called subgroup security. We point out that existing curves in the literature and in publicly available pairing libraries fail to achieve this notion, and propose a list of replacement curves that do offer subgroup security. These curves were chosen to drop into existing libraries with minimal code change, and to sustain state-of-the-art performance numbers. In fact, there are scenarios in which the replacement curves could facilitate faster implementations of protocols because they can remove the need for expensive group exponentiations that test subgroup membership.
Implicit Zero-Knowledge Arguments and Applications to the Malicious Setting
We introduce \emph{implicit zero-knowledge} arguments (iZK) and simulation-sound variants thereof (SSiZK); these are lightweight alternatives to zero-knowledge arguments for enforcing semi-honest behavior. Our main technical contribution is a construction of efficient two-flow iZK and SSiZK protocols for a large class of languages under the (plain) DDH assumption in cyclic groups in the common reference string model. As an application of iZK, we improve upon the round-efficiency of existing protocols for securely computing inner product under the DDH assumption. This new protocol in turn provides privacy-preserving biometric authentication with lower latency.
Practical Attacks on the Round-reduced PRINCE
The PRINCE cipher is the result of a cooperation between the Technical University of Denmark (DTU), NXP Semiconductors and the Ruhr University Bochum. The cipher was designed for extremely low-latency encryption and instant response time. PRINCE has already gained a lot of attention from the academic community; however, most of the attacks are theoretical, usually with very high time or data complexity. Our work helps to fill the gap with more practically oriented attacks, under more realistic scenarios and complexities. We present new attacks, on up to 7 rounds, relying on integral and higher-order differential cryptanalysis.
Internal Differential Boomerangs: Practical Analysis of the Round-Reduced Keccak-f Permutation
We introduce the internal differential boomerang distinguisher as a combination of internal differentials and classical boomerang distinguishers. The new boomerangs can be successful against cryptographic primitives having high-probability round-reduced internal differential characteristics. The internal differential technique, which follows the evolution of differences between parts of the state, is particularly meaningful for highly symmetric functions like the inner permutation Keccak-f of the hash functions defined in the future SHA-3 standard.
We find internal differential and standard characteristics for three to four rounds of Keccak-f, and with the use of the new technique, enhanced with a strong message modification, show practical distinguishers for this permutation.
Namely, we need $2^{12}$ queries to distinguish 7 rounds of the permutation starting from the first round, and approximately $2^{18}$ queries to distinguish 8 rounds starting from the fourth round.
Due to the exceptionally low complexities, all of our results have been completely verified with a computer implementation of the analysis.
Reliable communication via semilattice properties of partial knowledge
A fundamental communication primitive in distributed computing is Reliable Message Transmission (RMT), which refers to the task of correctly sending a message from a party to another, despite the presence of Byzantine corruptions. In this work we address the problem
in the general adversary model of Hirt and Maurer [5], which subsumes earlier models such as the global or local threshold adversaries. Regarding the topology knowledge, we employ the recently introduced Partial Knowledge Model [12], which encompasses both the full knowledge and the ad hoc model; the latter assumes knowledge of the local neighborhood only.
Our main contribution is a tight condition for achieving RMT in the partial knowledge model under a general adversary. A key algorithmic tool that we define and use is the joint view operation which imposes a semilattice structure on the partial knowledge possessed by different parties. In this context, we prove that the worst possible adversary structure, conforming with the initial knowledge of a set of parties, can be expressed as the supremum of the parties’ knowledge under the semilattice partial order. The new operation allows for
the definition of an appropriate network separator notion that yields a necessary condition for achieving RMT. In order to show the sufficiency of the condition, we propose the RMT Partial Knowledge Algorithm (RMT-PKA), an algorithm that also employs the joint view
operation to solve RMT whenever the condition is met. This implies that RMT-PKA achieves reliable message transmission in every instance where this is possible, therefore it is a unique algorithm [13]. To the best of our knowledge, this is the first unique protocol for RMT against general adversaries in the partial knowledge model. Due to the generality of the model, our results provide, for any level of topology knowledge and any adversary structure, an exact characterization of instances where RMT is possible and an algorithm to achieve RMT on such instances.
Compactly Hiding Linear Spans: Tightly Secure Constant-Size Simulation-Sound QA-NIZK Proofs and Applications
Quasi-adaptive non-interactive zero-knowledge (QA-NIZK) proofs is a recent paradigm, suggested by Jutla and Roy (Asiacrypt'13), which is motivated by the Groth-Sahai seminal techniques for efficient non-interactive zero-knowledge (NIZK) proofs. In this paradigm, the common reference string may depend on specific language parameters, a fact that allows much shorter proofs in important cases. It even makes certain standard model applications competitive with the Fiat-Shamir heuristic in the Random Oracle idealization. Such QA-NIZK proofs were recently optimized to constant size by Jutla and Roy (Crypto'14) and Libert et al. (Eurocrypt'14) for the important case of proving that a vector of group elements belongs to a linear subspace.
While the QA-NIZK arguments of Libert et al. provide unbounded simulation-soundness and constant proof length, their simulation-soundness is only loosely related to the underlying assumption (with a gap proportional to the number of adversarial queries) and it is unknown how to alleviate this limitation without sacrificing efficiency.
In this paper, we deal with the question of whether we can simultaneously optimize the proof size and the tightness of security reductions, allowing for important applications with tight security (which are typically quite lengthy) to be of shorter size. We resolve this question by designing a novel simulation-sound QA-NIZK argument showing that a vector $\vec{v} \in \G^n$ belongs to a subspace of rank $t <n$ using a constant number of group elements. Unlike previous short QA-NIZK proofs of such statements, the unbounded simulation-soundness of our system is nearly tightly related (i.e., the reduction only loses a factor proportional to the security parameter) to the standard Decision Linear assumption.
To show simulation-soundness in the constrained context of tight reductions, we explicitly point at a technique -- which may be of independent interest -- of hiding the linear span of a vector defined by a signature (which is part of an OR proof).
As an application, we design a public-key cryptosystem with almost tight CCA2-security in the multi-challenge, multi-user setting with improved length (asymptotically optimal for long messages). We also adapt our scheme to provide CCA security in the key-dependent message scenario (KDM-CCA2) with ciphertext length reduced by 75% when compared to the best known tightly secure KDM-CCA2 system so far.
Espresso: A Stream Cipher for 5G Wireless Communication Systems
The demand for more efficient ciphers is likely to sharpen with the new generation of products and applications. Previous cipher designs typically focused on optimizing only one of the two parameters, hardware size or speed, for a given security level. In this paper, we present a methodology for designing a class of stream ciphers which takes both parameters into account simultaneously. We combine the advantage of the Galois configuration of NLFSRs, short propagation delay, with the advantage of the Fibonacci configuration of NLFSRs, which can be analyzed formally. According to our analysis, the presented stream cipher Espresso is the fastest among the ciphers below 1500 GE, including Grain-128 and Trivium.
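The Fibonacci/Galois distinction mentioned above can be illustrated with a toy feedback shift register; the register size and feedback functions below are arbitrary illustrations and are not Espresso's.

    # Toy illustration of Fibonacci vs Galois feedback shift register configurations.
    # Fibonacci: one "wide" feedback function reads many stages and updates only the
    # last stage (long critical path, but easy to analyze formally).
    # Galois: several stages are updated in parallel by small local feedback functions
    # (short propagation delay). The taps below are made up for illustration only.

    def fibonacci_step(state):
        out = state[0]
        fb = state[0] ^ state[3] ^ (state[1] & state[5])  # single wide feedback function
        return state[1:] + [fb], out

    def galois_step(state):
        out = state[0]
        new = state[1:] + [out]          # shift
        new[2] ^= out & state[4]         # small local feedback taps,
        new[5] ^= out                    # each updating its own stage
        return new, out

    state_f = [1, 0, 1, 1, 0, 0, 1, 0]
    state_g = list(state_f)
    for _ in range(5):
        state_f, bit_f = fibonacci_step(state_f)
        state_g, bit_g = galois_step(state_g)
        print(bit_f, bit_g)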
Differential Analysis and Meet-in-the-Middle Attack against Round-Reduced TWINE
TWINE is a recent lightweight block cipher based on a Feistel
structure. We first present two new attacks on TWINE-128
reduced to 25 rounds that have a slightly higher overall complexity than the 25-round attack presented by Wang and Wu at ACISP 2014, but a lower data complexity.
Then, we introduce alternative representations of both the round
function of this block cipher and of a sequence of 4 rounds. LBlock,
another lightweight block cipher, turns out to exhibit the same
behaviour. Then, we illustrate how this alternative representation
can shed new light on the security of TWINE by deriving high
probability iterated truncated differential trails covering 4 rounds
with probability $2^{-16}$.
The importance of these trails is shown by combining different truncated differential trails to attack 23-round TWINE-128 and by giving a tighter lower bound on the probability of some differentials by clustering differential characteristics following one of these truncated trails. A comparison between these high-probability differentials and those recently found in a variant of LBlock by Leurent highlights the importance of considering the whole distribution of the coefficients in the difference distribution table of an S-box and not only their maximum value.
Meet-in-the-Middle Attacks and Structural Analysis of Round-Reduced PRINCE
NXP Semiconductors and its academic partners challenged the
cryptographic community with finding practical attacks on the block
cipher they designed, PRINCE. Instead of trying to attack as many
rounds as possible using attacks which are usually impractical
despite being faster than brute-force, the challenge invites
cryptographers to find practical attacks and encourages them to
actually implement them.
In this paper, we present new attacks on round-reduced PRINCE including the ones which won the challenge in the 4,
6 and 8-round categories --- the highest for which winners were
identified. Our first attacks rely on a meet-in-the-middle approach and break up to 10 rounds of the cipher.
We also describe heuristic methods we used to find practical SAT-based and differential attacks.
Finally, we present an analysis of the cycle structure of the internal rounds of PRINCE, leading both to a low-complexity distinguisher for 4-round PRINCE-core and to an alternative representation of the cipher which is valid in particular contexts and which highlights, in these cases, a poor diffusion.
One Time Programs with Limited Memory
We reinvestigate a notion of {\em one-time programs} introduced in the CRYPTO 2008 paper by Goldwasser {\it et~al.} A one-time program is a device containing a program $C$, with the property that the program $C$ can be executed on at most one input. Goldwasser {\it et~al.}~show how to implement one-time programs on devices equipped with special hardware gadgets called {\em one-time memory} tokens.
We provide an alternative construction that does not rely on the hardware gadgets. Instead, it is based on the following assumptions: (1) the total amount of data that can leak from the device is bounded, and (2) the total memory on the device (available both to the honest user and to the attacker) is also restricted, which is essentially the model used recently by Dziembowski {\it et al.}\ (TCC 2011, CRYPTO 2011) to construct one-time computable {\em pseudorandom} functions and key-evolution schemes.
Fast Revocation of Attribute-Based Credentials for Both Users and Verifiers
Attribute-based credentials allow a user to prove properties about herself anonymously. Revoking such credentials, which requires singling them out, is hard because it is at odds with anonymity. All revocation schemes proposed to date either sacrifice anonymity altogether, require the parties to be online, or put high load on the user or the verifier. As a result, these schemes are either too complicated for low-powered devices like smart cards or they do not scale. We propose a new revocation scheme that has a very low computational cost for users and verifiers, and does not require users to process updates. We trade only a limited, but well-defined, amount of anonymity to make the first practical revocation scheme that is efficient at large scales and fast enough for smart cards.
Key Recovery from State Information of Sprout: Application to Cryptanalysis and Fault Attack
Design of secure light-weight stream ciphers is an important area in cryptographic hardware & embedded systems, and a very recent design by Armknecht and Mikhalev (FSE 2015) has received serious attention; it uses a shorter internal state and still claims to resist time-memory-data-tradeoff (TMDTO) attacks. An instantiation of this design paradigm is the stream cipher named Sprout, with an 80-bit secret key. In this paper we cryptanalyze the cipher and refute various claims. The designers claim that the secret key of Sprout cannot be recovered efficiently from the complete state information using a guess-and-determine attack. However, in this paper, we show that it is possible with a few hundred bits in practical time. More importantly, from around 850 key-stream bits, complete knowledge of the NFSR (40 bits) and partial knowledge of the LFSR (around one third, i.e., 14 bits), we can obtain all the secret key bits. This cryptanalyzes Sprout with 2^{54} attempts (considering the constant time complexity required by the SAT solver in each attempt, which is around 1 minute on a laptop). This is less than exhaustive key search. Further, we show how related ideas can be employed to mount a fault attack against Sprout that requires around 120 faults at random locations (20 faults, if the locations are known), whereas the designers claim that such a fault attack may not be possible. Our cryptanalytic results raise quite a few questions about this design paradigm in general, which should be revisited with greater care.
Performance and Security Improvements for Tor: A Survey
Tor [Dingledine et al. 2004] is the most widely used anonymity network today, serving millions of users on a daily basis using a growing number of volunteer-run routers. Since its deployment in 2003, there have been more than three dozen proposals that aim to improve its performance, security, and unobservability. Given the significance of this research area, our goal is to provide the reader with the state of current research directions and challenges in anonymous communication systems, focusing on the Tor network. We shed light on the design weaknesses and challenges facing the network and point out unresolved issues.
Collision Attack on 4-branch, Type-2 GFN based Hash Functions using Sliced Biclique Cryptanalysis Technique
In this work, we apply the sliced biclique cryptanalysis technique to show an 8-round collision attack on a hash function H based on a 4-branch, Type-2 Generalized Feistel Network (Type-2 GFN). This attack is generic and works on 4-branch, Type-2 GFNs with any parameters, including the block size, the type of round function, the number of S-boxes in each round, and the number of SP layers inside the round function. We first construct an 8-round distinguisher on the 4-branch, Type-2 GFN and then use this distinguisher to launch an 8-round collision attack on compression functions based on the Matyas-Meyer-Oseas (MMO) and Miyaguchi-Preneel (MP) modes. The complexity of the attack on a 128-bit compression function is 2^56. The attack translates directly to a collision attack on MP- and MMO-based hash functions and to a pseudo-collision attack on Davies-Meyer (DM) based hash functions. When the round function F is instantiated with a double SP layer, we show the first 8-round collision attack on compression functions based on 4-branch, Type-2 GFN with a double SP layer. The previous best attack on this structure was a 6-round near-collision attack shown by Sasaki at Indocrypt'12. His attack cannot be used to generate full collisions on 6 rounds, and hence our result can be regarded as the best so far in the literature on this structure.
Election Verifiability: Cryptographic Definitions and an Analysis of Helios, Helios-C, and JCJ
Election verifiability is defined in the computational
model of cryptography. The definition formalizes
notions of voters verifying their own votes, auditors
verifying the tally of votes, and auditors verifying that
only eligible voters vote.
The Helios (Adida et al., 2009), Helios-C (Cortier et al., 2014) and
JCJ (Juels et al., 2010) election schemes are analyzed using the definition.
Neither Helios nor Helios-C satisfies the definition
because they do not ensure that recorded ballots are
tallied in certain cases when the adversary posts malicious material on the bulletin board.
A variant of Helios is proposed and shown to satisfy the definition.
JCJ similarly does not ensure that recorded ballots are tallied in certain cases.
Moreover, JCJ does not ensure that only eligible voters vote, due to a trust assumption it makes.
A variant of JCJ is proposed and shown to satisfy a weakened definition
that incorporates the trust assumption.
Previous definitions of verifiability (Juels et al., 2010; Cortier et al., 2014; Kiayias et al., 2015)
and definitions of global verifiability (Kuesters et al., 2010; Cortier et al., 2016)
are shown to permit election schemes vulnerable to attacks, whereas the new definition
prohibits those schemes.
Finally, a relationship between the new definition and a variant of global verifiability is shown.
Cryptanalysis of Full Sprout
A new method for reducing the internal state size of stream cipher registers was proposed at FSE 2015, allowing a reduction of the area in hardware implementations. Along with it, an instantiated proposal of a cipher was also presented: Sprout. In this paper, we analyze the security of Sprout, and we propose an attack that recovers the whole key more than $2^{10}$ times faster than exhaustive search and has very low data complexity. The attack can be seen as an evolved divide-and-conquer technique that exploits the non-linear influence of the key bits on the update function. We have implemented the attack on a toy version of Sprout that conserves the main properties exploited in the attack. The attack completely matches the expected complexities predicted by our theoretical cryptanalysis, which proves its validity. We believe that our attack shows that a more careful analysis should be done in order to instantiate the proposed design method.
A Related-Key Chosen-IV Distinguishing Attack on Full Sprout Stream Cipher
Sprout is a new lightweight stream cipher proposed at FSE 2015.
According to its designers, Sprout can resist time-memory-data trade-off (TMDTO) attacks with small internal state size.
However, we find a weakness in the updating functions of Sprout and propose related-key chosen-IV distinguishing attacks on full Sprout.
Under the related-key setting, our attacks enable the adversary to detect non-randomness on full 320-round Sprout with a practical complexity of $\tilde{O}(2^4)$ and find collisions in 256 output bits of full Sprout with a complexity of $\tilde{O}(2^7)$.
Furthermore, when considering possible remedies, we find that merely modifying the updating functions and the output function is unlikely to equip Sprout with better resistance against this kind of distinguisher.
Therefore, structural modifications by the designers are necessary.
W-SPS: Designing a Wide-Area Secure Positioning System
Motivated by the security and functional limitations of satellite positioning systems, we explore a design of a Wide-Area Secure Positioning System. The main goals of this system are strong spoofing resilience, location verification, and privacy. We propose a realization of a Wide-Area Secure Positioning System and show that this solution is viable and can fulfill our defined security goals. Specifically, our system is composed of a secure positioning infrastructure to obtain reliable location information of an entity and a location verification architecture that allows others to be convinced of certain location properties of such an entity. The proposed system enables the verification of location claims in a privacy-preserving manner, thus enhancing existing security solutions and supporting future location-based applications.
Improving GGH Public Key Scheme Using Low Density Lattice Codes
The Goldreich-Goldwasser-Halevi (GGH) public key cryptosystem is an instance of lattice-based cryptosystems whose security is based on the hardness of lattice problems. In fact, the GGH cryptosystem is the lattice analogue of the first code-based cryptosystem, proposed by McEliece. However, it has a number of drawbacks, such as a large public key length and a low security level. On the other hand, Low Density Lattice Codes (LDLCs) are a practical class of lattice codes which can achieve capacity on the additive white Gaussian noise (AWGN) channel with a low-complexity decoding algorithm. This paper introduces a public key cryptosystem based on LDLCs to overcome the drawbacks of the GGH cryptosystem. To reduce the key length, we employ the generator matrix of the used LDLC in Hermite normal form (HNF) as the public key. Also, by exploiting the linear decoding complexity of the used LDLC, the decryption complexity is decreased compared with the GGH cryptosystem. These efficiency gains allow us to use larger values of the security parameters. Moreover, we use as the perturbation vector a special Gaussian vector whose variance is upper bounded by the Poltyrev limit. These techniques make the proposed scheme resistant to the most efficient attacks on GGH-like cryptosystems.
Leakage-Resilient Cryptography with Key Derived from Sensitive Data
In this paper we address the problem of large space consumption for protocols in the Bounded Retrieval Model (BRM), which require users to store large secret keys subject to adversarial leakage.
We propose a method to derive keys for such protocols on-the-fly from weakly random private data (like text documents or photos that users keep on their disks anyway for non-cryptographic purposes) in such a way that no extra storage is needed. We prove that any leakage-resilient protocol (belonging to a certain, arguably quite broad class), when run with a key obtained this way, retains a similar level of security as the original protocol had. Additionally, we guarantee privacy of the data the actual keys are derived from. That is, an adversary can hardly gain any knowledge about the private data beyond what he could otherwise obtain via leakage. Our reduction works in the Random Oracle model.
As an important tool in the proof we use a newly established bound for min-entropy, which can be of independent interest. It may be viewed as an analogue of the chain rule -- a weaker form of the well-known formula $\mathbf{H}(X \vert Y) = \mathbf{H}(X, Y) - \mathbf{H}(Y)$ for random variables $X$, $Y$, and Shannon entropy, which our result originates from. For min-entropy only a much more limited version of this relation is known to hold. Namely, the min-entropy of $X$ may decrease by up to the bitlength of $Y$ when $X$ is conditioned on $Y$, in short: $\widetilde{\mathbf{H}}_\infty(X \vert Y) \geq \mathbf{H}_\infty(X) - \lvert Y \rvert$.
In many cases this inequality does not offer tight bounds, and such significant entropy loss makes it inadequate for our particular application. In the quasi chain rule we propose, we inject some carefully crafted side information (spoiling knowledge) to show that, with large probability, the average min-entropy of $X$ conditioned on both $Y$ and this side information can be almost lower bounded by the min-entropy of $(X, Y)$ decreased by the min-entropy of $Y$ conditioned on the side information.
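To make the analogy concrete, the three relations discussed above can be displayed side by side; the last line only sketches the shape of the quasi chain rule in informal notation, with $Z$ denoting the spoiling-knowledge side information (the exact statement is in the paper):

    \[
      \mathbf{H}(X \mid Y) \;=\; \mathbf{H}(X, Y) - \mathbf{H}(Y)
      \qquad \text{(Shannon chain rule)}
    \]
    \[
      \widetilde{\mathbf{H}}_\infty(X \mid Y) \;\ge\; \mathbf{H}_\infty(X) - \lvert Y \rvert
      \qquad \text{(known min-entropy bound, with loss up to } \lvert Y \rvert \text{)}
    \]
    \[
      \widetilde{\mathbf{H}}_\infty(X \mid Y, Z) \;\gtrsim\; \mathbf{H}_\infty(X, Y) - \widetilde{\mathbf{H}}_\infty(Y \mid Z)
      \qquad \text{(shape of the quasi chain rule)}
    \]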
Tradeoff Cryptanalysis of Memory-Hard Functions
We explore time-memory and other tradeoffs for memory-hard functions, which are supposed to impose significant computational and time penalties if less memory is used than intended. We analyze three finalists of the Password Hashing Competition: Catena, which was presented at Asiacrypt 2014, \textsf{yescrypt} and Lyra2.
We demonstrate that Catena's proof of tradeoff resilience is flawed, and attack it with a novel \emph{precomputation tradeoff}. We show that using $M^{4/5}$ memory instead of $M$ we have no time penalties and reduce the AT cost by a factor of 25. We further generalize our method for a wide class of schemes with predictable memory access.
For a wide class of data-dependent schemes, which address memory unpredictably, we develop a novel \emph{ranking tradeoff} and show how to decrease the time-memory and the time-area product by significant factors. We then apply our method to yescrypt and Lyra2, also exploiting the iterative structure of their internal compression functions.
The designers confirmed our attacks and responded by adding a new mode for Catena and tweaking Lyra2.
Secure Physical Computation using Disposable Circuits
In a secure physical computation, a set of parties each have physical inputs and jointly compute a function of their inputs in a way that reveals no information to any party except for the output of the function. Recent work in CRYPTO’14 presented examples of physical zero-knowledge proofs of physical properties, a special case of secure physical two-party computation in which one party has a physical input and the second party verifies a boolean function of that input. While the work suggested a general framework for modeling and analyzing physical zero-knowledge protocols, it did not provide a general theory of how to prove any physical property with zero-knowledge. This paper takes an orthogonal approach using disposable circuits (DC)—cheap hardware tokens that can be completely destroyed after a computation—an extension of the familiar tamper-proof token model. In the DC model, we demonstrate that two parties can compute any function of their physical inputs in a way that leaks at most 1 bit of additional information to either party. Moreover, our result generalizes to any multi-party physical computation. Formally, our protocols achieve unconditional UC-security with input-dependent abort.
Bitwise Linear Mappings with Good Cryptographic Properties and Efficient Implementation
Linear mappings are crucial components of symmetric ciphers. A special type of linear mapping is the (0,1)-matrix, which has been used in symmetric ciphers such as ARIA, E2 and Camellia as a diffusion layer with efficient implementation. Bitwise linear maps are also used in symmetric ciphers such as the SHA family of hash functions and the HC family of stream ciphers. In this article, we investigate a special kind of linear mapping; based upon this study, we propose several linear mappings with only XOR and rotation operations. The corresponding matrices of these mappings can be used either in the former case as (0,1)-matrices of maximal branch number or in the latter case as linear mappings with good cryptographic properties. The proposed mappings and their corresponding matrices can be efficiently implemented both in software and in hardware.
GORAM -- Group ORAM for Privacy and Access Control in Outsourced Personal Records
Cloud storage has rapidly become a cornerstone of many IT infrastructures, constituting a seamless solution for the backup, synchronization, and sharing of large amounts of data. Putting user data in the direct control of cloud service providers, however, raises security and privacy concerns related to the integrity of outsourced data, the accidental or intentional leakage of sensitive information, the profiling of user activities and so on. Furthermore, even if the cloud provider is trusted, users having access to outsourced files might be malicious and misbehave. These concerns are particularly serious in sensitive applications like personal health records and credit score systems.
To tackle this problem, we present GORAM, a cryptographic system that protects the secrecy and integrity of outsourced data with respect to both an untrusted server and malicious clients, guarantees the anonymity and unlinkability of accesses to such data, and allows the data owner to share outsourced data with other clients, selectively granting them read and write permissions. GORAM is the first system to achieve such a wide range of security and privacy properties for outsourced storage. In the process of designing an efficient construction, we developed two new, generally applicable cryptographic schemes, namely, batched zero-knowledge proofs of shuffle and an accountability technique based on chameleon signatures, which we consider of independent interest. We implemented GORAM in Amazon Elastic Compute Cloud (EC2) and ran a performance evaluation demonstrating the scalability and efficiency of our construction.
New Distinguishers for Reduced Round Trivium and Trivia-SC using Cube Testers
In this paper we experiment with cube testers on reduced-round Trivium that can act as distinguishers. Using heuristics, we obtain several distinguishers for Trivium running more than 800 rounds (maximum 829) with cube sizes not exceeding 27. In the process, we also exploit state biases that have not been explored before. Further, we apply our techniques to analyse Trivia-SC, a stream cipher proposed by modifying the parameters of Trivium and used as a building block for TriviA-ck (an AEAD scheme submitted to the ongoing CAESAR competition). We obtain distinguishers up to 900 rounds of Trivia-SC with a cube size of only 21, and our results refute certain claims made by the designers. These are the best results reported so far, though our work does not affect the security claims for the ciphers with the full number of initialization rounds, namely 1152.
Towards Understanding the Known-Key Security of Block Ciphers
Known-key distinguishers for block ciphers were proposed by Knudsen and Rijmen at ASIACRYPT 2007 and have been a major research topic in cryptanalysis since then. A formalization of known-key attacks in general is known to be difficult. In this paper, we tackle this problem for the case of block ciphers based on ideal components such as random permutations and random functions as well as propose new generic known-key attacks on generalized Feistel ciphers. We introduce the notion of known-key indifferentiability to capture the security of such block ciphers under a known key. To show its meaningfulness, we prove that the known-key attacks on block ciphers with ideal primitives to date violate security under known-key indifferentiability. On the other hand, to demonstrate its constructiveness, we prove the balanced Feistel cipher with random functions and the multiple Even-Mansour cipher with random permutations known-key indifferentiable for a sufficient number of rounds. We note that known-key indifferentiability is more quickly and tightly attained by multiple Even-Mansour which puts it forward as a construction provably secure against known-key attacks.
Tighter, faster, simpler side-channel security evaluations beyond computing power
A Eurocrypt 2013 paper "Security evaluations beyond computing power: How to analyze side-channel attacks you cannot mount?" by Veyrat-Charvillon, Gérard, and Standaert proposed a "Rank Estimation Algorithm" (REA) to estimate the difficulty of finding a secret key given side-channel information from independent subkeys, such as the 16 key bytes in AES-128 or the 32 key bytes in AES-256. The lower and upper bounds produced by the algorithm are far apart for most key ranks. The algorithm can produce tighter bounds but then becomes exponentially slower; it also becomes exponentially slower as the number of subkeys increases.
This paper introduces two better algorithms for the same problem. The first, the "Extended Rank Estimation Algorithm" (EREA), is an extension of REA using statistical sampling as a second step to increase the speed of tightening the bounds on the rank. The second, the "Polynomial Rank Outlining Algorithm" (PRO), is a new approach to computing the rank. PRO can handle a much larger number of subkeys efficiently, is easy to implement in a computer-algebra system such as Sage, and produces much tighter bounds than REA in less time.
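As a toy illustration of the key-rank problem these algorithms address (all numbers below are made up, not taken from the paper): given per-subkey probabilities produced by a side-channel attack, the rank of the true key is the number of full-key candidates that are at least as likely. Exhaustive enumeration, as in this sketch, is only feasible for very small examples, which is exactly why estimation algorithms such as REA, EREA and PRO are needed for 16 subkeys of 256 candidates each.

    import itertools

    # Toy example: 2 subkeys with 4 candidates each (real targets: 16 subkeys x 256 candidates).
    # probs[i][k] = attack's probability that subkey i equals candidate k (made-up numbers).
    probs = [
        [0.50, 0.25, 0.15, 0.10],
        [0.40, 0.30, 0.20, 0.10],
    ]
    true_key = (1, 0)   # assume the correct subkeys are candidates 1 and 0

    def key_prob(key):
        p = 1.0
        for i, k in enumerate(key):
            p *= probs[i][k]
        return p

    # Rank = number of full keys whose probability is at least that of the true key.
    rank = sum(1 for key in itertools.product(range(4), repeat=2)
               if key_prob(key) >= key_prob(true_key))
    print("rank of true key:", rank)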
Key Homomorphic PRFs and Their Applications
A pseudorandom function F : K x X -> Y is said to be key homomorphic if, given F(k1, x) and F(k2, x), there is an efficient algorithm to compute F(k1 ⊕ k2, x), where ⊕ denotes a group operation on k1 and k2 such as xor. Key homomorphic PRFs are natural objects to study and have a number of interesting applications: they can simplify the process of rotating encryption keys for encrypted data stored in the cloud, they give one-round distributed PRFs, and they can be the basis of a symmetric-key proxy re-encryption scheme. Until now, all known constructions for key homomorphic PRFs were only proven secure in the random oracle model. We construct the first provably secure key homomorphic PRFs in the standard model. Our main construction is based on the learning with errors (LWE) problem. In the proof of security we need a variant of LWE where query points are non-uniform, and we show that this variant is as hard as standard LWE. We also construct key homomorphic PRFs based on the decision linear assumption in groups with an l-linear map. We leave as an open problem the question of constructing standard-model key homomorphic PRFs from more general assumptions.
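A minimal sketch of the key-rotation application mentioned above, using a deliberately toy (insecure) key-homomorphic function F(k, x) = k·H(x) mod p so that the algebra is visible; the group operation here is addition mod p rather than xor, the real constructions in the paper are LWE- or DLIN-based, and all names and parameters below are purely illustrative.

    import hashlib

    p = 2**127 - 1   # toy prime modulus, illustrative only

    def H(x: bytes) -> int:
        # toy hash into Z_p (just for the example, no security claim)
        return int.from_bytes(hashlib.sha256(x).digest(), "big") % p

    def F(k: int, x: bytes) -> int:
        # toy key-homomorphic function: F(k1 + k2, x) = F(k1, x) + F(k2, x) mod p
        return (k * H(x)) % p

    # "Encrypt" a record under key k1 by masking it with F(k1, nonce).
    k1, k2 = 1234567, 7654321
    nonce = b"record-42"
    message = 42
    ct = (message + F(k1, nonce)) % p

    # Key rotation from k1 to k2 without decrypting: the storage server only needs
    # the delta key k2 - k1 and updates the ciphertext via key homomorphism.
    delta = (k2 - k1) % p
    ct_rotated = (ct + F(delta, nonce)) % p

    assert (ct_rotated - F(k2, nonce)) % p == message
    print("rotation succeeded")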
Efficient Format Preserving Encrypted Databases
We propose storage-efficient SQL-aware encrypted databases that preserve the format of the fields. We give experimental results of storage improvements in CryptDB using the FNR encryption scheme.
Efficient k-out-of-n oblivious transfer protocol
A new k-out-of-n oblivious transfer protocol is presented in this paper. The communication cost of our scheme is n+1 messages from the sender to the receiver and k messages from the receiver to the sender. To the best knowledge of the authors, the communication complexity of our scheme is the lowest. Also, our scheme has a low computation cost, with (k+1)n modular exponentiations for the sender and 3k modular exponentiations for the receiver. The security of our scheme is based only on the Decisional Diffie-Hellman assumption. Further, we prove the sender's computational security and the receiver's unconditional security under the standard model.
Salsa20 Cryptanalysis: New Moves and Revisiting Old Styles
In this paper, we revisit some existing techniques in Salsa20 cryptanalysis, and provide some new ideas as well. As a new result, we explain how a valid initial state can be obtained from a Salsa20 state after one round. This helps in studying the non-randomness of Salsa20 after 5 rounds. In particular, it can be seen that the 5-round bias reported by Fischer et al. (Indocrypt 2006) is a special case of our analysis. Towards improving the existing results, we revisit the idea of Probabilistic Neutral Bit (PNB) and how a proper choice of certain parameters reduce the complexity of the existing attacks. For cryptanalysis against 8-round Salsa20, we could achieve the key search complexity of $2^{247.2}$ compared to $2^{251}$ (FSE 2008) and
$2^{250}$ (ICISC 2012).
Quasi-Adaptive NIZK for Linear Subspaces Revisited
Non-interactive zero-knowledge (NIZK) proofs for algebraic relations in a group, such as the Groth-Sahai proofs, are an extremely powerful tool in pairing-based cryptography. A series of recent works focused on obtaining very efficient NIZK proofs for linear spaces in a weaker quasi-adaptive model. We revisit recent quasi-adaptive NIZK constructions, providing clean, simple, and improved constructions via a conceptually different approach inspired by recent developments in identity-based encryption. We then extend our techniques also to linearly homomorphic structure-preserving signatures, an object both of independent interest and with many applications.
A revocable anonymity in Tor
This new protocol is based on the idea of introducing revocable anonymity in Tor, which was presented in our recent paper entitled "Another Tor is possible". Compared to that previous paper, the present scheme simplifies the first protocol and reduces the power of the directory server, while maintaining the ability of the Tor community to break the anonymity of a sender in case of misconduct.
We also take the opportunity of this paper to appeal to the major internet companies to help in the creation of a responsible Tor network (without pedophiles, spies, ...), by mixing the billions of data flows through their networks with those of Tor.
GCM Security Bounds Reconsidered
A constant of $2^{22}$ appears in the security bounds of the Galois/Counter Mode of Operation, GCM. In this paper, we first develop an algorithm to generate nonces that have a high counter-collision probability. We show concrete examples of nonces with the counter-collision probability of about $2^{20.75}/2^{128}$. This shows that the constant in the security bounds, $2^{22}$, cannot be made smaller than $2^{19.74}$ if the proof relies on ``the sum bound.'' We next show that it is possible to avoid using the sum bound, leading to improved security bounds of GCM. One of our improvements shows that the constant of $2^{22}$ can be reduced to 32.
Attribute-Based Versions of Schnorr and ElGamal
We design in this paper the first attribute-based cryptosystems that work in the classical Discrete Logarithm, pairing-free, setting. The attribute-based signature scheme can be seen as an extension of Schnorr signatures, with adaptive security relying on the Discrete Logarithm Assumption, in the random oracle model. The attribute-based encryption schemes can be seen as extensions of ElGamal cryptosystem, with adaptive security relying on the Decisional Diffie-Hellman Assumption, in the standard model.
The proposed schemes are secure only in a bounded model: the systems admit $L$ secret keys, at most, for a bound $L$ that must be fixed in the setup of the systems. The efficiency of the cryptosystems, later, depends on this bound $L$. Although this is an important drawback that can limit the applicability of the proposed schemes in some real-life applications, it turns out that the bounded security of our key-policy attribute-based encryption scheme (in particular, with $L=1$) is enough to implement the generic transformation of Parno, Raykova and Vaikuntanathan at TCC'2012. As a direct result, we obtain a protocol for the verifiable delegation of computation of boolean functions, which does not employ pairings or lattices, and whose adaptive security relies on the Decisional Diffie-Hellman Assumption.
Analyzing Permutations for AES-like Ciphers: Understanding ShiftRows
Designing block ciphers and hash functions in a manner that resemble the AES in many aspects has been very popular since Rijndael was adopted as the Advanced Encryption Standard. However, in sharp contrast to the MixColumns operation, the security implications of the way the state is permuted by the operation resembling ShiftRows has never been studied in depth.
Here, we provide the first structured study of the influence of ShiftRows-like operations, or more generally, word-wise permutations, in AES-like ciphers with respect to diffusion properties and resistance towards differential and linear attacks. After formalizing the concept of guaranteed trail weights, we show a range of equivalence results for permutation layers in this context. We prove that the trail weight analysis when using arbitrary word-wise permutations, with rotations as a special case, reduces to a consideration of a specific normal form. Using a mixed-integer linear programming approach, we obtain optimal parameters for a wide range of AES-like ciphers, and show improvements on parameters for Rijndael-192, Rijndael-256, PRIMATEs-80 and Prøst-128. As a separate result, we show for specific cases of the state geometry that a seemingly optimal bound on the trail weight can be obtained using cyclic rotations only for the permutation layer, i.e. in a very implementation-friendly way.
Faster sieving for shortest lattice vectors using spherical locality-sensitive hashing
Recently, it was shown that angular locality-sensitive hashing (LSH) can be used to significantly speed up lattice sieving, leading to a heuristic time complexity for solving the shortest vector problem (SVP) of $2^{0.337n + o(n)}$ (and space complexity $2^{0.208n + o(n)}$). We study the possibility of applying other LSH methods to sieving, and show that with the spherical LSH method of Andoni et al.\ we can heuristically solve SVP in time $2^{0.298n + o(n)}$ and space $2^{0.208n + o(n)}$. We further show that a practical variant of the resulting SphereSieve is very similar to Wang et al.'s two-level sieve, with the key difference that we impose an order on the outer list of centers.
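A minimal sketch of how LSH-based bucketing is used inside a sieve: vectors are hashed so that only vectors in the same bucket are tried for pairwise reduction. This illustrates the baseline angular (hyperplane) LSH referenced above, not the spherical LSH of Andoni et al., and all sizes and parameters are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)
    n, num_vectors, num_hyperplanes = 20, 200, 12

    # Stand-ins for a list of lattice vectors produced during sieving.
    vectors = rng.integers(-50, 51, size=(num_vectors, n)).astype(float)

    # Angular LSH: hash a vector by the signs of its inner products with random hyperplanes.
    A = rng.standard_normal((num_hyperplanes, n))
    def lsh_hash(v):
        return tuple((A @ v > 0).astype(int))

    buckets = {}
    for idx, v in enumerate(vectors):
        buckets.setdefault(lsh_hash(v), []).append(idx)

    # Only attempt reductions within a bucket: vectors with a small angle tend to collide,
    # so v - w (or v + w) is more likely to be shorter than v or w.
    reductions = 0
    for bucket in buckets.values():
        for i in range(len(bucket)):
            for j in range(i + 1, len(bucket)):
                v, w = vectors[bucket[i]], vectors[bucket[j]]
                cand = v - w if np.dot(v, w) > 0 else v + w
                if np.linalg.norm(cand) < max(np.linalg.norm(v), np.linalg.norm(w)):
                    reductions += 1
    print("candidate reductions found:", reductions)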
Secure and Efficient Initialization and Authentication Protocols for SHIELD
With the globalization of semiconductor production, outsourcing IC fabrication has become a trend in various aspects. This, however, introduces serious threats from the entire untrusted supply chain. To combat these threats, DARPA (Defense Advanced Research Projects Agency) proposed in 2014 the SHIELD (Supply Chain Hardware Integrity for Electronics Defense) program to design a secure hardware root-of-trust, called a dielet, to be inserted into the host package of legitimately produced ICs. Dielets are RF powered and communicate with the outside world through their RF antennas. They have sensors which allow them to passively (without the need for power) record malicious events, which can later be read out during an authentication protocol between the dielet and a server, with a smartphone as intermediary.
This paper introduces a general framework for the initialization and authentication protocols in SHIELD with different adversarial models based on formally-defined security games. We introduce a ``try-and-check'' attack against DARPA's example authentication protocol in their call for SHIELD proposals which nullifies the effectiveness of SHIELD's main goal of being able to detect and trace adversarial activities with significant probability. We introduce the first concrete initialization protocol and, compared to DARPA's example authentication protocol, introduce an improved authentication protocol which resists the try-and-check attack. The area overhead of our authentication and initialization protocols together is only 64-bit NVM, one 8-bit counter and a TRNG based on a single SRAM-cell together with corresponding control logic. Our findings and rigorous analysis are of utmost importance for the teams which received DARPA's funding for implementing SHIELD.
Triathlon of Lightweight Block Ciphers for the Internet of Things
In this paper we introduce a framework for the benchmarking of lightweight block ciphers on a multitude of embedded platforms. Our framework is able to evaluate the execution time, RAM footprint, as well as binary code size, and allows one to define a custom "figure of merit" according to which all evaluated candidates can be ranked. We used the framework to benchmark implementations of 19 lightweight ciphers, namely AES, Chaskey, Fantomas, HIGHT, LBlock, LEA, LED, Piccolo, PRESENT, PRIDE, PRINCE, RC5, RECTANGLE, RoadRunneR, Robin, Simon, SPARX, Speck, and TWINE, on three microcontroller platforms: 8-bit AVR, 16-bit MSP430, and 32-bit ARM. Our results bring some new insights to the question of how well these lightweight ciphers are suited to secure the Internet of Things (IoT). The benchmarking framework provides cipher designers with an easy-to-use tool to compare new algorithms with the state-of-the-art and allows standardization organizations to conduct a fair and consistent evaluation of a large number of candidates.
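As an illustration of how such a custom figure of merit might be defined, here is a minimal Python sketch; the reference values, weights and candidate numbers below are hypothetical and are not the framework's actual formula.

```python
# Hypothetical "figure of merit" combining the three metrics the framework
# reports (execution time, RAM footprint, binary code size). The weights
# and normalization constants are illustrative only.
def figure_of_merit(time_cycles, ram_bytes, code_bytes,
                    ref=(1000.0, 100.0, 1000.0),
                    weights=(1.0, 1.0, 1.0)):
    metrics = (time_cycles, ram_bytes, code_bytes)
    return sum(w * m / r for w, m, r in zip(weights, metrics, ref))

# Rank two made-up cipher implementations: a lower score means a better
# overall trade-off under the chosen weights.
candidates = {"cipher_A": (900, 120, 1500), "cipher_B": (1400, 80, 900)}
ranking = sorted(candidates, key=lambda c: figure_of_merit(*candidates[c]))
print(ranking)
```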
Towards Secure Distance Bounding
Relay attacks (and, more generally, man-in-the-middle attacks) are a serious threat against many access control and payment schemes.
In this work, we present distance-bounding protocols, how these can deter relay attacks, and the security models formalizing these protocols. We show several pitfalls that make existing protocols insecure (or at least vulnerable in some cases). Then, we introduce the SKI protocol, which resists all popular attack models and features provable security. As far as we know, this is the first protocol with such all-encompassing security guarantees.
Leakage Assessment Methodology - a clear roadmap for side-channel evaluations
Motivated by the increasing need to integrate side-channel countermeasures into security-enabled commercial devices, evaluation labs are seeking a standard approach that enables a fast, reliable and robust evaluation of the side-channel vulnerability of the given products. To this end, standardization bodies such as NIST intend to establish a leakage assessment methodology fulfilling these demands. One such proposal is Welch's t-test, put forward by Cryptography Research Inc., which relaxes the dependency between the evaluations and the device's underlying architecture. In this work, we study the theoretical background of the test's different flavors in depth, and present a roadmap which evaluation labs can follow to conduct the tests efficiently and correctly. More precisely, we describe a stable, robust and efficient way to perform the tests at higher orders. Further, we extend the test to multivariate settings, and provide details on how to efficiently and rapidly carry out such a multivariate higher-order test. Including a suggested methodology to collect the traces for these tests, we point out practical case studies where different types of t-tests can exhibit the leakage of supposedly secure designs.
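As a rough illustration of the univariate, first-order flavor of the test, the following Python sketch computes the point-wise Welch's t-statistic for two groups of traces (e.g., fixed-vs-random plaintext); the simulated trace data, group sizes and the commonly used $|t| > 4.5$ decision threshold are shown here for illustration only.

```python
import numpy as np

def welch_t_statistic(traces_group1, traces_group2):
    # Point-wise Welch's t-test over two sets of power traces: each row is a
    # trace, each column a sample point.
    m1, m2 = traces_group1.mean(axis=0), traces_group2.mean(axis=0)
    v1, v2 = traces_group1.var(axis=0, ddof=1), traces_group2.var(axis=0, ddof=1)
    n1, n2 = traces_group1.shape[0], traces_group2.shape[0]
    return (m1 - m2) / np.sqrt(v1 / n1 + v2 / n2)

# Toy example: two groups of simulated traces, 1000 traces of 500 samples each.
rng = np.random.default_rng(0)
fixed_group = rng.normal(0.0, 1.0, (1000, 500))
random_group = rng.normal(0.0, 1.0, (1000, 500))

t = welch_t_statistic(fixed_group, random_group)
# A sample point is commonly flagged as leaking when |t| exceeds 4.5.
print(np.max(np.abs(t)))
```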
Efficient and Secure Delegation of Group Exponentiation to a Single Server
We consider the problem of delegating computation of group operations from a computationally weaker client holding an input and a description of a function, to a {\em single} computationally stronger server holding a description of the same function. Solutions need to satisfy natural correctness, security, privacy and efficiency requirements. We obtain delegated computation protocols for the following functions, defined for an {\em arbitrary} commutative group:
\begin{enumerate}
\item Group inverses, with security and privacy holding against any computationally unrestricted malicious server.
\item Group exponentiation, with security and privacy holding against any computationally unrestricted ``partially honest'' server.
\item Group exponentiation, with security and privacy holding against any polynomial-time malicious server, under a pseudorandom generation assumption, and security holding with constant probability.
\end{enumerate}
Towards Key-Length Extension with Optimal Security: Cascade Encryption and Xor-cascade Encryption
This paper discusses the provable security of two types of cascade encryption. The first construction $\mathsf{CE}^l$, called $l$-cascade encryption, is obtained by sequentially composing $l$ blockcipher calls with independent keys. The security of $\mathsf{CE}^l$ had been a longstanding open problem until Gaži and Maurer~\cite{GM09} proved its security up to $2^{\kappa+\min\{\frac{n}{2},\kappa\}}$ query complexity for large cascading length, where $\kappa$ and $n$ denote the key size and the block size of the underlying blockcipher, respectively. We improve this limit by proving the security of $\mathsf{CE}^l$ up to $2^{\kappa+\min\left\{\kappa,n\right\}-\frac{16}{l}\left(\frac{n}{2}+2\right)}$ query complexity: this bound approaches $2^{\kappa+\min\left\{\kappa,n\right\}}$ with increasing cascade length $l$.
The second construction $\mathsf{XCE}^l$ is a natural cascade version of the DESX scheme with intermediate keys xored between blockcipher calls. It can also be viewed as an extension of the double XOR-cascade proposed by Gaži and Tessaro~\cite{GT12}. We prove that $\mathsf{XCE}^l$ is secure up to $2^{\kappa+n-\frac{8}{l}\left(\frac{n}{2}+2\right)}$ query complexity. As the cascade length $l$ increases, this bound approaches $2^{\kappa+n}$.
In the ideal cipher model, one can obtain all the evaluations of the underlying blockcipher by making $2^{\kappa+n}$ queries, so $(\kappa+n)$-bit security is the maximum that key-length extension based on a single $\kappa$-bit key, $n$-bit blockcipher can achieve. The cascade encryptions $\mathsf{CE}^l$~(with $n\leq\kappa$) and $\mathsf{XCE}^l$ provide almost optimal security with large cascade length.
Leakage-Resilient Symmetric Encryption via Re-keying
In this paper, we study whether it is possible to construct an efficient leakage-resilient symmetric scheme using the AES block cipher. We aim at bridging the gap between the theoretical leakage-resilient symmetric primitives used to build encryption schemes and the practical schemes that do not have any security proof against side-channel adversaries. Our goal is to construct an encryption scheme that is as efficient as possible while being leakage-resilient, without changing the cryptographic schemes already implemented. The basic idea, already suggested by Kocher to thwart differential power analysis (DPA), is to add a leakage-resilient re-keying scheme on top of the encryption scheme. Indeed, in such an analysis, the adversary queries the encryption box and, from the knowledge of the plaintext/ciphertext pairs, can perform a divide-and-conquer key recovery attack. The method of changing the key for each encryption, or after a small number of encryptions with the same key, is known as re-keying. It prevents DPA adversaries but not SPA attacks, which use a single leakage trace. Here, we prove that using a leakage-resilient re-keying scheme on top of an encryption scheme secure in the standard model leads to a leakage-resilient encryption scheme. The main advantage of the AES block cipher is that its implementations are generally heuristically secure against SPA adversaries. This assumption is used in many concrete instantiations of leakage-resilient symmetric primitives. Consequently, if we use it and change the key for each new message block, the adversary will not be able to recover any key, provided the re-keying scheme is leakage-resilient. There are mainly two techniques for re-keying, parallel and sequential, but if we want to prevent the adversary from having access to many inputs/outputs, only the sequential method is possible. However, the main drawback of the latter technique is that, in case of de-synchronization, many useless computations are required. In our re-keying scheme, we use ideas from the skip-list data structure to efficiently recover a specific key.
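A minimal sketch of the sequential re-keying idea is given below; the hash-based key derivation and the placeholder block cipher are stand-ins chosen only for illustration, not the leakage-resilient primitives or the skip-list construction of the paper.

```python
import hashlib

def derive_next_key(key: bytes) -> bytes:
    # Sequential re-keying step: the next chaining key is derived from the
    # current one (a hash is used here as a stand-in for a leakage-resilient
    # re-keying function).
    return hashlib.sha256(b"rekey" + key).digest()

def derive_block_key(key: bytes) -> bytes:
    # Key handed to the block cipher, kept separate from the chaining key so
    # the chain itself never leaves the re-keying layer.
    return hashlib.sha256(b"block" + key).digest()

def block_encrypt(key: bytes, block: bytes) -> bytes:
    # Placeholder "cipher" for this sketch; in practice this would be an
    # unmodified implementation such as AES.
    pad = hashlib.sha256(b"enc" + key).digest()[: len(block)]
    return bytes(a ^ b for a, b in zip(block, pad))

def encrypt_blocks(master_key: bytes, blocks: list) -> list:
    # Each message block is encrypted under a fresh key, so a DPA adversary
    # never observes many traces under the same block-cipher key.
    out, k = [], master_key
    for block in blocks:
        out.append(block_encrypt(derive_block_key(k), block))
        k = derive_next_key(k)
    return out

ciphertexts = encrypt_blocks(b"\x00" * 16, [b"first block.....", b"second block...."])
print([c.hex() for c in ciphertexts])
```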
Achieving Side-Channel Protection with Dynamic Logic Reconfiguration on Modern FPGAs
Reconfigurability is a unique feature of modern FPGA devices that allows hardware circuits to be loaded on demand.
This also implies that a completely different set of circuits might operate at the exact same location of the FPGA at different time slots, making it difficult for an external observer or attacker to predict what will happen at what time.
In this work we present and evaluate a novel hardware implementation of the lightweight cipher PRESENT with built-in side-channel countermeasures based on dynamic logic reconfiguration. In our design we make use of Configurable Look-Up Tables (CFGLUT) integrated in modern Xilinx FPGAs to nearly instantaneously change hardware internals of our cipher implementation for improved resistance against side-channel attacks. We provide evidence from practical experiments based on a Spartan-6 platform that even with 10 million recorded power traces we were unable to detect a first-order leakage using the state-of-the-art leakage assessment.
Adaptively Secure Coin-Flipping, Revisited
The full-information model was introduced by Ben-Or and Linial in 1985 to study collective coin-flipping: the problem of generating a common bounded-bias bit in a network of $n$ players with $t=t(n)$ faults. They showed that the majority protocol, in which each player sends a random bit and the output is the majority of the players' bits, can tolerate $t(n)=O (\sqrt n)$ even in the presence of \emph{adaptive} corruptions, and they conjectured that this is optimal for such adversaries. Lichtenstein, Linial, and Saks proved that the conjecture holds for protocols in which each player sends only a single bit. Their result has been the main progress on the conjecture during the last 30 years.
In this work we revisit this question and ask: what about protocols where players can send longer messages? Can increased communication allow for a larger fraction of corrupt players?
We introduce a model of \emph{strong adaptive} corruptions, in which an adversary sees all messages sent by honest parties in any given round and, based on the message content, decides whether to corrupt a party (and alter its message or sabotage its delivery) or not. This is in contrast to the (classical) adaptive adversary who can corrupt parties only based on past messages, and cannot alter messages already sent.
We prove that any one-round coin-flipping protocol, \emph{regardless of message length}, can be secure against at most $\widetilde{O}(\sqrt n)$ strong adaptive corruptions. Thus, increased message length does not help in this setting.
We then shed light on the connection between adaptive and strongly adaptive adversaries, by proving that for any symmetric one-round coin-flipping protocol secure against $t$ adaptive corruptions, there is a symmetric one-round coin-flipping protocol secure against $t$ strongly adaptive corruptions. Going back to the standard adaptive model, we can now prove that any symmetric one-round protocol with arbitrarily long messages can tolerate at most $\widetilde{O}(\sqrt n)$ adaptive corruptions.
At the heart of our results is a novel use of the Minimax Theorem and a new technique for converting any one-round secure protocol with arbitrarily long messages into a secure one where each player sends only $\mathrm{polylog}(n)$ bits. This technique may be of independent interest.
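The following Python sketch, under assumptions of our own choosing, is a toy simulation of the one-round majority protocol facing a strongly adaptive adversary that sees all broadcast bits before deciding which ones to alter; it only illustrates why on the order of $\sqrt n$ corruptions bias the common coin without fully fixing it, whereas a linear number of corruptions does.

```python
import random

def majority_coin_flip(n, corrupt_budget, rng=random):
    # One round: every player broadcasts a random bit; a strongly adaptive
    # adversary inspects all bits and flips up to `corrupt_budget` zeros to
    # ones before the common output (the majority bit) is computed.
    bits = [rng.getrandbits(1) for _ in range(n)]
    for i in range(n):
        if corrupt_budget == 0:
            break
        if bits[i] == 0:
            bits[i] = 1
            corrupt_budget -= 1
    return int(2 * sum(bits) > n)

n, trials = 10_000, 200
for budget in (int(0.5 * n ** 0.5), n // 4):
    ones = sum(majority_coin_flip(n, budget) for _ in range(trials))
    # A budget of about sqrt(n) biases the coin but does not fix it; a budget
    # of n/4 forces the output to 1 essentially always.
    print(budget, ones / trials)
```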
Statistical Properties of Multiplication mod $2^n$
In this paper, we investigate some statistical properties of multiplication mod $2^n$ for cryptographic use. For this purpose, we introduce a family of T-functions similar to modular multiplication, which we call M-functions, as vectorial Boolean functions. At first, we determine the joint probability distribution of an arbitrary number of component bits of an M-function's output. Then, we obtain the probability distribution of the component Boolean functions of the combination of a linear transformation with an M-function. After that, using a new measure for computing the imbalance of maps, we show that the restriction of the output of an M-function to its upper bits is asymptotically balanced.
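To make the balance statement concrete, the toy Python sketch below exhaustively measures the bias of individual output bits of plain multiplication mod $2^n$ (a simple stand-in for an M-function component, with an illustrative small $n$): the low bits are strongly biased while the upper bits are close to balanced.

```python
def output_bit_bias(n, bit):
    # Bias (deviation from 1/2) of output bit `bit` of x*y mod 2^n, taken
    # over all pairs of n-bit inputs; exhaustive, so keep n small.
    ones = sum(((x * y) % 2 ** n >> bit) & 1
               for x in range(2 ** n) for y in range(2 ** n))
    return ones / 4 ** n - 0.5

n = 8
for bit in (0, 1, n // 2, n - 1):
    # Bit 0 has bias -0.25, bit 1 has bias -0.125, while the upper bits are
    # nearly balanced (bias close to 0).
    print(bit, output_bit_bias(n, bit))
```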
Evaluating the Duplication of Dual-Rail Precharge Logics on FPGAs
Power-equalization schemes for digital circuits aim to harden cryptographic designs against power analysis attacks. With respect to dual-rail logics, most of these schemes were originally designed for ASIC platforms, but much effort has been spent on mapping them to FPGAs as well. A particular challenge here is to apply those schemes to the predefined logic structures of FPGAs (i.e., slices, LUTs, FFs, and routing switch boxes), for which special tools are required. Due to the absence of such routing tools, Yu and Schaumont presented the idea of duplicating (i.e., dualizing) a fully-placed-and-routed dual-rail precharge circuit with equivalent routing structures on an FPGA. They adopted this architecture from WDDL, yielding the Double WDDL (DWDDL) scheme.
In this work we show that this general technique - regardless of the underlying dual-rail logic - is incapable of properly preventing side-channel leakages. Besides theoretical investigations of this issue, we present practical evaluations on a Spartan-6 FPGA to demonstrate the flaws in such an approach. In detail, we consider an AES-128 encryption module realized in three dual-rail precharge logic styles as a case study and show that none of those schemes can provide the desired level of protection.
Side-Channel Security Analysis of Ultra-Low-Power FRAM-based MCUs
By shrinking the technology and reducing the energy requirements of integrated circuits, producing ultra-low-power devices has become practical. Texas Instruments, as a pioneer in developing FRAM-based products, announced a couple of different microcontroller (MCU) families based on the low-power and fast Ferroelectric RAM technology. Such MCUs come with embedded cryptographic module(s) as well as the assertion that - due to the underlying ultra-low-power technology - mounting successful side-channel analysis (SCA) attacks has become very difficult. In this work we practically evaluate this claimed hardness by means of state-of-the-art power analysis attacks. The leakage sources and corresponding attacks are presented in order to give an overview of the potential risks of using such platforms in security-related applications. In short, we partially confirm the given assertion. Some modules, e.g., the embedded cryptographic accelerator, can still be attacked, but only with considerable effort. The other leakage sources, on the contrary, are easily exploitable, leading to straightforward attacks able to recover the secrets.
Side-Channel Protection by Randomizing Look-Up Tables on Reconfigurable Hardware - Pitfalls of Memory Primitives
Block Memory Content Scrambling (BMS), presented at CHES 2011, enables an effective way of first-order side-channel protection for cryptographic primitives at the cost of a significant reconfiguration time for the mask update. In this work we analyze alternative ways to implement dynamic first-order masking of AES with randomized look-up tables that can reduce this mask update time. The memory primitives we consider in this work include three distributed RAM components (RAM32M, RAM64M, and RAM256X1S) and one BRAM primitive (RAMB8BWER). We provide a detailed study of the area and time overheads of each implementation technique with respect to the operation (encryption) as well as the reconfiguration (mask update) phase. We further compare the security achieved by each technique in preventing first-order side-channel leakages. Our evaluation is based on one of the most general forms of leakage assessment methodology, known as the non-specific t-test. Practical SCA evaluations (using a Spartan-6 FPGA platform) demonstrate that only the BRAM primitive, but none of the distributed RAM elements, can be used to realize an SCA-protected implementation.
SCA Resistance Analysis on FPGA Implementations of Sponge based MAC-PHOTON
PHOTON is a lightweight hash function which was proposed by Guo et al. at CRYPTO 2011. It is intended for low-resource ubiquitous computing devices such as RFID tags, wireless sensor nodes, smart cards and mobile devices. PHOTON is built using the sponge construction and it provides a new MAC function called MAC-PHOTON. This paper deals with FPGA implementations of MAC-PHOTON and their side-channel attack (SCA) resistance. First, we describe three architectures of MAC-PHOTON based on the concepts of iterative, folding and unrolling, and we provide their performance results on Xilinx Virtex-5 FPGAs. Second, we analyse the security of MAC-PHOTON against side-channel attacks using a SASEBO-GII development board. Finally, we present an analysis of its Threshold Implementation (TI) and discuss its resistance against first-order power analysis attacks.
Tighter Reductions for Forward-Secure Signature Schemes
In this paper, we revisit the security of factoring-based signature schemes built via the Fiat-Shamir transform and show that they can admit tighter reductions to certain decisional complexity assumptions such as the quadratic-residuosity, the high-residuosity, and the $\phi$-hiding assumptions. We do so by proving that the underlying identification schemes used in these schemes are a particular case of the lossy identification notion recently introduced by Abdalla et al. at Eurocrypt 2012. Next, we show how to extend these results to the forward-security setting based on ideas from the Itkis-Reyzin forward-secure signature scheme. Unlike the original Itkis-Reyzin scheme, our construction can be instantiated under different decisional complexity assumptions and has a much tighter security reduction. Finally, we show that the tighter security reductions provided by our proof methodology can result in concrete efficiency gains in practice, both in the standard and forward-security setting, as long as the use of stronger security assumptions is deemed acceptable. All of our results hold in the random oracle model.
Zero-knowledge Argument for Polynomial Evaluation with Application to Blacklists
Verification of a polynomial’s evaluation in a secret committed value plays a role in cryptographic applications such as non-membership or membership proofs. We construct a novel special honest verifier zero-knowledge argument for correct polynomial evaluation. The argument has logarithmic communication cost in the degree of the polynomial, which is a significant improvement over the state of the art with cubic root complexity at best. The argument is relatively efficient to generate and very fast to verify compared to previous work. The argument has a simple public-coin 3-move structure and only relies on the discrete logarithm assumption.
The polynomial evaluation argument can be used as a building block to construct zero-knowledge membership and non-membership arguments with communication that is logarithmic in the size of the blacklist. Non-membership proofs can be used to design anonymous blacklisting schemes allowing online services to block misbehaving users without learning the identity of the user. They also allow the blocking of single users of anonymization networks without blocking the whole network.
Practical Homomorphic MACs for Arithmetic Circuits
Homomorphic message authenticators allow the holder of a (public) evaluation key to perform computations over previously authenticated data, in such a way that the produced tag $\sigma$ can be used to certify the authenticity of the computation. More precisely, a user knowing the secret key $sk$ used to authenticate the original data can verify that $\sigma$ authenticates the correct output of the computation. This primitive has been recently formalized by Gennaro and Wichs, who also showed how to realize it from fully homomorphic encryption. In this paper, we show new constructions of this primitive that, while supporting a smaller set of functionalities (i.e., polynomially-bounded arithmetic circuits as opposed to Boolean ones), are much more efficient and easy to implement. Moreover, our schemes can tolerate any number of (malicious) verification queries. Our first construction relies on the sole assumption that one-way functions exist, allows for arbitrary composition (i.e., outputs of previously authenticated computations can be used as inputs for new ones), but has the drawback that the size of the produced tags grows with the degree of the circuit. Our second solution, relying on the $D$-Diffie-Hellman Inversion assumption, offers somewhat orthogonal features as it allows for very short tags (one single group element!) but poses some restrictions on the composition side.
Improving Modular Inversion in RNS using the Plus-Minus Method
This paper describes a new RNS modular inversion algorithm based on the extended Euclidean algorithm and the plus-minus trick. In our algorithm, comparisons over large RNS values are replaced by cheap computations modulo 4. Comparisons to an RNS version based on Fermat’s little theorem were carried out. The number of elementary modular operations is significantly reduced: by a factor of 12 to 26 for multiplications and 6 to 21 for additions. Virtex-5 FPGA implementations show that, for a similar area, our plus-minus RNS modular inversion is 6 to 10 times faster.
Memory-saving computation of the pairing final exponentiation on BN curves
In this paper, we describe and improve efficient methods for computing the hard part of the final exponentiation of pairings on Barreto-Naehrig (BN) curves.
Thanks to the variants of pairings which decrease the length of the Miller loop, the final exponentiation has become a significant component of the overall calculation. Here we exploit the structure of BN curves to improve this computation.
We first present the most prominent methods in the literature for computing the hard part of the final exponentiation. We are particularly interested in the memory resources necessary for the implementation of these methods, since this is an important constraint in restricted environments.
More precisely, we study the method of Devegili et al., the addition-chain method of Scott et al. and the method of Fuentes et al. After recalling these methods and their complexities, we determine the number of registers required to compute the final result, because this is not always given in the literature. Then, we present new versions of these methods which require fewer memory resources (up to 37% less). Moreover, some of these variants yield algorithms which are also more efficient than the original ones.
iDASH Secure Genome Analysis Competition Using ObliVM
This is a short note in supplement to our ObliVM paper.
Multi-Client Non-Interactive Verifiable Computation
Gennaro et al.\ (Crypto 2010) introduced the notion of \emph{non-interactive verifiable computation}, which allows a computationally weak client to outsource the computation of a function $f$ on a series of inputs $x^{(1)}, \ldots$ to a more
powerful but untrusted server. Following a pre-processing phase (that is carried out only once), the client sends some representation of its current input $x^{(i)}$ to the server; the server returns an answer that allows the client to recover the correct result $f(x^{(i)})$, accompanied by a proof of correctness that ensures the client does not accept an incorrect result. The crucial property is that the work done by the client in preparing its input and verifying the server's proof is less than the time required for the client to compute~$f$ on its own.
We extend this notion to the \emph{multi-client} setting, where $n$ computationally weak clients wish to outsource to an untrusted server the computation of a function $f$ over a series of {\em joint} inputs $(x_1^{(1)}, \ldots, x_n^{(1)}), \ldots$ without interacting with each other. We present a construction for this setting by combining the scheme of Gennaro et al.\ with a primitive called proxy oblivious transfer.
Online Authenticated-Encryption and its Nonce-Reuse Misuse-Resistance
A definition of \textit{online authenticated-encryption} (OAE), call it OAE1, was given by Fleischmann, Forler, and Lucks (2012). It has become a popular definitional target because, despite allowing encryption to be online, security is supposed to be maintained even if nonces get reused. We argue that this expectation is effectively wrong. OAE1 security has also been claimed to capture best-possible security for any online-AE scheme. We claim that this understanding is wrong, too. So motivated, we redefine OAE-security, providing a radically different formulation, OAE2. The new notion effectively \textit{does} capture best-possible security for a user's choice of plaintext segmentation and ciphertext expansion. It is achievable by simple techniques from standard tools. Yet even for OAE2, nonce-reuse can still be devastating. The picture to emerge is that no OAE definition can meaningfully tolerate nonce-reuse, but, at the same time, OAE security ought never have been understood to turn on this question.
New Techniques for SPHFs and Efficient One-Round PAKE Protocols
Password-authenticated key exchange (PAKE) protocols allow two players to agree on a shared high-entropy secret key that depends on their own passwords only. Following Gennaro and Lindell's approach, with a new kind of smooth-projective hash functions (SPHFs), Katz and Vaikuntanathan recently came up with the first concrete one-round PAKE protocols, where the two players just have to send simultaneous flows to each other. The first one is secure in the Bellare-Pointcheval-Rogaway (BPR) model and the second one in Canetti's UC framework, but at the cost of simulation-sound non-interactive zero-knowledge (SSNIZK) proofs (one for the BPR-secure protocol and two for the UC-secure one), which make the overall constructions not really efficient.
This paper follows their path with, first, a new efficient instantiation of SPHF on Cramer-Shoup ciphertexts, which allows us to get rid of the SSNIZK proof and leads to the design of the most efficient one-round PAKE known so far in the BPR model, and in addition without pairings.
In the UC framework, the security proof required the simulator to be able to extract the hashing key of the SPHF, hence the additional SSNIZK proof. We improve the way the latter extractability is obtained by introducing the notion of trapdoor smooth projective hash functions (TSPHFs). Our concrete instantiation leads to the most efficient one-round PAKE UC-secure against static corruptions to date.
We additionally show how these SPHFs and TSPHFs can be used for blind signatures and zero-knowledge proofs with straight-line extractability.
How Fair is Your Protocol? A Utility-based Approach to Protocol Optimality
In his seminal result, Cleve [STOC’86] established that secure distributed computation guaranteeing fairness is impossible in the presence of dishonest majorities. A great number of proposals for relaxed notions of fairness followed this seminal result, weakening the desired security guarantees in various ways. While these works also provide completeness results (i.e., the ability to design protocols which achieve their fairness notion), their assessment is typically of an all-or-nothing nature. That is, when presented with a protocol which is not designed to be fair according to their respective notion, they would most likely deem it unfair and make no further statement about it.
In this work we put forth a comparative approach to fairness. We present new intuitive notions that, when presented with two arbitrary protocols, provide the means to answer the question “Which of the protocols is fairer?” The basic idea is that we can use an appropriate utility function to express the preferences of an adversary who wants to break fairness. Thus, we can compare protocols with respect to how fair they are, placing them in a partial order according to this relative-fairness relation.
After formulating such utility-based fairness notions, we turn to the question of finding optimal protocols---i.e., maximal elements in the above partial order. We investigate---and answer---this question for secure function evaluation, both in the two-party and multi-party settings.
To our knowledge, the only other fairness notion providing some sort of comparative statement is that of 1/p-security (aka “partial fairness”) by Gordon and Katz [Eurocrypt’10]. We also show in this paper that for a special class of utilities our notion strictly implies 1/p-security. In addition, we fix a shortcoming of the definition which is exposed by our comparison, thus strengthening that result.
Higher Order Differential Analysis of NORX
In this paper, we analyse the higher order differential properties of NORX, an AEAD scheme submitted to the CAESAR competition. NORX is a sponge-based construction. Previous efforts, by the designers themselves, have focused on first order differentials and rotational properties for a small number of steps of the NORX core permutation, which turn out to have quite low biases when extended to the full permutation. In our work, we identify higher order differential properties that allow us to come up with practical distinguishers of the 4-round full permutation of NORX64 and of half a round less than the full permutation (i.e., 3.5 rounds) of NORX32. These distinguishers are similar to zero-sum distinguishers but are probabilistic rather than deterministic, and are of order as low as four. The distinguishers have very low complexities, and are significantly more efficient than the generic generalized birthday attack for the same configurations of zero-sums. While these distinguishers identify sharper non-randomness than what the designers identified, our results do not lend themselves to cryptanalysis of full-round NORX encryption or authentication.
Remotely Managed Logic Built-In Self-Test for Secure M2M Communications
A rapid growth of Machine-to-Machine (M2M) communications is expected in the coming years. M2M applications create new challenges for in-field testing since they typically operate in environments where human supervision is difficult or impossible. In addition, M2M networks may be significant in size. We propose to automate Logic Built-In Self-Test (LBIST) by using a centralized test management system which can test all end-point M2M devices in the same network. Such a method makes it possible to transfer some of the LBIST functionality from the devices under test to the test management system. This is important for M2M devices which have very limited computing resources and are commonly battery-powered. In addition, the presented method provides protection against both random and malicious faults, including some types of hardware Trojans.
Links Between Truncated Differential and Multidimensional Linear Properties of Block Ciphers and Underlying Attack Complexities
The sheer number of apparently different statistical attacks on block ciphers has raised the question of their relationships, which would allow us to classify them and determine those that give essentially complementary information about the security of block ciphers. While mathematical links between some statistical attacks have been derived in the last couple of years, the important link between general truncated differential and multidimensional linear attacks has been missing. In this work we close this gap. The new link is then exploited to relate the complexities of chosen-plaintext and known-plaintext distinguishing attacks of differential and linear types, and further, to explore the relations between the key-recovery attacks. Our analysis shows that a statistical saturation attack is the same as a truncated differential attack, which allows us, for the first time, to provide a justifiable analysis of the complexity of the statistical saturation attack and discuss its validity on 24 rounds of the PRESENT block cipher. By studying the data, time and memory complexities of a multidimensional linear key-recovery attack and its relation with a truncated differential one, we also show that in most cases a known-plaintext attack can be transformed into a less costly chosen-plaintext attack. In particular, we show that there is a differential attack in the chosen-plaintext model on 26 rounds of PRESENT with less memory complexity than the best previous attack, which assumes known plaintext. The links between the statistical attacks discussed in this paper give further examples of attacks where the method used to sample the data required by the statistical test is more differentiating than the method used for finding the distinguishing property.
New Links Between Differential and Linear Cryptanalysis
Recently, a number of relations have been established among previously known statistical attacks on block ciphers. Leander showed in 2011 that statistical saturation distinguishers are on average equivalent to multidimensional linear distinguishers. Further relations between these two types of distinguishers and the integral and zero-correlation distinguishers were established by Bogdanov et al. Knowledge about such relations is useful for the classification of statistical attacks, in order to determine those that give essentially complementary information about the security of block ciphers. The purpose of the work presented in this paper is to explore relations between differential and linear attacks. The mathematical link between linear and differential attacks was discovered by Chabaud and Vaudenay as early as 1994, but it has never been used in practice. We show how to use it for computing accurate estimates of truncated differential probabilities from accurate estimates of correlations of linear approximations. We demonstrate this method in practice and give the first instantiation of multiple differential cryptanalysis using the LLR statistical test on PRESENT. On a more theoretical side, we establish an equivalence between a multidimensional linear distinguisher and a truncated differential distinguisher, and show that certain zero-correlation linear distinguishers exist if and only if certain impossible differentials exist.
Tweakable Blockciphers with Asymptotically Optimal Security
We consider tweakable blockciphers with beyond-birthday-bound security. Landecker, Shrimpton, and Terashima (CRYPTO 2012) gave the first construction with security up to $\mathcal{O}(2^{2n/3})$ adversarial queries ($n$ denotes the block size in bits of the underlying blockcipher), and for which changing the tweak does not require changing the keys for blockcipher calls. In this paper, we extend this construction, which consists of two rounds of a previous proposal by Liskov, Rivest, and Wagner (CRYPTO 2002), by considering larger numbers of rounds $r>2$. We show that asymptotically, as $r$ increases, the resulting tweakable blockcipher approaches security up to the information bound, namely $\mathcal{O}(2^n)$ queries. Our analysis makes use of a coupling argument, and carries some similarities with the analysis of the iterated Even-Mansour cipher by Lampe, Patarin, and Seurin (ASIACRYPT 2012).
Links among Impossible Differential, Integral and Zero Correlation Linear Cryptanalysis
As two important cryptanalytic methods, impossible differential cryptanalysis and integral cryptanalysis have attracted much attention in recent years. Although relations among other important cryptanalytic approaches have been investigated, the link between these two methods has been missing. The motivation in this paper is to fix this gap and establish links between impossible differential cryptanalysis and integral cryptanalysis.
Firstly, by introducing the concept of structure and dual structure, we prove that $a\rightarrow b$ is an impossible differential of a structure $\mathcal E$ if and only if it is a zero correlation linear hull of the dual structure $\mathcal E^\bot$. More specifically, constructing a zero correlation linear hull of a Feistel structure with $SP$-type round function where $P$ is invertible, is equivalent to constructing an impossible differential of the same structure with $P^T$ instead of $P$. Constructing a zero correlation linear hull of an SPN structure is equivalent to constructing an impossible differential of the same structure with $(P^{-1})^T$ instead of $P$. Meanwhile, our proof shows that the automatic search tool presented by Wu and Wang could find all impossible differentials of both Feistel structures with $SP$-type round functions and SPN structures, which is useful in provable security of block ciphers against impossible differential cryptanalysis.
Secondly, by establishing some Boolean equations, we show that a zero correlation linear hull always indicates the existence of an integral distinguisher, while a special integral implies the existence of a zero correlation linear hull. With this observation we improve the integral distinguishers of Feistel structures by $1$ round, build a $24$-round integral distinguisher of CAST-$256$, based on which we propose the best known key recovery attack on reduced-round CAST-$256$ in the non-weak-key model, and present a $12$-round integral distinguisher of SMS4 and an $8$-round integral distinguisher of Camellia without $FL/FL^{-1}$. Moreover, this result provides a novel way of establishing integral distinguishers and converting known-plaintext attacks to chosen-plaintext attacks.
Finally, we conclude that an $r$-round impossible differential of $\mathcal E$ always leads to an $r$-round integral distinguisher of the dual structure $\mathcal E^\bot$. In the case that $\mathcal E$ and $\mathcal E^\bot$ are linearly equivalent, we derive a direct link between impossible differentials and integral distinguishers of $\mathcal E$. Specifically, we obtain that an $r$-round impossible differential of an SPN structure, which adopts a bit permutation as its linear layer, always indicates the existence of an $r$-round integral distinguisher. Based on this newly established link, we deduce that impossible differentials of SNAKE(2), PRESENT, PRINCE and ARIA, which are independent of the choices of the $S$-boxes, always imply the existence of integral distinguishers.
Our results could help to classify different cryptanalytic tools. Furthermore, when designing a block cipher, the designers need to demonstrate that the cipher has sufficient security margins against important cryptanalytic approaches, which is a very tough task since there have been so many cryptanalytic tools up to now. Our results certainly facilitate this security evaluation process.
Key-Homomorphic Constrained Pseudorandom Functions
A pseudorandom function (PRF) is a keyed function $F \colon {\cal K}\times{\cal X}\rightarrow {\cal Y}$ where, for a random key $k\in{\cal K}$, the function $F(k,\cdot)$ is indistinguishable from a uniformly random function, given black-box access. A \emph{key-homomorphic} PRF has the additional feature that for any keys $k,k'$ and any input $x$, we have $F(k + k', x)= F(k,x) \oplus F(k',x)$ for some group operations $+, \oplus$ on $\cal{K}$ and $\cal{Y}$, respectively. A \emph{constrained} PRF for a family of sets ${\cal S} \subseteq \cal{P}({\cal X})$ has the property that, given any key $k$ and set $S \in \cal{S}$, one can efficiently compute a ``constrained'' key $k_S$ that enables evaluation of $F(k,x)$ on all inputs $x\in S$, while the values $F(k,x)$ for $x \notin S$ remain pseudorandom even given $k_{S}$.
In this paper we construct PRFs that are simultaneously constrained \emph{and} key homomorphic, where the homomorphic property holds even for constrained keys. We first show that the multilinear map-based bit-fixing and circuit-constrained PRFs of Boneh and Waters (Asiacrypt 2013) can be modified to also be \emph{key-homomorphic}. We then show that the LWE-based key-homomorphic PRFs of Banerjee and Peikert (Crypto 2014) are essentially already \emph{prefix-constrained} PRFs, using a (non-obvious) definition of constrained keys and associated group operation. Moreover, the constrained keys themselves are pseudorandom, and the constraining and evaluation functions can all be computed in low depth.
As an application of key-homomorphic constrained PRFs, we construct a proxy re-encryption scheme with fine-grained access control. This scheme allows storing encrypted data on an untrusted server, where each file can be encrypted relative to some attributes, so that only parties whose constrained keys match the attributes can decrypt. Moreover, the server can re-key (arbitrary subsets of) the ciphertexts without learning anything about the plaintexts, thus permitting efficient and fine-grained revocation.
A Simple Method for Obtaining Relations Among Factor Basis Elements for Special Hyperelliptic Curves
Nagao proposed a decomposition method for divisors of hyperelliptic curves defined over a field $\mathbb{F}_{q^n}$ with $n\geq 2$. Joux and Vitse later proposed a variant which provides relations among the factor basis elements. Both Nagao's and the Joux-Vitse methods require solving a multivariate system of non-linear equations. In this work, we revisit Nagao's approach with the idea of avoiding the requirement of solving a multivariate system. While this cannot be done in general, we are able to identify special cases for which it is indeed possible. Our main result is for curves $C:y^2=f(x)$ of genus $g$ defined over $\mathbb{F}_{q^2}$ having characteristic greater than two. If $f(x)$ has at most $g$ consecutive coefficients which are in $\mathbb{F}_{q^2}$ while the rest are in $\mathbb{F}_q$, then we show that it is possible to obtain a single relation in about $(2g+3)!$ trials. The method combines well with a sieving method proposed by Joux and Vitse. Our implementation of the resulting algorithm provides examples of factor basis relations for $g=5$ and $g=6$. We believe that none of the other methods known in the literature can provide such relations faster than our method. Beyond obtaining such decompositions, we also explore the applicability of our approach for $n>2$ and for binary characteristic fields.
How to Incentivize Data-Driven Collaboration Among Competing Parties
The availability of vast amounts of data is changing how we can make medical discoveries, predict global market trends, save energy, and develop new educational strategies. In certain settings such as Genome-Wide Association Studies or deep learning, the sheer size of the data (patient files or labeled examples) seems critical to making discoveries. When data is held distributedly by many parties, as is often the case, they must share it to reap its full benefits.
One obstacle to this revolution is the lack of willingness of different entities to share their data, for reasons such as possible loss of privacy or competitive edge. Whereas cryptographic works address the privacy aspects, they shed no light on individual parties' losses and gains when access to data carries tangible rewards. Even if it is clear that better overall conclusions can be drawn from collaboration, are individual collaborators better off by collaborating? Addressing this question is the topic of this paper.
Our contributions are as follows.
* We formalize a model of $n$-party collaboration for computing functions over private inputs in which the participants receive their outputs in sequence, and the order depends on their private inputs. Each output ``improves'' on all previous outputs according to a score function.
* We say that a mechanism for collaboration achieves a \emph{collaborative equilibrium} if it guarantees a higher reward for all participants when joining a collaboration compared to not joining it. We show that while in general computing a collaborative equilibrium is NP-complete, we can design polynomial-time algorithms for computing it for a range of natural model settings. When possible, we design mechanisms to compute a distribution of outputs and an ordering of output delivery, based on the $n$ participants' private inputs, which achieves a collaborative equilibrium.
The collaboration mechanisms we develop are in the standard model, and thus require a central trusted party; however, we show that this assumption is not necessary under standard cryptographic assumptions. We show how the mechanisms can be implemented in a decentralized way by $n$ distrustful parties using new extensions of classical secure multiparty computation that impose order and timing constraints on the delivery of outputs to different players, in addition to guaranteeing privacy and correctness.
On the Security of an Efficient Group Key Agreement Scheme for MANETs
Yang et al. have proposed an efficient group key agreement scheme for Mobile Ad Hoc Networks (MANETs). The scheme is efficient as only one bilinear computation is required for group members to obtain the session key. The scheme was analyzed for security without the random oracle model. However, we prove that their scheme is not secure. In particular, we show that any passive adversary (or non-group member) can compute the session key without having access to the individual secret keys of the group members. Hence, Yang et al.'s scheme cannot be used for secure group communication. We also show that the scheme cannot be used for secure group communication unless there exists a central entity, and hence cannot be used for secure communication in mobile ad hoc networks.
Key Recovery for LWE in Polynomial Time
We discuss a higher dimensional generalization of the Hidden Number Problem and generalize the Boneh-Venkatesan method for solving it in polynomial time. We then use this to analyze a key recovery (decoding) attack on LWE which runs in polynomial time using the LLL lattice basis reduction algorithm and Babai's nearest planes method. We prove that success can be guaranteed with overwhelming probability when the error distribution is narrow enough and $q\geq 2^{O(n)}$, where $n$ is the dimension of the secret key. An explicit constant in the exponent is given, but in practice the performance is observed to be significantly better.
Our focus is on attacking the search variant of LWE. Known attacks include combinatorial methods, polynomial system solving (Gröbner basis) methods, and lattice reduction methods. Typically the performance of the lattice reduction attacks involves estimating the performance and complexity of BKZ-2.0, which is difficult. Still another option is to attack the decision version of LWE and use the search-to-decision reductions to break the search problem.
Our key recovery attack is interesting because it runs in polynomial time, and yields simple and concrete security estimates for a wide range of parameters, depending in a clear and explicit way on the effective approximation factor in the LLL algorithm and in Babai's nearest planes method. We ran the attack on hundreds of LWE instances, demonstrating successful key recovery attacks and yielding information about the effective approximation factor as the lattice dimension grows. For example, we successfully recover the secret key for an instance with $n=350$ in about $3.5$ days on a single machine, provided that the modulus is large enough and the error distribution narrow enough.
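For concreteness, here is a minimal Python/NumPy sketch of Babai's nearest-plane step on a toy basis; the basis, target and noise are made up for illustration, and the sketch omits the LLL reduction and the handling of an actual LWE instance.

```python
import numpy as np

def gram_schmidt(B):
    # Gram-Schmidt orthogonalization of the rows of B (no normalization).
    B = B.astype(float)
    Bs = np.zeros_like(B)
    for i in range(B.shape[0]):
        Bs[i] = B[i]
        for j in range(i):
            Bs[i] -= (np.dot(B[i], Bs[j]) / np.dot(Bs[j], Bs[j])) * Bs[j]
    return Bs

def babai_nearest_plane(B, t):
    # Babai's nearest-plane algorithm: given a (preferably LLL-reduced)
    # basis B (rows) and a target t, return a nearby lattice point.
    Bs = gram_schmidt(B)
    b = t.astype(float)
    v = np.zeros_like(b)
    for i in reversed(range(B.shape[0])):
        c = round(np.dot(b, Bs[i]) / np.dot(Bs[i], Bs[i]))
        b -= c * B[i]
        v += c * B[i]
    return v

# Toy decoding example (not actual LWE parameters): a small basis and a
# target that is a lattice point plus small noise.
B = np.array([[7, 1, 0], [1, 8, 1], [0, 1, 9]])
noise = np.array([0.3, -0.2, 0.1])
target = 2 * B[0] - B[1] + 3 * B[2] + noise
print(babai_nearest_plane(B, target))   # recovers 2*B[0] - B[1] + 3*B[2]
```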
Trivial Nonce-Misusing Attack on Pure OMD
Pure OMD is an authenticated encryption mode that will be presented by Reyhanitabar et al. at FSE 2015. Among other properties, it is claimed to achieve authenticity against nonce-misusing adversaries. We show that this claim is incorrect by presenting an adversary that makes 3 queries (including the forgery) with a total complexity of 6.
A Practical Chosen Message Power Analysis Approach Against Ciphers with the Key Whitening Layers
Key whitening is a technique intended to enhance the strength of a block cipher. Although some research involves DPA attacks against the key whitening layer in the compact architecture, there is no literature dedicated to the influence of key whitening layers in the loop architecture from the standpoint of DPA. In this paper, we propose a practical chosen message power analysis approach against the loop architecture of ciphers with key whitening layers, thus proving that the key whitening technique does not enhance the security of ciphers with regard to DPA. Our approach follows a reduction strategy: we recover the whitening key in a generic cipher with a key whitening layer and reduce other, more complicated key whitening layers to this general case. To further demonstrate the validity of the new approach, we carry out extensive experiments on two ISO-standardized ciphers, CLEFIA and Camellia, implemented in the loop architecture on an FPGA, and the keys are recovered as expected.
Indistinguishability Obfuscation from Compact Functional Encryption
The arrival of indistinguishability obfuscation (iO) has transformed the cryptographic landscape by enabling several security goals that were previously beyond our reach. Consequently, one of the pressing goals currently is to construct iO from well-studied standard cryptographic assumptions.
In this work, we make progress in this direction by presenting a reduction from iO to a natural form of public-key functional encryption (FE). Specifically, we construct iO for general
functions from any single-key FE scheme for NC1 that achieves selective, indistinguishability security against sub-exponential time adversaries. Further, the FE scheme should be compact, namely, the running time of the encryption algorithm must only be a polynomial in the security parameter and the input message length (and not in the function description size or its output length).
We achieve this result by developing a novel arity amplification technique to transform FE for single-ary functions into FE for multi-ary functions (aka multi-input FE). Instantiating our approach with known, non-compact FE schemes, we obtain the first constructions of multi-input FE for constant-ary functions based on standard assumptions.
Finally, as a result of independent interest, we construct a compact FE scheme from randomized encodings for Turing machines and the learning with errors assumption.
Silent Simon: A Threshold Implementation under 100 Slices
Lightweight cryptography aims at achieving security comparable to conventional cryptography at a much lower cost. Simon is a lightweight alternative to AES, as it shares the same cryptographic parameters, but has been shown to be extremely area-efficient on FPGAs. However, in the embedded setting, protection against side-channel analysis is often required. In this work we present a threshold implementation of Simon. The proposed core splits the information between three shares and achieves provable security against first-order side-channel attacks. The core can be implemented in less than 100 slices of a low-cost FPGA, making it the world's smallest threshold implementation of a block cipher. Hence, the proposed core perfectly suits highly constrained embedded systems including sensor nodes and RFIDs. The security of the proposed core is validated by provable arguments as well as practical DPA attacks and tests for leakage quantification.
Authenticated Network Time Synchronization
The Network Time Protocol (NTP) is used by many network-connected devices to synchronize device time with remote servers. Many security features depend on the device knowing the current time, for example in deciding whether a certificate is still valid. Currently, most services implement NTP without authentication, and the authentication mechanisms available in the standard have not been formally analyzed, require a pre-shared key, or are known to have cryptographic weaknesses. In this paper we present an authenticated version of NTP, called ANTP, to protect against desynchronization attacks. To make ANTP suitable for large-scale deployments, it is designed to minimize server-side public-key operations by infrequently performing a key exchange using public key cryptography, then relying solely on symmetric cryptography for subsequent time synchronization requests; moreover, it does so without requiring server-side per-connection state. Additionally, ANTP ensures that authentication does not degrade accuracy of time synchronization. We measured the performance of ANTP by implementing it in OpenNTPD using OpenSSL. Compared to plain NTP, ANTP’s symmetric crypto reduces the server throughput (connections/second) for time synchronization requests by a factor of only 1.6. We analyzed the security of ANTP using a novel provable security framework that involves adversary control of time, and show that ANTP achieves secure time synchronization under standard cryptographic assumptions; our framework may also be used to analyze other candidates for securing NTP.
Stealing Keys from PCs using a Radio: Cheap Electromagnetic Attacks on Windowed Exponentiation
We present new side-channel attacks on RSA and ElGamal implementations that use the popular sliding-window or fixed-window (m-ary) modular exponentiation algorithms. The attacks can extract decryption keys using a very low measurement bandwidth (a frequency band of less than 100 kHz around a carrier under 2 MHz) even when attacking multi-GHz CPUs.
We demonstrate the attacks' feasibility by extracting keys from GnuPG, in a few seconds, using a nonintrusive measurement of electromagnetic emanations from laptop computers. The measurement equipment is cheap and compact, uses readily-available components (a Software Defined Radio USB dongle or a consumer-grade radio receiver), and can operate untethered while concealed, e.g., inside pita bread.
The attacks use a few non-adaptive chosen ciphertexts, crafted so that whenever the decryption routine encounters particular bit patterns in the secret key, intermediate values occur with a special structure that causes observable fluctuations in the electromagnetic field. Through suitable signal processing and cryptanalysis, the bit patterns and eventually the whole secret key are recovered.
Short Schnorr signatures require a hash function with more than just random-prefix resistance
Neven, Smart and Warinschi (NSW) proved, in the generic group model, that full-length Schnorr signatures require only random-prefix resistant hash functions to resist passive existential forgery.
Short Schnorr signatures halve the length of the hash function output, and have been conjectured to provide a similar level of security. The NSW result is too loose to provide meaningful security for short Schnorr signatures, but Neven, Smart and Warinschi conjecture that this is a mere artefact of the proof technique, and not an essential deficiency of short Schnorr signatures. In particular, this amounts to a conjecture that short Schnorr signatures are secure under the same set of assumptions, namely random-prefix resistance of the hash function.
This report provides a counterexample to the latter conjecture, in other words, a separation result. It exhibits a hash function that seems to suggest that random-prefix resistance does not suffice for short Schnorr signatures. In other words, the loose reduction implicit in the NSW theorem is as tight as possible.
Obviously, this result does not preclude the possibility of another proof for short Schnorr signatures, based on different hash function security properties such as preimage resistance.
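To make the full-length versus short distinction concrete, the toy Python sketch below implements textbook Schnorr signing and verification where the challenge hash is truncated to a chosen bit length; the tiny group parameters and the truncation length are illustrative only, and this is not the NSW construction or its counterexample.

```python
import hashlib, secrets

# Toy Schnorr group for illustration only: p = 2q + 1, g has prime order q mod p.
p, q, g = 23, 11, 2

def challenge(R, msg, bits):
    # Hash of commitment and message, truncated to `bits` bits: a stand-in
    # for the full-length (2n-bit) versus short (n-bit) Schnorr challenge.
    h = hashlib.sha256(f"{R}|{msg}".encode()).digest()
    return int.from_bytes(h, "big") >> (256 - bits)

def sign(x, msg, bits):
    k = secrets.randbelow(q - 1) + 1        # ephemeral nonce in [1, q-1]
    R = pow(g, k, p)                        # commitment
    c = challenge(R, msg, bits)
    s = (k - c * x) % q
    return c, s                             # signature size is about bits + log2(q)

def verify(y, msg, sig, bits):
    c, s = sig
    R = (pow(g, s, p) * pow(y, c, p)) % p   # g^s * y^c = g^k for a valid signature
    return challenge(R, msg, bits) == c

x = 7                                       # toy secret key
y = pow(g, x, p)                            # public key
sig = sign(x, "hello", bits=16)             # "short" variant: truncated challenge
print(verify(y, "hello", sig, bits=16))     # True
```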
More PS and H-like bent functions
Two general classes (constructions) of bent functions are derived from the notion of spread. The first class, ${\cal PS}$, gives a useful framework for designing bent functions which are constant (except maybe at 0) on each of the $m$-dimensional subspaces of ${\Bbb F}_{2^{2m}}$ belonging to a partial spread. Explicit expressions (which may be used for applications) of bent functions by means of the trace can be derived for subclasses corresponding to some partial spreads, for instance the ${\cal PS}_{ap}$ class; many more can be derived. The second general class, $H$, later slightly modified into a class called ${\cal H}$ so as to relate it to the so-called Niho bent functions, is (up to addition of affine functions) the set of bent functions whose restrictions to the subspaces of the Desarguesian spread (the spread of all multiplicative cosets of ${\Bbb F}_{2^m}^*$, added with 0, in ${\Bbb F}_{2^{2m}}^*$) are linear. It has been observed that the functions in ${\cal H}$ are related to o-polynomials, and this has led to several classes of bent functions in bivariate trace form. In this paper, after briefly looking at the ${\cal PS}$ functions related to the André spreads and giving the trace representation of the corresponding ${\cal PS}$ bent functions and of their duals, we show that it is easy to characterize those bent functions whose restrictions to the subspaces of a spread are linear, but that this leads to a notion extending that of o-polynomial, for which it seems a hard task to find examples. We illustrate this with the André spreads and also study three other cases of ${\cal H}$-like functions (related to other spreads).
Post-Zeroizing Obfuscation: The case of Evasive Circuits
Recent devastating attacks by Cheon et al. [Eurocrypt’15] and others have highlighted significant gaps in our intuition about security in candidate multilinear map schemes, and in candidate obfuscators that use them. The new attacks, and some that were previously known, are typically called “zeroizing” attacks because they all crucially rely on the ability of the adversary to create encodings of 0.
In this work, we initiate the study of post-zeroizing obfuscation, and we present a construction for the special case of evasive functions. We show that our obfuscator survives all known attacks on the underlying multilinear maps, by proving that no encodings of 0 can be created by a generic-model adversary. Previous obfuscators (for both evasive and general functions) were either analyzed in a less-conservative “pre-zeroizing” model that does not capture recent attacks, or were proved secure relative to assumptions that are now known to be false.
To prove security, we introduce a new technique for analyzing polynomials over multilinear map encodings. This technique shows that the types of encodings an adversary can create are much more restricted than was previously known, and is a crucial step toward achieving post-zeroizing security. We also believe the technique is of independent interest, as it yields efficiency improvements for existing schemes.
Naturally Rehearsing Passwords
We introduce quantitative usability and security models to guide the design of \emph{password management schemes} --- systematic strategies to help users create and remember multiple passwords. In the same way that security proofs in cryptography are based on complexity-theoretic assumptions (e.g., hardness of factoring and discrete logarithm), we quantify usability by introducing \emph{usability assumptions}. In particular, password management relies on assumptions about human memory, e.g., that a user who follows a particular rehearsal schedule will successfully maintain the corresponding memory. These assumptions are informed by research in cognitive science and can be tested empirically. Given rehearsal requirements and a user's visitation schedule for each account, we use the total number of extra rehearsals that the user would have to do to remember all of his passwords as a measure of the usability of the password scheme. Our usability model leads us to a key observation: password reuse benefits users not only by reducing the number of passwords that the user has to memorize, but more importantly by increasing the natural rehearsal rate for each password.
We also present a security model which accounts for the complexity of password management with multiple accounts and associated threats, including online, offline, and plaintext password leak attacks. Observing that current password management schemes are either insecure or unusable, we present Shared Cues --- a new scheme in which the underlying secret is strategically shared across accounts to ensure that most rehearsal requirements are satisfied naturally while simultaneously providing strong security. The construction uses the Chinese Remainder Theorem to achieve these competing goals.
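To make the usability measure concrete, the following minimal Python sketch counts how many rehearsal intervals are not already covered by natural logins, i.e., how many extra rehearsals the user would need. The expanding intervals and the visit logs are hypothetical choices for illustration, not the schedules or parameters used in the paper.

# Count "extra rehearsals": rehearsal intervals with no natural visit.
# The expanding intervals [base^i, base^(i+1)) and the visit logs below are
# illustrative assumptions, not the paper's exact model parameters.
def extra_rehearsals(visit_days, base=2.0, horizon_days=365):
    """Number of rehearsal intervals within the horizon containing no visit."""
    extra, i = 0, 0
    while base ** i < horizon_days:
        start, end = base ** i, min(base ** (i + 1), horizon_days)
        if not any(start <= v < end for v in visit_days):
            extra += 1          # no natural rehearsal fell in this interval
        i += 1
    return extra

# A frequently visited account needs few extra rehearsals; a rarely
# visited one needs many -- the observation driving the Shared Cues design.
frequent = list(range(1, 365, 3))     # visited every 3 days
rare = [30, 200]                      # visited twice a year
print(extra_rehearsals(frequent), extra_rehearsals(rare))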
The Cryptographic Hardness of Random Local Functions -- Survey
Constant parallel-time cryptography allows one to perform complex cryptographic tasks at an ultimate level of parallelism, namely, by local functions in which each output bit depends on a constant number of input bits. A natural way to obtain local cryptographic constructions is to use \emph{random local functions}, in which each output bit is computed by applying some fixed $d$-ary predicate $P$ to a randomly chosen $d$-size subset of the input bits.
In this work, we will study the cryptographic hardness of random local functions. In particular, we will survey known attacks and hardness results, discuss different flavors of hardness (one-wayness, pseudorandomness, collision resistance, public-key encryption), and mention applications to other problems in cryptography and computational complexity. We also present some open questions in the hope of developing a systematic study of the cryptographic hardness of local functions.
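As a concrete illustration of the object being studied, here is a minimal Python sketch that samples a random local function for a commonly studied XOR-AND style predicate $P(z_1,\ldots,z_5)=z_1\oplus z_2\oplus z_3\oplus z_4 z_5$; the predicate and the parameters $n$, $m$ are illustrative choices, not prescribed by the survey.

# Sample a random local function f: {0,1}^n -> {0,1}^m for a fixed 5-ary
# predicate; each output bit reads a randomly chosen 5-subset of the input bits.
import random

def predicate(z):                     # XOR-AND style predicate, d = 5
    return z[0] ^ z[1] ^ z[2] ^ (z[3] & z[4])

def sample_local_function(n, m, d=5, seed=0):
    rng = random.Random(seed)
    # The "wiring": for each output bit, d distinct input positions.
    wiring = [rng.sample(range(n), d) for _ in range(m)]
    def f(x):                         # x is a list/tuple of n bits
        return [predicate([x[i] for i in S]) for S in wiring]
    return f

n, m = 32, 48                         # e.g. a length-increasing (PRG-like) regime
f = sample_local_function(n, m)
x = [random.getrandbits(1) for _ in range(n)]
print(f(x))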
Constant Size Ring Signature Without Random Oracle
A ring signature enables a user to anonymously sign a message on behalf of a group of users, termed a ‘ring’, formed in an ad-hoc manner. A naive scheme produces a signature linear in the size of the ring, but this is extremely inefficient when the ring size is large. Dodis et al. proposed a constant-size scheme in EUROCRYPT’13, but it is provably secure only in the random oracle model. The best known results without random oracles are a sub-linear size construction by Chandran et al. in ICALP’07 and a follow-up work by Essam Ghadafi in IMACC’13. Therefore, constructing a constant-size ring signature scheme without random oracles that meets stringent security requirements still remains an interesting open problem.
Our first contribution is a generic technique to convert a compatible signature scheme into a constant-size ring signature scheme. The technique employs a constant-size set-membership check that may be of independent interest. Our construction is instantiated over asymmetric pairings of composite order and is optimally efficient. The scheme meets the strongest security requirements, viz. anonymity under full key exposure and unforgeability against insider corruption, without using random oracles, under simple hardness assumptions. We also provide a concrete instantiation of the scheme based on Full Boneh-Boyen signatures.
Indistinguishability Obfuscation from Functional Encryption
Indistinguishability obfuscation (IO) is a tremendous notion, powerful enough to give rise to almost any known cryptographic object. Prior candidate IO constructions were based on specific assumptions on algebraic objects called multi-linear graded encodings. We present a generic construction of indistinguishability obfuscation from public-key functional encryption with succinct encryption circuits and subexponential security. This shows the equivalence of indistinguishability obfuscation and public-key functional encryption, a primitive that has previously seemed to be much weaker, lacking the power and the staggering range of applications of indistinguishability obfuscation.
Our main construction can be based on functional encryption schemes that support a single functional key, and where the encryption circuit grows sub-linearly in the circuit-size of the function. We further show that sublinear succinctness in circuit-size for single-key schemes can be traded with sublinear succinctness in the number of keys (also known as the collusion-size) for multi-key schemes. We also show that, under the Learning with Errors assumption, our techniques imply that any indistinguishability obfuscator can be converted into one where the size of obfuscated circuits is twice that of the original circuit plus an additive overhead that is polynomial in its depth, input length, and the security parameter.
New Multilinear Maps over the Integers
In the last few years, cryptographic multilinear maps have proved their tremendous potential as building blocks for new constructions, in particular the first viable approach to general program obfuscation. After the first candidate construction by Garg, Gentry and Halevi (GGH) based on ideal lattices, a second construction over the integers was described by Coron, Lepoint and Tibouchi (CLT). However, the CLT scheme was recently broken by Cheon et al.; the attack works by computing the eigenvalues of a diagonalizable matrix over ${\Bbb Q}$ derived from the multilinear map.
In this paper we describe a new candidate multilinear map over the integers. Our construction is based on CLT but with a new arithmetic technique that makes the zero-testing element non-linear in the encoding, which prevents the Cheon et al. attack. Our new construction is relatively practical as its efficiency is comparable to the original CLT scheme. Moreover the subgroup membership and decisional linear assumptions appear to hold in the new setting.
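To illustrate the linear-algebra core behind the Cheon et al. style of attack mentioned above, here is a toy analogy over the reals (not the actual attack on CLT encodings): if an attacker can form products $W_1 = A D_1 B$ and $W_2 = A D_2 B$ with unknown invertible $A, B$ and unknown diagonal $D_1, D_2$, then the eigenvalues of $W_1 W_2^{-1} = A D_1 D_2^{-1} A^{-1}$ reveal the ratios of the hidden diagonal entries.

# Toy illustration (over the reals, with numpy) of recovering hidden ratios
# as eigenvalues of a diagonalizable matrix -- an analogy for the zeroizing
# attack's mechanism, not the attack itself.
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))               # unknown to the "attacker"
B = rng.standard_normal((n, n))               # unknown to the "attacker"
d1 = rng.uniform(1, 5, n)                     # hidden secrets
d2 = rng.uniform(1, 5, n)                     # hidden secrets

W1 = A @ np.diag(d1) @ B                      # quantities the attacker can form
W2 = A @ np.diag(d2) @ B

ratios = np.sort(np.linalg.eigvals(W1 @ np.linalg.inv(W2)).real)
print(np.allclose(ratios, np.sort(d1 / d2)))  # expected True: eigenvalues = d1_i / d2_i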
Exploring the Resilience of Some Lightweight Ciphers Against Profiled Single Trace Attacks
This paper compares the outcomes of profiled single-trace attacks on four different lightweight ciphers in order to investigate which of their properties, if any, contribute to attack success. We show that mainly the diffusion properties of both the round function and the key schedule play a role. In particular, the more (reasonably statistically independent) intermediate values are produced in a target implementation, the better the attacks succeed. A crucial aspect for lightweight ciphers is hence the key schedule, which is often designed to be particularly light. This design choice implies that information from all round keys can be easily combined, which results in attacks that succeed with ease.
Differential-Linear Cryptanalysis of ICEPOLE
ICEPOLE is a CAESAR candidate whose original submission document claims an intermediate level of robustness under nonce-misuse circumstances. In particular, it was claimed that a key recovery attack against ICEPOLE is impossible in the case of nonce misuse. ICEPOLE is strong against differential and linear cryptanalysis. In this paper, we develop differential-linear attacks against ICEPOLE when the nonce is misused. Our attacks show that the state of ICEPOLE-128 and ICEPOLE-128a can be recovered with data complexity $2^{46}$ and time complexity $2^{46}$; the state of ICEPOLE-256a can be recovered with data complexity $2^{60}$ and time complexity $2^{60}$. For ICEPOLE-128a and ICEPOLE-256a, the secret key is recovered once the state is recovered. We experimentally verified the attacks against ICEPOLE-128 and ICEPOLE-128a.
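For background, the classical differential-linear heuristic on which such attacks build estimates the bias of the combined approximation as roughly $2pq^2$, i.e., the distinguishing probability is about $\frac{1}{2}+2pq^2$, where $p$ is the probability of the differential over the first part of the cipher and $q$ is the bias of the linear approximation over the second part; the number of samples needed then scales as the inverse square of that bias. This is the generic estimate, not a figure specific to ICEPOLE.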
Leaked-State-Forgery Attack Against The Authenticated Encryption Algorithm ALE
ALE is a new authenticated encryption algorithm published at FSE 2013. The authentication component of ALE is based on the strong Pelican MAC, and the authentication security of ALE is claimed to be 128-bit. In this paper, we propose the leaked-state-forgery attack (LSFA) against ALE by exploiting the state information leaked from the encryption of ALE. The LSFA is a new type of differential cryptanalysis in which part of the state information is known and exploited to improve the differential probability. Our attack shows that the authentication security of ALE is only 97 bits, and the results may be further improved to around 93 bits if the whitening key layer is removed. We implemented our attacks against a small version of ALE (using a 64-bit block size instead of a 128-bit block size). The experimental results match well with the theoretical results.
Multi-Input Functional Encryption in the Private-Key Setting: Stronger Security from Weaker Assumptions
We construct a general-purpose multi-input functional encryption scheme in the private-key setting. Namely, we construct a scheme where a functional key corresponding to a function $f$ enables a user holding encryptions of $x_1, \ldots, x_t$ to compute $f(x_1, \ldots, x_t)$ but nothing else. This is achieved starting from any general-purpose private-key single-input scheme (without any additional assumptions), and is proven to be adaptively secure for any constant number of inputs $t$. Moreover, it can be extended to a super-constant number of inputs assuming that the underlying single-input scheme is sub-exponentially secure.
Instantiating our construction with existing single-input schemes, we obtain multi-input schemes that are based on a variety of assumptions (such as indistinguishability obfuscation, multilinear maps, learning with errors, and even one-way functions), offering various trade-offs between security and efficiency.
Previous and concurrent constructions of multi-input functional encryption schemes either relied on stronger assumptions and provided weaker security guarantees (Goldwasser et al. [EUROCRYPT '14], and Ananth and Jain [CRYPTO '15]), or relied on multilinear maps and could be proven secure only in an idealized generic model (Boneh et al. [EUROCRYPT '15]). In comparison, we present a general transformation that simultaneously relies on weaker assumptions and guarantees stronger security.
Duality in ABE: Converting Attribute Based Encryption for Dual Predicate and Dual Policy via Computational Encodings
We show a generic conversion that converts an attribute-based encryption (ABE) scheme for an arbitrary predicate into an ABE scheme for its dual predicate. In particular, it can convert key-policy ABE (KP-ABE) into ciphertext-policy ABE (CP-ABE), and vice versa, for dually related predicates. It is generic in the sense that it can be applied to arbitrary predicates. On the other hand, it works only within the generic ABE framework recently proposed by Attrapadung (Eurocrypt'14), which provides a generic compiler that compiles a simple primitive called pair encodings into fully secure ABE. Inside this framework, Attrapadung proposed the first generic dual conversion, which works only for a subclass of encodings, namely, perfectly secure encodings. However, there are many predicates for which realizations of such encodings are not known, and hence the problems of constructing fully secure ABE for their dual predicates were left unsolved.
In this paper, we revisit the dual conversion of Attrapadung and show that, somewhat surprisingly, the very same conversion indeed also works for broader classes of encodings, namely, computationally secure encodings. Consequently, we solve the above open problems, as we obtain the first fully secure realizations of completely unbounded CP-ABE and of CP-ABE with short keys for Boolean formulae, by applying the conversion to previously proposed KP-ABE schemes.
Moreover, we provide a generic conversion that converts ABE into its dual-policy variant. Dual-policy ABE (DP-ABE) conjunctively combines both KP-ABE and CP-ABE into one primitive, and hence can be useful in general-purpose applications. As for instantiations, we obtain the first realizations of fully secure DP-ABE for formulae, unbounded DP-ABE for formulae, and DP-ABE for regular languages. The latter two systems are the first to realize such functionalities at all, let alone with full security.
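For readers unfamiliar with the terminology, the dual of a predicate $R: X \times Y \to \{0,1\}$ is usually defined as $\bar{R}: Y \times X \to \{0,1\}$ with $\bar{R}(y,x) = R(x,y)$; under this swap of the key-attribute and ciphertext-attribute domains, KP-ABE for $R$ and CP-ABE for $\bar{R}$ are dually related in the sense used above.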