All papers in 2013
Bounds in Shallows and in Miseries
Proving bounds on the expected differential probability (EDP) of a characteristic over all keys has been a popular technique of arguing security for both block ciphers and hash functions. In fact, to a large extent, it was the clear formulation and elegant deployment of this very principle that helped Rijndael win the AES competition. Moreover, most SHA-3 finalists have come with explicit upper bounds on the EDP of a characteristic as a major part of their design rationale. However, despite the pervasiveness of this design approach, there is no understanding of what such bounds actually mean for the security of a primitive once a key is fixed --- an essential security question in practice.
In this paper, we aim to bridge this fundamental gap. Our main result is a quantitative connection between a bound on the EDP of differential characteristics and the highest number of input pairs that actually satisfy a characteristic for a fixed key. This is particularly important for the design of permutation-based hash functions such as sponge functions, where, in the absence of rekeying, the EDP value itself is not informative. We apply our theoretical result to revisit the security arguments of some prominent recent block ciphers and hash functions. For most of those, we have good news: a characteristic is followed by a small number of pairs only. For Keccak, though, many more rounds would currently be needed for our technique to guarantee any reasonable maximum number of pairs.
Thus, our work --- for the first time --- sheds light on the fixed-key differential behaviour of block ciphers in general, and of substitution-permutation networks in particular, which has been a long-standing fundamental problem in symmetric-key cryptography.
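For context, a standard formalization (our addition; the paper's definitions may differ in detail): for an $n$-bit cipher $E_k$ and a characteristic $\Omega$ with input difference $\alpha_0$, the EDP averages the fraction of right pairs over the key space $\mathcal{K}$,
\[ \mathrm{EDP}(\Omega) = \frac{1}{|\mathcal{K}|} \sum_{k \in \mathcal{K}} \frac{\#\{x \in \{0,1\}^n : (x,\, x \oplus \alpha_0) \text{ follows } \Omega \text{ under } E_k\}}{2^n}, \]
whereas the fixed-key question studied here asks how large the numerator can get for a single key $k$.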
Cryptanalysis of the Huang-Liu-Yang Cryptosystem from PKC 2012
This short note describes a key-recovery attack against a multivariate quadratic cryptosystem proposed by Huang, Liu, and Yang (PKC 2012). Our attack runs lattice-basis reduction algorithms on a lattice constructed from the keys of the cryptosystem. The attack takes less than 20 minutes for the proposed parameter sets, which were claimed to provide 80-bit and 128-bit security.
Efficient Multiparty Protocols via Log-Depth Threshold Formulae
We put forward a new approach for the design of efficient multiparty protocols:
1. Design a protocol for a small number of parties (say, 3 or 4) which achieves security against a single corrupted party. Such protocols are typically easy to construct, as they may employ techniques that do not scale well with the number of corrupted parties.
2. Recursively compose the protocol with itself to obtain an efficient n-party protocol which achieves security against a constant fraction of corrupted parties.
The second step of our approach combines the player emulation technique of Hirt and Maurer (J. Cryptology, 2000) with constructions of logarithmic-depth formulae which compute threshold functions using only constant fan-in threshold gates.
Using this approach, we simplify and improve on previous results in cryptography and distributed computing. In particular:
- We provide conceptually simple constructions of efficient protocols for Secure Multiparty Computation (MPC) in the presence of an honest majority, as well as broadcast protocols from point-to-point channels and a 2-cast primitive.
- We obtain new results on MPC over blackbox groups and other algebraic structures.
The above results rely on the following complexity-theoretic contributions, which may be of independent interest:
- We show that for all integers j, k such that m = (k-1)/(j-1) is an integer, there is an explicit (poly(n)-time) construction of a logarithmic-depth formula which computes a good approximation of an (n/m)-out-of-n threshold function using only j-out-of-k threshold gates and no constants.
- For the special case of n-bit majority from 3-bit majority gates, a non-explicit construction follows from the work of Valiant (J. Algorithms, 1984); the amplification idea behind it is sketched below. For this special case, we provide an explicit construction with a better approximation than for the general threshold case, and also an exact explicit construction based on standard complexity-theoretic or cryptographic assumptions.
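As a minimal illustration of the probability amplification underlying Valiant's construction (our own sketch; the paper's explicit constructions differ), note that a single 3-bit majority gate with i.i.d. inputs of bias p outputs 1 with probability 3p^2 - 2p^3, so a balanced formula of logarithmic depth drives any constant bias away from 1/2 towards 0 or 1:

    def maj3_amplify(p, depth):
        # One level of MAJ3 with three i.i.d. inputs of bias p outputs 1
        # with probability 3p^2 - 2p^3; iterate over `depth` formula levels.
        for _ in range(depth):
            p = 3 * p * p - 2 * p ** 3
        return p

    # A leaf bias of 0.51 is driven essentially to 1 within about 15 levels:
    for depth in (0, 5, 10, 15):
        print(depth, maj3_amplify(0.51, depth))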
Security analysis of Quantum-Readout PUFs in the case of challenge-estimation attacks
Quantum Readout PUFs (QR-PUFs) have been proposed as a technique for remote authentication of objects. The security is based on basic quantum information theoretic principles and the assumption that the adversary cannot losslessly implement arbitrary unitary transformations on a K-dimensional state space, with K "large". We consider all possible attacks in which the adversary bases his response on challenge-state estimation by measurements. We first analyze the security of QR-PUF schemes in the case where each challenge consists of precisely n identical quanta. We use a result by Bruss and Macchiavello to derive an upper bound on the adversary's success probability as a function of K and n. Then we generalize to challenges that contain a random number of quanta, in particular a Poisson-distributed number.
Enabling End-to-End Secure Communication with Anonymous and Mobile Receivers - an Attribute-Based Messaging Approach
Mechanisms for secure mobile communication can be enablers for novel applications in the area of cooperative work. In this context, this article investigates an emergency management setting as an example. Efficient support of emergency communication is of high practical importance, but comes with specific challenges: unpredictable local crisis situations complicate the establishment of communication structures, legal requirements dictate the use of end-to-end secure and documentable approaches, while end users demand user-friendliness and privacy protection.
To deal with these challenges, the contribution of this article is two-fold. Firstly, together with emergency practitioners, we follow a participatory design approach: we define security requirements and patterns for efficient communication, and derive a design proposal. Secondly, we devise a novel approach to multilaterally end-to-end secure, user-friendly attribute-based messaging for one-to-many communication. It builds on a hybrid encryption technique, which efficiently combines ciphertext-policy attribute-based encryption, location-based encryption and symmetric encryption. The hybrid encryption approach supports dynamic location attributes as user-friendly selectors for targeted messaging with dynamic groups of mobile and anonymous receivers. The achieved security of the approach and concrete application scenarios are discussed.
Golden Sequence for the PPSS Broadcast Encryption Scheme with an Asymmetric Pairing
Broadcast encryption is conventionally formalized as broadcast encapsulation in which, instead of a ciphertext, a session key is produced, which is required to be indistinguishable from random. Such a scheme can provide public encryption functionality in combination with a symmetric encryption through the hybrid encryption paradigm. The Boneh-Gentry-Waters scheme of 2005 is a broadcast scheme with constant-size ciphertext, and one of the most efficient broadcast encryption schemes regarding overhead size. In this work we consider the improved scheme of Phan-Pointcheval-Shahandashti-Strefler [PPSS12], which provides an adaptively CCA-secure broadcast encryption scheme. These two schemes may be tweaked to use bilinear pairings [DGS].
This document details our choices for the implementation of the PPSS scheme. We provide a complete golden sequence of the protocol with efficient pairings (Tate, Ate and Optimal Ate). We target a 128-bit security level, hence we use a BN curve [BN06]. The aim of this work is to contribute to the use and the standardization of the PPSS scheme and of pairings in concrete systems.
Dependence in IV-related bytes of RC4 key enhances vulnerabilities in WPA
The first three bytes of the RC4 key in WPA are public as they are derived from the public parameter IV, and this derivation leads to a strong mutual dependence between the first two bytes of the RC4 key. In this paper, we provide a disciplined study of the RC4 biases that arise specifically in such a scenario. Motivated by the work of AlFardan et al. (2013), we first prove the interesting sawtooth distribution of the first byte in WPA and the similar nature of the biases of the initial keystream bytes towards zero. As we note, these sawtooth characteristics surface due to the dependence of the first two bytes of the RC4 key in WPA, both derived from the same byte of the IV. Our result on the nature of the first keystream byte provides a significantly better distinguisher for RC4 used in WPA than the one presented by Sepehrdad et al. (2011-12). Further, we revisit the correlation of the initial keystream bytes in WPA to the first three bytes of the RC4 key. As these bytes are known from the IV, one can obtain new, as well as significantly improved, biases in WPA compared to the absolute biases exploited earlier by AlFardan et al. and Isobe et al. We note that the correlations of the keystream bytes with the publicly known IV values of WPA potentially strengthen the practical plaintext-recovery attack on the protocol.
A note on verifying the APN property
We show that to verify that an arbitrary mapping $F$ on $F_2^n$ is APN, it is enough to consider the difference mappings of $F$ defined by elements from a hyperplane.
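For context, the standard definition (not spelled out in the note's abstract): for $a \in F_2^n$, the difference mapping is $D_aF(x) = F(x \oplus a) \oplus F(x)$, and $F$ is almost perfect nonlinear (APN) if
\[ \#\{x \in F_2^n : D_aF(x) = b\} \le 2 \quad \text{for all } a \neq 0 \text{ and all } b \in F_2^n. \]
The note's claim is thus that it suffices to let $a$ range over a hyperplane instead of over all of $F_2^n \setminus \{0\}$.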
Eavesdropping or Disrupting a Communication --- On the Weakness of Quantum Communications
How does an adversary behave when attacking a communication? The good choice is to eavesdrop on the communication in such a way that the communicators cannot detect the eavesdropping. The general choice is to disrupt the communication at low cost, say, by measuring the transferred quantum signals in the well-known BB84 quantum key distribution protocol. The bad choice is to disrupt it at even higher cost, such as severing copper or fiber, if necessary. In this note we remark that a quantum communication is very vulnerable to low-cost attacks. The plan to build a large quantum photonic network is infeasible.
The Norwegian Internet Voting Protocol
The Norwegian government ran a trial of internet remote voting during the 2011 local government elections, and will run another trial during the 2013 parliamentary elections. A new cryptographic voting protocol will be used, where so-called return codes allow voters to verify that their ballots will be counted as cast.
This paper discusses this cryptographic protocol, and in particular the ballot submission phase.
The security of the protocol relies on a novel hardness assumption similar to Decision Diffie-Hellman. While DDH is a claim that a random subgroup of a non-cyclic group is indistinguishable from the whole group, our assumption is related to the indistinguishability of certain special subgroups. We discuss this question in some detail.
Partially blind password-based signatures using elliptic curves
Password-based signatures allow a user who can only remember a password to create digital signatures with the help of a server, without revealing the messages to be signed to the server.
Certain applications require the ability to disclose part of the message to the server. We define partially blind password-based signatures and construct a scheme that we prove secure, based on a novel computational problem related to computing discrete logarithms.
Our scheme is based on Nyberg-Rueppel signatures. We give a variant of Nyberg-Rueppel signatures that we prove secure based on our novel computational problem.
Unlike previous password-based signature schemes, our scheme can be instantiated using elliptic curve arithmetic over small prime fields. This is important for many applications.
Obfuscating Conjunctions
We show how to securely obfuscate the class of conjunction functions (functions like $f(x_1, \ldots, x_n) = x_1 \land \lnot x_4 \land \lnot x_6 \land \cdots \land x_{n-2}$). Given any function in the class, we produce an obfuscated program which preserves the input-output functionality of the given function, but reveals nothing else.
Our construction is based on multilinear maps, and can be instantiated using the recent candidates proposed by Garg, Gentry and Halevi (EUROCRYPT 2013) and by Coron, Lepoint and Tibouchi (CRYPTO 2013). We show that the construction is secure when the conjunction is drawn from a distribution, under mild assumptions on the distribution. Security follows from multilinear entropic variants of the Diffie-Hellman assumption. We conjecture that our construction is secure for any conjunction, regardless of the distribution from which it is drawn. We offer supporting evidence for this conjecture, proving that our obfuscator is secure for any conjunction against generic adversaries.
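For concreteness, here is a minimal sketch (ours) of the functionality being obfuscated, not of the obfuscator itself; a conjunction can be represented as a pattern over {0, 1, unconstrained}:

    def make_conjunction(pattern):
        # pattern is a list with entries 1 (variable appears positively),
        # 0 (appears negated), or None (variable does not appear).
        def f(x):
            return all(p is None or p == b for p, b in zip(pattern, x))
        return f

    # f(x) = x1 AND NOT x4, over n = 5 variables (1-indexed):
    f = make_conjunction([1, None, None, 0, None])
    print(f((1, 0, 1, 0, 1)))  # True
    print(f((1, 0, 1, 1, 1)))  # False

The obfuscation challenge is to publish a program with this input-output behavior that nevertheless hides the pattern itself.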
Practical Cryptanalysis of a Public-Key Encryption Scheme Based on New Multivariate Quadratic Assumptions
In this paper, we investigate the security of a public-key encryption scheme introduced by Huang, Liu and Yang (HLY) at PKC'12. This new scheme can be provably reduced to the hardness of solving a set of quadratic equations whose coefficients of highest degree are chosen according to a discrete Gaussian distribution, the other terms being chosen uniformly at random. Such a problem is a variant of the classical problem of solving a system of non-linear equations (PoSSo), which is known to be hard for random systems. The main hypothesis of Huang, Liu and Yang is that their variant is not easier than solving PoSSo for random instances. In this paper, we disprove this hypothesis. To this end, we exploit the fact that the new problem proposed by Huang, Liu and Yang reduces to an easy instance of the Learning With Errors (LWE) problem. The main contribution of this paper is to show that security and efficiency are essentially incompatible for the HLY proposal. That is, one cannot find parameters which yield a secure and practical scheme. For instance, we estimate that a public key of at least 1.03 GB is required to achieve 80-bit security against known attacks. As a proof of concept, we present practical attacks against all the parameters proposed by Huang, Liu and Yang. We have been able to recover the private key in roughly one day for the first challenge proposed by HLY and in roughly three days for the second challenge.
Verifiable Delegation of Computation on Outsourced Data
We address the problem in which a client stores a large amount of data with an untrusted server in such a way that, at any moment, the client can ask the server to compute a function on some portion of its outsourced data. In this scenario, the client must be able to efficiently verify the correctness of the result despite no longer knowing the inputs of the delegated computation; it must be able to keep adding elements to its remote storage; and it does not have to fix in advance (i.e., at data outsourcing time) the functions that it will delegate. Even more ambitiously, clients should be able to verify in time independent of the input size – a very appealing property for computations over huge amounts of data.
In this work we propose novel cryptographic techniques that solve the above problem for the class of computations of quadratic polynomials over a large number of variables. This class covers a wide range of significant arithmetic computations – notably, many important statistics. To confirm the efficiency of our solution, we show encouraging performance results, e.g., correctness proofs have size below 1 kB and are verifiable by clients in less than 10 milliseconds.
How To Construct Extractable One-Way Functions Against Uniform Adversaries
A function $f$ is extractable if it is possible to algorithmically ``extract,'' from any program that outputs a value $y$ in the image of $f$, a preimage of $y$ under $f$.
When combined with hardness properties such as one-wayness or collision-resistance, extractability has proven to be a powerful tool. However, so far, extractability has not been explicitly shown. Instead, it has only been considered as a non-standard {\em knowledge assumption} on certain functions.
We give the first construction of extractable one-way functions assuming only standard hardness assumptions (e.g. the subexponential Learning with Errors Assumption).
Our functions are extractable against adversaries with bounded polynomial advice and unbounded polynomial running time. We then use these functions to construct the first 2-message zero-knowledge arguments and 3-message zero-knowledge arguments of knowledge, against the same class of adversarial verifiers, from essentially the same assumptions.
The construction uses ideas from [Barak, FOCS01] and [Barak, Lindell, and Vadhan, FOCS03], and relies on the recent breakthrough construction of privately verifiable $P$-delegation schemes [Kalai, Raz, and Rothblum]. The extraction procedure uses the program evaluating $f$ in a non-black-box way, which we show to be necessary.
Analysis of BLAKE2
We present a thorough security analysis of the hash function family BLAKE2, a recently proposed, and already deployed, tweaked version of the SHA-3 finalist BLAKE. We study how existing attacks on BLAKE apply to BLAKE2 and to what extent the modifications impact the attacks. We design and run two improved searches for (impossible) differential attacks; the outcomes suggest a higher number of attacked rounds in the case of impossible differentials (in fact, we improve the best results for BLAKE as well), and slightly more rounds for the differential attacks on the hash/compression function (which gives an insight into the quality of the tweaks). We emphasize the importance of each of the modifications; in particular, we show that an improper initialization could lead to collisions and near-collisions for the full-round compression function. We analyze the permutation of the new hash function and give rotational attacks and internal differentials for the whole design. We conclude that the tweaks in BLAKE2 were chosen properly and that, despite having weaknesses in the theoretical attack frameworks of permutations and of compression functions with fully-chosen state input, the hash function of BLAKE2 has only a slightly lower security margin than BLAKE.
Efficient computation of addition-subtraction chains using generalized continued Fractions
The aim of this paper is to present a new way of computing short addition-subtraction chains using generalized continued fractions, in which subtraction is allowed. We recover the most commonly used methods for obtaining addition-subtraction chains. This method is not always optimal, but it gives short chains that are easy to compute.
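As a point of comparison, here is a sketch (ours, of a classical baseline rather than of the paper's continued-fraction method) of the signed-binary (NAF) recoding, which already yields an addition-subtraction chain by trading a multiplication for a division at each digit -1:

    from fractions import Fraction

    def naf(n):
        # Non-adjacent form: signed binary digits in {-1, 0, 1},
        # most significant digit first.
        digits = []
        while n:
            if n & 1:
                d = 2 - (n % 4)  # +1 or -1, chosen so (n - d) is divisible by 4
                n -= d
            else:
                d = 0
            digits.append(d)
            n >>= 1
        return digits[::-1]

    def pow_via_naf(x, n):
        # Follow the addition-subtraction chain induced by the NAF:
        # square for every digit, multiply on +1, divide on -1.
        acc = Fraction(1)
        for d in naf(n):
            acc *= acc
            if d == 1:
                acc *= x
            elif d == -1:
                acc /= x
        return acc

    # 31 = 2^5 - 1: five squarings and one division instead of the
    # four squarings and four multiplications of the plain binary method.
    assert pow_via_naf(Fraction(3), 31) == Fraction(3) ** 31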
Practical & Provably Secure Distance-Bounding
Distance-bounding is a practical solution for preventing relay attacks in security-sensitive contexts. Its applied cryptographic role is spreading fast and is clearly far-reaching, extending from contactless payments to remote car unlocking. However, security models for distance-bounding are not well established and, as far as we know, no existing protocol is proven to resist all classical attacks: distance-fraud, mafia-fraud, and terrorist-fraud. We herein amend this state of affairs, whilst maintaining the lightweight nature that makes these protocols appropriate for concrete applications. Firstly, we develop a general formalism for distance-bounding protocols and their security requirements. In fact, we also propose specifications of generalised frauds, stemming from the (attack-prone) multi-party scenarios. This entails our incorporation of newly advanced threats, e.g., distance-hijacking. Recently, Boureanu et al. proposed the SKI protocol. We herein extend it and prove its security. To attain resistance to terrorist-fraud, we put forward the use of a leakage scheme and of secret sharing, which we specialise and reinforce with additional requirements. In view of resistance to generalised mafia-frauds (and terrorist-frauds), we further introduce the notion of circular-keying for pseudorandom functions (PRFs); this notion models the employment of a PRF with possible linear reuse of the key. We also identify the need for PRF masking to fix common mistakes in existing security proofs/claims of distance-fraud security. We then enhance our design such that we guarantee resistance to terrorist-fraud in the presence of noise. To our knowledge, all of this gives rise to the first practical and provably secure class of distance-bounding protocols, even when run over noisy communications, which is indeed the real-life setting of deployed, time-critical cryptographic protocols.
Towards A Practical JCJ / Civitas Implementation
Internet voting continues to enjoy wide interest from both research and practice. Among the Internet voting schemes developed over the last decades, JCJ / Civitas stands out due to its innovative approach to resisting voter coercion. To achieve its ambitious goal, the scheme builds upon particularly restrictive assumptions and an abstract credential handling that render the scheme impractical for real-world use. At ARES 2012, Neumann and Volkamer presented a proposal which implements several of these assumptions (the voter-side assumptions) and the credential handling by the use of smart cards. While addressing these practical shortcomings of JCJ / Civitas, their proposal did not take performance into account, and accordingly its performance has not been evaluated. In the present work, we revise the ARES proposal from a performance perspective in a security-invariant manner. Based on the revisions proposed herein, we conclude that the revised ARES proposal is feasible for use in real-world elections.
Secret Key Cryptosystem based on Polar Codes over Binary Erasure Channel
This paper proposes an efficient secret key cryptosystem based on polar codes over the Binary Erasure Channel. We introduce a method, for the first time to our knowledge, to hide the generator matrix of the polar code from an attacker. In fact, our main goal is to achieve secure and reliable communication using finite-length polar codes. The proposed cryptosystem has a significant security advantage against chosen-plaintext attacks in comparison with the Rao-Nam cryptosystem. Also, the key length is decreased after applying a new compression algorithm. Moreover, this scheme benefits from a high code rate and proper error performance for reliable communication.
VABKS: Verifiable Attribute-based Keyword Search over Outsourced Encrypted Data
It is common nowadays for data owners to outsource their data to the cloud. Since the cloud cannot be fully trusted, the outsourced data should be encrypted. This however brings a range of problems, such as: How should a data owner grant search capabilities to the data users? How can the authorized data users search over a data owner's outsourced encrypted data? How can the data users be assured that the cloud faithfully executed the search operations on their behalf? Motivated by these questions, we propose a novel cryptographic solution, called verifiable attribute-based keyword search (VABKS). The solution allows a data user, whose credentials satisfy a data owner's access control policy, to (i) search over the data owner's outsourced encrypted data, (ii) outsource the tedious search operations to the cloud, and (iii) verify whether the cloud has faithfully executed the search operations. We formally define the security requirements of VABKS and describe a construction that satisfies them. Performance evaluation shows that the proposed schemes are practical and deployable.
HPAZ: a High-throughput Pipeline Architecture of ZUC in Hardware
In this paper, we propose a high-throughput pipeline architecture of the stream cipher ZUC, which has been included in the security portfolio of 3GPP LTE-Advanced. In the literature, the scheme with the highest throughput implements only the working stage of ZUC. The schemes which implement ZUC completely can only achieve a much lower throughput, since a self-feedback loop in the critical path significantly reduces the operating frequency. In this paper we design a mixed two-stage pipeline architecture which not only implements ZUC completely but also significantly raises the throughput. We have implemented our architecture on FPGA and ASIC. On the FPGA platform, the new architecture increases the throughput by 45% compared with the latest work, and it also saves nearly 12% of hardware resources. On 65nm ASIC technology, the throughput of the new design reaches 80 Gbps, which is 2.7 times faster than the fastest design in the literature, while also saving at least 40% of hardware resources. Compared with the fastest commercial design, the new architecture doubles its throughput. To the best of our knowledge, this is the best evaluation result to date. Hardware implementations of ZUC following our architecture should thus fit better in future LTE equipment.
Solving Terminal Revocation in EAC by Augmenting Terminal Authentication
In this paper we propose a solution to enable accurate terminal revocation in the Extended Access Control (EAC) protocol. Chaabouni and Vaudenay in [CV09] pointed out the need for an accurate revocation procedure, but did not provide a complete description of a solution. We aim at filling this gap. Our solution relies on augmenting terminal authentication with a t-out-of-l threshold signature provided by neighboring terminals. These terminals are in charge of checking the revocation status of the requested terminal. As terminals have an embedded real-time clock and more computational power than Machine Readable Travel Documents (MRTDs), they are better suited for checking revocation status.
Reset Indifferentiability and its Consequences
The equivalence of the random-oracle model and the ideal-cipher model has been studied in a long series of results. Holenstein, Künzler, and Tessaro (STOC, 2011) have recently completed the picture positively, where, roughly speaking, equivalence means mutual indifferentiability. However, under the stronger notion of reset indifferentiability this picture changes significantly, as Demay et al. (EUROCRYPT, 2013) and Luykx et al. (ePrint, 2012) demonstrate.
We complement these latter works in several ways. First, we show that any simulator satisfying the reset indifferentiability notion must be stateless and pseudo-deterministic. Using this characterization we show that, with respect to reset indifferentiability, two ideal models are either equivalent or incomparable; that is, neither model can be strictly stronger than the other. In the case of the random-oracle model and the ideal-cipher model, this implies that the two are incomparable. Finally, we examine weaker notions of reset indifferentiability that, while not allowing composition in general, allow composition for a large class of multi-stage games. Here we show that the seemingly much weaker notion of 1-reset indifferentiability proposed by Luykx et al. is equivalent to reset indifferentiability. Hence, the impossibility of coming up with a reset-indifferentiable construction transfers to the setting where only one reset is permitted, thereby re-opening the quest for an achievable and meaningful notion in between the two variants.
Exponentiating in Pairing Groups
We study exponentiations in pairing groups for the most common security levels and show that, although the Weierstrass model is preferable for pairing computation, it can be worthwhile to map to alternative curve representations for the non-pairing group operations in protocols.
Deduction Soundness: Prove One, Get Five for Free
Most computational soundness theorems deal with a limited number of primitives, thereby limiting their applicability. The notion of deduction soundness of Cortier and Warinschi (CCS'11) aims to facilitate soundness theorems for richer frameworks via composition results: deduction soundness extends, generically, with asymmetric encryption and public data structures. Unfortunately, that paper also hints at rather serious limitations regarding further composition results: composability with digital signatures seems to be precluded.
In this paper we provide techniques for bypassing the perceived limitations of deduction soundness and demonstrate that it enjoys vastly improved composition properties. More precisely, we show that a deduction sound implementation can be modularly extended with all of the basic cryptographic primitives (symmetric/asymmetric encryption, message authentication codes, digital signatures, and hash functions). We thus obtain the first soundness framework that allows for the joint use of multiple instances of all of the basic primitives.
In addition, we show how to overcome an important restriction of the bare deduction soundness framework which forbids sending encrypted secret keys. In turn, this prevents its use for the analysis of a large class of interesting protocols (e.g. key exchange protocols). We allow for more liberal uses of keys as long as they are hidden in a sense that we also define. All primitives typically used to send secret data (symmetric/asymmetric encryption) satisfy our requirement which we also show to be preserved under composition.
On the Security of Group-based Proxy Re-encryption Scheme
Proxy re-encryption (PRE) allows a semi-trusted proxy to convert a ciphertext intended for Alice into a ciphertext for Bob without learning anything about the underlying plaintext. Chunbo Ma et al. have proposed a group-based proxy re-encryption scheme to convert a ciphertext from one group to another. Any group member can independently decrypt the ciphertexts encrypted to its group. In their paper, the authors gave a security proof claiming that the scheme is secure against adaptive chosen ciphertext attack. However, we highlight the flaws in their scheme and show that it is not secure against adaptive chosen ciphertext attack. To this end, we construct an adversary who issues only one decryption oracle query and breaks the security of their scheme with non-negligible advantage.
Another Nail in the Coffin of White-Box AES Implementations
The goal of white-box cryptography is to design implementations of common cryptographic algorithms (e.g. AES) that remain secure against an attacker with full control of the implementation and execution environment. This concept was put forward a decade ago by Chow et al. (SAC 2002), who proposed the first white-box implementation of AES. Since then, several works have been dedicated to the design of new implementations and/or the breaking of existing ones.
In this paper, we describe a new attack against the original implementation of Chow et al. (SAC 2002), which efficiently recovers the AES secret key as well as the private external encodings in complexity $2^{22}$. Compared to the previous attack due to Billet et al. (SAC 2004), of complexity $2^{30}$, our attack is not only more efficient but also simpler to implement. Then, we show that the \emph{last} candidate white-box AES implementation due to Karroumi (ICISC 2010) can be broken by a direct application of either Billet et al.'s attack or ours. Specifically, we show that for any given secret key, the overall implementation has the \emph{exact same} distribution as the implementation of Chow et al., making them both vulnerable to the same attacks.
By improving the state of the art of white-box cryptanalysis and putting forward new attack techniques, we believe our work brings new insights on the failure of existing white-box implementations, which could be useful for the design of future solutions.
How to Use Indistinguishability Obfuscation: Deniable Encryption, and More
We introduce a new technique, that we call punctured programs, to apply indistinguishability obfuscation towards cryptographic problems. We use this technique to carry out a systematic study of the applicability of indistinguishability obfuscation to a variety of cryptographic goals. Along the way, we resolve the 16-year-old open question of Deniable Encryption, posed by Canetti, Dwork, Naor, and Ostrovsky in 1997: In deniable encryption, a sender who is forced to reveal to an adversary both her message and the randomness she used for encrypting it should be able to convincingly provide ``fake'' randomness that can explain any alternative message that she would like to pretend that she sent. We resolve this question by giving the first construction of deniable encryption that does not require any pre-planning by the party that must later issue a denial.
In addition, we show the generality of our punctured programs technique by also constructing a variety of core cryptographic objects from indistinguishability obfuscation and one-way functions (or close variants). In particular we obtain: public key encryption, short ``hash-and-sign'' selectively secure signatures, chosen-ciphertext secure public key encryption, non-interactive zero knowledge proofs (NIZKs), injective trapdoor functions, and oblivious transfer. These results suggest the possibility of indistinguishability obfuscation becoming a ``central hub'' for cryptography.
Secret Disclosure attack on Kazahaya, a Yoking-Proof For Low-Cost RFID Tags
Peris-Lopez et al. recently provided some guidelines that should be followed to design a secure yoking-proof protocol. In addition, conforming to those guidelines and to EPC C1 G2, they presented a yoking-proof for low-cost RFID tags, named Kazahaya. However, in this letter, we scrutinize its security, showing how a passive adversary can retrieve the secret parameters of a patient's tag at the cost of O(2^16) off-line PRNG evaluations. Given the tag's secret parameters, any security claims are ruined. Nevertheless, to show other weaknesses of the protocol and to rule out any possible improvement by increasing the length of the used PRNG, we present a forgery attack showing that a proof generated at time t_n can be used to forge a valid proof for any desired time t_j. The success probability of this attack is 1 and its complexity is negligible.
Secure Channel Coding Schemes based on Polar Codes
In this paper, we propose two new frameworks for joint encryption-encoding schemes based on polar codes, namely efficient and secure joint secret/public key encryption channel coding schemes. The issue of using a new coding structure, i.e., polar codes, in McEliece-like and RN-like schemes is addressed. Cryptanalysis shows that the proposed schemes have an acceptable level of security with a relatively smaller key size in comparison with previous works. The results indicate that both schemes provide efficient error performance and benefit from a higher code rate, which can approach the channel capacity for large enough polar codes. The most important property of the proposed schemes is that if we increase the block length of the code, we can have a higher code rate and a higher level of security without significant changes in the key size of the scheme. These characteristics make the proposed schemes suitable for high-speed communications, such as deep space communication systems.
Candidate Indistinguishability Obfuscation and Functional Encryption for all circuits
In this work, we study indistinguishability obfuscation and functional encryption for general circuits:
Indistinguishability obfuscation requires that given any two equivalent circuits C_0 and C_1 of similar size, the obfuscations of C_0 and C_1 should be computationally indistinguishable.
In functional encryption, ciphertexts encrypt inputs x and keys are issued for circuits C. Using the key SK_C to decrypt a ciphertext CT_x = Enc(x) yields the value C(x) but does not reveal anything else about x. Furthermore, no collusion of secret key holders should be able to learn anything more than the union of what they can each learn individually.
We give constructions for indistinguishability obfuscation and functional encryption that support all polynomial-size circuits. We accomplish this goal in three steps:
- We describe a candidate construction for indistinguishability obfuscation for NC1 circuits. The security of this construction is based on a new algebraic hardness assumption. The candidate and assumption use a simplified variant of multilinear maps, which we call Multilinear Jigsaw Puzzles.
- We show how to use indistinguishability obfuscation for NC1 together with Fully Homomorphic Encryption (with decryption in NC1) to achieve indistinguishability obfuscation for all circuits.
- Finally, we show how to use indistinguishability obfuscation for circuits, public-key encryption, and non-interactive zero knowledge to achieve functional encryption for all circuits. The functional encryption scheme we construct also enjoys succinct ciphertexts, which enables several other applications.
Revisiting the BGE Attack on a White-Box AES Implementation
White-box cryptography aims to protect the secret key of a cipher in an environment in which an adversary has full access to the implementation of the cipher and its execution environment. In 2002, Chow, Eisen, Johnson and van Oorschot proposed a white-box implementation of AES. In 2004, Billet, Gilbert and Ech-Chatbi presented an efficient attack (referred to as the BGE attack) on this implementation, extracting its embedded AES key with a work factor of $2^{30}$. In 2012, Tolhuizen presented an improvement of the most time-consuming phase of the BGE attack. This paper presents several improvements to the other phases of the BGE attack. The paper shows that the overall work factor of the BGE attack is reduced to $2^{22}$ when all improvements are implemented. In 2010, Karroumi presented a white-box AES implementation that is designed to withstand the BGE attack. This paper shows that the implementations of Karroumi and Chow \emph{et al.} are the same. As a result, Karroumi's white-box AES implementation is vulnerable to the attack it was designed to resist.
A Note On the Storage Requirement for AKS Primality Testing Algorithm
We remark that the AKS primality testing algorithm needs about 1,000,000,000 GB (gigabytes) of storage space for a number of 1024 bits. Such a storage requirement is hard to meet in practice. To the best of our knowledge, it is impossible for current operating systems to write and read data in such a huge storage space. Thus, the running time of the AKS algorithm should not be estimated, as is usual, simply in terms of the number of arithmetic operations.
Flush+Reload: a High Resolution, Low Noise, L3 Cache Side-Channel Attack
Sharing memory pages between non-trusting processes is a common method of reducing the memory footprint of multi-tenanted systems. In this paper we demonstrate that, due to a weakness in the Intel x86 processors, page sharing exposes processes to information leaks. We present FLUSH+RELOAD, a cache side-channel attack technique that exploits this weakness to monitor access to memory lines in shared pages. Unlike previous cache side-channel attacks, FLUSH+RELOAD targets the Last-Level Cache (i.e. L3 on processors with three cache levels). Consequently, the attack program and the victim do not need to share the execution core.
We demonstrate the efficacy of the FLUSH+RELOAD attack by using it to extract the private encryption keys from a victim program running GnuPG 1.4.13. We tested the attack both between two unrelated processes in a single operating system and between processes running in separate virtual machines. On average, the attack is able to recover 96.7% of the bits of the secret key by observing a single signature or decryption round.
Dynamic Runtime Methods to Enhance Private Key Blinding
In this paper we propose new methods to blind exponents
used in RSA and in elliptic curves based algorithms. Due to classical
differential power analysis (DPA and CPA), a lot of countermeasures to
protect exponents have been proposed since 1999 Kocher [20] and by
Coron [13]. However, these blinding methods present some drawbacks
regarding execution time and memory cost. It also got some weaknesses.
Indeed they could also be targeted by some attacks such as The Carry
Leakage on the Randomized Exponent proposed by P.A. Fouque et al.
in [23] or inefficient against some others analysis such as Single Power
Analysis. In this article, we explain how the most used method could
be exploited when an attacker can access test samples. We target here
new dynamic blinding methods in order to prevent from any learning
phase and also to improve the resistance against the latest side channel
analyses published.
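For background, a minimal sketch (ours) of the classical additive exponent blinding in the Kocher/Coron line that such works build on; the toy RSA parameters are for illustration only:

    import secrets

    def blinded_rsa_exp(m, d, N, phi_N, rand_bits=64):
        # Randomize the exponent on every call so that an attacker cannot
        # average traces over a fixed exponent:
        # x^(d + r*phi(N)) = x^d (mod N) whenever gcd(x, N) = 1.
        r = secrets.randbits(rand_bits)
        return pow(m, d + r * phi_N, N)

    # Toy parameters, far too small for real use: N = 3233 = 61 * 53.
    p, q = 61, 53
    N, phi_N = p * q, (p - 1) * (q - 1)
    e, d = 17, 2753  # e * d = 1 (mod phi(N))
    m = 1234
    assert blinded_rsa_exp(m, d, N, phi_N) == pow(m, d, N)

The dynamic methods proposed in the paper aim at refreshing such blinding across executions, to defeat the learning-phase attacks described above.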
Weakness of F_{3^{6*509}} for Discrete Logarithm Cryptography
In 2013, Joux, and then Barbulescu, Gaudry, Joux and Thomé, presented new algorithms for computing discrete logarithms in finite fields of small and medium characteristic. We show that these new algorithms render the finite field F_{3^{6*509}} = F_{3^{3054}} weak for discrete logarithm cryptography in the sense that discrete logarithms in this field can be computed significantly faster than with the previous fastest algorithms. Our concrete analysis shows that the supersingular elliptic curve over F_{3^{509}} with embedding degree 6 that had been considered for implementing pairing-based cryptosystems at the 128-bit security level in fact provides only a significantly lower level of security. Our work provides a convenient framework and tools for performing a concrete analysis of the new discrete logarithm algorithms and their variants.
Implementing Lightweight Block Ciphers on x86 Architectures
Lightweight block ciphers are designed so as to fit into very constrained environments, but usually not really with software performance in mind. For classical lightweight applications where many constrained devices communicate with a server, it is also crucial that the cipher has good software performance on the server side. Recent work has shown that bitslice implementations applied to Piccolo and PRESENT led to very good software speeds, thus making lightweight ciphers interesting for cloud applications. However, we remark that bitslice implementations might not be interesting for some situations, where the amount of data to be enciphered at a time is usually small, and very little work has been done on non-bitslice implementations.
In this article, we explore general software implementations of lightweight ciphers on x86 architectures, with a special focus on LED, Piccolo and PRESENT. First, we analyze table-based implementations, and we provide a theoretical model to predict the behavior of various possible trade-offs depending on the processor's cache latency profile. We obtain the fastest table-based implementations for our lightweight ciphers, which is of interest for legacy processors. Secondly, we apply to our portfolio of primitives the vperm implementation trick for 4-bit Sboxes, which gives good performance, extra side-channel protection, and is quite fit for many lightweight primitives. Finally, we investigate bitslice implementations, analyzing various costs which are usually neglected (bitsliced form (un)packing, key schedule, etc.), but which must be taken into account for many lightweight applications. We finally discuss which type of implementation seems best suited depending on the application profile.
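As a flavor of the table-based trade-offs analyzed here, the following sketch (ours, not code from the paper) merges two 4-bit S-box lookups into a single 8-bit table, using the public PRESENT S-box; larger tables mean fewer lookups but more cache pressure, which is exactly the kind of trade-off a cache latency model captures:

    PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                    0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

    # 256-entry table applying the S-box to both nibbles of a byte at once.
    SBOX8 = [(PRESENT_SBOX[b >> 4] << 4) | PRESENT_SBOX[b & 0xF]
             for b in range(256)]

    def sbox_layer(state64):
        # PRESENT's S-box layer on a 64-bit state, one byte lookup at a time
        # (8 lookups into SBOX8 instead of 16 lookups into PRESENT_SBOX).
        out = 0
        for i in range(8):
            out |= SBOX8[(state64 >> (8 * i)) & 0xFF] << (8 * i)
        return out

    print(hex(sbox_layer(0x0123456789ABCDEF)))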
Sequential message authentication code without random oracles
Katz et al. provided a generic transform to construct aggregate message authentication codes and proved a lower bound on the length of an aggregate MAC tag. The lower bound shows that the required tag length is at least linear in the number of messages when fast verification, such as constant or logarithmic computation overhead, is required. Aggregate message authentication codes are useful in settings such as mobile ad-hoc networks, where devices are resource-constrained and energy cost is at a premium. In this paper, we introduce the notion of a sequential aggregate message authentication code (SAMAC). We present a security model for this notion, under unforgeability against chosen message and verification query attacks, and construct an efficient SAMAC scheme by extending a number-theoretic MAC construction due to Dodis et al. We prove the security of our SAMAC scheme under the CDH assumption in the standard model. Our SAMAC scheme improves upon the lower bound with the help of the underlying algebraic structure. Performance analysis shows that our SAMAC scheme yields constant computation for the verifier as well as a fixed length for the aggregate.
Optimally Anonymous and Transferable Conditional E-cash
Transferable conditional electronic cash (e-cash) allows a payer to spend e-cash based on an outcome not known in advance. It also allows a payee to spend the e-cash further, or to deposit it at a bank, based on the future outcome. Among the security properties, the anonymity of the payer has been widely studied. However, the payer is linkable in the existing conditional e-cash schemes. This paper presents the first optimally anonymous and transferable conditional e-cash system, based on two recent cryptographic primitives, i.e., the Groth-Sahai (GS) proof system and commuting signatures, to obtain the user's unlinkability and optimal anonymity. A publisher is introduced to publish the conditions, and is formalized here for the first time. By dividing the deposit protocol into two parts, the anonymity of the user is preserved in the deposit protocol. Compared with the existing conditional e-cash schemes, this scheme has constant size for computation and communication. Finally, we give the security proof in the standard model.
On Fair Exchange, Fair Coins and Fair Sampling
We study various classical secure computation problems in the context of fairness, and relate them with each other. We also systematically study fair sampling problems (i.e., inputless functionalities) and discover three levels of complexity for them.
Our results include the following:
-Fair exchange cannot be securely reduced to the problem of fair coin-tossing by an r-round protocol, except with an error that is $\Omega(1/r)$.
-Finite fair {\em sampling} problems with rational probabilities can all be reduced to fair coin-tossing and unfair 2-party computation (or equivalently, under computational assumptions). Thus, for this class of functionalities, fair coin-tossing is complete.
-The only sampling problems which have fair protocols without any fair setup are the trivial ones, in which the two parties can sample their outputs independently. All others have an $\Omega(1/r)$ error, roughly matching an upper bound for fair sampling from Moran et al. (TCC 2009).
-We study communication-less protocols for sampling, given another sampling problem as setup, since such protocols are inherently fair. We use spectral graph theoretic tools to show that it is impossible to reduce a sampling problem with {\em common information} (like fair coin-tossing) to a sampling problem without (like 'noisy' coin-tossing, which has a small probability of disagreement).
The last result above is a slightly sharper version of a classical result by Witsenhausen from 1975. Our proof reveals the connection between the tool used by Witsenhausen, namely 'maximal correlation', and spectral graph theoretic tools like the Cheeger inequality.
On Stochastic Security of Java Crypto and NIST DRBG Pseudorandom Sequences
Cryptographic primitives such as secure hash functions (e.g., SHA1, SHA2, and SHA3) and symmetric key block ciphers (e.g., AES and TDES) have been commonly used to design pseudorandom generators with counter modes (e.g., in the Java Crypto Library and in the NIST SP800-90A standards). It is assumed that if these primitives are secure, then the pseudorandom generators based on them are also secure. However, no systematic research and analysis have been done to support this assumption. Based on complexity-theoretic results for pseudorandom sequences, this paper analyzes stochastic properties of long sequences produced by the hash-function-based pseudorandom generators DRBG from NIST SP800-90A and SHA1PRNG from the Java Crypto Library. Our results show that none of these sequences satisfy the law of the iterated logarithm (LIL), which holds for polynomial-time pseudorandom sequences. Our results also show that if the seeds and counters for the pseudorandom generators are not appropriately chosen, then the generated sequences have strongly biased values in LIL tests and can be distinguished from uniformly chosen sequences with high probability. Based on these results, appropriate seeding and counter methods are proposed for pseudorandom generator designs. The results in this paper reveal some ``non-random'' behavior of SHA1, SHA2, and of the recently announced SHA3.
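For intuition, a minimal sketch (our reading of the standard LIL normalization; the paper's exact test statistic may differ): the LIL says that for a truly random bit sequence, the normalized deviation $S_{lil}(n) = (2\sum_{i \le n} x_i - n)/\sqrt{2n \ln\ln n}$ has limsup $+1$ and liminf $-1$:

    import math, random

    def lil_statistic(bits):
        # Normalized deviation of the ones-count from n/2; for random bits
        # this fluctuates within [-1, 1] (asymptotically, by the LIL).
        n = len(bits)
        return (2 * sum(bits) - n) / math.sqrt(2 * n * math.log(math.log(n)))

    random.seed(1)
    bits = [random.getrandbits(1) for _ in range(1 << 20)]
    print(lil_statistic(bits))  # well inside [-1, 1] for a good generator

A generator whose outputs systematically push this statistic outside the LIL range, or bias it to one side, is distinguishable from a uniformly random sequence.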
Revisiting Conditional Rényi Entropies and Generalizing Shannon's Bounds in Information Theoretically Secure Encryption
Information theoretic cryptography is discussed based on conditional Rényi entropies. Our discussion focuses not only on cryptography but also on the definitions of conditional Rényi entropies and the related information theoretic inequalities. First, we revisit conditional Rényi entropies and clarify what kind of properties are required and actually satisfied. Then, we propose security criteria based on Rényi entropies, which suggest deep relations between (conditional) Rényi entropies and error probabilities under several guessing strategies. Based on these results, a unified proof of impossibility, namely of lower bounds on key sizes, is derived from conditional Rényi entropies. Our model and lower bounds cover Shannon's perfect secrecy as well as the min-entropy-based encryption presented by Dodis, and by Alimomeni and Safavi-Naini. Finally, a new optimal symmetric key encryption is proposed which achieves our lower bounds.
Pushing the Limits of SHA-3 Hardware Implementations to Fit on RFID
There exists a broad range of RFID protocols in the literature that propose hash functions as cryptographic primitives. Since Keccak has been selected as the winner of the NIST SHA-3 competition in 2012, there is the question of how far we can push the limits of Keccak to fulfill the stringent requirements of passive low-cost RFID. In this paper, we address this question by presenting a hardware implementation of Keccak that aims for lowest power and lowest area. Our smallest (full-state) design requires only 2,927 GEs (for designs with external memory available) and 5,522 GEs (total size including memory). It has a power consumption of 12.5 $\mu$W at 1 MHz on a low-leakage 130 nm CMOS process technology. As a result, we provide a design that needs 40% fewer resources than related work. Our design is even smaller than the smallest SHA-1 and SHA-2 implementations.
Clustering Algorithms for Non-Profiled Single-Execution Attacks on Exponentiations
Most implementations of public key cryptography employ exponentiation algorithms. Side-channel attacks on secret exponents are typically bound to the leakage of single executions, due to cryptographic protocols or side-channel countermeasures such as blinding. We propose, for the first time, to use a well-established class of algorithms, namely unsupervised clustering algorithms such as k-means, to attack cryptographic exponentiations and recover secret exponents without any prior profiling, manual tuning or leakage models. Not requiring profiling is of significant advantage to attackers, as is the use of well-established algorithms. The proposed non-profiled single-execution attack is able to exploit any available single-execution leakage and provides a straightforward option to combine simultaneous measurements to increase the available leakage. We present empirical results from attacking an FPGA-based elliptic curve scalar multiplication using the k-means clustering algorithm, and successfully exploit location-based leakage from high-resolution electromagnetic field measurements to achieve a low remaining brute-force complexity of the secret exponent. A simulated multi-channel measurement even enables an error-free recovery of the exponent.
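To illustrate the attack idea on simulated data (our own toy leakage model with a small hand-rolled 2-means; the paper works on real electromagnetic measurements):

    import random

    random.seed(0)
    secret_bits = [random.getrandbits(1) for _ in range(64)]
    # One noisy leakage sample per exponentiation step; the two operation
    # types (e.g. square vs. multiply) differ only in their mean.
    leak = [b + random.gauss(0, 0.3) for b in secret_bits]

    def two_means(xs, iters=20):
        # Unsupervised 1-D k-means with k = 2: no profiling, no leakage model.
        c0, c1 = min(xs), max(xs)
        for _ in range(iters):
            g0 = [x for x in xs if abs(x - c0) <= abs(x - c1)]
            g1 = [x for x in xs if abs(x - c0) > abs(x - c1)]
            c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
        return [0 if abs(x - c0) <= abs(x - c1) else 1 for x in xs]

    guess = two_means(leak)
    # Cluster labels are arbitrary, so the attacker tries both assignments;
    # any remaining wrong bits are fixed by brute force.
    errors = min(sum(g != b for g, b in zip(guess, secret_bits)),
                 sum(g == b for g, b in zip(guess, secret_bits)))
    print("wrong bits:", errors, "of", len(secret_bits))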
A Uniform Min-Max Theorem with Applications in Cryptography
We present a new, more constructive proof of von Neumann's Min-Max Theorem for two-player zero-sum games --- specifically, an algorithm that builds a near-optimal mixed strategy for the second player from several best responses of the second player to mixed strategies of the first player. The algorithm extends previous work of Freund and Schapire (Games and Economic Behavior '99), with the advantage that the algorithm runs in poly(n) time even when a pure strategy for the first player is a distribution chosen from a set of distributions over {0,1}^n. This extension enables a number of additional applications in cryptography and complexity theory, often yielding uniform security versions of results that were previously only proved for nonuniform security (due to use of the non-constructive Min-Max Theorem).
We describe several applications, including: a more modular and improved uniform version of Impagliazzo's Hardcore Theorem (FOCS '95); regularity theorems that provide efficient simulation of distributions within any sufficiently nice convex set (extending a result of Trevisan, Tulsiani and Vadhan (CCC '09)); an improved version of the Weak Regularity Lemma of Frieze and Kannan; a Dense Model Theorem for uniform algorithms; and showing impossibility of constructing Succinct Non-Interactive Arguments (SNARGs) via black-box reductions under uniform hardness assumptions (using techniques from Gentry and Wichs (STOC '11) for the nonuniform setting).
Fast Exhaustive Search for Quadratic Systems in $\mathbb{F}_2$ on FPGAs --- Extended Version
In 2010, Bouillaguet et al. proposed an efficient solver for polynomial systems over $\mathbb{F}_2$ that trades memory for speed. As a result, 48 quadratic equations in 48 variables can be solved on a graphics card (GPU) in 21 minutes. The research question that we would like to answer in this paper is how specifically designed hardware performs on this task. We approach the answer by solving multivariate quadratic systems on reconfigurable hardware, namely Field-Programmable Gate Arrays (FPGAs). We show that, although the algorithm proposed by Bouillaguet et al. has a better asymptotic time complexity than traditional enumeration algorithms, it does not have a better asymptotic complexity in terms of silicon area. Nevertheless, our FPGA implementation consumes 25 times less energy than their GPU implementation. This is a significant improvement, not to mention that the monetary cost per unit of computational power for FPGAs is generally much lower than that of GPUs.
Efficient Cryptosystems From $2^k$-th Power Residue Symbols
Goldwasser and Micali (1984) highlighted the importance of randomizing the plaintext for public-key encryption and introduced the notion of semantic security. They also realized a cryptosystem meeting this security notion under the standard complexity assumption of deciding quadratic residuosity modulo a composite number. The Goldwasser-Micali cryptosystem is simple and elegant but is quite wasteful in bandwidth when encrypting large messages. A number of works followed to address this issue and proposed various modifications.
This paper revisits the original Goldwasser-Micali cryptosystem using 2^k-th power residue symbols. The so-obtained cryptosystems appear as a very natural generalization for k >= 2 (the case k = 1 corresponds exactly to the Goldwasser-Micali cryptosystem). Advantageously, they are efficient in both bandwidth and speed; in particular, they allow for fast decryption. Further, the cryptosystems described in this paper inherit the useful features of the original cryptosystem (like its homomorphic property) and are shown to be secure under a similar complexity assumption. As a prominent application, this paper describes an efficient lossy trapdoor function based thereon.
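To make the starting point concrete, here is a minimal sketch (ours, with toy parameters far too small for real use) of the original Goldwasser-Micali scheme, i.e., the k = 1 case of the generalization studied in this paper:

    import secrets
    from math import gcd

    # Toy primes with p = q = 3 (mod 4), so that y = N - 1 = -1 is a
    # quadratic non-residue mod N with Jacobi symbol +1.
    p, q = 499, 547
    N = p * q
    y = N - 1

    def encrypt(bit):
        # A bit b is encrypted as y^b * r^2 mod N for a random unit r.
        while True:
            r = secrets.randbelow(N - 2) + 2
            if gcd(r, N) == 1:
                break
        return (pow(y, bit, N) * r * r) % N

    def is_qr_mod(c, prime):
        # Euler's criterion: c is a square mod `prime` iff c^((prime-1)/2) = 1.
        return pow(c, (prime - 1) // 2, prime) == 1

    def decrypt(c):
        # c is a quadratic residue mod N iff it is one mod both p and q.
        return 0 if (is_qr_mod(c, p) and is_qr_mod(c, q)) else 1

    assert all(decrypt(encrypt(b)) == b for b in (0, 1, 1, 0))

Each ciphertext carries a single bit, which is exactly the bandwidth waste that the $2^k$-th power residue generalization addresses.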
Full Domain Hash from (Leveled) Multilinear Maps and Identity-Based Aggregate Signatures
In this work, we explore building constructions with full domain hash structure, but with standard model proofs that do not employ the random oracle heuristic. The launching point for our results will be the utilization of a ``leveled'' multilinear map setting for which Garg, Gentry, and Halevi (GGH) recently gave an approximate candidate. Our first step is the creation of a standard model signature scheme that exhibits the structure of the Boneh, Lynn and Shacham signatures. In particular, this gives us a signature that admits unrestricted aggregation.
We build on this result to offer the first *identity-based* aggregate signature scheme that admits unrestricted aggregation. In our construction, an arbitrary-sized set of signatures on identity/message pairs can be aggregated into a single group element, which authenticates the entire set. The identity-based setting has important advantages over regular aggregate signatures in that it eliminates the considerable burden of having to store, retrieve or verify a set of verification keys, and minimizes the total cryptographic overhead that must be attached to a set of signer/message pairs. While identity-based signatures are trivial to achieve, their aggregate counterparts are not. To the best of our knowledge, no prior candidate for realizing unrestricted identity-based aggregate signatures exists in either the standard or random oracle models.
A key technical idea underlying these results is the realization of a hash function with a Naor-Reingold-type structure that is publicly computable using repeated application of the multilinear map. We present our results in a generic ``leveled'' multilinear map setting and then show how they can be translated to the GGH graded algebras analogue of multilinear maps.
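To give the flavor of that hash (sketched here in generic leveled-multilinear notation; the exponent names are ours): publish level-one encodings $A_{i,b} = g_1^{a_{i,b}}$ for random $a_{i,b}$, and hash an $\ell$-bit message by feeding one encoding per message bit through the map:

\[
H(m_1 \cdots m_\ell) \;=\; e\big(A_{1,m_1}, A_{2,m_2}, \ldots, A_{\ell,m_\ell}\big) \;=\; g_\ell^{\;\prod_{i=1}^{\ell} a_{i,m_i}},
\]

which anyone can compute publicly with $\ell-1$ applications of the map, and which has exactly the Naor-Reingold algebraic structure that the security proofs exploit.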
On Symmetric Encryption with Distinguishable Decryption Failures
We propose to relax the assumption that decryption failures are indistinguishable in security models for symmetric encryption. Our main purpose is to build models that better reflect the reality of cryptographic implementations, and to surface the security issues that arise from doing so. We systematically explore the consequences of this relaxation, with some surprising results for our understanding of this basic cryptographic primitive. Our results should be useful to practitioners who wish to build accurate models of their implementations and then analyse them. They should also be of value to more theoretical cryptographers proposing new encryption schemes, who, in an ideal world, would be compelled by this work to consider the possibility that their schemes might leak more than simple decryption failures.
How to Sign Paper Contracts? Conjectures & Evidence Related to Equitable & Efficient Collaborative Task Scheduling
This paper explores ways of performing commutative tasks by $N$ parties. Tasks are defined as {\sl commutative} if the order at which parties perform tasks can be freely changed without affecting the final result. It is easy to see that arbitrary $N$-party commutative tasks cannot be completed in less than $N-1$ basic time units.
We conjecture that arbitrary $N$-party commutative tasks cannot be performed in $N-1$ time units by exchanging fewer than $4N-6$ messages, and provide computational evidence in favor of this conjecture. We also explore the most equitable commutative task protocols.
Practical-Time Attacks Against Reduced Variants of MISTY1
MISTY1 is a block cipher designed by Matsui in 1997. It is widely deployed in Japan where it is an e-government standard, and is recognized internationally as a NESSIE-recommended cipher as well as
an ISO standard and an RFC. Moreover, MISTY1 was selected to be the blueprint on top of which KASUMI, the GSM/3G block cipher, was based. Since its introduction, and especially in recent years, MISTY1 was subjected to extensive cryptanalytic efforts, which resulted in numerous attacks on its reduced variants. Most of these attacks aimed at maximizing the number of attacked rounds, and as a result, their complexities are highly impractical.
In this paper we pursue another direction, by focusing on attacks with a practical time complexity. The best previously-known attacks with practical complexity against MISTY1 could break either 4 rounds (out of 8), or 5 rounds in a modified variant in which some of the FL functions are removed. We present an attack on 5-round MISTY1 with all the FL functions present whose time complexity is 2^38 encryptions. When the FL functions are removed, we present a devastating (and experimentally verified) related-key attack on the full 8-round variant, requiring only 2^18 data and time.
While our attacks clearly do not compromise the security of the full
MISTY1, they expose several weaknesses in MISTY1’s components, and
improve our understanding of its security. Moreover, future designs that rely on MISTY1 as their base should take these issues into close consideration.
Security of the Misty Structure Beyond the Birthday Bound
In this paper, we first prove beyond-birthday-bound security for the Misty structure. Specifically, we show that an $r$-round Misty structure is secure against CCA attacks up to $O(2^{\frac{rn}{r+7}})$ query complexity, where $n$ is the size of each round permutation. So for any $\epsilon>0$, a sufficient number of rounds would guarantee the security of the Misty structure up to $2^{n(1-\epsilon)}$ query complexity.
DupLESS: Server-Aided Encryption for Deduplicated Storage
Cloud storage service providers such as Dropbox, Mozy, and others perform deduplication to save space by only storing one copy of each file uploaded. Should clients conventionally encrypt their files, however, savings are lost. Message-locked encryption (the most prominent manifestation of which is convergent encryption) resolves this tension. However, it is inherently subject to brute-force attacks that can recover files falling into a known set. We propose an architecture that provides secure deduplicated storage resisting brute-force attacks, and realize it in a system called DupLESS. In DupLESS, clients encrypt under message-based keys obtained from a key-server via an oblivious PRF protocol. This enables clients to store encrypted data with an existing service, have the service perform deduplication on their behalf, and yet achieve strong confidentiality guarantees. We show that encryption for deduplicated storage can achieve performance and space savings close to that of using the storage service with plaintext data.
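For intuition, here is a toy, stdlib-only sketch of message-locked (convergent) encryption. This is not DupLESS itself and is not secure (SHA-256 in counter mode stands in for a real cipher): the key is derived from the message, so equal plaintexts yield equal ciphertexts and deduplicate, which is also exactly why an attacker can confirm guesses for files drawn from a known set.

```python
import hashlib

def _stream(key: bytes, n: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode (illustration only, NOT secure).
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def convergent_encrypt(msg: bytes) -> bytes:
    key = hashlib.sha256(msg).digest()          # key derived from the message itself
    return bytes(a ^ b for a, b in zip(msg, _stream(key, len(msg))))

# Identical files produce identical ciphertexts, so the server can deduplicate:
assert convergent_encrypt(b"same file") == convergent_encrypt(b"same file")
# ...but an attacker holding a candidate plaintext can confirm it the same way.
```

DupLESS's fix, per the abstract, is to derive the key not as H(M) but as an oblivious PRF of H(M) under the key server's secret, so offline brute force no longer works without the server's cooperation.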
Faster 128-EEA3 and 128-EIA3 Software
The 3GPP Task Force recently supplemented mobile LTE network security with an additional set of confidentiality and integrity algorithms, namely 128-EEA3 and 128-EIA3 built on top of ZUC, a new keystream generator. We propose two novel techniques to improve the software performance of these algorithms. We show how delayed modular reduction increases the efficiency of the LFSR feedback function, yielding performance gains for ZUC and thus both 128-EEA3 and 128-EIA3. We also show how to leverage carryless multiplication to evaluate the universal hash function making up the core of 128-EIA3. Our software implementation results on Qualcomm's Hexagon DSP architecture indicate significant performance gains when employing these techniques: up to roughly a 2-fold and 2.5-fold throughput improvement for 128-EEA3 and 128-EIA3, respectively.
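To illustrate delayed modular reduction (a sketch using the ZUC LFSR feedback taps from the public specification; treat the details as illustrative): rather than reducing modulo the Mersenne prime $2^{31}-1$ after each of the five tap additions, one accumulates everything in a single wide integer and folds once at the end.

```python
M = (1 << 31) - 1  # ZUC's LFSR works over GF(2^31 - 1), a Mersenne prime

def fold(v):
    # Since 2^31 = 1 (mod M), high bits can simply be added back in.
    while v >> 31:
        v = (v & M) + (v >> 31)
    return v

def lfsr_feedback(s):
    # Delayed reduction: one wide accumulation and one fold, instead of a
    # modular reduction after every single addition.
    # (The ZUC spec additionally maps a zero result to 2^31 - 1; omitted here.)
    acc = ((s[15] << 15) + (s[13] << 17) + (s[10] << 21) +
           (s[4] << 20) + (s[0] << 8) + s[0])
    return fold(acc)
```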
Toeplitz matrix-vector product based GF(2^n) shifted polynomial basis multipliers for all irreducible pentanomials
Besides the Karatsuba algorithm, optimal Toeplitz matrix-vector product (TMVP) formulae are another approach to designing GF(2^n) subquadratic multipliers. However, when GF(2^n) elements are represented using a shifted polynomial basis, this approach is currently applicable only to GF(2^n)s generated by all irreducible trinomials and a special type of irreducible pentanomials, not to all general irreducible pentanomials. The reason is that no transformation matrix, which transforms the Mastrovito matrix into a Toeplitz matrix, has been found. In this article, we propose such a transformation matrix and its inverse matrix for an arbitrary irreducible pentanomial. Because there is no known value of n for which neither an irreducible trinomial nor an irreducible pentanomial exists, this transformation matrix makes the TMVP approach a universal tool, i.e., it is applicable to all practical GF(2^n)s.
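The subquadratic gain comes from the classical two-way TMVP split, applied recursively (shown here over GF(2), where addition is XOR): an n-dimensional Toeplitz system is computed from three half-size TMVPs instead of four,

\[
\begin{pmatrix} T_1 & T_0 \\ T_2 & T_1 \end{pmatrix}
\begin{pmatrix} V_0 \\ V_1 \end{pmatrix}
=
\begin{pmatrix} P_0 + P_2 \\ P_1 + P_2 \end{pmatrix},
\qquad
P_0 = (T_0 + T_1)V_1,\quad
P_1 = (T_1 + T_2)V_0,\quad
P_2 = T_1(V_0 + V_1),
\]

giving complexity $O(n^{\log_2 3})$. The transformation matrix proposed in the paper is what turns the Mastrovito matrix of a pentanomial-defined field into such a Toeplitz $T$ in the first place.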
Efficient Garbling from a Fixed-Key Blockcipher
We advocate schemes based on fixed-key AES as the best route to highly
efficient circuit-garbling. We provide such schemes making only one AES call per garbled-gate evaluation. On the theoretical side, we justify the security of these methods in the random-permutation model, where parties have access to a public random permutation. On the practical side, we provide the JustGarble system, which implements our schemes.
JustGarble evaluates moderate-sized garbled circuits at an amortized cost of 23.2 cycles per gate (7.25 nsec), far faster than any previously reported results.
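The shape of such a scheme, as a sketch of the general idea rather than necessarily the paper's exact instantiation (the constants and tweak handling below are ours for illustration): with a public fixed-key permutation $\pi$ (one AES call under a never-changing key), a gate's row key is derived Davies-Meyer-style from the input wire labels $A, B$ and a gate-specific tweak $T$,

\[
K = 2A \oplus 4B \oplus T, \qquad H(A, B, T) = \pi(K) \oplus K,
\]

where doubling is multiplication by 2 in GF(2^{128}). Each garbled row then costs one AES call plus a few XORs, and the AES key schedule is never re-run.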
Break WEP Faster with Statistical Analysis
The Wired Equivalent Privacy (WEP) protocol is nowadays considered unsafe. However, the only academic research attempting to break WEP was done by Fluhrer, Mantin and Shamir, who published a report on a specific attack. Meanwhile, an unknown person under the pseudonym Korek has published 17 attacks, which are now used by both AirCrack and WepLab. For a network with average load traffic, the FMS attack would need roughly 40 days to find the key (4 million packets needed), whereas Korek's attacks, combined with stimulation of the network load, reduce this time to under 15 minutes (325,000 packets needed) for a 128-bit key (104-bit secret key). We analyze these attacks, give a mathematical description of them, and explain a new attack, in order to identify further ones.
Instantiating Random Oracles via UCEs
This paper provides a (standard-model) notion of security for (keyed)
hash functions, called UCE, that we show enables instantiation of
random oracles (ROs) in a fairly broad and systematic way. Goals and
schemes we consider include deterministic PKE, message-locked
encryption, hardcore functions, point-function obfuscation, OAEP,
encryption secure for key-dependent messages, encryption secure under
related-key attack, proofs of storage and adaptively-secure garbled
circuits with short tokens. We can take existing, natural and
efficient ROM schemes and show that the instantiated scheme resulting
from replacing the RO with a UCE function is secure in the standard
model. In several cases this results in the first standard-model
schemes for these goals. The definition of UCE-security itself asks
that outputs of the function look random given some ``leakage,'' even
if the adversary knows the key, as long as the leakage is
appropriately restricted.
Locally Computable UOWHF with Linear Shrinkage
This is an erratum for our paper, ``Locally Computable UOWHF with Linear Shrinkage''. There is a gap in the proof of Theorem 4.1, which asserts that the collection $F_{P,n,m}$ is $\delta$-secure $\beta$-random target-collision resistant assuming the one-wayness and the pseudorandomness of the collection for related parameters. We currently do not know whether Theorem 4.1 (as stated in Section 4) holds. We are grateful to Colin Sandon for pointing out the gap.
We note that Theorem 5.1, which transforms any $\delta$-secure $\beta$-random target collision resistant collection to a target collision resistant collection while preserving constant locality and linear shrinkage, remains intact. Thus, one can construct a locally computable UOWHF with linear shrinkage based on the hypothesis that random local functions are $\delta$-secure $\beta$-random target-collision resistant. We also mention that locally-computable functions with linear-shrinkage that achieve a stronger form of *collision-resistance* were constructed by Applebaum, Haramaty, Ishai, Kushilevitz and Vaikuntanathan (ITCS 2017) based on incomparable assumptions.
Private Database Queries Using Somewhat Homomorphic Encryption
In a private database query system, a client issues queries to a database and obtains the results without learning anything else about the database and without the server learning the query. While previous work has yielded systems that can efficiently support disjunction queries, performing conjunction queries privately remains an open problem. In this work, we show that using a polynomial encoding of the database enables efficient implementations of conjunction queries using somewhat homomorphic encryption. We describe a three-party protocol that supports efficient evaluation of conjunction queries. Then, we present two implementations of our protocol using Paillier's
additively homomorphic system as well as Brakerski's somewhat homomorphic cryptosystem. Finally, we show that the additional homomorphic properties of the Brakerski cryptosystem allow us to handle queries involving several thousand elements over a million-record database in just a few minutes, far outperforming the implementation using the additively homomorphic system.
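To make the homomorphic step concrete, here is a toy, stdlib-only Paillier sketch (tiny insecure primes; the variable names are ours), demonstrating the additive property that such an implementation relies on:

```python
from math import gcd
import secrets

p, q = 10007, 10039             # toy primes; real deployments use ~1024 bits each
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)            # valid precisely because we take g = n + 1

def enc(m):
    r = secrets.randbelow(n - 1) + 1
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2   # (1+n)^m * r^n mod n^2

def dec(c):
    L = (pow(c, lam, n2) - 1) // n      # L(x) = (x - 1) / n
    return (L * mu) % n

a, b = 1234, 5678
# Multiplying ciphertexts adds plaintexts: the additive homomorphism.
assert dec((enc(a) * enc(b)) % n2) == (a + b) % n
```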
Light-weight primitive, feather-weight security? A cryptanalytic knock-out. (Preliminary results)
In [12], the authors present a new light-weight cryptographic primitive which supports an associated RFID-based authentication protocol. The primitive has some structural similarities to AES, but is presented as a keyed one-way function using a 128-bit key. Although a security analysis is included, this is at a high level only. To provide a more concrete idea as to the security of this primitive, we therefore make three contributions: first, a structural attack requiring $O(2^{5})$ plaintext/ciphertext pairs (and hence $O(2^{5})$ online effort) plus $O(2^{21})$ offline effort; second, an algebraic attack on round-reduced versions of the primitive which requires only a single plaintext/ciphertext pair; and, third, a debunking of the claimed attack of [36] on the same primitive as wishful thinking. Our structural attack completely breaks the primitive and the algebraic attack highlights a crucial weakness of the primitive: we conclude that although one can consider countermeasures against these specific attacks, the design in general is questionable and should therefore be avoided.
The Holey Grail: A special score function for non-binary traitor tracing
We study collusion-resistant traitor tracing in the simple decoder approach, i.e. assignment of scores for each user separately.
We introduce a new score function for non-binary bias-based traitor tracing. It has three special properties that have long been sought after:
(i) The expected score of an innocent user is zero in each content position.
(ii) The variance of an innocent user's score is 1 in each content position.
(iii) The expectation of the coalition's score does not depend on the
collusion strategy.
We also find a continuous bias distribution that optimizes the asymptotic (large coalition) performance.
In the case of a binary alphabet our scheme reduces exactly to the
symmetrized Tardos traitor tracing system.
Unfortunately, the asymptotic fingerprinting rate
of our new scheme decreases with growing alphabet size.
We regret to inform you that this grail has holes.
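For comparison, the binary symmetrized Tardos score already enjoys properties (i) and (ii): with bias $p$ in a content position and an innocent user's symbol $X \sim \mathrm{Bernoulli}(p)$, the per-position score (written here for received pirate symbol $y=1$; the case $y=0$ is symmetric under $p \mapsto 1-p$) is

\[
S = \begin{cases} +\sqrt{(1-p)/p}, & X = 1,\\ -\sqrt{p/(1-p)}, & X = 0,\end{cases}
\qquad
\mathbb{E}[S] = \sqrt{p(1-p)} - \sqrt{p(1-p)} = 0,
\qquad
\mathbb{E}[S^2] = (1-p) + p = 1.
\]

Property (iii), strategy-independence of the coalition's expected score, is the new ingredient for non-binary alphabets.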
How to Share a Lattice Trapdoor: Threshold Protocols for Signatures and (H)IBE
We develop secure \emph{threshold} protocols for two important
operations in lattice cryptography, namely, generating a hard lattice
$\Lambda$ together with a ``strong'' trapdoor, and sampling from a
discrete Gaussian distribution over a desired coset of $\Lambda$ using
the trapdoor. These are the central operations of many cryptographic
schemes: for example, they are exactly the key-generation and signing
operations (respectively) for the GPV signature scheme, and they are
the public parameter generation and private key extraction operations
(respectively) for the GPV IBE. We also provide a protocol for
trapdoor delegation, which is used in lattice-based hierarchical IBE
schemes. Our work therefore directly transfers all these systems to
the threshold setting.
Our protocols provide information-theoretic (i.e., statistical)
security against adaptive corruptions in the UC framework, and they
are private and robust against an
optimal number of semi-honest or malicious parties. Our Gaussian
sampling protocol is both noninteractive and efficient, assuming
either a trusted setup phase (e.g., performed as part of key
generation) or a sufficient amount of interactive but offline
precomputation, which can be performed before the inputs to the
sampling phase are known.
On Tight Security Proofs for Schnorr Signatures
The Schnorr signature scheme is the most efficient signature scheme based on the discrete logarithm problem and a long line of research investigates the existence of a tight security reduction for this scheme in the random oracle model. Almost all recent works present lower tightness bounds and most recently Seurin (Eurocrypt 2012) showed that under certain assumptions the non-tight security proof for Schnorr signatures in the random oracle by Pointcheval and Stern (Eurocrypt 1996) is essentially optimal. All previous works in this direction rule out tight reductions from the (one-more) discrete logarithm problem.
In this paper we introduce a new meta-reduction technique, which shows lower bounds for the large and very natural class of generic reductions. A generic reduction is independent of a particular representation of group elements. Most reductions in state-of-the-art security proofs have this property. It is desirable, because then the reduction applies generically to any concrete instantiation of the group. Our approach shows unconditionally that there is no tight generic reduction from any natural non-interactive computational problem $\Pi$ defined over algebraic groups to breaking Schnorr signatures, unless solving $\Pi$ is easy.
In an additional application of the new meta-reduction technique, we also unconditionally rule out any (even non-tight) generic reduction from natural non-interactive computational problems defined over algebraic groups to breaking Schnorr signatures in the non-programmable random oracle model.
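For reference, the scheme under discussion, as a toy sketch over a small Schnorr group (parameters chosen here purely for illustration; real deployments use groups with roughly 256-bit order):

```python
import hashlib
import secrets

# Toy Schnorr group: p = 2q + 1 with q prime; g = 4 generates the order-q subgroup.
p, q, g = 2039, 1019, 4

def H(R, msg):
    return int.from_bytes(hashlib.sha256(R.to_bytes(2, "big") + msg).digest(), "big") % q

x = secrets.randbelow(q - 1) + 1   # secret key
y = pow(g, x, p)                   # public key

def sign(msg):
    k = secrets.randbelow(q - 1) + 1
    R = pow(g, k, p)               # commitment
    c = H(R, msg)                  # challenge, derived via the hash (random oracle)
    return c, (k + c * x) % q      # signature (c, s)

def verify(msg, c, s):
    R = (pow(g, s, p) * pow(y, -c, p)) % p   # g^s * y^(-c) = g^k
    return H(R, msg) == c

c, s = sign(b"hello")
assert verify(b"hello", c, s)
```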
The Improved Cube Attack on Grain-v1
A crucial problem in cube attacks is the selection of the cube set, which is also the most time-consuming step. This paper designs a new search algorithm that generates several linear equations from one cube set, and applies the cube attack to a simplified version of the Grain-v1 algorithm. Our attack directly recovers 14 bits of the secret key when the number of initialization rounds in Grain-v1 is 75, and finds 5 linear expressions involving another 28 bits of the key.
Computational Fuzzy Extractors
Fuzzy extractors derive strong keys from noisy sources. Their security is usually defined information-theoretically, with gaps between known negative results, existential constructions, and polynomial-time constructions. We ask whether using computational security can close these gaps. We show the following:
-Negative Result: Noise tolerance in fuzzy extractors is usually achieved using an information reconciliation component called a secure sketch. We show that secure sketches are subject to upper bounds from coding theory even when the information-theoretic security requirement is relaxed. Specifically, we define computational secure sketches using conditional HILL pseudoentropy (Hastad et al., SIAM J. Computing 1999). We show that a computational secure sketch implies an error-correcting code. Thus, HILL pseudoentropy is bounded by the size of the best error-correcting code. Similar bounds apply to information-theoretic secure sketches.
-Positive Result: We show that our negative result can be avoided by constructing and analyzing a computational fuzzy extractor directly. We modify the code-offset construction (Juels and Wattenberg, CCS 1999) to use random linear codes. Security is based on the Learning with Errors (LWE) problem and holds when the noisy source is uniform or symbol-fixing (that is, each dimension is either uniform or fixed). As part of the proof, we reduce symbol-fixing security to uniform error security.
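Schematically, the modified code-offset construction looks like this (a high-level sketch; the exact parameters and decoding procedure follow the paper):

\[
\mathsf{Gen}(w):\; A \leftarrow \mathbb{Z}_q^{m\times n},\; s \leftarrow \mathbb{Z}_q^{n};\quad \text{publish helper } P = (A,\, As + w),\ \text{derive the key from } s;
\]
\[
\mathsf{Rep}(w', P):\; (As + w) - w' \;=\; As + e \quad\text{with } e = w - w' \text{ of low weight};\ \text{decode the random linear code } \{Ax\} \text{ to recover } s.
\]

Security rests on $(A, As + w)$ being pseudorandom under LWE, with the noisy reading $w$ itself playing the role of the error vector.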
SL2 homomorphic hash functions: Worst case to average case reduction and short collision search
We study homomorphic hash functions into SL(2,q), the 2x2 matrices with determinant 1 over the
field with $q$ elements.
Modulo a well-supported number-theoretic hypothesis, which holds in particular for the concrete
homomorphisms proposed thus far, we provide a worst-case to average-case reduction for these hash functions:
up to a logarithmic factor, a random homomorphism is as secure as _any_ concrete homomorphism.
For a family of homomorphisms containing several concrete proposals in the literature,
we prove that collisions of length O(log(q)) can be found in running time O(sqrt(q)).
For general homomorphisms we offer an algorithm that, heuristically and according to experiments,
finds, in running time O(sqrt(q)), collisions of length O(log(q)) for q even, and of length O(log^2(q)/loglog(q)) for arbitrary q.
While exponential-time, our algorithms are faster in practice than all earlier generic algorithms,
and produce much shorter collisions.
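To make the object concrete, here is a minimal Zémor-style homomorphic hash into SL(2,p) for a prime p (a toy illustration of the kind of homomorphism studied, not one of the concrete proposals analyzed in the paper):

```python
p = 2**31 - 1  # toy prime modulus

A = ((1, 1), (0, 1))  # generator assigned to bit 0
B = ((1, 0), (1, 1))  # generator assigned to bit 1

def mat_mul(X, Y):
    # 2x2 matrix product mod p; determinant 1 is preserved, so we stay in SL(2,p).
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))

def sl2_hash(bits):
    H = ((1, 0), (0, 1))                      # identity
    for b in bits:
        H = mat_mul(H, B if b else A)
    return H

# Homomorphic: the hash of a concatenation is the product of the hashes.
x, y = [1, 0, 1, 1], [0, 0, 1]
assert sl2_hash(x + y) == mat_mul(sl2_hash(x), sl2_hash(y))
# A collision is thus a non-trivial product of generators equal to the identity,
# which is why short collisions correspond to short relations in the group.
```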
A novel certificateless deniable authentication protocol
A deniable authentication protocol is a new and attractive kind of protocol compared to traditional authentication protocols. It allows the appointed receiver to identify the source of a given message, but prevents the receiver from proving the identity of the sender to a third party, even if the appointed receiver is willing to reveal its private key. In this paper, we first define a security model for certificateless deniable authentication protocols. Then we propose a non-interactive certificateless deniable authentication protocol, obtained by combining deniable authentication with certificateless cryptography. In addition, we prove its security in the random oracle model.
Policy-Based Signatures
We introduce policy-based signatures (PBS), where a signer can only sign
messages conforming to some authority-specified policy. The main
requirements are unforgeability and privacy, the latter meaning that
signatures do not reveal the policy. PBS offers value along two
fronts: (1)~On the practical side, they allow a corporation to
control what messages its employees can sign under the corporate key.
(2)~On the theoretical side, they unify existing work, capturing
other forms of signatures as special cases or allowing them to be
easily built. Our work focuses on definitions of PBS, proofs that
this challenging primitive is realizable for arbitrary policies,
efficient constructions for specific policies, and a few
representative applications.
Modular Form Approach to Solving Lattice Problems
We construct new randomized algorithms to find the exact solutions to the shortest and closest vector problems (SVP and CVP) in the Euclidean norm ($\ell_2$) for integral lattices. The algorithms simultaneously compute not only the minimal norm of non-zero lattice vectors in SVP and the minimal distance in CVP, but also the number of lattice vectors attaining those minimums. Our approach is based on special properties of the generating function of the ($\ell_2$-)norms of lattice vectors, the lattice-associated theta function, which prior works used mainly for hardness analysis of lattice problems and rarely for computational purposes. The modular properties of this function are exploited to develop our SVP and CVP solvers.
From a computational complexity perspective, taking our SVP solver as an example: for an integral lattice family $\{\Lambda_n\}$ of dimension $\dim \Lambda_n = n$ and level $h = l(\Lambda_n)$ (the minimal positive integer such that the dual lattice $\Lambda_n^*$ scaled by $h^{1/2}$ is integral) polynomial in $n$, the case frequently occurring in applications, the algorithm finds the minimal $\ell_2$-norm of non-zero lattice vectors and the number of such shortest vectors in $\Lambda_n$ with success probability $1-\varepsilon$, in space complexity polynomial in $n$ and time complexity $n^{O(n)}\log(1/\varepsilon)$. The only contribution to the exponential time complexity $n^{O(n)}\log(1/\varepsilon)$ comes from independently repeating a randomized lattice vector sampler $n^{O(n)}\log(1/\varepsilon)$ times; all remaining operations contribute only an additive polynomial in $n$ to the running time. The situation is similar when solving the exact CVP with our algorithm. In other words, our solvers can easily be parallelized. In addition, a variant of our CVP solver solves the closest vector problem with preprocessing (CVPP) in polynomial time and $n^{O(n)}\log(1/\varepsilon)$ space complexity.
Security Analysis of Lightweight Authentication Protocol from WISTP 2013
One of the key problems in Radio Frequency Identification (RFID) is security and privacy. Many RFID authentication protocols have been proposed to preserve the security and privacy of the system. Nevertheless, most of these protocols have since been analyzed and shown unable to provide security against some RFID attacks. At WISTP 2013, a new lightweight authentication protocol using the AES S-box and a special function was presented. The new protocol has a good implementation on resource-constrained tags. In this paper, we give a security analysis of this new authentication protocol. After impersonating a valid reader to query the tag and collecting the responses, we can deduce all the secrets shared between the reader and the tag by analyzing the messages. The attack exploits the structure of the invertible function and the property of the special function introduced in the new protocol.
Plug-and-Play IP Security: Anonymity Infrastructure Instead of PKI
We present the Plug-and-Play IP Security (PnP-IPsec) protocol. PnP-IPsec automatically establishes IPsec security associations between gateways, avoiding the need for manual administration and coordination between gateways, and the dependency on IPsec public key certificates - the two problems which are widely believed to have limited the use of IPsec mostly to intra-organization communication.
PnP-IPsec builds on Self-validated Public Data Distribution (SvPDD), a protocol that we present to establish secure connections between remote peers/networks, without depending on pre-distributed keys or a certification infrastructure. Instead, SvPDD uses available anonymous communication infrastructures such as Tor, which we show allows detection of a MitM attacker interfering with communication. SvPDD may also be used in other scenarios lacking secure public key distribution, such as the initial connection to an SSH server.
We provide an open-source implementation of PnP-IPsec and SvPDD, and show that the resulting system is practical and secure.
Order-Preserving Encryption Secure Beyond One-Wayness
Semantic security of individual plaintext bits under a ciphertext is a fundamental notion in modern cryptography. In this work we present the first results on this fundamental problem for Order-Preserving Encryption (OPE): ``what plaintext information can be semantically hidden by OPE encryptions?'' While OPE has gained much attention in recent years due to its usefulness in secure databases, any partial-plaintext indistinguishability (semantic security) result for it remained open. Here, we propose a new indistinguishability-based security notion for OPE, which can ensure \emph{secrecy of lower bits of a plaintext} (under essentially a random ciphertext probing setting). We then propose a new scheme satisfying this security notion (while earlier schemes do not satisfy it!). We note that the known security notions tell us nothing about the above partial-plaintext indistinguishability because they are limited to being one-way-based. In addition, we show that our security notion with specific parameters implies the known security notion called WOW, and further, our scheme achieves WOW with better parameters than earlier schemes.
Delegatable Functional Signatures
We introduce delegatable functional signatures (DFS) which support the delegation of signing capabilities to another party, called the evaluator, with respect to a functionality F. In a DFS, the signer of a message can choose an evaluator, specify how the evaluator can modify the signature without voiding its validity, allow additional input and decide how the evaluator can further delegate its capabilities.
The main contribution of this paper is twofold. First, we propose DFS, a novel cryptographic primitive that unifies several seemingly different signature primitives, including functional signatures as defined by Boyle, Goldwasser, and Ivan (eprint 2013/401), sanitizable signatures, identity based signatures, and blind signatures. To achieve this unification, we present several definitions of unforgeability and privacy. Finding appropriate and meaningful definitions in this context is challenging due to the natural malleability of DFS and due to the multi-party setting that may involve malicious keys.
Second, we present a complete characterization of the instantiability of DFS under common assumptions, like the existence of one-way functions. Here, we present both positive and negative results. On the positive side we show that DFS not achieving our notion of privacy can be constructed from one-way functions. Furthermore, we show that unforgeable and private DFS can be constructed from doubly enhanced trapdoor permutations. On the negative side we show that the previous result is optimal regarding its underlying assumptions, presenting an impossibility result for unforgeable private DFS from one-way permutations.
Automated Security Proofs for Almost-Universal Hash for MAC verification
Message authentication codes (MACs) are an essential primitive in cryptography. They are used to ensure the integrity and authenticity of a message, and can also be used as a building block for larger schemes, such as chosen-ciphertext secure encryption, or identity-based encryption. MACs are often built in two steps: first, the `front end' of the MAC produces a short digest of the long message, then the `back end' provides a mixing step to make the output of the MAC unpredictable for an attacker. Our verification method follows this structure. We develop a Hoare logic for proving that the front end of the MAC is an almost-universal hash function. The programming language used to specify these functions is fairly expressive and can be used to describe many block-cipher and compression-function-based MACs. We implemented this method in a prototype that can automatically prove the security of almost-universal hash functions. This prototype can prove the security of the front end of many CBC-based MACs (DMAC, ECBC, FCBC and XCBC to name only a few), PMAC and HMAC. We then provide a list of options for the back end of the MAC, each consisting of only two or three instructions, any of which can be composed with an almost-universal hash function to obtain a secure MAC.
Last updated: 2013-08-02
Attribute-Based Server-Aided Verification Signature
Attribute-based signature (ABS) is a novel cryptographic primitive which enables a party to sign messages for any predicate satisfied by their attributes. However, in most existing ABS schemes a heavy computational cost is incurred during the verification procedure, which may involve many pairing operations. Pairings are costly operations compared to exponentiations in the base group. As a result, this presents a great challenge for resource-limited users, such as smart cards and wireless sensors; in other words, verification can hardly be done on these devices if attribute-based signatures are employed. We solve this problem by proposing a new notion called \emph{Attribute-Based Server-Aided Verification Signature}. It is similar to a normal ABS scheme, but it further enables the verifier to verify the signature with the assistance of an external server. In this paper, we provide the security definition of Attribute-Based Server-Aided Verification Signature, and design a concrete server-aided verification protocol for Li et al.'s attribute-based signature. We also prove that our protocol is secure in the random oracle model.
New Quadratic Bent Functions in Polynomial Forms with Coefficients in Extension Fields
In this paper, we first discuss the bentness of a large class of quadratic Boolean functions in polynomial form
$f(x)=\sum_{i=1}^{\frac{n}{2}-1}Tr^n_1(c_ix^{1+2^i})+ Tr_1^{n/2}(c_{n/2}x^{1+2^{n/2}})$, where
$c_i\in GF(2^n)$ for $1\leq i \leq \frac{n}{2}-1$ and $c_{n/2}\in GF(2^{n/2})$.
The bentness of these functions can be connected with linearized permutation
polynomials. Hence, methods for constructing quadratic bent functions are given. Further, we consider a subclass of quadratic Boolean functions of the form
$f(x)=\sum_{i=1}^{\frac{m}{2}-1}Tr^n_1(c_ix^{1+2^{ei}})+
Tr_1^{n/2}(c_{m/2}x^{1+2^{n/2}})$, where $c_i\in GF(2^e)$, $n=em$ and $m$ is even. The bentness of these functions is characterized and some methods for constructing new quadratic bent functions are given. Finally, for a special case: $m=2^{v_0}p^r$ and
$gcd(e,p-1)=1$, we present the enumeration of quadratic bent functions.
The SIMON and SPECK Families of Lightweight Block Ciphers
In this paper we propose two families of block ciphers, SIMON and SPECK, each of which comes in a variety of widths and key sizes. While many lightweight block ciphers exist, most were designed to perform well on a single platform and were not meant to provide high performance across a range of devices. The aim of SIMON and SPECK is to fill the need for secure, flexible, and analyzable lightweight block ciphers. Each offers excellent performance on hardware and software platforms, is flexible enough to admit a variety of implementations on a given platform, and is amenable to analysis using existing techniques. Both perform exceptionally well across the full spectrum of lightweight applications, but SIMON is tuned for optimal performance in hardware, and SPECK for optimal performance in software.
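For flavor, the SPECK round function is pure ARX (add-rotate-xor), which is why it maps so well to software. Below is a sketch of Speck128/128 following the public specification (no test vector is included here, so treat the constants as illustrative):

```python
MASK = (1 << 64) - 1              # Speck128/128: 64-bit words, 32 rounds
ALPHA, BETA, ROUNDS = 8, 3, 32

def ror(x, r): return ((x >> r) | (x << (64 - r))) & MASK
def rol(x, r): return ((x << r) | (x >> (64 - r))) & MASK

def speck_round(x, y, k):
    # One round: a modular add, two rotations, two XORs. No S-boxes, no tables.
    x = ((ror(x, ALPHA) + y) & MASK) ^ k
    y = rol(y, BETA) ^ x
    return x, y

def encrypt(x, y, l0, k0):
    # The key schedule elegantly reuses the round function itself, with the
    # round counter i playing the role of the round key.
    l, k = l0, k0
    for i in range(ROUNDS):
        x, y = speck_round(x, y, k)   # encrypt with round key k_i
        l, k = speck_round(l, k, i)   # derive l_{i+1}, k_{i+1}
    return x, y
```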
Function-Private Subspace-Membership Encryption and Its Applications
Boneh, Raghunathan, and Segev (CRYPTO '13) have recently put forward the notion of function privacy and applied it to identity-based encryption, motivated by the need for providing predicate privacy in public-key searchable encryption. Intuitively, their notion asks that decryption keys reveal essentially no information on their corresponding identities, beyond the absolute minimum necessary. While Boneh et al. showed how to construct function-private identity-based encryption (which implies predicate-private encrypted keyword search), searchable encryption typically requires a richer set of predicates.
In this paper we significantly extend the function privacy framework. First, we introduce the new notion of subspace-membership encryption, a generalization of inner-product encryption, and formalize a meaningful and realistic notion for capturing its function privacy. Then, we present a generic construction of a function-private subspace-membership encryption scheme based on any inner-product encryption scheme. Finally, we show that function-private subspace-membership encryption can be used to construct function-private identity-based encryption. These are the first generic constructions of function-private encryption schemes based on non-function-private ones, resolving one of the main open problems posed by Boneh, Raghunathan, and Segev.
Efficient Two-Pass Anonymous Identity Authentication Using Smart Card
Recently, Khan et al. proposed an enhancement of a remote authentication scheme designed by Wang et al. which emphasizes using dynamic identities. They claim that their improvement avoids insider attacks. However, we found that the scheme lacks the anonymity property. Moreover, R. Madhusudhan et al. indicate that their scheme also suffers from the insider attack. Motivated by these observations, in this paper we propose a novel scheme which not only anonymously authenticates the remote user using only two passes, but also satisfies the ten requirements of a smart-card-based authentication scheme mentioned by Liao et al.
Functional Signatures and Pseudorandom Functions
In this paper, we introduce two new cryptographic primitives: \emph{functional digital signatures} and \emph{functional pseudorandom functions}.
In a functional signature scheme, in addition to a master signing key that can be used to sign any message, there are \emph{signing keys for a function} $f$, which allow one to sign any message in the range of $f$. As a special case, this implies the ability to generate keys for predicates $P$, which allow one to sign any message $m$, for which $P(m) = 1$.
We show applications of functional signatures to constructing succinct non-interactive arguments and delegation schemes. We give several general constructions for this primitive based on different computational hardness assumptions, and describe the trade-offs between them in terms of the assumptions they require and the size of the signatures.
In a functional pseudorandom function, in addition to a master secret key that can be used to evaluate the pseudorandom function $F$ on any point in the domain, there are additional \emph{secret keys for a function} $f$, which allow one to evaluate $F$ on any $y$ for which there exists an $x$ such that $f(x)=y$. As a special case, this implies \emph{pseudorandom functions with selective access}, where one can delegate the ability to evaluate the pseudorandom function on inputs $y$ for which a predicate $P(y)=1$ holds.
We define and provide a sample construction of a functional pseudorandom function family for prefix-fixing functions.
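A minimal sketch of the prefix-fixing idea via the classic GGM tree (with SHA-256 as the length-doubling PRG; this illustrates the general paradigm rather than reproducing the paper's construction): the constrained key for a prefix is simply the tree node reached by that prefix.

```python
import hashlib

def G(seed, bit):
    # Length-doubling PRG, split into left/right children by the chosen bit.
    return hashlib.sha256(seed + bytes([bit])).digest()

def prf(key, x):
    # GGM: walk the binary tree along the bits of x.
    node = key
    for b in x:
        node = G(node, int(b))
    return node

def constrain(key, prefix):
    # The node at `prefix` suffices to evaluate the whole subtree below it.
    return prf(key, prefix)

def eval_constrained(ck, prefix, x):
    assert x.startswith(prefix)
    return prf(ck, x[len(prefix):])

k = b"\x00" * 32
ck = constrain(k, "101")
assert eval_constrained(ck, "101", "101001") == prf(k, "101001")
```

Pseudorandomness of the values outside the constrained subtree follows from the PRG security of G, which is the essence of selective access.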
A quasi-polynomial algorithm for discrete logarithm in finite fields of small characteristic
In the present work, we present a new discrete logarithm algorithm, in the same vein as recent works by Joux, using an asymptotically more efficient descent approach. The main result gives a quasi-polynomial heuristic complexity for the discrete logarithm problem in finite fields of small characteristic. By quasi-polynomial, we mean a complexity of type $n^{O(\log n)}$ where $n$ is the bit-size of the cardinality of the finite field. Such a complexity is smaller than any $L(\varepsilon)$ for $\varepsilon>0$. It remains super-polynomial in the size of the input, but offers a major asymptotic improvement compared to $L(1/4+o(1))$.
Attack on Liao and Hsiao's Secure ECC-based RFID Authentication Scheme integrated with ID-Verifier Transfer Protocol
We show that Liao and Hsiao's protocol achieves neither tag authentication nor privacy.
ASICS: Authenticated Key Exchange Security Incorporating Certification Systems
Most security models for authenticated key exchange (AKE) do not explicitly model the associated certification system, which includes the certification authority (CA) and its behaviour. However, there are several well-known and realistic attacks on AKE protocols which exploit various forms of malicious key registration and which therefore lie outside the scope of these models. We provide the first systematic analysis of AKE security incorporating certification systems (ASICS). We define a family of security models that, in addition to allowing different sets of standard AKE adversary queries, also permit the adversary to register arbitrary bitstrings as keys. For this model family we prove generic results that enable the design and verification of protocols that achieve security even if some keys have been produced maliciously. Our approach is applicable to a wide range of models and protocols; as a concrete illustration of its power, we apply it to the CMQV protocol in the natural strengthening of the eCK model to the ASICS setting.
Practical Secure Logging: Seekable Sequential Key Generators
In computer forensics, log files are indispensable resources that support auditors in identifying and understanding system threats and security breaches. If such logs are recorded locally, i.e., stored on the monitored machine itself, the problem of log authentication arises: if a system intrusion takes place, the intruder might be able to manipulate the log entries and cover her traces. Mechanisms that cryptographically protect collected log messages from manipulation should ideally have two properties: they should be *forward-secure* (the adversary gets no advantage from learning current keys when aiming at forging past log entries), and they should be *seekable* (the auditor can verify the integrity of log entries in any order or access pattern, at virtually no computational cost).
We propose a new cryptographic primitive, a *seekable sequential key generator* (SSKG), that combines these two properties and has direct application in secure logging. We rigorously formalize the required security properties and give a provably-secure construction based on the integer factorization problem. We further optimize the scheme in various ways, preparing it for real-world deployment. As a byproduct, we develop the notion of a *shortcut one-way permutation* (SCP), which might be of independent interest.
Our work is highly relevant in practice. Indeed, our SSKG implementation has become part of the logging service of the systemd system manager, a core component of many modern commercial Linux-based operating systems.
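A toy sketch of one natural factoring-based instantiation of this idea (squaring modulo a Blum integer; tiny parameters, and the seek shortcut shown via the factorization and the CRT rather than the paper's exact SCP formalism):

```python
import hashlib

p, q = 10007, 10039          # toy Blum primes (both are 3 mod 4); real: ~1024 bits
N = p * q
state0 = 4                   # a quadratic residue mod N

def evolve(state):
    # Sequential, forward-secure step: square and discard the old state.
    return pow(state, 2, N)

def epoch_key(state):
    return hashlib.sha256(str(state).encode()).digest()

def seek(i):
    # Auditor's shortcut: knowing p and q, reduce the exponent 2^i modulo
    # p-1 and q-1 (Fermat), then recombine the two halves with the CRT.
    sp = pow(state0, pow(2, i, p - 1), p)
    sq = pow(state0, pow(2, i, q - 1), q)
    return (sp + p * ((sq - sp) * pow(p, -1, q) % q)) % N

s = state0
for i in range(10):
    assert s == seek(i)      # seek(i) jumps straight to epoch i in O(log i) work
    s = evolve(s)
```

Without the factorization, reaching epoch i from epoch 0 takes i squarings, which is what gives the logger forward security once old states are deleted.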
On the Practical Security of a Leakage Resilient Masking Scheme
At TCC 2012, Dziembowski and Faust showed how to construct leakage-resilient circuits using secret sharing based on the inner product [2]. At Asiacrypt 2012, Balasch et al. turned the latter construction into an efficient masking scheme and applied it to protect an implementation of AES against side-channel attacks [1]. The so-called inner-product masking (IP masking for short) was claimed to be secure with respect to two different security models: the $\lambda$-limited security model (Section 4 of [1]) and the $d$th-order security model (see the definitions on p. 8 of [1]). In the former model, the security proof makes sense only for a sharing dimension $n > 130$, which the authors acknowledge is impractical. In the latter model, the scheme is claimed secure up to the order $d = n-1$. In this note, we contradict the $d$th-order security claim by exhibiting a 1st-order flaw in the masking algorithm for any chosen sharing dimension $n$.
A Public Key Cryptoscheme Using Bit-pair Shadows
This paper gives the definition and properties of a bit-pair shadow, and devises the three algorithms of a public key cryptoscheme called JUOAN, which is based on a multivariate permutation problem and an anomalous subset product problem, to which no subexponential-time solutions have been found so far, and which treats a bit-pair as the unit of manipulation. The authors demonstrate that the decryption algorithm is correct, and deduce that the probability that a plaintext solution is non-unique is nearly zero. They analyze the security of the new cryptoscheme against extracting a private key from a public key and against recovering a plaintext from a ciphertext, on the assumption that an integer factorization problem, a discrete logarithm problem, and a low-density subset sum problem can be solved efficiently, and prove that the new cryptoscheme using random padding and random permutation is semantically secure. The analysis shows that the bit-pair method increases the density D of a related knapsack to more than 1, and decreases the modulus length lg M of the new cryptoscheme to 464, 544, or 640.
Strongly Secure One-round Group Authenticated Key Exchange in the Standard Model
One-round group authenticated key exchange (GAKE) protocols typically provide implicit authentication and appealing bandwidth efficiency. As a special case of GAKE, the pairing-based one-round tripartite authenticated key exchange (3AKE) has recently gained much attention in the research community due to its strong security. Several pairing-based one-round 3AKE protocols have recently been proposed to achieve provable security in the g-eCK model. In contrast to earlier GAKE models, the g-eCK model particularly formulates, in a single model, the security properties regarding resilience to the leakage of various combinations of long-term key and ephemeral session state, and the provision of weak perfect forward secrecy. However, the g-eCK security proofs of previous protocols are only given under the random oracle model. In this work, we give a new construction for a pairing-based one-round 3AKE protocol which is provably secure in the g-eCK model without random oracles. Security of the proposed protocol is reduced to the hardness of the Cube Bilinear Decisional Diffie-Hellman (CBDDH) problem for symmetric pairings. We also extend the proposed 3AKE scheme to a GAKE scheme with more than three group members, based on multilinear maps. We prove the g-eCK security of our GAKE scheme in the standard model under the natural multilinear generalization of the CBDDH assumption.
Efficient Simultaneous Privately and Publicly Verifiable Robust Provable Data Possession from Elliptic Curves
When outsourcing large sets of data to the cloud, it is desirable for clients to efficiently check whether all outsourced data is still retrievable at any later point in time, without having to download all of it. Provable data possession (PDP)/proofs of retrievability (PoR), for which various constructions exist, are concepts to solve this issue. Interestingly, by now, no PDP/PoR scheme leading to an efficient construction supporting both private and public verifiability simultaneously is known. In particular, this means that up to now all PDP/PoR schemes allow either public or private verifiability exclusively, since different setup procedures and metadata sets are required. However, supporting both variants simultaneously seems interesting, as publicly verifiable schemes are far less efficient than privately verifiable ones. In this paper, we propose the first simultaneous privately and publicly verifiable (robust) PDP protocol, which allows the data owner to use the more efficient private verification and anyone else to run the public verification algorithm. Our construction, which is based on elliptic curves, achieves this by using the same setup procedure and the same metadata set for private and public verifiability. We provide a rigorous security analysis and prove our construction secure in the random oracle model under the assumption that the elliptic curve discrete logarithm problem is intractable. We give detailed comparisons of our proposed scheme with the most efficient existing approaches for either private or public verifiability, in terms of storage and communication overhead, as well as computational effort for the client and the server. Our analysis shows that for parameter choices relevant to practical applications, our construction significantly outperforms all existing privately and publicly verifiable schemes. This means that even when our construction is used for either private or public verifiability alone, it still outperforms the most efficient constructions known, which is particularly appealing in the public verifiability setting.
Key Recovery Attacks on 3-round Even-Mansour, 8-step LED-128, and Full $\mbox{AES}^{2}$
The Even-Mansour (EM) encryption scheme received a lot of attention in the last couple of years due to its exceptional simplicity and tight security proofs.
The original $1$-round construction was naturally generalized into $r$-round structures with one key, two alternating keys, and completely independent keys.
In this paper we describe the first key recovery attack on the one-key 3-round version of EM which is asymptotically faster than exhaustive search
(in the sense that its running time is $o(2^n)$ rather than $O(2^n)$ for an $n$-bit key).
We then use the new cryptanalytic techniques in order to improve the best known
attacks on several concrete EM-like schemes. In the case of LED-128, the best previously known attack could only be applied to 6 of its 12 steps. In this paper we develop a new attack which increases the number of attacked steps to 8, is slightly faster than the previous attack on 6 steps, and uses about a thousand times less data.
Finally, we describe the first attack on the full $\mbox{AES}^{2}$ (which uses two complete AES-128 encryptions and three independent $128$-bit keys, and looks exceptionally strong) which is about 7 times faster than a standard meet-in-the-middle attack, thus violating its security claim.
Chosen Ciphertext Secure Keyed-Homomorphic Public-Key Encryption
In homomorphic encryption schemes, anyone can perform homomorphic operations, and therefore it is difficult to manage when, where and by whom they are performed. In addition, the property that anyone can ``freely'' perform the operation inevitably means that ciphertexts are malleable, and it is well known that adaptive chosen ciphertext (CCA) security and the homomorphic property can never be achieved simultaneously.
In this paper, we show that CCA security and the homomorphic property can be simultaneously handled in situations where the user(s) who can perform homomorphic operations on encrypted data should be controlled/limited, and propose a new concept of homomorphic public-key encryption, which we call \emph{keyed-homomorphic public-key encryption} (KH-PKE). By introducing a secret key for homomorphic operations, we can control who is allowed to perform the homomorphic operation. To construct KH-PKE schemes, we introduce a new concept, the \emph{transitional universal property}, and present a practical KH-PKE scheme from the DDH assumption. For $\ell$-bit security, our DDH-based KH-PKE scheme yields a ciphertext only $\ell$ bits longer than that of the Cramer--Shoup PKE scheme. Finally, we consider an identity-based analogue of KH-PKE, called \emph{keyed-homomorphic identity-based encryption} (KH-IBE), and give a concrete construction from the Gentry IBE scheme.
A Capacity-Achieving Simple Decoder for Bias-Based Traitor Tracing Schemes
We investigate alternative suspicion functions for bias-based traitor tracing schemes, and present a practical construction of a simple decoder that attains capacity in the limit of large coalition size $c$.
We derive optimal suspicion functions in both the Restricted-Digit Model and the Combined-Digit Model. These functions depend on information that is usually not available to the tracer -- the attack strategy or the tallies of the symbols received by the colluders. We discuss how such results can be used in realistic contexts.
We study several combinations of coalition attack strategy versus suspicion function optimized against some attack (another attack or the same). In many of these combinations the usual codelength scaling $\ell \propto c^2$ changes to a lower power of $c$, e.g. $c^{3/2}$. We find that the interleaving strategy is an especially powerful attack. The suspicion function tailored against interleaving is the key ingredient of the capacity-achieving construction.
Parallel Gauss Sieve Algorithm: Solving the SVP in the Ideal Lattice of 128 Dimensions
In this paper, we report that we have solved the SVP Challenge over
a 128-dimensional lattice in Ideal Lattice Challenge from TU Darmstadt,
which is currently the highest dimension in the challenge that has ever
been solved. The security of lattice-based cryptography is based on the
hardness of solving the shortest vector problem (SVP) in lattices.
In 2010, Micciancio and Voulgaris proposed a Gauss Sieve algorithm
for heuristically solving the SVP using a list $L$ of Gauss-reduced
vectors. Milde and Schneider proposed a parallel
implementation method for the Gauss Sieve algorithm.
However, with more than 10 threads, the efficiency of their implementation
decreased due to the large number of non-Gauss-reduced vectors appearing
in the distributed list of each thread. In this paper, we propose a more
practical parallelized Gauss Sieve algorithm. Our algorithm deploys an
additional Gauss-reduced list $V$ of sample vectors assigned to each
thread, and all vectors in list $L$ remain Gauss-reduced by mutually
reducing them using all sample vectors in $V$. Therefore, our algorithm
allows the Gauss Sieve algorithm to run for large dimensions with a small communication overhead.
Finally, we succeeded in solving the SVP Challenge over a 128-dimensional
ideal lattice generated by the cyclotomic polynomial $x^{128}+1$ using
about 30,000 CPU hours.
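The list invariant the algorithm maintains is pairwise Gauss reduction; the textbook step looks like this (a sketch of the underlying operation only, not of the paper's parallel bookkeeping):

```python
def gauss_reduce(u, v):
    # Reduce u against v: subtract the integer multiple of v closest to the
    # projection coefficient <u,v>/<v,v>. Afterwards 2*|<u,v>| <= <v,v>.
    dot = sum(a * b for a, b in zip(u, v))
    vv = sum(b * b for b in v)
    m = round(dot / vv)
    return [a - m * b for a, b in zip(u, v)]

# A pair (u, v) with ||u|| >= ||v|| is Gauss-reduced exactly when this step
# leaves u unchanged (m == 0); the sieve keeps its whole list pairwise reduced.
```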
Cryptographically Protected Prefixes for Location Privacy in IPv6
There is a growing concern with preventing unauthorized agents from discovering the geographical location of Internet users, a kind of security called location privacy. Typical deployments of IPv6 make it possible to deduce the approximate geographical location of a device from its IPv6 address. We present a scheme called Cryptographically Protected Prefixes (CPP), to address this problem at the level of IPv6 addressing and forwarding. CPP randomizes the address space of a defined topological region (privacy domain), thereby making it infeasible to infer location information from an IP address.
CPP can be deployed incrementally. We present an adversary model and show that CPP is secure within the model, assuming the existence of pseudorandom functions. We have implemented CPP as a pre-processing step within the forwarding algorithm in the FreeBSD 4.8 kernel. Our performance testing indicates that CPP pre-processing results in a 40–50 percent overhead for packet forwarding in privacy domain routers. The additional end to end per packet delay is roughly 20 to 60 microseconds. We also give an attack against the address encryption scheme in [Raghavan et al. 2009]. We show that the CPP forwarding algorithm is resilient in the event of network failures.
Side Channel Attacks against Pairing over Theta Functions
In \cite{LuRo2010}, Lubicz and Robert generalized the Tate pairing to any abelian variety, and more precisely to Theta functions. The security of the new algorithms is an important issue for their use in practical cryptography. Side channel attacks are powerful attacks which use the leakage of information to reveal sensitive data. Pairings over elliptic curves have been shown to be sensitive to side channel attacks. In this article, we study the weaknesses of the Tate pairing over Theta functions when subjected to side channel attacks.
Last updated: 2013-12-31
Cryptanalysis of ultralightweight RFID authentication protocol
Radio frequency identification (RFID) technology is one of the most emerging technologies in the field of pervasive systems, providing automatic identification of objects with non-line-of-sight capability. RFID is much better than its contending identification scheme (the bar code) in terms of efficiency and speed. Although it offers many advantages over other identification schemes, there are also associated security concerns, so to make the system secure in a cost-effective manner we use ultralightweight authentication protocols. In this letter, a desynchronization attack is presented on the recently published ultralightweight authentication protocol RAPP (RFID authentication protocol with permutation). An advanced version of RAPP is then proposed to resist the desynchronization attack.
Sequential Aggregate Signatures Made Shorter
Sequential aggregate signature (SAS) is a special type of public-key signature that allows a signer to add his signature into a previous aggregate signature in sequential order. In this case, since many public keys are used and many signatures are employed and compressed, it is important to reduce the sizes of signatures and public keys. Recently, Lee, Lee, and Yung (PKC 2013) proposed an efficient SAS scheme with short public keys and proved its security without random oracles under static assumptions.
In this paper, we propose an improved SAS scheme that has a shorter signature size compared with that of Lee et al.'s SAS scheme. Our SAS scheme is also secure without random oracles under static assumptions. To achieve the improvement, we devise a new public-key signature scheme that supports multi-users and public re-randomization. Compared with the SAS scheme of Lee et al., our SAS scheme employs new techniques which allow us to reduce the size of signatures by increasing the size of the public keys (obviously, since signature compression is at the heart of aggregate signature this is a further step in understanding the aggregation capability of such schemes).
Lattice Signatures and Bimodal Gaussians
Our main result is a construction of a lattice-based digital signature scheme that represents an improvement, both in theory and in practice, over today's most efficient
lattice schemes. The novel scheme is obtained as a result of a modification of the rejection sampling algorithm that is at the heart of Lyubashevsky's signature scheme (Eurocrypt, 2012) and several other lattice primitives. Our new rejection sampling algorithm which samples from a bimodal Gaussian distribution, combined with a modified
scheme instantiation, ends up reducing the standard deviation of the resulting
signatures by a factor that is asymptotically square root in the security
parameter. The implementations of our signature scheme for security levels of
128, 160, and 192 bits compare very favorably to existing schemes such as
RSA and ECDSA in terms of efficiency. In addition, the new scheme has shorter
signature and public key sizes than all previously proposed lattice signature
schemes.
As part of our implementation, we also designed several novel algorithms which
could be of independent interest. Of particular note, is a new algorithm for
efficiently generating discrete Gaussian samples over Z^n. Current
algorithms either require many high-precision floating point exponentiations
or the storage of very large pre-computed tables, which makes them completely
inappropriate for usage in constrained devices. Our sampling algorithm
reduces the hard-coded table sizes from linear to logarithmic as compared to
the time-optimal implementations, at the cost of being only a small factor
slower.
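The heart of the improvement is the rejection step: in the bimodal setting the candidate signature $z = y + (-1)^b Sc$ (with a uniformly random bit $b$) is accepted with probability

\[
\Pr[\text{accept}] \;=\; \left( M \exp\!\Big(-\frac{\lVert Sc\rVert^2}{2\sigma^2}\Big)\cosh\!\Big(\frac{\langle z, Sc\rangle}{\sigma^2}\Big) \right)^{-1},
\]

so that the output distribution is a discrete Gaussian independent of the secret $S$ (sketched here from the scheme's published description). The $\cosh$ term, replacing the one-sided $\exp$ of the unimodal case, is what permits the smaller standard deviation $\sigma$.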
To Hash or Not to Hash Again? (In)differentiability Results for H^2 and HMAC
We show that the second iterate H^2(M) = H(H(M)) of a random oracle H cannot achieve strong security in the sense of indifferentiability from a random oracle. We do so by proving that indifferentiability for H^2 holds only with poor concrete security, by providing a lower bound (via an attack) and a matching upper bound (via a proof requiring new techniques) on the complexity of any successful simulator. We then investigate HMAC when it is used as a general-purpose hash function with arbitrary keys (and not as a MAC or PRF with uniform, secret keys). We uncover that HMAC's handling of keys gives rise to two types of weak key pairs. The first allows trivial attacks against its indifferentiability; the second gives rise to structural issues similar to those that ruled out strong indifferentiability bounds in the case of H^2. However, such weak key pairs do not arise, as far as we know, in any deployed applications of HMAC. For example, using keys of any fixed length shorter than d − 1, where d is the block length in bits of the underlying hash function, completely avoids weak key pairs. We therefore conclude with a positive result: a proof that HMAC
is indifferentiable from a RO (with standard, good bounds) when applications use keys of a fixed length less than d − 1.