All papers in 2011 (Page 5 of 714 results)

Last updated:  2011-06-17
Tamper-Proof Circuits: How to Trade Leakage for Tamper-Resilience
Sebastian Faust, Krzysztof Pietrzak, Daniele Venturi
Tampering attacks are cryptanalytic attacks on the implementation of cryptographic algorithms (e.g., smart cards), where an adversary introduces faults with the hope that the tampered device will reveal secret information. Inspired by the work of Ishai et al. [Eurocrypt'06], we propose a compiler that transforms any circuit into a new circuit with the same functionality, but which is resilient against a well-defined and powerful tampering adversary. More concretely, our transformed circuits remain secure even if the adversary can adaptively tamper with every wire in the circuit, as long as the tampering fails with some probability $\delta>0$. This additional requirement is motivated by practical tampering attacks, where it is often difficult to guarantee the success of a specific attack. Formally, we show that a $q$-query tampering attack against the transformed circuit can be ``simulated'' with only black-box access to the original circuit and $\log(q)$ bits of additional auxiliary information. Thus, if the implemented cryptographic scheme is secure against $\log(q)$ bits of leakage, then our implementation is tamper-proof in the above sense. Surprisingly, allowing for this small amount of information leakage -- and not insisting on perfect simulability as in the work of Ishai et al. -- allows for much more efficient compilers, which moreover do not require randomness during evaluation. As in earlier work, our compiler requires small, stateless and computation-independent tamper-proof gadgets. Thus, our result can be interpreted as reducing the problem of shielding arbitrarily complex computation to protecting simple components.
Last updated:  2011-06-13
Error-free Multi-valued Broadcast and Byzantine Agreement with Optimal Communication Complexity
Arpita Patra
In this paper we present the first error-free, asynchronous broadcast (called A-cast) and Byzantine Agreement (called ABA) protocols with optimal communication complexity and fault tolerance. Our protocols are multi-valued, meaning that they deal with $\ell$-bit inputs, and achieve communication complexity of $O(n\ell)$ bits for large enough $\ell$ for a set of $n \geq 3t+1$ parties in which at most $t$ can be Byzantine corrupted. In synchronous settings, Fitzi and Hirt (PODC'06) proposed probabilistically correct multi-valued broadcast (BC) and Byzantine Agreement (BA) protocols with optimal complexity of $O(n\ell)$ bits. Recently, Liang and Vaidya (PODC'11) achieved the same deterministically, i.e., without error probability. In asynchronous settings, Patra and Rangan (Latincrypt'10, ICITS'11) reported similar protocols with error probability. Here we achieve optimal complexity of $O(n\ell)$ bits for the asynchronous error-free case. Following all previous work on multi-valued protocols, we too follow a reduction-based approach, meaning that our multi-valued protocols are designed given existing A-cast and ABA protocols for small messages (possibly for a single bit). Compared to existing reductions, however, our reductions are simple and elegant. More importantly, our reductions run in constant expected time, in contrast to the $O(n)$ of Patra and Rangan (ICITS'11). Furthermore, our reductions invoke no more instances of single-bit protocols than the reductions of Patra and Rangan. In synchronous settings, while the reduction of Fitzi and Hirt is constant-round and invokes $O(n(n+\kappa))$ instances of single-bit protocols ($\kappa$ is the error parameter), the reduction of Liang and Vaidya requires round complexity and a number of instances that are in fact functions of the message size, namely $O(\sqrt{\ell} + n^2)$ and $O(n^2\sqrt{\ell} + n^4)$ respectively, where $\ell = \Omega(n^6)$.
By adapting our techniques from the asynchronous setting, we present a new {\it error-free} reduction in the synchronous world that is constant-round and calls for only $O(n^2)$ instances of single-bit protocols, which is at least as good as that of Fitzi and Hirt.
Last updated:  2011-07-02
Differential Cryptanalysis of GOST
Nicolas T. Courtois, Michal Misztal
GOST 28147-89 is a well-known block cipher and the official encryption standard of the Russian Federation. With its 256-bit key it is considered an alternative to AES-256 and triple DES; it has an amazingly low implementation cost and is thus increasingly popular and used. Until 2010 researchers unanimously agreed that "despite considerable cryptanalytic efforts spent in the past 20 years, GOST is still not broken", and in 2010 it was submitted to ISO 18033 to become a worldwide industrial encryption standard. In 2011 it was suddenly discovered that GOST is insecure on more than one account. There is an amazing variety of recent attacks on GOST: reflection attacks, attacks with double reflection, and various attacks which do not use reflections. All these methods follow a certain general framework called "Algebraic Complexity Reduction", a new general "umbrella" paradigm. The final key recovery step is in most cases a software algebraic attack and sometimes a Meet-In-The-Middle attack. In this paper we show that GOST is NOT SECURE even against (advanced forms of) differential cryptanalysis (DC). Previously, Russian researchers postulated that GOST would be secure against DC for as few as 7 rounds out of 32, and Japanese researchers were already able to break about 13 rounds. In this paper we show the first advanced differential attack faster than brute force on full 32-round GOST. This paper is just a sketch and a proof of concept. More results of this kind will be published soon.
Last updated:  2012-01-02
Targeted Malleability: Homomorphic Encryption for Restricted Computations
Dan Boneh, Gil Segev, Brent Waters
We put forward the notion of targeted malleability: given a homomorphic encryption scheme, in various scenarios we would like to restrict the homomorphic computations one can perform on encrypted data. We introduce a precise framework, generalizing the foundational notion of non-malleability introduced by Dolev, Dwork, and Naor (SICOMP '00), ensuring that the malleability of a scheme is targeted only at a specific set of "allowable" functions. In this setting we are mainly interested in the efficiency of such schemes as a function of the number of repeated homomorphic operations. Whereas constructing a scheme whose ciphertext grows linearly with the number of such operations is straightforward, obtaining more realistic (or merely non-trivial) length guarantees is significantly more challenging. We present two constructions that transform any homomorphic encryption scheme into one that offers targeted malleability. Our constructions rely on standard cryptographic tools and on succinct non-interactive arguments, which are currently known to exist in the standard model based on variants of the knowledge-of-exponent assumption. The two constructions offer somewhat different efficiency guarantees, each of which may be preferable depending on the underlying building blocks.
Last updated:  2013-10-26
Universally Composable Synchronous Computation
Jonathan Katz, Ueli Maurer, Bjoern Tackmann, Vassilis Zikas
In synchronous networks, protocols can achieve security guarantees that are not possible in an asynchronous world: i.e., they can simultaneously achieve input completeness (all honest parties' inputs are included in the computation) and guaranteed termination (honest parties do not "hang" indefinitely). In practice truly synchronous networks rarely exist, but synchrony can be emulated if channels have (known) latency and parties have loosely synchronized clocks. The framework of universal composability (UC) is inherently asynchronous, but several approaches for adding synchrony to the framework have been proposed. However, we show that the existing proposals do not provide the expected guarantees. Given this, we propose a "clean slate" approach to defining synchrony in the UC framework by introducing two functionalities meant to model bounded-delay networks and loosely synchronized clocks, respectively. We show that the expected guarantees of synchronous computation can be realized given these functionalities, and that previous models can all be expressed within our new framework.
Last updated:  2011-06-13
On Constructing Homomorphic Encryption Schemes from Coding Theory
Frederik Armknecht, Daniel Augot, Ludovic Perret, Ahmad-Reza Sadeghi
Homomorphic encryption schemes are powerful cryptographic primitives that allow for a variety of applications. Consequently, a variety of proposals have been made in recent decades, but none of them was based on coding theory. The existence of such schemes would be interesting for several reasons. First, it is well known that having multiple schemes based on different hardness assumptions is advantageous: in case one hardness assumption turns out to be wrong, one can switch over to one of the alternatives. Second, for some codes decoding (which would represent decryption in this case) is only a linear mapping (if the error is known), i.e., a comparatively simple operation. This would make such schemes interesting candidates for constructing fully homomorphic schemes based on bootstrapping (see Gentry, STOC'09). We show that such schemes are indeed possible by presenting a natural construction principle. Moreover, these schemes possess several non-standard positive features. First, they are not restricted to linear homomorphism but allow for evaluating multivariate polynomials up to a fixed (but arbitrary) degree $d$ on encrypted field elements. Second, they can be instantiated with various error-correcting codes, even codes with poor correcting capabilities. Third, depending on the deployed code, one can achieve very efficient schemes. As a concrete example, we present an instantiation based on Reed-Muller codes where, for $d=2$ and $d=3$ and security levels between 80 and 128 bits, all operations take less than a second (after some pre-computation). However, our analysis also reveals limitations of this approach. For structural reasons, such schemes cannot be public-key, allow only for a limited number of fresh encryptions, and cannot be combined with the bootstrapping technique. We argue why such schemes are nonetheless useful in certain application scenarios and discuss possible directions for overcoming these issues.
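To see the kind of bounded-degree homomorphic structure described above, here is a generic toy (not the paper's Reed-Muller scheme, and with no security whatsoever): a "ciphertext" is the vector of evaluations of a random polynomial whose constant term is the message, i.e. a Reed-Solomon-style encoding. Componentwise addition and multiplication decode correctly as long as the product degree stays below the number of evaluation points. All parameters are hypothetical.

```python
# Toy illustration of code-based bounded-degree homomorphism (NOT secure,
# NOT the paper's scheme): messages live in GF(Q); an encoding is the list
# of evaluations of a random degree-T polynomial with constant term m.
import random

Q = 101                          # prime field size (hypothetical)
T = 2                            # degree of fresh encodings (hypothetical)
ALPHAS = [1, 2, 3, 4, 5, 6, 7]   # evaluation points; len > 2*T allows one mult

def enc(m):
    coeffs = [m % Q] + [random.randrange(Q) for _ in range(T)]
    return [sum(c * pow(a, i, Q) for i, c in enumerate(coeffs)) % Q
            for a in ALPHAS]

def add(c1, c2):
    return [(x + y) % Q for x, y in zip(c1, c2)]

def mul(c1, c2):
    # Componentwise product encodes the product polynomial (degree doubles).
    return [(x * y) % Q for x, y in zip(c1, c2)]

def dec(c):
    # Lagrange interpolation at 0 recovers the constant term.
    m = 0
    for i, ai in enumerate(ALPHAS):
        li = 1
        for j, aj in enumerate(ALPHAS):
            if i != j:
                li = li * aj * pow(aj - ai, -1, Q) % Q
        m = (m + c[i] * li) % Q
    return m

a, b, c = 17, 42, 99
assert dec(add(mul(enc(a), enc(b)), enc(c))) == (a * b + c) % Q
```

The degree cap (here one multiplication, since $2T < |{\rm ALPHAS}|$) mirrors the fixed-degree restriction in the abstract; an actual scheme additionally needs noise or secret structure to hide the message.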
Last updated:  2012-03-20
Provably Secure and Practical Onion Routing
Michael Backes, Ian Goldberg, Aniket Kate, Esfandiar Mohammadi
The onion routing network Tor is undoubtedly the most widely employed technology for anonymous web access. Although the underlying onion routing (OR) protocol appears satisfactory, a comprehensive analysis of its security guarantees is still lacking. This has also resulted in a significant gap between research work on OR protocols and existing OR anonymity analyses. In this work, we address both issues with onion routing by defining a provably secure OR protocol, which is practical for deployment in the next generation Tor network. We start off by presenting a security definition (an ideal functionality) for the OR methodology in the universal composability (UC) framework. We then determine the exact security properties required for OR cryptographic primitives (onion construction and processing algorithms, and a key exchange protocol) to achieve a provably secure OR protocol. We show that the currently deployed onion algorithms with slightly strengthened integrity properties can be used in a provably secure OR construction. In the process, we identify the concept of predictably malleable symmetric encryptions, which might be of independent interest. On the other hand, we find the currently deployed key exchange protocol to be inefficient and difficult to analyze and instead show that a recent, significantly more efficient, key exchange protocol can be used in a provably secure OR construction. In addition, our definition greatly simplifies the process of analyzing OR anonymity metrics. We define and prove forward secrecy for the OR protocol, and realize our (white-box) OR definition from an OR black-box model assumed in a recent anonymity analysis. This realization not only makes the analysis formally applicable to the OR protocol but also identifies the exact adversary and network assumptions made by the black box model.
Last updated:  2011-09-09
Ways to restrict the differential path
ZiJie Xu, Ke Xu
Several methods have been developed to attack hash functions. These methods require choosing a "differential pattern" [Dau05]. We present a way to restrict the collisions that follow such a differential pattern. In addition, to build hash functions that meet different needs, we propose a construction.
Last updated:  2011-09-19
Group Law Computations on Jacobians of Hyperelliptic Curves
Craig Costello, Kristin Lauter
We derive an explicit method of computing the composition step in Cantor's algorithm for group operations on Jacobians of hyperelliptic curves. Our technique is inspired by the geometric description of the group law and applies to hyperelliptic curves of arbitrary genus. While Cantor's general composition involves arithmetic in the polynomial ring $F_q[x]$, the algorithm we propose solves a linear system over the base field which can be written down directly from the Mumford coordinates of the group elements. We apply this method to give more efficient formulas for group operations in both affine and projective coordinates for cryptographic systems based on Jacobians of genus 2 hyperelliptic curves in general form.
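The geometric description behind this approach is easiest to see in the genus-1 (elliptic curve) case. The following toy, with hypothetical curve parameters, is not the paper's genus-2 method: it only illustrates how a group operation arises from intersecting the curve with the line through two points and reflecting the third intersection.

```python
# Chord-and-tangent group law on y^2 = x^3 + 2x + 3 over F_97 (a toy
# genus-1 analogue of the geometric group law; parameters hypothetical).
P = 97
A, B = 2, 3

def on_curve(pt):
    x, y = pt
    return (y * y - (x * x * x + A * x + B)) % P == 0

def add(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    if (x1, y1) == (x2, y2):
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope
    # The line meets the curve in a third point; reflect it in the x-axis.
    x3 = (lam * lam - x1 - x2) % P
    return x3, (lam * (x1 - x3) - y1) % P

G = (3, 6)                          # a point on the curve
assert on_curve(G)
assert on_curve(add(G, G))          # doubling stays on the curve
assert on_curve(add(add(G, G), G))  # chord addition does too
```

In genus 2 the analogue of "the line through the points" is a low-degree interpolating function, which is why the composition step in the paper reduces to solving a linear system written directly from the Mumford coordinates.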
Last updated:  2011-06-09
A new attack on Jakobsson Hybrid Mix-Net
Seyyed Amir Mortazavi
The Jakobsson hybrid Mix-net, proposed by Jakobsson and Juels, is a very practical and efficient scheme for long input messages. However, this hybrid Mix-net is not publicly verifiable. In this paper a new attack on the Jakobsson hybrid Mix-net is introduced. The attack breaks the robustness of the hybrid Mix-net scheme, given that a corrupted first mix server colludes with one of the senders.
Last updated:  2011-06-08
Auditing the Auditor: Secure Delegation of Auditing Operation over Cloud Storage
Jia XU
In a cloud storage service, users upload their data together with authentication information to the cloud storage server. To ensure the availability and integrity of their data stored in the cloud, users need to verify the cloud storage remotely and periodically, with the help of the pre-stored authentication information and without storing a local copy of the data or retrieving the data during verification. Public verification enables a third party auditor (TPA), on behalf of the data owner, to verify the integrity of cloud storage with the data owner's public key. In this paper, we propose a method that allows the data owner to delegate the auditing task to a potentially untrusted third party auditor in a secure manner: (1) The data owner can verify whether the TPA has indeed performed the specified audit task; (2) The data owner can verify whether the TPA did the audit task at the right time specified by the data owner; (3) The confidentiality of the data is protected against the TPA. Our method also enables a TPA to re-outsource the audit task.
Last updated:  2012-12-11
GNUC: A New Universal Composability Framework
Dennis Hofheinz, Victor Shoup
We put forward a framework for the modular design and analysis of multi-party protocols. Our framework is called ``GNUC'' (with the recursive meaning ``GNUC's Not UC''), already alluding to the similarity to Canetti's Universal Composability (UC) framework. In particular, like UC, we offer a universal composition theorem, as well as a theorem for composing protocols with joint state. We deviate from UC in several important aspects. Specifically, we have a rather different view than UC on the structuring of protocols, on the notion of polynomial-time protocols and attacks, and on corruptions. We motivate our definitional choices by explaining why the definitions in the UC framework are problematic, and how we overcome these problems. Our goal is to offer a framework that is largely compatible with UC, so that previous results formulated in UC carry over to GNUC with minimal changes. We exemplify this by giving explicit formulations for several important protocol tasks, including authenticated and secure communication, as well as commitment and secure function evaluation.
Last updated:  2011-06-08
Univariate Side Channel Attacks and Leakage Modeling
Julien Doget, Emmanuel Prouff, Matthieu Rivain, François-Xavier Standaert
Differential power analysis is a powerful cryptanalytic technique that exploits information leaking from physical implementations of cryptographic algorithms. During the last two decades numerous variations of the original principle have been published. In particular, the univariate case, where a single instantaneous leakage is exploited, has attracted much research effort. In this paper, we argue that several univariate attacks among those most frequently used by the community are not only asymptotically equivalent, but can also be rewritten one as a function of the other, simply by changing the leakage model used by the adversary. In particular, we prove that most univariate attacks proposed in the literature can be expressed as correlation power analyses with different leakage models. This result emphasizes the major role played by the model choice in the attack efficiency. In the second part of this paper we hence also discuss and evaluate side channel attacks that involve no leakage model but rely on some general assumptions about the leakage. Our experiments show that such attacks, which we call robust, are a valuable alternative to univariate differential power analyses. They lose only a bit of efficiency when a perfect model is available to the adversary, and gain a lot when such information is not available.
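The "correlation power analysis with a leakage model" viewpoint can be sketched in a few lines. This is a generic textbook-style simulation, not the paper's experiments: we generate noisy Hamming-weight leakage of a 4-bit S-box output (PRESENT's S-box as a stand-in) and recover the key nibble as the guess maximizing the Pearson correlation between modeled and observed leakage. All parameters (S-box, noise level, trace count) are hypothetical.

```python
# Minimal correlation power analysis (CPA) sketch with a Hamming-weight
# leakage model; illustrative only, parameters hypothetical.
import random

SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]
HW = [bin(x).count("1") for x in range(16)]   # Hamming weights 0..4

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def cpa_recover(traces, plaintexts):
    # For each key guess, model the leakage as HW(S(p ^ k)) and keep the
    # guess whose model correlates best with the measured traces.
    return max(range(16), key=lambda k: abs(pearson(
        [HW[SBOX[p ^ k]] for p in plaintexts], traces)))

random.seed(1)
KEY = 0xB
plaintexts = [random.randrange(16) for _ in range(2000)]
traces = [HW[SBOX[p ^ KEY]] + random.gauss(0, 0.5) for p in plaintexts]
assert cpa_recover(traces, plaintexts) == KEY
```

Swapping the list `[HW[SBOX[p ^ k]] ...]` for a different prediction function is exactly the "change of leakage model" under which, per the abstract, many univariate attacks coincide.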
Last updated:  2012-10-05
On the Amortized Complexity of Zero Knowledge Protocols for Multiplicative Relations
Ronald Cramer, Ivan Damgard, Valerio Pastro
We present a protocol that allows one to prove in zero-knowledge that committed values $x_i, y_i, z_i$, $i=1,\dots,l$ satisfy $x_iy_i=z_i$, where the values are taken from a finite field $K$, or are integers. The amortized communication complexity per instance proven is $O(\kappa + l)$ for an error probability of $2^{-l}$, where $\kappa$ is the size of a commitment. When the committed values are from a field of small constant size, this improves the complexity of previous solutions by a factor of $l$. When the values are integers, we improve on security: whereas previous solutions with similar efficiency require the strong RSA assumption, we only need the assumption required by the commitment scheme itself, namely factoring. We generalize this to a protocol that verifies $l$ instances of an algebraic circuit $D$ over $K$ with $v$ inputs, in the following sense: given committed values $x_{i,j}$ and $z_i$, with $i=1,\dots,l$ and $j=1,\dots,v$, the prover shows that $D(x_{i,1},\dots,x_{i,v})= z_i$ for $i=1,\dots,l$. For circuits with small multiplicative depth, this approach is better than using our first protocol: in fact, the amortized cost may be asymptotically smaller than the number of multiplications in $D$.
Last updated:  2011-10-26
One-round Strongly Secure Key Exchange with Perfect Forward Secrecy and Deniability
Cas Cremers, Michele Feltz
Traditionally, secure one-round key exchange protocols in the PKI setting have either achieved perfect forward secrecy, or forms of deniability, but not both. On the one hand, achieving perfect forward secrecy against active attackers seems to require some form of authentication of the messages, as in signed Diffie-Hellman style protocols, that subsequently sacrifice deniability. On the other hand, using implicit authentication along the lines of MQV and descendants sacrifices perfect forward secrecy in one round and achieves only weak perfect forward secrecy instead. We show that by reintroducing signatures, it is possible to satisfy both a very strong key-exchange security notion, which we call eCK-PFS, as well as a strong form of deniability, in one-round key exchange protocols. Our security notion for key exchange is stronger than, e.g., the extended-CK model, and captures perfect forward secrecy. Our notion of deniability, which we call peer-and-time deniability, is stronger than that offered by, e.g., the SIGMA protocol. We propose a concrete protocol and prove that it satisfies our definition of key-exchange security in the random oracle model as well as peer-and-time deniability. The protocol combines a signed-Diffie-Hellman message exchange with an MQV-style key computation, and offers a remarkable combination of advanced security properties.
Last updated:  2013-07-09
Modes of Operations for Encryption and Authentication Using Stream Ciphers Supporting an Initialisation Vector
Palash Sarkar
We describe a systematic framework for using a stream cipher supporting an initialisation vector (IV) to perform various tasks of authentication and authenticated encryption. These include message authentication code (MAC), authenticated encryption (AE), authenticated encryption with associated data (AEAD) and deterministic authenticated encryption (DAE) with associated data. Several schemes are presented and rigorously analysed. A major component of the constructions is a keyed hash function having provably low collision and differential probabilities. Methods are described to efficiently extend such hash functions to take multiple inputs. In particular, double-input hash functions are required for the construction of AEAD schemes. An important practical aspect of our work is that a designer can combine off-the-shelf stream ciphers with off-the-shelf hash functions to obtain secure primitives for MAC, AE, AEAD and DAE(AD).
Last updated:  2011-06-08
Local limit theorem for large deviations and statistical box-tests
Igor Semaev
Let $n$ particles be independently allocated into $N$ boxes, where the $l$-th box appears with probability $a_l$. Let $\mu_r$ be the number of boxes with exactly $r$ particles and $\mu=[ \mu_{r_1},\ldots, \mu_{r_m}]$. The asymptotic behavior of such random variables as $N$ tends to infinity has been studied by many authors. It was previously known that if the $Na_l$ are all bounded above and $n/N$ is bounded above and below by positive constants, then $\mu$ tends in distribution to a multivariate normal law. A stronger statement, namely a large-deviation local limit theorem for $\mu$ under the same conditions, is proved here. All cumulants of $\mu$ are also proved to be $O(N)$. We then study testing the hypothesis, denoted $h$, that the box distribution is uniform, with a recently introduced box-test. Its statistic is a quadratic form in the variables $\mu-\mathbf{E}\mu(h)$. For a wide region of non-uniform $a_l$, an asymptotic relation for the power of the quadratic and linear box-tests (the statistics of the latter are linear functions of $\mu$) is proved. In particular, the quadratic test is asymptotically at least as powerful as any of the linear box-tests, including the well-known empty-box test if $\mu_0$ is in $\mu$.
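The allocation model is easy to simulate. The short sketch below (parameters hypothetical) throws $n$ particles into $N$ boxes uniformly, tabulates the $\mu_r$, and checks the empty-box count $\mu_0$ against its approximate expectation $N e^{-n/N}$ under the uniform hypothesis $h$, which is the quantity the empty-box test exploits.

```python
# Simulate n particles allocated independently and uniformly into N boxes;
# mu_r = number of boxes holding exactly r particles. Illustration only.
import math
import random

def mu_counts(n, N, rng=random):
    boxes = [0] * N
    for _ in range(n):
        boxes[rng.randrange(N)] += 1
    mu = {}
    for b in boxes:
        mu[b] = mu.get(b, 0) + 1
    return mu          # dict: occupancy r -> mu_r

random.seed(7)
n, N = 2000, 1000
mu = mu_counts(n, N)
expected_empty = N * math.exp(-n / N)   # approx E[mu_0] under uniformity
assert abs(mu.get(0, 0) - expected_empty) < 4 * math.sqrt(N)
```

Replacing the uniform draw with a non-uniform distribution over boxes shifts the $\mu_r$ away from these expectations, which is exactly what the linear and quadratic box-tests are designed to detect.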
Last updated:  2011-07-07
NEW STATISTICAL BOX-TEST AND ITS POWER
Igor Semaev, Mehdi M. Hassanzadeh
In this paper, statistical testing of $N$ multinomial probabilities is studied and a new box-test, called the \emph{Quadratic Box-Test}, is introduced. The statistic of the new test has a $\chi^2_s$ limit distribution as $N$ and the number of trials $n$ tend to infinity, where $s$ is a parameter. The well-known empty-box test is the particular case $s=1$. The proposal is quite different from Pearson's goodness-of-fit test, which requires fixed $N$ while the number of trials grows, and from linear box-tests. We prove that under some conditions on the tested distribution the new test's power tends to $1$. That defines a wide region of non-uniform multinomial probabilities distinguishable from the uniform distribution. For moderate $N$, an efficient algorithm to compute the exact values of the type I error probability is devised.
Last updated:  2011-10-04
Short Signatures From Weaker Assumptions
Dennis Hofheinz, Tibor Jager, Eike Kiltz
We provide constructions of (m,1)-programmable hash functions (PHFs) for m >= 2. Mimicking certain programmability properties of random oracles, PHFs can, e.g., be plugged into the generic constructions by Hofheinz and Kiltz (J. Cryptol. 2011) to yield digital signature schemes from the strong RSA and strong q-Diffie-Hellman assumptions. As another application of PHFs, we propose new and efficient constructions of digital signature schemes from weaker assumptions, i.e., from the (standard, non-strong) RSA and the (standard, non-strong) q-Diffie-Hellman assumptions. The resulting signature schemes offer interesting trade-offs between efficiency/signature length and the size of the public-keys. For example, our q-Diffie-Hellman signatures can be as short as 200 bits; the signing algorithm of our Strong RSA signature scheme can be as efficient as the one in RSA full domain hash; compared to previous constructions, our RSA signatures are shorter (by a factor of roughly 2) and we obtain a considerable efficiency improvement (by an even larger factor). All our constructions are in the standard model, i.e., without random oracles.
Last updated:  2011-06-03
Counting Points on Genus 2 Curves with Real Multiplication
P. Gaudry, D. Kohel, B. Smith
We present an accelerated Schoof-type point-counting algorithm for curves of genus 2 equipped with an efficiently computable real multiplication endomorphism. Our new algorithm reduces the complexity of genus 2 point counting over a finite field $GF(q)$ of large characteristic from $\widetilde{O}(\log^8 q)$ to $\widetilde{O}(\log^5 q)$. Using our algorithm we compute a 256-bit prime-order Jacobian, suitable for cryptographic applications, and also the order of a 1024-bit Jacobian.
Last updated:  2011-06-03
Small Public Keys and Fast Verification for Multivariate Quadratic Public Key Systems
Albrecht Petzoldt, Enrico Thomae, Stanislav Bulygin, Christopher Wolf
Security of public key schemes in a post-quantum world is a challenging task, as both RSA and ECC will be broken then. In this paper, we show how post-quantum signature systems based on Multivariate Quadratic (MQ) polynomials can be improved by about 9/10 in public key size and 3/4 in verification time, respectively. The exact figures are 88% and 73%. This is particularly important for small-scale devices with restricted energy, memory, or computational power. In addition, we show that this reduction does not affect security and that it is also optimal in terms of possible attacks. We do so by combining the previously unrelated concepts of reduced and equivalent keys. Our new scheme is based on the so-called Unbalanced Oil and Vinegar class of MQ-schemes. We have derived our results mathematically and verified the speed-ups through a C++ implementation.
Last updated:  2011-07-26
Weakness of a Secured Authentication Protocol for Wireless Sensor Networks Using Elliptic Curves Cryptography
W. Han
Authenticating remote users in wireless sensor networks (WSN) is an important security issue due to their unattended and hostile deployments. Usually, sensor nodes are equipped with limited computing power, storage, and communication modules; thus authenticating remote users in such a resource-constrained environment is a critical security concern. Recently, Yeh et al. proposed a two-factor user authentication scheme for WSN and claimed that the scheme is secure against different kinds of attacks. However, in this paper, we show that Yeh et al.'s scheme has some critical security pitfalls and is not recommended for real applications. We point out that the scheme has the following weaknesses: 1) no mutual authentication between the user and the sensor node, 2) no perfect forward secrecy, and 3) no key agreement between the user and the sensor node.
Last updated:  2011-06-03
On Nonlinear Polynomial Selection and Geometric Progression (mod N) for Number Field Sieve
Namhun Koo, Gooc Hwa Jo, Soonhak Kwon
The general number field sieve (GNFS) is asymptotically the fastest known factoring algorithm. One of the most important steps of GNFS is to select a good polynomial pair. A standard way of polynomial selection (used in factoring RSA challenge numbers) is to select a nonlinear polynomial for algebraic sieving and a linear polynomial for rational sieving. There is another approach, called the nonlinear method, which selects two polynomials of the same degree greater than one. In this paper, we generalize Montgomery's method, using small geometric progressions (GP) (mod $N$) to construct a pair of nonlinear polynomials. We introduce GPs of length $d+k$ with $1\leq k\leq d-1$ and show that we can construct polynomials of degree $d$ having a common root (mod $N$), where the number of such polynomials and the size of their coefficients can be precisely determined.
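The core identity behind GP-based polynomial selection can be checked numerically (this sketch is illustrative, not the paper's construction, and all numbers are hypothetical): if $a_i = c\,r^i \pmod N$ is a geometric progression and an integer vector $f$ satisfies $\sum_i f_i a_i \equiv 0 \pmod N$, then $\sum_i f_i a_i = c\,f(r) \pmod N$, so the polynomial with coefficients $f_i$ has root $r$ modulo $N$.

```python
# Verify: a coefficient vector orthogonal (mod N) to a geometric progression
# c*r^i yields a polynomial with root r mod N. Illustrative toy; the actual
# method additionally uses lattice reduction to make the f_i small.
N = 1000003                      # hypothetical modulus (here prime, so c is invertible)
r, c, d = 12345, 67, 2
a = [c * pow(r, i, N) % N for i in range(d + 1)]

# Build some f with sum(f_i * a_i) = 0 (mod N): fix f0, f1 and solve for f2.
f0, f1 = 3, -7
f2 = -(f0 * a[0] + f1 * a[1]) * pow(a[2], -1, N) % N
f = [f0, f1, f2]

assert sum(fi * ai for fi, ai in zip(f, a)) % N == 0   # orthogonal to the GP
assert sum(fi * pow(r, i, N) for i, fi in enumerate(f)) % N == 0   # f(r) = 0 mod N
```

Here $f_2$ comes out large; the point of using GPs of special length, as in the abstract, is to guarantee enough short vectors in this orthogonality lattice to yield two degree-$d$ polynomials with small coefficients and a common root mod $N$.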
Last updated:  2011-06-21
Leakage-Resilient Coin Tossing
Elette Boyle, Shafi Goldwasser, Yael Tauman Kalai
The ability to collectively toss a common coin among $n$ parties in the presence of faults is an important primitive in the arsenal of randomized distributed protocols. In the case of dishonest majority, it was shown to be impossible to achieve less than $\frac{1}{r}$ bias in $O(r)$ rounds (Cleve STOC '86). In the case of honest majority, in contrast, unconditionally secure $O(1)$-round protocols for generating common perfectly {\it unbiased} coins follow from general completeness theorems on multi-party secure protocols in the perfectly secure channels model (e.g., BGW, CCD STOC '88). However, in multi-party protocols with faulty minority, parties need to generate and hold local secret values which are assumed to be {\it perfectly hidden} from malicious parties: an assumption which is crucial to proving the resulting common coin is unbiased. This assumption unfortunately does not seem to hold in practice, as attackers can launch side-channel attacks on the local state of honest parties and leak information on their secrets. In this work, we present an $O(1)$-round protocol for collectively generating an unbiased common coin, in the presence of leakage on the local state of the honest parties. We tolerate $t \le (\frac{1}{3} - \epsilon) n$ computationally-unbounded Byzantine faults and, in addition, an $\Omega(1)$-fraction leakage on each (honest) party's secret state. Our results hold in the memory leakage model (of Akavia, Goldwasser, Vaikuntanathan '08) adapted to the distributed setting. Another contribution of our work is a tool we use to achieve collective coin flipping -- {\it leakage-resilient verifiable secret sharing}. Informally, this is a variant of ordinary VSS in which secrecy guarantees are maintained even if information is leaked on individual shares of the secret.
Last updated:  2011-06-03
Some Results Concerning Generalized Bent Functions
Pantelimon Stanica, Sugata Gangopadhyay, Brajesh Kumar Singh
In this paper we investigate the properties of generalized bent functions defined on $\mathbb{Z}_2^n$ with values in $\mathbb{Z}_q$, where $q \ge 2$ is any positive integer. We characterize the class of generalized bent functions symmetric with respect to two variables, and provide an analogue of Maiorana--McFarland type bent functions in the generalized setup. A class of bent functions called generalized spreads type is introduced, and it is demonstrated that the recently introduced Dillon type generalized bent functions and Maiorana--McFarland type generalized bent functions can both be described as generalized spreads type functions. Thus a unification of two different types of generalized bent functions is achieved.
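For the smallest interesting parameters the generalized bent property can be checked exhaustively. The sketch below (illustrative only; $n=2$, $q=4$ is a hypothetical toy choice) tests whether every generalized Walsh-Hadamard coefficient of $f:\mathbb{Z}_2^n \to \mathbb{Z}_q$ has absolute value $2^{n/2}$, and counts the functions passing the test among all $q^{2^n} = 256$ candidates.

```python
# Exhaustive search for generalized bent functions f: Z_2^2 -> Z_4, using
# the generalized Walsh-Hadamard transform W_f(u) = sum_x zeta^f(x) (-1)^(u.x).
# f is generalized bent when |W_f(u)| = 2^(n/2) for all u. Toy parameters.
import cmath
import itertools

n, q = 2, 4
zeta = cmath.exp(2j * cmath.pi / q)          # primitive q-th root of unity
points = list(itertools.product([0, 1], repeat=n))

def is_gbent(f):                             # f: dict mapping point -> Z_q value
    for u in points:
        w = sum(zeta ** f[x] * (-1) ** sum(ui * xi for ui, xi in zip(u, x))
                for x in points)
        if abs(abs(w) - 2 ** (n / 2)) > 1e-9:
            return False
    return True

count = sum(is_gbent(dict(zip(points, vals)))
            for vals in itertools.product(range(q), repeat=len(points)))
assert count > 0   # e.g. f(x1, x2) = 2*x1*x2 is generalized bent for q = 4
```

For $q=4$ the function $f(x_1,x_2)=2x_1x_2$ reduces to the classical bent function $x_1x_2$ (since $\zeta^2=-1$), so the count is guaranteed to be positive.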
Last updated:  2012-11-19
Polly Cracker, Revisited
Martin R. Albrecht, Jean-Charles Faugère, Pooya Farshim, Gottfried Herold, Ludovic Perret
We initiate the formal treatment of cryptographic constructions based on the hardness of computing remainders modulo an ideal in multivariate polynomial rings. Of particular interest to us is a class of schemes known as "Polly Cracker." We start by formalising and studying the relation between the ideal remainder problem and the problem of computing a Gröbner basis. We show both positive and negative results. On the negative side, we define a symmetric Polly Cracker encryption scheme and prove that this scheme only achieves bounded CPA security under the hardness of the ideal membership problem. Furthermore, we show that a large class of algebraic transformations cannot convert this scheme to a fully secure Polly Cracker-style scheme. On the positive side, we formalise noisy variants of the ideal-theoretic problems. These problems can be seen as natural generalisations of the learning with errors (LWE) and the approximate GCD problems over polynomial rings. After formalising and justifying the hardness of the noisy assumptions, we show that noisy encoding of messages results in a fully IND-CPA-secure and somewhat homomorphic encryption scheme. Together with a standard symmetric-to-asymmetric transformation for additively homomorphic schemes, we provide a positive answer to the long-standing open problem of constructing a secure Polly Cracker-style cryptosystem reducible to the hardness of solving a random system of equations. Indeed, our results go beyond this and also provide a new family of somewhat homomorphic encryption schemes based on generalised hard problems. Our results also imply that Regev's LWE-based public-key encryption scheme is (somewhat) multiplicatively homomorphic for appropriate choices of parameters.
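The central hardness assumption above, computing remainders modulo a polynomial ideal, can be illustrated in the simplest univariate case, where a single generator already forms a Gröbner basis. The following toy Python sketch (not the paper's scheme; toy parameters only) reduces a polynomial modulo the ideal (f) over F_q by long division:

```python
# Toy illustration: remainder of a polynomial modulo the ideal (f) over F_q,
# in the univariate case. Polynomials are coefficient lists, lowest degree
# first; q is a prime modulus.

def poly_mod(a, f, q):
    """Reduce polynomial a modulo f over F_q (long division, return remainder)."""
    a = [c % q for c in a]
    df = len(f) - 1
    inv_lead = pow(f[-1], -1, q)          # inverse of f's leading coefficient
    while len(a) - 1 >= df and any(a):
        # strip trailing zeros so the leading coefficient is genuine
        while a and a[-1] == 0:
            a.pop()
        if len(a) - 1 < df:
            break
        shift = len(a) - 1 - df
        factor = (a[-1] * inv_lead) % q
        for i, c in enumerate(f):
            a[shift + i] = (a[shift + i] - factor * c) % q
    while a and a[-1] == 0:
        a.pop()
    return a

# X^3 + 1 modulo the ideal (X^2 + 1) over F_7: X^3 + 1 = X*(X^2+1) - X + 1,
# so the remainder is -X + 1 = 6X + 1.
print(poly_mod([1, 0, 0, 1], [1, 0, 1], 7))  # -> [1, 6]
```

In the multivariate setting of Polly Cracker, computing such remainders requires a Gröbner basis of the ideal, which is exactly the relation the paper formalizes.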
Last updated:  2011-10-31
On the Communication Complexity of Reliable and Secure Message Transmission in Asynchronous Networks
Ashish Choudhury, Arpita Patra
In this paper, we study the communication complexity of Reliable Message Transmission (RMT) and Secure Message Transmission (SMT) protocols in asynchronous settings. We consider two variants of the problem, namely perfect (where no error is allowed in the protocol outcome) and statistical (where the protocol may output a wrong outcome with negligible probability). RMT and SMT protocols have been investigated rigorously in synchronous settings, but not much attention has been paid to the asynchronous version of the problem. In a significant work, Choudhury et al. (ICDCN 2009 and JPDC 2011) have studied the network connectivity requirement for perfect and statistical SMT protocols in asynchronous settings. Their investigation reveals the following two important facts: 1. Perfect SMT protocols require more network connectivity in asynchronous networks than in synchronous networks. 2. The connectivity requirement of statistical SMT protocols is the same for both synchronous and asynchronous networks. Unfortunately, nothing is known about the communication complexity of RMT and SMT protocols in asynchronous settings. In this paper, we derive tight bounds on the communication complexity of the above problems and compare our results with the existing bounds for synchronous protocols. The interesting conclusions derived from our results are: 1. Asynchrony increases the communication complexity of perfect RMT protocols; however, asynchrony has no impact on the communication complexity of statistical RMT protocols. 2. The communication complexity of SMT protocols is higher in asynchronous networks, for both the perfect and the statistical case.
Last updated:  2011-06-03
Algebraic cryptanalysis of the round-reduced and side channel analysis of the full PRINTCipher-48
Stanislav Bulygin
In this paper we analyze the recently proposed light-weight block cipher PRINTCipher. Applying algebraic methods and SAT-solving, we are able to break 8 rounds of PRINTCipher-48 with only 2 known plaintexts, and 9 rounds under some additional assumptions. We show that it is possible to break the full 48-round cipher by assuming a moderate leakage of internal state bits or even just Hamming weights. Such a simulated side-channel attack has practical complexity. We investigate the applicability of our method to cryptanalysis of the full PRINTCipher-48.
Last updated:  2012-02-07
Bicliques for Preimages: Attacks on Skein-512 and the SHA-2 family
Dmitry Khovratovich, Christian Rechberger, Alexandra Savelieva
We present the new concept of a biclique as a tool for preimage attacks, which employs many powerful techniques from differential cryptanalysis of block ciphers and hash functions. The new tool has proved to be widely applicable, inspiring many authors to publish new results on the full versions of AES, KASUMI, IDEA, Square, and others. In this paper, we demonstrate how our concept yields the first cryptanalysis of the Skein hash function, and describe an attack on the SHA-2 hash function with more rounds than before.
Last updated:  2011-06-03
Exploiting Linear Hull in Matsui’s Algorithm 1 (extended version)
Andrea Röck, Kaisa Nyberg
We consider linear approximations of an iterated block cipher in the presence of several strong linear approximation trails. The effect of such trails in Matsui’s Algorithm 2, also called the linear hull effect, has been previously studied by a number of authors. However, the effect on Matsui’s Algorithm 1 has not been investigated until now. In this paper, we fill this gap and examine how to exploit the linear hull in Matsui’s Algorithm 1. We develop the mathematical framework for this kind of attack. The complexity of the attack increases with the number of strong linear trails. We show how to reduce the number of trails, and thus the complexity, using related keys. Further, we illustrate our theory by experimental results on a reduced-round version of the block cipher PRESENT.
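As a generic textbook illustration of the linear hull effect mentioned above (not the paper's framework; the trail correlations below are hypothetical), the correlation of a linear approximation is a key-dependent signed sum of the individual trail correlations, which is why several strong trails complicate Algorithm 1-style key recovery:

```python
# Toy numerical model of the linear hull effect: each strong trail contributes
# its correlation with a sign that depends on one key bit, and the observable
# hull correlation is the signed sum over all trails.

# Hypothetical correlations of three strong trails
TRAIL_CORRELATIONS = [2 ** -8, 2 ** -9, 2 ** -9]

def hull_correlation(key_bits):
    """Signed sum of trail correlations; key_bits[i] flips trail i's sign."""
    return sum((-1) ** b * c for b, c in zip(key_bits, TRAIL_CORRELATIONS))

# Different keys give different magnitudes and signs of the hull correlation.
print(hull_correlation([0, 0, 0]))   # all trails reinforce each other
print(hull_correlation([0, 1, 1]))   # two trails cancel against the first
```

This key dependence is exactly what makes the naive single-trail bias estimate of Algorithm 1 unreliable when several trails are strong.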
Last updated:  2011-09-15
On the Security of PPPoE Network
Fanbao Liu, Yumeng Feng, Yuan Cao
PPPoE is a network protocol for encapsulating PPP frames inside Ethernet. It is widely used by commercial ISPs to authenticate peers who pay for Internet access. In this paper, we analyze the security of the PPPoE network and find that we can easily collect information about both the peers and the PPPoE authentication servers. We can use such information to recover a peer's authentication password by silently impersonating the server, which is undetectable in the network. We impersonate the server in the peers' LAN and obtain their passwords by hijacking the peers' PPPoE connections and negotiating the use of the PAP authentication protocol. We further propose an efficient password recovery attack against the CHAP authentication protocol. We first recover the length of the password through on-line queries, based on a weakness of MD5 input pre-processing. Then we crack the password of known length off-line, using probabilistic context-free grammars. Since MS-CHAP has already been proved weak, and the more secure EAP authentication methods can be bypassed through a rollback attack that negotiates the weak protocols, the authentication passwords of PPPoE networks are truly in danger. We conclude that PPPoE cannot be used safely until all of the weak authentication protocols, including PAP, CHAP and MS-CHAP, are abolished and replaced with more secure EAP methods.
Last updated:  2011-06-03
The Fault Attack ECDLP Revisited
Mingqiang Wang, Xiaoyun Wang, Tao Zhan
Biehl et al.\cite{BMM} proposed a fault-based attack on elliptic curve cryptography. In this paper, we refine the fault attack method. Let $E$ be an elliptic curve defined over a prime field $\mathbb{F}_p$ with base point $P\in E(\mathbb{F}_p)$. Applying the fault attack to such curves, the discrete logarithm on the curve can be computed in subexponential time $L_p(1/2, 1+o(1))$. The runtime bound relies on a heuristic conjecture about smooth numbers similar to the ones used in \cite{Lens}.
Last updated:  2011-09-18
An Experimentally Verified Attack on Full Grain-128 Using Dedicated Reconfigurable Hardware
Itai Dinur, Tim Güneysu, Christof Paar, Adi Shamir, Ralf Zimmermann
In this paper we describe the first single-key attack which can break the full version of Grain-128 for arbitrary keys by an algorithm which is considerably faster than exhaustive search (by a factor of about $2^{38}$). It uses a new version of a cube tester, which uses an improved choice of dynamic variables to eliminate all the previously made assumptions on the key, to speed up the attack, and to simplify the final key recovery. Since it is extremely difficult to mathematically analyze the expected behavior of such attacks, we implemented it on RIVYERA, which is a new massively parallel reconfigurable hardware, and tested its main components for dozens of random keys. These tests experimentally verified the correctness and expected complexity of the attack. This is the first time a complex analytical attack is successfully realized against a full-size cipher with a special-purpose machine. Moreover, it is also the first attack that truly exploits the configurable nature of an FPGA-based cryptanalytical hardware.
Last updated:  2011-05-30
Computational Verifiable Secret Sharing Revisited
Michael Backes, Aniket Kate, Arpita Patra
Verifiable secret sharing (VSS) is an important primitive in distributed cryptography that allows a dealer to share a secret among n parties in the presence of an adversary controlling at most t of them. In the computational setting, the feasibility of VSS schemes based on commitments was established over two decades ago. Interestingly, all known computational VSS schemes rely on the homomorphic nature of these commitments or achieve weaker guarantees. As homomorphism is not inherent to commitments or to the computational setting in general, a closer look at its utility to VSS is called for. In this paper, we demonstrate that homomorphism of commitments is not a necessity for computational VSS in the synchronous or in the asynchronous communication setting. We present new VSS schemes based only on the definitional properties of commitments that are almost as good as existing VSS schemes based on homomorphic commitments. Furthermore, they have significantly lower communication complexities than their (statistical or perfect) unconditional counterparts. Considering the feasibility of commitments from any claw-free permutation, one-way function or collision-resistant hash function, our schemes can be an excellent alternative to unconditional VSS in the future. Further, in the synchronous communication model, we observe that a crucial interactive complexity measure, round complexity, has never been formally studied for computational VSS. Interestingly, for the optimal resiliency conditions, the least possible round complexity in the known computational VSS schemes is identical to that in the (statistical or perfect) unconditional setting: three rounds. Considering the strength of the computational setting, this equivalence is certainly surprising. In this paper, we show that three rounds are actually not mandatory for computational VSS.
We present the first two-round VSS scheme for n >= 2t+1 and show that this is tight by proving the impossibility of one-round computational VSS for t >= 2 or n <= 3t. For the remaining condition of t=1 and n >= 4, we present a one-round VSS scheme. We also include a new two-round VSS scheme using homomorphic commitments that has the same communication complexity as the well-known three-round Feldman and Pedersen VSS schemes.
Last updated:  2012-03-07
DDH-like Assumptions Based on Extension Rings
Ronald Cramer, Ivan Damgaard, Eike Kiltz, Sarah Zakarias, Angela Zottarel
We introduce and study a new type of DDH-like assumptions based on groups of prime order q. Whereas standard DDH is based on encoding elements of F_{q} ``in the exponent'' of elements in the group, we ask what happens if instead we put in the exponent elements of the extension ring R_f= \F_{q}[X]/(f) where f can be any degree-d polynomial. We show that solving the decision problem that follows naturally reduces to the case where f is irreducible. This variant is called the d-DDH problem, where 1-DDH is standard DDH. Essentially any known cryptographic construction based on DDH can be immediately generalized to use instead d-DDH, and we show in the generic group model that d-DDH is harder than DDH. This means that virtually any application of DDH can now be realized with the same (amortized) efficiency, but under a potentially weaker assumption. On the negative side, we also show that d-DDH, just like DDH, is easy in bilinear groups. This motivates our suggestion of a different type of assumption, the d-vector DDH problems (VDDH), which are based on f(X)= X^d, but with a twist to avoid the problems with reducible polynomials. We show in the generic group model that VDDH is hard in bilinear groups and that in fact the problems become harder with increasing d and hence form an infinite hierarchy. We show that hardness of VDDH implies CCA-secure encryption, efficient Naor-Reingold style pseudorandom functions, and auxiliary input secure encryption, a strong form of leakage resilience. This can be seen as an alternative to the known family of k-linear assumptions.
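To make the objects above concrete, here is a minimal sketch of arithmetic in the extension ring R_f = F_q[X]/(f) for the toy choice q = 11 and f(X) = X^2 + 1 (which is irreducible mod 11); all parameters are illustrative, and in the actual assumption such ring elements would be encoded componentwise "in the exponent" of a group generator:

```python
# Sketch of arithmetic in R_f = F_q[X]/(f) for f(X) = X^2 + 1, the d = 2 case.
# Ring elements are coefficient pairs (a0, a1) standing for a0 + a1*X; a party
# would publish (g**a0, g**a1) to encode such an element in the exponent.

Q = 11                     # toy prime; X^2 + 1 is irreducible mod 11

def ring_mul(a, b):
    """Multiply a0+a1*X by b0+b1*X modulo X^2 + 1 over F_Q."""
    a0, a1 = a
    b0, b1 = b
    # (a0+a1*X)(b0+b1*X) = a0*b0 + (a0*b1 + a1*b0)*X + a1*b1*X^2, with X^2 = -1
    return ((a0 * b0 - a1 * b1) % Q, (a0 * b1 + a1 * b0) % Q)

print(ring_mul((0, 1), (0, 1)))  # X * X = -1 mod 11 -> (10, 0)
```

The d-DDH problem then asks to distinguish exponent-encoded random ring elements from an exponent-encoded product, generalizing standard DDH, which is the d = 1 case.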
Last updated:  2011-09-14
Fully Homomorphic Encryption without Squashing Using Depth-3 Arithmetic Circuits
Craig Gentry, Shai Halevi
We describe a new approach for constructing fully homomorphic encryption (FHE) schemes. Previous FHE schemes all use the same blueprint from [Gentry 2009]: First construct a somewhat homomorphic encryption (SWHE) scheme, next "squash" the decryption circuit until it is simple enough to be handled within the homomorphic capacity of the SWHE scheme, and finally "bootstrap" to get a FHE scheme. In all existing schemes, the squashing technique induces an additional assumption: that the sparse subset sum problem (SSSP) is hard. Our new approach constructs FHE as a hybrid of a SWHE and a multiplicatively homomorphic encryption (MHE) scheme, such as Elgamal. Our construction eliminates the need for the squashing step, and thereby also removes the need to assume the SSSP is hard. We describe a few concrete instantiations of the new method, including a "simple" FHE scheme where we replace SSSP with Decision Diffie-Hellman, an optimization of the simple scheme that let us "compress" the FHE ciphertext into a single Elgamal ciphertext(!), and a scheme whose security can be (quantumly) reduced to the approximate ideal-SIVP. We stress that the new approach still relies on bootstrapping, but it shows how to bootstrap without having to "squash" the decryption circuit. The main technique is to express the decryption function of SWHE schemes as a depth-3 ($\sum \prod \sum$) arithmetic circuit of a particular form. When evaluating this circuit homomorphically (as needed for bootstrapping), we temporarily switch to a MHE scheme, such as Elgamal, to handle the $\prod$ part. Due to the special form of the circuit, the switch to the MHE scheme can be done without having to evaluate anything homomorphically. We then translate the result back to the SWHE scheme by homomorphically evaluating the decryption function of the MHE scheme. Using our method, the SWHE scheme only needs to be capable of evaluating the MHE scheme's decryption function, not its own decryption function. 
We thereby avoid the circularity that necessitated squashing in the original blueprint.
Last updated:  2011-05-30
Comparing Different Definitions of Secure Session
Can Zhang
We first propose a definition of session protocol where two parties exchange information using a shared key. The notion of security based on our definition of session protocol requires the secure transmission of a sequence of message pieces rather than a single message piece. We then propose several definitions of secure session protocol depending on how powerful we allow the adversary to be. The main goal of this paper is to compare those definitions and show a hierarchy of definitions of secure session protocol.
Last updated:  2011-08-11
Fully Homomorphic Encryption without Bootstrapping
Zvika Brakerski, Craig Gentry, Vinod Vaikuntanathan
We present a radically new approach to fully homomorphic encryption (FHE) that dramatically improves performance and bases security on weaker assumptions. A central conceptual contribution in our work is a new way of constructing leveled fully homomorphic encryption schemes (capable of evaluating arbitrary polynomial-size circuits), {\em without Gentry's bootstrapping procedure}. Specifically, we offer a choice of FHE schemes based on the learning with errors (LWE) or ring-LWE (RLWE) problems that have $2^\secparam$ security against known attacks. For RLWE, we have: 1. A leveled FHE scheme that can evaluate $L$-level arithmetic circuits with $\tilde{O}(\secparam \cdot L^3)$ per-gate computation -- i.e., computation {\em quasi-linear} in the security parameter. Security is based on RLWE for an approximation factor exponential in $L$. This construction does not use the bootstrapping procedure. 2. A leveled FHE scheme that uses bootstrapping {\em as an optimization}, where the per-gate computation (which includes the bootstrapping procedure) is $\tilde{O}(\secparam^2)$, {\em independent of $L$}. Security is based on the hardness of RLWE for {\em quasi-polynomial} factors (as opposed to the sub-exponential factors needed in previous schemes). We obtain similar results for LWE, but with worse performance. We introduce a number of further optimizations to our schemes. As an example, for circuits of large width -- e.g., where a constant fraction of levels have width at least $\secparam$ -- we can reduce the per-gate computation of the bootstrapped version to $\tilde{O}(\secparam)$, independent of $L$, by {\em batching the bootstrapping operation}. Previous FHE schemes all required $\tilde{\Omega}(\secparam^{3.5})$ computation per gate.
At the core of our construction is a much more effective approach for managing the noise level of lattice-based ciphertexts as homomorphic operations are performed, using some new techniques recently introduced by Brakerski and Vaikuntanathan (FOCS 2011).
Last updated:  2011-05-28
Analysis of the SSH Key Exchange Protocol
Stephen C. Williams
We provide an analysis of the widely deployed SSH protocol's key exchange mechanism. We exploit the design of the SSH key exchange to perform our analysis in a modular manner. First, a shared secret key is obtained via a Diffie-Hellman key exchange. Next, a transform is applied to obtain the application keys used by later stages of SSH. We define models, following well-established paradigms, that clarify the security provided by each type of key. Previously, there has been no formal analysis of the SSH key exchange protocol. We provide a modular proof of security for the SSH shared secret and application keys. We show that although the shared secret key exchanged by SSH is not indistinguishable, the transformation then applied yields indistinguishable application keys. Our proofs use random oracles to model the hash function used within SSH.
Last updated:  2011-09-30
Inverting the Square systems is exponential
Jintai Ding
In this paper, we prove that the degree of regularity of the family of Square systems, an HFE type of system, over a prime finite field of odd characteristic $q$ is exactly $q$, and therefore that inverting Square systems algebraically is exponential when $q=O(n)$, where $n$ is the number of variables of the system.
Last updated:  2011-08-14
A Splice-and-Cut Cryptanalysis of the AES
Dmitry Khovratovich, Christian Rechberger
Since Rijndael was chosen as the Advanced Encryption Standard, improving upon 7-round attacks on the 128-bit key variant or upon 8-round attacks on the 256-bit key variant has been one of the most difficult challenges in the cryptanalysis of block ciphers for more than a decade. In this paper we present a novel technique of block cipher cryptanalysis with bicliques, which leads to the following results: - The first key recovery attack on 9 out of 14 rounds of AES-256 with computational complexity 2^{253.1} and success rate 1. - The first key recovery attacks on 8 out of 10 rounds of AES-128. The best attack has computational complexity 2^{124.8} and success rate 0.63. - The first combination of a non-random property and an algorithm that allows one to distinguish the full 10-round AES-128 from an ideal cipher in a non-trivial way. This may be interpreted as a weak deviation from ideal behavior in a model where the adversary is allowed to choose the key, and has some relevance when AES-128 is used in the compression function of a cryptographic hash function. In contrast to most shortcut attacks on AES variants, we do not need any related keys. As our attacks are of high complexity, yet practically verified to a large extent, they do not threaten the practical use of AES-128 or AES-256 in any way.
Last updated:  2011-06-15
Memory Delegation
Kai-Min Chung, Yael Tauman Kalai, Feng-Hao Liu, Ran Raz
We consider the problem of delegating computation, where the delegator doesn't even know the input to the function being delegated, and runs in time significantly smaller than the input length. For example, consider the setting of {\em memory delegation}, where a delegator wishes to delegate her entire memory to the cloud. The delegator may want the cloud to compute functions on this memory, and prove that the functions were computed correctly. As another example, consider the setting of {\em streaming delegation}, where a stream of data goes by, and a delegator, who cannot store this data, delegates this task to the cloud. Later the delegator may ask the cloud to compute statistics on this streaming data, and prove the correctness of the computation. We note that in both settings the delegator must keep a (short) certificate of the data being delegated, in order to later verify the correctness of the computations. Moreover, in the streaming setting, this certificate should be computed in a streaming manner. We construct both memory and streaming delegation schemes. We present non-interactive constructions based on the (standard) delegation scheme of Goldwasser et al. (STOC '08). These schemes allow the delegation of any function computable by an ${\cal L}$-uniform circuit of low depth (the complexity of the delegator depends linearly on the depth). For memory delegation, we rely on the existence of a polylog PIR scheme, and for streaming, we rely on the existence of a fully homomorphic encryption scheme. We also present constructions based on the CS-proofs of Micali. These schemes allow the delegation of any function in~$\P$. However, they are interactive (i.e., consist of $4$ messages), or are non-interactive in the Random Oracle Model.
Last updated:  2011-10-25
Outsourcing Multi-Party Computation
Seny Kamara, Payman Mohassel, Mariana Raykova
We initiate the study of secure multi-party computation (MPC) in a server-aided setting, where the parties have access to a single server that (1) does not have any input to the computation; (2) does not receive any output from the computation; but (3) has a vast (but bounded) amount of computational resources. In this setting, we are concerned with designing protocols that minimize the computation of the parties at the expense of the server. We develop new definitions of security for this server-aided setting, that generalize the standard simulation-based definitions for MPC, and allow us to formally capture the existence of dishonest but non-colluding participants. This requires us to introduce a formal characterization of non-colluding adversaries that may be of independent interest. We then design general and special-purpose server-aided MPC protocols that are more efficient (in terms of computation and communication) for the parties than the alternative of running a standard MPC protocol (i.e., without the server). Our main general-purpose protocol provides security when there is at least one honest party with input. We also construct a new and efficient server-aided protocol for private set intersection and give a general transformation from any secure delegated computation scheme to a server-aided two-party protocol.
Last updated:  2011-05-28
Practical Key-recovery For All Possible Parameters of SFLASH
Charles Bouillaguet, Pierre-Alain Fouque, Gilles Macario-Rat
In this paper we present a new practical key-recovery attack on the SFLASH signature scheme. SFLASH is a derivative of the older $C^*$ encryption and signature scheme that was broken in 1995 by Patarin. In SFLASH, the public key is truncated, and this simple countermeasure prevents Patarin's attack. The scheme is well-known for having been considered secure and selected in 2004 by the NESSIE project of the European Union to be standardized. However, SFLASH was practically broken in 2007 by Dubois, Fouque, Stern and Shamir. Their attack breaks the original (and most relevant) parameters, but does not apply when more than half of the public key is truncated. It is therefore possible to choose parameters such that SFLASH is not broken by the existing attacks, although it is less efficient. We show a key-recovery attack that breaks the full range of parameters in practice, as soon as the information-theoretically required amount of information is available from the public-key. The attack uses new cryptanalytic tools, most notably pencils of matrices and quadratic forms.
Last updated:  2011-05-28
Programmable Hash Functions and Their Applications
Dennis Hofheinz, Eike Kiltz
We introduce a new combinatorial primitive called *programmable hash functions* (PHFs). PHFs can be used to *program* the output of a hash function such that it contains solved or unsolved discrete logarithm instances with a certain probability. This is a technique originally used for security proofs in the random oracle model. We give a variety of *standard model* realizations of PHFs (with different parameters). The programmability makes PHFs a suitable tool to obtain black-box proofs of cryptographic protocols when considering adaptive attacks. We propose generic digital signature schemes from the strong RSA problem and from some hardness assumption on bilinear maps that can be instantiated with any PHF. Our schemes offer various improvements over known constructions. In particular, for a reasonable choice of parameters, we obtain short standard model digital signatures over bilinear maps.
Last updated:  2011-05-30
Authenticated and Misuse-Resistant Encryption of Key-Dependent Data
Mihir Bellare, Sriram Keelveedhi
This paper provides a comprehensive treatment of the security of authenticated encryption (AE) in the presence of key-dependent data, considering the four variants of the goal arising from the choice of universal nonce or random nonce security and the presence or absence of a header. We present attacks showing that universal-nonce security for key-dependent messages is impossible, as is security for key-dependent headers, not only ruling out security for three of the four variants but showing that currently standardized and used schemes (all these target universal nonce security in the presence of headers) fail to provide security for key-dependent data. To complete the picture we show that the final variant (random-nonce security in the presence of key-dependent messages but key-independent headers) is efficiently achievable. Rather than a single dedicated scheme, we present a RO-based transform RHtE that endows ANY AE scheme with this security, so that existing implementations may be easily upgraded to have the best possible security in the presence of key-dependent data. RHtE is cheap, software-friendly, and continues to provide security when the key is a password, a setting in which key-dependent data is particularly likely. We go on to give a key-dependent data treatment of the goal of misuse-resistant AE. Implementations are provided and show that RHtE has small overhead.
Last updated:  2011-05-28
Birthday Forgery Attack on 128-EIA3 Version 1.5
Raja Zeshan Haider
128-EIA3 is an integrity algorithm considered for adoption as a third integrity algorithm by the European Telecommunications Standards Institute (ETSI) for the 4th generation of GSM networks. 128-EIA3 is vulnerable to a birthday forgery attack, which requires a minimum of 2^16 known message-MAC pairs for finding a collision in 128-EIA3. 128-EIA3 is susceptible to internal collisions of its universal hash function and external collisions of its XORing transformation. The birthday forgery attack on 128-EIA3 allows message forgery with success probability greater than 1/2^32.
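For a rough sanity check of the 2^16 figure, the generic birthday bound for a 32-bit MAC can be evaluated directly; this is the standard estimate for collisions among random tags, not a statement about 128-EIA3's internals:

```python
# Generic birthday-bound estimate: with q known message-MAC pairs and a
# b-bit MAC modeled as random, the probability that some two tags collide
# is approximately 1 - exp(-q*(q-1) / 2^(b+1)).
import math

def collision_prob(q, mac_bits=32):
    return 1.0 - math.exp(-q * (q - 1) / 2.0 ** (mac_bits + 1))

# At q = 2^16 pairs the collision probability is already about 0.39,
# and it exceeds 1/2 shortly afterwards.
print(round(collision_prob(2 ** 16), 2))
```

This matches the intuition that on the order of 2^16 pairs suffice to find a collision among 32-bit tags.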
Last updated:  2011-12-26
Mutual Private Set Intersection with Linear Complexity
Myungsun Kim, Hyung Tae Lee, Jung Hee Cheon
A private set intersection (PSI) protocol allows players to obtain the intersection of their inputs. While in its unilateral version only the client can obtain the intersection, the mutual PSI protocol enables all players to get the desired result. In this work, we construct a mutual PSI protocol that is significantly more efficient than the state-of-the-art in computation overhead. To the best of our knowledge, our construction is the \emph{first} result with linear computational complexity in the semi-honest model. For that, we come up with an efficient data representation technique, called \emph{prime representation}.
Last updated:  2011-05-30
Identity-Based Decryption
Daniel R. L. Brown
Identity-based decryption is an alternative to identity-based encryption, in which Alice encrypts a symmetric key for Bob under a trusted authority’s public key. Alice sends Bob the resulting ciphertext, which Bob can send to the trusted authority. The trusted authority provides Bob the symmetric key only upon verifying Bob’s identity.
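The flow described above can be sketched as follows, with textbook RSA on toy parameters standing in for the trusted authority's public-key encryption; the function names and the identity-packing format are illustrative assumptions, and the code is insecure by design (no padding, tiny modulus):

```python
# Minimal sketch of the identity-based decryption flow: Alice encrypts a
# symmetric key for Bob under the trusted authority's (TA's) public key,
# Bob forwards the ciphertext to the TA, and the TA releases the key only
# after verifying Bob's identity.

# TA's toy RSA key pair (textbook RSA, illustration only)
P, Q_ = 61, 53
N, E = P * Q_, 17
D = pow(E, -1, (P - 1) * (Q_ - 1))

def alice_encrypt(sym_key, bob_id):
    # Alice binds Bob's identity into the message (here by bit-packing)
    # and encrypts under the TA's public key.
    m = (sym_key << 8) | bob_id
    return pow(m, E, N)

def authority_release(ciphertext, claimed_id):
    # The TA decrypts and releases the symmetric key only if the identity
    # bound into the ciphertext matches the requester's verified identity.
    m = pow(ciphertext, D, N)
    key, bound_id = m >> 8, m & 0xFF
    return key if bound_id == claimed_id else None

ct = alice_encrypt(sym_key=9, bob_id=7)      # Alice -> Bob -> TA
print(authority_release(ct, claimed_id=7))   # -> 9 (identity verified)
print(authority_release(ct, claimed_id=8))   # -> None (wrong identity)
```

The point of the construction is that Bob needs no key pair of his own; all trust and decryption capability sit with the authority.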
Last updated:  2011-05-28
Efficient 2-Round General Perfectly Secure Message Transmission: A Minor Correction to Yang and Desmedt's Protocol
Qiushi Yang, Yvo Desmedt
At Asiacrypt '10, Yang and Desmedt proposed a number of perfectly secure message transmission protocols in the general adversary model. However, there is a minor flaw in the 2-round protocol for transmitting multiple messages in an undirected graph. A small correction solves the problem. Here we fix the protocol and prove its security.
Last updated:  2011-05-28
Round Optimal Blind Signatures
Dominique Schröder, Dominique Unruh
All known round optimal (i.e., two-move) blind signature schemes either need a common reference string, rely on random oracles, or assume the hardness of some interactive assumption. At Eurocrypt 2010, Fischlin and Schröder showed that a broad class of three-move blind signature schemes cannot be instantiated in the standard model based on any non-interactive assumption. This puts forward the question of whether round optimal blind signature schemes exist in the standard model. Here, we give a positive answer, presenting the first round optimal blind signature scheme that is secure in the standard model without any setup assumptions. Our solution does not need interactive assumptions.
Last updated:  2011-06-07
The Computational Square-Root Exponent Problem- Revisited
Fangguo Zhang
In this paper, we revisit the Computational Square-Root Exponent Problem (CSREP) and give a more generic condition under which CSREP is polynomial-time equivalent to the Computational Diffie-Hellman Problem (CDHP) in a group of prime order. The results obtained in this paper subsume Zhang \textit{et al.}'s results at IWCC 2011. We also analyze the existence of such a condition. Although primes satisfying the condition are rare (compared to all primes), this can be regarded as evidence that CSREP may be equivalent to CDHP.
Last updated:  2011-05-28
Cryptanalysis of the Light-Weight Cipher A2U2 - Reduced draft version
Mohamed Ahmed Abdelraheem, Julia Borghoff, Erik Zenner
At IEEE RFID 2011, David et al. proposed a new cryptographic primitive for use with RFID [2]. The design is a stream cipher called A2U2. Shortly afterwards, an attack was published on the IACR ePrint archive by Chai et al. [1], claiming to break the cipher in a chosen-plaintext attack using extremely few computational resources. Regrettably, this attack is invalid, since it works with an erroneous description of the cipher. In this paper, we show why the attack is wrong and how it can be repaired. Furthermore, we describe a guess-and-determine attack which applies in a known-plaintext scenario. A special design feature of A2U2 is that the number of initialization rounds varies between 9 and 126, depending on an internal counter. We propose a differential-style attack which enables us to find the counter value determining the number of initialization rounds. Moreover, we present an attack that recovers the master key in the case that only 9 initialization rounds are used.
Last updated:  2011-05-28
OBSERVATION: An explicit form for a class of second preimages for any message M for the SHA-3 candidate Keccak
Danilo Gligoroski, Rune Steinsmo Ødegård, Rune Erlend Jensen
In this short note we give an observation about the SHA-3 candidate Keccak[r,c,d], where the parameters r, c and d receive values from the formal proposal for the Keccak hash function (with a hash output of n = c bits). We show that an attacker who spends a one-time effort to find a second preimage for the value z0 = Keccak[r, c, d](0^r) actually obtains an infinite number of second preimages for free, for any message M. Our observation is an adaptation of similar attacks that have been reported by Aumasson et al. and Ferguson et al. for the SHA-3 candidate CubeHash. With this observation we do not contradict the security claims made in the official Keccak submission, but we identify a property of the design of the function: we get an explicit form for a class of second preimages for any message M. As far as we know, this kind of property is not known for MD5, SHA-1, SHA-2, or the other SHA-3 candidates.
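The observation rests on a generic sponge property: once two message prefixes lead to the same full internal state, every common suffix yields the same digest, so a single state collision produces second preimages without bound. The toy sketch below (a 16-bit sponge with a random permutation standing in for Keccak-f; all parameters are illustrative, not Keccak's) demonstrates this extension property:

```python
import random

# Toy sponge: 16-bit state = 8-bit rate || 8-bit capacity, with a fixed
# random permutation standing in for the Keccak-f permutation.
random.seed(1)
R_BITS, C_BITS = 8, 8
PERM = list(range(1 << (R_BITS + C_BITS)))
random.shuffle(PERM)

def sponge(blocks):
    """Absorb a list of 8-bit message blocks, squeeze one 8-bit digest."""
    state = 0
    for m in blocks:
        state = PERM[state ^ (m << C_BITS)]  # XOR block into the rate part
    return state >> C_BITS                   # output = rate part of state

# Birthday-search two distinct 2-block prefixes with identical *full* state.
seen = {}
collision = None
for m1 in range(1 << R_BITS):
    for m2 in range(1 << R_BITS):
        state = PERM[PERM[m1 << C_BITS] ^ (m2 << C_BITS)]
        if state in seen:
            collision = (seen[state], (m1, m2))
            break
        seen[state] = (m1, m2)
    if collision:
        break

p1, p2 = collision
# Once the internal states agree, ANY suffix extends the collision:
for suffix in ([], [7], [1, 2, 3], [255, 0, 42]):
    assert sponge(list(p1) + suffix) == sponge(list(p2) + suffix)
```

Here the tiny state makes a full-state collision easy to find by brute force; for the real Keccak parameters the one-time effort is, of course, far beyond any search of this kind.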
Last updated:  2011-05-28
Security & Indistinguishability in the Presence of Traffic Analysis
Cristina Onete, Daniele Venturi
Traffic analysis (TA) is a powerful tool against the security and privacy of cryptographic primitives, permitting an adversary to monitor the frequency and timing characteristics of transmissions in order to distinguish the senders or the receivers of possibly encrypted communication. Briefly, adversaries may leak implementation-specific information even for schemes that are provably secure with respect to a classical model, resulting in a breach of security and/or privacy. In this work we introduce the notion of \emph{indistinguishability in the presence of traffic analysis}, enhancing \emph{any} classical security model such that no adversary can distinguish between two protocol runs (possibly implemented on different machines) with respect to a TA oracle (leaking information about each protocol run). This new notion models an attack where the adversary taps a single node of in- and outgoing communication and tries to relate two sessions of the same protocol, either run by two senders or for two receivers. Our contributions are threefold: (1) We first define a framework for indistinguishability in the presence of TA, then we (2) fully relate various notions of indistinguishability, depending on the adversary's goal and the type of TA information it has. Finally we (3) show how to use our framework for the SSH protocol and for a concrete application of RFID authentication.
Last updated:  2011-05-28
Comments on a sensor network key redistribution technique of Cichon, Golebiewski and Kutylowski
Douglas R. Stinson
Cichon, Golebiewski and Kutylowski (\cite{CGK}) proposed a technique for ``key redistribution'' in sensor networks. The idea is that long-term keys held by the sensor nodes are used to encrypt temporal keys that a base station then broadcasts to the network. The temporal keys are used as session keys by the nodes in the sensor network. It is argued that this provides increased connectivity and resilience as compared to a standard Eschenauer-Gligor key predistribution scheme, as well as providing some additional advantages. In this paper, we provide some simpler proofs of some results from \cite{CGK}. As well, we give a precise analysis of the resilience of Cichon, Golebiewski and Kutylowski's scheme, and we discuss modifications of the scheme based on defining a suitable intersection threshold.
Last updated:  2011-05-28
A High Speed Pairing Coprocessor Using RNS and Lazy Reduction
Gavin Xiaoxu Yao, Junfeng Fan, Ray C. C. Cheung, Ingrid Verbauwhede
In this paper, we present a high speed pairing coprocessor using Residue Number System (RNS) and lazy reduction. We show that combining RNS, which are naturally suitable for parallel architectures, and lazy reduction, which performs one reduction for more than one multiplication, the computational complexity of pairings can be largely reduced. The design is prototyped on a Xilinx Virtex-6 FPGA, which utilizes 7023 slices and 32 DSPs, and finishes one 254-bit optimal ate pairing computation in 0.664 ms.
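Lazy reduction itself is easy to illustrate: products are kept in double-width form and a single modular reduction is performed over the accumulated sum. A minimal sketch (the prime below is illustrative, not the actual BN-254 field prime):

```python
import random

# Lazy reduction: accumulate several double-width products, reduce once.
p = (1 << 255) - 19  # any large prime works for the demonstration

random.seed(0)
a, b, c, d = (random.randrange(p) for _ in range(4))

# Eager: one reduction per multiplication, plus one for the sum.
eager = ((a * b) % p + (c * d) % p) % p

# Lazy: keep a*b and c*d unreduced, reduce the accumulated sum once.
lazy = (a * b + c * d) % p

assert eager == lazy
```

In hardware the saving is real because each reduction is far more expensive than the additions that accumulate the unreduced products.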
Last updated:  2011-05-25
Secure Multi-Party Computation of Boolean Circuits with Applications to Privacy in On-Line Marketplaces
Seung Geol Choi, Kyung-Wook Hwang, Jonathan Katz, Tal Malkin, Dan Rubenstein
Protocols for generic secure multi-party computation (MPC) come in two forms: they either represent the function being computed as a boolean circuit, or as an arithmetic circuit over a large field. Either type of protocol can be used for any function, but the choice of which type of protocol to use can have a significant impact on efficiency. The magnitude of the effect, however, has never been quantified. With this in mind, we implement the MPC protocol of Goldreich, Micali, and Wigderson, which uses a boolean representation and is secure against a semi-honest adversary corrupting any number of parties. We then consider applications of secure MPC in on-line marketplaces, where customers select resources advertised by providers and it is desired to ensure privacy to the extent possible. Problems here are more naturally formulated in terms of boolean circuits, and we study the performance of our MPC implementation relative to existing ones that use an arithmetic-circuit representation. Our protocol easily handles tens of customers/providers and thousands of resources, and outperforms existing implementations including FairplayMP, VIFF, and SEPIA.
Last updated:  2013-03-09
Leakage Resilient Secure Two-Party Computation
Ivan Damgaard, Carmit Hazay, Arpita Patra
In the traditional {\em secure function evaluation} setting, some set of mutually distrusting parties jointly compute a function of their respective inputs {\em securely}, as if the computation were executed in an ideal setting where the parties send their inputs to a trusted party that performs the computation and returns its result. Almost independently of secure computation, the area of {\em leakage resilient cryptography} has recently been evolving intensively, studying the question of designing cryptographic primitives that remain secure even when some information about the secret key is leaked. In this paper we initiate the study of {\em secure two-party computation in the presence of leakage}, where, on top of corrupting one of the parties, the adversary obtains leakage from the content of the secret memory of the honest party. Our study involves the following contributions: (1) {\em Security definitions.} We formalize the notion of secure two-party computation in the presence of leakage and introduce security definitions in the {\em ideal/real framework}. Our formalization induces two types of adversarial attacks. We further study the feasibility of our definitions in the computational setting and explore some of the conditions under which these definitions are met. (2) {\em Composition theorems.} We provide composition theorems for our new models, covering the case where the inputs of the parties are sampled from a min-entropy source distribution. (3) {\em Leakage resilient oblivious transfer.} We present the first construction of 1-out-of-2 oblivious transfer with security against leakage of a constant fraction of the honest party's memory. Our protocol is based on the OT construction of Peikert et al.~\cite{PeikertVW08}. (4) {\em Leakage resilient Yao's garbled circuit~\cite{Yao82}.} We provide the first general construction for secure two-party computation in this setting and show how to adapt the proof of Yao's protocol from~\cite{LP09} to the leakage resilient setting. Our result holds for a restricted set of functions due to technicalities that arise in the proof.
Last updated:  2011-05-25
Hiding the Policy in Cryptographic Access Control
Sascha Müller, Stefan Katzenbeisser
Recently, cryptographic access control has received a lot of attention, mainly due to the availability of efficient \emph{Attribute-Based Encryption (ABE)} schemes. ABE makes it possible to get rid of a trusted reference monitor by enforcing access rules cryptographically. However, ABE has a privacy problem: the access policies are sent in the clear along with the ciphertexts. Further generalizing the idea of policy-hiding in cryptographic access control, we introduce \emph{policy anonymity}, where, similar to the well-understood concept of $k$-anonymity, the attacker only sees a large set of possible policies that might have been used to encrypt, but is not able to identify the one that was actually used. We show that, using a concept from graph theory, we can extend a known ABE construction to achieve the desired privacy property.
Last updated:  2012-05-17
Using the Cloud to Determine Key Strengths
T. Kleinjung, A. K. Lenstra, D. Page, N. P. Smart
We develop a new methodology to assess cryptographic key strength using cloud computing, by calculating the true economic cost of (symmetric- or private-) key retrieval for the most common cryptographic primitives. Although the present paper gives the current costs, more importantly it provides the tools and infrastructure to derive new data points at any time in the future, while allowing for improvements such as new algorithmic approaches. Over time the resulting data points will provide valuable insight into the selection of cryptographic key sizes.
Last updated:  2011-05-23
Attack Cryptosystems Based on HCDLP
Mingqiang Wang, Xiaoyun Wang, Tao Zhan
We present an algorithm for solving the discrete logarithm problem on hyperelliptic curves defined over a finite field when the cyclic group can be represented in a special form. For the general case, we design a method to attack hyperelliptic curve cryptosystems. As an example, we illustrate an attack on the Twin Diffie-Hellman key agreement scheme~\cite{CKS}. As a byproduct, we enumerate the isomorphism classes of genus $2$ hyperelliptic curves over a finite field which satisfy certain special conditions.
Last updated:  2011-09-06
Cryptography Secure Against Related-Key Attacks and Tampering
Mihir Bellare, David Cash, Rachel Miller
We show how to leverage the RKA (Related-Key Attack) security of blockciphers to provide RKA security for a suite of high-level primitives. This motivates a more general theoretical question, namely, when is it possible to transfer RKA security from a primitive P1 to a primitive P2? We provide both positive and negative answers. What emerges is a broad and high level picture of the way achievability of RKA security varies across primitives, showing, in particular, that some primitives resist ``more'' RKAs than others. A technical challenge was to achieve RKA security even for the practical classes of related-key deriving (RKD) functions underlying fault injection attacks that fail to satisfy the ``claw-freeness'' assumption made in previous works. We surmount this barrier for the first time based on the construction of PRGs that are not only RKA secure but satisfy a new notion of identity-collision-resistance.
Last updated:  2011-05-23
Concurrently Secure Computation in Constant Rounds
Sanjam Garg, Vipul Goyal, Abhishek Jain, Amit Sahai
We study the problem of constructing concurrently secure computation protocols in the plain model, where no trust is required in any party or setup. While the well established UC framework for concurrent security is impossible to achieve in this setting, a meaningful notion of concurrent security based on \emph{super-polynomial simulation} (SPS) is achievable and has been extensively studied [Pas03,PS04,BS05,LPV09,CLP10]. The recent work of [CLP10] obtains a concurrently secure computation protocol in the plain model with SPS security under standard assumptions, but requires a number of rounds of interaction that is polynomial in the security parameter. In this work, we obtain the first concurrently secure computation protocol in the plain model with SPS security that uses only a \emph{constant} number of rounds and requires only \emph{standard assumptions}. To accomplish our result, we introduce a new proof technique that significantly reduces the demands placed on ``rewinding techniques'' employed in previous work. We believe that our techniques are of independent interest and likely to be applicable in other settings related to secure concurrent composition.
Last updated:  2011-12-21
A Parallel Repetition Theorem for Leakage Resilience
Zvika Brakerski, Yael Tauman Kalai
A leakage resilient encryption scheme is one which stays secure even against an attacker that obtains a bounded amount of side information on the secret key (say $\lambda$ bits of ``leakage''). A fundamental question is whether parallel repetition amplifies leakage resilience. Namely, if we secret share our message, and encrypt the shares under two independent keys, will the resulting scheme be resilient to $2\lambda$ bits of leakage? Surprisingly, Lewko and Waters (FOCS 2010) showed that this is false. They gave an example of a public-key encryption scheme that is (CPA) resilient to $\lambda$ bits of leakage, and yet its $2$-repetition is not resilient to even $(1+\epsilon)\lambda$ bits of leakage. In their counter-example, the repeated schemes share secretly generated public parameters. In this work, we show that under a reasonable strengthening of the definition of leakage resilience (one that captures known proof techniques for achieving non-trivial leakage resilience), parallel repetition \emph{does} in fact amplify leakage (for CPA security). In particular, if fresh public parameters are used for each copy of the Lewko-Waters scheme, then their negative result does not hold, and leakage is amplified by parallel repetition. More generally, we show that given $t$ schemes that are resilient to $\lambda_1, \ldots, \lambda_t$ bits of leakage, respectfully, their direct product is resilient to $\sum (\lambda_i-1)$ bits. We present our amplification theorem in a general framework that applies other cryptographic primitives as well.
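The repetition step described above (secret-share the message, then encrypt each share under an independent key) uses plain 2-out-of-2 XOR sharing. A minimal sketch of the sharing layer, with the encryption of the shares omitted:

```python
import secrets

def share(msg: bytes):
    """2-out-of-2 XOR secret sharing: each share alone is uniformly random."""
    s1 = secrets.token_bytes(len(msg))
    s2 = bytes(a ^ b for a, b in zip(msg, s1))
    return s1, s2  # in the scheme, s1 and s2 are encrypted under independent keys

def reconstruct(s1: bytes, s2: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(s1, s2))

msg = b"attack at dawn"
s1, s2 = share(msg)
assert reconstruct(s1, s2) == msg
```

Because each share individually is uniform, leakage on one key (and its ciphertext) reveals nothing about the message unless combined with leakage on the other, which is the intuition behind hoping that bounds add up under repetition.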
Last updated:  2011-07-27
Breaking a certificateless key agreement protocol without bilinear pairing
W. Han
Certificateless public key cryptography simplifies the complex certificate management in the traditional public key cryptography and resolves the key escrow problem in identity-based cryptography. Many certificateless designated verifier signature protocols using bilinear pairings have been proposed. But the relative computation cost of the pairing is approximately twenty times higher than that of the scalar multiplication over elliptic curve group. Recently, He et al. proposed a certificateless authenticated key agreement protocol without pairings and presented that their protocol is secure in the random oracle model. In this paper, we show that their protocol is insecure against the Type I adversary.
Last updated:  2011-05-19
Fast Password Recovery Attack: Application to APOP
Fanbao Liu, Yi Liu, Tao Xie, Yumeng Feng
In this paper, we propose a fast local password recovery attack on APOP which can recover a password of 11 characters in less than one minute, a password of 31 characters extremely fast, in about 4 minutes, and one of 43 characters in practical time. These attacks truly simulate the practical password recovery attacks launched by $malware$ in real life, and further confirm that the security of APOP is totally broken. To achieve these dramatic improvements, we propose a group satisfaction scheme, and apply the divide-and-conquer strategy and a new suitable MD5 collision attack to greatly reduce the computational complexity of collision searching with a high number of chosen bits. The average time of generating an ``\textit{IV Bridge}'' is optimized to 0.17 seconds on an ordinary PC; the average time of generating collision pairs for recovering passwords of up to 11 characters is about 0.08 seconds, for 31 characters about 0.15 seconds, for 39 characters about 4.13 seconds, and for 43 characters about 20 seconds; collisions for recovering passwords as long as 67 characters can be generated in theory. These techniques can further be applied to reduce the complexity of producing 1-bit-free collisions for recovering the first 11 characters, whose main goal is to reduce the number of challenges generated in the APOP attack, to about $2^{36}$ MD5 compressions.
Last updated:  2011-05-26
An Ultra-Efficient Key Recovery Attack on the Lightweight Stream Cipher A2U2
Qi Chai, Xinxin Fan, Guang Gong
In this letter we report on an ultra-efficient key recovery attack under the chosen-plaintext-attack model against the stream cipher A2U2, which is the most lightweight cryptographic primitive (i.e., it costs only 284 GE in hardware implementation) proposed so far for low-cost Radio Frequency Identification (RFID) tags. Our attack can fully recover the secret key of the A2U2 cipher by only querying the A2U2 encryption twice on the victim tag and solving 32 sparse systems of linear equations (where each system has 56 unknowns and around 28 unknowns can be directly obtained without computation) in the worst case, which takes around 0.16 second on a Thinkpad T410 laptop.
Last updated:  2011-09-26
A Framework for Secure Single Sign-On
Bernardo Machado David, Anderson C. A. Nascimento, Rafael Tonicelli
Single sign-on solutions allow users to sign on only once and have their identities automatically verified by each application or service they want to access afterwards. There are few practical and secure single sign-on models, even though such models are of great importance to current distributed application environments. We build on proxy signature schemes to introduce the first public key cryptographic approach to single sign-on frameworks, which represents an important milestone towards the construction of provably secure single sign-on schemes. Our contribution is two-fold, providing a framework that handles both session state across multiple services and granular access control. The intrinsic centralized access control functionality adds no additional cost to the single sign-on protocol while providing an easy way to manage access policies and user rights revocation. Moreover, our approach significantly improves communication complexity by eliminating any communication between services and identity providers during user identity and access permission verification. Relying on simple primitives, our methods can be easily and efficiently implemented using standard cryptography APIs and libraries. We base our constructions on standard cryptographic techniques and a threat model that captures the characteristics of current attacks and the requirements of modern applications. This is the first approach to base single sign-on security on public key cryptography and to associate such a practical application with proxy signatures.
Last updated:  2011-05-18
On the Number of Carries Occurring in an Addition $\mod 2^k-1$
Jean-Pierre Flori, Hugues Randriam
In this paper we study the number of carries occurring while performing an addition modulo $2^k-1$. For a fixed modular integer $t$, it is natural to expect the number of carries occurring when adding a random modular integer $a$ to be roughly the Hamming weight of $t$. Here we are interested in the number of modular integers in $\Zk$ producing strictly more than this number of carries when added to a fixed modular integer $t \in \Zk$. In particular it is conjectured that less than half of them do so. An equivalent conjecture was proposed by Tu and Deng in a different context~\cite{DCC:TD}. Although quite innocent, this conjecture has resisted different attempts at proof~\cite{DBLP:conf/seta/FloriRCM10, cryptoeprint:2010:170, cusick_combinatorial_2011, Carlet:Private} and only a few cases have been proved so far. The most manageable cases involve modular integers $t$ whose bits equal to $\texttt{0}$ are sparse. In this paper we continue to investigate the properties of $\Ptk$, the fraction of modular integers $a$ to enumerate, for $t$ in this class of integers. Doing so we prove that $\Ptk$ has a polynomial expression and describe a closed form of this expression. This is of particular interest for computing the function giving $\Ptk$ and studying it analytically. Finally we bring to light additional properties of $\Ptk$ in an asymptotic setting and give closed forms for its asymptotic values.
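A small experiment makes the quantity concrete. The sketch below counts carries in a $k$-bit addition with end-around carry, one natural convention for addition modulo $2^k-1$ (the paper's exact counting convention may differ), and checks empirically that the average over random $a$ is close to the Hamming weight of $t$:

```python
def carries_mod(a, t, k):
    """Count carries in the k-bit addition of a and t with end-around carry
    (one convention for addition mod 2^k - 1; an assumption, the paper's
    exact convention may differ)."""
    mask = (1 << k) - 1
    count, carry = 0, 0
    for i in range(k):
        s = ((a >> i) & 1) + ((t >> i) & 1) + carry
        carry = s >> 1
        count += carry
    if carry:  # end-around: feed the overflow carry back into bit 0
        x = (a + t) & mask
        carry = 1
        for i in range(k):
            s = ((x >> i) & 1) + carry
            carry = s >> 1
            count += carry
            if not carry:
                break
    return count

k, t = 8, 0b10110100  # Hamming weight wt(t) = 4
avg = sum(carries_mod(a, t, k) for a in range(1 << k)) / (1 << k)
# The average number of carries over all a is close to wt(t):
assert abs(avg - bin(t).count("1")) < 1.5
```

The conjecture concerns the tail of this distribution: how many $a$ produce strictly more carries than wt(t).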
Last updated:  2012-05-09
PRISM -- Privacy-Preserving Search in MapReduce
Erik-Oliver Blass, Roberto Di Pietro, Refik Molva, Melek Onen
We present PRISM, a privacy-preserving scheme for word search in cloud computing. In the face of a curious cloud provider, the main challenge is to design a scheme that achieves privacy while preserving the efficiency of cloud computing. Solutions from related research, like encrypted keyword search or Private Information Retrieval (PIR), fall short of meeting real-world cloud requirements and are impractical. PRISM's idea is to transform the problem of word search into a set of parallel instances of PIR on small datasets. Each PIR instance on a small dataset is efficiently solved by a node in the cloud during the ``Map'' phase of MapReduce. Outcomes of map computations are then aggregated during the ``Reduce'' phase. Due to the linearity of PRISM, the simple aggregation of map results yields the final output of the word search operation. We have implemented PRISM on Hadoop MapReduce and evaluated its efficiency using real-world DNS logs. PRISM's overhead over non-private search is only 11%. Thus, PRISM offers privacy-preserving search that meets cloud computing efficiency requirements. Moreover, PRISM is compatible with standard MapReduce, not requiring any change to the interface or infrastructure.
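The linearity PRISM relies on can be seen in the classic two-server XOR-based PIR sketched below (a hedged illustration of linear PIR in general, not PRISM's actual scheme): each server computes a GF(2) inner product of the database with a random-looking query vector, and the client combines the answers with a single XOR.

```python
import secrets
from functools import reduce

def pir_queries(n, i):
    """Two random-looking query vectors differing only at index i."""
    q1 = [secrets.randbits(1) for _ in range(n)]
    q2 = list(q1)
    q2[i] ^= 1
    return q1, q2

def pir_answer(db, q):
    """Each server returns the XOR (GF(2) inner product) of selected bits."""
    return reduce(lambda acc, jb: acc ^ (jb[0] & jb[1]), zip(db, q), 0)

db = [1, 0, 1, 1, 0, 0, 1, 0]  # toy database of bits
i = 3
q1, q2 = pir_queries(len(db), i)
bit = pir_answer(db, q1) ^ pir_answer(db, q2)
assert bit == db[i]
```

Because each answer is a linear function of the database, answers computed independently on database shards can simply be XORed together, which is exactly the aggregation the "Reduce" phase performs.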
Last updated:  2011-07-26
Affine Pairings on ARM
Tolga Acar, Kristin Lauter, Michael Naehrig, Daniel Shumow
Pairings on elliptic curves are being used in an increasing number of cryptographic applications on many different devices and platforms, but few performance numbers for cryptographic pairings have been reported on embedded and mobile devices. In this paper we give performance numbers for affine and projective pairings on a dual-core Cortex A9 ARM processor and compare performance of the same implementation across three platforms: x86, x86-64 and ARM. Using a fast inversion in the base field and doing inversion in extension fields by using the norm map to reduce to inversions in smaller fields, we find a very low ratio of inversion-to-multiplication costs. In our implementation, this favors using affine coordinates on all three platforms, even for the current 128-bit minimum security level specified by NIST. We use Barreto-Naehrig (BN) curves and report on the performance of an optimal ate pairing for curves covering security levels roughly between 128 and 192 bits. We compare with other reported performance numbers for pairing computation on ARM processors.
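The norm-map trick reduces an extension-field inversion to a single base-field inversion. A minimal sketch for $\mathbb{F}_{p^2} = \mathbb{F}_p[i]/(i^2+1)$ with an illustrative prime $p \equiv 3 \pmod 4$ (not the BN field used in the paper):

```python
# Inversion in F_{p^2} = F_p[i]/(i^2 + 1) via the norm map:
# (a0 + a1*i)^(-1) = (a0 - a1*i) / (a0^2 + a1^2), one F_p inversion.
p = 2**31 - 1  # a prime with p = 3 (mod 4), so x^2 + 1 is irreducible

def fp2_mul(x, y):
    a0, a1 = x
    b0, b1 = y
    return ((a0 * b0 - a1 * b1) % p, (a0 * b1 + a1 * b0) % p)

def fp2_inv(x):
    a0, a1 = x
    norm = (a0 * a0 + a1 * a1) % p  # N(x) = x * conj(x) lies in F_p
    norm_inv = pow(norm, -1, p)     # the single base-field inversion
    return ((a0 * norm_inv) % p, (-a1 * norm_inv) % p)

x = (123456789, 987654321)
assert fp2_mul(x, fp2_inv(x)) == (1, 0)
```

When the base-field inversion is fast, this keeps the inversion-to-multiplication ratio low in every tower level, which is what makes affine coordinates competitive.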
Last updated:  2011-05-18
Cryptanalysis of KeeLoq code-hopping using a Single FPGA
Idan Sheetrit, Avishai Wool
The KeeLoq cipher is used in many wireless car door systems and garage openers. Recently the algorithm was studied and several attacks have been published. When a random seed is not used the attack on the system is fairly straight-forward. However when a seed is shared between the remote control and the receiver previous research suggested using highly parallel crypto hardware (like COPACOBANA) for breaking the cipher within reasonable time. In this paper we show that highly-parallel hardware is not necessary: our attack uses a single FPGA for breaking KeeLoq when using a 48-bit random seed in 17 hours using a mid-range Virtex-4, and less than 3 hours using a high-end Virtex-6 chip. We achieve these results using a combination of algorithmic improvements, FPGA design methodology, and Xilinx-specific features.
Last updated:  2011-05-18
A Novel Adaptive Proactive Secret Sharing without a Trusted Party
Xiuqun Wang
A $(t+1,n)$ proactive secret sharing scheme protects a secret in a long-lived system by distributing it to a group of $n$ participants and periodically refreshing their shares within this fixed group, while any $t+1$ or more shares can reconstruct the secret. In some environments, one needs to change not only the number of participants $n$ but also the threshold value $t$. An adaptive proactive secret sharing scheme refreshes the shares as $t$ and $n$ change. In this paper, we propose a novel adaptive proactive secret sharing scheme without a trusted party. Our proposed scheme is uniformly efficient and tolerates $t$ Byzantine faults in any single time interval, where the number of participants is $n\geq 3t+1$. The threshold value $t$ and the number of participants $n$ can be changed arbitrarily between two adjacent intervals. We also prove that our proposed scheme is secure under the discrete logarithm intractability assumption.
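Such schemes build on $(t+1,n)$ Shamir sharing over a prime field. The sketch below shows only this underlying layer (share generation and Lagrange reconstruction), with the field and parameters chosen for illustration; the adaptive resharing and verifiability machinery of the paper is not shown.

```python
import random

P = 2**61 - 1  # Mersenne prime field (illustrative choice)

def share(secret, t, n):
    """(t+1, n) Shamir sharing: random degree-t polynomial with f(0) = secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t+1 shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

random.seed(7)
secret = 424242
shares = share(secret, t=2, n=7)
assert reconstruct(shares[:3]) == secret              # any t+1 = 3 shares suffice
assert reconstruct(random.sample(shares, 3)) == secret
```

Refreshing or changing $(t, n)$ then amounts to having each shareholder reshare its share with new parameters and combining the sub-shares, without ever reconstructing the secret in one place.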
Last updated:  2012-05-31
Universal Composability from Essentially Any Trusted Setup
Mike Rosulek
It is impossible to securely carry out general multi-party computation in arbitrary network contexts like the Internet, unless protocols have access to some trusted setup. In this work we classify the power of such trusted (2-party) setup functionalities. We show that nearly every setup is either {\bf useless} (ideal access to the setup is equivalent to having no setup at all) or else {\bf complete} (composably secure protocols for {\em all} tasks exist in the presence of the setup). We further argue that those setups which are neither complete nor useless are highly unnatural. The main technical contribution in this work is an almost-total characterization of completeness for 2-party setups. Our characterization treats setup functionalities as black-boxes, and therefore is the first work to classify completeness of {\em arbitrary setup functionalities} (i.e., randomized, reactive, and having behavior that depends on the global security parameter).
Last updated:  2011-06-28
Efficient Software Implementations of Modular Exponentiation
Shay Gueron
RSA computations have a significant effect on the workloads of SSL/TLS servers, and therefore their software implementations on general purpose processors are an important target for optimization. We concentrate here on 512-bit modular exponentiation, used for 1024-bit RSA. We propose optimizations in two directions. At the primitives’ level, we study and improve the performance of an “Almost” Montgomery Multiplication. At the exponentiation level, we propose a method to reduce the cost of protecting the w-ary exponentiation algorithm against cache/timing side channel attacks. Together, these lead to an efficient software implementation of 512-bit modular exponentiation, which outperforms the currently fastest publicly available alternative. When measured on the latest x86-64 architecture, the 2nd Generation Intel® Core™ processor, our implementation is 43% faster than that of the current version of OpenSSL (1.0.0d).
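For reference, plain textbook Montgomery multiplication (REDC), the primitive that the "Almost" variant relaxes, can be sketched as follows; the modulus is an arbitrary illustrative odd 128-bit number, not an RSA modulus:

```python
# Textbook Montgomery multiplication: computes a*b*R^(-1) mod n using only
# shifts and masks by the power-of-two R instead of division by n.
n = 0xE3B0C44298FC1C149AFBF4C8996FB925  # odd 128-bit modulus (illustrative)
R = 1 << 128
n_prime = (-pow(n, -1, R)) % R  # n * n_prime = -1 (mod R)

def montmul(a, b):
    """REDC(a*b): returns a*b*R^(-1) mod n for a, b < n."""
    t = a * b
    m = ((t & (R - 1)) * n_prime) & (R - 1)  # m = t * n_prime mod R
    u = (t + m * n) >> 128                   # t + m*n is divisible by R
    return u - n if u >= n else u

a, b = 0x1234567890ABCDEF, 0xFEDCBA0987654321
aR, bR = (a * R) % n, (b * R) % n  # convert to Montgomery form
abR = montmul(aR, bR)              # = a*b*R mod n
result = montmul(abR, 1)           # strip the factor of R
assert result == (a * b) % n
```

The "Almost" variant keeps outputs only partially reduced between steps, trading the conditional subtraction for fewer corrections across a whole exponentiation.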
Last updated:  2016-08-19
Attacks On a Double Length Blockcipher-based Hash Proposal
Yiyuan Luo, Xuejia Lai
In this paper we attack a $2n$-bit double length hash function proposed by Lee et al. This proposal is a blockcipher-based hash function with hash rate $2/3$. The designers claimed that it could achieve ideal collision resistance and gave a security proof. However, we find a collision attack with complexity of $\Omega(2^{3n/4})$ and a preimage attack with complexity of $\Omega(2^{n})$. Our result shows this construction is much worse than an ideal $2n$-bit hash function.
Last updated:  2011-05-18
The block cipher NSABC (public domain)
Alice Nguyenova-Stepanikova, Tran Ngoc Duong
We introduce NSABC/w (Nice-Structured Algebraic Block Cipher using $w$-bit word arithmetic), a $4w$-bit analogue of Skipjack [NSA98] with a $5w$-bit key. Skipjack's internal 4-round Feistel structure is replaced with a $w$-bit, 2-round cascade of a binary operation $(x,z)\mapsto(x\boxdot z)\lll(w/2)$ that permutes a text word $x$ under control of a key word $z$. The operation $\boxdot$, similarly to the multiplication in IDEA [LM91, LMM91], is based on an algebraic group over $w$-bit words, so decryption is possible by means of the inverse element of $z$ in the group. The cipher utilizes a secret $4w$-bit tweak, an easily changeable parameter with a unique value for each block encrypted under the same key [LRW02], that is derived from the block index and an additional $4w$-bit key. A software implementation for $w=64$ takes circa 9 clock cycles per byte on x86-64 processors.
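As a hedged illustration of what such a round operation can look like, the sketch below instantiates $\boxdot$ with IDEA's multiplication modulo $2^{16}+1$ (with 0 encoding $2^{16}$) for $w=16$; the cipher's actual group operation is not given in the abstract, so this choice is an assumption:

```python
W = 16
MOD = (1 << W) + 1  # 65537 is prime; IDEA-style group with 0 encoding 2^16

def gmul(x, z):
    """IDEA-style multiplication: operands in [0, 2^16), 0 represents 2^16.
    This stands in for the cipher's group operation (an assumption)."""
    r = ((x or 1 << W) * (z or 1 << W)) % MOD
    return 0 if r == 1 << W else r

def ginv(z):
    """Inverse of z in the group, mapped back to [0, 2^16)."""
    r = pow(z or 1 << W, -1, MOD)
    return 0 if r == 1 << W else r

def rotl(x, s):
    return ((x << s) | (x >> (W - s))) & ((1 << W) - 1)

def round_enc(x, z):
    return rotl(gmul(x, z), W // 2)        # (x . z) <<< w/2

def round_dec(y, z):
    return gmul(rotl(y, W // 2), ginv(z))  # rotating by w/2 twice = identity

for x, z in [(0, 1), (12345, 54321), (65535, 2), (1, 65535)]:
    assert round_dec(round_enc(x, z), z) == x
```

Because the operation is a group multiplication followed by a fixed rotation, decryption needs only the group inverse of the key word, which is the invertibility property the abstract describes.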
Last updated:  2011-05-17
Using Templates to Distinguish Multiplications from Squaring Operations
Neil Hanley, Michael Tunstall, William P. Marnane
Since side channel analysis was introduced as a method to recover secret information from an otherwise secure cryptosystem, many countermeasures have been proposed to prevent leakage from secure devices. Among these countermeasures is side channel atomicity that makes operations indistinguishable using side channel analysis. In this paper we present practical results of an attack on RSA signature generation, protected in this manner, based on the expected difference in Hamming weight between the result of a multiplication and a squaring operation. This work presents the first attack that we are aware of where template analysis can be used without requiring an open device to characterize an implementation of a given cryptographic algorithm. Moreover, an attacker does not need to know the plaintexts being operated on and, therefore, blinding and padding countermeasures applied to the plaintext do not hinder the attack in any way.
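The Hamming-weight bias that such attacks exploit is easy to check exhaustively for small words: the square of an odd integer is congruent to 1 modulo 8, so two low bits of a square are forced to zero, pushing the expected Hamming weight of squares below that of products of independent operands. A toy 8-bit experiment:

```python
def hw(x):
    return bin(x).count("1")

# Exhaustive 8-bit experiment: expected Hamming weight of x*y over all
# operand pairs versus x*x over all operands.
mean_mul = sum(hw(x * y) for x in range(256) for y in range(256)) / 256**2
mean_sq = sum(hw(x * x) for x in range(256)) / 256

# Squaring leaves a measurable bias: for odd x, x*x = 1 (mod 8), so the
# two low bits of a square are always zero, unlike a generic product.
assert mean_sq < mean_mul
```

On a real device the attack works word by word with templates rather than whole-result weights, but the same expected-weight gap is what makes multiplications and squarings distinguishable.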
Last updated:  2012-05-31
Computer-Aided Decision-Making with Trust Relations and Trust Domains (Cryptographic Applications)
Simon Kramer, Rajeev Goré, Eiji Okamoto
We propose generic declarative definitions of individual and collective trust relations between interacting agents and agent collections, and trust domains of trust-related agents in distributed systems. Our definitions yield (1) (in)compatibility, implicational, and transitivity results for trust relationships, including a Datalog-implementability result for their logical structure; (2) computational complexity results for deciding potential and actual trust relationships and membership in trust domains; (3) a positive (negative) compositionality result for strong (weak) trust domains; (4) a computational design pattern for building up strong trust domains; and (5) a negative scalability result for trust domains in general. We instantiate our generic trust concepts in five major cryptographic applications of trust, namely: Access Control, Trusted Third Parties, the Web of Trust, Public-Key Infrastructures, and Identity-Based Cryptography. We also show that accountability induces trust. Our defining principle for weak and strong trust (domains) is (common) belief in and (common) knowledge of agent correctness, respectively.
Last updated:  2011-05-17
Comments on a secure dynamic ID-based remote user authentication scheme for multi-server environment using smart cards
Debiao He
The security of a dynamic ID-based remote user authentication scheme for multi-server environment using smart cards proposed by Lee et al. [Lee, C-C., Lin, T-H., Chang, R-X., A Secure Dynamic ID based Remote User Authentication Scheme for Multi-server Environment using Smart Cards, Expert Systems with Applications (2011), doi: 10.1016/j.eswa.2011.04.190] is analyzed. Three kinds of attacks, applicable in different scenarios, are presented.
Last updated:  2011-05-17
Correlated-Input Secure Hash Functions
Vipul Goyal, Adam O'Neill, Vanishree Rao
We undertake a general study of hash functions secure under {\em correlated inputs}, meaning that security should be maintained when the adversary sees hash values of many related high-entropy inputs. Such a property is satisfied by a random oracle, and its importance is illustrated by study of the ``avalanche effect,'' a well-known heuristic in cryptographic hash function design. One can interpret ``security'' in different ways: e.g., asking for one-wayness or that the hash values look uniformly and independently random; the latter case can be seen as a generalization of correlation-robustness introduced by Ishai et al.~(CRYPTO 2003). We give specific applications of these notions to password-based login and efficient search on encrypted data. Our main construction achieves them (without random oracles) for inputs related by {\em polynomials} over the input space (namely $\mathbb{Z}_p$ for a prime number $p$), based on corresponding variants of the $q$-Diffie Hellman Inversion assumption. Additionally, we show relations between correlated-input secure hash functions and cryptographic primitives secure under related-key attacks. Using our techniques, we are also able to obtain a host of new results for such related-key attack secure cryptographic primitives.
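The ``avalanche effect'' mentioned above is easy to observe empirically: flipping a single input bit should flip about half of the output bits. A minimal sketch using SHA-256, chosen only as a convenient stand-in (the paper's constructions are algebraic, not built from standard hash functions):

```python
import hashlib

def avalanche(msg: bytes, bit: int) -> int:
    """Flip one input bit and count how many output bits change."""
    flipped = bytearray(msg)
    flipped[bit // 8] ^= 1 << (bit % 8)
    a = hashlib.sha256(msg).digest()
    b = hashlib.sha256(bytes(flipped)).digest()
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

msg = b"correlated-input test vector"
diffs = [avalanche(msg, i) for i in range(len(msg) * 8)]
print("average flipped output bits:", sum(diffs) / len(diffs))
```

For a 256-bit digest the average lands near 128, i.e., roughly half the output bits change per single-bit input flip.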
Last updated:  2011-06-08
Remote Timing Attacks are Still Practical
Billy Bob Brumley, Nicola Tuveri
For over two decades, timing attacks have been an active area of research within applied cryptography. These attacks exploit cryptosystem or protocol implementations that do not run in constant time. When implementing an elliptic curve cryptosystem with a goal to provide side-channel resistance, the scalar multiplication routine is a critical component. In such instances, one attractive method often suggested in the literature is Montgomery's ladder that performs a fixed sequence of curve and field operations. This paper describes a timing attack vulnerability in OpenSSL's ladder implementation for curves over binary fields. We use this vulnerability to steal the private key of a TLS server where the server authenticates with ECDSA signatures. Using the timing of the exchanged messages, the messages themselves, and the signatures, we mount a lattice attack that recovers the private key. Finally, we describe and implement an effective countermeasure.
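Montgomery's ladder can be sketched in a few lines. The toy below performs modular exponentiation rather than elliptic-curve scalar multiplication, but shows the fixed multiply/square pattern executed in every iteration. The leak exploited in the paper stemmed from the number of ladder iterations depending on the scalar's bit length; passing a fixed `bits` bound independent of the scalar, as below, reflects the spirit of the countermeasure.

```python
def ladder_pow(base: int, exp: int, mod: int, bits: int) -> int:
    """Montgomery-ladder modular exponentiation: each iteration performs
    one multiply and one square regardless of the exponent bit, and the
    iteration count is fixed by `bits`, not by the exponent's length."""
    r0, r1 = 1, base % mod       # invariant: r1 == r0 * base (mod mod)
    for i in reversed(range(bits)):
        if (exp >> i) & 1:
            r0 = (r0 * r1) % mod
            r1 = (r1 * r1) % mod
        else:
            r1 = (r0 * r1) % mod
            r0 = (r0 * r0) % mod
    return r0

print(ladder_pow(7, 560, 561, 16) == pow(7, 560, 561))  # True
```

Note this sketch still branches on the key bit; a hardened implementation would also use constant-time conditional swaps.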
Last updated:  2012-08-14
History-Free Sequential Aggregate Signatures
Marc Fischlin, Anja Lehmann, Dominique Schröder
Aggregation schemes allow one to combine several cryptographic values like message authentication codes or signatures into a shorter value such that, despite compression, some notion of unforgeability is preserved. Recently, Eikemeier et al. (SCN 2010) considered the notion of history-free sequential aggregation for message authentication codes, where the sequentially-executed aggregation algorithm does not need to receive the previous messages in the sequence as input. Here we discuss the idea for signatures, where the new aggregate does not rely on the previous messages and public keys either, thus avoiding the costly verifications required in each aggregation step by previous schemes by Lysyanskaya et al. (Eurocrypt 2004) and Neven (Eurocrypt 2008). Analogously to the MAC case, we develop new security definitions for such schemes and compare them to previous notions for history-dependent schemes. We finally give a construction based on the BLS signature scheme which satisfies our notion.
Last updated:  2017-04-04
All-But-Many Lossy Trapdoor Functions
Dennis Hofheinz
We put forward a generalization of lossy trapdoor functions (LTFs). Namely, all-but-many lossy trapdoor functions (ABM-LTFs) are LTFs that are parametrized with tags. Each tag can either be injective or lossy, which leads to an invertible or a lossy function. The interesting property of ABM-LTFs is that it is possible to generate an arbitrary number of lossy tags by means of a special trapdoor, while it is not feasible to produce lossy tags without this trapdoor. Our definition and construction can be seen as generalizations of all-but-one LTFs (due to Peikert and Waters) and all-but-N LTFs (due to Hemenway et al.). However, to achieve ABM-LTFs (and thus a number of lossy tags which is not bounded by any polynomial), we have to employ some new tricks. Concretely, we give two constructions that employ ``disguised'' variants of the Waters, resp. Boneh-Boyen signature schemes to make the generation of lossy tags hard without trapdoor. In a nutshell, lossy tags simply correspond to valid signatures. At the same time, tags are disguised (i.e., suitably blinded) to keep lossy tags indistinguishable from injective tags. ABM-LTFs are useful in settings in which there are a polynomial number of adversarial challenges (e.g., challenge ciphertexts). Specifically, building on work by Hemenway et al., we show that ABM-LTFs can be used to achieve selective opening security against chosen-ciphertext attacks. One of our ABM-LTF constructions thus yields the first SO-CCA secure encryption scheme with compact ciphertexts (O(1) group elements) whose efficiency does not depend on the number of challenges. Our second ABM-LTF construction yields an IND-CCA (and in fact SO-CCA) secure encryption scheme whose security reduction is independent of the number of challenges and decryption queries.
Last updated:  2011-05-16
Routing Protocol Based Shared and Session Key Exchange Protocol for Wireless Mobile Ad-hoc Network
Md. Golam Kaosar
Mobile Ad-hoc Network (MANET) is a transitory network with no fixed trusted infrastructure. To achieve security goals like authentication, integrity, non-repudiation, and privacy, a secret key (or session key) must be shared between the sender and receiver. Because of the nature of MANET, it is unrealistic in many circumstances to implement the Certification Authority (CA) concept. Some popular key exchange protocols also have demerits in the MANET setting, mainly due to their high computational requirements. In this paper we propose an algorithm to exchange shared and session keys between the sender and the destination even during route creation in various routing protocols.
Last updated:  2011-05-12
A Framework for Practical Universally Composable Zero-Knowledge Protocols
Jan Camenisch, Stephan Krenn, Victor Shoup
Zero-knowledge proofs of knowledge (ZK-PoK) for discrete logarithms and related problems are indispensable for practical cryptographic protocols. At \emph{Eurocrypt 2009}, Camenisch, Kiayias, and Yung provided a specification language (the \emph{CKY-language}) for such protocols, which allows one to modularly design and analyze cryptographic protocols: protocol designers just need to specify the statement they want to prove in zero-knowledge and are ensured that an efficient proof protocol exists and indeed proves the specified statement, provided that the specification was in the CKY-language. However, as specifications in the CKY-language are realized by so-called $\Sigma$-protocols, the resulting protocols only satisfy the classical notion of zero-knowledge proofs of knowledge, which is \emph{not} retained if they are composed with themselves or with other protocols, e.g., when used as building blocks for higher-level applications. This problem can be tackled by moving to the Universal Composability (UC) framework, which guarantees retention of security when composing protocols and, in particular, when using them as building blocks in arbitrary contexts. While there exist generic transformations from $\Sigma$-protocols to protocols that are secure under this stronger security notion, these transformations are often not efficient enough for the design of practical protocols. In this paper we are aiming for practically efficient ZK-PoK in the UC-framework by introducing a specification language akin to the CKY-language and a compiler such that protocols specified in our language are UC-secure and efficient. To this end we propose an extension of the UC-framework addressing the problem that UC-secure zero-knowledge proofs are always proofs \emph{of knowledge}, and state a special composition theorem which allows one to use the weaker -- but more efficient and often sufficient -- notion of proofs \emph{of existence} in the UC-framework for the first time. 
We believe that our contributions enable the design of practical protocols that are UC-secure and thus themselves can be used as building blocks.
Last updated:  2011-05-12
Robust parent-identifying codes and combinatorial arrays
Alexander Barg, Grigory Kabatiansky
An $n$-word $y$ over a finite alphabet of cardinality $q$ is called a descendant of a set of $t$ words $x^1,\dots,x^t$ if $y_i\in\{x^1_i,\dots,x^t_i\}$ for all $i=1,\dots,n.$ A code $\mathcal{C}=\{x^1,\dots,x^M\}$ is said to have the $t$-IPP property if for any $n$-word $y$ that is a descendant of at most $t$ parents belonging to the code it is possible to identify at least one of them. From earlier works it is known that $t$-IPP codes of positive rate exist if and only if $t\le q-1$. We introduce a robust version of IPP codes which allows {unconditional} identification of parents even if some of the coordinates in $y$ can break away from the descent rule, i.e., can take arbitrary values from the alphabet, or become completely unreadable. We show existence of robust $t$-IPP codes for all $t\le q-1$ and some positive proportion of such coordinates. The proofs involve relations between IPP codes and combinatorial arrays with separating properties such as perfect hash functions and hash codes, partially hashing families and separating codes. For $t=2$ we find the exact proportion of mutant coordinates (for several error scenarios) that permits unconditional identification of parents.
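The descent rule in the definition is straightforward to state in code. This toy check (small ternary alphabet and example words chosen for illustration) verifies whether a word is a descendant of a given parent set:

```python
def is_descendant(y, parents):
    """y is a descendant of `parents` if every coordinate of y appears
    in the same coordinate of at least one parent word."""
    return all(any(y[i] == x[i] for x in parents) for i in range(len(y)))

# Two parent words over the alphabet {0, 1, 2}.
x1, x2 = (0, 1, 2, 0), (1, 1, 0, 2)

print(is_descendant((0, 1, 0, 2), (x1, x2)))  # True: mixes x1 and x2
print(is_descendant((2, 1, 0, 2), (x1, x2)))  # False: coordinate 0 breaks the rule
```

The robust variant studied in the paper relaxes exactly this check, tolerating a proportion of coordinates that violate the rule or are unreadable.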
Last updated:  2012-05-31
Substitution-permutation networks, pseudorandom functions, and Natural Proofs
Eric Miles, Emanuele Viola
This paper takes a new step towards closing the troubling gap between pseudorandom functions (PRF) and their popular, bounded-input-length counterparts. This gap is both quantitative, because these counterparts are more efficient than PRF in various ways, and methodological, because these counterparts usually fit in the substitution-permutation network paradigm (SPN) which has not been used to construct PRF. We give several candidate PRF F_i that are inspired by the SPN paradigm. This paradigm involves a ``substitution function'' (S-box). Our main candidates are: 1. F_1 : {0,1}^n -> {0,1}^n is an SPN whose S-box is a random function on b bits, given as part of the seed. We prove unconditionally that F_1 resists attacks that run in time at most 2^{Omega(b)}. Setting b = omega(log n) we obtain an inefficient PRF, which however seems to be the first such construction using the SPN paradigm. 2. F_2 : {0,1}^n -> {0,1}^n is an SPN where the S-box is (patched) field inversion, a common choice in practical constructions. F_2 is computable with Boolean circuits of size n * log^{O(1)} n, and in particular with seed length n * log^{O(1)} n. We prove that this candidate has exponential security 2^{Omega(n)} against linear and differential cryptanalysis. 3. F_3 : {0,1}^n -> {0,1} is a non-standard variant on the SPN paradigm, where ``states'' grow in length. F_3 is computable with size n^{1+eps}, for any eps > 0, in the restricted circuit class TC0 of unbounded fan-in majority circuits of constant-depth. We prove that F_3 is almost $3$-wise independent. 4. F_4 : {0,1}^n -> {0,1} uses an extreme setting of the SPN parameters (one round, one S-box, no diffusion matrix). The S-box is again (patched) field inversion. We prove that this candidate is a small-bias generator (for tests of weight up to 2^{0.9n}). 
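A toy version of candidate F_1's structure: a random S-box (part of the seed, as in the paper) applied nibble-wise, followed by a simple bit rotation standing in for the diffusion layer. The parameters and the rotation-based diffusion are illustrative choices, not the paper's construction.

```python
import random

B = 4          # S-box width in bits (the paper's parameter b)
WORDS = 4      # state is WORDS nibbles, so n = B * WORDS bits

def random_sbox(rng):
    """F_1-style seed material: a uniformly random function on B bits."""
    return [rng.randrange(1 << B) for _ in range(1 << B)]

def spn_round(state, sbox):
    """One SPN round: S-box layer on each nibble, then a bit rotation
    standing in for the diffusion (permutation) layer."""
    nibbles = [(state >> (B * j)) & ((1 << B) - 1) for j in range(WORDS)]
    mixed = 0
    for j, v in enumerate(nibbles):
        mixed |= sbox[v] << (B * j)
    n = B * WORDS
    return ((mixed << 3) | (mixed >> (n - 3))) & ((1 << n) - 1)

rng = random.Random(0)
sbox = random_sbox(rng)
state = 0xBEEF
for _ in range(5):
    state = spn_round(state, sbox)
print(hex(state))
```

Note that a random function (rather than permutation) S-box matches F_1; practical SPN ciphers instead use a fixed bijective S-box, as in candidate F_2's patched field inversion.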
Assuming the security of our candidates, our work also narrows the gap between the ``Natural Proofs barrier'' [Razborov & Rudich; JCSS '97] and existing lower bounds, in three models: unbounded-depth circuits, TC0 circuits, and Turing machines. In particular, the efficiency of the circuits computing F_3 is related to a result by Allender and Koucky [JACM '10] who show that a lower bound for such circuits would imply a lower bound for TC0.
Last updated:  2011-05-11
A Simple and Efficient New Group Key Management Approach Based on Linear Geometry
Shaohua Tang, Jintai Ding, Yujun Liang
A new fundamental and secure group key management approach with a group controller GC is developed using the theory of polynomial functions over a vector space over a finite field, where each member in the group corresponds to a vector in the vector space and the GC computes a central vector whose inner product with every member's ID vector is identical. The central vector is published and each member can compute a common group key via an inner product. The security relies on the fact that an illegitimate user cannot calculate this value without a legitimate vector, and therefore cannot derive the group key. This approach is secure, and its backward and forward secrecy can be guaranteed. The performance of our approach is analyzed to demonstrate its advantages in comparison with others, which include: 1) it requires both small memory and few computations for each group member; 2) it can handle massive membership changes efficiently with only two re-keying messages, i.e., the central vector and a random number; 3) it is very efficient and very scalable for large groups. Our experiments confirm these advantages and the implementation of our prototype presents very satisfactory performance for large groups.
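The inner-product mechanism can be sketched as follows. For brevity this toy fixes the central vector first and generates member vectors on the hyperplane <v, c> = key (mod p), whereas the scheme itself solves for the central vector given the members' secret vectors; the modulus, dimension, and member count are arbitrary illustrative choices.

```python
import random

p = 2_147_483_647          # toy prime modulus (illustrative only)
random.seed(42)
dim = 4
key = random.randrange(p)  # the common group key

# Published central vector c; entries nonzero so the last one is invertible.
c = [random.randrange(1, p) for _ in range(dim)]

def member_vector():
    """A secret member vector v with <v, c> = key (mod p)."""
    v = [random.randrange(p) for _ in range(dim - 1)]
    partial = sum(vi * ci for vi, ci in zip(v, c)) % p
    last = ((key - partial) * pow(c[-1], -1, p)) % p
    return v + [last]

members = [member_vector() for _ in range(5)]
# Each member derives the group key from its own vector and the public c.
keys = [sum(vi * ci for vi, ci in zip(v, c)) % p for v in members]
print(len(set(keys)) == 1)  # True: every member computes the same key
```

An outsider holding only the public central vector faces one equation in `dim` unknowns, which is the intuition behind the scheme's security claim.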
Last updated:  2011-05-11
Cryptanalysis and Improvement of an Efficient CCA Secure PKE Scheme
Xu An Wang, Liqiang Wu, Xiaoyuan Yang, Huaqun Wang
Recently in the Chinese Journal of Computers, Kang et al. [12] proposed an efficient CCA secure public key encryption (PKE) scheme, and claimed that it is more efficient in terms of public/private key sizes than the famous CS98 and BMW05 CCA secure public key encryption schemes. However, in this paper we show that their proposal is not secure at all. Furthermore, we improve their scheme into a secure one and prove its security.
Last updated:  2011-05-22
A Perfectly Binding Commitment Scheme Against Quantum Attacks
Zeng Bing, Chen Liang, Tang Xueming
It is known that perfectly binding, quantum computationally hiding commitment schemes can be constructed from any quantum one-way permutation. Since no quantum one-way permutations are known, it has so far been unknown whether such a concrete commitment scheme can be obtained. In this paper, we give a positive answer. Specifically, we present such a lattice-based commitment scheme, built from the results of Gentry et al.
Last updated:  2014-06-09
Sequential Aggregate Signatures with Lazy Verification from Trapdoor Permutations
Kyle Brogle, Sharon Goldberg, Leonid Reyzin
Sequential aggregate signature schemes allow n signers, in order, to sign a message each, at a lower total cost than the cost of n individual signatures. We present a sequential aggregate signature scheme based on trapdoor permutations (e.g., RSA). Unlike prior such proposals, our scheme does not require a signer to retrieve the keys of other signers and verify the aggregate-so-far before adding its own signature. Indeed, we do not even require a signer to know the public keys of other signers! Moreover, for applications that require signers to verify the aggregate anyway, our schemes support lazy verification: a signer can add its own signature to an unverified aggregate and forward it along immediately, postponing verification until load permits or the necessary public keys are obtained. This is especially important for applications where signers must access a large, secure, and current cache of public keys in order to verify messages. The price we pay is that our signature grows slightly with the number of signers. We report a technical analysis of our scheme (which is provably secure in the random oracle model), a detailed implementation-level specification, and implementation results based on RSA and OpenSSL. To evaluate the performance of our scheme, we focus on the target application of BGPsec (formerly known as Secure BGP), a protocol designed for securing the global Internet routing system. There is a particular need for lazy verification with BGPsec, since it is run on routers that must process signatures extremely quickly, while being able to access tens of thousands of public keys. We compare our scheme to the algorithms currently proposed for use in BGPsec, and find that our signatures are considerably shorter than nonaggregate RSA (with the same sign and verify times) and have an order of magnitude faster verification than nonaggregate ECDSA, although ECDSA has shorter signatures when the number of signers is small.
Last updated:  2011-05-07
Protecting Drive Encryption Systems Against Memory Attacks
Leo Dorrendorf
Software drive encryption systems are vulnerable to memory attacks, in which an attacker gains physical access to the unattended computer, obtains the decryption keys from memory, and consequently decrypts the drive. We reviewed the currently existing mitigations and found that they provide only partial protection, and none of them protect against the full range of memory attacks. We propose a new method for protecting encryption systems against memory attacks, by converting them to use two tiers of keys: a single Master Key and a set of File or Sector keys. When the computer is unattended, the Master Key and part of the second-tier keys are erased from memory. The method is secure against any type of memory attack, including attackers who gain complete control of the unattended system. Compared to previous methods of protection, which erase keys and shut down the computer, our method allows the computer to remain operational through a combination of cryptographic and operating-system techniques. Applications may continue running, and can access any unencrypted data as well as a chosen subset of the encrypted data, at the cost of leaving that data unsecured against memory attacks. We first describe the application of the method to file-based encryption systems, where we have implemented and tested it in practice, and then describe a possible adaptation to disk-based encryption systems.
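The two-tier idea can be sketched with a simple HMAC-based key hierarchy. The file names and the derivation function are hypothetical stand-ins; the paper's construction targets real drive-encryption systems, but the structure (derive second-tier keys from the master, erase the master plus all but a chosen subset when unattended) is the same.

```python
import hashlib
import hmac
import os

master_key = os.urandom(32)

def file_key(master: bytes, file_id: str) -> bytes:
    """Second-tier key derived per file from the Master Key."""
    return hmac.new(master, file_id.encode(), hashlib.sha256).digest()

# While attended: second-tier keys are derived on demand from the master.
cache = {fid: file_key(master_key, fid) for fid in ("mail.db", "notes.txt")}

# When the machine goes unattended: erase the master and every cached
# file key except a chosen "keep accessible" subset.
keep = {"notes.txt"}
master_key = None
cache = {fid: k for fid, k in cache.items() if fid in keep}
print(sorted(cache))  # only the deliberately unsecured subset remains
```

A memory attacker now recovers at most the keys for the retained subset; all other file keys are gone and cannot be re-derived without the erased Master Key.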
Last updated:  2011-05-07
Framework for Security Proofs for On-demand Routing Protocols in Multi-Hop Wireless Networks
István Vajda
We present a framework for security proofs for on-demand routing protocols. The framework relies on the composable cryptographic library by Backes, Pfitzmann and Waidner (BPW). The idea is to break down the security requirement against the system (the protocol) into security requirements against the elements of the system, the honest protocol machines in the BPW symbolic model. The practical outcome of this approach is a simplified, structured, and cryptographically sound security proof; it also provides guidelines for designing provably secure protocols.
Last updated:  2013-02-20
On the Security of TLS-DHE in the Standard Model
Tibor Jager, Florian Kohlar, Sven Schäge, Jörg Schwenk
TLS is the most important cryptographic protocol in use today. However, up to now there is no complete cryptographic security proof in the standard model, nor in any other model. We give the first such proof for the core cryptographic protocol of TLS ciphersuites based on ephemeral Diffie-Hellman key exchange (TLS-DHE), which include the cipher suite TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA mandatory in TLS 1.0 and TLS 1.1. It is impossible to prove security of the TLS Handshake in any classical key-indistinguishability-based security model (like e.g. the Bellare-Rogaway or the Canetti-Krawczyk model), due to subtle issues with the encryption of the final Finished messages of the TLS Handshake. Therefore we start with proving the security of a truncated version of the TLS Handshake protocol, which has also been considered in previous work on TLS. Then we define the notion of authenticated and confidential channel establishment (ACCE) as a new security model which captures precisely the security properties expected from TLS in practice, and show that the combination of the TLS Handshake protocol with the TLS Record Layer can be proven secure in this model.
Last updated:  2011-07-12
Cryptographic Analysis of All 4 x 4 - Bit S-Boxes
Markku-Juhani O. Saarinen
We present cryptanalytic results of an exhaustive search of all $16!$ bijective 4-bit S-Boxes. Previously affine equivalence classes have been exhaustively analyzed in 2007 work by Leander and Poschmann. We extend this work by giving further properties of the optimal S-Box linear equivalence classes. In our main analysis we consider two S-Boxes to be cryptanalytically equivalent if they are isomorphic up to the permutation of input and output bits and an XOR of a constant to the input and output. We have enumerated all such equivalence classes with respect to their differential and linear properties. These equivalence classes are equivalent not only in their differential and linear bounds but also have equivalent algebraic properties, branch number and circuit complexity. We describe a ``golden'' set of S-boxes that have ideal cryptographic properties. We also present a comparison table of S-Boxes from a dozen published cryptographic algorithms.
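The differential and linear properties that define these equivalence classes are cheap to compute exhaustively for 4-bit S-Boxes. The sketch below measures them for the PRESENT S-box, which lies in one of the optimal classes of Leander and Poschmann:

```python
# The PRESENT cipher's 4-bit S-box.
S = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
     0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def ddt_max(s):
    """Largest differential-distribution-table entry over nonzero input differences."""
    best = 0
    for a in range(1, 16):
        counts = [0] * 16
        for x in range(16):
            counts[s[x] ^ s[x ^ a]] += 1
        best = max(best, max(counts))
    return best

def lat_max(s):
    """Largest absolute linear-approximation-table bias over nonzero mask pairs."""
    best = 0
    for a in range(1, 16):
        for b in range(1, 16):
            agree = sum(1 for x in range(16)
                        if bin(x & a).count("1") % 2 == bin(s[x] & b).count("1") % 2)
            best = max(best, abs(agree - 8))
    return best

print(ddt_max(S), lat_max(S))  # 4 4, the optimal bounds for bijective 4-bit S-boxes
```

Running the same two functions over all $16!$ permutations (with the input/output bit-permutation and XOR symmetries factored out) is exactly the kind of enumeration the paper performs.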
Last updated:  2012-11-02
Identity Based Deterministic Signature Scheme Without Forking-Lemma
S. Sharmila Deva Selvi, S. Sree Vivek, C. Pandu Rangan
Since the discovery of identity based cryptography, a number of identity based signature schemes have been reported in the literature. Although many identity based signature schemes have been proposed, the only identity based deterministic signature scheme was given by Javier Herranz. This signature scheme uses the Schnorr signature scheme for generating the private keys of the users and the BLS short signature scheme for generating users' signatures. The security of this scheme was proved in the random oracle model using the forking lemma. In this paper, we introduce a new identity based deterministic signature scheme and prove its security in the random oracle model without the aid of the forking lemma. Hence, our scheme offers a tighter security reduction to the underlying hard problem than the existing identity based deterministic signature scheme.
Last updated:  2014-01-03
Provably Secure Group Key Management Approach Based upon Hyper-sphere
Shaohua Tang, Lingling Xu, Niu Liu, Jintai Ding, Zhiming Yang
Secure group communication systems become more and more important in many emerging network applications. For a secure group communication system, an efficient and robust group key management approach is essential. In this paper, a new group key management approach with a group controller GC is developed using the theory of hyper-spheres, where a hyper-sphere is constructed for a group and each member in the group corresponds to a point on the hyper-sphere, called the member's private point. The GC computes the central point of the hyper-sphere, intuitively, whose ``distance'' from each member's private point is identical. The central point is published and each member can compute a common group key via a function invoking the member's private point and the central point of the hyper-sphere. This approach is provably secure under the pseudo-random function (PRF) assumption. The performance of our approach is analyzed to demonstrate its advantages in comparison with others, which include: 1) it requires both small memory and few computations for each group member; 2) it can handle massive membership changes efficiently with only two re-keying messages, i.e., the central point of the hyper-sphere and a random number; 3) it is very efficient and very scalable for large-size groups. Our experiments confirm these advantages and the implementation of our prototype presents very satisfactory performance for large-size groups.
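A two-dimensional toy over Z_p illustrates the geometric idea: members hold points at a common squared ``distance'' (mod p) from a published central point. The scheme itself works in higher dimension and derives the key through a PRF; the prime, dimension, and the square-root trick below (valid because p mod 4 == 3) are illustrative choices, not the paper's parameters.

```python
import random

p = 10007                  # toy prime with p % 4 == 3 (illustrative only)
rng = random.Random(7)
c = [rng.randrange(p), rng.randrange(p)]   # published central point
key = rng.randrange(p)                     # the common squared "distance" mod p

def member_point():
    """A private point whose squared distance from the centre is `key` (mod p)."""
    while True:
        a = rng.randrange(p)
        need = (key - a * a) % p           # need b with b^2 = need (mod p)
        b = pow(need, (p + 1) // 4, p)     # candidate square root since p % 4 == 3
        if (b * b) % p == need:            # about half of all `a` values succeed
            return [(c[0] + a) % p, (c[1] + b) % p]

points = [member_point() for _ in range(4)]
dists = [((x - c[0]) ** 2 + (y - c[1]) ** 2) % p for x, y in points]
print(len(set(dists)) == 1)  # True: every member computes the same value
```

As in the linear-geometry approach above, the published central point alone does not determine any member's private point, which is the intuition behind the security claim.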
Last updated:  2011-08-29
Delegatable Homomorphic Encryption with Applications to Secure Outsourcing of Computation
M. Barbosa, P. Farshim
In this work we propose a new cryptographic primitive called Delegatable Homomorphic Encryption (DHE). This allows a Trusted Authority to control/delegate the capability to evaluate circuits over encrypted data to untrusted workers/evaluators by issuing tokens. This primitive can be seen both as a public-key counterpart to Verifiable Computation, where input generation and output verification are performed by different entities, and as a generalisation of Fully Homomorphic Encryption enabling control over computations on encrypted data. Our primitive comes with a series of extra features as follows: 1) there is a one-time setup procedure for all circuits; 2) senders do not need to be aware of the functions which will be evaluated on the encrypted data, nor do they need to register keys; 3) tokens are independent of senders and receivers; and 4) receivers are able to verify the correctness of computation given short auxiliary information on the input data and the function, independently of the complexity of the computed circuit. We give a modular construction of such a DHE scheme from three components: Fully Homomorphic Encryption (FHE), Functional Encryption (FE), and a (customised) MAC. As a stepping stone, we first define Verifiable Functional Encryption (VFE), and then show how one can build a secure DHE scheme from a VFE and an FHE scheme. We also show how to build the required VFE from a standard FE together with a MAC scheme. All our results hold in the standard model. Finally, we show how one can build a verifiable computation (VC) scheme generically from a DHE. As a corollary, we get the first VC scheme which remains verifiable even if the attacker can observe verification results.