All papers in 2014 (Page 7 of 1029 results)

Last updated:  2014-06-11
Memento: How to Reconstruct your Secrets from a Single Password in a Hostile Environment
Jan Camenisch, Anja Lehmann, Anna Lysyanskaya, Gregory Neven
Passwords are inherently vulnerable to dictionary attacks, but are quite secure if guessing attempts can be slowed down, for example by an online server. If this server gets compromised, however, the attacker can again perform an offline attack. The obvious remedy is to distribute the password verification process over multiple servers, so that the password remains secure as long as no more than a threshold of the servers are compromised. By letting these servers additionally host shares of a strong secret that the user can recover upon entering the correct password, the user can perform further cryptographic tasks using this strong secret as a key, e.g., encrypting data in the cloud. Threshold password-authenticated secret sharing (TPASS) protocols provide exactly this functionality, but the only two known schemes, by Bagherzandi et al. (CCS 2011) and Camenisch et al. (CCS 2012), leak the password if a user mistakenly executes the protocol with malicious servers. Authenticating to the wrong servers is a common scenario when users are tricked by phishing attacks. We propose the first t-out-of-n TPASS protocol for any n > t that does not suffer from this shortcoming. We prove our protocol secure in the UC framework, which for the particular case of password-based protocols offers important advantages over property-based definitions, e.g., by correctly modeling typos in password attempts.
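The t-out-of-n recovery at the core of TPASS can be illustrated with plain Shamir secret sharing over a prime field; the following is a minimal sketch of that underlying idea only (not the authors' protocol), with an illustrative modulus and Python's non-cryptographic random module standing in for a proper RNG:

    import random

    P = 2**127 - 1  # Mersenne prime used as a toy field modulus (illustrative)

    def share(secret, t, n):
        # Random polynomial of degree t-1 whose constant term is the secret;
        # server i receives the evaluation at x = i.
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
                for x in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the constant term.
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    shares = share(123456789, t=2, n=3)
    assert reconstruct(shares[:2]) == 123456789

Any t shares reconstruct the strong secret while fewer reveal nothing about it; the hard part, addressed by the paper, is gating this reconstruction on a password without leaking the password to the wrong servers.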
Last updated:  2014-11-20
Dual System Encryption via Doubly Selective Security: Framework, Fully-secure Functional Encryption for Regular Languages, and More
Nuttapong Attrapadung
Dual system encryption techniques introduced by Waters in Crypto'09 are powerful approaches for constructing fully secure functional encryption (FE) for many predicates. However, for certain predicates dual system encryption techniques still seem inapplicable, and hence fully secure FE realizations for them remain an important open problem. A notable example is FE for regular languages, introduced by Waters in Crypto'12.

We propose a generic framework that abstracts the concept of dual system encryption techniques. We introduce a new primitive called a \emph{pair encoding} scheme for predicates and show that it implies fully secure functional encryption (for the same predicates) via a generic construction. Using the framework, we obtain the first fully secure schemes for functional encryption primitives of which only selectively secure schemes were known so far. Our three main instantiations include FE for regular languages, unbounded attribute-based encryption (ABE) for large universes, and ABE with constant-size ciphertexts.

Our main ingredient for overcoming the barrier of inapplicability of the dual system techniques to certain predicates is a computational security notion for the pair encoding scheme which we call \emph{doubly selective security}. This is in contrast with most of the previous dual-system-based schemes, where information-theoretic security is implicitly utilized. The doubly selective security notion resembles both selective security and its complementary notion, co-selective security; hence the name. Our framework can be regarded as a method for boosting doubly selective security (of the encoding) to full security (of the functional encryption).

Besides the generality of our framework, we remark that improved security is also obtained, as our security proof enjoys a tighter reduction than previous schemes; notably, the reduction cost does not depend on the number of all queries, but only on the number of \emph{pre-challenged} queries.
Last updated:  2014-07-31
Fast point multiplication algorithms for binary elliptic curves with and without precomputation
Thomaz Oliveira, Diego F. Aranha, Julio López, Francisco Rodríguez-Henríquez
In this paper we introduce new methods for computing constant-time variable-base point multiplications over the Galbraith-Lin-Scott (GLS) and the Koblitz families of elliptic curves. Using a left-to-right double-and-add and a right-to-left halve-and-add Montgomery ladder over a GLS curve, we present some of the fastest timings yet reported in the literature for point multiplication. In addition, we combine these two procedures to compute a multi-core protected scalar multiplication. Furthermore, we design the first regular $\tau$-adic scalar expansion for Koblitz curves. As a result, using the regular recoding approach, we set the speed record for a single constant-time point multiplication on standardized binary elliptic curves at the $128$-bit security level.
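For readers unfamiliar with ladder-style scalar multiplication, here is a toy Montgomery ladder over a small prime-field Weierstrass curve (curve parameters and affine arithmetic are my own illustration; the paper's binary-field GLS and Koblitz arithmetic is substantially more involved). The point is the fixed one-add-one-double pattern per scalar bit; a constant-time implementation would further replace the bit-dependent branch by a conditional swap:

    p, a, b = 97, 2, 3          # toy curve y^2 = x^3 + 2x + 3 over F_97
    O = None                    # point at infinity

    def add(P1, P2):
        # Affine point addition/doubling with the usual special cases.
        if P1 is None: return P2
        if P2 is None: return P1
        (x1, y1), (x2, y2) = P1, P2
        if x1 == x2 and (y1 + y2) % p == 0:
            return None
        if P1 == P2:
            lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
        x3 = (lam * lam - x1 - x2) % p
        return (x3, (lam * (x1 - x3) - y1) % p)

    def ladder(k, P, bits):
        # Montgomery ladder: one addition and one doubling per bit of k.
        R0, R1 = O, P
        for i in reversed(range(bits)):
            if (k >> i) & 1:
                R0, R1 = add(R0, R1), add(R1, R1)
            else:
                R0, R1 = add(R0, R0), add(R0, R1)
        return R0

    # Cross-check against naive repeated addition.
    Q = O
    for _ in range(5):
        Q = add(Q, (3, 6))
    assert ladder(5, (3, 6), bits=3) == Q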
Last updated:  2016-02-10
Towards Optimally Efficient Secret-Key Authentication from PRG
Ivan Damgård, Sunoo Park
We propose a new approach to the construction of secret-key authentication protocols making black-box use of any pseudorandom generator (PRG). Our authentication protocols require only two messages, have perfect completeness, and achieve concurrent man-in-the-middle security. Finally, when based on a sufficiently efficient PRG, our protocol has (amortised) complexity $O(n)$ bit operations where $n$ is the security parameter. To the best of our knowledge, this construction is the first with linear complexity. We achieve this at the cost of having the prover (but not the verifier) keep a small amount of state. A variant of our construction, based on a stronger security notion for the PRG, is secure even if the adversary is able to reset the prover an unbounded number of times. A practical analysis of our protocol shows our prover computation time compares favorably against a simple AES-based protocol.
Last updated:  2014-06-06
Note of Multidimensional MITM Attack on 25-Round TWINE-128
Long Wen, Meiqin Wang, Andrey Bogdanov, Huaifeng Chen
TWINE is a lightweight block cipher proposed at SAC 2012 by Suzaki et al. TWINE operates on 64-bit blocks and supports 80- or 128-bit keys, denoted TWINE-80 and TWINE-128 respectively. TWINE has attracted some attention since its publication, and its security has been analyzed against several cryptanalytic techniques in both single-key and related-key settings. In the single-key setting, the best attack so far is reported by Boztaş et al. at LightSec'13, where a splice-and-cut attack on 21-round TWINE-128 and a multidimensional meet-in-the-middle (MITM) attack on 25-round TWINE-128 are presented. Yet, as we understand it, the evaluation of the time complexity of the multidimensional MITM attack on 25-round TWINE-128 is questionable. We describe the attack in detail and explain our concerns about its time complexity. It turns out that the multidimensional MITM attack on 25-round TWINE-128 may have a time complexity higher than exhaustive search.
Last updated:  2014-06-06
Constructing Abelian Surfaces for Cryptography via Rosenhain Invariants
Craig Costello, Alyson Deines-Schartz, Kristin Lauter, Tonghai Yang
This paper presents an algorithm to construct cryptographically strong genus 2 curves and their Kummer surfaces via Rosenhain invariants and related Kummer parameters. The most common version of the complex multiplication (CM) algorithm for constructing cryptographic curves in genus 2 relies on the well-studied Igusa invariants and Mestre's algorithm for reconstructing the curve. On the other hand, the Rosenhain invariants typically have much smaller height, so computing them requires less precision, and in addition, the Rosenhain model for the curve can be written down directly given the Rosenhain invariants. Similarly, the parameters for a Kummer surface can be expressed directly in terms of rational functions of theta constants. CM-values of these functions are algebraic numbers, and when computed to high enough precision, LLL can recognize their minimal polynomials. Motivated by fast cryptography on Kummer surfaces, we investigate a variant of the CM method for computing cryptographically strong Rosenhain models of curves (as well as their associated Kummer surfaces) and use it to generate several example curves at different security levels that are suitable for use in cryptography.
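The recognition step mentioned above, recovering the minimal polynomial of an algebraic value from a high-precision numerical approximation, can be reproduced at toy scale with mpmath's integer-relation routines (these use PSLQ rather than LLL, but serve the same recognition role); here 1 + sqrt(2) stands in for a CM-value:

    from mpmath import mp, sqrt, findpoly

    mp.dps = 50                    # working precision in decimal digits
    alpha = 1 + sqrt(2)            # stand-in for a CM-value computed to high precision
    print(findpoly(alpha, 2))      # coefficients of x^2 - 2x - 1, its minimal polynomial

The higher the required precision and degree, the harder the recognition, which is why the smaller height of Rosenhain invariants translates into less precision needed in the CM computation.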
Last updated:  2014-06-06
The Hash Function "Fugue"
Shai Halevi, William E. Hall, Charanjit S. Jutla
We describe Fugue, a hash function supporting inputs of length up to 2^{64}-1 bits and hash outputs of length up to 512 bits. Notably, Fugue is not based on a compression function. Rather, it is directly a hash function that supports variable-length inputs. The starting point for Fugue is the hash function Grindahl, but it extends that design to protect against the kinds of attacks that were developed for Grindahl, as well as earlier hash functions like SHA-1. A key enhancement is the design of a much stronger round function which replaces the AES round function of Grindahl, using better codes (over longer words) than the AES 4x4 MDS matrix. Also, Fugue makes judicious use of this new round function on a much larger internal state. The design of Fugue is proof-oriented: the various components are designed in such a way as to allow proofs of security, and yet be efficient to implement. As a result, we can prove that current attack methods cannot find collisions in Fugue any faster than the trivial birthday attack. Although the proof is computer assisted, the assistance is limited to computing ranks of various matrices.
Last updated:  2014-06-06
System-level non-interference for constant-time cryptography
Gilles Barthe, Gustavo Betarte, Juan Diego Campo, Carlos Luna, David Pichardie
Cache-based attacks are a class of side-channel attacks that are particularly effective in virtualized or cloud-based environments, where they have been used to recover secret keys from cryptographic implementations. One common approach to thwart cache-based attacks is to use \emph{constant-time} implementations, i.e., implementations that do not branch on secrets and do not perform memory accesses that depend on secrets. However, there is no rigorous proof that constant-time implementations are protected against concurrent cache attacks in virtualization platforms with shared cache; moreover, many prominent implementations are not constant-time. An alternative approach is to rely on system-level mechanisms. One recent such mechanism is stealth memory, which provisions a small amount of private cache for programs to carry out potentially leaking computations securely. Stealth memory induces a weak form of constant-time, called \emph{S-constant-time}, which encompasses some widely used cryptographic implementations. However, there is no rigorous analysis of stealth memory and S-constant-time, and no tool support for checking if applications are S-constant-time. We propose a new information-flow analysis that checks if an x86 application executes in constant-time, or in S-constant-time. Moreover, we prove that constant-time (resp. S-constant-time) programs do not leak confidential information through the cache to other operating systems executing concurrently on virtualization platforms (resp. platforms supporting stealth memory). The soundness proofs are based on new theorems of independent interest, including isolation theorems for virtualization platforms (resp. platforms supporting stealth memory), and proofs that constant-time implementations (resp. S-constant-time implementations) are non-interfering with respect to a strict information flow policy which disallows that control flow and memory accesses depend on secrets. We formalize our results using the \textsf{Coq} proof assistant and we demonstrate the effectiveness of our analyses on cryptographic implementations, including PolarSSL AES, DES and RC4, SHA256 and Salsa20.
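As a concrete illustration of the constant-time discipline (my example, not taken from the paper): a secret-dependent branch versus a branchless, mask-based selection. Python integers are of course not actually constant-time; this only shows the coding pattern whose x86 analogue the analysis verifies:

    def select_branching(bit, x, y):
        # NOT constant-time: control flow depends on the secret bit.
        return x if bit else y

    def select_ct(bit, x, y, width=32):
        # Constant-time pattern: arithmetic masking, no secret-dependent branch.
        full = (1 << width) - 1
        mask = (-bit) & full       # all-zeros or all-ones
        return (x & mask) | (y & ~mask & full)

    assert select_ct(1, 0xAAAA, 0x5555) == 0xAAAA
    assert select_ct(0, 0xAAAA, 0x5555) == 0x5555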
Last updated:  2014-07-26
FNR : Arbitrary length small domain block cipher proposal
Sashank Dara, Scott Fluhrer
We propose FNR, a practical flexible-length (or arbitrary-length) small-domain block cipher. FNR stands for Flexible Naor and Reingold. It can encipher small-domain data formats like IPv4 addresses, port numbers, MAC addresses, credit card numbers, or any short random strings, while preserving their input length. In addition to the classic Feistel networks, Naor and Reingold propose the use of pairwise independent permutation (PwIP) functions based on the Galois field GF(2^n). Instead, we propose the use of random N x N invertible matrices over GF(2).
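One way to realize the invertible-matrix component is rejection sampling with a GF(2) rank check; the sketch below (choices mine, not the FNR specification) stores each row of the matrix as an N-bit integer:

    import random

    def is_invertible_gf2(rows, n):
        # Gaussian elimination over GF(2); rows are n-bit integers.
        rows = rows[:]
        for col in range(n):
            pivot = next((r for r in range(col, n) if (rows[r] >> (n - 1 - col)) & 1), None)
            if pivot is None:
                return False
            rows[col], rows[pivot] = rows[pivot], rows[col]
            for r in range(n):
                if r != col and (rows[r] >> (n - 1 - col)) & 1:
                    rows[r] ^= rows[col]
        return True

    def random_invertible_gf2(n):
        # A uniform random binary matrix is invertible with probability ~0.29,
        # so a few rejection rounds suffice.
        while True:
            rows = [random.getrandbits(n) for _ in range(n)]
            if is_invertible_gf2(rows, n):
                return rows

    def apply_gf2(rows, x, n):
        # y_i = <row_i, x> over GF(2), i.e. the parity of the bitwise AND.
        return sum((bin(r & x).count("1") & 1) << (n - 1 - i)
                   for i, r in enumerate(rows))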
Last updated:  2014-06-05
Bounded Fully Homomorphic Signature Schemes
Xiang Xie, Rui Xue
Homomorphic signatures enable anyone to publicly perform computations on signed data and produce a compact tag to authenticate the results. In this paper, we construct two bounded fully homomorphic signature schemes, as follows.

- For any two polynomials $d=d(\lambda)$, $s=s(\lambda)$, where $\lambda$ is the security parameter, our first scheme can evaluate any circuit on the signatures, as long as the depth and size of the circuit are bounded by $d$ and $s$, respectively. The construction relies on indistinguishability obfuscation and injective (or polynomially bounded pre-image size) one-way functions.

- The second scheme, removing the restriction on the size of the circuits, is an extension of the first one, with succinct verification and evaluation keys. More specifically, for an a-priori polynomial $d=d(\lambda)$, the scheme allows evaluation of any circuit on the signatures, as long as the depth of the circuit is bounded by $d$. This scheme is based on differing-inputs obfuscation and collision-resistant hash functions and relies on a technique called recording hash of circuits.

Both schemes enjoy the composition property: outputs of previously derived signatures can be re-used as inputs for new computations. The length of derived signatures in both schemes is independent of the size of the data set. Moreover, both constructions satisfy a strong privacy notion we call \emph{semi-strong context hiding}, which requires that the derived signatures of evaluating any circuit on the signatures of two data sets are \emph{identical} as long as the evaluations of the circuit on these two data sets are the same.
Last updated:  2014-06-05
FFS Factory: Adapting Coppersmith's "Factorization Factory" to the Function Field Sieve
Jérémie Detrey
In 1993, Coppersmith introduced the "factorization factory" approach as a means to speed up the Number Field Sieve algorithm (NFS) when factoring batches of integers of similar size: at the expense of a large precomputation whose cost is amortized when considering sufficiently many integers to factor, the complexity of each individual factorization can then be lowered. We suggest here to extend this idea to the computation of discrete logarithms in finite fields of small characteristic using the Function Field Sieve (FFS), thus referring to this approach as the "FFS factory". In this paper, the benefits of the proposed technique are established thanks to both a theoretical complexity analysis along with a practical experiment in which we solved the discrete logarithm problem in fifty different binary fields of sizes ranging from 601 to 699 bits.
Last updated:  2014-06-05
A Simple Recursive Tree Oblivious RAM
Benny Pinkas, Tzachy Reinman
Oblivious RAM (ORAM) has received increasing attention in the past few years. The goal of oblivious RAM is to enable a client that can locally store only a small (preferably constant) amount of data to store N data items remotely, and to access them while hiding the identities of the items being accessed. Most of the earlier ORAM constructions were based on the hierarchical data structure of Goldreich and Ostrovsky. Shi et al. introduced a binary tree ORAM, which is simpler and more efficient than the classical hierarchical ORAM; Gentry et al. have improved that scheme. In this work, we improve these two constructions. Our scheme asymptotically outperforms all previous tree-based ORAM schemes that have constant client memory, with an overhead of O(log^{2+eps}(N) * log^2(log(N))) per operation for an O(N)-storage server. Although the best known asymptotic result for ORAM is due to the hierarchical structure of Kushilevitz et al. (O(log^2(N)/log(log(N)))), tree-based ORAM constructions are much simpler.
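For intuition about tree-based schemes: every logical access touches exactly the buckets on one root-to-leaf path of the tree. A small helper (illustrative only, not the paper's construction) that computes those bucket indices under heap ordering:

    def path_to_leaf(leaf, depth):
        # Heap indexing: root = 0, children of node i are 2i+1 and 2i+2;
        # leaf labels 0..2^depth-1 occupy nodes 2^depth-1 .. 2^{depth+1}-2.
        node = (1 << depth) - 1 + leaf
        path = []
        while True:
            path.append(node)
            if node == 0:
                break
            node = (node - 1) // 2
        return path[::-1]          # root first

    print(path_to_leaf(5, 3))      # [0, 2, 5, 12]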
Last updated:  2014-06-05
Using Random Error Correcting Codes in Near-Collision Attacks on Generic Hash-Functions
Inna Polak, Adi Shamir
In this paper we consider the problem of finding a near-collision with Hamming distance bounded by $r$ in a generic cryptographic hash function $h$ whose outputs can be modeled as random $n$-bit strings. In 2011, Lamberger suggested a modified version of Pollard's rho method which computes a chain of values by alternately applying the hash function $h$ and an error correcting code $e$ to a random starting value $x_{0}$ until it cycles. This turns some (but not all) of the near-collisions in $h$ into full collisions in $f=e\circ h$, which are easy to find. In 2012, Leurent improved Lamberger's memoryless algorithm by using any available amount of memory to store the endpoints of multiple chains of $f$ values, and using Van Oorschot and Wiener's algorithm to find many full collisions in $f$, hoping that one of them will be an $r$-near-collision in $h$. This is currently the best known time/memory tradeoff algorithm for the problem. The efficiency of both Lamberger's and Leurent's algorithms depends on the quality of their error-correcting code. Since they have to apply error correction to \emph{any} bit string, they want to use perfect codes, but all the known constructions of such codes can correct only $1$ or $3$ errors. To deal with a larger number of errors, they recommend using a concatenation of many Hamming codes, each capable of correcting a single error in a particular subset of the bits, along with some projections. As we show in this paper, this is a suboptimal choice, which can be considerably improved by using randomly chosen linear codes instead of Hamming codes and storing a precomputed lookup table to make the error correction process efficient. We show both theoretically and experimentally that this is a better way to utilize the available memory, instead of devoting all the memory to the storage of chain endpoints. Compared to Leurent's algorithm, we demonstrate an improvement ratio which grows with the size of the problem. In particular, we experimentally verified an improvement ratio of about $3$ in a small example with $n=160$ and $r=33$ which we implemented on a single PC, and mathematically predicted an improvement ratio of about $730$ in a large example with $n=1024$ and $r=100$, using $2^{40}$ memory.
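The replacement advocated above, a random linear code decoded through a precomputed syndrome table, looks like this at toy scale (parameters mine; real instances choose the code length and co-dimension to fit the available memory):

    import random
    from itertools import combinations

    n, k = 16, 8                   # toy code parameters (length, dimension)
    m = n - k
    H = [random.getrandbits(n) for _ in range(m)]   # random parity-check matrix

    def syndrome(y):
        # m-bit syndrome: parities of H's rows against y.
        return sum((bin(h & y).count("1") & 1) << i for i, h in enumerate(H))

    # Precompute coset leaders: the lowest-weight error for each syndrome.
    table = {0: 0}
    for w in range(1, n + 1):
        for pos in combinations(range(n), w):
            e = sum(1 << p for p in pos)
            table.setdefault(syndrome(e), e)
        if len(table) == 1 << m:
            break

    def correct(y):
        # Map an arbitrary n-bit string to a nearby codeword in one lookup.
        return y ^ table[syndrome(y)]

Enumerating errors by increasing weight guarantees that each stored leader has minimum weight in its coset, so correct() always moves to one of the closest codewords.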
Last updated:  2015-01-28
Adaptive Security of Constrained PRFs
Georg Fuchsbauer, Momchil Konstantinov, Krzysztof Pietrzak, Vanishree Rao
Constrained pseudorandom functions have recently been introduced independently by Boneh and Waters (Asiacrypt'13), Kiayias et al. (CCS'13), and Boyle et al. (PKC'14). In a standard pseudorandom function (PRF) a key $K$ is used to evaluate the PRF on all inputs in the domain. Constrained PRFs additionally offer the functionality to delegate ``constrained'' keys $K_S$ which allow evaluation of the PRF only on a subset $S$ of the domain. The three above-mentioned papers all show that the classical GGM construction (J.ACM'86) of a PRF from a pseudorandom generator (PRG) directly yields a constrained PRF where one can compute constrained keys to evaluate the PRF on all inputs with a given prefix. This constrained PRF has already found many interesting applications. Unfortunately, the existing security proofs only show selective security (by a reduction to the security of the underlying PRG). To achieve full security, one has to use complexity leveraging, which loses an exponential factor $2^N$ in security, where $N$ is the input length. The first contribution of this paper is a new reduction that only loses a quasipolynomial factor $q^{\log N}$, where $q$ is the number of adversarial queries. For this we develop a new proof technique which constructs a distinguisher by interleaving simple guessing steps and hybrid arguments a small number of times. This approach might also be of interest in other contexts where currently the only technique to achieve full security is complexity leveraging. Our second contribution is concerned with another constrained PRF, due to Boneh and Waters, which allows for constrained keys for the more general class of bit-fixing functions. Their security proof also suffers from a $2^N$ loss, which we show is inherent. We construct a meta-reduction which shows that any ``simple'' reduction of full security from a non-interactive hardness assumption must incur an exponential security loss.
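The GGM prefix-constrained PRF under discussion is short enough to state in code; a minimal sketch with SHA-256 standing in for the length-doubling PRG (my instantiation, purely for illustration):

    import hashlib

    def prg(seed):
        # Length-doubling PRG from SHA-256 (illustrative instantiation).
        return (hashlib.sha256(seed + b"\x00").digest(),
                hashlib.sha256(seed + b"\x01").digest())

    def eval_prf(key, bits):
        # GGM: walk the binary tree from the root, picking one PRG half per bit.
        s = key
        for b in bits:
            s = prg(s)[b]
        return s

    def constrain(key, prefix):
        # Constrained key for all inputs starting with `prefix`:
        # simply the GGM tree node sitting at that prefix.
        return eval_prf(key, prefix)

    K = b"\x00" * 32
    x = [1, 0, 1, 1]
    K_c = constrain(K, x[:2])
    assert eval_prf(K_c, x[2:]) == eval_prf(K, x)

A constrained key for a prefix is just the GGM tree node at that prefix, which is why the construction delegates exactly the inputs below it.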
Last updated:  2014-06-05
Virtual Proofs of Reality
Ulrich Rührmair
In this paper, we discuss the question how physical statements can be proven remotely over digital communication channels, but without using classical secret keys, and without assuming tamper-resistant and trusted measurement hardware in the location of the prover. Examples for the considered physical statements are: (i) “the temperature of a certain object is X °C”, (ii) “two certain objects are positioned at distance X”, or (iii) “a certain object has been irreversibly altered or destroyed”. For lack of an established name, we would like to call the corresponding security protocols “virtual proofs of reality” (VPs). While a host of variants seems conceivable, this paper focuses on VPs in which the verifier has handed over one or more specific physical objects O_i to the prover at some point prior to the VP. These “witness objects” assist the prover during the proof, but shall not contain classical digital keys nor be assumed tamper-resistant in the classical sense. The prover is allowed to open, inspect and alter these objects in our adversarial model, only being limited by current technology, while he shall still be unable to prove false claims to the verifier. In order to illustrate our concept, we give example protocols built on temperature sensitive integrated circuits, disordered optical scattering media, and quantum systems. These protocols prove the temperature, destruction/modification, or relative position of witness objects in the prover's location. Full experimental realizations of these schemes are beyond the scope of this paper. But the protocols utilize established technologies from the areas of physical unclonable functions and quantum cryptography, and hence appear plausible also without such proof. Finally, we also discuss potential advancements of our method in theory, for example “public virtual proofs” that function without exchanging witness objects O_i between the verifier and the prover. Our work touches upon and partly extends several established cryptographic and security concepts, including physical unclonable functions, quantum cryptography, and interactive proof systems.
Last updated:  2014-06-04
A Security Proof of KCDSA using an extended Random Oracle Model
Vikram Singh
We describe a tight security reduction to the discrete logarithm problem for KCDSA under an extended Random Oracle Model. This is achieved by generalising the signature scheme and producing a security proof for the generalised scheme. We require the application of Randomized Hashing. We also introduce a Challenger to the Random Oracle Model, who is external to the Simulator and Adversary. The Challenger provides oracle returns for one hash function, and challenges which have a low probability of being met. On presentation of a forged signature the Simulator either identifies an edge case which allows solving of a challenge, or solves the discrete logarithm problem. Hence the tight reduction.
Last updated:  2014-06-04
On the Cost of Lazy Engineering for Masked Software Implementations
Josep Balasch, Benedikt Gierlichs, Vincent Grosso, Oscar Reparaz, François-Xavier Standaert
Masking is one of the most popular countermeasures to mitigate side-channel analysis. Yet, its deployment in actual cryptographic devices is well known to be challenging, since designers have to ensure that the leakage corresponding to different shares is independent. Several works have shown that such an independent leakage assumption may be contradicted in practice, because of physical effects such as ``glitches'' or ``transition-based'' leakages. As a result, implementing masking securely can be a time-consuming engineering problem. This is in strong contrast with recent and promising approaches for the automatic insertion of countermeasures exploiting compilers, that aim to limit the development time of side-channel resistant software. Motivated by this contrast, we question what can be hoped for from these approaches - or more generally for masked software implementations based on careless assembly generation. For this purpose, our first contribution is a simple reduction from security proofs obtained in a (usual but not always realistic) model where leakages depend on the intermediate variables manipulated by the target device, to security proofs in a (more realistic) model where the transitions between these intermediate variables are leaked. We show that the cost of moving from one context to the other implies a division of the security order by two for masking schemes. Next, our second and main contribution is to provide an exhaustive empirical validation of this reduction, based on two microcontrollers, several (handwritten and compiler-based) ways of generating assembly codes, with and without ``recycling'' the randomness used for sharing. These experiments confirm the relevance of our analysis, and therefore quantify the cost of lazy engineering for masking.
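The value-versus-transition distinction can be made concrete with a toy Hamming-weight model (my illustration, not the paper's formal reduction): with two Boolean shares, the weight of each share is independent of the secret, yet the weight of the transition between the shares equals the weight of the secret itself:

    import random

    def hw(x):
        return bin(x).count("1")

    secret = 0x3A
    s1 = random.getrandbits(8)
    s2 = s1 ^ secret               # 2-share Boolean masking: s1 ^ s2 == secret

    # Value leakage: hw(s1) and hw(s2) are each uniform, independent of secret.
    # Transition leakage, e.g. when s2 overwrites s1 in a register:
    assert hw(s1 ^ s2) == hw(secret)   # a first-order leak

Leaking the weight of s1 xor s2 is exactly what a register overwrite produces in the transition-based model, which is why the effective security order of a masking scheme is halved there.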
Last updated:  2015-03-05
Efficient Selection of Time Samples for Higher-Order DPA with Projection Pursuits
François Durvaux, François-Xavier Standaert, Nicolas Veyrat-Charvillon, Jean-Baptiste Mairy, Yves Deville
The selection of points-of-interest in leakage traces is a frequently neglected problem in the side-channel literature. However, it can become the bottleneck of practical adversaries/evaluators as the size of the measurement traces increases, especially in the challenging context of masked implementations, where only a combination of multiple shares reveals information in higher-order statistical moments. In this paper, we describe new (black box) tools for efficiently dealing with this problem. The proposed techniques exploit projection pursuits and optimized local search algorithms, work with minimum memory requirements and practical time complexity. We validate them with two case-studies of unprotected and first-order masked implementations in an 8-bit device, the latter one being hard to analyze with previously known methods.
Last updated:  2014-06-04
Combining Leakage-Resilient PRFs and Shuffling (Towards Bounded Security for Small Embedded Devices)
Vincent Grosso, Romain Poussier, François-Xavier Standaert, Lubos Gaspar
Combining countermeasures is usually assumed to be the best way to protect embedded devices against side-channel attacks. These combinations are at least expected to increase the number of measurements of successful attacks to some reasonable extent, and at best to guarantee a bounded time complexity independent of the number of measurements. This latter guarantee, only possible in the context of leakage-resilient constructions, has so far been reached only for stateful (pseudo-random generator) constructions or for large parallel implementations. In this paper, we describe a first proposal of a stateless (pseudo-random function) construction, for which we have strong hints that security-bounded implementations are reachable under the constraints of small embedded devices. Our proposal essentially combines the well-known shuffling countermeasure with a tweaked pseudo-random function introduced at CHES 2012. We first detail its performance. Then we analyze it against standard differential power analysis and discuss the different parameters influencing its security bounds. Finally, we put forward that its implementation in 8-bit microcontrollers can provide a better security vs. performance tradeoff than state-of-the-art (combinations of) countermeasures.
Last updated:  2014-06-04
Soft Analytical Side-Channel Attacks
Nicolas Veyrat-Charvillon, Benoît Gérard, François-Xavier Standaert
In this paper, we introduce a new approach to side-channel key recovery, that combines the low time/memory complexity and noise tolerance of standard (divide and conquer) differential power analysis with the optimal data complexity of algebraic side-channel attacks. Our fundamental contribution for this purpose is to change the way of expressing the problem, from the system of equations used in algebraic attacks to a code, essentially inspired by low density parity check codes. We then show that such codes can be efficiently decoded, taking advantage of the sparsity of the information corresponding to intermediate variables in actual leakage traces. The resulting soft analytical side-channel attacks work under the same profiling assumptions as template attacks, and directly exploit the vectors of probabilities produced by these attacks. As a result, we bridge the gap between popular side-channel distinguishers based on simple statistical tests and previous approaches to analytical side-channel attacks that could only exploit hard information so far.
Last updated:  2016-09-23
Moments-Correlating DPA
Amir Moradi, François-Xavier Standaert
We generalize correlation-enhanced power analysis collision attacks into moments-correlating DPA. The resulting distinguisher is applicable to the profiled and non-profiled (collision) settings and is able to exploit information lying in any statistical moment. It also benefits from a simple rule-of-thumb to estimate its data complexity. Experimental results show that such a tool allows one to answer with confidence some important questions regarding the design of side-channel countermeasures (e.g. what is the most informative statistical moment in the leakages of a threshold implementation). We further argue that moments-correlating DPA is a natural candidate for leakage detection tests, enjoying the simplicity of correlation power analysis and advanced features for the evaluation of higher-order attacks with an easy-to-compute confidence level.
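Schematically, the profiled variant can be sketched as follows (a simplified numpy rendering under assumptions of my own - one leakage sample per trace and every intermediate value observed during profiling; the paper also treats the non-profiled collision setting):

    import numpy as np

    def preprocess(traces, order):
        # Center and exponentiate so order-th central moments become means.
        return (traces - traces.mean()) ** order if order > 1 else traces

    def profile_moments(traces, values, order, n_vals=256):
        # Per-value moment estimates, used as the correlation model.
        t = preprocess(traces, order)
        return np.array([t[values == v].mean() for v in range(n_vals)])

    def moments_correlating_dpa(model, traces, hyp_values_per_key, order):
        # Correlate profiled moments of hypothesized values with fresh traces.
        t = preprocess(traces, order)
        scores = [abs(np.corrcoef(model[hv], t)[0, 1]) for hv in hyp_values_per_key]
        return int(np.argmax(scores))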
Last updated:  2016-02-22
Bootstrapping BGV Ciphertexts with a Wider Choice of p and q
Emmanuela Orsini, Joop van de Pol, Nigel P. Smart
We describe a method to bootstrap a packed BGV ciphertext which does not depend (as much) on any special properties of the plaintext and ciphertext moduli. Prior “efficient” methods such as that of Gentry et al. (PKC 2012) required a ciphertext modulus q which was close to a power of the plaintext modulus p. This enables our method to be applied in a larger number of situations. Our basic bootstrapping technique makes use of a representation of the group (Z_q, +) by polynomials over the finite field F_p, followed by polynomial interpolation of the reduction mod p map over the coefficients of the algebraic group. This technique is then extended to the full BGV packed ciphertext space, using a method whose depth depends only logarithmically on the number of packed elements. This method may be of interest as an alternative to the method of Alperin-Sheriff and Peikert (CRYPTO 2013). To aid efficiency we utilize the ring/field switching technique of Gentry et al. (SCN 2012, JCS 2013).
Last updated:  2014-06-03
Towards Symmetric Functional Encryption for Regular Languages with Predicate Privacy
Fu-Kuo Tseng, Rong-Jaye Chen, Bao-Shuh Paul Lin
We present a symmetric-key predicate-only functional encryption system, SP-FE, which supports functionality for regular languages described by deterministic finite automata. In SP-FE, a data owner can encrypt a string of symbols as encrypted symbols for matching. Later, the data owner can generate predicate tokens for the transitions of a deterministic finite automaton (DFA). The server with these tokens can decrypt a sequence of encrypted symbols correctly and transition from one state to another accordingly. If the final state belongs to the set of accept states, the server performs the assigned operations or returns the corresponding encrypted data. We have proven that SP-FE preserves both plaintext privacy and predicate privacy through security analysis and security games. However, to achieve predicate privacy, we put bounds on the length of a string and the number of states of a DFA. Due to these restrictions, SP-FE can only capture finite languages. Finally, we present the performance analysis of SP-FE and mention possible future work.
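In the clear, the functionality being protected is ordinary DFA acceptance over a symbol sequence; for reference (plain Python, no encryption):

    def dfa_accepts(transitions, start, accept, symbols):
        # transitions: dict mapping (state, symbol) -> next state
        state = start
        for sym in symbols:
            state = transitions[(state, sym)]
        return state in accept

    # DFA over {a, b} accepting strings with an even number of 'a':
    T = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1}
    assert dfa_accepts(T, 0, {0}, "abba")

SP-FE's predicate tokens play the role of the transitions dictionary, consumed one encrypted symbol at a time.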
Last updated:  2014-06-02
New Generic Attacks Against Hash-based MACs
Gaëtan Leurent, Thomas Peyrin, Lei Wang
In this paper we study the security of hash-based MAC algorithms (such as HMAC and NMAC) above the birthday bound. Up to the birthday bound, HMAC and NMAC are proven to be secure under reasonable assumptions on the hash function. On the other hand, if an $n$-bit MAC is built from a hash function with an $l$-bit state ($l \ge n$), there is a well-known existential forgery attack with complexity $2^{l/2}$. However, the remaining security after $2^{l/2}$ computations is not well understood. In particular it is widely assumed that if the underlying hash function is sound, then a generic universal forgery attack should still require $2^{n}$ computations, and some distinguishing (e.g. distinguishing-H but not distinguishing-R) and state-recovery attacks should still require $2^{l}$ (or $2^k$ if $k < l$) computations. In this work, we show that above the birthday bound, hash-based MACs offer significantly less security than previously believed. Our main result is a generic distinguishing-H and state-recovery attack against hash-based MACs with a complexity of only $\tilde O(2^{l/2})$. In addition, we show a key-recovery attack with complexity $\tilde O(2^{3l/4})$ against HMAC used with a hash function that has an internal checksum, such as GOST. This surprising result shows that the use of a checksum might actually weaken a hash function when used in a MAC. We stress that our attacks are generic, and they are in fact more efficient than some previous attacks proposed on MACs instantiated with concrete hash functions. We use techniques similar to the cycle-detection technique proposed by Peyrin et al. at Asiacrypt 2012 to attack HMAC in the related-key model. However, our attacks work in the single-key model for both HMAC and NMAC, and without restriction on the key size.
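The cycle-detection machinery underlying such attacks is classical; a small sketch (mine, not the paper's algorithm) that recovers the tail length and cycle length of a starting point under iteration of a random-looking function:

    import hashlib

    def cycle_structure(f, x0):
        # Floyd's algorithm: returns (tail_length, cycle_length) of x0 under f.
        tortoise, hare = f(x0), f(f(x0))
        while tortoise != hare:
            tortoise, hare = f(tortoise), f(f(hare))
        mu, tortoise = 0, x0           # tail: distance from x0 to the cycle
        while tortoise != hare:
            tortoise, hare = f(tortoise), f(hare)
            mu += 1
        lam, hare = 1, f(tortoise)     # cycle length
        while tortoise != hare:
            hare, lam = f(hare), lam + 1
        return mu, lam

    # Iterate a truncated hash as a stand-in for the MAC's compression function:
    f = lambda x: int.from_bytes(hashlib.sha256(x.to_bytes(4, "big")).digest()[:4], "big")
    print(cycle_structure(f, 0))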
Last updated:  2014-11-14
Indistinguishability Obfuscation versus Multi-Bit Point Obfuscation with Auxiliary Input
Chris Brzuska, Arno Mittelbach
In a recent celebrated breakthrough, Garg et al. (FOCS 2013) gave the first candidate for so-called indistinguishability obfuscation (iO), thereby reviving the interest in general-purpose obfuscation. Since then, iO has been used to advance numerous sub-areas of cryptography. While indistinguishability obfuscation is a general purpose obfuscation scheme, several obfuscators for specific functionalities have been considered. In particular, special attention has been given to the obfuscation of so-called point functions that return zero everywhere, except for a single point $x$. A strong variant is point obfuscation with auxiliary input (AIPO), which allows an adversary to learn some non-trivial auxiliary information about the obfuscated point $x$ (Goldwasser, Tauman-Kalai; FOCS, 2005). Multi-bit point functions are a strengthening of point functions, where on $x$, the point function returns a string $m$ instead of $1$. Multi-bit point functions with auxiliary input (MB-AIPO) have been constructed from composable AIPO by Canetti and Dakdouk (Eurocrypt 2008) and have been used by Matsuda and Hanaoka (TCC 2014) to construct CCA-secure public-key encryption schemes and by Bitansky and Paneth (TCC 2012) to construct three-round weak zero-knowledge protocols for NP. In this paper we present both positive and negative results. We show that if indistinguishability obfuscation exists, then MB-AIPO does not. Towards this goal, we build on techniques by Brzuska, Farshim and Mittelbach (Crypto 2014) who use indistinguishability obfuscation as a means to attack a large class of assumptions from the Universal Computational Extractor framework (Bellare, Hoang and Keelveedhi; Crypto 2013). On the positive side we introduce a weak version of MB-AIPO which we deem to be outside the reach of our impossibility result. We build this weak version of MB-AIPO based on iO and AIPO and prove that it suffices to construct a public-key encryption scheme that is secure even if the adversary can learn an arbitrary leakage function of the secret key, as long as the secret key remains computationally hidden. Thereby, we strengthen a result by Canetti et al. (TCC 2010) that showed a similar connection in the symmetric-key setting.
Last updated:  2014-06-02
Large-Scale Secure Computation
Elette Boyle, Kai-Min Chung, Rafael Pass
We are interested in secure computation protocols in settings where the number of parties is huge and their data even larger. Assuming the existence of a single-use broadcast channel (per player), we demonstrate statistically secure computation protocols for computing (multiple) arbitrary dynamic RAM programs over parties' inputs, handling a (1/3 - eps) fraction of static corruptions, while preserving, up to polylogarithmic factors, the computation and memory complexities of the RAM program. Additionally, our protocol is load-balanced and has polylogarithmic communication locality.
Last updated:  2014-06-05
Generic Universal Forgery Attack on Iterative Hash-based MACs
Thomas Peyrin, Lei Wang
In this article, we study the security of iterative hash-based MACs, such as HMAC or NMAC, with regards to universal forgery attacks. Leveraging recent advances in the analysis of functional graphs built from the iteration of HMAC or NMAC, we exhibit the very first generic universal forgery attack against hash-based MACs. In particular, our work implies that the universal forgery resistance of an n-bit output HMAC construction is not 2^n queries, as long believed by the community. The techniques we introduce extend the previous functional-graph-based attacks that only took into account the cycle structure or the collision probability: we show that one can extract much more meaningful secret information by also analyzing the distance of a node from the cycle of its component in the functional graph.
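The distance-from-cycle quantity exploited here can be tabulated exhaustively on a toy functional graph (my illustration; the attack must estimate it online for the real, exponentially large graph):

    import random

    def distances_to_cycle(n, seed=1):
        # For a random function on {0..n-1}, compute every node's distance
        # to the cycle of its component.
        random.seed(seed)
        f = [random.randrange(n) for _ in range(n)]
        dist = [None] * n
        for start in range(n):
            path, x = [], start
            while dist[x] is None and x not in path:
                path.append(x)
                x = f[x]
            if dist[x] is None:            # closed a brand-new cycle
                i = path.index(x)
                for y in path[i:]:
                    dist[y] = 0
                path = path[:i]
            for d, y in enumerate(reversed(path), dist[x] + 1):
                dist[y] = d
        return dist

    print(distances_to_cycle(20))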
Last updated:  2014-06-02
On the Existence of Extractable One-Way Functions
Nir Bitansky, Ran Canetti, Omer Paneth, Alon Rosen
A function f is extractable if it is possible to algorithmically ``extract,'' from any adversarial program that outputs a value y in the image of f, a preimage of y. When combined with hardness properties such as one-wayness or collision-resistance, extractability has proven to be a powerful tool. However, so far, extractability has not been explicitly shown. Instead, it has only been considered as a non-standard *knowledge assumption* on certain functions. We make progress in two directions in the study of the existence of extractable one-way functions (EOWFs). On the negative side, we show that if there exist indistinguishability obfuscators for a certain class of circuits then there do not exist EOWFs where extraction works for any adversarial program with auxiliary-input of unbounded polynomial length. On the positive side, for adversarial programs with bounded auxiliary-input (and unbounded polynomial running time), we give the first construction of EOWFs with an explicit extraction procedure, based on relatively standard assumptions (e.g., sub-exponential hardness of Learning with Errors). We then use these functions to construct the first 2-message zero-knowledge arguments and 3-message zero-knowledge arguments of knowledge, against the same class of adversarial verifiers, from essentially the same assumptions.
Last updated:  2014-06-02
Software implementation of an Attribute-Based Encryption scheme
Eric Zavattoni, Luis J. Dominguez Perez, Shigeo Mitsunari, Ana H. Sánchez-Ramírez, Tadanori Teruya, Francisco Rodríguez-Henríquez
A ciphertext-policy attribute-based encryption protocol uses bilinear pairings to provide access control mechanisms, where the set of a user's attributes is specified by means of a linear secret sharing scheme. In this paper we present the design of a software cryptographic library that achieves record timings for the computation of a 126-bit security level attribute-based encryption scheme. We developed all the required auxiliary building blocks and compared the computational weight that each of them adds to the overall performance of this protocol. In particular, our single-pairing and multi-pairing implementations achieve state-of-the-art time performance at the 126-bit security level.
Last updated:  2014-07-23
Composable Oblivious Extended Permutations
Peeter Laud, Jan Willemson
An extended permutation is a function f : {1,...,m} -> {1,...,n}, used to map an n-element vector a to an m-element vector b by b_i = a_{f(i)}. An oblivious extended permutation allows this mapping to be done while preserving the privacy of a, b and f in a secure multiparty computation protocol. Oblivious extended permutations have several uses, with private function evaluation (PFE) being the theoretically most prominent one. In this paper, we propose a new technique for oblivious evaluation of extended permutations. Our construction is at least as efficient as the existing techniques, conceptually simpler, and has wider applicability. Our technique allows the party providing the description of f to be absent during the computation phase of the protocol. Moreover, that party does not even have to exist - we show how to compute the private representation of f from private data that may itself be computed from the inputs of parties. In other words, our oblivious extended permutations can be freely composed with other privacy-preserving operations in a multiparty computation.
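In the clear, the object being protected is nothing more than index selection; the protocol's contribution is to compute the following map while keeping a, b and f private:

    def extended_permutation(f, a):
        # b_i = a_{f(i)}; f is given as a list of 1-based indices into a.
        return [a[i - 1] for i in f]

    assert extended_permutation([2, 1, 2], ["x", "y"]) == ["y", "x", "y"]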
Last updated:  2014-06-02
An Asymptotically Optimal Structural Attack on the ABC Multivariate Encryption Scheme
Dustin Moody, Ray Perlner, Daniel Smith-Tone
Historically, multivariate public key cryptography has been less than successful at offering encryption schemes which are both secure and efficient. At PQCRYPTO '13 in Limoges, Tao, Diene, Tang, and Ding introduced a promising new multivariate encryption algorithm based on a fundamentally new idea: hiding the structure of a large matrix algebra over a finite field. We present an attack based on subspace differential invariants inherent to this methodology. The attack is a structural key recovery attack which is asymptotically optimal among all known attacks (including algebraic attacks) on the original scheme and its generalizations.
Last updated:  2014-06-04
Differential Properties of the HFE Cryptosystem
Taylor Daniels, Daniel Smith-Tone
Multivariate Public Key Cryptography (MPKC) has been put forth as a possible post-quantum family of cryptographic schemes. These schemes lack provable security in the reduction theoretic sense, and so their security against yet undiscovered attacks remains uncertain. The effectiveness of differential attacks on various field-based systems has prompted the investigation of differential properties of multivariate schemes to determine the extent to which they are secure from differential adversaries. Due to its role as a basis for both encryption and signature schemes we contribute to this investigation focusing on the HFE cryptosystem. We derive the differential symmetric and invariant structure of the HFE central map and that of HFE- and provide a collection of parameter sets which make these HFE systems provably secure against a differential symmetric or differential invariant attack.
Last updated:  2015-01-28
Cofactorization on Graphics Processing Units
Andrea Miele, Joppe W. Bos, Thorsten Kleinjung, Arjen K. Lenstra
We show how the cofactorization step, a compute-intensive part of the relation collection phase of the number field sieve (NFS), can be farmed out to a graphics processing unit. Our implementation on a GTX 580 GPU, which is integrated with a state-of-the-art NFS implementation, can serve as a cryptanalytic co-processor for several Intel i7-3770K quad-core CPUs simultaneously. This allows those processors to focus on the memory-intensive sieving and results in more useful NFS-relations found in less time.
Last updated:  2020-04-23
Prover-Efficient Commit-And-Prove Zero-Knowledge SNARKs
Helger Lipmaa
Zk-SNARKs (succinct non-interactive zero-knowledge arguments of knowledge) are needed in many applications. Unfortunately, all previous zk-SNARKs for interesting languages are either inefficient for the prover, or are non-adaptive and based on a commitment scheme that depends both on the prover's input and on the language, i.e., they are not commit-and-prove (CaP) SNARKs. We propose a proof-friendly extractable commitment scheme, and use it to construct prover-efficient adaptive CaP succinct zk-SNARKs for different languages, that can all reuse committed data. In the new zk-SNARKs, the prover's computation is dominated by a linear number of cryptographic operations. We use batch-verification to decrease the verifier's computation; importantly, batch-verification can also be used in QAP-based zk-SNARKs.
Last updated:  2016-05-02
Lightweight and Privacy-Preserving Delegatable Proofs of Storage
Jia Xu, Anjia Yang, Jianying Zhou, Duncan S. Wong
Proofs of storage (POS, i.e., POR or PDP) are a cryptographic tool that enables a data owner or a third-party auditor to audit the integrity of data stored remotely on a cloud storage server, without keeping a local copy of the data or downloading the data during auditing. We observe that all existing publicly verifiable POS schemes suffer from a serious drawback: it is extremely slow to compute authentication tags for all data blocks, due to many expensive group exponentiation operations. Surprisingly, this is even much slower than typical network uploading speeds, and it becomes the bottleneck of the setup phase of the POS scheme. We propose a new variant formulation called "Delegatable Proofs of Storage". In this new relaxed formulation, we are able to construct POS schemes which on the one hand are as efficient as private-key POS schemes, and on the other hand can support a third-party auditor and can switch auditors at any time, close to the functionality of publicly verifiable POS schemes. Compared to traditional publicly verifiable POS schemes, we speed up the tag generation process by at least several hundred times, without sacrificing efficiency in any other aspect. Like many existing schemes, we can also speed up our tag generation process by N times using N CPU cores in parallel. We prove that our scheme is sound under the Bilinear Strong Diffie-Hellman Assumption, and that it is privacy-preserving against the auditor under the Discrete Log Assumption. Both proofs are given in the standard model.
Last updated:  2015-06-12
Relational Hash
Avradip Mandal, Arnab Roy
Traditional cryptographic hash functions allow one to easily check whether the original plaintexts are equal or not, given a pair of hash values. Probabilistic hash functions extend this concept: given a probabilistic hash of a value and the value itself, one can efficiently check whether the hash corresponds to the given value. However, given distinct probabilistic hashes of the same value it is not possible to check whether they correspond to the same value. In this work we introduce a new cryptographic primitive called \emph{Relational Hash} using which, given a pair of (relational) hash values, one can determine whether the original plaintexts were related or not. We formalize various natural security notions for the Relational Hash primitive - one-wayness, twin one-wayness, unforgeability and oracle simulatability. We develop a Relational Hash scheme for discovering linear relations among bit-vectors (elements of $\mathbb{F}_2^n$) and $\mathbb{F}_p$-vectors. Using the linear Relational Hash schemes we develop Relational Hashes for detecting proximity in terms of Hamming distance. The proximity Relational Hashing schemes can be adapted to a privacy-preserving biometric identification scheme, as well as a privacy-preserving biometric authentication scheme secure against passive adversaries.
Last updated:  2015-06-10
(Almost) Optimal Constructions of UOWHFs from 1-to-1, Regular One-way Functions and Beyond
Yu Yu, Dawu Gu, Xiangxue Li, Jian Weng
We revisit the problem of black-box constructions of universal one-way hash functions (UOWHFs) from several (from specific to more general) classes of one-way functions (OWFs), and give respective constructions that either improve or generalize the best previously known. In addition, the parameters we achieve are simultaneously optimal or almost optimal, up to small factors, e.g., an arbitrarily small $\omega(1)$. For any 1-to-1 one-way function, we give an optimal construction of UOWHFs with key and output length $\Theta(n)$ by making a single call to the underlying OWF. This improves the constructions of Naor and Yung (STOC 1989) and De Santis and Yung (Eurocrypt 1990), which need key length $O(n\cdot\omega(\log n))$. For any known-(almost-)regular one-way function with known hardness, we give an optimal construction of UOWHFs with key and output length $\Theta(n)$ and a single call to the one-way function. For any known-(almost-)regular one-way function, we give a construction of UOWHFs with key and output length $O(n\cdot\omega(1))$ by making $\omega(1)$ non-adaptive calls to the one-way function. This improves the construction of Barhum and Maurer (Latincrypt 2012), which requires key and output length $O(n\cdot\omega(\log n))$ and $\omega(\log n)$ calls. For any weakly-regular one-way function introduced by Yu et al. at TCC 2015 (i.e., the set of inputs with maximal number of siblings forms an $n^{-c}$ fraction for some constant $c$), we give a construction of UOWHFs with key length $O(n\log n)$ and output length $\Theta(n)$. This generalizes the construction of Ames et al. (Asiacrypt 2012), which requires an unknown-regular one-way function (i.e., $c=0$). Along the way, we use several techniques that might be of independent interest. We show that almost 1-to-1 (except for a negligible fraction) one-way functions and known (almost-)regular one-way functions are equivalent in the known-hardness (or non-uniform) setting, by giving an optimal construction of the former from the latter. In addition, we show how to transform any one-way function that is far from regular (but only weakly regular on a noticeable fraction of the domain) into an almost-regular one-way function.
Last updated:  2015-01-10
The Randomized Iterate Revisited - Almost Linear Seed Length PRGs from A Broader Class of One-way Functions
Yu Yu, Dawu Gu, Xiangxue Li, Jian Weng
We revisit "the randomized iterate" technique that was originally used by Goldreich, Krawczyk, and Luby (SICOMP 1993) and refined by Haitner, Harnik and Reingold (CRYPTO 2006) in constructing pseudorandom generators (PRGs) from regular one-way functions (OWFs). We abstract out a technical lemma (which is folklore in leakage resilient cryptography), and use it to provide a simpler and more modular proof for the Haitner-Harnik-Reingold PRGs from regular OWFs. We introduce a more general class of OWFs called "weakly-regular one-way functions" from which we construct a PRG of seed length O(n*logn). More specifically, consider an arbitrary one-way function f with range divided into sets Y1, Y2, ..., Yn where each Y_i={ y:2^{i-1}<=|f^{-1}(y)|<2^{i} }. We say that f is weakly-regular if there exists a (not necessarily efficient computable) cut-off point max such that Y_max is of some noticeable portion (say n^{-c} for constant c), and Y_max+1, ..., Y_n only sum to a negligible fraction. We construct a PRG by making O(n^{2c+1}) calls to f and achieve seed length O(n*logn) using bounded space generators. This generalizes the approach of Haitner et al., where regular OWFs fall into a special case for c=0. We use a proof technique that is similar to and extended from the method by Haitner, Harnik and Reingold for hardness amplification of regular weakly-one-way functions. Our work further explores the feasibility and limits of the "randomized iterate" type of black-box constructions. In particular, the underlying f can have an arbitrary structure as long as the set of images with maximal preimage size has a noticeable fraction. In addition, our construction is much more seed-length efficient and security-preserving (albeit less general) than the HILL-style generators where the best known construction by Vadhan and Zheng (STOC 2012) requires seed length O(n^3).
Last updated:  2014-05-30
MuR-DPA: Top-down Levelled Multi-replica Merkle Hash Tree Based Secure Public Auditing for Dynamic Big Data Storage on Cloud
Chang Liu, Rajiv Ranjan, Chi Yang, Xuyun Zhang, Lizhe Wang, Jinjun Chen
Big data and its applications are attracting more and more research interest in recent years. As the new generation of distributed computing platforms, cloud computing is believed to be the most potent platform. With data no longer under users' direct control, data security in cloud computing is becoming one of the main obstacles to the proliferation of the cloud. In order to improve service reliability and availability, storing multiple replicas along with original datasets is a common strategy for cloud service providers. Public data auditing schemes allow users to verify their outsourced data storage without having to retrieve the whole dataset. However, existing data auditing techniques suffer from efficiency and security problems. First, for dynamic datasets with multiple replicas, the communication overhead for update verification is very large, because verification of each update requires O(log n) communication complexity and an update of all replicas. Second, to the best of our knowledge, no existing integrity verification scheme can provide public auditing and authentication of block indices at the same time. Without authentication of block indices, the server can build a valid proof based on data blocks other than the blocks the client requested to verify. In order to address these problems, in this paper, we present a novel public auditing scheme named MuR-DPA. The new scheme incorporates a novel authenticated data structure based on the Merkle hash tree, which we name MR-MHT. To support fully dynamic data updates, authentication of block indices and efficient verification of updates for multiple replicas at the same time, the level values of nodes in MR-MHT are generated in a top-down order, and all replica blocks for each data block are organized into the same replica sub-tree. Compared to existing integrity verification and public auditing schemes, theoretical analysis and experimental results show that the MuR-DPA scheme not only incurs much less communication overhead for both update and verification of datasets with multiple replicas, but also provides enhanced security against dishonest cloud service providers.
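For reference, a minimal single-replica Merkle hash tree with membership proofs - the flat primitive that MR-MHT extends with level values and per-block replica sub-trees (sketch, details mine):

    import hashlib

    H = lambda data: hashlib.sha256(data).digest()

    def build(leaves):
        # levels[0] = hashed leaves, levels[-1] = [root]; odd levels pad by
        # duplicating their last node.
        level = [H(l) for l in leaves]
        levels = [level]
        while len(level) > 1:
            if len(level) % 2:
                level = level + [level[-1]]
            level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            levels.append(level)
        return levels

    def prove(levels, idx):
        # Authentication path: one sibling per non-root level.
        proof = []
        for level in levels[:-1]:
            sib = idx ^ 1
            proof.append(level[sib] if sib < len(level) else level[idx])
            idx //= 2
        return proof

    def verify(root, leaf, idx, proof):
        h = H(leaf)
        for sib in proof:
            h = H(h + sib) if idx % 2 == 0 else H(sib + h)
            idx //= 2
        return h == root

    levels = build([b"block%d" % i for i in range(5)])
    assert verify(levels[-1][0], b"block3", 3, prove(levels, 3))

Note that nothing in this flat tree authenticates the block index itself, which is exactly the gap that MR-MHT's top-down level values are designed to close.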
Last updated:  2014-05-30
Black-Box Non-Black-Box Zero Knowledge
Vipul Goyal, Rafail Ostrovsky, Alessandra Scafuro, Ivan Visconti
Motivated by theoretical and practical interest, the challenging task of designing cryptographic protocols having only black-box access to primitives has generated various breakthroughs in the last decade. Despite such positive results, and even though we now know black-box constructions for secure two-party and multi-party computation even in constant rounds, cryptography still contains several constructions that critically require non-black-box use of primitives in order to securely realize some fundamental tasks. As such, the study of the gap between black-box and non-black-box constructions still includes major open questions. In this work we make progress towards filling the above gap. We consider the case of black-box constructions for computations requiring that even the size of the input of a player remains hidden. We show how to commit to a string of arbitrary size and to prove statements over the bits of the string. Both the commitment and the proof are succinct, hide the input size and use standard primitives in a black-box way. We achieve such a result by giving a black-box construction of an extendable Merkle tree that relies on a novel use of the “MPC in the head” paradigm of Ishai et al. [STOC 2007]. We show the power of our new techniques by giving the first black-box constant-round public-coin zero knowledge argument for NP. To achieve this result we use the non-black-box simulation technique introduced by Barak [FOCS 2001], the PCP of Proximity introduced by Ben-Sasson et al. [STOC 2004], together with a black-box public-coin witness-indistinguishable universal argument that we construct along the way. Additionally, we show the first black-box construction of a generalization of zero-knowledge sets, introduced by Micali et al. [FOCS 2003]. The generalization that we propose is a strengthening that requires both the size of the set and the size of the elements of the set to remain private.
Last updated:  2014-05-30
Accelerating NTRU based Homomorphic Encryption using GPUs
Wei Dai, Yarkın Doröz, Berk Sunar
In this work we introduce a large polynomial arithmetic library optimized for Nvidia GPUs to support fully homomorphic encryption schemes. To realize the large polynomial arithmetic library we convert the polynomial with large coefficients using the Chinese Remainder Theorem into many polynomials with small coefficients, and then carry out modular multiplications in the residue space using a custom developed discrete Fourier transform library. We further extend the library to support the homomorphic evaluation operations, i.e. addition, multiplication, and relinearization, in an NTRU based somewhat homomorphic encryption library. Finally, we put the library to use to evaluate homomorphic evaluation of two block ciphers: Prince and AES, which show 2.57 times and 7.6 times speedup, respectively, over an Intel Xeon software implementation.
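As a rough illustration of the CRT decomposition described above, the following Python sketch (with made-up small moduli) represents a polynomial with large coefficients by its residues modulo several small primes, multiplies independently in each residue space, and recombines the result via the Chinese Remainder Theorem. The GPU library performs the per-residue multiplications with a custom DFT; schoolbook multiplication is used here for clarity.

```python
# Toy CRT coefficient decomposition for big-coefficient polynomial
# multiplication. Moduli are illustrative; a real library picks primes
# whose product exceeds every coefficient of the result.
from functools import reduce

PRIMES = [97, 101, 103, 107, 109]
M = reduce(lambda a, b: a * b, PRIMES)       # combined modulus

def to_residues(poly):
    return [[c % p for c in poly] for p in PRIMES]

def polymul_mod(a, b, p):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def crt(residues):
    """Combine one coefficient's residues into an integer mod M."""
    x = 0
    for r, p in zip(residues, PRIMES):
        Mp = M // p
        x += r * Mp * pow(Mp, -1, p)
    return x % M

a, b = [12345, 678, 9], [11, 22, 33]
per_prime = [polymul_mod(ra, rb, p)
             for ra, rb, p in zip(to_residues(a), to_residues(b), PRIMES)]
product = [crt(col) for col in zip(*per_prime)]
assert product == [sum(a[i] * b[k - i] for i in range(len(a))
                       if 0 <= k - i < len(b))
                   for k in range(len(a) + len(b) - 1)]
```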
Last updated:  2014-05-30
Finding collisions for MD4 hash algorithm using hybrid algorithm
Marko Carić
The modification of the message that meets the sufficient conditions for collision is found in the last step of the differential attack proposed by Wang et al. (2005) on the MD4 hash algorithm. Here we show how this attack phase, finding a collision starting from the list of sufficient conditions for the collision, can be implemented using a combination of two algorithms: an evolutionary algorithm and hill climbing. Hybridization of evolutionary algorithms and hill climbing is a well-known technique for improving solutions, but to the best of the author's knowledge it has not previously been applied to this domain.
Last updated:  2014-05-30
New candidates for multivariate trapdoor functions
Jaiberth Porras, John B. Baena, Jintai Ding
We present a new method for building pairs of HFE polynomials of high degree, such that the map constructed with such a pair is easy to invert. The inversion is accomplished using a low degree polynomial of Hamming weight three, which is derived from a special reduction via Hamming weight three polynomials produced by these two HFE polynomials. This allows us to build new candidates for multivariate trapdoor functions in which we use the pair of HFE polynomials to fabricate the core map. We performed the security analysis for the case where the base field is $GF(2)$ and showed that these new trapdoor functions have high degrees of regularity, and therefore they are secure against the direct algebraic attack. We also give theoretical arguments to show that these new trapdoor functions over $GF(2)$ are secure against the MinRank attack as well.
Last updated:  2015-03-20
Chaskey: An Efficient MAC Algorithm for 32-bit Microcontrollers
Nicky Mouha, Bart Mennink, Anthony Van Herrewege, Dai Watanabe, Bart Preneel, Ingrid Verbauwhede
We propose Chaskey: a very efficient Message Authentication Code (MAC) algorithm for 32-bit microcontrollers. It is intended for applications that require 128-bit security, yet cannot implement standard MAC algorithms because of stringent requirements on speed, energy consumption, or code size. Chaskey is a permutation-based MAC algorithm that uses the Addition-Rotation-XOR (ARX) design methodology. We formally prove that Chaskey is secure in the standard model, based on the security of an underlying Even-Mansour block cipher. Chaskey is designed to perform well on a wide range of 32-bit microcontrollers. Our benchmarks show that on the ARM Cortex-M3/M4, our Chaskey implementation reaches a speed of 7.0 cycles/byte, compared to 89.4 cycles/byte for AES-128-CMAC. For the ARM Cortex-M0, our benchmark results give 16.9 cycles/byte and 136.5 cycles/byte for Chaskey and AES-128-CMAC respectively.
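To give a flavour of the ARX design methodology, here is a toy Python sketch of one round of an ARX permutation on a 128-bit state of four 32-bit words, loosely modeled on the SipHash/Chaskey family. The rotation constants and round count below are illustrative assumptions, not the Chaskey specification.

```python
# Illustrative ARX (Addition-Rotation-XOR) permutation round on four
# 32-bit words. Constants are for illustration only; consult the
# Chaskey paper for the actual permutation.
MASK = 0xFFFFFFFF

def rotl(x, r):
    return ((x << r) | (x >> (32 - r))) & MASK

def arx_round(v):
    v0, v1, v2, v3 = v
    v0 = (v0 + v1) & MASK; v1 = rotl(v1, 5) ^ v0; v0 = rotl(v0, 16)
    v2 = (v2 + v3) & MASK; v3 = rotl(v3, 8) ^ v2
    v0 = (v0 + v3) & MASK; v3 = rotl(v3, 13) ^ v0
    v2 = (v2 + v1) & MASK; v1 = rotl(v1, 7) ^ v2; v2 = rotl(v2, 16)
    return [v0, v1, v2, v3]

state = [0x00112233, 0x44556677, 0x8899AABB, 0xCCDDEEFF]
for _ in range(8):                  # iterate a few rounds
    state = arx_round(state)
print(["%08x" % w for w in state])
```

Only additions, rotations and XORs on word-sized values appear, which is exactly why such designs map well onto small 32-bit microcontrollers.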
Last updated:  2014-05-31
Jacobian Coordinates on Genus 2 Curves
Huseyin Hisil, Craig Costello
This paper presents a new projective coordinate system and new explicit algorithms which together boost the speed of arithmetic in the divisor class group of genus 2 curves. The proposed formulas generalise the use of Jacobian coordinates on elliptic curves, and their application improves the speed of performing cryptographic scalar multiplications in Jacobians of genus 2 curves over prime fields by an approximate factor of 1.25x. For example, on a single core of an Intel Core i7-3770M (Ivy Bridge), we show that replacing the previous best formulas with our new set improves the cost of generic scalar multiplications from 243,000 to 195,000 cycles, and drops the cost of specialised GLV-style scalar multiplications from 166,000 to 129,000 cycles.
Last updated:  2014-05-28
Yao's millionaires' problem and decoy-based public key encryption by classical physics
Dima Grigoriev, Vladimir Shpilrain
We use various laws of classical physics to offer several solutions of Yao's millionaires' problem without using any one-way functions. We also describe several informationally secure public key encryption protocols, i.e., protocols secure against a passive computationally unbounded adversary. This introduces a new paradigm of decoy-based cryptography, as opposed to ``traditional'' complexity-based cryptography. In particular, our protocols do not employ any one-way functions.
Last updated:  2014-05-28
Cryptanalysis of and Improvement on Biometric-based User Authentication Scheme for C/S System
Younsung Choi, Dongho Won
Password-based authentication schemes are convenient, but vulnerable to simple dictionary attacks. Cryptographic secret keys are safe, but difficult to memorize. More recently, biometric information has been used for authentication schemes. Das proposed a biometric-based authentication scheme, but it has various vulnerabilities. Jiping et al. improved Das's scheme, but some vulnerabilities remain. In this paper, we present a cryptanalysis of Jiping et al.'s authentication scheme and propose a security-enhanced biometric-based user authentication scheme for the C/S system.
Last updated:  2014-09-12
Privacy-Enhanced Participatory Sensing with Collusion Resistance and Data Aggregation
Felix Günther, Mark Manulis, Andreas Peter
Participatory sensing enables new paradigms and markets for information collection based on the ubiquitous availability of smartphones, but also introduces privacy challenges for participating users and their data. In this work, we review existing security models for privacy-preserving participatory sensing and propose several improvements that are both of theoretical and practical significance. We first address an important drawback of prior work, namely the lack of consideration of collusion attacks that are highly relevant for such multi-user settings. We explain why existing security models are insufficient and why previous protocols become insecure in the presence of colluding parties. We remedy this problem by providing new security and privacy definitions that guarantee meaningful forms of collusion resistance. We propose new collusion-resistant participatory sensing protocols satisfying our definitions: a generic construction that uses anonymous identity-based encryption (IBE) and its practical instantiation based on the Boneh-Franklin IBE scheme. We then extend the functionality of participatory sensing by adding the ability to perform aggregation on the data submitted by the users, without sacrificing their privacy. We realize this through an additively-homomorphic IBE scheme which in turn is constructed by slightly modifying the Boneh-Franklin IBE scheme. From a practical point of view, the resulting scheme is suitable for calculations with small sensor readings/values such as temperature measurements, noise levels, or prices, which is sufficient for many applications of participatory sensing.
Last updated:  2015-06-11
Using Indistinguishability Obfuscation via UCEs
Chris Brzuska, Arno Mittelbach
We provide the first standard model construction for a powerful class of Universal Computational Extractors (UCEs; Bellare et al. Crypto 2013) based on indistinguishability obfuscation. Our construction suffices to instantiate correlation-secure hash functions and universal one-way functions. For many cryptographic primitives, and in particular for correlation-secure hash functions, all known constructions are in the random-oracle model. Indeed, recent negative results by Wichs (ITCS 2013) rule out a large class of techniques to prove the security of correlation-secure hash functions in the standard model. Our construction is based on puncturable PRFs (Sahai and Waters; STOC 2014) and indistinguishability obfuscation. However, our proof also relies on point obfuscation under auxiliary inputs (AIPO). This is crucial in light of Wichs' impossibility result. Namely, Wichs proves that it is often hard to reduce two-stage games (such as UCEs) to a "one-stage assumption" such as DDH. In contrast, AIPOs and their underlying assumptions are inherently two-stage and, thus, allow us to circumvent Wichs' impossibility result. Our positive result is also noteworthy insofar as Brzuska, Farshim and Mittelbach (Crypto 2014) have recently shown that iO and some variants of UCEs are mutually exclusive. Our results, hence, validate some of the new UCE notions that emerged as a response to the iO attack.
Last updated:  2014-05-28
Efficient Adaptively Secure IBBE from Standard Assumptions
Somindu C. Ramanna, Palash Sarkar
This paper describes the first construction of efficient identity-based broadcast encryption (IBBE) schemes which can be proved secure against adaptive-identity attacks based on standard assumptions. The constructions are obtained by extending the currently known most efficient identity-based encryption scheme proposed by Jutla and Roy in 2013. Ciphertext size and user storage compare favourably to previously known constructions. The new constructions fill both a practical and a theoretical gap in the literature on efficient IBBE schemes.
Last updated:  2014-05-28
Hyper-and-elliptic-curve cryptography
Daniel J. Bernstein, Tanja Lange
This paper introduces "hyper-and-elliptic-curve cryptography", in which a single high-security group supports fast genus-2-hyperelliptic-curve formulas for variable-base-point single-scalar multiplication (e.g., Diffie--Hellman shared-secret computation) and at the same time supports fast elliptic-curve formulas for fixed-base-point scalar multiplication (e.g., key generation) and multi-scalar multiplication (e.g., signature verification).
Last updated:  2015-11-10
Attacks on Lin's Mobile Dynamic Identity-based Authenticated Key Agreement Scheme using Chebyshev Chaotic Maps
SK Hafizul Islam
In 2014, Lin proposed a dynamic identity-based authentication scheme for low-power mobile devices using Chebyshev chaotic maps. The scheme is designed to provide mutual authentication and session key agreement between a remote server and its legitimate user, and to offer user anonymity, untraceability, and resilience against many cryptographic attacks. However, the author of this paper shows that Lin's scheme is no longer usable for practical applications as (i) it cannot verify a wrong identity and password at the user side in the login and password change phases, (ii) it cannot protect against user impersonation attacks, and (iii) it fails to provide forward secrecy of the session key.
Last updated:  2019-05-09
Logic Synthesis based Public Key Scheme
Boaz Shahar
This article proposes a method for constructing a public key system based on VLSI logic synthesis algorithms. First, we discuss the properties of VLSI logic synthesis algorithms and view them in the context of cryptographic primitives. We then propose a public key encryption system and finally discuss its security properties.
Last updated:  2015-02-12
How Secure is Deterministic Encryption?
Mihir Bellare, Rafael Dowsley, Sriram Keelveedhi
This paper presents three curious findings about deterministic public-key encryption (D-PKE) that further our understanding of its security, in particular because of the contrast with standard, randomized public-key encryption (R-PKE): (1) It would appear to be a triviality, for any primitive, that security in the standard model implies security in the random-oracle model, and it is certainly true, and easily proven, for R-PKE. For D-PKE it is not clear and depends on details of the definition. In particular we can show it in the non-uniform case but not in the uniform case. (2) The power of selective-opening attacks (SOA) comes from an adversary's ability, upon corrupting a sender, to learn not just the message but also the coins used for encryption. For R-PKE, security is achievable. For D-PKE, where there are no coins, one's first impression may be that SOAs are vacuous and security should be easily achievable. We show instead that SOA-security is impossible, meaning no D-PKE scheme can achieve it. (3) For R-PKE, single-user security implies multi-user security, but we show that there are D-PKE schemes secure for a single user and insecure with two users.
Last updated:  2014-05-28
Improved Cryptanalysis on Reduced-Round GOST and Whirlpool Hash Function (Full Version)
Bingke Ma, Bao Li, Ronglin Hao, Xiaoqian Li
The GOST hash function family has served as the new Russian national hash standard (GOST R 34.11-2012) since January 1, 2013, and it has two members, i.e., GOST-256 and GOST-512, which correspond to two different output lengths. Most previous analyses of GOST focus on the compression function rather than the hash function. In this paper, we focus on security properties of GOST under the hash function setting. First we give two improved preimage attacks on 6-round GOST-512 compared with the previous preimage attack, i.e., a time-reduced attack with the same memory requirements and a memoryless attack with almost identical time. Then we improve the best collision attack on reduced GOST-256 (resp. GOST-512) from 5 rounds to 6.5 (resp. 7.5) rounds. Finally, we construct a limited-birthday distinguisher on 9.5-round GOST using the limited-birthday distinguisher on hash functions proposed at ASIACRYPT 2013. An essential technique used in our distinguisher is the carefully chosen differential trail, which can further exploit degrees of freedom in the inbound phase when launching rebound attacks on the GOST compression function. This technique helps us to reduce the time complexity of the distinguisher significantly. We apply this strategy to Whirlpool, an ISO standardized hash function, as well. As a result, we construct a limited-birthday distinguisher on 9-round Whirlpool out of 10 rounds, and reduce the time complexity of the previous 7-round distinguisher. To the best of our knowledge, all of our results are the best cryptanalytic results on GOST and Whirlpool in terms of the number of rounds analyzed under the hash function setting.
Last updated:  2014-05-27
Optimal Contracts for Outsourced Computation
Viet Pham, MHR. Khouzani, Carlos Cid
While expensive cryptographically verifiable computation aims at defeating malicious agents, many civil purposes of outsourced computation tolerate a weaker notion of security, i.e., ``lazy-but-honest'' contractors. Targeting this type of agents, we develop optimal contracts for outsourcing of computational tasks via appropriate use of rewards, punishments, auditing rate, and ``redundancy''. Our contracts provably minimize the expense of the outsourcer (principal) while guaranteeing correct computation. Furthermore, we incorporate practical restrictions of the maximum enforceable fine, limited and/or costly auditing, and bounded budget of the outsourcer. By examining the optimal contracts, we provide insights on how resources should be utilized when auditing capacity and enforceability are limited. Finally, we present a light-weight cryptographic implementation of the contracts and discuss a comparison across different implementations of auditing in outsourced computation.
Last updated:  2014-09-10
Beyond 2^{c/2} Security in Sponge-Based Authenticated Encryption Modes
Philipp Jovanovic, Atul Luykx, Bart Mennink
The Sponge function is known to achieve 2^{c/2} security, where c is its capacity. This bound was carried over to keyed variants of the function, such as SpongeWrap, to achieve a min{2^{c/2},2^kappa} security bound, with kappa the key length. Similarly, many CAESAR competition submissions are designed to comply with the classical 2^{c/2} security bound. We show that Sponge-based constructions for authenticated encryption can achieve the significantly higher bound of min{2^{b/2},2^c,2^kappa} asymptotically, with b>c the permutation size, by proving that the CAESAR submission NORX achieves this bound. Furthermore, we show how to apply the proof to five other Sponge-based CAESAR submissions: Ascon, CBEAM/STRIBOB, ICEPOLE, Keyak, and two out of the three PRIMATEs. A direct application of the result shows that the parameter choices of these submissions are overly conservative. Simple tweaks render the schemes considerably more efficient without sacrificing security. For instance, NORX64 can increase its rate and decrease its capacity by 128 bits and Ascon-128 can encrypt three times as fast, both without affecting the security level of their underlying modes in the ideal permutation model.
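The gain is easy to see numerically. The snippet below compares the classical bound min{2^{c/2}, 2^kappa} with the improved bound min{2^{b/2}, 2^c, 2^kappa}, measuring security in bits; the parameter values are illustrative, not taken from any particular submission.

```python
# Illustrative comparison of the classical and improved sponge AE
# security bounds, in bits (i.e., log2 of the bounds quoted above).
def classical_bits(c, kappa):
    return min(c // 2, kappa)

def improved_bits(b, c, kappa):
    return min(b // 2, c, kappa)

b, kappa = 1024, 256              # illustrative permutation and key sizes
for c in (512, 384, 256):         # shrinking capacity frees up rate b - c
    print(f"c={c}: classical {classical_bits(c, kappa)} bits,"
          f" improved {improved_bits(b, c, kappa)} bits")
# Under the improved bound, c can drop from 512 to 256 while the mode
# still offers 256-bit security; the freed 256 bits go to the rate.
```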
Last updated:  2014-09-17
Fully secure constrained pseudorandom functions using random oracles
Dennis Hofheinz
A constrained pseudorandom function (CPRF) allows one to derive constrained evaluation keys that only allow evaluating the PRF on a subset of inputs. CPRFs have only recently been introduced independently by three groups of researchers. However, somewhat curiously, all of them could only achieve a comparatively weak, selective-challenge form of security (except for small input spaces, very limited forms of constrained keys, or with superpolynomial security reductions). In this paper, we construct the first fully secure CPRF without any of the above restrictions. Concretely, we support ``bit-fixing'' constrained keys that hardwire an arbitrary subset of the input bits to fixed values, we support exponentially large input spaces, and our security reduction is polynomial. We require very heavyweight tools: we assume multilinear maps and indistinguishability obfuscation, and our proof is in the random oracle model. Still, our analysis is far from tautological, and even with these strong building blocks, we need to develop additional techniques and tools. As a simple application, we obtain the first adaptively secure non-interactive key exchange protocols for large user groups.
Last updated:  2014-09-16
On the Enumeration of Double-Base Chains with Applications to Elliptic Curve Cryptography
Christophe Doche
The Double-Base Number System (DBNS) uses two bases, $2$ and $3$, in order to represent any integer $n$. A Double-Base Chain (DBC) is a special case of a DBNS expansion. DBCs have been introduced to speed up the scalar multiplication $[n]P$ on certain families of elliptic curves used in cryptography. In this context, our contributions are twofold. First, given integers $n$, $a$, and $b$, we outline a recursive algorithm to compute the number of different DBCs with a leading factor dividing $2^a3^b$ and representing $n$. A simple modification of the algorithm allows us to determine the number of DBCs with a specified length as well as the actual expansions. In turn, this gives rise to a method to compute an optimal DBC representing $n$, i.e. an expansion with minimal length. Our implementation is able to return an optimal expansion for most integers up to $2^{60}$ in a few minutes. Second, we introduce an original and potentially more efficient approach to compute a random scalar multiplication $[n]P$, based on the concept of controlled DBC. Instead of generating a random integer $n$ and then trying to find an optimal, or at least a short DBC to represent it, we propose to directly generate $n$ as a random DBC with a chosen leading factor $2^a3^b$ and length $\ell$. To inform the selection of those parameters, in particular $\ell$, which drives the trade-off between the efficiency and the security of the underlying cryptosystem, we enumerate the total number of DBCs having a given leading factor $2^a3^b$ and a certain length $\ell$. The comparison between this total number of DBCs and the total number of integers that we wish to represent a priori provides some guidance regarding the selection of suitable parameters. Experiments indicate that our new Near Optimal Controlled DBC approach provides a speedup of at least $10\%$ with respect to the NAF for sizes from $192$ to $512$ bits. Computations involve elliptic curves defined over $\mathbb{F}_p$, using the Inverted Edwards coordinate system and state-of-the-art scalar multiplication techniques.
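For intuition about double-base representations, the sketch below computes a plain greedy DBNS expansion of an integer as a sum of terms $2^a3^b$. This is only the classic baseline: it is not the paper's recursive counting algorithm, and it does not produce chains with the leading-factor structure described above.

```python
# Toy greedy DBNS expansion: repeatedly take the largest 2^a * 3^b not
# exceeding the remainder. Baseline illustration only, not the paper's
# optimal-DBC algorithm.
def best_2a3b(n):
    """Largest value 2^a * 3^b <= n, returned with its exponents."""
    best = (1, 0, 0)
    p3, b = 1, 0
    while p3 <= n:
        t = n // p3
        a = t.bit_length() - 1          # largest a with 2^a <= n / 3^b
        v = (1 << a) * p3
        if v > best[0]:
            best = (v, a, b)
        p3 *= 3
        b += 1
    return best

def greedy_dbns(n):
    terms = []
    while n > 0:
        v, a, b = best_2a3b(n)
        terms.append((a, b))
        n -= v
    return terms

n = 841232
terms = greedy_dbns(n)
assert sum((1 << a) * 3 ** b for a, b in terms) == n
print(terms)        # short list of (a, b) exponent pairs summing to n
```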
Last updated:  2014-11-26
Compact VSS and Efficient Homomorphic UC Commitments
Ivan Damgård, Bernardo David, Irene Giacomelli, Jesper Buus Nielsen
We present a new compact verifiable secret sharing scheme; based on this, we present the first construction of a homomorphic UC commitment scheme that requires only cheap symmetric cryptography, except for a small number of seed OTs. To commit to a $k$-bit string, the amortized communication cost is $O(k)$ bits. Assuming a sufficiently efficient pseudorandom generator, the computational complexity is $O(k)$ for the verifier and $O(k^{1+\epsilon})$ for the committer (where $\epsilon <1$ is a constant). In an alternative variant of the construction, all complexities are $O(k\cdot polylog(k))$. Our commitment scheme extends to vectors over any finite field and is additively homomorphic. By sending one extra message, the prover can allow the verifier to also check multiplicative relations on committed strings, as well as verifying that committed vectors $\vec{a}, \vec{b}$ satisfy $\vec{a}= \varphi( \vec{b})$ for a linear function $\varphi$. These properties allow us to non-interactively implement any one-sided functionality where only one party has input (this includes UC secure zero-knowledge proofs of knowledge). We also present a perfectly secure implementation of any multiparty functionality, based directly on our VSS. The communication required is proportional to a circuit implementing the functionality, up to a logarithmic factor. For a large natural class of circuits the overhead is even constant. We also improve earlier results by Ranellucci \emph{et al.} on the amount of correlated randomness required for string commitments with individual opening of bits.
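As background, the following Python sketch shows plain t-out-of-n Shamir secret sharing over a prime field, the (non-verifiable) core on which VSS schemes build; the compactness and UC-commitment machinery of the paper are not modeled, and the field prime is an arbitrary illustrative choice.

```python
# Plain Shamir secret sharing: shares are points on a random
# degree-(t-1) polynomial; any t of them recover the secret by
# Lagrange interpolation at 0. No verifiability is provided here.
import secrets

P = 2**127 - 1                      # a Mersenne prime, illustrative field

def share(secret, t, n):
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

s = secrets.randbelow(P)
shares = share(s, t=3, n=5)
assert reconstruct(shares[:3]) == s
assert reconstruct(shares[1:4]) == s
# Note the additive homomorphism: adding shares pointwise shares s1 + s2.
```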
Last updated:  2014-05-27
On the Limits of Authenticated Key Exchange Security with an Application to Bad Randomness
Michèle Feltz, Cas Cremers
State-of-the-art authenticated key exchange (AKE) protocols are proven secure in game-based security models. These models have considerably evolved in strength from the original Bellare-Rogaway model. However, so far only informal impossibility results, which suggest that no protocol can be secure against stronger adversaries, have been sketched. At the same time, there are many different security models being used, all of which aim to model the strongest possible adversary. In this paper we provide the first systematic analysis of the limits of game-based security models. Our analysis reveals that different security goals can be achieved in different relevant classes of AKE protocols. From our formal impossibility results, we derive strong security models for these protocol classes and give protocols that are secure in them. In particular, we analyse the security of AKE protocols in the presence of adversaries who can perform attacks based on chosen randomness, in which the adversary controls the randomness used in protocol sessions. Protocols that do not modify memory shared among sessions, which we call stateless protocols, are insecure against chosen-randomness attacks. We propose novel stateful protocols that provide resilience even against this worst case randomness failure, thereby weakening the security assumptions required on the random number generator.
Last updated:  2014-08-26
Solving the Discrete Logarithm of a 113-bit Koblitz Curve with an FPGA Cluster
Erich Wenger, Paul Wolfger
Using FPGAs to compute the discrete logarithms of elliptic curves is a well-known method. However, to date only CPU clusters have succeeded in computing new elliptic curve discrete logarithm records. This work presents a high-speed FPGA implementation that was used to compute the discrete logarithm of a 113-bit Koblitz curve. The core of the design is a fully unrolled, highly pipelined, self-sufficient Pollard's rho iteration function. An 18-core Virtex-6 FPGA cluster computed the discrete logarithm of a 113-bit Koblitz curve in an extrapolated 24 days. To date, no attack on such a large Koblitz curve has succeeded using so few resources or within such a short time frame.
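To illustrate the iteration at the heart of such attacks, here is a toy Pollard's rho for discrete logarithms, written for a prime-order subgroup of Z_p^* rather than a Koblitz curve, with the classic 3-way partitioned walk and Floyd cycle-finding. The FPGA design parallelizes the analogous curve walk with distinguished points; none of that engineering is modeled here, and all parameters are small illustrative choices.

```python
# Toy Pollard's rho for the DLP in the order-q subgroup of Z_p^*.
import random

p = 10007                 # safe prime: (p - 1) // 2 = 5003 is prime
q = (p - 1) // 2          # subgroup order
g = 4                     # a square, hence generates the order-q subgroup
x_secret = random.randrange(1, q)
y = pow(g, x_secret, p)   # challenge: recover x_secret from y

def step(u, a, b):
    """One step of the walk; the branch depends only on the point u."""
    r = u % 3
    if r == 0:
        return u * u % p, 2 * a % q, 2 * b % q
    if r == 1:
        return u * g % p, (a + 1) % q, b
    return u * y % p, a, (b + 1) % q

def rho():
    while True:
        a0 = random.randrange(q)
        slow = fast = (pow(g, a0, p), a0, 0)
        while True:
            slow = step(*slow)
            fast = step(*step(*fast))
            if slow[0] == fast[0]:
                break
        (_, a1, b1), (_, a2, b2) = slow, fast
        if (b1 - b2) % q:              # collision: g^a1 y^b1 = g^a2 y^b2
            return (a2 - a1) * pow(b1 - b2, -1, q) % q

assert pow(g, rho(), p) == y
```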
Last updated:  2015-04-01
Redefining the Transparency Order
Kaushik Chakraborty, Sumanta Sarkar, Subhamoy Maitra, Bodhisatwa Mazumdar, Debdeep Mukhopadhyay, Emmanuel Prouff
In this paper, we consider multi-bit Differential Power Analysis (DPA) in the Hamming weight model. In this regard, we revisit the definition of Transparency Order (TO) from the work of Prouff (FSE 2005) and find that the definition has certain limitations. Although this work has been quite well referred to in the literature, surprisingly, these limitations remained unexplored for almost a decade. We analyse the definition from scratch, modify it and finally provide a definition with better insight that can theoretically capture DPA in the Hamming weight model for hardware implementations with precharge logic. At the end, we confront the notion of (revised) transparency order with attack simulations in order to study to what extent the low transparency order of an s-box impacts the efficiency of a side channel attack against its processing. To the best of our knowledge, this is the first time that such a critical analysis is conducted (even considering the original notion of Prouff). It practically confirms that the transparency order is indeed related to the resistance of the s-box against side-channel attacks, but it also shows that it is not sufficient alone to directly achieve a satisfying level of security. Regarding this point, our conclusion is that the (revised) transparency order is a valuable criterion to consider when designing a cryptographic algorithm, and even though it does not preclude the use of classical countermeasures like masking or shuffling, it enables improving their effectiveness.
Last updated:  2014-05-27
Cryptanalysis and Improvement on Robust Three-Factor Remote User Authentication Scheme with Key Agreement for Multimedia System
Younsung Choi, Dongho Won
A three-factor authentication scheme combines biometric information with a user password and a smart card to provide security-enhanced user authentication. An proposed a user authentication scheme that improves Das's scheme, but An's scheme is not secure against denial-of-service attacks in the login phase and forgery attacks. Li et al. pointed out these weaknesses and proposed a three-factor remote user authentication scheme with key agreement. However, Li et al.'s scheme still has some security problems. In this paper, we present a cryptanalysis and improvement of Li et al.'s remote user authentication scheme.
Last updated:  2016-02-08
Multi-target DPA attacks: Pushing DPA beyond the limits of a desktop computer
Luke Mather, Elisabeth Oswald, Carolyn Whitnall
Following the pioneering CRYPTO '99 paper by Kocher et al., differential power analysis (DPA) was initially geared around low-cost computations performed using standard desktop equipment with minimal reliance on device-specific assumptions. In subsequent years, the scope was broadened by, e.g., making explicit use of (approximate) power models. An important practical incentive of so-doing is to reduce the data complexity of attacks, usually at the cost of increased computational complexity. It is this trade-off which we seek to explore in this paper. We draw together emerging ideas from several strands of the literature---high performance computing, post-side-channel global key enumeration, and effective combination of separate information sources---by way of advancing (non-profiled) `standard DPA' towards a more realistic threat model in which trace acquisitions are scarce but adversaries are well resourced. Using our specially designed computing platform (including our parallel and scalable DPA implementation, which allows us to work efficiently with as many as 2^{32} key hypotheses), we demonstrate some dramatic improvements that are possible for `standard DPA' when combining DPA outcomes for several intermediate targets. Unlike most previous `information combining' attempts, we are able to evidence the fact that the improvements apply even when the exact trace locations of the relevant information (i.e. the `interesting points') are not known a priori but must be searched simultaneously with the correct subkey.
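For readers unfamiliar with "standard DPA", the following self-contained Python simulation mounts a correlation attack in the Hamming-weight model against a single 4-bit S-box (PRESENT's, used here instead of AES for brevity) with noisy simulated traces. Real attacks replace the simulated leakage with measured power traces, and the paper's contribution concerns combining many such per-target outcomes at scale.

```python
# Simulated correlation DPA in the Hamming-weight model against a
# 4-bit S-box. Illustrative only; trace count and noise are arbitrary.
import random

SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]
HW = [bin(x).count("1") for x in range(16)]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

key = random.randrange(16)
plaintexts = [random.randrange(16) for _ in range(500)]
# device "leaks" the Hamming weight of the S-box output, plus noise
traces = [HW[SBOX[p ^ key]] + random.gauss(0, 1) for p in plaintexts]

# rank key guesses by |correlation| between predictions and traces
scores = {g: abs(pearson([HW[SBOX[p ^ g]] for p in plaintexts], traces))
          for g in range(16)}
best = max(scores, key=scores.get)
print("recovered:", best, "true key:", key)   # almost always equal here
```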
Last updated:  2015-04-14
Deleting Secret Data with Public Verifiability
Feng Hao, Dylan Clarke, Avelino Francisco Zorzo
Existing software-based data erasure programs can be summarized as following the same one-bit-return protocol: the deletion program performs data erasure and returns either success or failure. However, such a one-bit-return protocol turns the data deletion system into a black box -- the user has to trust the outcome but cannot easily verify it. This is especially problematic when the deletion program is encapsulated within a Trusted Platform Module (TPM), and the user has no access to the code inside. In this paper, we present a cryptographic solution that aims to make the data deletion process more transparent and verifiable. In contrast to the conventional black/white assumptions about TPM (i.e., either completely trust or distrust), we introduce a third assumption that sits in between: namely, ``trust-but-verify''. Our solution enables a user to verify the correct implementation of two important operations inside a TPM without accessing its source code: i.e., the correct encryption of data and the faithful deletion of the key. Finally, we present a proof-of-concept implementation of the SSE system on a resource-constrained Java card to demonstrate its practical feasibility. To our knowledge, this is the first systematic solution to the secure data deletion problem based on a ``trust-but-verify'' paradigm, together with a concrete prototype implementation.
Last updated:  2014-05-26
Forging Attacks on two Authenticated Encryptions COBRA and POET
Mridul Nandi
In FSE 2014, an authenticated encryption mode COBRA [4], based on a pseudorandom permutation (PRP) blockcipher, and POET [3], based on an Almost XOR-Universal (AXU) hash and a strong pseudorandom permutation (SPRP), were proposed. A few weeks later, the COBRA mode and a simple variant of the original proposal of POET (due to a forging attack [13] on the original proposal), with AES as the underlying blockcipher, were submitted to CAESAR, a competition [1] for authenticated encryption (AE). In this paper we show a forging attack on the mode COBRA based on any n-bit blockcipher. Our attack on COBRA requires about O(n) queries with success probability about 1/2. This disproves the claim proved in the FSE 2014 paper. We also show both privacy and forging attacks on the parallel version of POET, denoted POET-m. In the case of the modes POET or POE (the underlying modes for encryption), we show a one-query distinguishing attack when we instantiate the underlying AXU hash function with some other AXU hash function, namely a uniform random involution. Thus, our result violates the designers' main claim (Theorem 8.1 in [1]). However, the attacks cannot be extended directly to the specific choices of the existing versions submitted to the CAESAR competition.
Last updated:  2014-05-25
Nothing is for Free: Security in Searching Shared & Encrypted Data
Qiang Tang
Most existing symmetric searchable encryption schemes aim at allowing a user to outsource her encrypted data to a cloud server and delegate the latter to search on her behalf. These schemes do not qualify as a secure and scalable solution for the multi-party setting, where users outsource their encrypted data to a cloud server and selectively authorize each other to search. Due to the possibility that the cloud server may collude with some malicious users, it is a challenge to have a secure and scalable multi-party searchable encryption (MPSE) scheme. This is shown by our analysis on the Popa-Zeldovich scheme, which says that an honest user may leak all her search patterns even if she shares only one of her documents with another malicious user. Based on our analysis, we present a new security model for MPSE by considering the worst-case and average-case scenarios, which capture different server-user collusion possibilities. We then propose a MPSE scheme by employing the bilinear property of Type-3 pairings, and prove its security based on the Bilinear Diffie-Hellman Variant (BDHV) and Symmetric eXternal Diffie-Hellman (SXDH) assumptions in the random oracle model.
Last updated:  2014-05-25
New Results in the Linear Cryptanalysis of DES
Igor Semaev
Two open problems on using Matsui's Algorithm 2 with multiple linear approximations, posed earlier by Biryukov, De Cannière and M. Quisquater at Crypto'04, are solved in the present paper. That improves the linear cryptanalysis of 16-round DES reported by Matsui at Crypto'94.
Last updated:  2014-09-24
McEliece in the world of Escher
Danilo Gligoroski, Simona Samardjiska, Håkon Jacobsen, Sergey Bezzateev
We present a new family of linear binary codes of length n and dimension k accompanied with a fast list decoding algorithm that can correct up to n/2 errors in a bounded channel with an error density $\rho$. The decisional problem of decoding random codes using these generalized error sets is NP-complete. Next we use the properties of these codes to design both an encryption scheme and a signature scheme. Although in the open literature there have been several proposals for producing digital signatures from the McEliece public key scheme, as far as we know, this is the first public key scheme based on codes where signatures are produced in a straightforward manner from the decryption procedure of the scheme. The security analysis of our scheme has four parts: 1. An extensive list of attacks using the Information Set Decoding techniques adopted for our codes; 2. An analysis of the cost of a distinguishing attack based on rank attacks on the generator matrix of the code or on its dual code; 3. An analysis of the cost of cheap distinguishing attacks on the generator matrix of the code or on its dual code that have expensive list-decoding properties; 4. An interpretation of our scheme as a multivariate quadratic system and a discussion of the difficulties of solving that system using algebraic approaches such as Gröbner bases. Based on this security analysis we suggest some concrete parameters for security levels in the range of $2^{80} - 2^{128}$. An additional feature of the decryption process is that it admits massive and trivial parallelization that could potentially make our scheme in hardware as fast as symmetric crypto primitives.
Last updated:  2014-05-23
Explicit endomorphism of the Jacobian of a hyperelliptic function field of genus 2 using base field operations
Eduardo Ruiz Duarte, Octavio Páez Osuna
We present an efficient endomorphism for the Jacobian of a curve $C$ of genus 2 for divisors having non-disjoint support. This extends the work of Costello in ~\cite{Costello}, who calculated explicit formulae for divisor doubling and addition of divisors with disjoint support in the Jacobian of $C$ using only base field operations. Explicit formulae are presented for this third case, together with a different approach to divisor doubling.
Last updated:  2014-05-30
A mechanical approach to derive identity-based protocols from Diffie-Hellman-based protocols
Kim-Kwang Raymond Choo, Junghyun Nam, Dongho Won
We describe a mechanical approach to derive identity-based (ID-based) protocols from existing Diffie-Hellman-based ones. As case studies, we present the ID-based versions of the Unified Model protocol, UMP-ID; Blake-Wilson, Johnson & Menezes (1997)'s protocol, BJM-ID; and Krawczyk (2005)'s HMQV protocol, HMQV-ID. We describe the calculations that need to be modified in the existing proofs. We conclude with a comparative analysis of the security and efficiency of the three proposed ID-based protocols (relative to other similar published protocols) and demonstrate that our proposed ID-based protocols are computationally efficient.
Last updated:  2014-09-17
Simulatable Leakage: Analysis, Pitfalls, and new Constructions
J. Longo Galea, D. Martin, E. Oswald, D. Page, M. Stam, M. Tunstall
In 2013, Standaert \emph{et al.} proposed the notion of simulatable leakage to connect theoretical leakage resilience with the practice of side channel attacks. Their use of simulators, based on physical devices, to support proofs of leakage resilience allows verification of underlying assumptions: the indistinguishability game, involving real vs. simulated leakage, can be `played' by an evaluator. Using a concrete, block cipher based leakage resilient PRG and a high-level simulator definition (based on concatenating two partial leakage traces), they included detailed reasoning why said simulator (for AES-128) resists state-of-the-art side channel attacks. In this paper, we demonstrate a distinguisher against their simulator and thereby falsify their hypothesis. Our distinguishing technique, which is evaluated using concrete implementations of the Standaert \emph{et al.} simulator on several platforms, is based on `tracking' consistency (resp. identifying simulator {\em in}consistencies) in leakage traces by means of cross-correlation. In an attempt to rescue the approach, we propose several alternative simulator definitions based on splitting traces at points of low intrinsic cross-correlation. Unfortunately, these come with significant caveats, and we conclude that the most natural way of producing simulated leakage is by using the underlying construction `as is' (but with a random key).
Last updated:  2014-06-03
Fully Key-Homomorphic Encryption, Arithmetic Circuit ABE, and Compact Garbled Circuits
Dan Boneh, Craig Gentry, Sergey Gorbunov, Shai Halevi, Valeria Nikolaenko, Gil Segev, Vinod Vaikuntanathan, Dhinakaran Vinayagamurthy
We construct the first (key-policy) attribute-based encryption (ABE) system with short secret keys: the size of keys in our system depends only on the depth of the policy circuit, not its size. Our constructions extend naturally to arithmetic circuits with arbitrary fan-in gates thereby further reducing the circuit depth. Building on this ABE system we obtain the first reusable circuit garbling scheme that produces garbled circuits whose size is the same as the original circuit plus an additive poly(k,d) bits, where k is the security parameter and d is the circuit depth. Save the additive poly(k,d) factor, this is the best one could hope for. All previous constructions incurred a multiplicative poly(k) blowup. As another application, we obtain (single key secure) functional encryption with short secret keys. We construct our attribute-based system using a mechanism we call fully key-homomorphic encryption which is a public-key system that lets anyone translate a ciphertext encrypted under a public-key x into a ciphertext encrypted under the public-key (f(x),f) of the same plaintext, for any efficiently computable f. We show that this mechanism gives an ABE with short keys. Security is based on the subexponential hardness of the learning with errors problem. We also present a second (key-policy) ABE, using multilinear maps, with short ciphertexts: an encryption to an attribute vector x is the size of x plus poly(k,d) additional bits. This gives a reusable circuit garbling scheme where the size of the garbled input is short, namely the same as that of the original input plus an additive poly(k,d) term.
Last updated:  2014-05-22
Graph-theoretic design and analysis of key predistribution schemes
Michelle Kendall, Keith M. Martin
Key predistribution schemes for resource-constrained networks are methods for allocating symmetric keys to devices in such a way as to provide an efficient trade-off between key storage, connectivity and resilience. While there have been many suggested constructions for key predistribution schemes, a general understanding of the design principles on which to base such constructions is somewhat lacking. Indeed even the tools from which to develop such an understanding are currently limited, which results in many relatively ad hoc proposals in the research literature. It has been suggested that a large edge-expansion coefficient in the key graph is desirable for efficient key predistribution schemes. However, attempts to create key predistribution schemes from known expander graph constructions have only provided an extreme in the trade-off between connectivity and resilience: namely, they provide perfect resilience at the expense of substantially lower connectivity than can be achieved with the same key storage. Our contribution is two-fold. First, we prove that many existing key predistribution schemes produce key graphs with good expansion. This provides further support and justification for their use, and confirms the validity of expansion as a sound design principle. Second, we propose the use of incidence graphs and concurrence graphs as tools to represent, design and analyse key predistribution schemes. We show that these tools can lead to helpful insights and new constructions.
Last updated:  2014-05-27
Optimizing Information Set Decoding Algorithms to Attack Cyclosymmetric MDPC Codes
Ray Perlner
The most important drawback to code-based cryptography has historically been its large key sizes. Recently, several promising approaches have been proposed to reduce keysizes. In particular, significant keysize reduction has been achieved by using structured, but non-algebraic codes, such as quasi-cyclic (QC) Moderate Density Parity Check (MDPC) codes. Biasi et al. propose further reducing the keysizes of code-based schemes using cyclosymmetric (CS) codes. Biasi et al. analyze the complexity of attacking their scheme using standard information-set-decoding attacks. However, the research presented here shows that information set decoding algorithms can be modified, by choosing the columns of the information set in a way that takes advantage of the added symmetry. The result is an attack that significantly reduces the security of the proposed CS-MDPC schemes to the point that they no longer offer an advantage in keysize over QC-MDPC schemes of the same security level.
Last updated:  2014-05-22
Folding Alternant and Goppa Codes with Non-Trivial Automorphism Groups
Jean-Charles Faugère, Ayoub Otmani, Ludovic Perret, Frédéric de Portzamparc, Jean-Pierre Tillich
The main practical limitation of the McEliece public-key encryption scheme is probably the size of its key. A famous trend to overcome this issue is to focus on subclasses of alternant/Goppa codes with a non-trivial automorphism group. Such codes then display symmetries allowing compact parity-check or generator matrices. For instance, a key-reduction is obtained by taking quasi-cyclic (QC) or quasi-dyadic (QD) alternant/Goppa codes. We show that the use of such symmetric alternant/Goppa codes in cryptography introduces a fundamental weakness. It is indeed possible to reduce the key-recovery on the original symmetric public code to the key-recovery on a (much) smaller code that no longer has symmetries. This result is obtained thanks to a new operation on codes called folding that exploits the knowledge of the automorphism group. This operation consists in adding the coordinates of codewords which belong to the same orbit under the action of the automorphism group. The advantage is twofold: the reduction factor can be as large as the size of the orbits, and it preserves a fundamental property: folding the dual of an alternant (resp. Goppa) code provides the dual of an alternant (resp. Goppa) code. A key point is to show that all the existing constructions of alternant/Goppa codes with symmetries follow a common principle of taking codes whose support is globally invariant under the action of affine transformations (by building upon prior works of T. Berger and A. Dür). This enables not only to present a unified view but also to generalize the construction of QC, QD and even quasi-monoidic (QM) Goppa codes. All in all, our results can be harnessed to boost up any key-recovery attack on McEliece systems based on symmetric alternant or Goppa codes, and in particular algebraic attacks.
Last updated:  2014-05-20
Multi-Vendor PayWord with Payment Approval
Andrea Huszti
One of the most well-known micropayment schemes is the PayWord scheme. It is designed to be one-vendor, so if we apply it for multiple vendors, it does not protect against double spending. We extend the PayWord scheme so that it supports shopping at multiple vendors without an online broker or an online secure database. The proposed credit-based system uses one hash chain; hence, besides the secret signature key, only the seed and a random value need to be securely stored. Our scheme is analyzed in the applied pi calculus: we prove that it fulfills payment approval, secure payment authorization, secrecy of payment information and unreusability.
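A toy version of the hash-chain mechanics (with the signatures and the scheme's multi-vendor bookkeeping omitted) may help fix ideas; the chain length and hash function below are arbitrary illustrative choices.

```python
# Toy PayWord-style hash chain. The user commits (via a signature,
# omitted here) to the chain root w0 = H^n(seed); paying i coins means
# revealing w_i = H^(n-i)(seed), which anyone can check by hashing it
# back i times to w0.
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, n: int):
    """Return [seed = H^0, H^1(seed), ..., H^n(seed)]; the last is w0."""
    chain = [seed]
    for _ in range(n):
        chain.append(H(chain[-1]))
    return chain

def verify_payment(w0: bytes, w_i: bytes, i: int) -> bool:
    x = w_i
    for _ in range(i):
        x = H(x)
    return x == w0

n = 100
chain = make_chain(b"secret seed", n)
w0 = chain[-1]
# spend 7 coins: reveal the 7th preimage of w0
assert verify_payment(w0, chain[-8], 7)
```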
Last updated:  2014-05-20
Secret and Verifiable Delegated Voting for Wide Representation
Yefim Leifman
This paper combines cryptographic voting and web page ranking and proves that it is possible to hold elections so as not to limit a voter by a list of candidates, to benefit from voter's personal experience in dealing with people, to make wide and proportional representation, and to achieve secrecy, including incoercibility, and verifiability of cryptographic voting systems.
Last updated:  2014-05-19
Distributed Smooth Projective Hashing and its Application to Two-Server PAKE
Franziskus Kiefer, Mark Manulis
Smooth projective hash functions have been used as building block for various cryptographic applications, in particular for password-based authentication. In this work we propose the extended concept of distributed smooth projective hash functions where the computation of the hash value is distributed across $n$ parties and show how to instantiate the underlying approach for languages consisting of Cramer-Shoup ciphertexts. As an application of distributed smooth projective hashing we build a new framework for the design of two-server password authenticated key exchange protocols, which we believe can help to "explain" the design of earlier two-server password authenticated key exchange protocols.
Last updated:  2014-05-19
Zerocash: Decentralized Anonymous Payments from Bitcoin
Eli Ben-Sasson, Alessandro Chiesa, Christina Garman, Matthew Green, Ian Miers, Eran Tromer, Madars Virza
Bitcoin is the first digital currency to see widespread adoption. While payments are conducted between pseudonyms, Bitcoin cannot offer strong privacy guarantees: payment transactions are recorded in a public decentralized ledger, from which much information can be deduced. Zerocoin (Miers et al., IEEE S&P 2013) tackles some of these privacy issues by unlinking transactions from the payment's origin. Yet, it still reveals payments' destinations and amounts, and is limited in functionality. In this paper, we construct a full-fledged ledger-based digital currency with strong privacy guarantees. Our results leverage recent advances in zero-knowledge Succinct Non-interactive ARguments of Knowledge (zk-SNARKs). First, we formulate and construct decentralized anonymous payment schemes (DAP schemes). A DAP scheme enables users to directly pay each other privately: the corresponding transaction hides the payment's origin, destination, and transferred amount. We provide formal definitions and proofs of the construction's security. Second, we build Zerocash, a practical instantiation of our DAP scheme construction. In Zerocash, transactions are less than 1 kB and take under 6 ms to verify --- orders of magnitude more efficient than the less-anonymous Zerocoin and competitive with plain Bitcoin.
Last updated:  2014-05-19
A Simple Cast-as-Intended E-Voting Protocol by Using Secure Smart Cards
Helger Lipmaa
We propose a simple cast-as-intended remote e-voting protocol where the security is based on the use of secure (and trusted) smart cards that incorporate in-card numeric keyboards and LCD displays, and can perform a limited number of cryptographic operations (like encryption, signing, and random number generation). The protocol, while very simple, is significantly more secure (in the sense of ``cast-as-intended'') and convenient to use than the e-voting protocol currently used in Norway. The protocol is developed primarily with the idea of deploying it in Estonia within the next $3$ to $10$ years. Since in Estonia a vast majority of the population already has ID-cards with digital signing and authentication functionality, and the use of ID-cards is a required prerequisite to participate in Estonian e-voting anyway, our assumption of every voter having a secure hardware token makes sense in this concrete context.
Last updated:  2016-07-20
One-Way Functions and (Im)perfect Obfuscation
Ilan Komargodski, Tal Moran, Moni Naor, Rafael Pass, Alon Rosen, Eylon Yogev
A program obfuscator takes a program and outputs a "scrambled" version of it, where the goal is that the obfuscated program will not reveal much about its structure beyond what is apparent from executing it. There are several ways of formalizing this goal. Specifically, in indistinguishability obfuscation, first defined by Barak et al. (CRYPTO 2001), the requirement is that the results of obfuscating any two functionally equivalent programs (circuits) will be computationally indistinguishable. Recently, a fascinating candidate construction for indistinguishability obfuscation was proposed by Garg et al. (FOCS 2013). This has led to a flurry of discovery of intriguing constructions of primitives and protocols whose existence was not previously known (for instance, fully deniable encryption by Sahai and Waters, STOC 2014). Most of them explicitly rely on additional hardness assumptions, such as one-way functions. Our goal is to get rid of this extra assumption. We cannot argue that indistinguishability obfuscation of all polynomial-time circuits implies the existence of one-way functions, since if $P = NP$, then program obfuscation (under the indistinguishability notion) is possible. Instead, the ultimate goal is to argue that if $P \neq NP$ and program obfuscation is possible, then one-way functions exist. Our main result is that if $NP \not\subseteq ioBPP$ and there is an efficient (even imperfect) indistinguishability obfuscator, then there are one-way functions. In addition, we show that the existence of an indistinguishability obfuscator implies (unconditionally) the existence of SZK-arguments for $NP$. This, in turn, provides an alternative version of our main result, based on the assumption of hard-on-the-average $NP$ problems. To get some of our results we need obfuscators for simple programs such as 3CNF formulas.
Last updated:  2014-09-12
Time-Memory Trade-offs for Index Calculus in Genus 3
Kim Laine, Kristin Lauter
In this paper, we present a variant of Diem's $\widetilde{O}(q)$ index calculus algorithm to attack the discrete logarithm problem (DLP) in Jacobians of genus $3$ non-hyperelliptic curves over a finite field $\mathbb{F}_q$. We implement this new variant in C++ and study the complexity in both theory and practice, making the logarithmic factors and constants hidden in the $\widetilde{O}$-notation precise. Our variant improves the computational complexity at the cost of a moderate increase in memory consumption, but we also improve the computational complexity even when we limit the memory usage to that of Diem's original algorithm. Finally, we examine how parallelization can help to reduce both the memory cost per computer and the running time for our algorithms.
Last updated:  2014-05-19
Private Database Access With HE-over-ORAM Architecture
Craig Gentry, Shai Halevi, Charanjit Jutla, Mariana Raykova
Enabling private database queries is an important and challenging research problem with many real-world applications. The goal is for the client to obtain the results of its queries without learning anything else about the database, while the outsourced server learns nothing about the queries or data, including access patterns. The secure-computation-over-ORAM architecture offers a promising approach to this problem, permitting sub-linear time processing of the queries (after pre-processing) without compromising security. In this work we examine the feasibility of this approach, focusing specifically on secure-computation protocols based on somewhat-homomorphic encryption (SWHE). We devised and implemented secure two-party protocols in the semi-honest model for the path-ORAM protocol of Stefanov et al. This provides access by index or keyword, which we extend (via pre-processing) to limited conjunction queries and range queries. Some of our sub-protocols may be interesting in their own right, such as our new protocols for encrypted comparisons and blinded permutations. We implemented our protocols on top of the HElib homomorphic encryption library. Our basic single-threaded implementation takes about 30 minutes to process a query on a database with $2^{22}$ records and 120-bit long keywords, providing a cause for optimism about the viability of this direction, and we expect a better optimized implementation to be much faster.
Last updated:  2014-09-04
Toward Robust Hidden Volumes using Write-Only Oblivious RAM
Erik-Oliver Blass, Travis Mayberry, Guevara Noubir, Kaan Onarlioglu
With sensitive data being increasingly stored on mobile devices and laptops, hard disk encryption is more important than ever. In particular, being able to plausibly deny that a hard disk contains certain information is a very useful and interesting research goal. However, it has been known for some time that existing ``hidden volume'' solutions, like TrueCrypt, fail in the face of an adversary who is able to observe the contents of a disk on multiple, separate occasions. In this work, we explore more robust constructions for hidden volumes and present HIVE, which is resistant to more powerful adversaries with multiple-snapshot capabilities. In pursuit of this, we propose the first security definitions for hidden volumes, and prove HIVE secure under these definitions. At the core of HIVE, we design a new write-only Oblivious RAM. We show that, when only hiding writes, it is possible to achieve ORAM with optimal O(1) communication complexity and only poly-logarithmic user memory. This is a significant improvement over existing work and an independently interesting result. We go on to show that our write-only ORAM is specially equipped to provide hidden volume functionality with low overhead and significantly increased security. Finally, we implement HIVE as a Linux kernel block device to show both its practicality and usefulness on existing platforms.
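The core idea behind a write-only ORAM can be caricatured in a few lines: physical write locations are chosen uniformly at random and tracked in client-side metadata, so the server-visible write pattern is independent of the logical one. The Python toy below models only this idea; encryption, the stash, constant client memory, and HIVE's actual construction are all omitted and the class name is hypothetical.

```python
# Toy model of the write-only ORAM idea: every logical write lands in a
# fresh, uniformly random physical slot. The server sees only writes at
# random positions, independent of the logical access pattern.
import random

class ToyWriteOnlyORAM:
    def __init__(self, n_physical):
        self.store = [None] * n_physical        # server-side array
        self.free = set(range(n_physical))      # client-side metadata
        self.pos = {}                           # logical addr -> slot

    def write(self, addr, data):
        slot = random.choice(tuple(self.free))  # uniform, addr-independent
        self.free.remove(slot)
        if addr in self.pos:
            self.free.add(self.pos[addr])       # stale copy stays put; in a
                                                # real scheme it is ciphertext
        self.pos[addr] = slot
        self.store[slot] = data                 # the only server-visible write

    def read(self, addr):                       # reads are local lookups here;
        return self.store[self.pos[addr]]       # only writes are hidden

oram = ToyWriteOnlyORAM(16)
for a, d in [(0, "x"), (1, "y"), (0, "z")]:
    oram.write(a, d)
assert oram.read(0) == "z" and oram.read(1) == "y"
```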
Last updated:  2015-09-07
Solving Linear Equations Modulo Unknown Divisors: Revisited
Yao Lu, Rui Zhang, Liqiang Peng, Dongdai Lin
We revisit the problem of finding small solutions to a collection of linear equations modulo an unknown divisor $p$ of a known composite integer $N$. In CaLC 2001, Howgrave-Graham introduced an efficient algorithm for solving univariate linear equations; since then, two forms of multivariate generalizations have been considered in the context of cryptanalysis: modular multivariate linear equations by Herrmann and May (Asiacrypt'08) and simultaneous modular univariate linear equations by Cohn and Heninger (ANTS'12). Their algorithms have many important applications in cryptanalysis, such as the factoring with known bits problem, fault attacks on RSA signatures, analysis of the approximate GCD problem, etc. In this paper, by introducing multiple parameters, we propose several generalizations of the above equations. The motivation behind these extensions is that some attacks on RSA variants can be reduced to solving these generalized equations, while previous algorithms do not apply. We present new approaches to solve them, and compared with previous methods, our new algorithms are more flexible and especially suitable for some cases. Applying our algorithms, we obtain the best analytical/experimental results for some attacks on RSA and its variants; specifically, \begin{itemize} \item We improve May's results (PKC'04) on the small secret exponent attack on the RSA variant with moduli $N = p^rq$ ($r\geq 2$). \item We experimentally improve Boneh et al.'s algorithm (Crypto'98) for factoring $N=p^rq$ ($r\geq 2$) with known bits. \item We significantly improve the Jochemsz-May attack (Asiacrypt'06) on Common Prime RSA. \item We extend Nitaj's result (Africacrypt'12) on weak encryption exponents of RSA and CRT-RSA. \end{itemize}
Last updated:  2015-03-19
Proposing Individualization of the design of cryptographic hardware accelerators as countermeasure against structure and side channel analysis
Zoya Dyka, Thomas Basmer, Christian Wittke, Peter Langendoerfer
Side channel and fault attacks exploit the fact that the behavior of crypto implementations can be observed, yielding hints that simplify key recovery. These attacks are normally prepared by analyzing devices identical to the real target. Here we propose to individualize the design of cryptographic devices in order to prevent attacks that rely on identical devices. We implemented three different designs that provide exactly the same cryptographic function, i.e. an ECC kP multiplication. The synthesis and power simulation results show clear differences in the area consumed as well as in the power traces. We envision that this type of protection mechanism is relevant, e.g., for wireless sensor networks, from which devices can easily be stolen for further analysis in the lab.
Last updated:  2014-05-15
Formal Analysis of Chaumian Mix Nets with Randomized Partial Checking
Ralf Kuesters, Tomasz Truderung, Andreas Vogt
Mix nets with randomized partial checking (RPC mix nets) were introduced by Jakobsson, Juels, and Rivest as particularly simple and efficient verifiable mix nets. These mix nets have been used in several implementations of prominent e-voting systems to provide vote privacy and verifiability. In RPC mix nets, higher efficiency is traded for a lower level of privacy and verifiability. However, these mix nets have never undergone a rigorous formal analysis. Recently, Khazaei and Wikstroem even pointed out several severe problems in the original proposal and in implementations of RPC mix nets in e-voting systems, both for so-called re-encryption and Chaumian RPC mix nets. While Khazaei and Wikstroem proposed several fixes, the security status of Chaumian RPC mix nets (with the fixes applied) has been left open; re-encryption RPC mix nets, as they suggest, should not be used at all. In this paper, we provide the first formal security analysis of Chaumian RPC mix nets. We propose security definitions that allow one to measure the level of privacy and verifiability RPC mix nets offer, and then, based on these definitions, carry out a rigorous analysis. Altogether, our results show that these mix nets provide a reasonable level of privacy and verifiability, and that they remain an interesting option for use in e-voting systems.
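The quantitative flavour of RPC's verifiability guarantee is easy to see in a toy simulation (ours, not the paper's model): each link of a mix server is opened with probability 1/2, so a server that manipulates $m$ messages escapes detection with probability $2^{-m}$.

    import random

    def rpc_detection_rate(cheated_msgs, trials=100_000):
        """Fraction of audits that catch a server which replaced
        `cheated_msgs` messages, when each link is opened w.p. 1/2."""
        caught = sum(
            any(random.random() < 0.5 for _ in range(cheated_msgs))
            for _ in range(trials))
        return caught / trials

    # Approaches 1 - 2**-m: one replaced message survives half the audits,
    # ten replaced messages survive roughly one audit in a thousand.
    print(rpc_detection_rate(1), rpc_detection_rate(10))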
Last updated:  2014-05-15
A Strong and Efficient Certificateless Digital Signature Scheme
Mohammed Alfateh Hassouna, Mohsin Hashim
This paper extends the certificateless public key infrastructure model proposed by Hassouna et al. with a new digital signature scheme that provides true non-repudiation. The proposed signature scheme is short and efficient, and has the additional strength that the KGC plays no part in the signature generation/verification process, so any compromise of the KGC does not affect the non-repudiation service of the system. Furthermore, the KGC cannot forge signatures even by (temporarily) replacing the user's public key.
Last updated:  2014-12-19
Public-Coin Concurrent Zero-Knowledge in Logarithmic Rounds
Yi Deng
We construct $O(\log^{1+\epsilon} n)$-round \emph{public-coin} concurrent zero knowledge arguments for NP from standard (against any polynomial-time adversary) collision-resistant hash functions, for arbitrarily small constant $\epsilon$. Our construction is \emph{straight-line simulatable}. This is the first public-coin concurrent zero knowledge protocol based on a standard, long-studied assumption that (almost) achieves the best known round complexity of its private-coin counterpart [Prabhakaran et al., FOCS 02]. Previous public-coin constructions required either a polynomial number of rounds [Goyal, STOC 13], newly introduced assumptions [Chung et al., FOCS 13], or a stronger model [Canetti et al., TCC 13]. This result has strong consequences: it yields the first (almost) logarithmic-round simultaneously resettable arguments for NP and the first (almost) logarithmic-round concurrent multi-party computation in the single-input setting. These results significantly improve over the polynomial round complexity of the best known protocols based on standard assumptions in both cases. Our technical contribution is two-fold. First, we introduce a simulation strategy called \emph{clearance} that yields a simulation tree of very \emph{special combinatorial structure} and enables us to instantiate Barak's protocol [Barak, FOCS 01] with the recent quasi-linear PCP construction of Ben-Sasson et al. [STOC 13] to obtain logarithmic round complexity; second, we show how to modify Barak's protocol so that the soundness of the overall construction does not rely on the (implicit/explicit) proof-of-knowledge property of the underlying universal argument/PCP system, which in turn allows us to benefit from progress on short PCP systems of more general types \emph{without assuming stronger/superpolynomial hardness}.
Last updated:  2015-02-19
A Tamper and Leakage Resilient von Neumann Architecture
Sebastian Faust, Pratyay Mukherjee, Jesper Buus Nielsen, Daniele Venturi
We present a universal framework for tamper and leakage resilient computation on a von Neumann Random Access Architecture (RAM for short). The RAM has one CPU that accesses a storage, which we call the disk. The disk is subject to leakage and tampering, and so is the bus connecting the CPU to the disk. We assume that the CPU is leakage- and tamper-free. For a fixed value of the security parameter, the CPU has constant size; therefore the code of the program to be executed must be stored on the disk, i.e., we consider a von Neumann architecture. The most prominent consequence of this is that the code of the program executed will be subject to tampering. We construct a compiler for this architecture which transforms any keyed primitive into a RAM program where the key is encoded and stored on the disk along with the program to evaluate the primitive on that key. Our compiler only assumes the existence of a so-called continuous non-malleable code, and it only needs black-box access to such a code. No further (cryptographic) assumptions are needed. This in particular means that given an information-theoretic code, the overall construction is information-theoretically secure. Although the CPU is required to be tamper- and leakage-proof, its design is independent of the actual primitive being computed, and its internal storage is non-persistent, i.e., all secret registers are reset between invocations. Hence, our result can be interpreted as reducing the problem of shielding arbitrarily complex computations to protecting a single, simple, yet universal component.
Last updated:  2014-05-15
Related Randomness Attacks for Public Key Encryption
Kenneth G. Paterson, Jacob C. N. Schuldt, Dale L. Sibborn
Several recent and high-profile incidents give cause to believe that randomness failures of various kinds are endemic in deployed cryptographic systems. In the face of this, it behoves cryptographic researchers to develop methods to immunise - to the extent that it is possible - cryptographic schemes against such failures. This paper considers the practically-motivated situation where an adversary is able to force a public key encryption scheme to reuse random values, and functions of those values, in encryption computations involving adversarially chosen public keys and messages. It presents a security model appropriate to this situation, along with variants of this model. It also provides necessary conditions on the set of functions used in order to attain this security notion, and demonstrates that these conditions are also sufficient in the Random Oracle Model. Further standard-model constructions achieving weaker security notions are also given, with these constructions having interesting connections to other primitives, including: pseudo-random functions that are secure in the related key attack setting; Correlated Input Secure hash functions; and public key encryption schemes that are secure in the auxiliary input setting (this being a special type of leakage resilience).
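One of the simplest failures such a model captures can be seen in textbook ElGamal: if the adversary can force the encryptor to reuse the ephemeral randomness $r$, encryption becomes deterministic and plaintext equality leaks. A minimal sketch with toy parameters (our illustration, not one of the paper's constructions):

    import secrets

    p = 2**127 - 1       # a Mersenne prime; toy parameters, not a safe group
    g = 3
    x = secrets.randbelow(p - 2) + 1          # receiver's secret key
    y = pow(g, x, p)                          # receiver's public key

    def elgamal_enc(m, r):                    # r should be fresh per message
        return (pow(g, r, p), m * pow(y, r, p) % p)

    r = secrets.randbelow(p - 2) + 1          # ...but here reuse is forced
    c1 = elgamal_enc(1234, r)
    c2 = elgamal_enc(1234, r)
    c3 = elgamal_enc(9999, r)
    assert c1 == c2 and c1 != c3    # ciphertext equality leaks plaintext equality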
Last updated:  2014-05-15
Private Predictive Analysis on Encrypted Medical Data
Joppe W. Bos, Kristin Lauter, Michael Naehrig
Increasingly, confidential medical records are being stored in data centers hosted by hospitals or large companies. As sophisticated algorithms for predictive analysis on medical data continue to be developed, it is likely that, in the future, more and more computation will be done on private patient data. While encryption provides a tool for assuring the privacy of medical information, it limits the functionality for operating on such data. Conventional encryption methods used today offer only very limited possibilities, if any, to operate on encrypted data without decrypting it first. Homomorphic encryption provides a tool for handling such computations on encrypted data, without decrypting the data, and without even needing the decryption key. In this paper, we discuss possible application scenarios for homomorphic encryption in order to ensure the privacy of sensitive medical data. We describe how to privately conduct predictive analysis tasks on encrypted data using homomorphic encryption. As a proof of concept, we present a working implementation of a prediction service running in the cloud (hosted on Microsoft's Windows Azure), which takes as input private encrypted health data and returns the probability of suffering from cardiovascular disease in encrypted form. Since the cloud service uses homomorphic encryption, it makes this prediction while handling only encrypted data, learning nothing about the submitted confidential medical data.
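As a hedged sketch of the workflow only (the paper uses a lattice-based somewhat-homomorphic scheme; we substitute textbook Paillier, which is additively homomorphic, with toy parameters): the cloud can evaluate the linear part $w \cdot x + b$ of a predictor on encrypted features without ever holding the key.

    import math, secrets

    def lcm(a, b): return a * b // math.gcd(a, b)

    p, q = 1000003, 1000033       # toy primes; real keys are 1024+ bits
    n = p * q; n2 = n * n
    lam = lcm(p - 1, q - 1)
    g = n + 1
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)   # Python 3.8+ modular inverse

    def enc(m):
        r = secrets.randbelow(n - 1) + 1          # w.h.p. coprime to toy n
        return pow(g, m, n2) * pow(r, n, n2) % n2

    def dec(c):
        return (pow(c, lam, n2) - 1) // n * mu % n

    def add(c1, c2): return c1 * c2 % n2          # Enc(m1 + m2)
    def smul(c, k):  return pow(c, k, n2)         # Enc(k * m)

    # Cloud side: encrypted features, public fixed-point weights.
    features = [enc(v) for v in [120, 80, 1]]     # e.g. blood pressure, age, flag
    weights  = [3, 2, 5]
    acc = enc(0)
    for c, w in zip(features, weights):
        acc = add(acc, smul(c, w))
    assert dec(acc) == 3 * 120 + 2 * 80 + 5 * 1   # only the key holder learns this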
Last updated:  2014-05-18
SHADOW NUMBERS PUBLIC KEY ENCRYPTION
John Almeida
The public key encryption presented in this paper involves two values, called the shadow values of a base value, with the base value derived from the two shadow values. Whenever two positive whole values (a first and a second shadow value) are multiplied and the value 1 is subtracted from their product, a first base value is derived; it is the first base value of the two shadow values. This first base value may be divided by any divisor yielding a positive whole quotient and zero remainder; every value used in such a division, together with its quotient, is a base value for the chosen shadow-value pair. One of the base values is then chosen along with the two shadow values, and this triplet of values constitutes both the public key used to encrypt a message and the private key used to decrypt the encrypted message.
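Reading the abstract literally, the scheme rests on one arithmetic relation, which a toy instance makes explicit (our reconstruction from the abstract alone; the paper gives no further detail here): every divisor $b$ of $s_1 s_2 - 1$ makes the two shadows mutual modular inverses.

    # Toy instance, reconstructed only from the abstract's description.
    s1, s2 = 7, 13
    base = s1 * s2 - 1                  # 90, the "first base value"
    divisors = [b for b in range(2, base + 1) if base % b == 0]
    assert all((s1 * s2) % b == 1 for b in divisors)   # s1*s2 = 1 (mod b)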
Last updated:  2015-01-13
LCPR: High Performance Compression Algorithm for Lattice-Based Signatures
Rachid El Bansarkhani, Johannes Buchmann
Many lattice-based signature schemes have been proposed in recent years. However, all of them suffer from huge signature sizes compared to their classical counterparts. We present a novel and generic construction of a lossless compression algorithm for Schnorr-like signatures utilizing publicly accessible randomness. Conceptually, exploiting public randomness in order to reduce the signature size has never before been considered in cryptographic applications. We illustrate the applicability of our compression algorithm using the example of a current state-of-the-art signature scheme due to Gentry et al. (GPV scheme) instantiated with the efficient trapdoor construction of Micciancio and Peikert. This scheme benefits from increasing the main security parameter $n$, which is positively correlated with the compression rate measuring the amount of storage savings. For instance, GPV signatures admit improvement factors of approximately $\lg n$, implying compression rates of about $65$\% at a security level of about 100 bits, without any loss of information or decrease in security, meaning that the original signature can always be recovered from its compressed state. As a further result, we propose a multi-signer compression strategy for the case where more than one signer agrees to share the same source of public randomness. Such a strategy of bundling compressed signatures together into an aggregate has many advantages over the single-signer approach.
Last updated:  2016-06-15
An optimal representation for the trace zero subgroup
Elisa Gorla, Maike Massierer
We give an optimal-size representation for the elements of the trace zero subgroup of the Picard group of an elliptic or hyperelliptic curve of any genus, with respect to a field extension of any prime degree. The representation is via the coefficients of a rational function, and it is compatible with scalar multiplication of points. We provide efficient compression and decompression algorithms, and complement them with implementation results. We discuss in detail the practically relevant cases of small genus and extension degree, and compare with the other known compression methods.
Last updated:  2015-01-04
How to Choose Interesting Points for Template Attacks?
Guangjun Fan, Yongbin Zhou, Hailong Zhang, Dengguo Feng
Template attacks are widely accepted to be the most powerful side-channel attacks from an information-theoretic point of view. For template attacks, many papers have suggested a guideline for choosing interesting points that remains unproven: one should choose only one interesting point per clock cycle. Many different methods of choosing interesting points have been introduced, but it remains unclear which approach leads to the best classification performance. In this paper, we comprehensively evaluate and compare the classification performance of template attacks under different methods of choosing interesting points. Evaluation results show that the classification performance of template attacks differs markedly depending on the method used; the CPA-based and SOST-based methods lead to the best classification performance. Moreover, we find that some methods of choosing interesting points produce the same results under the same circumstances. Finally, we verify that the guideline for choosing interesting points for template attacks is correct by presenting a new way of conducting template attacks.
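As an illustration of the CPA-based selection the evaluation favours (a sketch under our own assumptions about trace layout; `cycle_len` and the Hamming-weight leakage model are illustrative choices, not the paper's setup): correlate every time sample with a leakage model of the targeted intermediate value, then keep the highest-correlation sample per clock cycle, per the guideline.

    import numpy as np

    def hamming_weight(values):
        v = np.asarray(values, dtype=np.uint8)[:, None]
        return np.unpackbits(v, axis=1).sum(axis=1)

    def select_interesting_points(traces, values, cycle_len):
        """traces: (n_traces, n_samples) array; values: targeted intermediate
        byte per trace. Returns one point per clock cycle, chosen as the
        sample with maximal |Pearson correlation| to HW(value)."""
        model = hamming_weight(values).astype(float)
        model -= model.mean()
        centered = traces - traces.mean(axis=0)
        rho = np.abs(centered.T @ model) / (
            np.linalg.norm(centered, axis=0) * np.linalg.norm(model) + 1e-12)
        return [s + int(np.argmax(rho[s:s + cycle_len]))
                for s in range(0, traces.shape[1], cycle_len)]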
Last updated:  2015-01-12
Machine Learning Classification over Encrypted Data
Raphael Bost, Raluca Ada Popa, Stephen Tu, Shafi Goldwasser
Machine learning classification is used in numerous settings nowadays, such as medical or genomics predictions, spam detection, face recognition, and financial predictions. Due to privacy concerns in some of these applications, it is important that the data and the classifier remain confidential. In this work, we construct three major classification protocols that satisfy this privacy constraint: hyperplane decision, Naïve Bayes, and decision trees. These protocols may also be combined with AdaBoost. They rely on a library of building blocks for constructing classifiers securely, and we demonstrate the versatility of this library by constructing a face detection classifier. Our protocols are efficient, taking milliseconds to a few seconds to perform a classification when running on real medical datasets.
Last updated:  2016-05-14
Noncentralized Cryptocurrency with No Blockchain
qianxiaochao
We give an explicit definition of decentralization and show that decentralization is almost impossible at the current stage. We propose a new framework for a noncentralized cryptocurrency system, assuming the existence of a weak adversary for a bank alliance. It abandons the mining process and the blockchain, and removes historical transactions from data synchronization. We propose a consensus algorithm named “Converged Consensus” for a noncentralized cryptocurrency system.