All papers in 2014 (Page 6 of 1029 results)

Last updated:  2014-07-08
Leakage-Resilient Signatures with Graceful Degradation
Jesper Buus Nielsen, Daniele Venturi, Angela Zottarel
We investigate new models and constructions which allow leakage-resilient signatures secure against existential forgeries, where the signature is much shorter than the leakage bound. Current models of leakage-resilient signatures against existential forgeries demand that the adversary cannot produce a new valid message/signature pair $(m, \sigma)$ even after receiving some $\lambda$ bits of leakage on the signing key. If $\vert \sigma \vert \le \lambda$, then the adversary can just choose to leak a valid signature $\sigma$, and hence signatures must be larger than the allowed leakage, which is impractical as the goal often is to have large signing keys to allow a lot of leakage. We propose a new notion of leakage-resilient signatures against existential forgeries where we demand that the adversary cannot produce $n = \lfloor \lambda / \vert \sigma \vert \rfloor + 1$ distinct valid message/signature pairs $(m_1, \sigma_1), \ldots, (m_n, \sigma_n)$ after receiving $\lambda$ bits of leakage. If $\lambda = 0$, this is the usual notion of existential unforgeability. If $1 < \lambda < \vert \sigma \vert$, this is essentially the usual notion of existential unforgeability in the presence of leakage. In addition, for $\lambda \ge \vert \sigma \vert$ our new notion still guarantees the best possible, namely that the adversary cannot produce more forgeries than he could have leaked, hence graceful degradation. Besides the game-based notion sketched above, we also consider a variant which is more simulation-based, in that it asks that from the leakage a simulator can ``extract'' a set of $n-1$ messages (to be thought of as the messages corresponding to the leaked signatures), such that no adversary can produce forgeries outside this small set. The game-based notion is easier to prove for a concrete instantiation of a signature scheme. The simulation-based notion is easier to use when leakage-resilient signatures are used as components in larger protocols. We prove that the two notions are equivalent and present a generic construction of signature schemes meeting our new notion, as well as a concrete instantiation under fairly standard assumptions. We further give an application to leakage-resilient identification.
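The quantitative threshold in this definition is simple to evaluate; as a minimal sketch (with hypothetical parameter values, not taken from the paper), the following computes the forgery bound $n$:

```python
# Graceful-degradation threshold n = floor(lambda / |sigma|) + 1: leaking
# lambda bits lets the adversary exfiltrate floor(lambda / |sigma|) whole
# signatures "for free", so security only demands that one more pair is
# out of reach. Parameter values below are hypothetical.
def forgery_bound(leak_bits: int, sig_bits: int) -> int:
    return leak_bits // sig_bits + 1

print(forgery_bound(0, 2048))        # 1: plain existential unforgeability
print(forgery_bound(1000, 2048))     # 1: leakage shorter than one signature
print(forgery_bound(10000, 2048))    # 5: four signatures leak, a fifth must not
```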
Last updated:  2014-07-08
Groups With Two Generators Having Unsolvable Word Problem And Presentations of Mihailova Subgroups
Xiaofeng Wang, Chen Xu, Guo Li, Hanling Lin
A presentation of a group with two generators having unsolvable word problem and an explicit countable presentation of the Mihailova subgroup of F_2×F_2 with a finite number of generators are given, where the Mihailova subgroup of F_2×F_2 enjoys an unsolvable subgroup membership problem. One can then use the presentation to create entities' private keys in a public-key cryptosystem.
Last updated:  2015-01-06
Good is Not Good Enough: Deriving Optimal Distinguishers from Communication Theory
Annelie Heuser, Olivier Rioul, Sylvain Guilley
We find mathematically optimal side-channel distinguishers by looking at the side-channel as a communication channel. Our methodology can be adapted to any given scenario (device, signal-to-noise ratio, noise distribution, leakage model, etc.). When the model is known and the noise is Gaussian, the optimal distinguisher outperforms CPA and covariance. However, we show that CPA is optimal when the model is only known on a proportional scale. For non-Gaussian noise, we obtain different optimal distinguishers, one for each noise distribution. When the model is imperfectly known, we consider the scenario of a weighted sum of the sensitive variable bits where the weights are unknown and drawn from a normal law. In this case, our optimal distinguisher performs better than the classical linear regression analysis.
Last updated:  2014-07-07
Curve41417: Karatsuba revisited
Daniel J. Bernstein, Chitchanok Chuengsatiansup, Tanja Lange
This paper introduces constant-time ARM Cortex-A8 ECDH software that (1) is faster than the fastest ECDH option in the latest version of OpenSSL but (2) achieves a security level above 2^200 using a prime above 2^400. For comparison, this OpenSSL ECDH option is not constant-time and has a security level of only 2^80. The new speeds are achieved in a quite different way from typical prime-field ECC software: they rely on a synergy between Karatsuba's method and choices of radix smaller than the CPU word size.
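The paper's Cortex-A8 code is not reproduced here; purely as an illustration of Karatsuba's method itself, here is the textbook recursion on integers split at a 26-bit limb boundary (an arbitrary stand-in for a radix smaller than the word size):

```python
# Textbook Karatsuba: one multiplication of two halves is traded for
# additions, giving 3 recursive products instead of 4. The 26-bit limb
# threshold is an arbitrary stand-in for "radix smaller than the CPU word
# size"; this is plain Python, not the paper's Cortex-A8 software.
def karatsuba(x: int, y: int, limb_bits: int = 26) -> int:
    if x < (1 << limb_bits) or y < (1 << limb_bits):
        return x * y                          # single-limb base case
    half = max(x.bit_length(), y.bit_length()) // 2
    x1, x0 = x >> half, x & ((1 << half) - 1)
    y1, y0 = y >> half, y & ((1 << half) - 1)
    lo = karatsuba(x0, y0, limb_bits)
    hi = karatsuba(x1, y1, limb_bits)
    mid = karatsuba(x0 + x1, y0 + y1, limb_bits) - lo - hi
    return (hi << (2 * half)) + (mid << half) + lo

a, b = 2**411 - 33, 2**405 + 17               # operands of roughly the paper's size
assert karatsuba(a, b) == a * b
```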
Last updated:  2014-07-07
Differential Analysis on Block Cipher PRIDE
Jingyuan Zhao, Xiaoyun Wang, Meiqin Wang, Xiaoyang Dong
The lightweight block cipher PRIDE, designed by Albrecht et al., appeared at CRYPTO 2014. The designers claim that their method of constructing the linear layer is good in both security and efficiency. In this paper, we find 16 different 2-round iterative characteristics utilizing weaknesses of the S-box and the linear layer, and construct several 15-round differentials. Based on one of these differentials, we launch a differential attack on 18-round PRIDE. The data, time and memory complexities are $2^{60}$, $2^{66}$ and $2^{64}$, respectively.
Last updated:  2014-07-07
Constructing hyper-bent functions from Boolean functions with the Walsh spectrum taking the same value twice
Chunming Tang, Yanfeng Qi
Hyper-bent functions, a subclass of bent functions, attract much interest, and it remains elusive to completely characterize them. Most known hyper-bent functions are Boolean functions with Dillon exponents, and they are often characterized by special values of Kloosterman sums. In this paper, we present a method for characterizing hyper-bent functions with Dillon exponents. A class of hyper-bent functions with Dillon exponents over $\mathbb{F}_{2^{2m}}$ can be characterized by a Boolean function over $\mathbb{F}_{2^m}$ whose Walsh spectrum takes the same value twice. Further, we show several classes of hyper-bent functions with Dillon exponents characterized by Kloosterman sum identities and the Walsh spectra of some common Boolean functions.
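For readers unfamiliar with Walsh spectra, the sketch below computes the spectrum of a small Boolean function via the fast Walsh-Hadamard transform; the example function is our own choice, and the paper's setting (trace functions over $\mathbb{F}_{2^{2m}}$ with Dillon exponents) is not implemented:

```python
# Walsh spectrum of an n-variable Boolean function via the fast
# Walsh-Hadamard transform: W_f(a) = sum_x (-1)^(f(x) + a.x).
# Example: f = x0*x1 + x2*x3 is bent on 4 variables, so every spectrum
# value is +-4 (= 2^(n/2)).
def walsh_spectrum(truth_table):
    w = [(-1) ** bit for bit in truth_table]      # signs (-1)^f(x)
    n, step = len(w), 1
    while step < n:                               # butterfly passes
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                w[j], w[j + step] = w[j] + w[j + step], w[j] - w[j + step]
        step *= 2
    return w

tt = [((x & 1) * ((x >> 1) & 1)) ^ (((x >> 2) & 1) * ((x >> 3) & 1))
      for x in range(16)]
print(walsh_spectrum(tt))                         # sixteen values, all +-4
```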
Last updated:  2014-07-07
Fully Secure and Fast Signing from Obfuscation
Kim Ramchen, Brent Waters
In this work we explore new techniques for building short signatures from obfuscation. Our goals are twofold. First, we would like to achieve short signatures with adaptive security proofs. Second, we would like to build signatures with fast signing, ideally significantly faster than comparable signatures that are not based on obfuscation. The goal here is to create an "imbalanced" scheme where signing is fast at the expense of slower verification. We develop new methods for achieving short and fully secure obfuscation-derived signatures. Our base signature scheme is built from punctured programming and makes a novel use of the "prefix technique" to guess a signature. We find that our initial scheme has slower performance than comparable algorithms (e.g. EC-DSA); the underlying reason is that the PRG is called l^2 times for security parameter l. To address this issue we construct a more efficient scheme by adapting the Goldreich-Goldwasser-Micali [GGM86] construction to form the basis for a new puncturable PRF. This puncturable PRF accepts variable-length inputs and has the property that evaluations on all prefixes of a message can be efficiently pipelined. Calls to the puncturable PRF by the signing algorithm therefore make fewer invocations of the underlying PRG, resulting in reduced signing costs. We evaluate our puncturable PRF based signature schemes using a variety of cryptographic candidates for the underlying PRG. We show that the resulting performance on message signing is competitive with that of widely deployed signature schemes.
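A rough sketch of the GGM tree idea that the puncturable PRF builds on (SHA-256 stands in for a length-doubling PRG, an assumption made only for illustration; the paper's variable-length, pipelined construction is not reproduced):

```python
# GGM-style puncturable PRF sketch. The key is the root of a binary tree;
# F(k, x) is the leaf reached by following the bits of x. Puncturing at x'
# hands out the sibling of each node on the path to x', which determines
# every leaf except the one at x'.
import hashlib

def prg(seed: bytes):
    """Length-doubling PRG stand-in: 32-byte seed -> two 32-byte children."""
    return (hashlib.sha256(seed + b"\x00").digest(),
            hashlib.sha256(seed + b"\x01").digest())

def eval_prf(key: bytes, x: int, n: int) -> bytes:
    node = key
    for i in reversed(range(n)):                  # walk root -> leaf
        node = prg(node)[(x >> i) & 1]
    return node

def puncture(key: bytes, x_punct: int, n: int):
    """Punctured key: the sibling at each level of the path to x_punct."""
    siblings, node = [], key
    for i in reversed(range(n)):
        left, right = prg(node)
        bit = (x_punct >> i) & 1
        siblings.append(right if bit == 0 else left)
        node = left if bit == 0 else right
    return siblings

def eval_punctured(siblings, x_punct: int, x: int, n: int) -> bytes:
    assert x != x_punct, "the punctured point stays hidden"
    for i in reversed(range(n)):
        if (x >> i) & 1 != (x_punct >> i) & 1:    # first diverging bit
            node = siblings[n - 1 - i]            # x's path enters this sibling
            for j in reversed(range(i)):          # continue down to the leaf
                node = prg(node)[(x >> j) & 1]
            return node

key, n = b"\x07" * 32, 8
pk = puncture(key, 0x5A, n)
assert eval_punctured(pk, 0x5A, 0x3C, n) == eval_prf(key, 0x3C, n)
```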
Last updated:  2014-07-07
Constrained Pseudorandom Functions: Verifiable and Delegatable
Nishanth Chandran, Srinivasan Raghuraman, Dhinakaran Vinayagamurthy
Constrained pseudorandom functions (introduced independently by Boneh and Waters (CCS 2013), Boyle, Goldwasser, and Ivan (PKC 2014), and Kiayias, Papadopoulos, Triandopoulos, and Zacharias (CCS 2013)), are pseudorandom functions (PRFs) that allow the owner of the secret key $k$ to compute a constrained key $k_f$, such that anyone who possesses $k_f$ can compute the output of the PRF on any input $x$ such that $f(x) = 1$ for some predicate $f$. The security requirement of constrained PRFs states that the PRF output must still look indistinguishable from random for any $x$ such that $f(x) = 0$. Boneh and Waters show how to construct constrained PRFs for the class of bit-fixing as well as circuit predicates. They explicitly left open the question of constructing constrained PRFs that are delegatable - i.e., constrained PRFs where the owner of $k_f$ can compute a constrained key $k_{f'}$ for a further restrictive predicate $f'$. Boyle, Goldwasser, and Ivan left open the question of constructing constrained PRFs that are also verifiable. Verifiable random functions (VRFs), introduced by Micali, Rabin, and Vadhan (FOCS 1999), are PRFs that allow the owner of the secret key $k$ to prove, for any input $x$, that $y$ indeed is the output of the PRF on $x$; the security requirement of VRFs states that the PRF output must still look indistinguishable from random, for any $x$ for which a proof is not given. In this work, we solve both the above open questions by constructing constrained pseudorandom functions that are simultaneously verifiable and delegatable.
Last updated:  2015-11-26
Adaptively Secure Puncturable Pseudorandom Functions in the Standard Model
Susan Hohenberger, Venkata Koppula, Brent Waters
We study the adaptive security of constrained PRFs in the standard model. We initiate our exploration with puncturable PRFs. A puncturable PRF family is a special class of constrained PRFs, where the constrained key is associated with an element $x'$ in the input domain. The key allows evaluation at all points $x\neq x'$. We show how to build puncturable PRFs with adaptive security proofs in the standard model that involve only polynomial loss to the underlying assumptions. Prior work had either super-polynomial loss or applied the random oracle heuristic. Our construction uses indistinguishability obfuscation and DDH-hard algebraic groups of composite order.
Last updated:  2015-01-14
Squares of Random Linear Codes
Ignacio Cascudo, Ronald Cramer, Diego Mirandola, Gilles Zémor
Given a linear code $C$, one can define the $d$-th power of $C$ as the span of all componentwise products of $d$ elements of $C$. A power of $C$ may quickly fill the whole space. Our purpose is to answer the following question: does the square of a code ``typically'' fill the whole space? We give a positive answer, for codes of dimension $k$ and length roughly $\frac{1}{2}k^2$ or smaller. Moreover, the convergence speed is exponential if the difference $k(k+1)/2-n$ is at least linear in $k$. The proof uses random coding and combinatorial arguments, together with algebraic tools involving the precise computation of the number of quadratic forms of a given rank, and the number of their zeros.
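A quick Monte Carlo illustration of the phenomenon (not the paper's proof): over GF(2) the componentwise product of codewords is a bitwise AND, so one can sample random generator matrices and check how often the pairwise products span the full space. Parameters below are arbitrary:

```python
# Does the square of a random binary [n, k] code fill F_2^n? The square
# code is spanned by pairwise products of the generators (including each
# generator with itself, since c*c = c over GF(2)).
import itertools, random

def rank_gf2(rows, n):
    rows, rank = rows[:], 0
    for bit in reversed(range(n)):
        pivot = next((i for i in range(rank, len(rows))
                      if (rows[i] >> bit) & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and (rows[i] >> bit) & 1:
                rows[i] ^= rows[rank]
        rank += 1
    return rank

def square_fills_space(k, n, trials=50):
    hits = 0
    for _ in range(trials):
        gen = [random.getrandbits(n) for _ in range(k)]
        prods = [a & b for a, b in
                 itertools.combinations_with_replacement(gen, 2)]
        hits += (rank_gf2(prods, n) == n)
    return hits / trials

print(square_fills_space(k=12, n=60))   # n < k(k+1)/2 = 78: close to 1.0
```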
Last updated:  2014-07-03
Realizing Pico: Finally No More Passwords!
Jens Hermans, Roel Peeters
In 2011 Stajano proposed Pico, a secure and easy-to-use alternative for passwords. Among the many proposals in this category, Pico stands out by being creative and convincing. However, the description as published leaves some details unspecified, and to the best of our knowledge the complete system has not yet been tested. This work presents detailed specifications and future-proof security protocols for Pico. Moreover, we present the first robust and efficient Pico implementation. Our implementation allows the Pico concept to mature further and can be used for large-scale usability evaluations at negligible cost.
Last updated:  2014-07-03
Cryptography from Compression Functions: The UCE Bridge to the ROM
Mihir Bellare, Viet Tung Hoang, Sriram Keelveedhi
This paper suggests and explores the use of UCE security for the task of turning VIL-ROM schemes into FIL-ROM ones. The benefits we offer over indifferentiability, the current leading method for this task, are the ability to handle multi-stage games and greater efficiency. The paradigm consists of (1) Showing that a VIL UCE function can instantiate the VIL RO in the scheme, and (2) Constructing the VIL UCE function given a FIL random oracle. The main technical contributions of the paper are domain extension transforms that implement the second step. Leveraging known results for the first step we automatically obtain FIL-ROM constructions for several primitives whose security notions are underlain by multi-stage games. Our first domain extender exploits indifferentiability, showing that although the latter does not work directly for multi-stage games it can be used indirectly, through UCE, as a tool for this end. Our second domain extender targets performance. It is parallelizable and shown through implementation to provide significant performance gains over indifferentiable domain extenders.
Last updated:  2015-03-12
On the Connection between Leakage Tolerance and Adaptive Security
Jesper Buus Nielsen, Daniele Venturi, Angela Zottarel
We revisit the context of leakage-tolerant interactive protocols as defined by Bitanski, Canetti and Halevi (TCC 2012). Our contributions can be summarized as follows: \begin{itemize} \item For the purpose of secure message transmission, any encryption protocol with message space $\mathcal{M}$ and secret key space $\mathcal{SK}$ tolerating poly-logarithmic leakage on the secret state of the receiver must satisfy $|\mathcal{SK}| \ge (1-\epsilon)|\mathcal{M}|$, for every $0 < \epsilon \le 1$, and if $|\mathcal{SK}| = |\mathcal{M}|$, then the scheme must use a fresh key pair to encrypt each message. \item More generally, we show that any $n$-party protocol tolerates leakage of $\approx \mathrm{poly}(\log \kappa)$ bits, where $\kappa$ is the security parameter, from one party at the end of the protocol execution, \emph{if and only if} the protocol has passive adaptive security against an adaptive corruption of one party at the end of the protocol execution. This shows that as soon as a little leakage is tolerated, one needs full adaptive security. \end{itemize} All our results can be based on the sole assumption that collision-resistant function ensembles exist.
Last updated:  2014-07-02
On the Classification of Finite Boolean Functions up to Fairness
Nikolaos Makriyannis
Two parties, $P_1$ and $P_2$, wish to jointly compute some function $f(x,y)$ where $P_1$ only knows $x$, whereas $P_2$ only knows $y$. Furthermore, and most importantly, the parties wish to reveal only what the output suggests. Function $f$ is said to be computable with \emph{complete fairness} if there exists a protocol computing $f$ such that whenever one of the parties obtains the correct output, then both of them do. The only protocol known to compute functions with complete fairness is the one of Gordon et al. (STOC 2008). The functions in question are finite, Boolean, and the output is shared by both parties. The classification of such functions up to fairness may be a first step towards the classification of all functionalities up to fairness. Recently, Asharov (TCC 2014) identified two families of functions that are computable with fairness using the protocol of Gordon et al. and another family for which the protocol (potentially) falls short. Surprisingly, these families account for almost all finite Boolean functions. In this paper, we expand our understanding of what can be computed fairly with the protocol of Gordon et al. In particular, we fully describe which functions the protocol computes fairly and which it (potentially) does not. Furthermore, we present a new class of functions for which fair computation is outright impossible. Finally, we confirm and expand Asharov's observation regarding the fairness of finite Boolean functions: almost all functions $f:X\times Y\rightarrow \{0,1\}$ for which $|X|\neq |Y|$ are fair, whereas almost all functions for which $|X|= |Y|$ are not.
Last updated:  2015-06-17
Ideal Social Secret Sharing Using Birkhoff Interpolation Method
Nasrollah Pakniat, Ziba Eslami, Mehrdad Nojoumian
Last updated:  2014-11-16
On Constrained Implementation of Lattice-based Cryptographic Primitives and Schemes on Smart Cards
Ahmad Boorghany, Siavash Bayat Sarmadi, Rasool Jalili
Most lattice-based cryptographic schemes with a security proof suffer from large key sizes and heavy computations. This is also true for the simpler case of authentication protocols used on smart cards, a very constrained computing environment. Recent progress on ideal lattices has significantly improved efficiency and made it possible to implement practical lattice-based cryptography on constrained devices. However, to the best of our knowledge, no previous attempts were made to implement lattice-based schemes on smart cards. In this paper, we provide the results of our implementation of several state-of-the-art lattice-based authentication protocols on smart cards and on a microcontroller widely used in smart cards. Our results show that only a few of the proposed lattice-based authentication protocols can be implemented using the limited resources of such constrained devices; however, cutting-edge ones are efficient enough to be used practically on smart cards. Moreover, we have implemented the fast Fourier transform (FFT) and discrete Gaussian sampling with different typical parameter sets, as well as versatile lattice-based public-key encryption schemes. These results offer noticeable insights which help to design or optimize lattice-based schemes for constrained devices.
Last updated:  2014-07-01
RSA meets DPA: Recovering RSA Secret Keys from Noisy Analog Data
Noboru Kunihiro, Junya Honda
We discuss how to recover RSA secret keys from noisy analog data obtained through physical attacks such as cold boot and side channel attacks. Many studies have focused on recovering correct secret keys from noisy binary data. Obtaining noisy binary keys typically involves first observing the analog data and then obtaining the binary data through a quantization process that discards much information pertaining to the correct keys. In this paper, we propose two algorithms for recovering correct secret keys from noisy analog data, which are generalized variants of Paterson et al.'s algorithm. Our algorithms fully exploit the analog information. More precisely, consider observed data which follows the Gaussian distribution with mean $(-1)^b$ and variance $\sigma^2$ for a secret key bit $b$. We propose a polynomial time algorithm based on the maximum likelihood approach and show that it can recover secret keys if $\sigma < 1.767$. The first algorithm works only if the noise distribution is explicitly known. The second algorithm does not need to know the explicit form of the noise distribution. We implement the first algorithm and verify its effectiveness.
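The per-bit core of the maximum-likelihood approach is easy to state: for $x \sim N((-1)^b, \sigma^2)$ the likelihood-ratio test reduces to the sign of $x$. The sketch below simulates only this per-bit decision, not the paper's full key-recovery algorithm or its $\sigma < 1.767$ bound:

```python
# Per-bit maximum-likelihood decision for analog leakage x ~ N((-1)^b, s^2):
# comparing the N(+1, s^2) and N(-1, s^2) densities reduces to sign(x).
import random

def ml_bit(x: float) -> int:
    return 0 if x > 0 else 1          # b = 0 encodes +1, b = 1 encodes -1

sigma, trials, errors = 1.0, 100_000, 0
for _ in range(trials):
    b = random.getrandbits(1)
    x = (-1) ** b + random.gauss(0, sigma)
    errors += (ml_bit(x) != b)
print(f"bit error rate at sigma={sigma}: {errors / trials:.3f}")  # ~0.159
```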
Last updated:  2014-12-03
Rmind: a tool for cryptographically secure statistical analysis
Dan Bogdanov, Liina Kamm, Sven Laur, Ville Sokk
Secure multi-party computation platforms are becoming more and more practical. This has paved the way for privacy-preserving statistical analysis using secure multi-party computation. Simple statistical analysis functions have been emerging here and there in the literature, but no comprehensive system has been compiled. We describe and implement the most used statistical analysis functions in the privacy-preserving setting, including simple statistics, the t-test, the $\chi^{2}$ test, Wilcoxon tests and linear regression. We give descriptions of the privacy-preserving algorithms and benchmark results that show the feasibility of our solution.
Last updated:  2014-06-30
Constructing CCA-secure predicate encapsulation schemes from CPA-secure schemes and universal one-way hash functions
Johannes Blömer, Gennadij Liske
We present a new transformation of chosen-plaintext secure predicate encryption schemes with public index into chosen-ciphertext secure schemes. Our construction requires only a universal one-way hash function and is selectively secure in the standard model. The transformation is not generic but can be applied to various existing schemes constructed from bilinear groups. Using common structural properties of these schemes we provide an efficient and simple transformation without overhead in form of one-time signatures or message authentication codes as required in the known generic transformations.
Last updated:  2014-06-30
A Probabilistic Algebraic Attack on the Grain Family of Stream Cipher
Pratish Datta, Dibyendu Roy, Sourav Mukhopadhyay
In 2005, Hell, Johansson and Meier submitted a stream cipher proposal named Grain v1 to the eSTREAM call for stream cipher proposals, and it became one of the eSTREAM finalists in the hardware category. The output function of Grain v1 connects its 160-bit internal state, divided equally between an LFSR and an NFSR, using a non-linear filter function in a complex way. Over the last years many cryptanalysts identified several weaknesses in Grain v1. As a result, in 2011 the inventors modified Grain v1 and published a new version named Grain-128a, which has a similar structure but a 256-bit internal state and optional authentication; it is the latest version of the Grain family and resists all known attacks on Grain v1. However, both these ciphers are quite resistant to classical algebraic attacks due to the rapid growth of the degree of the key-stream equations in subsequent clockings caused by the NFSR. This paper presents a probabilistic algebraic attack on both these Grain versions. The basic idea of our attack is to develop separate probabilistic equations for the LFSR and the NFSR bits from each key-stream equation. Surprisingly, it turns out that in the case of Grain-128a our proposed equations hold with almost sure probability, which enables sure retrieval of the LFSR bits. We also outline a technique to reduce the growth of the degree of the equations involving the NFSR bits for Grain v1. Further, we highlight that the concept of a probabilistic algebraic attack as proposed in this paper can be considered a generic attack strategy against any stream cipher with an output function structured like that of the Grain family.
Last updated:  2014-06-30
Privacy preserving delegated word search in the cloud
Kaoutar Elkhiyaoui, Melek Onen, Refik Molva
In this paper, we address the problem of privacy preserving delegated word search in the cloud. We consider a scenario where a data owner outsources its data to a cloud server and delegates the search capabilities to a set of third party users. In the face of semi-honest cloud servers, the data owner does not want to disclose any information about the outsourced data; yet it still wants to benefit from the highly parallel cloud environment. In addition, the data owner wants to ensure that delegating the search functionality to third parties does not allow these third parties to jeopardize the confidentiality of the outsourced data, neither does it prevent the data owner from efficiently revoking the access of these authorized parties. To these ends, we propose a word search protocol that builds upon techniques of keyed hash functions, oblivious pseudo-random functions and Cuckoo hashing to construct a searchable index for the outsourced data, and uses private information retrieval of short information to guarantee that word search queries do not reveal any information about the data to the cloud server. Moreover, we combine attribute-based encryption and oblivious pseudo-random functions to achieve an efficient revocation of authorized third parties. The proposed scheme is suitable for the cloud as it can be easily parallelized.
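As background on one building block, the sketch below shows plain (unkeyed) Cuckoo hashing with two candidate positions per word; the protocol's keyed hashes, OPRF and PIR layers are not modeled:

```python
# Cuckoo hash table with two hash positions per item: an insert evicts
# the current occupant to its alternate slot, giving worst-case O(1)
# lookups -- the property that makes it attractive for a searchable index.
import hashlib

class Cuckoo:
    def __init__(self, size=64, max_kicks=32):
        self.size, self.max_kicks = size, max_kicks
        self.table = [None] * size

    def _slots(self, word: str):
        h = hashlib.sha256(word.encode()).digest()
        return (int.from_bytes(h[:8], "big") % self.size,
                int.from_bytes(h[8:16], "big") % self.size)

    def insert(self, word: str):
        slot = self._slots(word)[0]
        for _ in range(self.max_kicks):
            if self.table[slot] is None:
                self.table[slot] = word
                return
            word, self.table[slot] = self.table[slot], word  # evict occupant
            h1, h2 = self._slots(word)
            slot = h2 if slot == h1 else h1   # evictee moves to its other slot
        raise RuntimeError("cycle: rehash with new keys or grow the table")

    def contains(self, word: str) -> bool:
        h1, h2 = self._slots(word)
        return self.table[h1] == word or self.table[h2] == word

c = Cuckoo()
for w in ["alice", "bob", "carol"]:
    c.insert(w)
assert c.contains("bob") and not c.contains("dave")
```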
Last updated:  2014-06-30
Reversing Stealthy Dopant-Level Circuits
Takeshi Sugawara, Daisuke Suzuki, Ryoichi Fujii, Shigeaki Tawa, Ryohei Hori, Mitsuru Shiozaki, Takeshi Fujino
A successful detection of the stealthy dopant-level circuit (trojan), proposed by Becker et al. at CHES 2013, is reported. Contrary to an assumption made by Becker et al., dopant types in the active region are visible with either scanning electron microscopy (SEM) or focused ion beam (FIB) imaging. The successful measurement is explained by an LSI failure analysis technique called passive voltage contrast. The experiments are conducted by measuring a dedicated chip. The chip uses the diffusion programmable device: an anti-reverse-engineering technique based on the same principle as the stealthy dopant-level trojan. The chip is delayered down to the contact layer, and images are taken with (1) an optical microscope, (2) SEM, and (3) FIB. As a result, the four possible dopant-well combinations, namely (i) p+/n-well, (ii) p+/p-well, (iii) n+/n-well and (iv) n+/p-well, are distinguishable in the SEM images. Partial but sufficient detection is also achieved with FIB. Although the stealthy dopant-level circuits are visible, they can still make detection harder, because the contact layer must be measured. We show that imaging the contact layer is at most 16 times more expensive than imaging a metal layer in terms of the number of images.
Last updated:  2015-10-10
How to Generate and use Universal Samplers
Dennis Hofheinz, Tibor Jager, Dakshita Khurana, Amit Sahai, Brent Waters, Mark Zhandry
The random oracle is an idealization that allows us to model a hash function as an oracle that will output a uniformly random string given any input. We introduce the notion of a universal sampler scheme that extends the notion of a random oracle, to a method of sampling securely from arbitrary distributions. We describe several applications that provide a natural motivation for this notion; these include generating the trusted parameters for many schemes from just a single trusted setup. We further demonstrate the versatility of universal samplers by showing how they give rise to simple constructions of identity-based encryption and multiparty key exchange. In particular, we construct adaptively secure non-interactive multiparty key exchange in the random oracle model based on indistinguishability obfuscation, obtaining the first known construction of adaptively secure NIKE without complexity leveraging. We give a solution that shows how to transform any random oracle into a universal sampler scheme, based on indistinguishability obfuscation. At the heart of our construction and proof is a new technique we call “delayed backdoor programming” that we believe will have other applications.
Last updated:  2014-06-26
Finding Roots in GF(p^n) with the Successive Resultant Algorithm
Christophe Petit
The problem of solving polynomial equations over finite fields has many applications in cryptography and coding theory. In this paper, we consider polynomial equations over a ``large'' finite field with a ``small'' characteristic. We introduce a new algorithm for solving this type of equations, called the \emph{Successive Resultants Algorithm} (SRA) in the sequel. SRA is radically different from previous algorithms for this problem, yet it is conceptually simple. A straightforward implementation using Magma was able to beat the built-in function \emph{Roots} for some parameters. These preliminary results encourage a more detailed study of SRA and its applications. Moreover, we point out that an extension of SRA to the multivariate case would have an important impact.
Last updated:  2014-06-26
On the quaternion $\ell$-isogeny path problem
David Kohel, Kristin Lauter, Christophe Petit, Jean-Pierre Tignol
Let $\mathcal{O}$ be a maximal order in a definite quaternion algebra over $\mathbb{Q}$ of prime discriminant $p$, and $\ell$ a small prime. We describe a probabilistic algorithm which, for a given left $\mathcal{O}$-ideal, computes a representative in its left ideal class of $\ell$-power norm. In practice the algorithm is efficient and, subject to heuristics on expected distributions of primes, runs in expected polynomial time. This breaks the underlying problem for a quaternion analog of the Charles-Goren-Lauter hash function, and has security implications for the original CGL construction in terms of supersingular elliptic curves.
Last updated:  2015-02-18
A Provable Security Analysis of Intel's Secure Key RNG
Thomas Shrimpton, R. Seth Terashima
We provide the first provable-security analysis of the Intel Secure Key hardware RNG (ISK-RNG), versions of which have appeared in Intel processors since late 2011. To model the ISK-RNG, we generalize the PRNG-with-input primitive, introduced by Dodis et al. at CCS'13 for their /dev/[u]random analysis. The concrete security bounds we uncover tell a mixed story. We find that ISK-RNG lacks backward-security altogether, and that the forward-security bound for the ``truly random'' bits fetched by the RDSEED instruction is potentially worrisome. On the other hand, we are able to prove stronger forward-security bounds for the pseudorandom bits fetched by the RDRAND instruction. En route to these results, our main technical efforts focus on the way in which ISK-RNG employs CBC-MAC as an entropy extractor.
Last updated:  2014-06-26
Efficient Hidden Vector Encryption with Constant-Size Ciphertext
Tran Viet Xuan Phuong, Guomin Yang, Willy Susilo
A Hidden Vector Encryption (HVE) scheme is a special type of anonymous identity-based encryption (IBE) scheme where the attribute string associated with the ciphertext or the user secret key can contain wildcards. In this paper, we introduce two constant-size ciphertext-policy hidden vector encryption (CP-HVE) schemes. Our first scheme is constructed on composite order bilinear groups, while the second one is built on prime order bilinear groups. Both schemes are proven secure in a selective security model which captures plaintext (or payload) and attribute hiding. To the best of our knowledge, our schemes are the first HVE constructions that can achieve constant-size ciphertext among all the existing HVE schemes.
Last updated:  2014-06-27
What's the Gist? Privacy-Preserving Aggregation of User Profiles
Igor Bilogrevic, Julien Freudiger, Emiliano De Cristofaro, Ersin Uzun
Over the past few years, online service providers have started gathering increasing amounts of personal information to build user profiles and monetize them with advertisers and data brokers. Users have little control of what information is processed and are often left with an all-or-nothing decision between receiving free services or refusing to be profiled. This paper explores an alternative approach where users only disclose an aggregate model -- the ``gist'' -- of their data. We aim to preserve data utility and simultaneously provide user privacy. We show that this approach can be efficiently supported by letting users contribute encrypted and differentially-private data to an aggregator. The aggregator combines encrypted contributions and can only extract an aggregate model of the underlying data. We evaluate our framework on a dataset of 100,000 U.S. users obtained from the U.S. Census Bureau and show that (i) it provides accurate aggregates with as little as 100 users, (ii) it generates revenue for both users and data brokers, and (iii) its overhead is appreciably low.
Last updated:  2015-08-27
WHIRLBOB, the Whirlpool based Variant of STRIBOB: Lighter, Faster, and Constant Time
Markku-Juhani O. Saarinen, Billy Bob Brumley
WHIRLBOB, also known as STRIBOBr2, is an AEAD (Authenticated Encryption with Associated Data) algorithm derived from STRIBOBr1 and the Whirlpool hash algorithm. WHIRLBOB/STRIBOBr2 is a second round candidate in the CAESAR competition. As with STRIBOBr1, the reduced-size Sponge design has a strong provable security link with a standardized hash algorithm. The new design utilizes only the LPS or $\rho$ component of Whirlpool in flexibly domain-separated BLNK Sponge mode. The number of rounds is increased from 10 to 12 as a countermeasure against Rebound Distinguishing attacks. The $8 \times 8$-bit S-Box used by Whirlpool and WHIRLBOB is constructed from $4 \times 4$-bit ``MiniBoxes''. We report on fast constant-time Intel SSSE3 and ARM NEON SIMD WHIRLBOB implementations that keep full miniboxes in registers and access them via SIMD shuffles. This is an efficient countermeasure against AES-style cache timing side-channel attacks. Another main advantage of WHIRLBOB over STRIBOBr1 (and most other AEADs) is its greatly reduced implementation footprint on lightweight platforms. On many lower-end microcontrollers the total software footprint of $\pi$+BLNK = WHIRLBOB AEAD is less than half a kilobyte. We also report an FPGA implementation that requires 4,946 logic units for a single round of WHIRLBOB, which compares favorably to 7,972 required for Keccak / Keyak on the same target platform. The relatively small S-Box gate count also enables efficient 64-bit bitsliced straight-line implementations. We finally present some discussion and analysis on the relationships between WHIRLBOB, Whirlpool, the Russian GOST Streebog hash, and the recent draft Russian Encryption Standard Kuznyechik.
Last updated:  2015-03-07
Verifiable and Secure Outsourcing Schemes of Modular Exponentiations Using One Untrusted Cloud Server and Their Application
Can Xiang, Chunming Tang
Modular exponentiation is one of the basic operations in most current cryptosystems. Under some algebraic or cryptographic assumptions, one can construct outsourcing schemes for modular exponentiations using two untrusted cloud servers, but such schemes cannot resist collusion between the two servers. However, no efficient scheme exists for outsourcing modular exponentiations to a single untrusted cloud server. In this paper, we present three secure outsourcing schemes using one untrusted cloud server, which enable a user to securely outsource exponentiations to a single cloud server. The first is a secure outsourcing scheme for fixed-base, variable-exponent modular exponentiations; the second is for variable-base, variable-exponent modular exponentiations; and the third is a secure outsourcing scheme for simultaneous modular exponentiations. Compared with other proposed schemes, ours are superior in both efficiency and checkability. Moreover, our schemes are secure without any cryptographic assumptions. Finally, we give two applications of our outsourcing schemes: one constructs an outsourcing scheme for Cramer-Shoup encryptions, and the other designs an outsourcing scheme for Schnorr signatures.
Last updated:  2015-01-23
Security and Efficiency Analysis of The Hamming Distance Computation Protocol Based On Oblivious Transfer
Mehmet Sabır Kiraz, Ziya Alper Genç, Süleyman Kardaş
Bringer, Chabanne and Patey proposed two cryptographic protocols for the computation of Hamming distance in the two-party setting. Their first scheme uses Oblivious Transfer and provides security in the semi-honest model. The other scheme uses Committed Oblivious Transfer (COT) and is claimed to provide full security in the malicious case. The proposed protocols have direct implications for biometric authentication schemes between a prover and a verifier where the verifier has biometric data of the users in plain form. In this paper, we show that their protocol against malicious adversaries is not actually secure. Namely, we show a generic attack such that a malicious user can compute a Hamming distance which is different from the actual value. For biometric authentication systems, this attack allows a malicious adversary to pass the authentication without knowledge of the honest user's input with at most O(n) complexity instead of O(2^n), where n is the input length. We propose an enhanced version of their protocol where this attack is eliminated. The security of our modified protocol is proved using the simulation-based paradigm. As for efficiency, the modified protocol utilizes Verifiable Oblivious Transfer (VOT), which excludes the commitments to outputs (as they exist in COT). We show that the use of VOT does not reduce the security of the protocol but improves its efficiency significantly.
Last updated:  2014-06-26
Lightweight Diffusion Layer from the $k^{th}$ root of the MDS Matrix
Souvik Kolay, Debdeep Mukhopadhyay
The Maximum Distance Separable (MDS) mapping used in cryptography deploys complex Galois field multiplications, which consume a lot of area in hardware, making it a costly primitive for lightweight cryptography. Recently, in the lightweight hash function PHOTON, a matrix denoted ‘Serial’, which requires less area for multiplication, is multiplied 4 times to achieve a lightweight MDS mapping. But no efficient method has been proposed so far to synthesize such a serial matrix or to find the required number of repeated multiplications for a given MDS mapping. In this paper, we first provide a generic algorithm to find a low-cost matrix which can be multiplied k times to obtain a given MDS mapping. Further, we optimize the algorithm for use in cryptography and show an explicit case study on the MDS mapping of the hash function PHOTON to obtain the ‘Serial’. The work also presents quite a few results which may be interesting for lightweight implementation.
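To illustrate the ‘Serial’ idea, the sketch below raises a serial matrix to the 4th power over GF(2^4); the last-row coefficients (4, 1, 2, 2) are borrowed from the related MixColumnsSerial of the LED cipher (cited here from memory, as an assumption), since PHOTON's own coefficients vary per variant. The paper's algorithm runs in the opposite direction, recovering a cheap k-th root from a given MDS matrix:

```python
# Powering a 'serial' matrix over GF(2^4), reduction poly x^4 + x + 1.
# Hardware applies the sparse serial step 4 times; the dense product
# below is the resulting MDS-style mapping.
def gf16_mul(a, b):
    r = 0
    for _ in range(4):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x10:
            a ^= 0x13                 # reduce modulo x^4 + x + 1
    return r

def mat_mul(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            acc = 0
            for k in range(n):
                acc ^= gf16_mul(A[i][k], B[k][j])
            C[i][j] = acc
    return C

serial = [[0, 1, 0, 0],
          [0, 0, 1, 0],
          [0, 0, 0, 1],
          [4, 1, 2, 2]]               # assumed coefficients, LED-style
M = serial
for _ in range(3):
    M = mat_mul(M, serial)            # M = serial^4: the dense mapping
for row in M:
    print([hex(v) for v in row])
```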
Last updated:  2015-09-05
NREPO: Normal Basis Recomputing with Permuted Operands
Xiaofei Guo, Debdeep Mukhopadhyay, Chenglu Jin, Ramesh Karri
Hardware implementations of cryptographic algorithms are vulnerable to natural and malicious faults. Concurrent Error Detection (CED) can be used to detect these faults. We present NREPO, a CED which does not require redundant computational resources in the design. Therefore, one can integrate it when computational resources are scarce or when the redundant resources are difficult to harness for CED. We integrate NREPO in a low-cost Advanced Encryption Standard (AES) implementation with an 8-bit datapath. We show that NREPO has 25 and 50 times lower fault miss rate than robust code and parity, respectively. The area, throughput, and power are compared with other CEDs on a 45nm ASIC. The hardware overhead of NREPO is 34.9%. The throughput and power are 271.6 Mbps and 1579.3 µW, respectively. One can also implement NREPO in other cryptographic algorithms.
Last updated:  2014-06-26
Security Pitfalls of a Provably Secure Identity-based Multi-Proxy Signature Scheme
Maryam Rajabzadeh Asaar, Mahmoud Salmasizadeh, Willy Susilo
An identity-based multi-proxy signature is a type of proxy signature in which the delegation of signing rights is distributed among a number of proxy signers. In this type of cryptographic primitive, cooperation of all proxy signers in the proxy group generates proxy signatures of roughly the same size as standard proxy signatures on behalf of the original signer, which is more efficient than transmitting individual proxy signatures. Since identity-based multi-proxy signatures are useful in distributed systems and grid computing, a provably secure identity-based multi-proxy signature scheme is desirable. In 2013, Sahu and Padhye proposed the first provably secure identity-based multi-proxy signature scheme in the random oracle model, and proved that their scheme is existentially unforgeable against adaptive chosen message and identity attacks. Unfortunately, in this paper, we show that their scheme is insecure. We present two forgery attacks on their scheme. Furthermore, their scheme is not resistant against the proxy key exposure attack. As a consequence, there is no provably secure identity-based multi-proxy signature scheme secure against proxy key exposure attack to date.
Last updated:  2014-06-26
Improved Short Lattice Signatures in the Standard Model
Léo Ducas, Daniele Micciancio
We present a signature scheme provably secure in the standard model (no random oracles) based on the worst-case complexity of approximating the Shortest Vector Problem in ideal lattices within polynomial factors. The distinguishing feature of our scheme is that it achieves short signatures (consisting of a single lattice vector) and relatively short public keys (consisting of O(log n) vectors). Previous lattice schemes in the standard model with similarly short signatures, due to Boyen (PKC 2010) and Micciancio and Peikert (Eurocrypt 2012), had substantially longer public keys consisting of Ω(n) vectors (even when implemented with ideal lattices). We also present a variant of our scheme that further reduces the public key size to just O(log log n) vectors and allows for a tighter security proof by making the signer stateful.
Last updated:  2015-08-05
Hardness of k-LWE and Applications in Traitor Tracing
San Ling, Duong Hieu Phan, Damien Stehle, Ron Steinfeld
We introduce the k-LWE problem, a Learning With Errors variant of the k-SIS problem. The Boneh-Freeman reduction from SIS to k-SIS suffers from an exponential loss in k. We improve and extend it to an LWE to k-LWE reduction with a polynomial loss in k, by relying on a new technique involving trapdoors for random integer kernel lattices. Based on this hardness result, we present the first algebraic construction of a traitor tracing scheme whose security relies on the worst-case hardness of standard lattice problems. The proposed LWE traitor tracing is almost as efficient as the LWE encryption. Further, it achieves public traceability, i.e., allows the authority to delegate the tracing capability to "untrusted" parties. To this aim, we introduce the notion of projective sampling family in which each sampling function is keyed and, with a projection of the key on a well chosen space, one can simulate the sampling function in a computationally indistinguishable way. The construction of a projective sampling family from k-LWE allows us to achieve public traceability, by publishing the projected keys of the users. We believe that the new lattice tools and the projective sampling family are quite general and may have applications in other areas.
Last updated:  2015-10-14
Arithmetic on Abelian and Kummer Varieties
David Lubicz, Damien Robert
A Kummer variety is obtained as the quotient of an abelian variety by the automorphism $(-1)$ acting on it. Kummer varieties can be seen as a higher dimensional generalisation of the $x$-coordinate representation of a point of an elliptic curve given by its Weierstrass model. Although there is no group law on the set of points of a Kummer variety, the multiplication of a point by a scalar still makes sense, since it is compatible with the action of $(-1)$, and can efficiently be computed with a Montgomery ladder. In this paper, we explain that the arithmetic of a Kummer variety is not limited to this scalar multiplication and is much richer than usually thought. We describe a set of composition laws which exhaust this arithmetic and explain how to compute them efficiently in the model of Kummer varieties provided by level $2$ theta functions. Moreover, we present concrete examples where these laws turn out to be useful in order to improve certain algorithms. As an application interesting, for instance, in cryptography, we explain how to recover the full group law of the abelian variety with a representation almost as compact and in many cases as efficient as the level $2$ theta functions model of Kummer varieties.
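As an elliptic-curve analogue of this scalar multiplication (the x-line of an elliptic curve is its Kummer variety), the sketch below implements the x-only Montgomery ladder on Curve25519 with the standard XZ formulas; the paper's level-2 theta-function model is not implemented here:

```python
# x-only Montgomery ladder on Curve25519. Two projective points (x2:z2)
# and (x3:z3) hold R0 and R1 with invariant R1 - R0 = base point, so a
# differential addition (which needs the x of the difference) always works.
P = 2**255 - 19
A24 = 121665                         # (486662 - 2) / 4 for Curve25519

def x25519_ladder(k: int, x1: int) -> int:
    x2, z2, x3, z3 = 1, 0, x1 % P, 1   # R0 = infinity, R1 = base point
    for i in reversed(range(255)):
        bit = (k >> i) & 1
        if bit:                        # swap so one formula serves both branches
            x2, x3, z2, z3 = x3, x2, z3, z2
        a, b = (x2 + z2) % P, (x2 - z2) % P
        c, d = (x3 + z3) % P, (x3 - z3) % P
        da, cb = d * a % P, c * b % P
        aa, bb = a * a % P, b * b % P
        e = (aa - bb) % P
        x3, z3 = (da + cb) ** 2 % P, x1 * (da - cb) ** 2 % P  # diff. addition
        x2, z2 = aa * bb % P, e * (aa + A24 * e) % P          # doubling
        if bit:
            x2, x3, z2, z3 = x3, x2, z3, z2
    return x2 * pow(z2, P - 2, P) % P  # affine x = X/Z

print(hex(x25519_ladder(7, 9)))        # x([7]B) for the standard base point x = 9
```

The ladder's uniform add-and-double structure per bit is what makes it attractive for constant-time implementations.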
Last updated:  2014-07-10
Fault attacks on pairing-based protocols revisited
Sanjit Chatterjee, Koray Karabina, Alfred Menezes
Several papers have studied fault attacks on computing a pairing value e(P,Q), where P is a public point and Q is a secret point. In this paper, we observe that these attacks are in fact effective only on a small number of pairing-based protocols, and even then only when the protocols are implemented with specific symmetric pairings. We demonstrate the effectiveness of the fault attacks on a public-key encryption scheme, an identity-based encryption scheme, and an oblivious transfer protocol when implemented with a symmetric pairing derived from a supersingular elliptic curve with embedding degree 2.
Last updated:  2014-12-03
Bootstrappable Identity-Based Fully Homomorphic Encryption
Michael Clear, Ciarán McGoldrick
It has been an open problem for a number of years to construct an identity-based fully homomorphic encryption (IBFHE) scheme (first mentioned by Naccache at CHES/CRYPTO 2010). At CRYPTO 2013, Gentry, Sahai and Waters largely settled the problem by presenting leveled IBFHE constructions based on the Learning With Errors problem. However their constructions are not bootstrappable, and as a result, are not ``pure'' IBFHE schemes. The major challenge with bootstrapping in the identity-based setting is that it must be possible to non-interactively derive from the public parameters an ``encryption'' of the secret key for an arbitrary identity. All presently-known leveled IBFHE schemes only allow bootstrapping if such an ``encryption'' of the secret key is supplied out-of-band. In this work, we present a ``pure'' IBFHE scheme from indistinguishability obfuscation, and extend the result to the attribute-based setting. Our attribute-based scheme is the first to support homomorphic evaluation on ciphertexts with different attributes. Finally, we characterize presently-known leveled IBFHE schemes with a view to developing a ``compiler'' from a leveled IBFHE scheme to a bootstrappable IBFHE scheme, and sufficient conditions are identified.
Last updated:  2014-09-10
Universally Composable secure TNC protocol based on IF-T binding to TLS
Shijun Zhao, Qianying Zhang, Yu Qin, Dengguo Feng
Trusted Network Connect (TNC) requires both user authentication and integrity validation of an endpoint before it connects to the internet or accesses some web service. However, as user authentication and integrity validation are usually done via independent protocols, TNC is vulnerable to the Man-in-the-Middle (MitM) attack. This paper analyzes TNC, which uses keys with the Subject Key Attestation Evidence (SKAE) extension to perform user authentication and the IF-T protocol binding to TLS to carry integrity measurement messages, in the Universally Composable (UC) framework. Our analysis shows that TNC using keys with the SKAE extension can resist the MitM attack. In this paper, we introduce two primitive ideal functionalities for TNC: an ideal dual-authentication certification functionality which binds messages and both the user and platform identities, and an ideal platform attestation functionality which formalizes the integrity verification of a platform. We prove that the SKAE extension protocol and the basic TCG platform attestation protocol, both of which are defined by TCG specifications, UC-realize the two primitive functionalities, respectively. In the end, we introduce a general ideal TNC functionality and prove that the complete TNC protocol, combining the IF-T binding to TLS (using keys with the SKAE extension for client authentication) and the basic TCG platform attestation protocol, securely realizes the TNC functionality in the hybrid model.
Last updated:  2014-06-24
A Genetic Algorithm for Searching Shortest Lattice Vector of SVP Challenge
Dan Ding, Guizhen Zhu, Xiaoyun Wang
In this paper, we propose a genetic algorithm for solving the shortest vector problem (SVP) based on sparse integer representations of short vectors in lattices as chromosomes, which, we prove, guarantees finding the shortest lattice vector under a Markov chain analysis. Moreover, we also suggest some improvements by introducing heuristic techniques: local search and heuristic pruning. The experimental results show that the genetic algorithm outperforms most enumeration algorithms in running time, and achieves the shortest vectors in larger dimensions under the SVP challenge benchmarks.
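A toy version of the search shape (our own simplification: dense small-coefficient chromosomes, squared-norm fitness, one-point crossover; the paper's sparse representations, local search and pruning are not reproduced):

```python
# Genetic search for short lattice vectors: a chromosome is an integer
# coefficient vector x, its fitness is ||B x||^2 (zero vector excluded).
import random

def fitness(B, x):
    v = [sum(x[i] * B[i][j] for i in range(len(B))) for j in range(len(B[0]))]
    return sum(c * c for c in v)               # squared norm of B x

def evolve(B, pop_size=60, gens=200, coeff=3):
    n = len(B)
    rand = lambda: [random.randint(-coeff, coeff) for _ in range(n)]
    key = lambda x: fitness(B, x) or float("inf")   # skip the zero vector
    pop = [rand() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=key)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n)
            child = a[:cut] + b[cut:]           # one-point crossover
            child[random.randrange(n)] += random.choice((-1, 1))  # mutation
            children.append(child)
        pop = parents + children
    best = min(pop, key=key)
    return best, fitness(B, best)

random.seed(1)
B = [[random.randint(-9, 9) for _ in range(8)] for _ in range(8)]
print(evolve(B))
```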
Last updated:  2015-07-01
Related-Key Security for Pseudorandom Functions Beyond the Linear Barrier
Michel Abdalla, Fabrice Benhamouda, Alain Passelègue, Kenneth G. Paterson
Related-key attacks (RKAs) concern the security of cryptographic primitives in the situation where the key can be manipulated by the adversary. In the RKA setting, the adversary's power is expressed through the class of related-key deriving (RKD) functions which the adversary is restricted to using when modifying keys. Bellare and Kohno (Eurocrypt 2003) first formalised RKAs and pinpointed the foundational problem of constructing RKA-secure pseudorandom functions (RKA-PRFs). To date there are few constructions for RKA-PRFs under standard assumptions, and it is a major open problem to construct RKA-PRFs for larger classes of RKD functions. We make significant progress on this problem. We first show how to repair the Bellare-Cash framework for constructing RKA-PRFs and extend it to handle the more challenging case of classes of RKD functions that contain claws. We apply this extension to show that a variant of the Naor-Reingold function already considered by Bellare and Cash is an RKA-PRF for a class of affine RKD functions under the DDH assumption, albeit with an exponential-time security reduction. We then develop a second extension of the Bellare-Cash framework, and use it to show that the same Naor-Reingold variant is actually an RKA-PRF for a class of degree $d$ polynomial RKD functions under the stronger decisional $d$-Diffie-Hellman inversion assumption. As a significant technical contribution, our proof of this result avoids the exponential-time security reduction that was inherent in the work of Bellare and Cash and in our first result.
Last updated:  2014-06-23
GGHLite: More Efficient Multilinear Maps from Ideal Lattices
Adeline Langlois, Damien Stehle, Ron Steinfeld
The GGH Graded Encoding Scheme, based on ideal lattices, is the first plausible approximation to a cryptographic multilinear map. Unfortunately, using the security analysis in the original paper, the scheme requires very large parameters to provide security for its underlying encoding re-randomization process. Our main contributions are to formalize, simplify and improve the efficiency and the security analysis of the re-randomization process in the GGH construction. This results in a new construction that we call GGHLite. In particular, we first lower the size of a standard deviation parameter of the re-randomization process of the original scheme from exponential to polynomial in the security parameter. This first improvement is obtained via a finer security analysis of the drowning step of re-randomization, in which we apply the Rényi divergence instead of the conventional statistical distance as a measure of distance between distributions. Our second improvement is to reduce the number of randomizers needed from $\Omega(n \log n)$ to $2$, where $n$ is the dimension of the underlying ideal lattices. These two contributions allow us to decrease the bit size of the public parameters from $O(\lambda^5 \log \lambda)$ for the GGH scheme to $O(\lambda \log^2 \lambda)$ in GGHLite, with respect to the security parameter $\lambda$ (for a constant multilinearity parameter $\kappa$).
Last updated:  2014-06-23
Binary Elligator Squared
Diego F. Aranha, Pierre-Alain Fouque, Chen Qian, Mehdi Tibouchi, Jean-Christophe Zapalowicz
Applications of elliptic curve cryptography to anonymity, privacy and censorship circumvention call for methods to represent uniformly random points on elliptic curves as uniformly random bit strings, so that, for example, ECC network traffic can masquerade as random traffic. At ACM CCS 2013, Bernstein et al. proposed an efficient approach, called ``Elligator,'' to solving this problem for arbitrary elliptic curve-based cryptographic protocols, based on the use of efficiently invertible maps to elliptic curves. Unfortunately, such invertible maps are only known to exist for certain classes of curves, excluding in particular curves of prime order and curves over binary fields. A variant of this approach, ``Elligator Squared,'' was later proposed by Tibouchi (FC 2014) supporting not necessarily injective encodings to elliptic curves (and hence a much larger class of curves), but, although some rough efficiency estimates were provided, it was not clear how an actual implementation of that approach would perform in practice. In this paper, we show that Elligator Squared can indeed be implemented very efficiently with a suitable choice of curve encodings. More precisely, we consider the binary curve setting (which was not discussed in Tibouchi's paper), and implement the Elligator Squared bit string representation algorithm based on a suitably optimized version of the Shallue--van de Woestijne characteristic 2 encoding, which we show can be computed using only multiplications, trace and half-trace computations, and a few inversions. On the fast binary curve of Oliveira et al. (CHES 2013), our implementation runs in an average of only 22850 Haswell cycles, making uniform bit string representations possible for a very reasonable overhead---much smaller even than Elligator on Edwards curves. As a side contribution, we also compare implementations of Elligator and Elligator Squared on a curve supported by Elligator, namely Curve25519. We find that generating a random point and its uniform bitstring representation is around 35-40% faster with Elligator for protocols using a fixed base point (such as static ECDH), but 30-35% faster with Elligator Squared in the case of a variable base point (such as ElGamal encryption). Both are significantly slower than our binary curve implementation.
Last updated:  2017-11-07
An Improved Truncated Differential Cryptanalysis of KLEIN
Shahram Rasoolzadeh, Zahra Ahmadian, Mahmood Salmasizadeh, Mohammad Reza Aref
KLEIN is a family of lightweight block ciphers proposed at RFIDSec 2011 by Gong et al. It has a 64-bit state and a 64-, 80- or 96-bit key size, which determines its version. It uses 16 identical 4-bit S-boxes combined with two AES MixColumn transformations in each round. This approach allows compact implementations of KLEIN in both low-end software and hardware. Such an innovative combination attracts the attention of cryptanalysts, and several security analyses have been published. The most successful one was presented at FSE'15 and was a truncated differential attack. It could attack up to 12, 13 and 14 rounds out of the total 12, 16 and 20 rounds for KLEIN-64, -80 and -96, respectively. In this paper, by finding more efficient truncated differential paths and slightly improving the key-recovery method, we present two new truncated differential attacks on KLEIN which recover the full secret key with better time and data complexities for the previously analyzed numbers of rounds. Also, using these truncated differential paths, we are able to attack up to 14 and 15 rounds for KLEIN-80 and -96, respectively, which are the highest numbers of rounds ever analyzed.
Last updated:  2014-07-21
Sealing the Leak on Classical NTRU Signatures
Carlos Aguilar Melchor, Xavier Boyen, Jean-Christophe Deneuville, Philippe Gaborit
Initial attempts to obtain lattice-based signatures were closely related to reducing a vector modulo the fundamental parallelepiped of a secret basis (like GGH \cite{GGH97}, or \texttt{NTRUSign} \cite{HHPSW02}). This approach leaked some information on the secret, namely the shape of the parallelepiped, which has been exploited in practical attacks \cite{NR06}. \texttt{NTRUSign} was an extremely efficient scheme, and thus there has been noticeable interest in developing countermeasures to the attacks, but with little success \cite{DN12}. In \cite{GPV08} Gentry, Peikert and Vaikuntanathan proposed a randomized version of Babai's nearest plane algorithm such that the distribution of a reduced vector modulo a secret parallelepiped only depends on the size of the basis used. Using this algorithm and generating large, close to uniform, public keys they managed to get provably secure GGH-like lattice-based signatures. Recently, Stehlé and Steinfeld obtained a provably secure scheme very close to \texttt{NTRUSign} \cite{SS13} (from a theoretical point of view). In this paper we present an alternative approach to seal the leak of \texttt{NTRUSign}. Instead of modifying the lattices and algorithms used, we compute a classic leaky \texttt{NTRUSign} signature and hide it with Gaussian noise using techniques present in Lyubashevsky's signatures. Our main contributions are thus a set of strong \texttt{NTRUSign} parameters, obtained by taking into account the latest known attacks against the scheme, and a statistical way to hide the leaky \texttt{NTRU} signature so that this particular instantiation of a CVP-based signature scheme becomes zero-knowledge and secure against forgeries, based on the worst-case hardness of the $\mathcal{\tilde{O}}(N^{1.5})$-Shortest Independent Vector Problem over \texttt{NTRU} lattices. Finally, we give a set of concrete parameters to gauge the efficiency of the obtained signature scheme.
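The hiding step borrows Lyubashevsky-style rejection sampling; below is a one-dimensional toy of that primitive (our own illustration with arbitrary parameters, not the paper's N-dimensional instantiation over NTRU lattices):

```python
# Rejection sampling: publish z = s + y with Gaussian y, accepting with
# probability D_sigma(z) / (M * D_{sigma,s}(z)), so that the accepted z
# is distributed (almost) as a centered Gaussian, independent of the
# secret-dependent value s. With sigma >> |s| the cap M is rarely hit.
import math, random

def hide(s: float, sigma: float, M: float = 3.0) -> float:
    while True:
        z = s + random.gauss(0, sigma)
        # density ratio exp((-2*z*s + s^2) / (2*sigma^2))
        ratio = math.exp((-2 * z * s + s * s) / (2 * sigma * sigma))
        if random.random() < ratio / M:
            return z

sigma = 40.0
zs = [hide(s=7.0, sigma=sigma) for _ in range(20_000)]
print(sum(zs) / len(zs))   # near 0, not near 7: z reveals (almost) nothing of s
```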
Last updated:  2015-10-02
Disjunctions for Hash Proof Systems: New Constructions and Applications
Michel Abdalla, Fabrice Benhamouda, David Pointcheval
Hash Proof Systems were first introduced by Cramer and Shoup (Eurocrypt'02) as a tool to construct efficient chosen-ciphertext-secure encryption schemes. Since then, they have found many other applications, including password authenticated key exchange, oblivious transfer, and zero-knowledge arguments. One of the aspects that makes hash proof systems so interesting and powerful is that they can be seen as implicit proofs of membership for certain languages. As a result, by extending the family of languages that they can handle, one often obtains new applications or new ways to understand existing schemes. In this paper, we show how to construct hash proof systems for the disjunction of languages defined generically over cyclic, bilinear, and multilinear groups. Among other applications, this enables us to construct the most efficient one-time simulation-sound (quasi-adaptive) non-interactive zero-knowledge arguments for linear languages over cyclic groups, the first one-round group password-authenticated key exchange without random oracles, the most efficient threshold structure-preserving chosen-ciphertext-secure encryption scheme, and the most efficient one-round password authenticated key exchange in the UC framework.
Last updated:  2014-08-25
Differentially Private Data Aggregation with Optimal Utility
Fabienne Eigner, Aniket Kate, Matteo Maffei, Francesca Pampaloni, Ivan Pryvalov
Computing aggregate statistics about user data is of vital importance for a variety of services and systems, but this practice has been shown to seriously undermine the privacy of users. Differential privacy has proved to be an effective tool to sanitize queries over a database, and various cryptographic protocols have been recently proposed to enforce differential privacy in a distributed setting, e.g., statistical queries on sensitive data stored on the user’s side. The widespread deployment of differential privacy techniques in real-life settings is, however, undermined by several limitations that existing constructions suffer from: they support only a limited class of queries, they pose a trade-off between privacy and utility of the query result, they are affected by the answer pollution problem, or they are inefficient. This paper presents PrivaDA, a novel design architecture for distributed differential privacy that leverages recent advances in SMPCs on fixed and floating point arithmetic to overcome the previously mentioned limitations. In particular, PrivaDA supports a variety of perturbation mechanisms (e.g., the Laplace, discrete Laplace, and exponential mechanisms) and it constitutes the first generic technique to generate noise in a fully distributed manner while maintaining optimal utility. Furthermore, PrivaDA does not suffer from the answer pollution problem. We demonstrate the efficiency of PrivaDA with a performance evaluation, and its expressiveness and flexibility by illustrating a variety of application scenarios such as privacy-preserving web analytics.
Last updated:  2014-06-26
Universally Composable Non-Interactive Key Exchange
Eduarda S. V. Freire, Julia Hesse, Dennis Hofheinz
We consider the notion of a non-interactive key exchange (NIKE). A NIKE scheme allows a party \(A\) to compute a common shared key with another party \(B\) from \(B\)'s public key and \(A\)'s secret key alone. This computation requires no interaction between \(A\) and \(B\), a feature which distinguishes NIKE from regular (i.e., interactive) key exchange not only quantitatively, but also qualitatively. Our first contribution is a formalization of NIKE protocols as ideal functionalities in the Universal Composability (UC) framework. As we will argue, existing NIKE definitions (all of which are game-based) do not support a modular analysis either of NIKE schemes themselves, or of the use of NIKE schemes. We provide a simple and natural UC-based NIKE definition that allows for a modular analysis both of NIKE schemes and their use in larger protocols. We proceed to investigate the properties of our new definition, and in particular its relation to existing game-based NIKE definitions. We find that (a) game-based NIKE security is equivalent to UC-based NIKE security against \emph{static} corruptions, and (b) UC-NIKE security against adaptive corruptions cannot be achieved without additional assumptions (but \emph{can} be achieved in the random oracle model). Our results suggest that our UC-based NIKE definition is a useful and simple abstraction of non-interactive key exchange.
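For concreteness, the classical hashed Diffie-Hellman construction below matches the NIKE syntax (a shared key computed from the peer's public key and one's own secret key, with no interaction); it is a toy illustration of the interface with illustrative parameters, not the paper's UC formalization:

import hashlib, secrets

p = 2**127 - 1                    # toy prime; real deployments use standardized groups
g = 3

def keygen():
    sk = secrets.randbelow(p - 2) + 1
    return sk, pow(g, sk, p)

def shared_key(my_sk, peer_pk):
    dh = pow(peer_pk, my_sk, p)   # no messages exchanged between the parties
    return hashlib.sha256(dh.to_bytes(16, "big")).digest()

a_sk, a_pk = keygen()
b_sk, b_pk = keygen()
assert shared_key(a_sk, b_pk) == shared_key(b_sk, a_pk)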
Last updated:  2015-05-01
Cryptographic Agents: Towards a Unified Theory of Computing on Encrypted Data
Shashank Agrawal, Shweta Agrawal, Manoj Prabhakaran
We provide a new framework of cryptographic agents that unifies various modern ``cryptographic objects'' --- identity-based encryption, fully-homomorphic encryption, functional encryption, and various forms of obfuscation --- similar to how the Universal Composition framework unifies various multi-party computation tasks like commitment, coin-tossing and zero-knowledge proofs. These cryptographic objects can all be cleanly modeled as ``schemata'' in our framework. Highlights of our framework include the following: - We use a new `indistinguishability preserving' (INDPRE) definition of security that interpolates indistinguishability and simulation style definitions, which (often) sidesteps the known impossibilities for the latter. INDPRE-security is parameterized by the choice of the ``test'' family, such that by choosing different test families, one can obtain different levels of security for the same primitive (including various standard definitions in the literature). - We present a notion of `reduction' from one schema to another and a powerful `composition theorem' with respect to INDPRE security. We show that obfuscation is a ``complete'' schema under this notion, under standard cryptographic assumptions. We also provide a stricter notion of reduction that composes even when security is only with respect to certain restricted test families of importance. - Last but not least, our framework can be used to model abstractions like the generic group model and the random oracle model, letting one translate a general class of constructions in these heuristic models to constructions based on `standard model assumptions'. We also illustrate how our framework can be applied to specific primitives like obfuscation and functional encryption. We relate our definitions to existing definitions and also give new constructions and reductions between different primitives.
Last updated:  2015-01-16
Even more practical secure logging: Tree-based Seekable Sequential Key Generators
Giorgia Azzurra Marson, Bertram Poettering
Computer log files constitute a precious resource for system administrators for discovering and comprehending security breaches. A prerequisite of any meaningful log analysis is that attempts of intruders to cover their traces by modifying log entries are thwarted by storing them in a tamper-resistant manner. Some solutions employ cryptographic authentication when storing log entries locally, and let the authentication scheme's property of forward security ensure that the cryptographic keys in place at the time of intrusion cannot be used to manipulate past log entries without detection. This strong notion of security is typically achieved through frequent updates of the authentication keys via hash chains. However, as security demands that key updates take place rather often (ideally, at a resolution of milliseconds), in many settings this method quickly reaches the limits of practicality. Indeed, a log auditor aiming at verifying a specific log record might have to compute millions of hash iterations before recovering the correct verification key. This problem was addressed only recently by the introduction of seekable sequential key generators (SSKG). Every instance of this cryptographic primitive produces a forward-secure sequence of symmetric (authentication) keys, but also offers an explicit fast-forward functionality. The only currently known SSKG construction replaces traditional hash chains by the iterated evaluation of a shortcut one-way permutation, a factoring-based and hence in practice not too efficient building block. In this paper we revisit the challenge of marrying forward-secure key generation with seekability and show that symmetric primitives like PRGs, block ciphers, and hash functions suffice for obtaining secure SSKGs. Our scheme is not only considerably more efficient than the prior number-theoretic construction, but also extends the seeking functionality in a way that we believe is important in practice. Our construction is provably (forward-)secure in the standard model.
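The seek functionality can be illustrated with a minimal GGM-style binary tree built from a hash function. This is a sketch of the tree idea only; the actual construction additionally manages evolving state to guarantee forward security:

import hashlib

DEPTH = 20                                    # supports 2**20 sequential keys

def child(seed: bytes, bit: int) -> bytes:
    return hashlib.sha256(seed + bytes([bit])).digest()

def seek(root: bytes, index: int) -> bytes:
    """Derive key number `index` with DEPTH hash calls instead of `index` ones."""
    node = root
    for level in reversed(range(DEPTH)):
        node = child(node, (index >> level) & 1)
    return node

root = b"\x00" * 32                           # illustrative master seed
k = seek(root, 1_000_000)                     # 20 hashes, not a million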
Last updated:  2015-08-30
Related-Key Secure Pseudorandom Functions: The Case of Additive Attacks
Benny Applebaum, Eyal Widder
In a related-key attack (RKA) an adversary attempts to break a cryptographic primitive by invoking the primitive with several secret keys which satisfy some known relation. The task of constructing provably RKA secure PRFs (for non-trivial relations) under a standard assumption has turned out to be challenging. Currently, the only known provably-secure construction is due to Bellare and Cash (Crypto 2010). This important feasibility result is restricted, however, to linear relations over relatively complicated groups (e.g., $\mathbb{Z}^*_q$ where $q$ is a large prime) that arise from the algebraic structure of the underlying cryptographic assumption (DDH/DLIN). In contrast, applications typically require RKA-security with respect to simple additive relations such as XOR or addition modulo a power of two. In this paper, we partially fill this gap by showing that it is possible to deal with simple additive relations at the expense of relaxing the model of the attack. We introduce several natural relaxations of RKA-security, study the relations between these notions, and describe efficient constructions either under lattice assumptions or under general assumptions. Our results enrich the landscape of RKA security and suggest useful trade-offs between the attack model and the family of possible relations.
Last updated:  2014-07-23
Relaxed Two-to-one Recoding Schemes
Omkant Pandey, Kim Ramchen, Brent Waters
A two-to-one recoding (TOR) scheme is a new cryptographic primitive, proposed in the recent work of Gorbunov, Vaikuntanathan, and Wee (GVW), as a means to construct attribute-based encryption (ABE) schemes for all boolean circuits. GVW show that TOR schemes can be constructed assuming the hardness of the learning-with-errors (LWE) problem. We propose a slightly weaker variant of TOR schemes called correlation-relaxed two-to-one recoding (CR-TOR). Unlike the TOR schemes, our weaker variant does not require an encoding function to be pseudorandom on correlated inputs. We instead replace it with an indistinguishability property that states that a ciphertext is hard to decrypt without access to a certain encoding. The primary benefit of this relaxation is that it allows the construction of ABE for circuits using the TOR paradigm from a broader class of cryptographic assumptions. We show how to construct a CR-TOR scheme from the noisy cryptographic multilinear maps of Garg, Gentry, and Halevi as well as those of Coron, Lepoint, and Tibouchi. Our framework leads to an instantiation of ABE for circuits that is conceptually different from the existing constructions.
Last updated:  2014-06-21
Simon's Circuit
Paul Baecher
Simon mentions in his seminal result separating collision-resistant hash functions from one-way permutations (EUROCRYPT '98) that the wrong strategy to sample collisions can be exploited to invert the permutation. He does not, however, spell out a concrete circuit that demonstrates this. In this short note, we describe and analyze one such circuit.
Last updated:  2014-06-21
A Key Recovery Attack on an Error-Correcting-Code-Based Lightweight Security Protocol
Imran Erguler
One of the interesting types of RFID application is RFID searching, which aims to hear a specific RFID tag from a large group of tags, i.e., the ability to detect whether a target RFID tag is nearby. Very recently, a lightweight protocol using error-correcting codes was proposed by Chen et al. as a solution to the needs of this field. The authors give a detailed analysis of their protocol in terms of security, privacy, communication overhead and hardware cost, and claim that it is a realizable scheme fulfilling security and privacy requirements. In this study, however, we investigate the security of this protocol and clearly demonstrate security flaws that allow an adversary to fully exploit the system. In particular, by using linear properties of error-correcting codes we first describe a tag tracing attack that undermines the untraceability property, one of the protocol's design objectives. Then, along with its implementation details, we present a key recovery attack that dramatically reduces the search space of a tag's secret key, and show that an adversary can compromise it in practical time by querying the tag only a few times. As an illustrative example, we retrieve the secret key of the protocol in two hours for the adopted linear block code C(47,24,11), which is one of the suggested codes.
Last updated:  2014-10-11
Cryptographic Schemes Based on the ASASA Structure: Black-box, White-box, and Public-key
Alex Biryukov, Charles Bouillaguet, Dmitry Khovratovich
In this paper we pick up an old challenge: to design public key or white-box constructions from symmetric cipher components. We design several encryption schemes based on the \textsf{ASASA} structure, ranging from fast and generic symmetric ciphers to compact public key and white-box constructions based on generic affine transformations combined with specially designed low degree non-linear layers. While explaining our design process we show several instructive attacks on the weaker variants of our schemes.
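The overall structure is compact enough to state in code. The toy sketch below composes the five layers on a 32-bit state, using the PRESENT S-box as the nonlinear layer and a nibble permutation plus constant as a deliberately weak stand-in for the secret affine layers:

import random

SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]

def s_layer(state):                           # S: S-box on each of the 8 nibbles
    return [SBOX[x] for x in state]

def a_layer(state, perm, const):              # A: toy affine map over GF(2)^32
    return [state[perm[i]] ^ const[i] for i in range(8)]

def asasa(state, a1, a2, a3):
    state = a_layer(state, *a1)
    state = s_layer(state)
    state = a_layer(state, *a2)
    state = s_layer(state)
    return a_layer(state, *a3)

rng = random.Random(0)
def rand_affine():                            # hypothetical secret affine layer
    perm = list(range(8)); rng.shuffle(perm)
    return perm, [rng.randrange(16) for _ in range(8)]

ct = asasa([1, 2, 3, 4, 5, 6, 7, 8], rand_affine(), rand_affine(), rand_affine())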
Last updated:  2014-11-11
An Efficient Cloud-based Revocable Identity-based Proxy Re-encryption Scheme for Public Clouds Data Sharing
Kaitai Liang, Joseph K. Liu, Duncan S. Wong, Willy Susilo
Identity-based encryption (IBE) eliminates the necessity of having a costly certificate verification process. However, revocation remains a daunting task in terms of ciphertext update and key update phases, due to the lack of a certificate revocation list in this infrastructure. In this paper, we provide an affirmative solution to the efficiency problem incurred by revocation. We propose the first cloud-based revocable identity-based proxy re-encryption (CR-IB-PRE) scheme that supports not only user revocation but also delegation of decryption rights. Whether or not a user is revoked, at the end of a given time period the cloud, acting as a proxy, will re-encrypt all ciphertexts of the user under the current time period to the next time period. If the user is revoked in the forthcoming time period, he cannot decrypt the ciphertexts using the expired private key anymore. We state that this primitive is applicable to many practical network applications, such as subscription-based cloud storage services. Compared to some naive solutions which require a private key generator (PKG) to interact with non-revoked users in each time period, the new scheme provides definite advantages in terms of communication and computation efficiency. Our scheme only requires the PKG to publish a constant-size public string for each time period, and meanwhile the workload of ciphertext update is off-loaded to the cloud server. More importantly, the scheme can be proven secure in the standard model.
Last updated:  2017-01-11
How to Watermark Cryptographic Functions
Ryo Nishimaki
We introduce a notion of watermarking for cryptographic functions and propose a concrete scheme for watermarking cryptographic functions. Informally speaking, a digital watermarking scheme for cryptographic functions embeds information, called a \textit{mark}, into functions such as one-way functions and decryption functions of public-key encryption. There are two basic requirements for watermarking schemes. (1) A mark-embedded function must be functionally equivalent to the original function. (2) It must be difficult for adversaries to remove the embedded mark without damaging the original functionality. In spite of its importance and usefulness, there have only been a few theoretical works on watermarking for functions (or programs). Furthermore, we do not have rigorous definitions of watermarking for cryptographic functions and concrete constructions. To solve the above problem, we introduce a notion of watermarking for cryptographic functions and define its security. Furthermore, we present a lossy trapdoor function (LTF) based on the decisional linear (DLIN) problem and a watermarking scheme for the LTF. Our watermarking scheme is secure under the DLIN assumption in the standard model. We use techniques of dual system encryption and dual pairing vector spaces (DPVS) to construct our watermarking scheme. This is a new application of DPVS. Our watermarking for cryptographic functions is a generalized notion of copyrighted functions introduced by Naccache, Shamir, and Stern (PKC 1999) and our scheme is based on an identity-based encryption scheme whose private keys for identities (i.e., decryption functions) are marked, so our technique can be used to construct black-box traitor tracing schemes.
Last updated:  2015-01-14
Large Universe Ciphertext-Policy Attribute-Based Encryption with White-Box Traceability
Jianting Ning, Zhenfu Cao, Xiaolei Dong, Lifei Wei, Xiaodong Lin
A Ciphertext-Policy Attribute-Based Encryption (CP-ABE) system extracts the decryption keys over attributes shared by multiple users. It brings plenty of advantages in ABE applications. CP-ABE enables fine-grained access control to the encrypted data for commercial applications. There has been significant progress in CP-ABE over the recent years because of two properties called traceability and large universe, greatly enriching the commercial applications of CP-ABE. Traceability is the ability of ABE to track the malicious users or traitors who intentionally leak partial or modified decryption keys to others for profit. Nevertheless, due to the nature of CP-ABE, it is difficult to identify the original key owner from an exposed key since the decryption privilege is shared by multiple users who have the same attributes. On the other hand, the property of large universe in ABE proposed by Lewko and Waters enlarges the practical applications by supporting a flexible number of attributes. Several systems have been proposed to obtain either of the above properties. However, none of them achieve the two properties simultaneously in practice, which limits the commercial applications of CP-ABE to a certain extent. In this paper, we propose a practical large universe CP-ABE system supporting white-box traceability, which is suitable for commercial applications. Compared to existing systems, our new system has three advantages: (1) The number of attributes is not polynomially bounded; (2) Malicious users who leak their decryption keys could be traced; and, (3) The storage overhead for traitor tracing is constant. We also prove the selective security of our new system in the standard model under a "$q$-type" assumption.
Last updated:  2014-06-21
PPDCP-ABE: Privacy-Preserving Decentralized Cipher-Policy Attribute-Based Encryption
Jinguang Han, Willy Susilo, Yi Mu, Jianying Zhou, Man Ho Au
Cipher-policy attribute-based encryption (CP-ABE) is a more efficient and flexible encryption system, as the encryptor can control the access structure when encrypting a message. In this paper, we propose a privacy-preserving decentralized CP-ABE (PPDCP-ABE) scheme where no central authority is required, namely each authority can work independently without cooperation to initialize the system. Meanwhile, a user can obtain secret keys from multiple authorities without releasing his global identifier (GID) and attributes to them. This is in contrast to previous privacy-preserving multi-authority ABE (PPMA-ABE) schemes, where a user can obtain secret keys from multiple authorities only with them knowing his attributes, and a central authority is required. However, some sensitive attributes can also reveal the user's identity information. Hence, contemporary PPMA-ABE schemes cannot fully protect users' privacy, as multiple authorities can cooperate to identify a user by collecting and analyzing his attributes. Therefore, it remains a challenging and important task to construct a PPMA-ABE scheme where no central authority is required and both the identifiers and the attributes are protected.
Last updated:  2014-06-21
Homomorphic Signatures with Efficient Verification for Polynomial Functions
Dario Catalano, Dario Fiore, Bogdan Warinschi
A homomorphic signature scheme for a class of functions $\mathcal{C}$ allows a client to sign and upload elements of some data set $D$ on a server. At any later point, the server can derive a (publicly verifiable) signature that certifies that some $y$ is the result of computing some $f\in\mathcal{C}$ on the data set $D$. This primitive has been formalized by Boneh and Freeman (Eurocrypt 2011) who also proposed the only known construction for the class of multivariate polynomials of fixed degree $d\geq 1$. In this paper we construct new homomorphic signature schemes for such functions. Our schemes provide the first alternatives to the one of Boneh-Freeman, and improve over their solution in three main aspects. First, our schemes do not rely on random oracles. Second, we obtain security in a stronger fully-adaptive model: while the solution of Boneh-Freeman requires the adversary to query messages in a given data set all at once, our schemes can tolerate adversaries that query one message at a time, in a fully-adaptive way. Third, signature verification is more efficient (in an amortized sense) than computing the function from scratch. The latter property opens the way to using homomorphic signatures for publicly-verifiable computation on outsourced data. Our schemes rely on a new assumption on leveled graded encodings which we show to hold in a generic model.
Last updated:  2014-06-21
Privacy-Preserving Auditing for Attribute-Based Credentials
Jan Camenisch, Anja Lehmann, Gregory Neven, Alfredo Rial
Privacy-enhancing attribute-based credentials (PABCs) allow users to authenticate to verifiers in a data-minimizing way, in the sense that users are unlinkable between authentications and only disclose those attributes from their credentials that are relevant to the verifier. We propose a practical scheme to apply the same data minimization principle when the verifiers’ authentication logs are subjected to external audits. Namely, we propose an extended PABC scheme where the verifier can further remove attributes from presentation tokens before handing them to an auditor, while preserving the verifiability of the audited tokens. We present a generic construction based on a signature, a signature of knowledge and a trapdoor commitment scheme, prove it secure in the universal composability framework, and give efficient instantiations based on the strong RSA and Decision Composite Residuosity (DCR) assumptions in the random-oracle model.
Last updated:  2014-07-03
Ad-Hoc Secure Two-Party Computation on Mobile Devices using Hardware Tokens
Daniel Demmler, Thomas Schneider, Michael Zohner
Secure two-party computation allows two mutually distrusting parties to jointly compute an arbitrary function on their private inputs without revealing anything but the result. Mobile devices are an interesting target for deploying secure computation protocols, as they contain a lot of sensitive user data. However, their resource restrictions make the deployment of secure computation protocols a challenging task. In this work, we optimize and implement the secure computation protocol by Goldreich-Micali-Wigderson (GMW) on mobile phones. To increase performance, we extend the protocol by a trusted hardware token (i.e., a smartcard). The trusted hardware token allows most of the workload to be pre-computed in an initialization phase, which is executed locally on one device and can be pre-computed independently of the later communication partner. We develop and analyze a proof-of-concept implementation of generic secure two-party computation on Android smart phones making use of a microSD smartcard. Our use cases include private set intersection for finding shared contacts and private scheduling of a meeting with location preferences. For private set intersection, our token-aided implementation on mobile phones is up to two orders of magnitude faster than previous generic secure two-party computation protocols on mobile phones and even as fast as previous work on desktop computers.
Last updated:  2014-06-17
On new properties of number sequences, a randomness test and a new RC4 key scheduling algorithm
Samir Bouftass, Abdelhak Azhari
In this paper, we introduce the concept of the derivative of a sequence of numbers and define new statistical indices by which we discovered new properties of randomly generated number sequences. We also build a test for pseudorandom generators based on these properties and use it to confirm the weakness of the RC4 key scheduling algorithm that has been reported in the literature. In this respect, we publish a new key scheduling algorithm for RC4 that does not have this weakness.
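For reference, the sketch below is the standard RC4 key scheduling algorithm whose output statistics the proposed test examines:

def rc4_ksa(key: bytes) -> list:
    """The classical RC4 KSA: 256 key-dependent swaps of the identity permutation."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    return S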
Last updated:  2014-09-13
Semi-Adaptive Attribute-Based Encryption and Improved Delegation for Boolean Formula
Jie Chen, Hoeteck Wee
We consider *semi-adaptive* security for attribute-based encryption, where the adversary specifies the challenge attribute vector after it sees the public parameters but before it makes any secret key queries. We present two constructions of semi-adaptive attribute-based encryption under static assumptions with *short* ciphertexts. Previous constructions with short ciphertexts either achieve the weaker notion of selective security, or require parameterized assumptions. As an application, we obtain improved delegation schemes for Boolean formula with *semi-adaptive* soundness, where correctness of the computation is guaranteed even if the client's input is chosen adaptively depending on its public key. Previous delegation schemes for formula achieve one of adaptive soundness, constant communication complexity, or security under static assumptions; we show how to achieve semi-adaptive soundness and the last two simultaneously.
Last updated:  2014-11-04
Providing Root of Trust for ARM TrustZone using On-Chip SRAM
Shijun Zhao, Qianying Zhang, Guangyao Hu, Yu Qin, Dengguo Feng
We present the design, implementation and evaluation of the root of trust for the Trusted Execution Environment (TEE) provided by ARM TrustZone based on SRAM Physical Unclonable Functions (PUFs). We first implement a building block which provides the foundations for the root of trust: secure key storage and a truly random source. The building block doesn't require on- or off-chip secure non-volatile memory to store secrets, but provides a high level of security: resistance to physical attackers capable of controlling all external interfaces of the system on chip (SoC). Based on the building block, we build the root of trust, consisting of seal/unseal primitives for secure services running in the TEE, and a software-only TPM service running in the TEE which provides rich TPM functionalities for the rich OS running in the normal world of TrustZone. The root of trust resists software attackers capable of compromising the entire rich OS. Besides, both the building block and the root of trust run on the powerful ARM processor. In short, we leverage the SRAM PUF, commonly available on mobile devices, to achieve a low-cost, secure, and efficient design of the root of trust.
Last updated:  2014-07-06
(Leveled) Fully Homomorphic Signatures from Lattices
Sergey Gorbunov, Vinod Vaikuntanathan
In a homomorphic signature scheme, given a vector of signatures $\vec{\sigma}$ corresponding to a dataset of messages $\vec{\mu}$, there is a {\it public} algorithm that allows deriving a signature $\sigma'$ for the message $\mu'=f(\vec{\mu})$ for any function $f$. Given the tuple $(\sigma', \mu', f)$ anyone can {\it publicly} verify the result of the computation of the function $f$. Along with the standard notion of unforgeability for signatures, the security of homomorphic signatures guarantees that no adversary is able to make a forgery $\sigma^*$ for $\mu^* \neq f(\vec{\mu})$. We construct the first homomorphic signature scheme for evaluating arbitrary functions. In our scheme, the public parameters and the size of the resulting signature grow polynomially with the depth of the circuit representation of $f$. Our scheme is secure in the standard model assuming the hardness of finding {\it Small Integer Solutions} in hard lattices. Furthermore, our construction has asymptotically fast verification, which immediately leads to a new solution for verifiable outsourcing with a pre-processing phase. Previous state-of-the-art constructions were limited to evaluating polynomials of constant degree, were secure only in the random oracle model, and lacked asymptotically fast verification.
Last updated:  2014-06-17
Efficient Key-policy Attribute-based Encryption for General Boolean Circuits from Multilinear Maps
Constantin Catalin Dragan, Ferucio Laurentiu Tiplea
We propose an efficient Key-policy Attribute-based Encryption (KP-ABE) scheme for general (monotone) Boolean circuits based on secret sharing and on a very particular and simple form of leveled multilinear maps, called chained multilinear maps. The number of decryption key components is substantially reduced in comparison with the current scheme based on leveled multilinear maps, and the size of the multilinear map (in terms of bilinear map components) is less than the Boolean circuit depth, while it is quadratic in the Boolean circuit depth for the current scheme based on leveled multilinear maps. Moreover, it is much easier to find chained multilinear maps than leveled multilinear maps. Selective security of the proposed scheme in the standard model is proved, under the decisional multilinear Diffie-Hellman assumption.
Last updated:  2014-06-19
Provably secure and efficient certificateless signature in the standard model
Lin Cheng, Qiaoyan Wen, Zhengping Jin, Hua Zhang
Certificateless public key cryptography eliminates the inherent key escrow problem in identity-based cryptography, and does not require certificates as in the traditional public key infrastructure. However, most certificateless signature schemes without random oracles have been demonstrated to be insecure. In this paper, we propose a new certificateless signature scheme and prove that it is existentially unforgeable against adaptively chosen message attacks in the standard model. Performance analysis shows that our new scheme has shorter system parameters, a shorter signature length, and higher computational efficiency than previous schemes in the standard model.
Last updated:  2014-06-15
FleXOR: Flexible garbling for XOR gates that beats free-XOR
Vladimir Kolesnikov, Payman Mohassel, Mike Rosulek
Most implementations of Yao's garbled circuit approach for 2-party secure computation use the {\em free-XOR} optimization of Kolesnikov \& Schneider (ICALP 2008). We introduce an alternative technique called {\em flexible-XOR} (fleXOR) that generalizes free-XOR and offers several advantages. First, fleXOR can be instantiated under a weaker hardness assumption on the underlying cipher/hash function (related-key security only, compared to related-key and circular security required for free-XOR) while maintaining most of the performance improvements that free-XOR offers. Alternatively, even though XOR gates are not always ``free'' in our approach, we show that the other (non-XOR) gates can be optimized more heavily than what is possible when using free-XOR. For many circuits of cryptographic interest, this can yield a significantly (over 30\%) smaller garbled circuit than any other known techniques (including free-XOR) or their combinations.
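The free-XOR baseline that fleXOR generalizes is summarized by the sketch below: with a circuit-global secret offset R, each wire's two labels differ by R, so an XOR gate needs no garbled table at all (a minimal sketch of free-XOR itself, not of the fleXOR wiring):

import secrets

R = secrets.token_bytes(16)                    # global secret offset
xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))

def new_wire():
    w0 = secrets.token_bytes(16)
    return w0, xor(w0, R)                      # labels encoding bit 0 and bit 1

a0, a1 = new_wire()
b0, b1 = new_wire()
c0, c1 = xor(a0, b0), xor(xor(a0, b0), R)      # XOR-gate output wire, for free

assert xor(a1, b1) == c0                       # evaluating a=1, b=1 yields c = 0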
Last updated:  2014-07-01
Template Attacks on Different Devices
Omar Choudary, Markus G. Kuhn
Template attacks remain a most powerful side-channel technique to eavesdrop on tamper-resistant hardware. They use a profiling step to compute the parameters of a multivariate normal distribution from a training device and an attack step in which the parameters obtained during profiling are used to infer some secret value (e.g. cryptographic key) on a target device. Evaluations using the same device for both profiling and attack can miss practical problems that appear when using different devices. Recent studies showed that variability caused by the use of either different devices or different acquisition campaigns on the same device can have a strong impact on the performance of template attacks. In this paper, we explore further the effects that lead to this decrease of performance, using four different Atmel XMEGA 256 A3U 8-bit devices. We show that a main difference between devices is a DC offset and we show that this appears even if we use the same device in different acquisition campaigns. We then explore several variants of the template attack to compensate for these differences. Our results show that a careful choice of compression method and parameters is the key to improving the performance of these attacks across different devices. In particular we show how to maximise the performance of template attacks when using Fisher's Linear Discriminant Analysis or Principal Component Analysis. Overall, we can reduce the entropy of an unknown 8-bit value below 1.5 bits even when using different devices.
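Both phases are easy to sketch on synthetic data. The numpy sketch below uses simplifying assumptions (a hypothetical leakage model, covariance fixed to the identity); a real attack adds compression such as LDA or PCA and must cope with the DC offsets discussed above:

import numpy as np

rng = np.random.default_rng(0)
D, VALUES, N_PROF = 5, 4, 200                 # samples per trace, values, traces/value

def leak(v, n):                               # hypothetical leakage model
    return rng.normal(loc=v, scale=1.0, size=(n, D))

# Profiling: estimate a mean vector per candidate value.
means = {v: leak(v, N_PROF).mean(axis=0) for v in range(VALUES)}

def score(trace, v):                          # log-likelihood up to a constant
    d = trace - means[v]
    return -0.5 * d @ d

attack_trace = leak(2, 1)[0]
print(max(range(VALUES), key=lambda v: score(attack_trace, v)))   # 2, w.h.p.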
Last updated:  2014-06-15
Automated Analysis of Cryptographic Assumptions in Generic Group Models
Gilles Barthe, Edvard Fagerholm, Dario Fiore, John Mitchell, Andre Scedrov, Benedikt Schmidt
We initiate the study of principled, automated methods for analyzing hardness assumptions in generic group models, following the approach of symbolic cryptography. We start by defining a broad class of generic and symbolic group models for different settings---symmetric or asymmetric (leveled) k-linear groups---and by proving "computational soundness" theorems for the symbolic models. Based on this result, we formulate a very general master theorem that formally relates the hardness of a (possibly interactive) assumption in these models to solving problems in polynomial algebra. Then, we systematically analyze these problems. We identify different classes of assumptions and obtain decidability and undecidability results. We then develop and implement automated procedures for verifying the conditions of master theorems, and thus the validity of hardness assumptions in generic group models. The concrete outcome of this work is an automated tool which takes as input the statement of an assumption, and outputs either a proof of its generic hardness or shows an algebraic attack against the assumption.
Last updated:  2016-04-29
Transcript secure signatures based on modular lattices
Jeff Hoffstein, Jill Pipher, John M. Schanck, Joseph H. Silverman, William Whyte
We introduce a class of lattice-based digital signature schemes based on modular properties of the coordinates of lattice vectors. We also suggest a method of making such schemes transcript secure via a rejection sampling technique of Lyubashevsky (2009). A particular instantiation of this approach is given, using NTRU lattices. Although the scheme is not supported by a formal security reduction, we present arguments for its security and derive concrete parameters (first version) based on the performance of state-of-the-art lattice reduction and enumeration techniques. In the revision, we re-evaluate the security of the first version of the parameter sets under the hybrid approach combining the lattice reduction attack with the meet-in-the-middle attack. We present new sets of parameters that are robust against this attack, as well as against all previously known attacks.
Last updated:  2014-06-15
Verified Implementations for Secure and Verifiable Computation
José Bacelar Almeida, Manuel Barbosa, Gilles Barthe, Guillaume Davy, François Dupressoir, Benjamin Grégoire, Pierre-Yves Strub
Formal verification of the security of software systems is gradually moving from the traditional focus on idealized models, to the more ambitious goal of producing verified implementations. This trend is also present in recent work targeting the verification of cryptographic software, but the reach of existing tools has so far been limited to cryptographic primitives, such as RSA-OAEP encryption, or standalone protocols, such as SSH. This paper presents a scalable approach to formally verifying implementations of higher-level cryptographic systems, directly in the computational model. We consider circuit-based cloud-oriented cryptographic protocols for secure and verifiable computation over encrypted data. Our examples share as central component Yao's celebrated transformation of a boolean circuit into an equivalent garbled form that can be evaluated securely in an untrusted environment. We leverage the foundations of garbled circuits set forth by Bellare, Hoang, and Rogaway (CCS 2012, ASIACRYPT 2012) to build verified implementations of garbling schemes, a verified implementation of Yao's secure function evaluation protocol, and a verified (albeit partial) implementation of the verifiable computation protocol by Gennaro, Gentry, and Parno (CRYPTO 2010). The implementations are formally verified using EasyCrypt, a tool-assisted framework for building high-confidence cryptographic proofs, and critically rely on two novel features: a module and theory system that supports compositional reasoning, and a code extraction mechanism for generating implementations from formalizations.
Last updated:  2014-06-15
Single-shot security for one-time memories in the isolated qubits model
Yi-Kai Liu
One-time memories (OTM's) are simple, tamper-resistant cryptographic devices, which can be used to implement sophisticated functionalities such as one-time programs. Can one construct OTM's whose security follows from some physical principle? This is not possible in a fully-classical world, or in a fully-quantum world, but there is evidence that OTM's can be built using "isolated qubits" -- qubits that cannot be entangled, but can be accessed using adaptive sequences of single-qubit measurements. Here we present new constructions for OTM's using isolated qubits, which improve on previous work in several respects: they achieve a stronger "single-shot" security guarantee, which is stated in terms of the (smoothed) min-entropy; they are proven secure against adversaries who can perform arbitrary local operations and classical communication (LOCC); and they are efficiently implementable. These results use Wiesner's idea of conjugate coding, combined with error-correcting codes that approach the capacity of the q-ary symmetric channel, and a high-order entropic uncertainty relation, which was originally developed for cryptography in the bounded quantum storage model.
Last updated:  2016-04-04
Early Propagation and Imbalanced Routing, How to Diminish in FPGAs
Amir Moradi, Vincent Immler
This work deals with DPA-resistant logic styles, i.e., cell-level countermeasures against power analysis attacks, which are known as a serious threat to cryptographic devices. Early propagation and imbalanced routings are amongst the well-known issues of such countermeasures that, if not considered during the design process, can cause the underlying cryptographic device to be vulnerable to certain attacks. Although most DPA-resistant logic styles target an ASIC design process, there are a few attempts to apply them on an FPGA platform. This is due to the missing freedom in FPGA design tools required to deal with the aforementioned problems. Our contribution in this work is to provide solutions for both early propagation and imbalanced routings, considering a modern Xilinx FPGA as the target platform. Foremost, based on the WDDL concept, we design a new FPGA-based logic style without early propagation in both the precharge and evaluation phases. Additionally, with respect to the limited routing resources within an FPGA, we develop a customized router to find the best appropriate dual-rail routes for a given dual-rail circuit. Based on practical experiments on a Virtex-5 FPGA, our evaluations verify the efficiency of each of our proposed approaches. They significantly improve the resistance of the design compared to cases not benefiting from our schemes.
Last updated:  2014-06-16
Block Ciphers - Focus On The Linear Layer (feat. PRIDE): Full Version
Martin R. Albrecht, Benedikt Driessen, Elif Bilge Kavun, Gregor Leander, Christof Paar, Tolga Yalçın
The linear layer is a core component in any substitution-permutation network block cipher. Its design significantly influences both the security and the efficiency of the resulting block cipher. Surprisingly, not many general constructions are known that allow one to choose trade-offs between security and efficiency. Especially when compared to S-boxes, the linear layer seems to be crucially understudied. In this paper, we propose a general methodology to construct good, sometimes optimal, linear layers allowing for a large variety of trade-offs. We give several instances of our construction and, on top of that, underline its value by presenting a new block cipher. PRIDE is optimized for 8-bit micro-controllers and significantly outperforms all academic solutions both in terms of code size and cycle count.
Last updated:  2014-06-15
Proof of Activity: Extending Bitcoin’s Proof of Work via Proof of Stake
Iddo Bentov, Charles Lee, Alex Mizrahi, Meni Rosenfeld
We propose a new protocol for a cryptocurrency that builds upon the Bitcoin protocol by combining its Proof of Work component with a Proof of Stake type of system. Our Proof of Activity (PoA) protocol offers good security against possibly practical future attacks on Bitcoin, and has a relatively low penalty in terms of network communication and storage space. We explore various attack scenarios and suggest remedies to potential vulnerabilities of the PoA protocol, as well as evaluate the performance of its core subroutine.
Last updated:  2014-10-30
Leveled Fully Homomorphic Signatures from Standard Lattices
Daniel Wichs
In a homomorphic signature scheme, a user Alice signs some large data $x$ using her secret signing key and stores the signed data on a server. The server can then run some computation $y=g(x)$ on the signed data and homomorphically produce a short signature $\sigma$. Anybody can verify the signature using Alice's public verification key and become convinced that $y$ is the correct output of the computation $g$ over Alice's data, without needing to have the underlying data itself. In this work, we construct the first leveled fully homomorphic signature schemes that can evaluate arbitrary circuits over signed data, where only the maximal depth $d$ of the circuit needs to be fixed a priori. The size of the evaluated signature grows polynomially in $d$, but is otherwise independent of the circuit size or the data size. Our solutions are based on the hardness of the small integer solution (SIS) problem, which is in turn implied by the worst-case hardness of problems in standard lattices. We get a scheme in the standard model, albeit with large public parameters whose size must exceed the total size of all signed data. In the random-oracle model, we get a scheme with short public parameters. These results offer a significant improvement in capabilities and assumptions over the best prior homomorphic signature scheme due to Boneh and Freeman (Eurocrypt '11). As a building block of independent interest, we introduce a new notion called homomorphic trapdoor functions (HTDF). We show how to construct homomorphic signatures using HTDFs as a black box. We construct HTDFs based on the SIS problem by relying on a recent technique developed by Boneh et al. (Eurocrypt '14) in the context of attribute based encryption.
Last updated:  2014-08-10
Optimized Implementation of General Secret Sharing Scheme
Lein Harn, Ching-Fang Hsu
Secret sharing (SS) is one of the most important cryptographic primitives used for data outsourcing. The (t, n) threshold SS was introduced by Shamir and Blakley separately in 1979. Its secret sharing policy is far too simple for many applications because it assumes that every shareholder has equal privilege to the secret, or that every shareholder is equally trusted. Ito et al. introduced the concept of a general secret sharing scheme (GSS). In a GSS, a secret is divided among a set of shareholders in such a way that any “qualified” subset of shareholders can access the secret, but any “unqualified” subset of shareholders cannot. The secret access structure of a GSS is far more flexible than that of a threshold SS. In this paper, we propose an optimized implementation of GSS. Our proposed scheme first uses Boolean logic to derive two important subsets of a given general secret sharing structure: the minimal positive access subset and the maximal negative access subset. Then, conditions on the parameters of a GSS are established based on these two subsets. Furthermore, integer linear/non-linear programming is used to optimize the size of the shares of a GSS, with complexity depending on the number of shares generated by the dealer. The proposed design can be applied to implement GSS based on any classical SS. We use two GSSs, one based on Shamir's weighted SS (WSS) using linear polynomials and the other based on Asmuth-Bloom's SS using the Chinese Remainder Theorem (CRT), to demonstrate our design. Compared with existing GSSs, our proposed scheme is more efficient and can be applied to all classical SSs.
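For reference, the sketch below is the classical Shamir (t, n) threshold scheme underlying the weighted-SS instantiation, over a toy prime field: the dealer hides the secret in a degree-(t-1) polynomial and any t shares recover it by Lagrange interpolation:

import random

P = 2**31 - 1                                 # toy prime field

def share(secret, t, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):                      # Lagrange interpolation at x = 0
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

assert reconstruct(share(42, t=3, n=5)[:3]) == 42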
Last updated:  2014-06-16
Related Key Secure PKE from Hash Proof Systems
Dingding Jia, Bao Li, Xianhui Lu, Qixiang Mei
In this paper, we present a construction of public key encryption secure against related key attacks from hash proof systems in the standard model. We show that the schemes presented by Jia et al. (ProvSec 2013) are special cases of our general theory, and also give other instantiations based on the QR and DCR assumptions. To achieve related key security, we require the hash proof systems to satisfy the key homomorphism and computational fingerprinting properties. Compared with the construction given by Wee (PKC 2012), our construction removes the use of one-time signatures.
Last updated:  2015-05-27
Differential Attacks on Reduced SIMON Versions with Dynamic Key-guessing Techniques
Ning Wang, Xiaoyun Wang, Keting Jia, Jingyuan Zhao
SIMON is a family of lightweight block ciphers designed by the U.S. National Security Agency in 2013. It has 10 versions in total, corresponding to different block sizes $2n$ and key lengths $l_k$, named SIMON$2n/l_k$. In this paper, we present a new differential attack by considering the sufficient bit conditions of previous differential paths. Based on the bit conditions, we propose a new type of dynamic key-guessing technique which greatly reduces the guessed key space. Our attacks work on reduced versions of all 10 suggested variants of SIMON, improving the best previous results by 2 to 4 rounds. For verification, we implemented a practical attack on 19-round SIMON32 on a PC, and the experimental data confirm the correctness of the attack, fitting the theoretical complexity and success rate very well. It is remarked that our cryptanalysis only provides a more accurate security evaluation; it does not imply a security problem for the whole SIMON family.
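For reference, the sketch below is the SIMON round function on which the differential paths and their bit conditions are defined, with word size n = 16 as in SIMON32:

N = 16
MASK = (1 << N) - 1
rol = lambda x, r: ((x << r) | (x >> (N - r))) & MASK

def simon_round(x, y, k):
    """One Feistel round: (x, y) -> (y ^ f(x) ^ k, x)."""
    f = (rol(x, 1) & rol(x, 8)) ^ rol(x, 2)
    return y ^ f ^ k, x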
Last updated:  2014-07-28
Faster Private Set Intersection based on OT Extension
Benny Pinkas, Thomas Schneider, Michael Zohner
Private set intersection (PSI) allows two parties to compute the intersection of their sets without revealing any information about items that are not in the intersection. It is one of the best studied applications of secure computation and many PSI protocols have been proposed. However, the variety of existing PSI protocols makes it difficult to identify the solution that performs best in a respective scenario, especially since they were not all implemented and compared in the same setting. In this work, we give an overview of existing PSI protocols that are secure against semi-honest adversaries. We take advantage of the most recent efficiency improvements in OT extension to propose significant optimizations to previous PSI protocols and to suggest a new PSI protocol whose runtime is superior to that of existing protocols. We compare the performance of the protocols both theoretically and experimentally, by implementing all protocols on the same platform, and give recommendations on which protocol to use in a particular setting.
Last updated:  2014-06-13
4-point Attacks with Standard Deviation Analysis on A-Feistel Schemes
Valerie Nachef, Jacques Patarin, Emmanuel Volte
A usual way to construct block ciphers is to apply several rounds of a given structure. Many kinds of attacks are mounted against block ciphers. Among them, differential and linear attacks are widely used. In~\cite{V98,V03}, it is shown that ciphers that achieve perfect pairwise decorrelation are secure against linear and differential attacks. It is possible to obtain such schemes by introducing at least one random affine permutation as a round function in the design of the scheme. In this paper, we study attacks on schemes based on classical Feistel schemes where we introduce one or two affine permutations. Since these schemes resist against linear and differential attacks, we will study stronger attacks based on specific equations on 4-tuples of cleartext/ciphertext messages. We give the number of messages needed to distinguish a permutation produced by such schemes from a random permutation, depending on the number of rounds used in the schemes, the number and the position of the random affine permutations introduced in the schemes.
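A minimal sketch of the kind of scheme studied here: a balanced Feistel network with a GF(2)-affine permutation of the whole state inserted between rounds (the round functions, the placement of the affine layer, and the 32-bit size are all illustrative choices):

import hashlib

def F(key: bytes, x: int) -> int:              # keyed pseudorandom round function
    return int.from_bytes(hashlib.sha256(key + x.to_bytes(2, "big")).digest()[:2], "big")

def feistel(keys, L, R):
    for k in keys:
        L, R = R, L ^ F(k, R)
    return L, R

def affine(s: int) -> int:                     # GF(2)-affine permutation: rotate + XOR
    return (((s << 7) | (s >> 25)) & 0xFFFFFFFF) ^ 0x9E3779B9

L, R = feistel([b"k1", b"k2"], 0x1234, 0x5678)
s = affine((L << 16) | R)                      # the inserted affine permutation
L, R = feistel([b"k3", b"k4"], s >> 16, s & 0xFFFF)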
Last updated:  2015-08-31
Polynomial Spaces: A New Framework for Composite-to-Prime-Order Transformations
Gottfried Herold, Julia Hesse, Dennis Hofheinz, Carla Ràfols, Andy Rupp
At Eurocrypt 2010, Freeman presented a framework to convert cryptosystems based on composite-order groups into ones that use prime-order groups. Such a transformation is interesting not only from a conceptual point of view, but also since for relevant parameters, operations in prime-order groups are faster than composite-order operations by an order of magnitude. Since Freeman's work, several other works have shown improvements, but also lower bounds on the efficiency of such conversions. In this work, we present a new framework for composite-to-prime-order conversions. Our framework is in the spirit of Freeman's work; however, we develop a different, ``polynomial'' view of his approach, and revisit several of his design decisions. This eventually leads to significant efficiency improvements, and enables us to circumvent previous lower bounds. Specifically, we show how to implement Groth-Sahai proofs in a prime-order environment (with a symmetric pairing) almost twice as efficiently as the state of the art. We also show that our new conversions are optimal in a very broad sense. Besides, our conversions also apply in settings with a multilinear map, and can be instantiated from a variety of computational assumptions (including, e.g., the $k$-linear assumption).
Last updated:  2014-06-18
RPKI vs ROVER: Comparing the Risks of BGP Security Solutions
Aanchal Malhotra, Sharon Goldberg
Route Origin Verification (ROVER), a mechanism for securing interdomain routing with BGP, is a proposed alternative to the Resource Public Key Infrastructure (RPKI). While the RPKI requires the design and deployment of a completely new security infrastructure, ROVER leverages existing reverse DNS and DNSSEC deployments. Both ROVER and RPKI are based on a hierarchy of authorities that are trusted to provide information about the routing system. It has been argued recently that misconfigurations or compromises of the RPKI's trusted authorities can present new risks to the routing system. Meanwhile, the advocates of ROVER claim that it provides a "fail-safe" approach, where the Internet will continue to work as it is even when ROVER fails. This poster therefore compares the impact of ROVER failures to those of the RPKI.
Last updated:  2014-12-18
Minimizing the Two-Round Even-Mansour Cipher
Shan Chen, Rodolphe Lampe, Jooyoung Lee, Yannick Seurin, John P. Steinberger
The $r$-round (iterated) \emph{Even-Mansour cipher} (also known as \emph{key-alternating cipher}) defines a block cipher from $r$ fixed public $n$-bit permutations $P_1,\ldots,P_r$ as follows: given a sequence of $n$-bit round keys $k_0,\ldots,k_r$, an $n$-bit plaintext $x$ is encrypted by xoring round key $k_0$, applying permutation $P_1$, xoring round key $k_1$, etc. The (strong) pseudorandomness of this construction in the random permutation model (i.e., when the permutations $P_1,\ldots,P_r$ are public random permutation oracles that the adversary can query in a black-box way) was studied in a number of recent papers, culminating with the work of Chen and Steinberger (EUROCRYPT~2014), who proved that the $r$-round Even-Mansour cipher is indistinguishable from a truly random permutation up to $O(2^{\frac{rn}{r+1}})$ queries of any adaptive adversary (which is an optimal security bound since it matches a simple distinguishing attack). All results in this entire line of work share the common restriction that they only hold under the assumption that \emph{the round keys $k_0,\ldots,k_r$ and the permutations $P_1,\ldots,P_r$ are independent}. In particular, for two rounds, the current state of knowledge is that the block cipher $E(x)=k_2\oplus P_2(k_1\oplus P_1(k_0\oplus x))$ is provably secure up to $O(2^{2n/3})$ queries of the adversary, when $k_0$, $k_1$, and $k_2$ are three independent $n$-bit keys, and $P_1$ and $P_2$ are two independent random $n$-bit permutations. In this paper, we ask whether one can obtain a similar bound for the two-round Even-Mansour cipher \emph{from just one $n$-bit key and one $n$-bit permutation}. Our answer is positive: when the three $n$-bit round keys $k_0$, $k_1$, and $k_2$ are adequately derived from an $n$-bit master key $k$, and the same permutation $P$ is used in place of $P_1$ and $P_2$, we prove a qualitatively similar $\tilde{O}(2^{2n/3})$ security bound (in the random permutation model). To the best of our knowledge, this is the first ``beyond the birthday bound'' security result for AES-like ciphers that does not assume independent round keys.
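The construction from the abstract fits in a few lines; the toy sketch below uses 8-bit blocks and independent keys and permutations (the paper's single-key variant instead derives $k_0$, $k_1$, $k_2$ from one master key and reuses the same permutation):

import random

N = 256
rng = random.Random(42)
P1 = list(range(N)); rng.shuffle(P1)           # fixed public random permutations
P2 = list(range(N)); rng.shuffle(P2)

def em2(x, k0, k1, k2):
    """Two-round Even-Mansour: E(x) = k2 ^ P2[k1 ^ P1[k0 ^ x]]."""
    return k2 ^ P2[k1 ^ P1[k0 ^ x]]

ct = em2(0xAB, k0=0x13, k1=0x7E, k2=0xC4)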
Last updated:  2014-06-13
Secure Outsourced Computation of the Characteristic Polynomial and Eigenvalues of Matrix
Xing Hu, Chunming Tang
Linear algebra plays an important role in computer science, especially in cryptography. Numerous cryptographic protocols, scientific computations, and numerical computations are based on linear algebra. Many linear algebra tasks can be reduced to some core problems, such as matrix multiplication, the determinant of a matrix, and the characteristic polynomial of a matrix. However, it is difficult for computationally weak clients to execute these tasks independently. Cloud computing is a novel economical paradigm which provides powerful computational resources and enables resource-constrained clients to outsource their mass computing tasks to the cloud. In this paper, we propose a new verifiable and secure outsourcing protocol for the problem of computing the characteristic polynomial and eigenvalues of a matrix. These protocols are not only efficient and secure, but also require no cryptographic assumptions.
Last updated:  2014-06-14
Improved Generic Attacks Against Hash-based MACs and HAIFA
Itai Dinur, Gaëtan Leurent
The security of HMAC (and more general hash-based MACs) against state-recovery and universal forgery attacks was very recently shown to be suboptimal, following a series of surprising results by Leurent \emph{et al.} and Peyrin \emph{et al.} These results have shown that such powerful attacks require much less than $2^{\ell}$ computations (where $\ell$ denotes the internal state size), contradicting the common belief. In this work, we revisit and extend these results, with a focus on properties of concrete hash functions, such as a limited message length and special iteration modes. We begin by devising the first state-recovery attack on HMAC with a HAIFA hash function (using a block counter in every compression function call), with complexity $2^{4\ell/5}$. Then, we describe improved trade-offs between the message length and the complexity of a state-recovery attack on HMAC. Consequently, we obtain improved attacks on several HMAC constructions used in practice, in which the hash functions limit the maximal message length (e.g., SHA-1 and SHA-2). Finally, we present the first universal forgery attacks, which can be applied with short message queries to the MAC oracle. In particular, we devise the first universal forgery attacks applicable to SHA-1 and SHA-2.
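For orientation, the following toy sketch (with a stand-in compression function, not a real hash or real HMAC) shows the structural difference the attacks must cope with: a plain Merkle-Damgård MAC chains an $\ell$-bit state across blocks, while a HAIFA-style hash additionally feeds a block counter into every compression call, which is why dedicated state-recovery techniques are needed.

```python
import hashlib

def comp(state, block, counter=None):
    """Toy compression function (NOT a real one): hash of state||block,
    plus the block counter in HAIFA mode."""
    h = hashlib.sha256(state + block)
    if counter is not None:            # HAIFA: a counter enters every call
        h.update(counter.to_bytes(8, "big"))
    return h.digest()[:16]             # truncate to an l-bit internal state

def md_mac(key, msg, haifa=False):
    """Keyed Merkle-Damgard iteration: the l-bit chaining state is what
    state-recovery attacks try to learn."""
    state = comp(b"\x00" * 16, key)
    blocks = [msg[i:i + 16] for i in range(0, len(msg), 16)] or [b""]
    for i, blk in enumerate(blocks):
        state = comp(state, blk.ljust(16, b"\x00"), counter=i if haifa else None)
    return comp(state, key)            # simplified finalization, unlike real HMAC

tag = md_mac(b"secret-key------", b"message to authenticate", haifa=True)
```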
Last updated:  2014-06-12
Double Level Montgomery Cox-Rower Architecture, New Bounds
Jean-Claude Bajard, Nabil Merkiche
Recently, the Residue Number System and the Cox-Rower architecture have been used to compute Elliptic Curve Cryptography efficiently on FPGA. In this paper, we rewrite the conditions of Kawamura's theorem for error-free base extension in order to define the maximal range of the set from which the moduli of a base can be chosen. At the same time, we give a procedure to compute the truncation function of the Cox module correctly. We also present a modified ALU for the Rower architecture that uses a second level of Montgomery representation. This architecture allows us to select the moduli within the new upper bound defined by our condition, making the Cox-Rower architecture suitable for computing 521-bit ECC with radices down to 16 bits, compared to 18 bits for the classical Cox-Rower architecture. We validate our results through an FPGA implementation of scalar multiplication at classical cryptographic security levels (NIST curves). Our implementation uses 35% fewer LUTs than the state-of-the-art generic ECC implementation using RNS, for the same performance [5]. We also slightly improve the computation time (latency), and our implementation shows the best throughput/area ratio for RNS computation supporting any curve, independently of the chosen base.
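For readers unfamiliar with RNS, the following hedged Python sketch shows the core idea the Cox-Rower architecture parallelizes: arithmetic is carried out channel-wise on residues modulo small coprime moduli, and only the base extension (approximated here by a full CRT reconstruction) involves the moduli jointly.

```python
from math import prod

base = [13, 17, 19, 23]               # pairwise-coprime moduli ("a base")
M = prod(base)

def to_rns(x):
    return [x % m for m in base]

def rns_mul(a, b):
    # Channel-wise: each Rower works on one small modulus independently.
    return [(ai * bi) % m for ai, bi, m in zip(a, b, base)]

def from_rns(residues):
    # Full CRT reconstruction; the Cox-Rower base extension avoids ever
    # materializing this full-size value in hardware.
    x = 0
    for r, m in zip(residues, base):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

x, y = 1234, 5678
assert from_rns(rns_mul(to_rns(x), to_rns(y))) == (x * y) % M
```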
Last updated:  2014-06-12
Efficient Non-Interactive Verifiable Outsourced Computation for Arbitrary Functions
Chunming Tang, Yuenai Chen
Non-interactive verifiable outsourced computation enables a computationally weak client to outsource the computation of a function $f$ on an input $x$ to a more powerful but untrusted server, which returns the result of the function evaluation together with a proof that the computation was performed correctly. A basic requirement of a verifiable outsourced computation scheme is that the client should spend less time preparing the inputs and verifying the proof than computing the function by itself. Some of the best non-interactive schemes are based on Yao's garbled circuits and fully homomorphic encryption, in which the client invests $poly(T)$ running time in the offline stage and $poly(\log T)$ time in the online stage, where $T$ is the time complexity of computing $f$. In this paper, we present a scheme that does not use garbled circuits; instead, it uses a very simple technique to confuse the function to be computed, and the client invests only $poly(\log T)$ running time in the offline stage.
Last updated:  2015-08-24
Security of Symmetric Encryption against Mass Surveillance
Mihir Bellare, Kenneth Paterson, Phillip Rogaway
Motivated by revelations concerning population-wide surveillance of encrypted communications, we formalize and investigate the resistance of symmetric encryption schemes to mass surveillance. The focus is on algorithm-substitution attacks (ASAs), where a subverted encryption algorithm replaces the real one. We assume that the goal of ``big~brother'' is undetectable subversion, meaning that ciphertexts produced by the subverted encryption algorithm should reveal plaintexts to big~brother yet be indistinguishable to users from those produced by the real encryption scheme. We formalize security notions to capture this goal and then offer both attacks and defenses. In the first category we show that successful (from the point of view of big brother) ASAs may be mounted on a large class of common symmetric encryption schemes. In the second category we show how to design symmetric encryption schemes that avoid such attacks and meet our notion of security. The lesson that emerges is the danger of choice: randomized, stateless schemes are subject to attack while deterministic, stateful ones are not.
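A minimal sketch, in the spirit of the biased-randomness attacks the paper formalizes (the toy scheme and all names below are ours, not the paper's): a subverted encryptor resamples its IV until the IV leaks one bit of the key to a holder of the subversion key, while remaining uniform-looking to everyone else.

```python
import hashlib, os

def prf(key, data):
    return hashlib.sha256(key + data).digest()

def encrypt(k, msg, iv=None):
    """Toy randomized scheme: c = (iv, msg XOR PRF(k, iv))."""
    iv = iv or os.urandom(16)
    pad = prf(k, iv)[:len(msg)]
    return iv, bytes(a ^ b for a, b in zip(msg, pad))

def subverted_encrypt(k, msg, big_brother_key, bit_index):
    """ASA sketch: resample the IV until PRF(bb_key, iv) & 1 equals the
    targeted key bit.  Without big_brother_key the output distribution
    looks like the honest one; ~2 tries are needed on average."""
    target = (k[bit_index // 8] >> (bit_index % 8)) & 1
    while True:
        iv = os.urandom(16)
        if prf(big_brother_key, iv)[0] & 1 == target:
            return encrypt(k, msg, iv)

# Big brother recovers one key bit per observed ciphertext:
k, bb = os.urandom(16), os.urandom(16)
iv, ct = subverted_encrypt(k, b"attack at dawn", bb, bit_index=0)
assert prf(bb, iv)[0] & 1 == k[0] & 1
```

This is exactly the danger of choice the abstract names: the attack needs the freedom to pick the randomness, so a deterministic, stateful scheme leaves it no room to hide.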
Last updated:  2014-06-12
Rounding and Chaining LLL: Finding Faster Small Roots of Univariate Polynomial Congruences
Jingguo Bi, Jean-Sébastien Coron, Jean-Charles Faugère, Phong Q. Nguyen, Guénaël Renault, Rina Zeitoun
In a seminal work at EUROCRYPT '96, Coppersmith showed how to find all small roots of a univariate polynomial congruence in polynomial time: this has found many applications in public-key cryptanalysis and in a few security proofs. However, the running time of the algorithm is a high-degree polynomial, which limits experiments: the bottleneck is an LLL reduction of a high-dimensional matrix with extra-large coefficients. We present in this paper the first significant speedups over Coppersmith's algorithm. The first speedup is based on a special property of the matrices used by Coppersmith's algorithm, which allows us to provably speed up the LLL reduction by rounding, and which can also be used to improve the complexity analysis of Coppersmith's original algorithm. The exact speedup depends on the LLL algorithm used: for instance, the speedup is asymptotically quadratic in the bit-size of the small-root bound if one uses the Nguyen-Stehlé $L^2$ algorithm. The second speedup is heuristic and applies whenever one wants to enlarge the root size of Coppersmith's algorithm by exhaustive search. Instead of performing several LLL reductions independently, we exploit hidden relationships between these matrices so that the LLL reductions can be somewhat chained to decrease the global running time. When both speedups are combined, the new algorithm is in practice hundreds of times faster for typical parameters.
Last updated:  2014-06-12
Synthesis of Fault Attacks on Cryptographic Implementations
Gilles Barthe, Francois Dupressoir, Pierre-Alain Fouque, Benjamin Gregoire, Jean-Christophe Zapalowicz
Fault attacks are active attacks in which an adversary with physical access to a cryptographic device, for instance a smartcard, tampers with the execution of an algorithm to retrieve secret material. Since the seminal Bellcore attack on RSA signatures, there has been extensive work to discover new fault attacks against cryptographic schemes and to develop countermeasures against such attacks. Originally focused on high-level algorithmic descriptions, these works increasingly focus on concrete implementations. While lowering the abstraction level leads to new fault attacks, it also makes their discovery significantly more challenging. To face this trend, it is desirable to develop principled, tool-supported approaches that allow a systematic analysis of the security of cryptographic implementations against fault attacks. We propose, implement, and evaluate a new approach for finding fault attacks against cryptographic implementations. Our approach is based on identifying implementation-independent mathematical properties we call fault conditions. We choose them so that it is possible to recover secret data purely by computing on sufficiently many data points that satisfy a fault condition. Fault conditions capture the essence of a large number of attacks from the literature, including lattice-based attacks on RSA. Moreover, they provide a basis for automatically discovering new attacks: using fault conditions, we specify the problem of finding faulted implementations as a program synthesis problem. Using a specialized form of program synthesis, we discover multiple faulted implementations of RSA and ECDSA that realize the fault conditions, and hence lead to fault attacks. Several of the attacks found by our tool are new, and of independent interest.
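The fault condition behind the Bellcore attack mentioned above is concrete enough to show in a few lines (toy parameters, for illustration only): a CRT signature that is correct modulo $p$ but faulty modulo $q$ reveals the factorization of $N$ through a single gcd.

```python
from math import gcd

# Toy RSA-CRT parameters (illustration only).
p, q = 1009, 1013
N = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))
m = 42

# Correct CRT signature: s = m^d mod N assembled from s_p and s_q.
s_p = pow(m, d % (p - 1), p)
s_q = pow(m, d % (q - 1), q)

def crt(a_p, a_q):
    return (a_p * q * pow(q, -1, p) + a_q * p * pow(p, -1, q)) % N

# Fault condition: the half of the computation mod q is corrupted, so
# the faulty signature is correct mod p but wrong mod q, and s'^e - m
# is divisible by p but not by q.
s_faulty = crt(s_p, (s_q + 1) % q)
assert gcd(pow(s_faulty, e, N) - m, N) == p
```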
Last updated:  2014-11-20
Wait a minute! A fast, Cross-VM attack on AES
Gorka Irazoqui, Mehmet Sinan Inci, Thomas Eisenbarth, Berk Sunar
In cloud computing, efficiencies are reaped by resource sharing, such as co-location of computation and deduplication of data. This work exploits resource sharing in virtualization software to build a powerful cache-based attack on AES. We demonstrate the vulnerability by mounting cross-VM Flush+Reload cache attacks in VMware VMs to recover the AES keys of OpenSSL 1.0.1 running inside the victim VM. Furthermore, the attack works in a realistic setting where different VMs are located on separate cores. The modified Flush+Reload attack we present takes only on the order of seconds to minutes to succeed in a cross-VM setting. Therefore, the long-term co-location required by other fine-grained attacks in the literature is not needed. The results of this study show that there is a great security risk to the OpenSSL AES implementation running on VMware cloud services when deduplication is not disabled.
Last updated:  2015-03-25
Just a Little Bit More
Joop van de Pol, Nigel P. Smart, Yuval Yarom
We extend the FLUSH+RELOAD side-channel attack of Benger et al. to extract a significantly larger number of bits of information per observed signature when using OpenSSL. This means that by observing only 25 signatures, we can recover secret keys of the secp256k1 curve, used in the Bitcoin protocol, with probability greater than 50 percent. This is an order-of-magnitude improvement over the previously best known result. The new attack exploits two points. First, unlike previous partial-disclosure attacks, we utilize all the information obtained, not just that in the least or most significant bits; this is enabled by a property of the “standard” curves' choice of group order, which allows extra bits of information to be extracted. Second, whereas previous works require direct information on ephemeral key bits, our attack utilizes the indirect information from the wNAF double-and-add chain.
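For reference, width-$w$ NAF recoding itself is short; the sketch below is the textbook algorithm (not OpenSSL's exact code), and the positions of its nonzero digits are what the double-and-add chain leaks through the cache.

```python
def wnaf(k, w=4):
    """Standard width-w NAF recoding: digits are 0 or odd values in
    (-2**(w-1), 2**(w-1)); any window of w consecutive digits contains
    at most one nonzero digit.  The nonzero positions drive the adds in
    the double-and-add chain that the side channel observes."""
    digits = []
    while k > 0:
        if k & 1:
            d = k % (1 << w)
            if d >= 1 << (w - 1):
                d -= 1 << w
            k -= d
        else:
            d = 0
        digits.append(d)
        k >>= 1
    return digits          # least significant digit first

# Reconstructing k from its digits checks the recoding:
k = 0b1011011101
assert sum(d << i for i, d in enumerate(wnaf(k))) == k
```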
Last updated:  2014-06-14
A Statistical Model for Higher Order DPA on Masked Devices
A. Adam Ding, Liwei Zhang, Yunsi Fei, Pei Luo
A popular and effective countermeasure to protect block cipher implementations against differential power analysis (DPA) attacks is to mask the internal operations of the cryptographic algorithm with random numbers. While the masking technique resists first-order (univariate) DPA attacks, higher-order (multivariate) attacks have been able to break masked devices. In this paper, we formulate a statistical model for higher-order DPA attacks. We derive an analytic success-rate formula that explicitly shows the effects of the algorithmic confusion property, the signal-to-noise ratio (SNR), and masking on the leakage of masked devices. It further provides a formal proof that the centered product combination function is optimal for higher-order attacks in very noisy scenarios. We believe that this statistical model fully reveals how higher-order attacks work around masking, and offers good insights for embedded system designers implementing masking techniques.
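The centered product combining function is easy to demonstrate on simulated traces (a hedged sketch with an idealized leakage model, not the paper's full derivation): each share of a first-order masked bit is uncorrelated with the secret on its own, while the product of the centered shares is not.

```python
import numpy as np

rng = np.random.default_rng(0)
n_traces, sigma = 20000, 1.0

# First-order Boolean masking: the device leaks the mask m and the
# masked sensitive bit s ^ m, each with additive Gaussian noise.
s = rng.integers(0, 2, n_traces)            # sensitive bit (key dependent)
m = rng.integers(0, 2, n_traces)            # fresh random mask per trace
l1 = (s ^ m) + sigma * rng.normal(size=n_traces)
l2 = m + sigma * rng.normal(size=n_traces)

# Centered product combining: subtract the means, then multiply.
c = (l1 - l1.mean()) * (l2 - l2.mean())

for name, leak in [("share 1 alone", l1), ("share 2 alone", l2),
                   ("centered product", c)]:
    rho = np.corrcoef(s, leak)[0, 1]
    print(f"{name:18s} correlation with s: {rho:+.3f}")
# Only the centered product shows a significant (negative) correlation,
# which is what a second-order DPA distinguisher exploits.
```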
Last updated:  2014-10-09
Universally Composable Authentication and Key-exchange with Global PKI
Ran Canetti, Daniel Shahaf, Margarita Vald
Message authentication and key exchange are two of the most basic tasks of cryptography. Solutions based on public-key infrastructure (PKI) are prevalent. Still, the state of the art in composable security analysis of PKI-based authentication and key exchange is somewhat unsatisfactory. Specifically, existing treatments either (a)~make the unrealistic assumption that the PKI is accessible only within the confines of the protocol itself, thus failing to capture real-world PKI-based authentication, or (b)~impose often-unnecessary requirements---such as strong on-line non-transferability---on candidate protocols, thus ruling out natural candidates. We give a modular and universally composable analytical framework for PKI-based message authentication and key exchange protocols. This framework guarantees security even when the PKI is pre-existing and globally available, without being unnecessarily restrictive. Specifically, we model PKI as a global set-up functionality within the \emph{Global~UC} security model [Canetti et al., TCC 2007] and relax the ideal authentication and key exchange functionalities accordingly. We then demonstrate the security of basic signature-based authentication and key exchange protocols. Our modeling makes minimal security assumptions on the PKI in use; in particular, ``knowledge of the secret key'' is not needed.
Last updated:  2015-12-11
A Low-Latency, Low-Area Hardware Oblivious RAM Controller
Christopher W. Fletcher, Ling Ren, Albert Kwon, Marten van Dijk, Emil Stefanov, Dimitrios Serpanos, Srinivas Devadas
We build and evaluate Tiny ORAM, an Oblivious RAM prototype on FPGA. Oblivious RAM is a cryptographic primitive that completely obfuscates an application's data, access pattern, and read/write behavior to/from external memory (such as DRAM or disk). Tiny ORAM makes two main contributions. First, by removing an algorithmic bottleneck in prior work, Tiny ORAM is the first hardware ORAM design to support arbitrary block sizes (e.g., 64 bytes to 4096 bytes). With a 64-byte block size, Tiny ORAM can finish an access in 1.4µs, over 40x faster than the prior-art implementation. Second, through novel algorithmic and engineering-level optimizations, Tiny ORAM reduces the number of symmetric encryption operations by ~3x compared to prior work. Tiny ORAM is also the first design to implement and report real numbers for the cost of symmetric encryption in hardware ORAM constructions. Putting it together, Tiny ORAM requires 18381 (5%) LUTs and 146 (13%) Block RAM on a Xilinx XC7VX485T FPGA, including the cost of encryption.
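The abstract does not spell out the underlying scheme, but hardware ORAM controllers in this line of work build on tree-based ORAMs; the following Path ORAM-style sketch (an assumption on our part, and greatly simplified, with no encryption and an unbounded stash) shows what obfuscating the access pattern means: every logical access reads and rewrites one uniformly random root-to-leaf path.

```python
import random

L, Z = 3, 4                          # tree height and bucket capacity
N_LEAVES = 2 ** L
tree = {b: [] for b in range(2 ** (L + 1) - 1)}        # bucket -> [(addr, data)]
position = {a: random.randrange(N_LEAVES) for a in range(N_LEAVES)}
stash = {}                           # overflow blocks held by the client

def path(leaf):
    """Bucket indices from the root (0) down to the given leaf's bucket."""
    node = leaf + 2 ** L - 1
    out = []
    while True:
        out.append(node)
        if node == 0:
            return out[::-1]
        node = (node - 1) // 2

def access(addr, new_data=None):
    leaf, position[addr] = position[addr], random.randrange(N_LEAVES)
    for b in path(leaf):             # read the whole path into the stash
        stash.update(tree[b])
        tree[b] = []
    data = stash.get(addr)
    if new_data is not None:
        stash[addr] = new_data
    for b in reversed(path(leaf)):   # write back, deepest buckets first
        fits = [a for a in stash if b in path(position[a])][:Z]
        tree[b] = [(a, stash.pop(a)) for a in fits]
    return data

access(5, b"secret")
assert access(5) == b"secret"        # the second access takes a fresh random path
```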
Last updated:  2014-06-12
Revisiting the Gentry-Szydlo Algorithm
H. W. Lenstra, A. Silverberg
We put the Gentry-Szydlo algorithm into a mathematical framework, and show that it is part of a general theory of ``lattices with symmetry''. For large ranks, there is no good algorithm that decides whether a given lattice has an orthonormal basis. But when the lattice is given with enough symmetry, we can construct a provably deterministic polynomial time algorithm to accomplish this, based on the work of Gentry and Szydlo. The techniques involve algorithmic algebraic number theory, analytic number theory, commutative algebra, and lattice basis reduction. This sheds new light on the Gentry-Szydlo algorithm, and the ideas should be applicable to a range of questions in cryptography.