All papers in 2014 (Page 10 of 1029 results)

Last updated:  2014-02-24
How to Use Bitcoin to Design Fair Protocols
Iddo Bentov, Ranjit Kumaresan
We study a model of fairness in secure computation in which an adversarial party that aborts on receiving output is forced to pay a mutually predefined monetary penalty. We then show how the Bitcoin network can be used to achieve the above notion of fairness in the two-party as well as the multiparty setting (with a dishonest majority). In particular, we propose new ideal functionalities and protocols for fair secure computation and fair lottery in this model. One of our main contributions is the definition of an ideal primitive, which we call $\mathcal{F}_{\mathrm{CR}}^\star$ ($\mathrm{CR}$ stands for ``claim-or-refund''), that formalizes and abstracts the exact properties we require from the Bitcoin network to achieve our goals. Naturally, this abstraction allows us to design fair protocols in a hybrid model in which parties have access to the $\mathcal{F}_{\mathrm{CR}}^\star$ functionality, and is otherwise independent of the Bitcoin ecosystem. We also show an efficient realization of $\mathcal{F}_{\mathrm{CR}}^\star$ that requires only two Bitcoin transactions to be made on the network. Our constructions also enjoy high efficiency. In a multiparty setting, our protocols only require a constant number of calls to $\mathcal{F}_{\mathrm{CR}}^\star$ per party on top of a standard multiparty secure computation protocol. Our fair multiparty lottery protocol improves over previous solutions which required a quadratic number of Bitcoin transactions.
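The claim-or-refund semantics described above can be sketched as a plain Python model. Everything here is illustrative: the class name, the round-based time model, and the square-root predicate are not the paper's formalism, which realizes $\mathcal{F}_{\mathrm{CR}}^\star$ with two Bitcoin transactions.

```python
class ClaimOrRefund:
    """Toy model of the claim-or-refund functionality described above.
    Names, the round-based time model, and the example predicate are
    illustrative; the paper realizes this with two Bitcoin transactions."""

    def __init__(self, sender, receiver, coins, predicate, deadline):
        self.sender, self.receiver = sender, receiver
        self.coins = coins            # deposited up front by the sender
        self.predicate = predicate    # relation the receiver's witness must satisfy
        self.deadline = deadline      # round by which a claim must arrive
        self.claimed = False

    def claim(self, witness, now):
        # The receiver claims the coins by publicly revealing a valid
        # witness before the deadline.
        if now <= self.deadline and not self.claimed and self.predicate(witness):
            self.claimed = True
            return self.coins
        return 0

    def refund(self, now):
        # After the deadline, an unclaimed deposit returns to the sender.
        if now > self.deadline and not self.claimed:
            return self.coins
        return 0

# Example: 5 coins payable for a square root of 49, claimable until round 10.
cr = ClaimOrRefund("P1", "P2", 5, lambda w: w * w == 49, deadline=10)
assert cr.claim(7, now=3) == 5   # valid witness in time: receiver is paid
assert cr.refund(now=11) == 0    # already claimed, so no refund
```

An aborting party who never reveals the witness simply forfeits the deposit after the deadline, which is the monetary-penalty mechanism the abstract refers to.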
Last updated:  2014-06-27
Efficient Three-Party Computation from Cut-and-Choose
Seung Geol Choi, Jonathan Katz, Alex J. Malozemoff, Vassilis Zikas
With relatively few exceptions, the literature on efficient (practical) secure computation has focused on secure two-party computation (2PC). It is, in general, unclear whether the techniques used to construct practical 2PC protocols - in particular, the cut-and-choose approach - can be adapted to the multi-party setting. In this work we explore the possibility of using cut-and-choose for practical secure three-party computation. The three-party case has been studied in prior work in the semi-honest setting, and is motivated by the observation that real-world deployments of multi-party computation are likely to involve few parties. We propose a constant-round protocol for three-party computation tolerating any number of malicious parties, whose computational cost is essentially only a small constant worse than that of state-of-the-art two-party protocols.
Last updated:  2014-04-10
Algebraic Properties of Modular Addition Modulo a Power of Two
S. M. Dehnavi, Alireza Rahimipour
Modular addition modulo a power of two is one of the most widely used operators in symmetric cryptography; investigating its cryptographic properties therefore plays a significant role in the design and analysis of symmetric ciphers. The algebraic properties of modular addition modulo a power of two were studied for two operands by Braeken at FSE '05, and the authors of this paper have previously studied this operator in some special cases. In this paper, building on that earlier research, we generalize the algebraic properties of this operator to more than two summands. More precisely, we determine the algebraic degree of the component Boolean functions of the modular addition of an arbitrary number of summands modulo a power of two, viewed as a vectorial Boolean function, along with the number of terms and variables in these component functions. As a result, the algebraic degrees of the component Boolean functions of Generalized Pseudo-Hadamard Transforms are derived.
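The quantity the abstract studies, the algebraic degree of each component Boolean function of modular addition, can be checked by brute force for small parameters via the binary Möbius transform. This sketch is illustrative and independent of the paper's closed-form results.

```python
def anf_degree(truth_table, nvars):
    """Algebraic degree of a Boolean function given by its truth table,
    via the binary Moebius transform (truth table -> ANF coefficients)."""
    coeffs = list(truth_table)
    for i in range(nvars):
        for x in range(1 << nvars):
            if x & (1 << i):
                coeffs[x] ^= coeffs[x ^ (1 << i)]
    return max((bin(x).count("1") for x in range(1 << nvars) if coeffs[x]),
               default=0)

def addition_bit_degrees(n, summands=2):
    """Degrees of the component Boolean functions of
    (x_1 + ... + x_k) mod 2^n, the object studied in the paper."""
    nvars = n * summands
    mask = (1 << n) - 1
    degrees = []
    for bit in range(n):
        table = []
        for v in range(1 << nvars):
            # unpack `summands` n-bit words from the variable assignment v
            words = [(v >> (n * j)) & mask for j in range(summands)]
            table.append((sum(words) >> bit) & 1)
        degrees.append(anf_degree(table, nvars))
    return degrees

print(addition_bit_degrees(3))              # [1, 2, 3]: bit i has degree i + 1
print(addition_bit_degrees(2, summands=3))  # [1, 2]
```

For two summands the familiar pattern emerges (bit 0 is linear, each carry raises the degree by one); the paper's contribution is the general formula for an arbitrary number of summands.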
Last updated:  2014-03-19
Public-Key Encryption Resilient Against Linear Related-Key Attacks Revisited
Hui Cui, Yi Mu, Man Ho Au
Wee (PKC'12) proposed a generic public-key encryption scheme in the setting of related-key attacks. Bellare, Paterson and Thomson (Asiacrypt'12) provided a framework enabling related-key attack (RKA) secure cryptographic primitives for a class of non-linear related-key derivation functions. However, in both constructions, the instantiations achieving full (rather than weak) RKA security assume a private key composed of a single element; in other words, every component of the private key undergoes the same modification, which is impractical in the real world. In this paper, we concentrate on the security of public-key encryption schemes under linear related-key attacks in the setting of multi-element private keys (that is, private keys composed of more than one element), where an adversary is allowed to tamper with any part of the private key stored in a hardware device and subsequently observe the outcome of the public-key encryption system under this modified private key. We define the security model for RKA-secure public-key encryption as chosen-ciphertext and related-key attack (CC-RKA) security, meaning that a public-key encryption scheme remains secure even when an adversary is allowed to query the decryption oracle on linear shifts of any component of the private key. We then present a concrete public-key encryption scheme with a private key formed of several elements, whose CC-RKA security holds under the decisional BDH assumption in the standard model.
Last updated:  2014-10-13
Removing Erasures with Explainable Hash Proof Systems
Michel Abdalla, Fabrice Benhamouda, David Pointcheval
An important problem in secure multi-party computation is the design of protocols that can tolerate adversaries that are capable of corrupting parties dynamically and learning their internal states. In this paper, we make significant progress in this area in the context of password-authenticated key exchange (PAKE) and oblivious transfer (OT) protocols. More precisely, we first revisit the notion of projective hash proofs and introduce a new feature that allows us to \emph{explain} any message sent by the simulator in case of corruption, hence the notion of \emph{Explainable Projective Hashing}. Next, we demonstrate that this new tool generically leads to efficient PAKE and OT protocols that are secure against semi-adaptive adversaries without erasures in the Universal Composability (UC) framework. We then show how to make these protocols secure even against adaptive adversaries, using \emph{non-committing encryption}, in a much more efficient way than generic conversions from semi-adaptive to adaptive security. Finally, we provide concrete instantiations of explainable projective hash functions that lead to the most efficient PAKE and OT protocols known so far, with UC-security against adaptive adversaries, with or without erasures, in the single global CRS setting. As an important side contribution, we also propose a new commitment scheme based on DDH, which leads to the construction of the first one-round PAKE adaptively secure under plain DDH without pairing, assuming reliable erasures, and also improves previous constructions of OT and two- or three-round PAKE schemes.
Last updated:  2016-09-02
On the Information Ratio of Non-Perfect Secret Sharing Schemes
Oriol Farràs, Torben Brandt Hansen, Tarik Kaced, Carles Padró
A secret sharing scheme is non-perfect if some subsets of players that cannot recover the secret value have partial information about it. The information ratio of a secret sharing scheme is the ratio between the maximum length of the shares and the length of the secret. This work is dedicated to the search of bounds on the information ratio of non-perfect secret sharing schemes and the construction of efficient linear non-perfect secret sharing schemes. To this end, we extend the known connections between matroids, polymatroids and perfect secret sharing schemes to the non-perfect case. In order to study non-perfect secret sharing schemes in all generality, we describe their structure through their access function, a real function that measures the amount of information on the secret value that is obtained by each subset of players. We prove that there exists a secret sharing scheme for every access function. Uniform access functions, that is, access functions whose values depend only on the number of players, generalize the threshold access structures. The optimal information ratio of the uniform access functions with rational values has been determined by Yoshida, Fujiwara and Fossorier. By using the tools that are described in our work, we provide a much simpler proof of that result and we extend it to access functions with real values.
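As background for the access-function view, here is a minimal sketch of Shamir's perfect (t, n) threshold scheme, the baseline the abstract generalizes. The prime field and interpolation details are illustrative choices, not from the paper.

```python
import random

P = 2**31 - 1  # a prime field; illustrative choice

def shamir_share(secret, t, n):
    """Shamir's perfect (t, n) threshold scheme: a random polynomial of
    degree t - 1 with constant term `secret`; share i is its value at i.
    Its access function is 0 for fewer than t players and 1 from t on;
    non-perfect (ramp) schemes allow intermediate values, which is what
    permits shares shorter than the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def shamir_recover(shares):
    """Lagrange interpolation at x = 0 over F_P."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = shamir_share(12345, t=3, n=5)
print(shamir_recover(shares[:3]))  # any 3 of the 5 shares suffice: 12345
```

The information ratio here is 1, since each share is one field element, the same length as the secret; the abstract's question is how far below 1 this ratio can be pushed once partial leakage is tolerated.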
Last updated:  2015-11-23
FORSAKES: A Forward-Secure Authenticated Key Exchange Protocol Based on Symmetric Key-Evolving Schemes
Mohammad Sadeq Dousti, Rasool Jalili
This paper suggests a model and a definition for forward-secure authenticated key exchange (AKE) protocols, which can be satisfied without depending on the Diffie-Hellman assumption. The basic idea is to use key-evolving schemes (KES), where the long-term keys of the system get updated regularly and irreversibly. Protocols conforming to our model can be highly efficient, since they do not require the resource-intensive modular exponentiations of the Diffie-Hellman protocol. We also introduce a protocol, called FORSAKES, and prove rigorously that it is a forward-secure AKE protocol in our model. FORSAKES is a very efficient protocol, and can be implemented by merely using hash functions.
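The key-evolving idea, updating the long-term key with a one-way function and erasing the old one so that past epochs stay safe after a compromise, can be sketched with nothing but a hash, in line with the abstract's claim that such protocols need only hash functions. The labels and derivations below are illustrative, not FORSAKES' actual message flow.

```python
import hashlib

def evolve(key: bytes) -> bytes:
    """One epoch of a key-evolving scheme: derive the next long-term key
    with a one-way hash, after which the old key is erased. Forward
    security follows because the update cannot be inverted."""
    return hashlib.sha256(b"evolve" + key).digest()

def session_key(epoch_key: bytes, nonce_a: bytes, nonce_b: bytes) -> bytes:
    """Hash-only session-key derivation from the current epoch key;
    no modular exponentiation is involved."""
    return hashlib.sha256(b"session" + epoch_key + nonce_a + nonce_b).digest()

k0 = b"\x00" * 32          # initial long-term key (toy value)
k1 = evolve(k0)            # epoch 1; k0 is erased at this point
k2 = evolve(k1)            # an attacker learning k2 cannot recover k1 or k0
print(k1 != k2 and len(k2) == 32)
```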
Last updated:  2014-02-24
New Way to Construct Cryptographic Hash Function
WANG Yong
In this paper, a new way to construct cryptographic hash functions is given. The cryptographic hash function is generalized to an uncertain function that can take various specific forms. When computing a hash value, the specific form of the function is determined by the message; since the codebreaker cannot know the message, he cannot know the specific form of the function either. This provides a new kind of one-wayness: the uncertainty of the specific function makes breaking the hash very difficult, because most cryptanalysis of hash functions assumes that the function is known and fixed. As a fixed function is just a special case of an uncertain function, an uncertain function clearly offers more choices, from which a more secure function can be selected.
Last updated:  2014-02-28
Oblivious Radix Sort: An Efficient Sorting Algorithm for Practical Secure Multi-party Computation
Koki Hamada, Dai Ikarashi, Koji Chida, Katsumi Takahashi
We propose a simple and efficient sorting algorithm for secure multi-party computation (MPC). The algorithm is designed to be efficient when the number of parties and the size of the underlying field are small. For a constant number of parties and a field with a constant size, the algorithm has $O(m \log m)$ communication complexity, which is asymptotically the same as the best previous algorithm but achieves $O(1)$ round complexity, where $m$ is the number of items. The algorithm is constructed with the help of a new technique called ``shuffle-and-reveal.'' This technique can be seen as an analogue of the frequently used technique of ``add random number and reveal.'' The feasibility of our algorithm is demonstrated by an implementation on an MPC scheme based on Shamir's secret-sharing scheme with three parties and corruption tolerance of $1$. Our implementation sorts 1 million 32-bit word secret-shared values in 197 seconds.
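For reference, the cleartext algorithm underlying the protocol, a stable least-significant-digit radix sort, looks as follows. In the MPC protocol each pass is carried out obliviously on secret shares via shuffle-and-reveal; none of that machinery appears in this sketch.

```python
def radix_sort(items, word_bits=32, digit_bits=8):
    """Cleartext least-significant-digit radix sort: one stable
    bucketing pass per digit. The MPC protocol realizes each pass
    obliviously; this plaintext analogue only shows the structure."""
    mask = (1 << digit_bits) - 1
    for shift in range(0, word_bits, digit_bits):
        buckets = [[] for _ in range(1 << digit_bits)]
        for x in items:
            buckets[(x >> shift) & mask].append(x)  # stable within a digit
        items = [x for bucket in buckets for x in bucket]
    return items

print(radix_sort([314, 7, 2**20, 42, 7]))  # [7, 7, 42, 314, 1048576]
```

Because every pass is stable and touches each item once, the total work is linear in the number of items times the number of digits, which is why a small constant number of digit passes fits the constant-round claim.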
Last updated:  2014-03-03
Automated Proof for Authorization Protocols of TPM 2.0 in Computational Model (full version)
Weijin Wang, Yu Qin, Dengguo Feng, Xiaobo Chu
We present the first automated proof of the authorization protocols in TPM 2.0 in the computational model. The Trusted Platform Module (TPM) is a chip that enables trust in computing platforms and achieves more security than software alone. The TPM interacts with a caller via a predefined set of commands. Many commands reference TPM-resident structures, and their use may require authorization. The TPM provides an acknowledgement upon receiving an authorization; this interaction ensures mutual authentication between the TPM and the caller. In this paper, we present a computationally sound mechanized proof for the authorization protocols in TPM 2.0. We model the authorization protocols using a probabilistic polynomial-time calculus and prove authentication between the TPM and the caller with the aid of the tool CryptoVerif, which works in the computational model. In addition, the prover gives upper bounds on the probability of breaking the authentication between them.
Last updated:  2014-06-12
Breaking `128-bit Secure' Supersingular Binary Curves (or how to solve discrete logarithms in ${\mathbb F}_{2^{4 \cdot 1223}}$ and ${\mathbb F}_{2^{12 \cdot 367}}$)
Robert Granger, Thorsten Kleinjung, Jens Zumbrägel
In late 2012 and early 2013 the discrete logarithm problem (DLP) in finite fields of small characteristic underwent a dramatic series of breakthroughs, culminating in a heuristic quasi-polynomial time algorithm, due to Barbulescu, Gaudry, Joux and Thomé. Using these developments, Adj, Menezes, Oliveira and Rodríguez-Henríquez analysed the concrete security of the DLP, as it arises from pairings on (the Jacobians of) various genus one and two supersingular curves in the literature, which were originally thought to be $128$-bit secure. In particular, they suggested that the new algorithms have no impact on the security of a genus one curve over ${\mathbb F}_{2^{1223}}$, and reduce the security of a genus two curve over ${\mathbb F}_{2^{367}}$ to $94.6$ bits. In this paper we propose a new field representation and efficient general descent principles which together make the new techniques far more practical. Indeed, at the `128-bit security level' our analysis shows that the aforementioned genus one curve has approximately $59$ bits of security, and we report a total break of the genus two curve.
Last updated:  2014-02-16
Quantum position verification in the random oracle model
Dominique Unruh
We present a quantum position verification scheme in the random oracle model. In contrast to prior work, our scheme does not require bounded storage/retrieval/entanglement assumptions. We also give an efficient position-based authentication protocol. This enables secret and authenticated communication with an entity that is only identified by its position in space.
Last updated:  2014-02-16
An Applicable Public-Key-Cryptosystem Based on NP-Complete Problems
Bjoern Grohmann
A new public-key cryptosystem is presented which we believe will survive possible future attacks by quantum computers. We analyse its performance and its expected security.
Last updated:  2014-10-09
Optimal Algebraic Manipulation Detection Codes in the Constant-Error Model
Ronald Cramer, Carles Padrö, Chaoping Xing
Algebraic manipulation detection (AMD) codes, introduced at EUROCRYPT 2008, may, in some sense, be viewed as {\em keyless} combinatorial authentication codes that provide security in the presence of an {\em oblivious}, {\em algebraic} attacker. Their original applications included robust fuzzy extractors, secure message transmission and robust secret sharing. In recent years, however, a rather diverse array of additional applications in cryptography has emerged. In this paper we consider, for the first time, the regime of arbitrary positive constant error probability $\epsilon$ in combination with unbounded cardinality $M$ of the message space. There are several applications where this model makes sense. Adapting a known bound to this regime, it follows that the binary length $\rho$ of the tag satisfies $\rho\geq \log \log M + \Omega_{\epsilon}(1)$. In this paper, we shall call AMD codes meeting this lower bound {\em optimal}. Known constructions, notably a construction based on dedicated polynomial evaluation codes, are a multiplicative factor 2 {\em off} from being optimal. By a generic enhancement using error-correcting codes, these parameters can be further improved but remain suboptimal. Reaching optimality efficiently turns out to be surprisingly nontrivial. Owing to our refinement of the mathematical perspective on AMD codes, which focuses on symmetries of codes, we propose novel constructive principles. This leads to an explicit construction based on certain BCH codes that improves the parameters of the polynomial construction and to an efficient randomized construction of optimal AMD codes based on certain quasi-cyclic codes. In all our results, the error probability $\epsilon$ can be chosen as an arbitrarily small positive real number.
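For concreteness, the classical degree-1 AMD code from the EUROCRYPT 2008 work this abstract builds on can be sketched as follows. The field size is an illustrative parameter, and the abstract's optimal constructions refine this baseline rather than reproduce it.

```python
import random

Q = 2**61 - 1  # a Mersenne prime field; illustrative parameter choice

def amd_encode(s):
    """Classical d = 1 AMD encoding: represent s in F_Q as the triple
    (s, x, x^3 + s*x) for a secret uniformly random x. No key is shared;
    security rests only on the attacker being oblivious to x."""
    x = random.randrange(Q)
    return (s % Q, x, (pow(x, 3, Q) + s * x) % Q)

def amd_verify(codeword):
    s, x, t = codeword
    return t == (pow(x, 3, Q) + s * x) % Q

s, x, t = amd_encode(1234)
assert amd_verify((s, x, t))
# An oblivious additive shift (ds, dx, dt) goes undetected only if a fixed
# nonzero polynomial of degree <= 2 vanishes at the hidden x, i.e. with
# probability at most 2/Q.
assert not amd_verify((s, x, (t + 1) % Q))
```

The tag here is two field elements long; the abstract's point is that in the constant-error, unbounded-message regime the tag length can be driven down to $\log \log M + O_{\epsilon}(1)$ bits, which this simple construction misses by a factor of about 2.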
Last updated:  2014-02-16
Comments on a novel user authentication and key agreement scheme
Jia-Lun Tsai
In 2013, Sun et al. showed that the related authentication schemes proposed in [2-7] are vulnerable to an insider attack and fail to provide mutual authentication. These attacks can be mounted successfully by an adversary because the server's private key can be used to compute all legal users' private keys. Sun et al. then proposed a new remote user authentication and key agreement scheme for the mobile client-server environment. However, we find that their scheme is still vulnerable to an insider attack, and how to avoid such insider attacks in the client-server environment remains an open problem.
Last updated:  2015-10-15
Prover Anonymous and Deniable Distance-Bounding Authentication
Sebastien Gambs, Cristina Onete, Jean-Marc Robert
In distance-bounding authentication protocols, a verifier confirms that a prover is (1) legitimate and (2) in the verifier's proximity. Proximity checking is done by running time-critical exchanges between both parties. This enables the verifier to detect relay attacks (a.k.a. mafia fraud). While most distance-bounding protocols offer resistance to mafia and distance fraud as well as to impersonation attacks, only few protect the privacy of the authenticating prover. One exception is the protocol due to Hermans, Peeters, and Onete developed in 2013, which offers strong privacy guarantees with respect to a Man-in-the-Middle adversary. However, this protocol provides no privacy guarantees for the prover with respect to a malicious verifier, who can fully identify the prover. Having in mind possible verifier corruption or data leakage from verifiers to a centralized server, we suggest that stronger privacy properties are needed. In this paper, we propose an efficient distance-bounding protocol that gives strong prover privacy guarantees even with respect to the verifier or to a centralized back-end server, storing prover information and managing revocation and registration. Specifically, we formally model and define prover anonymity, a property guaranteeing that verifiers infer only the legitimacy of the prover but not his identity, and deniability, which ensures that the back-end server cannot distinguish prover behavior from malicious verifier behavior (i.e., provers can deny that they authenticated). Finally, we present an efficient protocol that achieves these strong guarantees, give exact bounds for each of its security properties, and prove these statements formally.
Last updated:  2014-03-25
Secure Compression: Theory & Practice
James Kelley, Roberto Tamassia
Encryption and compression are frequently used together in both network and storage systems, for example in TLS. Despite often being used together, there has not been a formal framework for analyzing these combined systems; moreover, the systems are usually just a simple chaining of compression followed by encryption. In this work, we present the first formal framework for proving security in combined compression-encryption schemes and relate it to the traditional notion of semantic security. We call this entropy-restricted semantic security. Additionally, we present a new, efficient cipher, called the squeeze cipher, that combines compression and encryption into a single primitive and provably achieves our entropy-restricted security.
Last updated:  2014-02-16
Polynomial Time Attack on Wild McEliece Over Quadratic Extensions
Alain Couvreur, Ayoub Otmani, Jean-Pierre Tillich
We present a polynomial-time structural attack against the McEliece system based on Wild Goppa codes from a quadratic finite field extension. The attack uses the fact that such codes can be distinguished from random codes to compute a filtration, that is to say, a family of nested subcodes that reveals their secret algebraic description.
Last updated:  2014-02-16
A Note on the CLRW2 Tweakable Block Cipher Construction
Gordon Procter
In this note, we describe an error in the proof for CLRW2 given by Landecker et al. in their paper at CRYPTO 2012 on the beyond-birthday-bound security for tweakable block ciphers. We are able to resolve the issue, give a new bound for the security of CLRW2, and identify a potential limitation of this proof technique when looking to extend the scheme to provide asymptotic security.
Last updated:  2014-03-11
Halka: A Lightweight, Software Friendly Block Cipher Using Ultra-lightweight 8-bit S-box
Sourav Das
This paper presents the design of a lightweight, yet software-friendly, block cipher. Most lightweight block ciphers are nibble-oriented, as the implementation of a 4-bit S-box is much more compact than that of an 8-bit S-box. This paper uses a novel LFSR-based implementation of the multiplicative inverse for 8-bit S-boxes, requiring only 138 gate equivalents. With this compact scheme, we design, for the first time using an 8-bit S-box, a lightweight block cipher competitive with existing standards in terms of hardware gate equivalents.
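As background, the multiplicative inverse in GF(2^8) from which such S-boxes are built can be computed as follows. The reduction polynomial here is the AES one, an assumption on our part, since the abstract specifies neither the paper's field polynomial nor its LFSR circuit.

```python
def gf256_mul(a, b, poly=0x11B):
    """Multiplication in GF(2^8). The reduction polynomial is an
    assumption (the AES polynomial x^8 + x^4 + x^3 + x + 1); the paper's
    choice and its 138-GE LFSR circuit are not given in the abstract."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def gf256_inv(a):
    """Multiplicative inverse via a^254 = a^(-1) in the group GF(2^8)^*,
    since a^255 = 1 for every nonzero a."""
    r = 1
    for _ in range(254):
        r = gf256_mul(r, a)
    return r

# Every nonzero element times its inverse is the field's unit element.
assert all(gf256_mul(a, gf256_inv(a)) == 1 for a in range(1, 256))
```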
Last updated:  2014-02-15
Diffusion Programmable Device : The device to prevent reverse engineering
Mitsuru Shiozaki, Ryohei Hori, Takeshi Fujino
Secret information embedded in integrated circuit (IC) devices such as smart cards is at risk of theft by reverse engineering (RE): the circuit design of an IC can be recovered by RE, and counterfeits can then be fabricated illegally. Secure IC devices therefore require a circuit architecture that is protected against RE attacks. This paper proposes the diffusion programmable device (DPD) architecture as a countermeasure against RE. A look-up table circuit based on the DPD can generate any desired logic function without changing the layout except for the diffusion layer, and the logic function can be programmed by assigning N-type or P-type dopant to part of the active region. A test chip using the DPD-LUT was prototyped in a 180nm CMOS technology, and the operation of various logic functions such as AND, OR, XOR and XNOR was confirmed experimentally.
Last updated:  2014-02-15
MJH: A Faster Alternative to MDC-2
Jooyoung Lee, Martijn Stam
In this paper, we introduce a new class of double-block-length hash functions. Using the ideal cipher model, we prove that these hash functions, dubbed MJH, are asymptotically collision resistant up to $O(2^{n(1-\epsilon)})$ query complexity for any $\epsilon>0$ in the iteration, where $n$ is the block size of the underlying blockcipher. When based on $n$-bit key blockciphers, our construction, being of rate 1/2, provides better provable security than MDC-2, the only known construction of a rate-1/2 double-length hash function based on an $n$-bit key blockcipher with non-trivial provable security. Moreover, since key scheduling is performed only once per message block for MJH, our proposal significantly outperforms MDC-2 in efficiency. When based on a $2n$-bit key blockcipher, we can use the extra $n$ bits of key to increase the amount of payload accordingly. Thus we get a rate-1 hash function that is much faster than existing proposals, such as Tandem-DM, with comparable provable security. The proceedings version of this paper appeared in CT-RSA 2011.
Last updated:  2014-02-15
Key-Indistinguishable Message Authentication Codes
Joel Alwen, Martin Hirt, Ueli Maurer, Arpita Patra, Pavel Raykov
While standard message authentication codes (MACs) guarantee authenticity of messages, they do not, in general, guarantee the anonymity of the sender and recipient. For example, it may be easy for an observer to determine whether or not two authenticated messages were sent by the same party even without any information about the secret key used. However, preserving any uncertainty an attacker may have about the identities of honest parties engaged in authenticated communication is an important goal of many cryptographic applications. For example, this is stated as an explicit goal of modern cellphone authentication protocols~\cite{3GPP} and RFID-based authentication systems\cite{Vaudenay10}. In this work we introduce and construct a new fundamental cryptographic primitive called \emph{key indistinguishable} (KI) MACs. These can be used to realize many of the most important higher-level applications requiring some form of anonymity and authenticity~\cite{AHMPR14}. We show that much (though not all) of the modular MAC construction framework of~\cite{DodisKPW12} gives rise to several variants of KI MACs. On the one hand, we show that KI MACs can be built from hash proof systems and certain weak PRFs, allowing us to base security on such assumptions as DDH, CDH and LWE. Next we show that the two direct constructions from the LPN assumption of~\cite{DodisKPW12} are KI, resulting in particularly efficient constructions based on structured assumptions. On the other hand, we also give a very simple and efficient construction based on a PRF which allows us to base KI MACs on some ideal primitives such as an ideal compression function (using HMAC) or block cipher (using, say, CBC-MAC). In particular, by using our PRF construction, many real-world implementations of MACs can be easily and cheaply modified to obtain a KI MAC.
Finally we show that the transformations of~\cite{DodisKPW12} for increasing the domain size of a MAC as well as for strengthening the type of unforgeability it provides also preserve (or even strengthen) the type of KI enjoyed by the MAC. All together these results provide a wide range of assumptions and construction paths for building various flavors of this new primitive.
Last updated:  2014-02-14
Algorithms in HElib
Shai Halevi, Victor Shoup
HElib is a software library that implements homomorphic encryption (HE), specifically the Brakerski-Gentry-Vaikuntanathan (BGV) scheme, focusing on effective use of the Smart-Vercauteren ciphertext packing techniques and the Gentry-Halevi-Smart optimizations. The underlying cryptosystem serves as the equivalent of a "hardware platform" for HElib, in that it defines a set of operations that can be applied homomorphically, and specifies their cost. This "platform" is a SIMD environment (somewhat similar to Intel SSE and the like), but with unique cost metrics and parameters. In this report we describe some of the algorithms and optimization techniques that are used in HElib for data movement and simple linear algebra over this "platform."
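A typical algorithm of the kind the report describes, summing all SIMD slots with a logarithmic number of rotations, has the following plaintext analogue (for power-of-two slot counts; HElib's actual routines and their names may differ):

```python
def rotate(v, k):
    """Cyclic left rotation of the slot vector: the data-movement
    primitive the packed-ciphertext 'platform' exposes."""
    k %= len(v)
    return v[k:] + v[:k]

def total_sums(v):
    """Fill every slot with the sum of all slots using a logarithmic
    number of rotations; slot count assumed to be a power of two.
    Plaintext analogue only: on ciphertexts each rotation and addition
    would be a homomorphic operation with its own cost."""
    n, shift = len(v), 1
    while shift < n:
        v = [a + b for a, b in zip(v, rotate(v, shift))]
        shift *= 2
    return v

print(total_sums([1, 2, 3, 4]))  # [10, 10, 10, 10]
```

The rotate-and-add doubling pattern is what makes the "unique cost metrics" matter: each homomorphic rotation has a nontrivial price, so algorithms are designed around using as few of them as possible.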
Last updated:  2014-07-01
Reducing the Overhead of MPC over a Large Population
Ashish Choudhury, Arpita Patra, Nigel P. Smart
We present a secure honest majority MPC protocol, against a static adversary, which aims to reduce the communication cost in the situation where there are a large number of parties and the number of adversarially controlled parties is relatively small. Our goal is to reduce the usage of point-to-point channels among the parties, thus enabling them to run multiple different protocol executions. Our protocol has highly efficient theoretical communication cost when compared with other protocols in the literature; specifically, the circuit-dependent communication cost, for circuits of suitably large depth, is $O(|C| \cdot \kappa^7)$, for security parameter $\kappa$ and circuit size $|C|$. Our protocol finds application in cloud computing scenarios, where the fraction of corrupted parties is relatively small. By minimizing the usage of point-to-point channels, our protocol can enable a cloud service provider to run multiple MPC protocols.
Last updated:  2014-02-18
Space-efficient, byte-wise incremental and perfectly private encryption schemes
Kévin Atighehchi
The problem raised by incremental encryption is the storage overhead due to the random blocks that must be provided together with the ciphered versions of a given document. Besides, permitting variable-length modifications on the ciphertext raises privacy-preservation issues. In this paper we present incremental encryption schemes which are space-efficient, byte-wise incremental, and which preserve perfect privacy in the sense that they hide the fact that an update operation has been performed on a ciphered document. For each scheme, the update operations turn out to be very efficient, and we discuss the statistically adjustable trade-off between computational cost and the storage space required by the produced ciphertexts.
Last updated:  2014-02-16
SHipher: Families of Block Ciphers based on SubSet-Sum Problem
Xiali Hei, Binheng Song
In this paper, we describe the SHipher families of block ciphers. We show a symmetric encryption framework based on the subset-sum problem. This framework can provide families of secure, flexible, any-size block ciphers. We have extensively cryptanalyzed our encryption framework. The computational cost can easily be controlled through key selection, and the framework offers excellent performance while being flexible and general enough to admit a variety of implementations over different non-Abelian groups. In this paper, we provide one implementation using a group of matrices of determinant 1. This implementation accepts any block size of the form $3l-1$. If $l=21$, the block size is 62 bits, which suits the full spectrum of lightweight applications, while if $l=341$, the block size is 1022 bits, which provides a high security level, resisting up to $2^{684}$ differential-attack effort and $2^{1022}$ brute-force attack effort.
Last updated:  2014-12-18
Actively Secure Private Function Evaluation
Payman Mohassel, Saeed Sadeghian, Nigel P. Smart
We propose the first general framework for designing actively secure private function evaluation (PFE) that is not based on universal circuits. Our framework is naturally divided into pre-processing and online stages and can be instantiated using any generic actively secure multiparty computation (MPC) protocol. Our framework helps address the main open questions about the efficiency of actively secure PFE. On the theoretical side, our framework yields the first actively secure PFE with linear complexity in the circuit size. On the practical side, we obtain the first actively secure PFE for arithmetic circuits with $O(g \cdot \log g)$ complexity, where $g$ is the circuit size. The best previous construction (of practical interest) is based on an arithmetic universal circuit and has complexity $O(g^5)$. We also introduce the first linear zero-knowledge proof of correctness of an ``extended permutation'' of ciphertexts (a generalization of ZK proofs of correct shuffles), which may be of independent interest.
Last updated:  2014-06-02
Dishonest Majority Multi-Party Computation for Binary Circuits
Enrique Larraia, Emmanuela Orsini, Nigel P. Smart
We extend the Tiny-OT two-party protocol of Nielsen et al. (CRYPTO 2012) to the case of $n$ parties in the dishonest majority setting. This is done by presenting a novel way of transferring pairwise authentications into global authentications. As a by-product we obtain a more efficient way of producing globally authenticated shares, which in turn leads to a more efficient two-party protocol than that of Nielsen et al.
Last updated:  2014-02-14
Improved Slender-set Linear Cryptanalysis
Guo-Qiang Liu, Chen-Hui Jin, Chuan-Da Qi
In 2013, Borghoff \emph{et al}. introduced a slender-set linear cryptanalysis on PRESENT-like ciphers with key-dependent secret S-boxes. In this paper, we propose an improved slender-set linear attack on PRESENT-like ciphers with secret S-boxes. We investigate three new cryptanalytic techniques and use them to recover the secret S-boxes efficiently. Our first new idea is a technique for supporting consistency of the partitions of the input to the secret S-boxes. Our second new technique is a more efficient method for recovering the coordinate functions of the secret S-boxes, based on more information than is used in Borghoff's attack. Our third new technique is a method for constructing all correct coordinate functions of the secret S-boxes by a pruning search algorithm. In particular, we implemented a successful linear attack on full-round Maya in practice. In our experiments, the correct S-box can be recovered with $2^{36}$ known plaintexts, $2^{18.9}$ time complexity and negligible memory complexity, at a success rate of 87.5\%. Our attack is an improvement of, and a sequel to, Borghoff's work on PRESENT-like ciphers with secret S-boxes.
Last updated:  2014-02-14
Indistinguishability Obfuscation and UCEs: The Case of Computationally Unpredictable Sources
Chris Brzuska, Pooya Farshim, Arno Mittelbach
Random oracles are powerful cryptographic objects. They facilitate the security proofs of an impressive number of practical cryptosystems ranging from KDM-secure and deterministic encryption to point-function obfuscation and many more. However, due to an uninstantiability result of Canetti, Goldreich, and Halevi (STOC 1998) random oracles have become somewhat controversial. Recently, Bellare, Hoang, and Keelveedhi (BHK; CRYPTO 2013 and ePrint 2013/424, August 2013) introduced a new abstraction called Universal Computational Extractors (UCEs), and showed that they suffice to securely replace random oracles in a number of prominent applications, including all those mentioned above, without suffering from the aforementioned uninstantiability result. This, however, leaves open the question of constructing UCEs in the standard model. We show that the existence of indistinguishability obfuscation (iO) implies (non-black-box) attacks on all the definitions that BHK proposed within their UCE framework in the original version of their paper, in the sense that no concrete hash function can satisfy them. We also show that this limitation can be overcome, to some extent, by restraining the class of admissible adversaries via a statistical notion of unpredictability. Following our attack, BHK (ePrint 2013/424, September 2013), independently adopted this approach in their work. In the updated version of their paper, BHK (ePrint 2013/424, September 2013) also introduce two other novel source classes, called bounded parallel sources and split sources, which aim at recovering the computational applications of UCEs that fall outside the statistical fix. These notions keep to a computational notion of unpredictability, but impose structural restrictions on the adversary so that our original iO attack no longer applies. We extend our attack to show that indistinguishability obfuscation is sufficient to also break the UCE security of any hash function against bounded parallel sources. 
Towards this goal, we use the randomized encodings paradigm of Applebaum, Ishai, and Kushilevitz (STOC 2004) to parallelize the obfuscated circuit used in our attack, so that it can be computed by a bounded parallel source whose second stage consists of constant-depth circuits. We conclude by discussing the composability and feasibility of hash functions secure against split sources.
Last updated:  2014-02-14
Towards Characterizing Complete Fairness in Secure Two-Party Computation
Gilad Asharov
The well-known impossibility result of Cleve (STOC 1986) implies that, in general, it is impossible to securely compute a function with \emph{complete fairness} without an honest majority. Since then, the accepted belief has been that \emph{nothing} non-trivial can be computed with complete fairness in the two-party setting. The surprising work of Gordon, Hazay, Katz and Lindell (STOC 2008) shows that this belief is false, and that there exist \emph{some} non-trivial (deterministic, finite-domain) boolean functions that can be computed fairly. This raises the fundamental question of characterizing complete fairness in secure two-party computation. In this work we show not only that some functions can be computed fairly, but that an \emph{enormous number} of functions can. In fact, \emph{almost all} boolean functions with distinct domain sizes can be computed with complete fairness (for instance, more than $99.999\%$ of the boolean functions with domain sizes $31 \times 30$). The class of functions shown to be possible also includes rather involved and highly non-trivial tasks, such as set-membership, evaluation of a private (boolean) function, private matchmaking and set-disjointness. In addition, we demonstrate that fairness is not restricted to the class of symmetric boolean functions where both parties get the same output, which is the only known feasibility result. Specifically, we show that fairness is also possible for asymmetric boolean functions where the output of the parties is not necessarily the same. Moreover, we consider the class of functions with \emph{non-binary} output, and show that fairness is possible \emph{for any finite range}. The constructions are based on the protocol of Gordon et al., and their analysis uses tools from convex geometry.
Last updated:  2020-10-30
Towards Constructing Fully Homomorphic Encryption without Ciphertext Noise from Group Theory
Koji Nuida
In CRYPTO 2008, one year earlier than Gentry's pioneering \lq\lq bootstrapping'' technique on constructing the first fully homomorphic encryption (FHE) scheme, Ostrovsky and Skeith III had suggested a completely different approach towards achieving FHE. Namely, they showed that the $\mathsf{NAND}$ operator can be realized in some \emph{non-commutative} groups; consequently, in combination with the $\mathsf{NAND}$ operator realized in such a group, homomorphically encrypting the elements of the group will yield an FHE scheme. However, no observations on how to homomorphically encrypt the group elements were presented in their paper, and there have been no follow-up studies in the literature based on their approach. The aim of this paper is to exhibit more clearly what is sufficient and what seems to be effective for constructing FHE schemes based on their approach. First, we prove that it is sufficient to find a surjective homomorphism $\pi \colon \widetilde{G} \to G$ between finite groups for which bit operators are realized in $G$ and the elements of the kernel of $\pi$ are indistinguishable from the general elements of $\widetilde{G}$. Secondly, we propose new methodologies to realize bit operators in some groups, which enlarges the class of groups $G$ that can be used in our framework. Thirdly, we give an observation that a naive approach using matrix groups would never yield secure FHE due to an attack utilizing the \lq\lq linearity'' of the construction. Then we propose an idea to avoid such \lq\lq linearity'' by using combinatorial group theory, and give a prototypical but still \emph{incomplete} construction in the sense that it is \lq\lq non-compact'' FHE, i.e., the ciphertext size is unbounded (though the ciphertexts are noise-free as opposed to the existing FHE schemes). Completely realizing FHE schemes based on our proposed framework is left as a future research topic.
Last updated:  2014-03-20
Tight security bounds for multiple encryption
Yuanxi Dai, John Steinberger
Multiple encryption---the practice of composing a blockcipher several times with itself under independent keys---has received considerable attention of late from the standpoint of provable security. Despite these efforts, proving definitive security bounds (i.e., with matching attacks) has remained elusive even for the special case of triple encryption. In this paper we close the gap by improving both the best known attacks and the best known provable security, so that the two bounds match. Our results apply to an arbitrary number of rounds and show that the security of $\ell$-round multiple encryption is precisely $\exp(\kappa + \min\{\kappa(\ell'-2)/2,\; n(\ell'-2)/\ell'\})$, where $\exp(t) = 2^t$ and $\ell' = 2\lceil \ell/2\rceil$ is the smallest even integer greater than or equal to $\ell$, for all $\ell \geq 1$. Our technique is based on Patarin's H-coefficient method and reuses a combinatorial result of Chen and Steinberger originally required in the context of key-alternating ciphers.
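As a quick numerical illustration of the bound above (ours, not the paper's; the parameters $\kappa = 80$ and $n = 64$ are arbitrary example values), a sketch in Python:

```python
import math

def security_exponent(ell, kappa, n):
    """Security exponent of ell-round multiple encryption:
    kappa + min(kappa*(ell'-2)/2, n*(ell'-2)/ell'),
    where ell' is the smallest even integer >= ell."""
    ell_p = 2 * math.ceil(ell / 2)  # round ell up to the nearest even integer
    return kappa + min(kappa * (ell_p - 2) / 2, n * (ell_p - 2) / ell_p)

# Illustrative parameters only: 80-bit keys, 64-bit blocks.
for ell in (1, 2, 3, 4, 5, 6):
    print(ell, security_exponent(ell, 80, 64))
```

For these example parameters, single and double encryption sit at exponent 80, while triple (and quadruple) encryption reaches 112, matching the familiar meet-in-the-middle picture.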
Last updated:  2014-02-14
Unified, Minimal and Selectively Randomizable Structure-Preserving Signatures
Masayuki Abe, Jens Groth, Miyako Ohkubo, Mehdi Tibouchi
We construct a structure-preserving signature scheme that is selectively randomizable and works in all types of bilinear groups. We give matching lower bounds showing that our structure-preserving signature scheme is optimal with respect to both signature size and public verification key size. State-of-the-art structure-preserving signatures in the asymmetric setting consist of 3 group elements, which is known to be optimal. Our construction preserves the signature size of 3 group elements and at the same time minimizes the verification key size to 1 group element. Depending on the application, it is sometimes desirable to have strong unforgeability and in other situations desirable to have randomizable signatures. To get the best of both worlds, we introduce the notion of selective randomizability, where the signer may for specific signatures provide randomization tokens that enable randomization. Our structure-preserving signature scheme unifies the different pairing-based settings, since it can be instantiated in both symmetric and asymmetric groups. Since previously optimal structure-preserving signatures had only been constructed in asymmetric bilinear groups, this closes an important gap in our knowledge. Having a unified signature scheme that works in all types of bilinear groups is not just conceptually nice but also gives a hedge against future cryptanalytic attacks. An instantiation of our signature scheme in an asymmetric bilinear group may remain secure even if cryptanalysts later discover an efficiently computable homomorphism between the source groups.
Last updated:  2014-06-14
Faster Bootstrapping with Polynomial Error
Jacob Alperin-Sheriff, Chris Peikert
\emph{Bootstrapping} is a technique, originally due to Gentry (STOC 2009), for ``refreshing'' ciphertexts of a somewhat homomorphic encryption scheme so that they can support further homomorphic operations. To date, bootstrapping remains the only known way of obtaining fully homomorphic encryption for arbitrary unbounded computations. Over the past few years, several works have dramatically improved the efficiency of bootstrapping and the hardness assumptions needed to implement it. Recently, Brakerski and Vaikuntanathan~(ITCS~2014) reached the major milestone of a bootstrapping algorithm based on Learning With Errors for \emph{polynomial} approximation factors. Their method uses the Gentry-Sahai-Waters~(GSW) cryptosystem~(CRYPTO~2013) in conjunction with Barrington's ``circuit sequentialization'' theorem~(STOC~1986). This approach, however, results in \emph{very large} polynomial runtimes and approximation factors. (The approximation factors can be improved, but at even greater costs in runtime and space.) In this work we give a new bootstrapping algorithm whose runtime and associated approximation factor are both \emph{small} polynomials. Unlike most previous methods, ours implements an elementary and efficient \emph{arithmetic} procedure, thereby avoiding the inefficiencies inherent to the use of boolean circuits and Barrington's Theorem. For $2^{\lambda}$ security under conventional lattice assumptions, our method requires only a \emph{quasi-linear} $\Otil(\lambda)$ number of homomorphic operations on GSW ciphertexts, which is optimal (up to polylogarithmic factors) for schemes that encrypt just one bit per ciphertext. As a contribution of independent interest, we also give a technically simpler variant of the GSW system and a tighter error analysis for its homomorphic operations.
Last updated:  2014-02-23
The Related-Key Analysis of Feistel Constructions
Manuel Barbosa, Pooya Farshim
It is well known that the classical three- and four-round Feistel constructions are provably secure under chosen-plaintext and chosen-ciphertext attacks, respectively. However, irrespective of the number of rounds, no Feistel construction can resist related-key attacks where the keys can be offset by a constant. In this paper we show that, under suitable reuse of round keys, security under related-key attacks can be provably attained. Our modification is substantially simpler and more efficient than alternatives obtained using generic transforms, namely the PRG transform of Bellare and Cash (CRYPTO 2010) and its random-oracle analogue outlined by Lucks (FSE 2004). Additionally, we formalize Lucks' transform and show that it does not always work if related keys are derived in an oracle-dependent way; we then prove it sound under appropriate restrictions.
Last updated:  2014-02-10
A new class of system oriented PKC, K(I)SOPKC.
Masao KASAHARA
In this paper, we present a new type of PKC, a system-oriented PKC referred to as K(I)SOPKC, that can be well adapted to secure, high-speed communication between the various systems and organizations of the present-day network society. K(I)SOPKC is constructed on the basis of K(XIV)SE(1)PKC, a modified version of K(XII)SE(1)PKC, K(XIII)SE(1)PKC and ${\rm K_p(XIII)SE(1)PKC}$.
Last updated:  2014-12-11
On Cryptographic Applications of Matrices Acting on Finite Commutative Groups and Rings
S. M. Dehnavi, A. Mahmoodi Rishakani, M. R. Mirzaee Shamsabad
In this paper, we investigate matrices acting on finite commutative groups and rings; in fact, we study modules over the ring of matrices over $\mathbb{Z}_N$, and also modules over the ring $(\mathbb{F}_2^t, \oplus, \wedge)$; these new algebraic constructions generalize some of the constructions previously presented by the authors of this paper. Based on this mathematical investigation, we present new linearized and nonlinear MDS diffusion layers. Then, we study some types of nonlinear number generators over $\mathbb{Z}_{2^n}$ and present a lower bound on the period of these new nonlinear number generators. As a consequence, we present nonlinear recurrent sequences over $\mathbb{Z}_{2^n}$ whose periods are multiples of the period of the corresponding sigma-LFSRs.
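As general background for matrix-based diffusion layers over $\mathbb{Z}_N$ (an illustrative sketch of standard linear algebra, not the paper's construction): a matrix acts on a state vector by modular multiplication, and a square matrix over $\mathbb{Z}_N$ is invertible exactly when its determinant is a unit modulo $N$:

```python
from math import gcd

N = 256  # example modulus: state words in Z_256

def mat_vec(M, v, n=N):
    # Apply a matrix over Z_n to a state vector (the diffusion step).
    return [sum(m * x for m, x in zip(row, v)) % n for row in M]

def det2(M, n=N):
    # Determinant of a 2x2 matrix over Z_n.
    return (M[0][0] * M[1][1] - M[0][1] * M[1][0]) % n

# An example 2x2 matrix; it is invertible over Z_256 iff gcd(det, 256) == 1.
M = [[1, 2],
     [3, 7]]
assert gcd(det2(M), N) == 1  # det = 1, so M is invertible mod 256
print(mat_vec(M, [10, 20]))
```

Choosing entries so that every square submatrix also has a unit determinant is what upgrades such a matrix to an MDS diffusion layer.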
Last updated:  2014-03-07
Cryptanalysis of KLEIN (Full version)
Virginie Lallemand, María Naya-Plasencia
Due to the recent emergence of resource-constrained devices, cryptographers are facing the problem of designing dedicated lightweight ciphers. KLEIN is one of the resulting primitives, proposed at RFIDSec in 2011 by Gong et al. This family of software-oriented block ciphers has an innovative structure, as it combines 4-bit S-boxes with the AES MixColumn transformation, and has attracted the attention of cryptanalysts. Several security analyses have been published, in particular on the 64-bit key version. The best of these results could attack up to 10 rounds out of the total of 12. In this paper we propose a new family of attacks that can, for the first time, cryptanalyze all 12 rounds of the complete version of KLEIN-64. Our attacks use truncated differential paths and are based both on some of the notions developed in previous attacks and on new ideas of ours that considerably improve the performance. To prove the validity of our attacks, we have implemented reduced-round versions of them. In particular, we were able to reproduce a practical attack that recovers the whole key on 10 rounds, which also corresponds to the best practical attack against KLEIN-64.
Last updated:  2014-09-09
Multiple Differential Cryptanalysis of Round-Reduced PRINCE (Full version)
Anne Canteaut, Thomas Fuhr, Henri Gilbert, María Naya-Plasencia, Jean-René Reinhard
PRINCE is a lightweight block cipher proposed by Borghoff et al. at Asiacrypt 2012. Due to its originality, novel design and low number of rounds, it has already attracted the attention of a large number of cryptanalysts. Several results on reduced versions have been published to date; the best one is an attack on 8 rounds out of the total number of 12. In this paper we improve this result by two rounds: we provide an attack on 10 rounds of the cipher with a data complexity of $2^{57.94}$ and a time complexity of $2^{60.62}$, corresponding to 118.56 security bits, instead of 126 for the generic attacks. Our attack uses multiple differentials and exploits some properties of PRINCE for recovering the whole key. PRINCE is defined as a member of a family of ciphers, differing by the choice of an Sbox among a distinguished set. We also show that the security offered by all the members of the family is not equivalent, by identifying an Sbox for which our attack can be extended up to 11 rounds with a data complexity of $2^{59.81}$ and a time complexity of $2^{62.43}$.
Last updated:  2014-02-10
A Bound For Multiparty Secret Key Agreement And Implications For A Problem Of Secure Computing
Himanshu Tyagi, Shun Watanabe
We consider secret key agreement by multiple parties observing correlated data and communicating interactively over an insecure communication channel. Our main contribution is a single-shot upper bound on the length of the secret keys that can be generated, without making any assumptions on the distribution of the underlying data. Heuristically, we bound the secret key length in terms of ``how far'' the joint distribution of the initial observations of the parties and the eavesdropper is from a distribution that renders the observations of the parties conditionally independent across some partition, when conditioned on the eavesdropper's side information. The closeness of the two distributions is measured in terms of the exponent of the probability of error of type II for a binary hypothesis testing problem, thus bringing out a structural connection between secret key agreement and binary hypothesis testing. When the underlying data consists of an independent and identically distributed sequence, an application of our bound recovers several known upper bounds for the asymptotic rate of a secret key that can be generated, without requiring the agreement error probability or the security index to vanish to 0 asymptotically. Also, we consider the following problem of secure function computation with trusted parties: Multiple parties observing correlated data seek to compute a function of their collective data. To this end, they communicate interactively over an insecure communication channel. It is required that the value of the function be concealed from an eavesdropper with access to the communication. When is such a secure computation of a given function feasible? Using the aforementioned upper bound, we derive a necessary condition for the existence of a communication protocol that allows the parties to reliably recover the value of a given function, while keeping this value concealed from an eavesdropper with access to (only) the communication.
Last updated:  2015-07-22
AnoA: A Framework For Analyzing Anonymous Communication Protocols
Michael Backes, Aniket Kate, Praveen Manoharan, Sebastian Meiser, Esfandiar Mohammadi
Anonymous communication (AC) protocols such as the widely used Tor network have been designed to provide anonymity over the Internet to their participating users. While AC protocols have been the subject of several security and anonymity analyses in recent years, there still does not exist a framework for analyzing complex systems, such as Tor, and their different anonymity properties in a unified manner. In this work we present AnoA: a generic framework for defining, analyzing, and quantifying anonymity properties for AC protocols. In addition to quantifying the (additive) advantage of an adversary in an indistinguishability-based definition, AnoA uses a multiplicative factor, inspired by differential privacy. AnoA enables a unified quantitative analysis of well-established anonymity properties, such as sender anonymity, sender unlinkability, and relationship anonymity. AnoA modularly specifies adversarial capabilities by a simple wrapper construction, called adversary classes. We examine the structure of these adversary classes and identify conditions under which it suffices to establish anonymity guarantees for single messages in order to derive guarantees for arbitrarily many messages. We coin this condition single-challenge reducibility. This then leads us to the definition of Plug'n'Play adversary classes (PAC), which are easy to use, expressive, and single-challenge reducible. Additionally, we show that our framework is compatible with the universal composability (UC) framework. Leveraging a recent security proof about Tor, we illustrate how to apply AnoA to a simplified version of Tor against passive adversaries.
Last updated:  2014-02-07
Randomized and Efficient Authentication in Mobile Environments
Wei Jiang, Dan Lin, Feng Li, Elisa Bertino
In a mobile environment, a number of users act as network nodes and communicate with one another to acquire location-based information and services. This emerging paradigm has opened up new business opportunities and enables numerous applications such as road-safety enhancement, service recommendations and mobile entertainment. A fundamental issue that impacts the success of these applications is the security and privacy concerns raised regarding the mobile users: a malicious user or service provider can track the locations a user has traveled, so that other malicious acts can be carried out more effectively against that user. Therefore, the challenge becomes how to authenticate mobile users while preserving their actual identity and location privacy. In this work, we propose a novel randomized, privacy-preserving authentication protocol based on homomorphic encryption. The protocol allows individual users to self-generate any number of authenticated identities to achieve full anonymity in a mobile environment. The proposed protocol prevents users from being tracked by any single party, including peer users, service providers, authentication servers, and other infrastructure. Meanwhile, our protocol also provides traceability in case of dispute. We have conducted an experimental study which demonstrates the efficiency of our protocol. Another advantage of the proposed protocol is its lightweight computation and storage requirements, making it particularly suitable for mobile devices with limited computation power and storage space.
Last updated:  2014-02-07
Multipermutations in Crypto World: Different Faces of the Perfect Diffusion Layer
Aleksandra Mileva
Diffusion layers, and especially perfect diffusion layers, are a very important subject of cryptographic research. The main quest is a perfect diffusion layer with more optimal hardware and/or software implementations (if possible, the same should also hold for its inverse). Different structures can be used to represent these layers, but all are interconnected. We start with multipermutations as a tool for obtaining perfect diffusion, and we summarize the interconnections between them, MDS codes, Latin squares and quasigroups, orthogonal arrays and $m$-arcs. We give a new construction of a perfect recursive diffusion layer from $r$-recursive MDS codes, or recursively $r$-differentiable quasigroups.
Last updated:  2016-01-06
RECTANGLE: A Bit-slice Lightweight Block Cipher Suitable for Multiple Platforms
Wentao Zhang, Zhenzhen Bao, Dongdai Lin, Vincent Rijmen, Bohan Yang, Ingrid Verbauwhede
In this paper, we propose a new lightweight block cipher named RECTANGLE. The main idea of the design of RECTANGLE is to allow lightweight and fast implementations using bit-slice techniques. RECTANGLE uses an SP-network. The substitution layer consists of 16 4x4 S-boxes in parallel. The permutation layer is composed of 3 rotations. As shown in this paper, RECTANGLE offers great performance in both hardware and software environments, which provides enough flexibility for different application scenarios. The following are 3 main advantages of RECTANGLE. First, RECTANGLE is extremely hardware-friendly. For the 80-bit key version, a one-cycle-per-round parallel implementation needs only 1600 gates for a throughput of 246 Kbits/sec at a 100 KHz clock and an energy efficiency of 3.0 pJ/bit. Second, RECTANGLE achieves a very competitive software speed among the existing lightweight block ciphers due to its bit-slice style. Using 128-bit SSE instructions, a bit-slice implementation of RECTANGLE reaches an average encryption speed of about 3.9 cycles/byte for messages around 3000 bytes. Last but not least, we propose new design criteria for the RECTANGLE S-box. Due to our careful selection of the S-box and the asymmetric design of the permutation layer, RECTANGLE achieves a very good security-performance tradeoff. Our extensive and deep security analysis shows that the highest number of rounds that we can attack is 18 (out of 25).
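To illustrate the bit-slice idea underlying such designs (a toy sketch of ours; the boolean formulas below are an arbitrary stand-in, not RECTANGLE's actual S-box), each of four machine words holds one bit-plane of 16 parallel 4-bit inputs, so a single sequence of bitwise operations evaluates the map on all 16 nibbles at once:

```python
MASK = 0xFFFF  # 16 parallel lanes, one bit per lane in each plane

def sub_bits(x0, x1, x2, x3):
    # Boolean circuit of the toy 4-bit map, applied to whole bit-planes.
    y0 = x0 ^ (x1 & x2)
    y1 = x1 ^ (x2 & x3)
    y2 = x2 ^ (x3 & x0)
    y3 = x3 ^ (x0 & x1)
    return y0 & MASK, y1 & MASK, y2 & MASK, y3 & MASK

def sub_nibble(v):
    # The same map on a single 4-bit value, for cross-checking.
    bits = [(v >> i) & 1 for i in range(4)]
    y = sub_bits(*bits)
    return sum((y[i] & 1) << i for i in range(4))

# Pack 16 nibbles into bit-planes, evaluate once, unpack, and compare.
nibbles = list(range(16))
planes = [sum(((v >> i) & 1) << j for j, v in enumerate(nibbles)) for i in range(4)]
out = sub_bits(*planes)
sliced = [sum(((out[i] >> j) & 1) << i for i in range(4)) for j in range(16)]
assert sliced == [sub_nibble(v) for v in nibbles]
```

The same packing idea is what lets a 128-bit SSE register evaluate many S-box instances per instruction.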
Last updated:  2014-02-05
Garbled RAM Revisited, Part II
Steve Lu, Rafail Ostrovsky
In EUROCRYPT 2013, Lu and Ostrovsky proposed the notion of Garbled RAM (GRAM) programs. These GRAM programs are analogous to the classic result of Yao's garbled circuits: a large encrypted memory can first be provided to the evaluator, and then a program can separately be garbled and sent to the evaluator to securely execute while learning nothing but the output of the program and its running time. The key feature of GRAM is that it harnesses the natural complexity-theoretic power that Random Access Memory (RAM) programs have over circuits, such as in binary search or randomized algorithms, where the program can be executed in a garbled way without "unrolling" it into a circuit. The candidate Lu-Ostrovsky GRAM scheme proposed in that paper is based on the existence of one-way functions, but due to an implicit reliance on a complex "circular" use of Yao garbled circuits, the scheme requires an additional circularity assumption and may not be secure otherwise. In this paper, we show how to construct efficient GRAM without circularity and based solely on the existence of any one-way function. The novel approach that allows us to break the circularity is a modification of the Goldreich-Goldwasser-Micali PRF construction. More specifically, we modify the PRF to allow PRF-keys to be "adaptively revoked" during run-time at an additive cost of roughly $\log n$ per revocation. Then, to improve the overhead of this scheme, we apply a delicate recursion technique that bootstraps mini-GRAM schemes into larger, more powerful ones while still avoiding circularity in the hybrid arguments. This results in secure GRAM with overhead of poly$(k) \cdot \min(t, n^\eps)$ for any constant $\eps>0$, where $n$ is the size of memory and $t$ is the running time. In a companion work (Part I), Gentry, Halevi, Raykova, and Wichs show an alternative approach using identity-based encryption to solve the circularity problem. Their scheme achieves overhead of poly($k$)polylog($n$) assuming the existence of IBE.
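The Goldreich-Goldwasser-Micali construction that the authors modify builds a PRF from a length-doubling PRG by walking a binary tree keyed by the input bits; a minimal sketch (ours, using SHA-256 as a stand-in PRG, which is not the paper's instantiation) looks like:

```python
import hashlib

def prg(seed):
    """Length-doubling PRG G(s) = (G_0(s), G_1(s)); SHA-256 is our stand-in."""
    return (hashlib.sha256(b"\x00" + seed).digest(),
            hashlib.sha256(b"\x01" + seed).digest())

def ggm_prf(key, input_bits):
    """GGM PRF: descend the tree, taking the left or right PRG half per bit."""
    s = key
    for b in input_bits:
        s = prg(s)[int(b)]
    return s

k = b"\x00" * 32
print(ggm_prf(k, "0101").hex())
```

The tree structure is what makes the "adaptive revocation" of keys mentioned above plausible: each revocation only touches the roughly $\log n$ seeds on one root-to-leaf path.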
Last updated:  2014-02-05
Garbled RAM Revisited, Part I
Craig Gentry, Shai Halevi, Mariana Raykova, Daniel Wichs
The notion of *garbled random-access machines* (garbled RAMs) was introduced by Lu and Ostrovsky (Eurocrypt 2013). It can be seen as an analogue of Yao's garbled circuits, that allows a user to garble a RAM program directly, without performing the expensive step of converting it into a circuit. In particular, the size of the garbled program and the time it takes to create and evaluate it are only proportional to its running time on a RAM rather than its circuit size. Lu and Ostrovsky gave a candidate construction of this primitive based on pseudo-random functions (PRFs). The starting point of this work is a subtle yet difficult-to-overcome issue with the Lu-Ostrovsky construction, that prevents a proof of security from going through. Specifically, the construction requires a complex "circular" use of Yao garbled circuits and PRFs. As our main result, we show how to remove this circularity and get a provably secure solution using *identity-based encryption* (IBE). We also abstract out, simplify and generalize the main ideas behind the Lu-Ostrovsky construction, making them easier to understand and analyze. In a companion work to ours (Part II), Lu and Ostrovsky show an alternative approach to solving the circularity problem. Their approach relies only on the existence of one-way functions, at the price of higher overhead. Specifically, our construction has overhead $\poly(k)\polylog(n)$ (with $k$ the security parameter and $n$ the data size), while the Lu-Ostrovsky approach can achieve overhead $\poly(k)n^\eps$ for any constant $\eps>0$. It remains as an open problem to achieve an overhead of $\poly(k)\polylog(n)$ assuming only the existence of one-way functions.
Last updated:  2014-02-05
Efficient Round Optimal Blind Signatures
Sanjam Garg, Divya Gupta
Known constructions of blind signature schemes suffer from at least one of the following limitations: (1) they rely on parties having access to a common reference string or a random oracle, (2) they are not round-optimal, or (3) they are prohibitively expensive. In this work, we construct the \emph{first} blind-signature scheme that does not suffer from any of these limitations. In other words, besides being round-optimal and having a standard-model proof of security, our scheme is very efficient. Specifically, in our scheme, one signature is of size $6.5$ KB and the communication complexity of the signing protocol is roughly $100$ KB. An amortized variant of our scheme has communication complexity of less than $1$ KB.
Last updated:  2014-02-05
A Full Characterization of Completeness for Two-party Randomized Function Evaluation
Daniel Kraschewski, Hemanta K. Maji, Manoj Prabhakaran, Amit Sahai
We settle a long-standing open problem: a full characterization of the completeness of (potentially randomized) finite functions for 2-party computation that is secure against active adversaries. Since the first such complete function was discovered [Kilian, FOCS 1988], the question of which finite 2-party functions are complete has been studied extensively, leading to characterizations in many special cases. In this work, we completely settle this problem. We provide a polynomial-time algorithm to test whether a 2-party finite secure function evaluation (SFE) functionality (possibly randomized) is complete or not. The main tools in our solution include: -- A formal linear-algebraic notion of ``redundancy'' in a general 2-party randomized function. -- A notion of ``statistically testable games'': a kind of interactive proof in the information-theoretic setting where both parties are computationally unbounded but differ in their knowledge of a secret. -- An extension of the (weak) ``converse of Shannon's channel coding theorem,'' where an adversary can adaptively choose the channel based on its view. We show that any function f, if complete, can implement any (randomized) circuit C using only O(|C| + k) calls to f, where k is the statistical security parameter. In particular, for any two-party functionality g, this establishes a universal notion of its quantitative ``cryptographic complexity'' that is independent of the setup and has close connections to circuit complexity.
Last updated:  2014-02-07
Unifying Leakage Models: from Probing Attacks to Noisy Leakage
Alexandre Duc, Stefan Dziembowski, Sebastian Faust
A recent trend in cryptography is to formally show the leakage resilience of cryptographic implementations in a given leakage model. One of the most prominent leakage models -- the so-called bounded leakage model -- assumes that the amount of leakage is a priori bounded. Unfortunately, it has been pointed out that the assumption of bounded leakage is hard to verify in practice. A more realistic assumption is that leakages are sufficiently noisy, following the engineering observation that real-world physical leakages are inherently noisy. While the noisy leakage assumption was first studied in the seminal work of Chari et al. (CRYPTO 99), the recent work of Prouff and Rivain (Eurocrypt 2013) provides the first analysis of a full masking scheme under a physically motivated noise model. In particular, the authors show that a block-cipher implementation that uses an additive masking scheme is secure against noisy leakages. Unfortunately, the security analysis of Prouff and Rivain has three important shortcomings: (1) it requires leak-free gates, (2) it considers a restricted adversarial model (random message attacks), and (3) the security proof has limited application for cryptographic settings. In this work, we provide an alternative security proof in the same noisy model that overcomes these three challenges. We achieve this goal by a new reduction from noisy leakage to the important theoretical model of probing adversaries (Ishai et al., CRYPTO 2003). Our work can be viewed as a further step toward closing the gap between theory and practice in leakage-resilient cryptography: while our security proofs heavily rely on concepts of theoretical cryptography, we solve problems in practically motivated leakage models.
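The probing model that the reduction targets can be illustrated concretely. Below is a minimal sketch (illustrative only, not the paper's construction) of why additive Boolean masking defeats a $d$-probing adversary: for a $(d+1)$-share XOR encoding, the exact joint distribution of any $d$ probed shares is the same whether the masked secret bit is $0$ or $1$.

```python
import itertools
from collections import Counter

def xor_shares(secret: int, rand_bits) -> list:
    # Additive (XOR) masking: d random shares plus one correction share.
    last = secret
    for r in rand_bits:
        last ^= r
    return list(rand_bits) + [last]

def probe_distribution(secret: int, d: int, positions) -> Counter:
    # Exact joint distribution of the probed shares, over all masking randomness.
    dist = Counter()
    for rand_bits in itertools.product((0, 1), repeat=d):
        shares = xor_shares(secret, rand_bits)
        dist[tuple(shares[i] for i in positions)] += 1
    return dist

d = 2  # number of probes tolerated by a (d+1)-share encoding
for positions in itertools.combinations(range(d + 1), d):
    # The probed view is identically distributed for secret 0 and secret 1.
    assert probe_distribution(0, d, positions) == probe_distribution(1, d, positions)
```

The exhaustive check works because any $d$ of the $d+1$ shares are jointly uniform; only all $d+1$ together determine the secret.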
Last updated:  2014-02-04
Implementation and Comparison of Lattice-based Identification Protocols on Smart Cards and Microcontrollers
Ahmad Boorghany, Rasool Jalili
Most lattice-based cryptographic schemes which enjoy a security proof suffer from huge key sizes and heavy computations. This is also true for the simpler case of identification protocols. Recent progress on ideal lattices has significantly improved the efficiency, and made it possible to implement practical lattice-based cryptography on constrained devices like FPGAs and smart phones. However, to the best of our knowledge, no previous attempts were made to implement lattice-based schemes on smart cards. In this paper, we report the results of our implementation of several state-of-the-art and highly-secure lattice-based identification protocols on smart cards and microcontrollers. Our results show that only a few of such protocols fit into the limitations of these devices. We also discuss the implementation challenges and techniques to perform lattice-based cryptography on constrained devices, which may be of independent interest.
Last updated:  2014-04-22
Mixcoin: Anonymity for Bitcoin with accountable mixes
Joseph Bonneau, Arvind Narayanan, Andrew Miller, Jeremy Clark, Joshua A. Kroll, Edward W. Felten
We propose Mixcoin, a protocol to facilitate anonymous payments in Bitcoin and similar cryptocurrencies. We build on the emergent phenomenon of currency mixes, adding an accountability mechanism to expose theft. We demonstrate that incentives of mixes and clients can be aligned to ensure that rational mixes will not steal. Our scheme is efficient and fully compatible with Bitcoin. Against a passive attacker, our scheme provides an anonymity set of all other users mixing coins contemporaneously. This is an interesting new property with no clear analog in better-studied communication mixes. Against active attackers our scheme offers similar anonymity to traditional communication mixes.
Last updated:  2014-04-03
Certified Bitcoins
Giuseppe Ateniese, Antonio Faonio, Bernardo Magri, Breno de Medeiros
Bitcoin is a peer-to-peer (p2p) electronic cash system that uses a distributed timestamp service to record transactions in a public ledger (called the Blockchain). A critical component of Bitcoin's success is the decentralized nature of its architecture, which does not require or even support the establishment of trusted authorities. Yet the absence of certification creates obstacles to its wider acceptance in e-commerce and official uses. We propose a certification system for Bitcoin that offers: a) an opt-in guarantee to send and receive bitcoins only to/from certified users; b) control of the creation of Bitcoin addresses (certified users) by trusted authorities. Our proposal may encourage the adoption of Bitcoin in different scenarios that require an officially recognized currency, such as tax payments, often an integral part of e-commerce transactions.
Last updated:  2016-09-16
Publicly Auditable Secure Multi-Party Computation
Carsten Baum, Ivan Damgård, Claudio Orlandi
In the last few years the efficiency of secure multi-party computation (MPC) has increased by several orders of magnitude. However, this alone might not be enough if we want MPC protocols to be used in practice. A crucial property that is needed in many applications is that everyone can check that a given (secure) computation was performed correctly -- even in the extreme case where all the parties involved in the computation are corrupted, and even if the party who wants to verify the result was not participating. This is especially relevant in the clients-servers setting, where many clients provide input to a secure computation performed by a few servers. An obvious example of this is electronic voting, but also in many types of auctions one may want independent verification of the result. Traditionally, this is achieved by using non-interactive zero-knowledge proofs during the computation. A recent trend in MPC protocols is to have a more expensive preprocessing phase followed by a very efficient online phase, e.g., the recent so-called SPDZ protocol by Damgård et al. Applications such as voting and some auctions are perfect use cases for these protocols, as the parties usually know well in advance when the computation will take place, and using those protocols allows us to use only cheap information-theoretic primitives in the actual computation. Unfortunately, no protocol of the SPDZ type supports an audit phase. In this paper, we show how to achieve efficient MPC with a public audit. We formalize the concept of publicly auditable secure computation and provide an enhanced version of the SPDZ protocol where, even if all the servers are corrupted, anyone with access to the transcript of the protocol can check that the output is indeed correct. Most importantly, we do so without significantly compromising the performance of SPDZ: our online phase has complexity approximately twice that of SPDZ.
Last updated:  2014-06-14
New and Improved Key-Homomorphic Pseudorandom Functions
Abhishek Banerjee, Chris Peikert
A \emph{key-homomorphic} pseudorandom function (PRF) family $\{F_{s} \colon D \to R\}$ allows one to efficiently compute the value $F_{s+t}(x)$ given $F_{s}(x)$ and $F_{t}(x)$. Such functions have many applications, such as distributing the operation of a key-distribution center and updatable symmetric encryption. The only known construction of key-homomorphic PRFs without random oracles, due to Boneh et al. (CRYPTO 2013), is based on the learning with errors (LWE) problem and hence on worst-case lattice problems. However, the security proof relies on a very strong LWE assumption (i.e., very large approximation factors), and hence has quite inefficient parameter sizes and runtimes. In this work we give new constructions of key-homomorphic PRFs that are based on much weaker LWE assumptions, are much more efficient in time and space, and are still highly parallel. More specifically, we improve the LWE approximation factor from exponential in the input length to exponential in its \emph{logarithm} (or less). For input length~$\lambda$ and~$2^{\lambda}$ security against known lattice algorithms, we improve the key size from~$\lambda^{3}$ to~$\lambda$ bits, the public parameters from~$\lambda^{6}$ to~$\lambda^{2}$ bits, and the runtime from~$\lambda^{7}$ to~$\lambda^{\omega+1}$ bit operations (ignoring polylogarithmic factors in~$\lambda$), where $\omega \in [2,2.373]$ is the exponent of matrix multiplication. In addition, we give even more efficient ring-LWE-based constructions whose key sizes, public parameters, and \emph{incremental} runtimes on consecutive inputs are all \emph{quasi-linear}~$\tilde{O}(\lambda)$, which is optimal up to polylogarithmic factors. To our knowledge, these are the first \emph{low-depth} PRFs (whether key-homomorphic or not) enjoying any of these efficiency measures together with nontrivial proofs of~$2^{\lambda}$ security under any conventional assumption.
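The key-homomorphic property itself is easy to demonstrate with the classical random-oracle construction $F_s(x) = s \cdot H(x)$ over a prime-order group (a different, simpler construction than the paper's LWE-based one; the modulus and hash below are illustrative choices):

```python
import hashlib

Q = (1 << 127) - 1  # a Mersenne prime; the group is Z_Q under addition

def H(x: bytes) -> int:
    # Hash to a group element (models the random oracle).
    return int.from_bytes(hashlib.sha256(x).digest(), "big") % Q

def F(s: int, x: bytes) -> int:
    # F_s(x) = s * H(x) in Z_Q.
    return (s * H(x)) % Q

s, t, x = 1234567, 7654321, b"some input"
# Key homomorphism: F_{s+t}(x) = F_s(x) + F_t(x).
assert F((s + t) % Q, x) == (F(s, x) + F(t, x)) % Q
```

Distributing a key-distribution center then amounts to giving each server a key share $s_i$ and summing their outputs.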
Last updated:  2014-02-04
Anonymous Authentication with Shared Secrets
Joel Alwen, Martin Hirt, Ueli Maurer, Arpita Patra, Pavel Raykov
Anonymity and authenticity are both important yet often conflicting security goals in a wide range of applications. On the one hand, for many applications (say, for access control) it is crucial to be able to verify the identity of a given legitimate party (a.k.a. entity authentication). Alternatively, an application might require that no one but a party can communicate on its behalf (a.k.a. message authentication). Yet, on the other hand, privacy concerns also dictate that the anonymity of a legitimate party should be preserved; that is, no information concerning the identity of parties should be leaked to an outside entity eavesdropping on the communication. This conflict becomes even more acute when considering anonymity with respect to an active entity that may attempt to impersonate other parties in the system. In this work we resolve this conflict in two steps. First we formalize what it means for a system to provide both authenticity and anonymity even in the presence of an active man-in-the-middle adversary for various specific applications such as message and entity authentication, using the constructive cryptography framework of~\cite{Mau11}. Our approach inherits the composability statement of constructive cryptography and can therefore be directly used in any higher-level context. Next we demonstrate several simple protocols for realizing these systems, at times relying on a new type of (probabilistic) Message Authentication Code (MAC) called \emph{key indistinguishable} (KI) MACs. Similar to the key-hiding encryption schemes of~\cite{BellareBDP01}, they guarantee that tags leak no discernible information about the keys used to generate them.
Last updated:  2014-02-04
Efficient Privacy-Preserving Big Data Processing through Proxy-Assisted ORAM
Nikolaos P. Karvelas, Andreas Peter, Stefan Katzenbeisser, Sebastian Biedermann
We present a novel mechanism that allows a client to securely outsource his private data to the cloud while delegating to a third party the right to run certain algorithms on his data. The mechanism is privacy-preserving, meaning that the third party only learns the result of his algorithm on the client's data, while the access pattern on the client's data is hidden from the cloud. To achieve this we combine recent advances in the fields of Oblivious RAM and Secure Two-Party Computation: we develop an Oblivious RAM that is run between the cloud and a proxy server, and that does not need the data to be decrypted at any point. The evaluation on the data is done by employing Yao's garbled-circuit solution for Secure Two-Party Computation.
Last updated:  2014-11-22
Implementing Pairing-Based Cryptosystems in USB Tokens
Zhaohui Cheng
In the last decade, pairing-based cryptography has been one of the most intensively studied subjects in cryptography. Various optimization techniques have been developed to speed up the pairing computation. However, implementing a pairing-based cryptosystem on resource-constrained devices has received less attention. Moreover, due to progress on solving the discrete logarithm problem (DLP), earlier such implementations are no longer safe to use. In this paper, we report an implementation of a couple of pairing-based cryptosystems at a high security level on a 32-bit microcontroller in a USB token. It shows that a USB token supporting secure pairing-based cryptosystems is viable. The presented curve parameters may also be used by other pairing-related cryptosystems to achieve stronger security than those given in the existing literature.
Last updated:  2014-07-17
Lattice Cryptography for the Internet
Chris Peikert
In recent years, \emph{lattice-based} cryptography has been recognized for its many attractive properties, such as strong provable security guarantees and apparent resistance to quantum attacks, flexibility for realizing powerful tools like fully homomorphic encryption, and high asymptotic efficiency. Indeed, several works have demonstrated that for basic tasks like encryption and authentication, lattice-based primitives can have performance competitive with (or even surpassing) those based on classical mechanisms like RSA or Diffie-Hellman. However, there still has been relatively little work on developing lattice cryptography for deployment in \emph{real-world} cryptosystems and protocols. In this work we take a step toward that goal, by giving efficient and practical lattice-based protocols for key transport, encryption, and authenticated key exchange that are suitable as ``drop-in'' components for proposed Internet standards and other open protocols. The security of all our proposals is provably based (sometimes in the random-oracle model) on the well-studied ``learning with errors over rings'' problem, and hence on the conjectured worst-case hardness of problems on ideal lattices (against quantum algorithms). One of our main technical innovations (which may be of independent interest) is a simple, low-bandwidth \emph{reconciliation} technique that allows two parties who ``approximately agree'' on a secret value to reach \emph{exact} agreement, a setting common to essentially all lattice-based encryption schemes. Our technique reduces the ciphertext length of prior (already compact) encryption schemes nearly twofold, at essentially no cost.
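The reconciliation idea can be sketched in a simplified form (this is not Peikert's exact dbl/rec mechanism, and the modulus is an illustrative toy value): the hint names which of two interleaved quadrant classes the sender's value lies in, and the receiver rounds its nearby value to the nearest member of that class.

```python
Q = 256  # toy modulus; real schemes use a large q

def quadrant(v: int) -> int:
    return (4 * v) // Q

def key_bit(v: int) -> int:
    # 1 on the middle quadrants [Q/4, 3Q/4), 0 elsewhere.
    return 1 if quadrant(v) in (1, 2) else 0

def cross_bit(v: int) -> int:
    # The reconciliation hint: parity of the quadrant.
    return quadrant(v) % 2

def reconcile(w: int, hint: int) -> int:
    # Round w to the nearest v' (mod Q) whose cross_bit matches the hint;
    # brute force for clarity.
    def circ_dist(a, b):
        d = abs(a - b) % Q
        return min(d, Q - d)
    v_best = min((v for v in range(Q) if cross_bit(v) == hint),
                 key=lambda v: circ_dist(v, w))
    return key_bit(v_best)

# Whenever the two values differ by less than Q/8, both sides get the same bit.
for v in range(Q):
    for e in (-31, -17, 0, 17, 31):
        assert reconcile((v + e) % Q, cross_bit(v)) == key_bit(v)
```

The matching quadrants for a given hint are antipodal, so any value within $Q/8$ of the sender's rounds back into the correct one.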
Last updated:  2014-02-04
One-Pass Authenticated Key Establishment Protocol on Bilinear Pairings for Wireless Sensor Networks
Manoj Ranjan Mishra, Jayaprakash Kar, Banshidhar Majhi
The article proposes a one-pass authenticated key establishment protocol in the random oracle model for Wireless Sensor Networks. The security of the protocol relies on the Computational Diffie-Hellman Problem on bilinear pairings. In a one-pass key establishment protocol, the initiator computes a session key and a related key token, which is sent to the intended receiver; the token is computed using the receiver's public key and the sender's secret key. From the received key token the receiver computes the same session key, using the sender's public key and the receiver's secret key. Because of its low communication overhead, the scheme is better suited for Wireless Sensor Networks (WSNs) than traditional key establishment protocols for establishing a session key between two adjacent nodes.
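A pairing-free sketch of the one-pass flow (plain Diffie-Hellman over $\mathbb{Z}_p^*$ standing in for the paper's bilinear-pairing construction; the prime and generator are illustrative toy values): the sender sends a single key token, and both sides derive the same session key from their static keys plus the sender's ephemeral value.

```python
import hashlib
import secrets

P = (1 << 127) - 1  # toy prime modulus, far too small for real use
G = 3               # illustrative generator

def keygen():
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

def sender_one_pass(a: int, B: int):
    # One pass: pick an ephemeral r, send the token T = g^r,
    # and derive K = B^(a+r) from the receiver's public key B.
    r = secrets.randbelow(P - 2) + 1
    T = pow(G, r, P)
    K = pow(B, a + r, P)
    return T, hashlib.sha256(K.to_bytes(16, "big")).digest()

def receiver_one_pass(b: int, A: int, T: int) -> bytes:
    # Same key from the sender's public key A and the token: (A*T)^b = B^(a+r).
    K = pow((A * T) % P, b, P)
    return hashlib.sha256(K.to_bytes(16, "big")).digest()

a, A = keygen()
b, B = keygen()
T, k_sender = sender_one_pass(a, B)
assert receiver_one_pass(b, A, T) == k_sender
```

Only one message flows, which is the communication-overhead property the abstract highlights.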
Last updated:  2014-01-31
Some security bounds for the DGHV scheme
Franca Marinelli, Riccardo Aragona, Chiara Marcolla, Massimiliano Sala
The correctness of decrypting a ciphertext after some operations in the DGHV scheme depends heavily on the size of the secret key. In this paper we compute two bounds on the size of the secret key for the DGHV scheme to decrypt correctly a ciphertext after a fixed number of additions and a fixed number of multiplications. Moreover, we improve the original bound on the size of the secret key for a general circuit.
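The role of the key size is visible in a toy version of the DGHV encryption (illustrative parameters only): a ciphertext is $c = pq + 2r + m$, decryption computes $(c \bmod p) \bmod 2$, and correctness holds exactly while the accumulated noise stays below $p/2$, which is what ties the key size to the number of supported operations.

```python
import random

random.seed(42)  # deterministic demo

P_BITS = 24  # size of the secret key p; a larger p tolerates more noise

def keygen() -> int:
    # Secret key: a random odd integer of P_BITS bits.
    return random.randrange(1 << (P_BITS - 1), 1 << P_BITS) | 1

def enc(p: int, m: int, noise_bits: int = 4, q_bits: int = 40) -> int:
    q = random.randrange(1 << q_bits)
    r = random.randrange(1 << noise_bits)
    return p * q + 2 * r + m  # noise term 2r + m must stay below p/2

def dec(p: int, c: int) -> int:
    return (c % p) % 2

p = keygen()
assert dec(p, enc(p, 0) + enc(p, 1)) == 1  # addition: noise grows additively
assert dec(p, enc(p, 1) * enc(p, 1)) == 1  # multiplication: noise is squared
# After enough multiplications the noise exceeds p/2 and decryption fails;
# the paper's bounds relate the key size to the supported operation counts.
```

With these parameters a fresh noise term is at most $31$, so one multiplication (noise at most $961$) still decrypts, while repeated squaring soon overflows a 24-bit $p$.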
Last updated:  2016-03-11
Efficient and Strongly Secure Dynamic Domain-Specific Pseudonymous Signatures for ID Documents
Julien Bringer, Hervé Chabanne, Roch Lescuyer, Alain Patey
The notion of domain-specific pseudonymous signatures (DSPS) has recently been introduced for private authentication of ID documents, like passports, that embed a chip with computational abilities. Thanks to this privacy-friendly primitive, the document authenticates to a service provider through a reader and the resulting signatures are anonymous, linkable inside the service and unlinkable across services. A subsequent work proposes to enhance the security and privacy of DSPS through group signature techniques. In this paper, we improve on these proposals in three ways. First, we spot several imprecisions in previous formalizations. We consequently provide a clean security model for \emph{dynamic domain-specific pseudonymous signatures}, where we correctly address the dynamic and adaptive case. Second, we note that using group signatures is somewhat of an overkill for constructing DSPS, and we provide an optimized construction that achieves the same strong level of security while being more efficient. Finally, we study the implementation of our protocol in a chip and show that our solution is well-suited for these limited environments. In particular, we propose a secure protocol for delegating the most demanding operations from the chip to the reader.
Last updated:  2014-01-30
A Subexponential Construction of Graph Coloring for Multiparty Computation
Hassan Jameel Asghar, Yvo Desmedt, Josef Pieprzyk, Ron Steinfeld
We show the first deterministic construction of an unconditionally secure multiparty computation (MPC) protocol in the passive adversarial model over black-box non-Abelian groups which is both optimal and has subexponential complexity of construction. More specifically, following the result of Desmedt et al. (2012) that the problem of MPC over non-Abelian groups can be reduced to finding a $t$-reliable $n$-coloring of planar graphs, we show the construction of such a graph which allows a path from the input nodes to the output nodes when any $t$-party subset is in the possession of the adversary. Unlike the (deterministic) constructions from Desmedt et al. (2012) our construction is subexponential and optimal at the same time, i.e., it is secure for any $t < \frac{n}{2}$.
Last updated:  2020-03-04
Fine Tuning the Function Field Sieve Algorithm for the Medium Prime Case
Palash Sarkar, Shashank Singh
This work builds on the variant of the function field sieve (FFS) algorithm for the medium prime case introduced by Joux and Lercier in 2006. We make several contributions. The first contribution uses a divisibility and smoothness technique and goes on to develop a sieving method based on the technique. This leads to significant practical efficiency improvements in the descent phase and also provides an improvement to Joux's pinpointing technique. The second contribution is a detailed analysis of the degrees of freedom and the use of a walk technique in the descent phase of the algorithm. Such analysis shows that it is possible to compute discrete logarithms over certain fields which are excluded by the earlier analyses performed by Joux and Lercier (2006) and Joux (2013). In concrete terms, we present computations of discrete logs for fields with 16 and 19-bit prime characteristic. We also provide concrete analysis of the effectiveness of the FFS algorithm for certain fields of characteristic ranging from 16-bit to 32-bit primes. The final contribution is to perform a complete asymptotic analysis of the FFS algorithm for fields $\mathbb{F}_Q$ with $p=L_Q(1/3,c)$. This closes gaps and corrects errors in the analysis earlier performed by Joux-Lercier and Joux and also provides new insights into the asymptotic behaviour of the algorithm.
Last updated:  2014-01-28
A Polynomial Time Attack against Algebraic Geometry Code Based Public Key Cryptosystems
Alain Couvreur, Irene Márquez-Corbella, Ruud Pellikaan
We give a polynomial time attack on the McEliece public key cryptosystem based on algebraic geometry codes. Roughly speaking, this attack runs in $O(n^4)$ operations in $\mathbb F_q$, where $n$ denotes the code length. Compared to previous attacks, it allows recovering a decoding algorithm for the public key even for codes from high-genus curves.
Last updated:  2014-01-28
Cryptanalysis on “Secure untraceable off-line electronic cash system”
Yalin Chen, Jue-Sam Chou
Recently, Baseri et al. proposed a secure untraceable off-line electronic cash system. They claimed that their scheme achieves the security requirements of an e-cash system such as untraceability, anonymity, unlinkability, double-spending checking, unforgeability, and date-attachability, and that it prevents coin forgery. They further proved the unforgeability feature using the hardness of the discrete logarithm problem. However, after cryptanalysis, we found that the scheme does not attain untraceability. We therefore modify it to satisfy this desired requirement, which is very important in an e-cash system.
Last updated:  2014-03-13
A Comparison of the Homomorphic Encryption Schemes FV and YASHE
Tancrède Lepoint, Michael Naehrig
We conduct a theoretical and practical comparison of two Ring-LWE-based, scale-invariant, leveled homomorphic encryption schemes – Fan and Vercauteren’s adaptation of BGV and the YASHE scheme proposed by Bos, Lauter, Loftus and Naehrig. In particular, we explain how to choose parameters to ensure correctness and security against lattice attacks. Our parameter selection improves the approach of van de Pol and Smart to choose parameters for schemes based on the Ring-LWE problem by using the BKZ-2.0 simulation algorithm. We implemented both encryption schemes in C++, using the arithmetic library FLINT, and compared them in practice to assess their respective strengths and weaknesses. In particular, we performed a homomorphic evaluation of the lightweight block cipher SIMON. Combining block ciphers with homomorphic encryption makes it possible to mitigate the gargantuan ciphertext expansion in cloud applications.
Last updated:  2014-01-28
Bounded-Collusion Identity-Based Encryption from Semantically-Secure Public-Key Encryption: Generic Constructions with Short Ciphertexts
Stefano Tessaro, David A. Wilson
Identity-based encryption (IBE) is a special case of public-key encryption where user identities replace public keys. Every user is given a corresponding secret key for decryption, and encryptions for his or her identity must remain confidential even to attackers who learn the secret keys associated with other identities. Several IBE constructions are known to date, but their security relies on specific assumptions, such as quadratic residuosity, as well as different pairing-based and lattice-based assumptions. To circumvent the lack of generic constructions, Dodis et al. (EUROCRYPT ’02) introduced the notion of bounded-collusion IBE (BC-IBE), where attackers only learn secret keys of an a-priori bounded number t of identities. They provided a generic BC-IBE construction from any semantically-secure encryption scheme which, however, suffers from an $\omega(t)$ blow-up in ciphertext size. Goldwasser et al. (TCC 2012) recently presented a generic construction with no ciphertext-length blow-up. Their construction requires an underlying public-key scheme with a key homomorphism, as well as a hash-proof-style security definition that is strictly stronger than semantic security. This latter requirement in particular reduces the applicability of their construction to existing schemes. In this paper, we present the first generic constructions of BC-IBE from semantically-secure encryption schemes with no ciphertext-length blow-up. Our constructions require different degrees of key-homomorphism and malleability properties that are usually easy to verify. We provide concrete instantiations based on the DDH, QR, NTRU, and LWE assumptions. For all of these assumptions, our schemes present the smallest BC-IBE ciphertext size known to date. Our NTRU-based construction is particularly interesting, due to the lack of NTRU-based IBE constructions as well as the fact that it supports fully-homomorphic evaluation. Our results also yield new constructions of bounded CCA-secure cryptosystems.
Last updated:  2014-08-07
Verifiable Computation in Multiparty Protocols with Honest Majority
Peeter Laud, Alisa Pankova
We present a generic method for turning passively secure protocols into protocols secure against covert attacks. The method adds a post-execution verification phase to the protocol that allows a misbehaving party to escape detection only with negligible probability. The execution phase, after which the computed protocol result is already available to the parties, has only negligible overhead added by our method. The checks, based on linear probabilistically checkable proofs, are done in zero-knowledge, thereby preserving the privacy guarantees of the original protocol. Our method is inspired by recent results in verifiable computation, adapting them to the multiparty setting and significantly lowering their computational costs for the provers.
Last updated:  2015-01-01
Cuckoo Cycle: a memory bound graph-theoretic proof-of-work
John Tromp
We introduce the first graph-theoretic proof-of-work system, based on finding small cycles or other structures in large random graphs. Such problems are trivially verifiable and arbitrarily scalable, presumably requiring memory linear in graph size to solve efficiently. Our cycle finding algorithm uses one bit per edge, and up to one bit per node. Runtime is linear in graph size and dominated by random access latency, ideal properties for a memory-bound proof-of-work. We exhibit two alternative algorithms that allow for a memory-time trade-off (TMTO)---decreased memory usage, by a factor $k$, coupled with increased runtime, by a factor $\Omega(k)$. The constant implied in $\Omega()$ gives a notion of memory-hardness, which is shown to be dependent on cycle length, guiding the latter's choice. Our algorithms are shown to parallelize reasonably well.
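The trivial-verifiability point can be made concrete with a toy stand-in (a 4-cycle in a small bipartite graph, with SHA-256 replacing Cuckoo Cycle's siphash; all parameters are illustrative): the prover searches many hashed edges for a cycle, while the verifier recomputes only four hashes.

```python
import hashlib

def edge(key: bytes, nonce: int, n_side: int):
    # Hash a nonce to one bipartite edge (stand-in for Cuckoo Cycle's siphash).
    h = hashlib.sha256(key + nonce.to_bytes(4, "big")).digest()
    u = int.from_bytes(h[:4], "big") % n_side
    v = n_side + int.from_bytes(h[4:8], "big") % n_side
    return u, v

def find_4cycle(key: bytes, n_side: int, max_nonce: int):
    # Prover: look for nonces whose edges form a 4-cycle u1-v1-u2-v2-u1.
    by_u = {}
    for n in range(max_nonce):
        u, v = edge(key, n, n_side)
        by_u.setdefault(u, {}).setdefault(v, n)  # first nonce per distinct edge
    us = sorted(by_u)
    for i in range(len(us)):
        for j in range(i + 1, len(us)):
            shared = sorted(set(by_u[us[i]]) & set(by_u[us[j]]))
            if len(shared) >= 2:
                v1, v2 = shared[0], shared[1]
                return [by_u[us[i]][v1], by_u[us[j]][v1],
                        by_u[us[j]][v2], by_u[us[i]][v2]]
    return None

def verify_4cycle(key: bytes, nonces, n_side: int) -> bool:
    # Verifier: four hashes plus a structural check -- trivially cheap.
    if nonces is None or len(set(nonces)) != 4:
        return False
    es = [edge(key, n, n_side) for n in nonces]
    us, vs = {u for u, _ in es}, {v for _, v in es}
    return len(us) == 2 and len(vs) == 2 and set(es) == {(u, v) for u in us for v in vs}

sol = find_4cycle(b"block header", 8, 500)
assert sol is not None and verify_4cycle(b"block header", sol, 8)
```

The asymmetry is the point: finding the cycle requires storing many edges, while checking a claimed solution needs almost no memory at all.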
Last updated:  2014-01-27
Cryptanalysis of FIDES
Itai Dinur, Jérémy Jean
FIDES is a lightweight authenticated cipher, presented at CHES 2013. The cipher has two versions, providing either 80-bit or 96-bit security. In this paper, we describe internal state-recovery attacks on both versions of FIDES, and show that once we recover the internal state, we can use it to immediately forge any message. Our attacks are based on a guess-and-determine algorithm, exploiting the slow diffusion of the internal linear transformation of FIDES. Our most basic attacks have time complexities of $2^{75}$ and $2^{90}$ for FIDES-80 and FIDES-96, respectively, use a very small amount of memory, and their most distinctive feature is their very low data complexity: the attacks require at most 24 bytes of an arbitrary plaintext and its corresponding ciphertext in order to break the cipher with probability 1. In addition to the basic attacks, we describe optimized attacks which exploit additional data in order to reduce the time complexities to $2^{73}$ and $2^{88}$ for FIDES-80 and FIDES-96, respectively.
Last updated:  2014-11-12
Computing Discrete Logarithms in F_{3^{6*137}} and F_{3^{6*163}} using Magma
Gora Adj, Alfred Menezes, Thomaz Oliveira, Francisco Rodríguez-Henríquez
We show that a Magma implementation of Joux's L[1/4+o(1)] algorithm can be used to compute discrete logarithms in the 1303-bit finite field F_{3^{6*137}} and the 1551-bit finite field F_{3^{6*163}} with very modest computational resources. Our F_{3^{6*137}} implementation was the first to illustrate the effectiveness of Joux's algorithm for computing discrete logarithms in small-characteristic finite fields that are not Kummer or twisted-Kummer extensions.
Last updated:  2014-12-04
Low Probability Differentials and the Cryptanalysis of Full-Round CLEFIA-128
Sareh Emami, San Ling, Ivica Nikolic, Josef Pieprzyk, Huaxiong Wang
So far, low-probability differentials for the key schedule of block ciphers have been used as a straightforward proof of security against related-key differential attacks. To achieve this resistance, it is believed that for a cipher with a $k$-bit key it suffices that the upper bound on the probability be $2^{-k}$. Surprisingly, we show that this reasonable assumption is incorrect, and the probability should be (much) lower than $2^{-k}$. Our counterexample is a related-key differential analysis of the block cipher CLEFIA-128. We show that although the key schedule of CLEFIA-128 prevents differentials with a probability higher than $2^{-128}$, the linear part of the key schedule that produces the round keys, and the Feistel structure of the cipher, allow exploiting particularly chosen differentials with a probability as low as $2^{-128}$. CLEFIA-128 has $2^{14}$ such differentials, which translate to $2^{14}$ pairs of weak keys. The probability of each differential is too low for attacks, but the weak keys have a special structure which, with a divide-and-conquer approach, allows gaining an advantage of $2^7$ over generic attacks. We exploit this advantage to give a membership test for the weak-key class, provide analysis in the hashing mode, and show the importance for the secret-key mode. The proposed analysis has been tested with computer experiments on small-scale variants of CLEFIA-128. Our results do not threaten the practical use of CLEFIA.
Last updated:  2014-01-26
Security Enhanced Anonymous Multi-Server Authenticated Key Agreement Scheme using Smart Card and Biometrics
Younsung Choi
Chuang and Chen proposed an anonymous multi-server authenticated key agreement scheme based on trust computing, using smart cards, passwords, and biometrics. They claim that their scheme not only supports multiple servers but also achieves the stated security requirements. However, the scheme is vulnerable to masquerade attacks, smart card attacks, and DoS attacks, and it fails to provide perfect forward secrecy. To solve these problems, this paper proposes a security-enhanced anonymous multi-server authenticated key agreement scheme using smart cards and biometrics.
Last updated:  2014-01-26
The Fourier Entropy-Influence conjecture holds for a log-density 1 class of cryptographic Boolean functions
Sugata Gangopadhyay, Pantelimon Stanica
We consider the Fourier Entropy-Influence (FEI) conjecture in the context of cryptographic Boolean functions. We show that the FEI conjecture is true for the functions satisfying the strict avalanche criterion, which forms a subset of asymptotic log--density~$1$ in the set of all Boolean functions. Further, we prove that the FEI conjecture is satisfied for plateaued Boolean functions, monomial algebraic normal form (with the best involved constant), direct sums, as well as concatenations of Boolean functions. As a simple consequence of these general results we find that each affine equivalence class of quadratic Boolean functions contains at least one function satisfying the FEI conjecture. Further, we propose some ``leveled'' FEI conjectures.
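The quantities in the conjecture are simple to compute for small functions. For the 4-variable inner-product (bent) function $f(x)=x_1x_2 \oplus x_3x_4$, every squared Fourier coefficient equals $1/16$, giving Fourier entropy $4$ and total influence $2$; a direct computation:

```python
from itertools import product
from math import log2

n = 4
def f(x):
    # Inner-product function x1*x2 XOR x3*x4, a bent function on 4 variables.
    return (x[0] & x[1]) ^ (x[2] & x[3])

points = list(product((0, 1), repeat=n))

def fourier_coeff(S):
    # \hat{f}(S) = E_x[(-1)^(f(x) + sum_{i in S} x_i)]
    total = sum((-1) ** (f(x) + sum(x[i] for i in S)) for x in points)
    return total / len(points)

entropy = 0.0
influence = 0.0
for mask in product((0, 1), repeat=n):
    S = [i for i in range(n) if mask[i]]
    w = fourier_coeff(S) ** 2  # Fourier weight on the set S
    if w > 0:
        entropy -= w * log2(w)   # Fourier entropy H(\hat{f}^2)
        influence += w * len(S)  # total influence = sum_S |S| \hat{f}(S)^2

assert abs(entropy - 4.0) < 1e-9    # flat spectrum: 16 weights of 1/16 each
assert abs(influence - 2.0) < 1e-9  # so the FEI ratio here is 2
```

For bent functions the spectrum is perfectly flat, which is why the entropy attains its maximum $n$ while the influence is only $n/2$.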
Last updated:  2014-02-21
Masking and Leakage-Resilient Primitives: One, the Other(s) or Both?
Sonia Belaïd, Vincent Grosso, François-Xavier Standaert
Securing cryptographic implementations against side-channel attacks is one of the most important challenges in modern cryptography. Many countermeasures have been introduced for this purpose, and analyzed in specialized security models. Formal solutions have also been proposed to extend the guarantees of provable security to physically observable devices. Masking and leakage-resilient cryptography are probably the most investigated and best understood representatives of these two approaches. Unfortunately, claims about whether one, the other or their combination provides better security at lower cost have remained vague so far. In this paper, we provide the first comprehensive treatment of this important problem. For this purpose, we analyze whether cryptographic implementations can be security-bounded, in the sense that the time complexity of the best side-channel attack is lower-bounded, independent of the number of measurements performed. Doing so, we first put forward a significant difference between stateful primitives such as leakage-resilient PRGs (that easily ensure bounded security), and stateless ones such as leakage-resilient PRFs (that hardly do). We then show that in practice, leakage-resilience alone provides the best security vs. performance tradeoff when bounded security is achievable, while masking alone is the solution of choice otherwise. That is, we highlight that one (x)or the other approach should be privileged, which contradicts the usual intuition that physical security is best obtained by combining countermeasures. Besides, our experimental results underline that despite being defined in exactly the same way, the bounded leakage requirement in leakage-resilient PRGs and PRFs implies significantly different challenges for hardware designers. Namely, such a bounded leakage is much harder to guarantee for stateless primitives (like PRFs) than for stateful ones (like PRGs).
As a result, constructions of leakage-resilient PRGs and PRFs proven under the same bounded leakage assumption, and instantiated with the same AES implementation, may lead to different practical security levels.
Last updated:  2014-02-18
DAA-related APIs in TPM2.0 Revisited
Li Xi
In TPM2.0, a single signature primitive is proposed to support various signature schemes including Direct Anonymous Attestation (DAA), U-Prove and Schnorr signature. This signature primitive is implemented by several APIs which can be utilized as a static Diffie-Hellman (SDH) oracle. In this paper, we measure the practical impact of the SDH oracle in TPM2.0 and show that the security strength of these signature schemes can be weakened by 14 bits. We propose a novel property of DAA called forward anonymity and show how to utilize these DAA-related APIs to break forward anonymity. We then propose new APIs which not only remove the static Diffie-Hellman oracle but also support forward anonymity, thus significantly improving the security of DAA and the other signature schemes supported by TPM2.0. We prove the security of our new APIs under the discrete logarithm assumption in the random oracle model. We prove that DAA satisfies forward anonymity using the new APIs under the Decisional Diffie-Hellman assumption. Our new APIs are almost as efficient as the original APIs in the TPM2.0 specification and can support LRSW-DAA and SDH-DAA together with U-Prove, as the original APIs do.
Last updated:  2014-11-18
An Equivalence-Preserving Transformation of Shift Registers
Elena Dubrova
The Fibonacci-to-Galois transformation is useful for reducing the propagation delay of feedback shift register-based stream ciphers and hash functions. In this paper, we extend it to handle the Galois-to-Galois case as well as feedforward connections. This makes it possible to transform the Trivium stream cipher and increase its keystream data rate by 27\% without any penalty in area. The presented transformation might open new possibilities for cryptanalysis of Trivium, since it induces a class of stream ciphers which generate the same set of keystreams as Trivium, but have a different structure.
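For background, the two shift-register forms that the transformation relates can be sketched on a toy example: a 4-bit register with primitive polynomial $x^4+x+1$. The tap positions and feedback mask below are specific to this illustrative polynomial, not taken from the paper.

```python
# Fibonacci vs. Galois configuration of a 4-bit LFSR for x^4 + x + 1.
# Both share the characteristic polynomial, hence the maximal period 15.
def fibonacci_step(s):
    fb = ((s >> 3) ^ s) & 1      # XOR of taps at x^4 and x^1
    return (s >> 1) | (fb << 3)  # shift right, insert feedback at the top

def galois_step(s):
    out = s & 1
    s >>= 1
    if out:
        s ^= 0b1001              # feedback mask for x^4 + x + 1
    return s

def period(step, seed=1):
    s, n = step(seed), 1
    while s != seed:             # count steps until the state recurs
        s, n = step(s), n + 1
    return n
```

The Galois form computes its feedback locally at each tapped stage, which is exactly the shallow-feedback property the delay reduction exploits.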
Last updated:  2014-07-07
Some Theoretical Conditions for Menezes--Qu--Vanstone Key Agreement to Provide Implicit Key Authentication
Daniel R. L. Brown
Menezes--Qu--Vanstone key agreement (MQV) is intended to provide implicit key authentication (IKA) and several other security objectives. MQV is approved and specified in five standards. This report focuses on the IKA of two-pass MQV, without key confirmation. Arguably, implicit key authentication is the most essential security objective in authenticated key agreement. The report examines various necessary or sufficient formal conditions under which MQV may provide IKA. Incidentally, this report defines, relies on, and inter-relates various conditions on the key derivation function and Diffie--Hellman groups. While it should be expected that most such definitions and results are already well-known, a reader interested in these topics may find this report useful as a kind of review, even if they have no interest in MQV whatsoever.
Last updated:  2014-01-21
Data Security in Cloud Architecture Based on Diffie Hellman and Elliptical Curve Cryptography
Neha Tirthani, Ganesan R
Technological advancements in cloud computing, due to increased connectivity and exponentially proliferating data, have resulted in migration towards cloud architecture. Cloud computing is a technology where users can access high-end services in the form of software that resides on different servers and access data from all over the world. Cloud storage enables users to access and store their data anywhere. It also ensures optimal usage of the available resources. There is no need for the user to maintain the overhead of hardware and software costs. With a promising technology like this, user privacy is nonetheless at stake, raising new security threats to the certitude of data in the cloud. The user relies entirely on the cloud providers for data protection, making them solely responsible for safeguarding it. Security threats such as maintenance of data integrity, data hiding and data safety dominate our concerns when the issue of cloud security comes up. The voluminous data and time-consuming encryption calculations involved in applying any encryption method have proved to be a hindrance in this field. In this research paper, we contemplate a design for cloud architecture which ensures secure movement of data at the client and server ends. We use the non-breakability of elliptic curve cryptography for data encryption and the Diffie-Hellman key exchange mechanism for connection establishment. The proposed encryption mechanism uses a combination of linear and elliptic curve cryptography methods. It has three security checkpoints: authentication, key generation and encryption of data.
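The connection-establishment step referred to above is textbook finite-field Diffie-Hellman, which can be sketched as follows. The modulus here (the Mersenne prime $2^{127}-1$) and generator are toy choices for illustration only and offer no real security.

```python
# Toy Diffie-Hellman key exchange over GF(p); both sides derive the
# same shared secret from only public values plus their own secret.
import random

p = (1 << 127) - 1                 # a Mersenne prime; illustration only
g = 3

a = random.randrange(2, p - 1)     # Alice's secret exponent
b = random.randrange(2, p - 1)     # Bob's secret exponent
A = pow(g, a, p)                   # Alice sends A to Bob
B = pow(g, b, p)                   # Bob sends B to Alice
shared_alice = pow(B, a, p)        # (g^b)^a mod p
shared_bob = pow(A, b, p)          # (g^a)^b mod p
```

In the paper's design the resulting shared secret would then feed the encryption layer; an eavesdropper seeing only A and B faces the discrete logarithm problem.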
Last updated:  2014-01-21
When a Boolean Function can be Expressed as the Sum of two Bent Functions
Longjiang Qu, Shaojing Fu, Qingping Dai, Chao Li
In this paper we study the problem of when a Boolean function can be represented as the sum of two bent functions. This problem was recently posed by N. Tokareva in studying the number of bent functions. Firstly, many functions, such as quadratic Boolean functions, Maiorana-McFarland bent functions, partial spread functions, etc., are shown to be representable as the sum of two bent functions. Methods to construct such functions from low-dimension ones are also introduced. N. Tokareva's main hypothesis is proved for $n\leq 6$. Moreover, two hypotheses which are equivalent to N. Tokareva's main hypothesis are presented. These hypotheses may lead to new ideas or methods to solve this problem. Finally, necessary and sufficient conditions are given for when the sum of several bent functions is again a bent function.
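Bentness can be checked directly from the Walsh spectrum: a function on $n$ variables is bent iff every Walsh coefficient has absolute value $2^{n/2}$. A small sketch for the quadratic function $f(x)=x_0x_1\oplus x_2x_3$ (of the Maiorana-McFarland type mentioned above):

```python
# Walsh transform W(a) = sum over x of (-1)^(f(x) xor <a,x>);
# f is bent on n variables iff |W(a)| = 2^(n/2) for all a.
def walsh(f, n):
    return [sum((-1) ** (f(x) ^ bin(a & x).count("1") % 2)
                for x in range(1 << n))
            for a in range(1 << n)]

# f(x) = x0*x1 xor x2*x3, a classic bent function on 4 variables
f = lambda x: ((x >> 0) & (x >> 1) ^ (x >> 2) & (x >> 3)) & 1
spectrum = walsh(f, 4)
```

For this $f$, every entry of the spectrum is $\pm 4 = \pm 2^{4/2}$, confirming bentness; a non-bent function would show at least one larger coefficient.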
Last updated:  2014-01-25
Down the Rabbit Hole: Revisiting the Shrinking Method
Vivien Dubois
This paper is about methodology to detect and demonstrate impossible differentials in a block cipher. We were inspired by the shrinking technique proposed by Biham et al. in 1999, which recovered properties of scalable block cipher structures from numerical search on scaled-down variants. An attempt to bind all concepts and techniques of impossible differentials together reveals a view of the search for impossible differentials that can benefit from the computational power of a computer. We demonstrate, on generalized Feistel networks with internal permutations, an additional clustering layer on top of shrinking which lets us merge numerical data into relevant human-readable information to be used in an actual proof. After that, we show how initial analysis of scaled-down TEA-like schemes leaks the relevant part of the design and the length and ends of the impossible differentials. We use that initial profiling to numerically discover four 15-round impossible differentials (beating the current best of 13 rounds) and thousands of shorter ones.
Last updated:  2014-01-20
Crypto-analyses on “user efficient recoverable off-line e-cashs scheme with fast anonymity revoking”
Yalin Chen, Jue-Sam Chou
Recently, Fan et al. proposed a user-efficient recoverable off-line e-cash scheme with fast anonymity revoking. They claimed that their scheme could achieve the security requirements of an e-cash system such as anonymity, unlinkability, double-spending checking, anonymity control, and rapid anonymity revoking on double spending. They further formally prove the unlinkability and un-forgeability security features. However, after cryptanalysis, we found that the scheme cannot attain the two claimed security features, anonymity and unlinkability. We therefore modify it to satisfy the two desired requirements, which are very important in an e-cash system.
Last updated:  2014-01-20
Human Assisted Randomness Generation Using Video Games
Mohsen Alimomeni, Reihaneh Safavi-Naini
Random number generators have direct applications in information security, online gaming, gambling, and computer science in general. True random number generators need an entropy source which is a physical source with inherent uncertainty, to ensure unpredictability of the output. In this paper we propose a new indirect approach to collecting entropy using human errors in the game play of a user against a computer. We argue that these errors are due to a large set of factors and provide a good source of randomness. To show the viability of this proposal, we design and implement a game, conduct a user study in which we collect user input in the game, and extract randomness from it. We measure the rate and the quality of the resulting randomness that clearly show effectiveness of the approach. Our work opens a new direction for construction of entropy sources that can be incorporated into a large class of video games.
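The extraction step described above can be illustrated with the classical von Neumann extractor, the simplest way to turn independent but biased bits (such as bits derived from user input events) into unbiased ones. The paper's own extraction is more sophisticated; this is only a conceptual sketch.

```python
# Von Neumann extractor: read input bits in pairs, emit the first bit
# of each unequal pair (01 -> 0, 10 -> 1), discard 00 and 11.
# Requires independent bits with a fixed bias; output bits are unbiased.
def von_neumann(bits):
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out
```

The price of unbiasedness is throughput: a source with one-probability $q$ yields on average $q(1-q)$ output bits per input pair.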
Last updated:  2014-01-17
rPIR: Ramp Secret Sharing based Communication Efficient Private Information Retrieval
Lichun Li, Michael Militzer, Anwitaman Datta
Even as data and analytics driven applications are becoming increasingly popular, retrieving data from shared databases poses a threat to the privacy of their users. For example, investors/patients retrieving records about interested stocks/diseases from a stock/medical database leaks sensitive information to the database server. PIR (Private Information Retrieval) is a promising security primitive to protect the privacy of users' interests. PIR allows the retrieval of a data record from a database without letting the database server know which record is being retrieved. The privacy guarantees could either be information theoretic or computational. Ever since the first PIR schemes were proposed, a lot of work has been done to reduce the communication cost in the information-theoretic settings - particularly the question communication cost, i.e., the traffic from the user to the database server. The answer communication cost (the traffic from the database server to the user) has however barely been improved. When question communication cost is much lower than the record length, reducing question communication cost has marginal benefit on lowering overall communication cost. In contrast, reducing answer cost becomes very important. In this paper we propose ramp secret sharing based mechanisms that reduce the answer communication cost in information-theoretic PIR. We have designed four information-theoretic PIR schemes, using three ramp secret sharing approaches, achieving answer communication cost close to the cost of non-private information retrieval. Evaluation shows that our PIR schemes can achieve lower communication cost and the same level of privacy compared with existing schemes. Our PIR schemes' usages are demonstrated for realistic settings of outsourced data sharing and P2P content delivery scenarios. Thus, our approach makes PIR a viable communication efficient technique to protect user interest privacy.
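For readers new to information-theoretic PIR, the classic two-server scheme conveys the core idea the paper builds on (the paper's ramp-secret-sharing constructions are considerably more communication-efficient on the answer side). This is a minimal sketch, not the paper's protocol:

```python
# Two-server IT-PIR over a bit database: the client sends a uniformly
# random index subset S to server 1 and S symmetric-difference {i} to
# server 2; XORing the two one-bit answers recovers bit i, while each
# server alone sees only a uniformly random subset.
import random

def server_answer(db, query):
    ans = 0
    for j in query:
        ans ^= db[j]
    return ans

def pir_retrieve(db, i):
    n = len(db)
    s1 = {j for j in range(n) if random.random() < 0.5}
    s2 = s1 ^ {i}                  # flips membership of index i only
    return server_answer(db, s1) ^ server_answer(db, s2)
```

Note the asymmetry the paper targets: here the question cost is $n$ bits per server while the answer cost is a single bit, but for multi-bit records the answer traffic dominates.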
Last updated:  2014-01-30
Elligator Squared: Uniform Points on Elliptic Curves of Prime Order as Uniform Random Strings
Mehdi Tibouchi
When represented as a bit string in a standard way, even using point compression, an elliptic curve point is easily distinguished from a random bit string. This property potentially allows an adversary to tell apart network traffic that makes use of elliptic curve cryptography from random traffic, and then intercept, block or otherwise tamper with such traffic. Recently, Bernstein, Hamburg, Krasnova and Lange proposed a partial solution to this problem in the form of Elligator: an algorithm for representing around half of the points on a large class of elliptic curves as close to uniform random strings. Their proposal has the advantage of being very efficient, but suffers from several limitations: * Since only a subset of all elliptic curve points can be encoded as a string, their approach only applies to cryptographic protocols transmitting points that are rerandomizable in some sense. * Supported curves all have non-trivial $2$-torsion, so that Elligator cannot be used with prime-order curves, ruling out standard ECC parameters and many other cryptographically interesting curves such as BN curves. * For indistinguishability to hold, transmitted points have to be uniform in the whole set of representable points; in particular, they cannot be taken from a prime-order subgroup, which, in conjunction with the non-trivial $2$-torsion, rules out protocols that require groups of prime order. In this paper, we propose an approach to overcome all of these limitations. The general idea is as follows: whereas Bernstein et al. represent an elliptic curve point $P$ as the bit string $\iota^{-1}(P)$, where $\iota$ is an injective encoding to the curve (which is only known to exist for some curve families, and reaches only half of all possible points), we propose to use a randomly sampled preimage of $P$ under an admissible encoding of the form $f^{\otimes 2}\colon (u,v)\mapsto f(u)+f(v)$, where $f$ is essentially any algebraic encoding. Such encodings $f$ exist for all elliptic curves, and the corresponding admissible encodings $f^{\otimes 2}$ are essentially surjective, inducing a close to uniform distribution on the curve. As a result, our bit string representation is somewhat less compact (about twice as long as Elligator), but it has none of the limitations above, and can be computed quite efficiently when the function $f$ is suitably chosen.
Last updated:  2014-03-03
A New Algorithm for Solving the General Approximate Common Divisors Problem and Cryptanalysis of the FHE Based on the GACD problem
Jintai Ding, Chengdong Tao
In this paper, we propose a new algorithm for solving the general approximate common divisors (GACD) problem, which is based on lattice reduction algorithms on certain special lattices and linear equation solving algorithms over the integers. Through both theoretical arguments and experimental data, we show that our new algorithm works in polynomial time, but roughly under the following condition: \begin{itemize} \item There is a positive integer $t$ such that $$\frac{\gamma+\eta}{t} + \frac{t}{H}+\rho < \eta;$$ \item We have more than $t$ GACD samples. \end{itemize} or equivalently \begin{itemize} \item $$H(\eta-\rho)^{2}-4(\gamma+\eta)>0;$$ \item We have more than $t=\lceil\frac{H(\eta-\rho)-\sqrt{H^{2}(\eta-\rho)^{2}-4H(\gamma+\eta)}}{2}\rceil$ GACD samples. \end{itemize} Here $\gamma$, $\eta$ and $\rho$ are parameters describing a GACD problem, $H =1/ \log_{2}F$, and $F$ is the Hermite factor. In our experiments, $H$ is shown to be roughly $40$ when using the LLL reduction algorithm and it should be around $80$ when using Deep or BKZ algorithms. We use our algorithm to solve concrete problems that no other algorithm could solve before. We show how our algorithm can be applied to attack the fully homomorphic encryption schemes which are based on the general approximate common divisors problem, and we discuss its limitations.
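To see why the noise parameter $\rho$ matters, note that the error-free case collapses to an ordinary gcd computation; the lattice machinery above is needed precisely because each sample carries noise. A sketch of the degenerate $\rho=0$ case (parameter sizes are illustrative):

```python
# GACD samples with no noise, x_i = p * q_i, leak the secret common
# divisor p through a plain gcd; adding 2^rho-sized noise breaks this.
import math, random

p = random.getrandbits(100) | 1                      # secret common divisor
samples = [p * (random.getrandbits(160) | 1) for _ in range(5)]

g = 0
for x in samples:
    g = math.gcd(g, x)                               # g is a multiple of p
```

With several independent random co-factors, g equals p with overwhelming probability, and in every case g is divisible by p.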
Last updated:  2018-02-12
Cryptanalysis via algebraic spans
Adi Ben-Zvi, Arkadius Kalka, Boaz Tsaban
We introduce a method for obtaining provable polynomial time solutions of problems in nonabelian algebraic cryptography. This method is widely applicable, easier to apply, and more efficient than earlier methods. After demonstrating its applicability to the major classic nonabelian protocols, we use this method to cryptanalyze the Triple Decomposition key exchange protocol, the only classic group theory based key exchange protocol that could not be cryptanalyzed by earlier methods.
Last updated:  2014-01-15
A Fast Modular Reduction Method
Zhengjun Cao, Ruizhong Wei, Xiaodong Lin
We put forth a lookup-table-based modular reduction method which partitions the binary string of an integer to be reduced into blocks according to its runs. Its complexity depends on the amount of runs in the binary string. We show that the new reduction is almost twice as fast as the popular Barrett's reduction, and provide a thorough complexity analysis of the method.
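For comparison, the Barrett reduction that the new method is benchmarked against can be sketched in its standard textbook form (valid for inputs $x < m^2$); this is background, not the paper's run-based method:

```python
# Barrett reduction: one precomputed quotient estimate mu replaces the
# division in x mod m; the correction loop runs at most twice.
def barrett_setup(m):
    k = m.bit_length()
    mu = (1 << (2 * k)) // m           # mu = floor(4^k / m), precomputed once
    return k, mu

def barrett_reduce(x, m, k, mu):
    q = ((x >> (k - 1)) * mu) >> (k + 1)   # quotient estimate, q <= x // m
    r = x - q * m
    while r >= m:                          # at most two subtractions needed
        r -= m
    return r
```

The paper's claim is that a table lookup driven by the runs of the input's binary string can beat this multiply-heavy estimate in practice.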
Last updated:  2014-01-15
Homomorphic AES Evaluation using NTRU
Yarkin Doroz, Yin Hu, Berk Sunar
Since its introduction more than a decade ago, the homomorphic properties of the NTRU encryption scheme have gone largely ignored. A variant of NTRU proposed by Stehle and Steinfeld was recently extended into a full-fledged multi-key fully homomorphic encryption scheme by Alt-Lopez, Tromer and Vaikuntanathan (ATV). This NTRU-based FHE presents a viable alternative to the currently dominant BGV-style FHE schemes. While the scheme appears to be more efficient, a full implementation and comparison to BGV-style implementations has been missing in the literature. In this work, we develop a customized implementation of the ATV scheme. First, parameters are selected to yield an efficient and yet secure ATV instantiation. We present an analysis of the noise growth that allows us to formulate a modulus cutting strategy for arbitrary circuits. Furthermore, we introduce a specialization of the ring structure that allows us to drastically reduce the public key size, making evaluation of deep circuits such as the AES block cipher viable on a standard computer with a reasonable amount of memory. Moreover, with the modulus specialization the need for key switching is eliminated. Finally, we present a generic bit-sliced implementation of the ATV scheme that embodies a number of optimizations. To assess the performance of the scheme we homomorphically evaluate the full 10-round AES circuit in 31 hours with 2048 message slots, resulting in 55 sec per AES block evaluation time.
Last updated:  2014-01-17
Extending and Applying a Framework for the Cryptographic Verification of Java Programs.
Ralf Kuesters, Enrico Scapin, Tomasz Truderung, Juergen Graf
In our previous work, we have proposed a framework which allows tools that can check standard noninterference properties but a priori cannot deal with cryptography to establish cryptographic indistinguishability properties, such as privacy properties, for Java programs. We refer to this framework as the CVJ framework (Cryptographic Verification of Java Programs) in this paper. While so far the CVJ framework directly supports public-key encryption (without corruption and without a public-key infrastructure) only, in this work we further instantiate the framework to support, among others, public-key encryption and digital signatures, both with corruption and a public-key infrastructure, as well as (private) symmetric encryption. Since these cryptographic primitives are very common in security-critical applications, our extensions make the framework much more widely applicable. To illustrate the usefulness and applicability of the extensions proposed in this paper, we apply the framework along with the tool Joana, which allows for the fully automatic verification of noninterference properties of Java programs, to establish cryptographic privacy properties of a (non-trivial) cloud storage application, where clients can store private information on a remote server.
Last updated:  2015-04-02
On the Security of the Pre-Shared Key Ciphersuites of TLS
Yong Li, Sven Schäge, Zheng Yang, Florian Kohlar, Jörg Schwenk
TLS is by far the most important protocol on the Internet for negotiating secure session keys and providing authentication. Only very recently, the standard ciphersuites of TLS have been shown to provide provably secure guarantees under a new notion called Authenticated and Confidential Channel Establishment (ACCE) introduced by Jager et al. at CRYPTO'12. In this work, we analyse the variants of TLS that make use of pre-shared keys (TLS-PSK). In various environments, TLS-PSK is an interesting alternative for remote authentication between servers and constrained clients like smart cards, for example for mobile phone authentication, EMV-based payment transactions or authentication via electronic ID cards. First, we introduce a new and strong definition of ACCE security that covers protocols with pre-shared keys. Next, we prove that all ciphersuite families of TLS-PSK meet our strong notion of ACCE security. Our results do not rely on random oracles nor on any non-standard assumption.
Last updated:  2014-01-13
A Secure Text Messaging Protocol
Gary Belvin
Mobile text messages are currently vulnerable to inspection, modification, and replay by network operators and those that influence network operators. This paper describes a set of protocols that provide end-to-end message confidentiality, integrity, and authenticity over the high latency, low bandwidth, Short Message Service provided by GSM networks.
Last updated:  2014-01-12
A new attack on RSA with a composed decryption exponent
Abderrahmane Nitaj, Mohamed Ould Douh
In this paper, we consider an RSA modulus $N=pq$, where the prime factors $p$, $q$ are of the same size. We present an attack on RSA when the decryption exponent $d$ is in the form $d=Md_1+d_0$ where $M$ is a given positive integer and $d_1$ and $d_0$ are two suitably small unknown integers. In 1999, Boneh and Durfee presented an attack on RSA when $d<N^{0.292}$. When $d=Md_1+d_0$, our attack enables one to overcome Boneh and Durfee's bound and to factor the RSA modulus.
Last updated:  2014-01-12
Authenticated Encryption with SPECK
Chase Manny
In this paper, we provide performance measures for software implementations of the NSA-designed Speck128 block cipher together with various existing authenticated encryption modes. We investigated Speck128 using GCM, CCM, EAX, OCB3, COPA, and PAEAD-1, and we briefly discuss performance advantages and disadvantages of each mode. Our results indicate that Speck128 is capable of performing extremely fast authenticated encryption, as fast as 3.4 cycles/byte on a modern x86-based 64-bit processor.
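Since the abstract reports timings rather than the cipher itself, a compact reference sketch of Speck128/128 may help readers see what is being benchmarked. This follows the public design's round map for two 64-bit words with rotation amounts 8 and 3 and 32 rounds; it is an illustrative pure-Python sketch, not an optimized or constant-time implementation, and test vectors should be checked against the design paper before any real use.

```python
# Sketch of Speck128/128: two 64-bit words, 32 rounds, rotations 8 and 3.
# The key schedule reuses the round map with the round index as "key".
MASK64 = (1 << 64) - 1

def ror(x, r): return ((x >> r) | (x << (64 - r))) & MASK64
def rol(x, r): return ((x << r) | (x >> (64 - r))) & MASK64

def speck_round(x, y, k):
    x = ((ror(x, 8) + y) & MASK64) ^ k       # modular add, then key XOR
    y = rol(y, 3) ^ x
    return x, y

def speck_round_inv(x, y, k):
    y = ror(y ^ x, 3)
    x = rol(((x ^ k) - y) & MASK64, 8)       # exact inverse of speck_round
    return x, y

def key_schedule(k0, l0, rounds=32):
    keys, l, k = [k0], l0, k0
    for i in range(rounds - 1):
        l, k = speck_round(l, k, i)
        keys.append(k)
    return keys

def encrypt(x, y, keys):
    for k in keys:
        x, y = speck_round(x, y, k)
    return x, y

def decrypt(x, y, keys):
    for k in reversed(keys):
        x, y = speck_round_inv(x, y, k)
    return x, y
```

The add-rotate-xor structure with no S-boxes or tables is what makes the software speeds quoted in the abstract plausible.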
Last updated:  2016-08-05
Lattice-based Group Signature Scheme with Verifier-local Revocation
Adeline Langlois, San Ling, Khoa Nguyen, Huaxiong Wang
Support of membership revocation is a desirable functionality for any group signature scheme. Among the known revocation approaches, verifier-local revocation (VLR) seems to be the most flexible one, because it only requires the verifiers to possess some up-to-date revocation information, but not the signers. All of the contemporary VLR group signatures operate in the bilinear map setting, and all of them will be insecure once quantum computers become a reality. In this work, we introduce the first lattice-based VLR group signature, and thus, the first such scheme that is believed to be quantum-resistant. In comparison with existing lattice-based group signatures, our scheme has several noticeable advantages: support of membership revocation, logarithmic-size signatures, and milder hardness assumptions. In the random oracle model, our scheme is proven secure based on the hardness of the $\mathrm{SIVP}_{\widetilde{O}(n^{2.5})}$ problem in general lattices. Moreover, our construction works without relying on public-key encryption schemes, which is an intriguing feature for group signatures.
Last updated:  2014-01-12
Scale-Invariant Fully Homomorphic Encryption over the Integers
Jean-Sébastien Coron, Tancrède Lepoint, Mehdi Tibouchi
At Crypto 2012, Brakerski constructed a scale-invariant fully homomorphic encryption scheme based on the LWE problem, in which the same modulus is used throughout the evaluation process, instead of a ladder of moduli when doing “modulus switching”. In this paper we describe a variant of the van Dijk et al. FHE scheme over the integers with the same scale-invariant property. Our scheme has a single secret modulus whose size is linear in the multiplicative depth of the circuit to be homomorphically evaluated, instead of exponential; we therefore construct a leveled fully homomorphic encryption scheme. This scheme can be transformed into a pure fully homomorphic encryption scheme using bootstrapping, and its security is still based on the Approximate-GCD problem. We also describe an implementation of the homomorphic evaluation of the full AES encryption circuit, and obtain significantly improved performance compared to previous implementations: about 23 seconds (resp. 3 minutes) per AES block at the 72-bit (resp. 80-bit) security level on a mid-range workstation. Finally, we prove the equivalence between the (error-free) decisional Approximate-GCD problem introduced by Cheon et al. (Eurocrypt 2013) and the classical computational Approximate-GCD problem. This equivalence allows to get rid of the additional noise in all the integer-based FHE schemes described so far, and therefore to simplify their security proof.
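As background, the van Dijk et al. scheme that this paper makes scale-invariant can be sketched in its toy symmetric form. Parameters below are illustrative only and offer no security; they merely keep the noise visibly below $p/2$.

```python
# Toy symmetric DGHV encryption over the integers: a bit m is encrypted
# as c = q*p + 2*r + m for a secret odd p; decryption is (c mod p) mod 2,
# correct as long as the noise 2*r + m stays below p/2.
import random

def keygen(eta=64):
    return random.getrandbits(eta) | (1 << (eta - 1)) | 1   # secret odd p

def encrypt(p, m, gamma=256, rho=16):
    q = random.getrandbits(gamma)
    r = random.getrandbits(rho)                             # small noise
    return q * p + 2 * r + m

def decrypt(p, c):
    return (c % p) % 2
```

Ciphertext addition XORs the plaintext bits and multiplication ANDs them, with the noise growing at each operation; controlling that growth is exactly what modulus switching, and this paper's scale-invariant alternative, are about.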
Last updated:  2014-01-12
On the Design of LIL Tests for (Pseudo) Random Generators and Some Experimental Results
Yongge Wang
Random numbers have been one of the most useful objects in statistics, computer science, cryptography, modeling, simulation, and other applications, though it is very difficult to construct true randomness. Many solutions (e.g., cryptographic pseudorandom generators) have been proposed to harness or simulate randomness, and many statistical testing techniques have been proposed to determine whether a pseudorandom generator produces high quality randomness. NIST SP800-22 (2010) proposes the state-of-the-art test suite for (pseudo) random generators to detect deviations of a binary sequence from randomness. On the one hand, as a counter-example to the NIST SP800-22 test suite, it is easy to construct functions that are considered GOOD pseudorandom generators by the NIST SP800-22 test suite even though their output is easily distinguishable from the uniform distribution. Thus these functions are not pseudorandom generators by definition. On the other hand, NIST SP800-22 does not cover some of the important laws for randomness. Two fundamental limit theorems about random binary strings are the central limit theorem and the law of the iterated logarithm (LIL). Several frequency related tests in NIST SP800-22 cover the central limit theorem while no NIST SP800-22 test covers LIL. This paper proposes techniques to address the above challenges that the NIST SP800-22 test suite faces. Firstly, we propose statistical distance based testing techniques for (pseudo) random generators to reduce the above-mentioned Type II errors in the NIST SP800-22 test suite. Secondly, we propose LIL based statistical testing techniques, calculate the probabilities, and carry out experimental tests on widely used pseudorandom generators by generating around 30TB of pseudorandom sequences.
The experimental results show that for a sample size of 1000 sequences (2TB), the statistical distance between the generated sequences and the uniform distribution is around 0.07 (with 0 for statistically indistinguishable and 1 for completely distinguishable) and the root-mean-square deviation is around 0.005. Though the statistical distance 0.07 and RMSD 0.005 are acceptable for some applications, for a cryptographic "random oracle", the preferred statistical distance should be smaller than 0.03 and the RMSD smaller than 0.001 at sample size 1000. These results justify the importance of the LIL testing techniques designed in this paper. The experimental results in this paper are reproducible and the raw experimental data are available at the author's website.
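For readers unfamiliar with the statistic being tested: the LIL says that for a truly random $\pm 1$ walk $S_n$, the normalized quantity $S_n/\sqrt{2n\ln\ln n}$ has limit superior $1$ and limit inferior $-1$. A minimal sketch of computing this normalization for a bit sequence (how the paper bins and scores these values is more involved):

```python
# Normalized LIL statistic: values that sit far outside [-1, 1] for
# large n indicate a deviation from randomness.
import math

def lil_statistic(bits):
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)    # endpoint of the +/-1 walk
    return s / math.sqrt(2 * n * math.log(math.log(n)))
```

A constant sequence scores roughly $\sqrt{n/(2\ln\ln n)}$ and is flagged immediately, while a perfectly balanced sequence scores 0.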
Last updated:  2014-04-07
Lyra: Password-Based Key Derivation with Tunable Memory and Processing Costs
Leonardo C. Almeida, Ewerton R. Andrade, Paulo S. L. M. Barreto, Marcos A. Simplicio Jr.
We present Lyra, a password-based key derivation scheme based on cryptographic sponges. Lyra was designed to be strictly sequential (i.e., not easily parallelizable), providing strong security even against attackers that use multiple processing cores (e.g., custom hardware or a powerful GPU). At the same time, it is very simple to implement in software and allows legitimate users to fine-tune its memory and processing costs according to the desired level of security against brute force password guessing. We compare Lyra with similar-purpose state-of-the-art solutions, showing how our proposal provides a higher security level and overcomes limitations of existing schemes. Specifically, we show that if we fix Lyra's total processing time $t$ in a legitimate platform, the cost of a memory-free attack against the algorithm is exponential, while the best known result in the literature (namely, against the scrypt algorithm) is quadratic. In addition, for the same processing time, Lyra allows for a higher memory usage than its counterparts, further increasing the cost of brute force attacks.
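To illustrate the class of design being described (not Lyra's actual algorithm), here is a toy strictly sequential, memory-filling password hash built from an off-the-shelf hash function; the cell count and the data-dependent mixing pass are illustrative assumptions.

```python
# Toy memory-hard KDF sketch: a sequential fill makes each memory cell
# depend on the previous one, and a second pass re-reads cells in a
# state-dependent order, so a memory-free attacker must recompute cells.
import hashlib

def toy_mhkdf(password: bytes, salt: bytes, ncells: int = 1024) -> bytes:
    h = hashlib.sha256(password + salt).digest()
    mem = []
    for i in range(ncells):                  # strictly sequential fill
        h = hashlib.sha256(h + i.to_bytes(4, "big")).digest()
        mem.append(h)
    for _ in range(ncells):                  # data-dependent mixing pass
        j = int.from_bytes(h[:4], "big") % ncells
        h = hashlib.sha256(h + mem[j]).digest()
    return h
```

The `ncells` parameter plays the role of the tunable memory cost: raising it increases both the legitimate user's memory footprint and the recomputation burden of a memory-free attacker.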