All papers in 2015 (Page 9 of 1255 results)

Last updated:  2018-03-23
Collateral damage of Facebook Apps: an enhanced privacy scoring model
Iraklis Symeonidis, Pagona Tsormpatzoudi, Bart Preneel
Establishing friendship relationships on Facebook often entails information sharing based on the social trust and implicit contract between users and their friends. In this context, Facebook offers applications (Apps) developed by third-party application providers (AppPs), which may gain access to users' personal data via Apps installed by their friends. Such access takes place outside the circle of social trust, with the user not being aware whether a friend has installed an App collecting her data. In some cases, one or more AppPs may cluster several Apps and thus gain access to a collection of personal data. As a consequence, privacy risks emerge. Previous research has mentioned the need to quantify privacy risks on Online Social Networks (OSNs). Nevertheless, most existing works do not focus on personal data disclosure via Apps, and the problem of personal data clustering by AppPs has not been studied. In this work, we perform a general analysis, from both a technical and a legal point of view, of the privacy threats stemming from the personal data requested by Apps installed by the user's friends. To assist users, we propose a model and a privacy scoring formula to calculate the amount of personal data that may be exposed to AppPs. Moreover, we propose clustering-based algorithms that compute the visibility of each personal data item to the AppPs.
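The core visibility idea can be illustrated with a minimal sketch: for each provider, take the union of the data items reachable through that provider's Apps installed by the user's friends. The app names, attributes, and aggregation rule below are our own illustrative assumptions, not the paper's exact scoring model.

```python
# Hypothetical sketch of the visibility idea: for each third-party
# application provider (AppP), collect the union of a user's personal
# data items reachable through Apps installed by the user's friends.
# App/attribute names and the aggregation rule are illustrative, not
# the paper's exact scoring model.

def visibility_per_provider(apps, installed_by_friends):
    """apps: app -> (provider, set of requested data items).
    installed_by_friends: set of apps installed by at least one friend.
    Returns provider -> set of exposed data items."""
    exposed = {}
    for app in installed_by_friends:
        provider, items = apps[app]
        exposed.setdefault(provider, set()).update(items)
    return exposed

apps = {
    "quiz":      ("AcmeApps", {"birthday", "photos"}),
    "horoscope": ("AcmeApps", {"location"}),
    "game":      ("OtherCo",  {"friend_list"}),
}
exposure = visibility_per_provider(apps, {"quiz", "horoscope"})
# AcmeApps clusters two Apps and therefore sees the union of their data.
print(sorted(exposure["AcmeApps"]))  # ['birthday', 'location', 'photos']
```

The clustering effect described in the abstract shows up directly: a provider's exposure grows with the union over all of its Apps, not with any single App.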
Last updated:  2016-09-07
Secure Deduplication of Encrypted Data without Additional Independent Servers
Jian Liu, N. Asokan, Benny Pinkas
Encrypting data on the client side before uploading it to cloud storage is essential for protecting users' privacy. However, client-side encryption is at odds with the standard practice of deduplication. Reconciling client-side encryption with cross-user deduplication is an active research topic. We present the first secure cross-user deduplication scheme that supports client-side encryption {\em without requiring any additional independent servers}. Interestingly, the scheme is based on using a PAKE (password authenticated key exchange) protocol. We demonstrate that {\em our scheme provides better security guarantees than previous efforts}. We show both the effectiveness and the efficiency of our scheme via simulations using realistic datasets and an implementation.
Last updated:  2015-05-13
A comprehensive and lightweight security architecture to secure the IoT throughout the lifecycle of a device based on HIMMO
Oscar Garcia-Morchon, Ronald Rietman, Sahil Sharma, Ludo Tolhuizen, Jose Luis Torre-Arce
Smart objects are devices with computational and communication capabilities connected to the Internet, forming the so-called Internet of Things (IoT). The IoT enables many applications, for instance outdoor lighting control, smart energy and water management, or environmental sensing in a smart city environment. Security in such scenarios remains an open challenge due to the resource-constrained nature of devices and networks and the multiple ways in which opponents can attack the system during the lifecycle of a smart object. This paper first reviews security and operational goals in an IoT scenario inspired by a smart city environment. Then, we present a comprehensive and lightweight security architecture to secure the IoT throughout the lifecycle of a device. Our solution relies on the lightweight HIMMO scheme as its building block and shows that HIMMO is not only resource-efficient but also enables advanced IoT protocols and deployments. Our design and analysis show that our HIMMO-based security architecture can be easily integrated into existing communication protocols such as IEEE 802.15.4 or OMA LWM2M, providing a number of advantages, both performance- and operation-wise, that existing solutions cannot.
Last updated:  2015-05-13
An Optimization of Gu Map-1
Yupu Hu, Huiwen Jia
As a modified version of the GGH map, Gu map-1 was successful in constructing multi-party key exchange (MPKE). In this short paper we present a result on the parameter setting of Gu map-1, which allows us to reduce a key parameter $\tau$ from the original $O(n^{2})$ down to $O(\lambda n)$ (in the theoretically secure case, where $\lambda$ is the security parameter), and even down to $O(2n)$ (in the computationally secure case). This optimization greatly reduces the size of the map.
Last updated:  2015-05-13
Masks will Fall Off -- Higher-Order Optimal Distinguishers
Nicolas Bruneau, Sylvain Guilley, Annelie Heuser, Olivier Rioul
Higher-order side-channel attacks are able to break the security of cryptographic implementations even if they are protected with masking countermeasures. In this paper, we derive the best possible distinguishers (High-Order Optimal Distinguishers, or HOOD) against masking schemes under the assumption that the attacker can profile. Our exact derivation admits simple approximate expressions for high and low noise, and shows to what extent the optimal distinguishers reduce to known attacks when no profiling is possible. From these results, we can theoretically explain the empirical outcomes of recent works on second-order distinguishers. In addition, we extend our analysis to any order and to masked table precomputation. Our results give insight into which distinguishers have to be considered in the security analysis of cryptographic devices.
Last updated:  2015-05-12
On the Systematic Constructions of Rotation Symmetric Bent Functions with Any Possible Algebraic Degrees
Sihong Su, Xiaohu Tang
In the literature, few constructions of $n$-variable rotation symmetric bent functions have been presented, and they either place restrictions on $n$ or have algebraic degree at most $4$. In this paper, for any even integer $n=2m\ge2$, a first systematic construction of $n$-variable rotation symmetric bent functions, with any possible algebraic degree ranging from $2$ to $m$, is proposed.
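Bentness itself is easy to check mechanically via the Walsh-Hadamard transform. As a sanity check of the definition we use the classical elementary symmetric function of degree 2, $\sigma_2(x)=\sum_{i<j}x_ix_j$, which is symmetric (hence rotation symmetric) and known to be bent for every even $n$; it is not one of the paper's new constructions, only an illustration.

```python
# A function f on n variables is bent iff every Walsh coefficient
# W_f(a) = sum_x (-1)^(f(x) + a.x) has absolute value 2^(n/2).
# We check this for sigma_2, the degree-2 elementary symmetric function,
# a classical (rotation) symmetric bent function for even n; this is an
# illustration of the definition, not a construction from the paper.

def walsh(f, n):
    return [sum((-1) ** (f(x) ^ bin(a & x).count("1") % 2)
                for x in range(1 << n))
            for a in range(1 << n)]

def sigma2(x):
    w = bin(x).count("1")           # sigma_2(x) = C(wt(x), 2) mod 2
    return (w * (w - 1) // 2) % 2

n = 4
spectrum = walsh(sigma2, n)
print(all(abs(w) == 2 ** (n // 2) for w in spectrum))  # True: sigma_2 is bent
```

For $n=4$ the spectrum consists entirely of values $\pm 4$; any deviation would disprove bentness immediately, which is why Walsh checks are the standard verification tool for such constructions.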
Last updated:  2017-10-22
Adaptively Secure Computation with Partial Erasures
Carmit Hazay, Yehuda Lindell, Arpita Patra
Adaptive security is a strong corruption model that captures ``hacking'' attacks where an external attacker breaks into parties' machines in the midst of a protocol execution. There are two types of adaptively-secure protocols: adaptive with erasures and adaptive without erasures. Achieving adaptivity without erasures is preferable, since secure erasures are not always trivial. However, it seems far harder. We introduce a new model of adaptive security called adaptive security with partial erasures that allows erasures, but only assumes them in a minimal sense. Specifically, if all parties are corrupted then security holds as long as any single party successfully erases. In addition, security holds if any proper subset of the parties is corrupted without erasures. We initiate a theoretical study of this new notion and demonstrate that secure computation in this setting is as efficient as static secure computation. In addition, we study the relations between semi-adaptive security~\cite{GarayWZ09}, adaptive security with partial erasures, and adaptive security without any erasures. We prove that the existence of semi-adaptive OT implies secure computation in all these settings.
Last updated:  2015-05-12
On Constructions of a Sort of MDS Block Diffusion Matrices for Block Ciphers and Hash Functions
Ruoxin Zhao, Rui Zhang, Yongqiang Li, Baofeng Wu
Many modern block ciphers use maximum distance separable (MDS) matrices as their diffusion layers. In this paper, we propose a new method to verify a sort of MDS diffusion block matrices whose blocks are all polynomials in a certain primitive block over the finite field $\mathbb F_2$. We then discover a new kind of transformation that preserves the MDS property of diffusion matrices and generates a series of new MDS matrices from a given one. Moreover, we obtain an equivalence relation from this kind of transformation, under which the MDS property is invariant; this can greatly reduce the amount of computation when searching for MDS matrices. The minimal polynomials of matrices play an important role in our strategy. To make the results concrete, we list a series of MDS diffusion matrices obtained from our method for some specific parameters. Furthermore, we discuss MDS recursive diffusion layers with our method, extending the corresponding work of M. Sajadieh et al. published at FSE 2012 and the work of S. Wu published at SAC 2012.
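The MDS property being verified has a simple brute-force characterization: a matrix over a finite field is MDS iff every square submatrix is nonsingular. The sketch below checks this over GF(2^4); the field modulus and the example matrices are our own illustrative choices, not matrices constructed in the paper.

```python
from itertools import combinations

# Brute-force MDS check over GF(2^4) with modulus x^4 + x + 1 (our own
# illustrative choice). A matrix is MDS iff all of its square
# submatrices are nonsingular; this naive test is what the paper's
# verification method is designed to avoid for large parameters.

MOD = 0b10011  # x^4 + x + 1

def gmul(a, b):
    """Multiply in GF(2^4) (carry-less multiply with reduction)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= MOD
        b >>= 1
    return r

def det(m):
    """Determinant over GF(2^4) by cofactor expansion (tiny matrices);
    in characteristic 2 the cofactor signs all vanish."""
    if len(m) == 1:
        return m[0][0]
    d = 0
    for j, a in enumerate(m[0]):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        d ^= gmul(a, det(minor))
    return d

def is_mds(m):
    n = len(m)
    return all(det([[m[i][j] for j in cols] for i in rows]) != 0
               for k in range(1, n + 1)
               for rows in combinations(range(n), k)
               for cols in combinations(range(n), k))

print(is_mds([[1, 1], [1, 2]]))  # True: all entries and the determinant nonzero
print(is_mds([[1, 1], [1, 1]]))  # False: the matrix itself is singular
```

The cost of this check explodes combinatorially with the matrix size, which is why structural shortcuts such as the paper's equivalence relation matter when searching for MDS matrices.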
Last updated:  2015-05-11
A Comment on Gu Map-1
Yupu Hu, Huiwen Jia
Gu map-1 is a modified version of the GGH map. It uses the same ideal lattices for constructing the trapdoors, while the novelty is that no encodings of zero are given. In this short paper we show that Gu map-1 cannot be used for the instance of witness encryption (WE) based on the hardness of the 3-exact-cover problem. That is, if Gu map-1 is used for such an instance, we can break it by solving a combined 3-exact-cover problem. The reason is precisely that no encodings of zero are given.
Last updated:  2015-05-10
A New Model for Error-Tolerant Side-Channel Cube Attacks
Zhenqi Li, Bin Zhang, Junfeng Fan, Ingrid Verbauwhede
Side-channel cube attacks are a class of leakage attacks on block ciphers in which the attacker is assumed to have access to some leaked information on the internal state of the cipher as well as the plaintext/ciphertext pairs. The known Dinur-Shamir model and its variants require error-free data for at least part of the measurements. In this paper, we consider a new and more realistic model which can deal with the case when \textit{all} the leaked bits are noisy. In this model, the key recovery problem is converted into the problem of decoding a binary linear code over a binary symmetric channel whose crossover probability is determined by the measurement quality and the cube size. We use the maximum likelihood decoding method to recover the key. As a case study, we demonstrate efficient key recovery attacks on PRESENT. We show that the full $80$-bit key can be restored with $2^{10.2}$ measurements with an error probability of $19.4\%$ for each measurement.
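Over a binary symmetric channel with crossover probability below 1/2, maximum likelihood decoding of a linear code is exactly nearest-codeword decoding in Hamming distance. The toy sketch below uses a [7,4] Hamming code as a stand-in for the code induced by the cube equations; the code and sizes are our illustrative assumptions, not PRESENT's actual attack parameters.

```python
from itertools import product

# ML decoding over a BSC (crossover p < 1/2) = pick the codeword
# closest in Hamming distance to the received word. Toy [7,4] Hamming
# code standing in for the linear code induced by the cube equations;
# illustrative only, not the paper's PRESENT parameters.

G = [  # generator matrix, rows are basis codewords (min distance 3)
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(msg):
    """Codeword = msg * G over GF(2)."""
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

def ml_decode(received):
    """Exhaustive nearest-codeword search over all 2^4 messages."""
    best = min(product([0, 1], repeat=4),
               key=lambda m: sum(a != b
                                 for a, b in zip(encode(m), received)))
    return list(best)

msg = [1, 0, 1, 1]
noisy = encode(msg)
noisy[2] ^= 1            # one bit flipped by the noisy channel
print(ml_decode(noisy))  # [1, 0, 1, 1]: a distance-3 code corrects 1 error
```

For realistic code lengths the exhaustive search is replaced by structured decoders, but the decision rule being implemented is the same one the abstract refers to.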
Last updated:  2015-05-09
On the Amortized Complexity of Zero-knowledge Protocols
Ronald Cramer, Ivan Damgård, Marcel Keller
We propose a general technique that allows improving the complexity of zero-knowledge protocols for a large class of problems where previously the best known solution was a simple cut-and-choose style protocol, i.e., where the size of a proof for problem instance $x$ and error probability $2^{-n}$ was $O(|x| n)$ bits. By using our technique to prove $n$ instances simultaneously, we can bring down the proof size per instance to $O(|x| + n)$ bits for the same error probability while using no computational assumptions. Examples where our technique applies include proofs for quadratic residuosity, proofs of subgroup membership and knowledge of discrete logarithms in groups of unknown order, interval proofs of the latter, and proofs of plaintext knowledge for various types of homomorphic encryption schemes. We first propose our protocols as $\Sigma$-protocols and extend them later to zero-knowledge proofs of knowledge.
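The flavor of such amortization can be sketched with a batch Schnorr-style $\Sigma$-protocol: knowledge of $n$ discrete logs $h_i=g^{x_i}$ is proved with a single commitment and a single response, so the communication grows like $O(|x|+n)$ rather than $n$ independent proofs. This is a generic illustration of amortized $\Sigma$-protocols, not the paper's specific constructions, and the toy group parameters offer no security.

```python
import random

# Batch Schnorr-style Sigma-protocol: one commitment and one response
# cover n discrete-log instances h_i = g^{x_i}. Toy parameters
# (p = 23, subgroup of prime order q = 11, g = 2) are for illustration
# only; this sketches the general amortization idea, not the paper's
# exact protocols.

p, q, g = 23, 11, 2                                  # g has order q mod p
n = 3
xs = [random.randrange(1, q) for _ in range(n)]      # witnesses
hs = [pow(g, x, p) for x in xs]                      # statements

# Prover: single commitment
r = random.randrange(q)
a = pow(g, r, p)
# Verifier: one random challenge per instance
es = [random.randrange(q) for _ in range(n)]
# Prover: single aggregated response z = r + sum(e_i * x_i) mod q
z = (r + sum(e * x for e, x in zip(es, xs))) % q
# Verifier: one equation checks all n instances at once
lhs = pow(g, z, p)
rhs = a
for h, e in zip(hs, es):
    rhs = rhs * pow(h, e, p) % p
print(lhs == rhs)  # True
```

The check works because $g^z = g^{r+\sum e_i x_i} = a\cdot\prod h_i^{e_i}$; only the single response $z$ and the challenges depend on $n$.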
Last updated:  2015-05-09
XLS is not a Strong Pseudorandom Permutation
Mridul Nandi
In FSE 2007, Ristenpart and Rogaway described a generic method, XLS, to construct a length-preserving strong pseudorandom permutation (SPRP) over bit-strings of size at least n. It requires a length-preserving permutation E* over all bit-strings whose size is a multiple of n, and a blockcipher E with block size n. The SPRP security of XLS was proved from the SPRP assumptions on both E* and E. In this paper we disprove the claim by demonstrating an SPRP distinguisher for XLS which makes only three queries and has distinguishing advantage of about 1/2. XLS uses a multi-permutation linear function called mix2. In this paper, we also show that if we replace mix2 by any invertible linear function, the construction XLS still remains insecure. Thus the mode has an inherent weakness.
Last updated:  2015-05-09
Revisiting Security Claims of XLS and COPA
Mridul Nandi
Ristenpart and Rogaway proposed XLS in 2007, a generic method to encrypt messages with incomplete last blocks. Later, in 2013, Andreeva et al. proposed an authenticated encryption scheme COPA which uses XLS while processing incomplete message blocks. Following the design of COPA, several other CAESAR candidates used a similar approach. Surprisingly, in 2014 Nandi showed a three-query distinguisher against XLS which violates the security claim of XLS and puts a question mark on all schemes using XLS. However, due to the interleaved nature of the encryption and decryption queries of the distinguisher, it was not clear whether the security claims of COPA remain true or not. This paper revisits XLS and COPA in the directions of both cryptanalysis and provable security. Our contributions can be summarized in the following two parts: 1. Cryptanalysis: We describe two attacks - (i) a new distinguisher against XLS and, extending this attack, (ii) a forging algorithm with query complexity about 2^{n/3} against COPA, where n is the block size of the underlying blockcipher. 2. Security Proof: Due to the above attacks, the main claims of XLS (already known before) and COPA are wrong. So we revise the security analysis of both and show that (i) both XLS and COPA are pseudorandom functions (PRFs) up to 2^{n/2} queries and (ii) COPA is integrity-secure up to 2^{n/3} queries (matching the query complexity of our forging algorithm).
Last updated:  2015-05-09
Security Evaluation and Enhancement of Bistable Ring PUFs
Xiaolin Xu, Ulrich Rührmair, Daniel E. Holcomb, Wayne Burleson
The Bistable Ring (BR) Physical Unclonable Function (PUF) is a newly proposed hardware security primitive in the PUF family. In this work, we comprehensively evaluate its resilience against Machine Learning (ML) modeling attacks. Based on the success of ML attacks, we propose XOR strategies to enhance the security of BR PUFs. Our results show that the XOR BR PUF with more than four parallel BR PUFs is able to resist the ML modeling methods in this work. We also evaluate the other PUF metrics of reliability, uniqueness and uniformity, and find that the XOR function is also effective in improving the uniformity of BR PUFs.
Last updated:  2015-05-09
Individualizing Electrical Circuits of Cryptographic Devices as a Means to Hinder Tampering Attacks
Zoya Dyka, Thomas Basmer, Christian Wittke, Peter Langendoerfer
Side-channel and fault attacks take advantage of the fact that the behavior of crypto implementations can be observed and provides hints that simplify revealing keys. In the real world, many devices identical to the target device can be attacked before the real target itself, increasing the success of the attack: their packages can be opened and their electromagnetic radiation and structure can be analyzed. Another way to significantly improve the success rate of attacks is to measure the difference of the side-channel leakage of two identical devices, one of them being the target, using a Wheatstone bridge measurement setup. Here we propose to individualize the electrical circuit of cryptographic devices in order to prevent attacks that use identical devices: attacks that analyze the structure of devices identical to the target device in a preparation phase; usual side-channel attacks that always use the same target device for collecting many traces; and attacks that use two identical devices at the same time to measure the difference of side-channel leakages. The proposed individualization can prevent such attacks because the power consumption and the electromagnetic radiation of devices with individualized electrical circuits are themselves individualized while the devices provide the same functionality. We implemented three individualized ECC designs that provide exactly the same cryptographic function on a Spartan-6 FPGA. These designs differ from each other in a single block only, i.e. in the field multiplier. The visualization of the routed designs and measurement results show clear differences in the topology, in the resources consumed, as well as in the power and electromagnetic traces. We show that the influence of the individualized designs on the power traces is comparable with the influence of the inputs.
These facts show that individualizing the electrical circuits of cryptographic devices can be exploited as a protection mechanism. We envision that this type of protection mechanism is relevant if an attacker has physical access to the cryptographic devices, e.g. in wireless sensor networks, from which devices can easily be stolen for further analysis in the lab.
Last updated:  2015-05-25
FIDES: Enhancing Trust in Reconfigurable Based Hardware Systems
Devu Manikantan Shila, Vivek Venugopalan, Cameron D Patterson
Extensive use of third-party IP cores (e.g., HDL, netlist) and open-source tools in the FPGA application design and development process, in conjunction with inadequate bitstream protection measures, has raised crucial security concerns for reconfigurable hardware systems. Designing high-fidelity and secure methodologies for FPGAs is still in its infancy, and in particular, there are almost no concrete methods/techniques that can ensure trust in FPGA applications not entirely designed and/or developed in a trusted environment. This work strongly suggests the need for an anomaly detection capability within FPGAs that can continuously monitor the behavior of the underlying FPGA IP cores and their communication activities with other IP cores or peripherals for any abnormalities. To capture this need, we propose a technique called the FIDelity Enhancing Security (FIDES) methodology for FPGAs, which uses a combination of access control policies and behavior learning techniques for anomaly detection. FIDES essentially comprises two components: (i) {\em Trusted Wrappers}, a layer of monitors with sensing capabilities distributed across the FPGA fabric; these wrappers embed the output of each IP core $i$ with a tag $\tau_i$ according to the pre-defined security policy $\Pi$ and also verify the embeddings of each input to the IP core to detect any violation of policies. The use of tagging and tracking enables us to capture the normal interactions of each IP core with its environment (e.g., other IP cores, memory, OS or I/O ports).
{\em Trusted Wrappers} also monitor the statistical properties exhibited by each IP core module on execution, such as power consumption, number of clock cycles and timing variations, to detect any anomalous operations; (ii) a {\em Trusted Anchor} that monitors the communication between the IP cores and the peripherals with regard to the centralized security policies $\Psi$, as well as the statistical properties produced by the peripherals. We target the FIDES architecture on a Xilinx Zynq 7020 device implemented with a red-black system comprising sensitive and non-sensitive IP cores. Our results show that the FIDES implementation leads to only 1-2\% overhead in terms of logic resources per wrapper and incurs minimal latency per wrapper for tag verification and embedding. On the other hand, compared to the baseline implementation, when all the communications within the system are routed to the Trusted Anchor for centralized policy checking and verification, a latency of 1.5X clock cycles is observed; this clearly demonstrates the advantage of using distributed wrappers as opposed to centralized policy checking.
Last updated:  2015-05-08
Message-Locked Encryption for Lock-Dependent Messages
Martín Abadi, Dan Boneh, Ilya Mironov, Ananth Raghunathan, Gil Segev
Motivated by the problem of avoiding duplication in storage systems, Bellare, Keelveedhi, and Ristenpart have recently put forward the notion of Message-Locked Encryption (MLE) schemes which subsumes convergent encryption and its variants. Such schemes do not rely on permanent secret keys, but rather encrypt messages using keys derived from the messages themselves. We strengthen the notions of security proposed by Bellare et al. by considering plaintext distributions that may depend on the public parameters of the schemes. We refer to such inputs as lock-dependent messages. We construct two schemes that satisfy our new notions of security for message-locked encryption with lock-dependent messages. Our main construction deviates from the approach of Bellare et al. by avoiding the use of ciphertext components derived deterministically from the messages. We design a fully randomized scheme that supports an equality-testing algorithm defined on the ciphertexts. Our second construction has a deterministic ciphertext component that enables more efficient equality testing. Security for lock-dependent messages still holds under computational assumptions on the message distributions produced by the attacker. In both of our schemes the overhead in the length of the ciphertext is only additive and independent of the message length.
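The baseline notion that MLE subsumes is easy to sketch: convergent encryption derives the key deterministically from the message, K = H(M), so identical plaintexts produce identical ciphertexts and the server can deduplicate by comparing ciphertexts. The keystream construction below (SHA-256 in counter mode) is an illustrative stand-in, not a scheme from the paper, whose point is precisely to go beyond such deterministic components.

```python
import hashlib

# Convergent encryption, the baseline MLE notion: K = H(M), then
# encrypt M under K. Because everything is deterministic, equal
# plaintexts yield equal ciphertexts, which is what enables
# cross-user deduplication. The SHA-256 counter-mode keystream is an
# illustrative stand-in, not a construction from the paper.

def keystream(key, length):
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def convergent_encrypt(message):
    key = hashlib.sha256(message).digest()   # K = H(M)
    ct = bytes(m ^ k for m, k in zip(message, keystream(key, len(message))))
    return key, ct

k1, c1 = convergent_encrypt(b"same file contents")
k2, c2 = convergent_encrypt(b"same file contents")
print(c1 == c2)  # True: equal plaintexts collide, enabling deduplication
```

The same determinism is the scheme's weakness against lock-dependent message distributions, which is why the paper's main construction is fully randomized and supports equality testing on ciphertexts instead.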
Last updated:  2015-05-08
On Concurrently Secure Computation in the Multiple Ideal Query Model
Vipul Goyal, Abhishek Jain
The multiple ideal query (MIQ) model was introduced by Goyal, Jain and Ostrovsky [Crypto'10] as a relaxed notion of security which allows one to construct concurrently secure protocols in the plain model. The main question relevant to the MIQ model is how many queries must be allowed to the ideal-world adversary. The importance of this question stems from the fact that a small bound on the number of ideal queries would enable meaningful security guarantees in many application scenarios, as well as lead to the resolution of long-standing open questions such as fully concurrent password-based key exchange in the plain model. In this work, we continue the study of the MIQ model and prove severe lower bounds on the number of ideal queries per session. Our main results are as follows: 1) There exists a two-party functionality that cannot be securely realized in the MIQ model with only a constant number of ideal queries per session. 2) There exists a two-party functionality that cannot be securely realized in the MIQ model by any constant-round protocol, with any polynomial number of ideal queries per session. Both of these results are unconditional and even rule out protocols proven secure using a non-black-box simulator. We in fact prove a more general theorem which allows for a trade-off between round complexity and the number of ideal queries per session. We obtain our negative results in the following two steps: 1) We first prove our results with respect to black-box simulation, i.e., we only rule out simulators that make black-box use of the adversary. 2) Next, we give a technique to compile our negative results w.r.t. black-box simulation into full impossibility results (ruling out non-black-box simulation as well) in the MIQ model. Interestingly, our compiler uses ideas from the work on obfuscation using tamper-proof hardware, even though our setting does not involve any hardware tokens.
Last updated:  2015-05-08
A Hybrid Approach for Proving Noninterference of Java Programs
Ralf Kuesters, Tomasz Truderung, Bernhard Beckert, Daniel Bruns, Michael Kirsten, Martin Mohr
Several tools and approaches for proving noninterference properties for Java and other languages exist. Some of them have a high degree of automation or are even fully automatic, but overapproximate the actual information flow and hence may produce false positives. Other tools, such as those based on theorem proving, are precise but may need interaction, and hence analysis is time-consuming. In this paper, we propose a \emph{hybrid approach} that aims at obtaining the best of both worlds: we use fully automatic analysis as much as possible, and only at places in a program where, due to overapproximation, the automatic approaches fail do we resort to more precise but interactive analysis, where the latter involves the verification only of specific functional properties in certain parts of the program, rather than checking more intricate noninterference properties for the whole program. To illustrate the hybrid approach, in a case study we use this approach---along with the fully automatic tool Joana for checking noninterference properties of Java programs and the theorem prover KeY for the verification of Java programs---as well as the CVJ framework proposed by Küsters, Truderung, and Graf, to establish cryptographic privacy properties for a non-trivial Java program, namely an e-voting system. The CVJ framework allows one to establish cryptographic indistinguishability properties for Java programs by checking (standard) noninterference properties for such programs.
Last updated:  2015-05-07
A Note on the Unsoundness of vnTinyRAM's SNARK
Bryan Parno
Gennaro, Gentry, Parno, and Raykova (GGPR) introduced Quadratic Arithmetic Programs (QAPs) as a way of representing arithmetic circuits in a form amenable to highly efficient cryptographic protocols (EUROCRYPT 2013), particularly for verifiable computation and succinct non-interactive arguments. Subsequently, Parno, Gentry, Howell, and Raykova introduced an improved cryptographic protocol (and implementation), which they dubbed Pinocchio (IEEE S&P 2013). Ben-Sasson et al. then introduced a lightly modified version of the Pinocchio protocol and implemented it as part of their libsnark distribution. Later work by the same authors employed this protocol, as did a few works by others. Many of these works cite the version of the paper which was published at USENIX Security. However, the protocol does not appear in that peer-reviewed paper; instead, it appears only in a technical report, where it is justified via a lemma that lacks a proof. Unfortunately, the lemma is incorrect, and the modified protocol is unsound: with probability one, an adversary can submit false statements and proofs that the verifier will accept. We demonstrate this theoretically, as well as with concrete examples in which the protocol's implementation in libsnark accepts invalid statements. Fixing this problem requires different performance tradeoffs, indicating that the performance results reported by papers building on this protocol (USENIX Security 2013, CRYPTO 2014, NDSS 2014, EUROCRYPT 2015, IEEE S&P 2014, IEEE S&P 2015) are, to a greater or lesser extent, inaccurate.
Last updated:  2015-05-07
On the Resistance of Prime-variable Rotation Symmetric Boolean Functions against Fast Algebraic Attacks
Yusong Du, Baodian Wei, Fangguo Zhang, Huang Zhang
Boolean functions used in stream ciphers should have many cryptographic properties in order to help resist different kinds of cryptanalytic attacks. The resistance of Boolean functions against fast algebraic attacks is an important cryptographic property. Deciding the resistance of an n-variable Boolean function against fast algebraic attacks requires determining the rank of a square matrix over the finite field GF(2). In this paper, for rotation symmetric Boolean functions in a prime number n of variables, exploiting the properties of partitioned matrices and circulant matrices, we show that this rank can be obtained by determining the rank of a smaller reduced square matrix over GF(2), so that the computational complexity decreases by a factor of n^omega for large n, where omega is approximately 2.38, the exponent of the complexity of computing the rank of a matrix.
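A structural fact that makes circulant matrices special over GF(2) gives the flavor of such reductions: the rank of the n x n circulant whose first row has polynomial f(x) equals n - deg(gcd(f(x), x^n + 1)). The sketch below verifies this on a small example by plain Gaussian elimination; it illustrates why circulant structure cuts the work down, and is not the paper's reduction itself.

```python
# Rank of a GF(2) circulant via polynomial gcd: for the n x n circulant
# with first-row polynomial f(x), rank = n - deg(gcd(f(x), x^n + 1)).
# Polynomials over GF(2) are encoded as integer bitmasks (bit i = x^i).
# This classical fact illustrates why circulant structure reduces rank
# computations; it is not the paper's specific algorithm.

def gf2_rank(rows):
    """Rank of a set of GF(2) row vectors given as bitmasks."""
    rows, rank = list(rows), 0
    while rows:
        row = rows.pop()
        if row == 0:
            continue
        rank += 1
        low = row & -row                      # pivot on the lowest set bit
        rows = [r ^ row if r & low else r for r in rows]
    return rank

def poly_mod(a, b):
    while a and a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

def poly_gcd(a, b):
    while b:
        a, b = b, poly_mod(a, b)
    return a

n, f = 7, 0b1011                              # f(x) = x^3 + x + 1
mask = (1 << n) - 1
rows = [((f << i) | (f >> (n - i))) & mask for i in range(n)]  # cyclic shifts
rank = gf2_rank(rows)
g = poly_gcd(f, (1 << n) | 1)                 # gcd(f, x^7 + 1)
print(rank, n - (g.bit_length() - 1))         # 4 4: both methods agree
```

Here f divides x^7 + 1, so the gcd has degree 3 and the circulant has rank 4; the gcd computation costs far less than eliminating the full matrix, which is the kind of saving circulant structure buys.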
Last updated:  2015-05-07
On the (Fast) Algebraic Immunity of Boolean Power Functions
Yusong Du, Baodian Wei, Fangguo Zhang, Huang Zhang
The (fast) algebraic immunity, including (standard) algebraic immunity and the resistance against fast algebraic attacks, has been considered an important cryptographic property for Boolean functions used in stream ciphers. This paper is on the determination of the (fast) algebraic immunity of a special class of Boolean functions, called Boolean power functions. An n-variable Boolean power function f can be represented as a monomial trace function over the finite field GF(2^n). To determine the (fast) algebraic immunity of Boolean power functions one may need arithmetic in GF(2^n), which may not be computationally efficient compared with operations over GF(2). We provide two sufficient conditions for Boolean power functions under which their immunities can be determined only by computations in GF(2). We show that Niho functions and a number of Kasami functions in an odd number of variables satisfy these conditions.
Last updated:  2015-05-06
Dickson Polynomials that are Involutions
Pascale Charpin, Sihem Mesnager, Sumanta Sarkar
Dickson polynomials which are permutations are interesting combinatorial objects and are well studied. In this paper, we describe Dickson polynomials of the first kind in $\mathbb{F}_2[x]$ that are involutions over finite fields of characteristic $2$. This description is obtained using tools from modular arithmetic. We give results related to the cardinality and the number of fixed points (relevant in the context of cryptographic applications) of this class. We also present a class of Dickson involutions with high degree.
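In characteristic 2 with parameter $a=1$, Dickson polynomials of the first kind follow the recurrence $D_0 = 2 = 0$, $D_1 = x$, $D_k = x\,D_{k-1} + D_{k-2}$, and the classical criterion says $D_k$ permutes $\mathrm{GF}(q)$ iff $\gcd(k, q^2-1)=1$. The sketch below brute-forces this over GF(8) (modulus $x^3+x+1$, our choice) and checks involutions by composing; the paper's high-degree involutions live in larger fields, so this is only a toy illustration of the definitions.

```python
from math import gcd

# Dickson polynomials D_k(x, 1) in characteristic 2, evaluated
# pointwise over GF(8) with modulus x^3 + x + 1 (our own choice).
# We brute-force the classical criterion that D_k permutes GF(q) iff
# gcd(k, q^2 - 1) = 1, and test involutions by composition. Toy
# illustration only; not the paper's high-degree involutions.

MOD, Q = 0b1011, 8

def gmul(a, b):
    """Multiply in GF(8)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b1000:
            a ^= MOD
        b >>= 1
    return r

def dickson(k, x):
    """D_k(x, 1) via D_0 = 0 (since 2 = 0), D_1 = x,
    D_k = x*D_{k-1} + D_{k-2}."""
    if k == 0:
        return 0
    d0, d1 = 0, x
    for _ in range(k - 1):
        d0, d1 = d1, gmul(x, d1) ^ d0
    return d1

for k in range(1, 16):
    image = [dickson(k, x) for x in range(Q)]
    is_perm = len(set(image)) == Q
    assert is_perm == (gcd(k, Q * Q - 1) == 1)   # Dickson's criterion

# An involution is a D_k with D_k(D_k(x)) = x for all x (k = 1 trivially).
assert all(dickson(1, dickson(1, x)) == x for x in range(Q))
print("criterion verified over GF(8)")
```

For example, $D_3 = x^3 + x$ maps both 0 and 1 to 0 over GF(8), matching $\gcd(3,63)=3\neq1$, while $D_5 = x^5+x^3+x$ is a permutation since $\gcd(5,63)=1$.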
Last updated:  2015-05-07
A New Classification of 4-bit Optimal S-boxes and its Application to PRESENT, RECTANGLE and SPONGENT
Wentao Zhang, Zhenzhen Bao, Vincent Rijmen, Meicheng Liu
In this paper, we present a new classification of 4-bit optimal S-boxes. All optimal 4-bit S-boxes can be classified into 183 different categories, among which we specify 3 platinum categories. Under the design criteria of the PRESENT (or SPONGENT) S-box, there are 8064 different S-boxes up to adding constants before and after an S-box. The 8064 S-boxes belong to 3 different categories, we show that the S-box should be chosen from one out of the 3 categories or other categories for better resistance against linear cryptanalysis. Furthermore, we study in detail how the S-boxes in the 3 platinum categories influence the security of PRESENT, RECTANGLE and SPONGENT88 against differential and linear cryptanalysis. Our results show that the S-box selection has a great influence on the security of the schemes. For block ciphers or hash functions with 4-bit S-boxes as confusion layers and bit permutations as diffusion layers, designers can extend the range of S-box selection to the 3 platinum categories and select their S-box very carefully. For PRESENT, RECTANGLE and SPONGENT88 respectively, we get a set of potentially best/better S-box candidates from the 3 platinum categories. These potentially best/better S-boxes can be further investigated to see if they can be used to improve the security-performance tradeoff of the 3 cryptographic algorithms.
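The "optimal" criteria for 4-bit S-boxes are directly checkable: differential uniformity 4 and maximal absolute linear bias 4 (out of 8). The sketch below computes both for the PRESENT S-box via its difference distribution and linear approximation tables; it is a sanity check of the definitions, not the paper's classification algorithm.

```python
# DDT/LAT sanity check of the 4-bit optimality criteria on the PRESENT
# S-box: differential uniformity 4 and maximal absolute linear bias 4.
# Straightforward exhaustive computation; not the paper's
# classification method.

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # PRESENT S-box

def diff_uniformity(s):
    """Max DDT entry over nonzero input differences."""
    return max(sum(1 for x in range(16) if s[x] ^ s[x ^ a] == b)
               for a in range(1, 16) for b in range(16))

def parity(v):
    return bin(v).count("1") % 2

def max_linear_bias(s):
    """Max |LAT entry| = max |#{x : a.x = b.S(x)} - 8| over b != 0."""
    return max(abs(sum((-1) ** (parity(a & x) ^ parity(b & s[x]))
                       for x in range(16)) // 2)
               for a in range(16) for b in range(1, 16))

print(diff_uniformity(SBOX), max_linear_bias(SBOX))  # 4 4: optimal
```

Any candidate S-box drawn from the platinum categories would have to pass exactly this pair of checks before the cipher-specific differential and linear trail analysis the paper carries out.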
Last updated:  2015-07-31
Non-Repudiable Proofs of Storage in Cloud
Hongyuan Wang, Liehuang Zhu, Yijia Lilong, Chang Xu
With the widespread use of cloud computing and cloud storage, ensuring the authenticity of data in remote storage has become a severe problem. Provable data possession (PDP) and proof of retrievability (POR) are techniques for a client to verify whether an untrusted server possesses the original data in its entirety, and many PDP and POR schemes have been proposed to resolve this issue. However, driven by profit, a malicious client may falsely accuse an honest server and repudiate a correct verification in many circumstances. In this paper, we give a method to transform any privately verifiable PDP/POR scheme into a non-repudiable PDP/POR scheme. We then propose an instantiation, the non-repudiable PDP (NRPDP) scheme with private verification, which can provide mutual proof to protect both the client and the server. Based on homomorphic verifiable tags and a commitment function, our solution allows the client and the server each to prove itself and to verify the other side. To achieve better performance, we improve the NRPDP scheme to obtain an E-NRPDP scheme, which saves server storage and reduces I/O costs to raise efficiency. We prove the security of the NRPDP scheme in the random oracle model and implement a prototype based on it. We then use datasets of up to 10 GB to evaluate the performance on a realistic cloud platform. The experiments show that our scheme can be executed as efficiently as the original PDP/POR scheme while guaranteeing non-repudiation effectively. Our method is universal and practical: it can be applied to any private PDP/POR scheme to guarantee non-repudiation.
Last updated:  2015-07-13
Conversions among Several Classes of Predicate Encryption and Applications to ABE with Various Compactness Tradeoffs
Nuttapong Attrapadung, Goichiro Hanaoka, Shota Yamada
Predicate encryption is an advanced form of public-key encryption that yields high flexibility in terms of access control. In the literature, many predicate encryption schemes have been proposed such as fuzzy-IBE, KP-ABE, CP-ABE, (doubly) spatial encryption (DSE), and ABE for arithmetic span programs. In this paper, we study relations among them and show that some of them are in fact equivalent by giving conversions among them. More specifically, our main contributions are as follows: - We show that monotonic, small universe KP-ABE (CP-ABE) with bounds on the size of attribute sets and span programs (or linear secret sharing matrix) can be converted into DSE. Furthermore, we show that DSE implies non-monotonic CP-ABE (and KP-ABE) with the same bounds on parameters. This implies that monotonic/non-monotonic KP/CP-ABE (with the bounds) and DSE are all equivalent in the sense that one implies another. - We also show that if we start from KP-ABE without bounds on the size of span programs (but bounds on the size of attribute sets), we can obtain ABE for arithmetic span programs. The other direction is also shown: ABE for arithmetic span programs can be converted into KP-ABE. These results imply, somewhat surprisingly, that KP-ABE without bounds on span program sizes is in fact equivalent to ABE for arithmetic span programs, which was thought to be more expressive or at least incomparable. By applying these conversions to existing schemes, we obtain many non-trivial consequences. We obtain the first non-monotonic, large universe CP-ABE (that supports span programs) with constant-size ciphertexts, the first KP-ABE with constant-size private keys, the first (adaptively-secure, multi-use) ABE for arithmetic span programs with constant-size ciphertexts, and more. We also obtain the first attribute-based signature scheme that supports non-monotone span programs and achieves constant-size signatures via our techniques.
Last updated:  2015-05-06
Fast and Tradeoff-Resilient Memory-Hard Functions for Cryptocurrencies and Password Hashing
Alex Biryukov, Daniel Dinu, Dmitry Khovratovich
Memory-hard functions are becoming an important tool in the design of password hashing schemes, cryptocurrencies, and more generic proof-of-work primitives that are x86-oriented and cannot be computed more efficiently on dedicated hardware. We develop a simple and cryptographically secure approach to the design of such functions and show how to exploit the architecture of modern CPUs and memory chips to make schemes that are faster and more secure than existing alternatives such as scrypt. We also propose cryptographic criteria for the components that prevent cost reductions using time-memory tradeoffs and side-channel leaks. The concrete proof-of-work instantiation, which we call Argon2, can fill gigabytes of RAM within a second, is resilient to various tradeoffs, and is suitable for a wide range of applications that aim to bind a computation to a certain architecture. Concerning potential DoS attacks, our scheme is lightweight enough to shift the bottleneck from the CPU to the memory bus, thus leaving sufficient computing power for other tasks. We also propose parameters for which our scheme is botnet resistant. As an application, we suggest a cryptocurrency design with fast and memory-hard proof-of-work, which allows memoryless verification.
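The core idea of a memory-hard fill can be sketched in a few lines: each block is derived from its predecessor and from a data-dependently indexed earlier block, so that evaluating the function cheaply requires keeping the whole buffer in memory. The sketch below is a toy in that spirit only; the hash, block count, and indexing rule are assumptions for illustration and are not Argon2's actual compression function or indexing scheme:

```python
import hashlib

def toy_memory_hard(seed: bytes, n_blocks: int, passes: int = 2) -> bytes:
    """Toy data-dependent memory fill (illustration only, NOT Argon2)."""
    H = lambda data: hashlib.blake2b(data).digest()
    mem = [H(seed + i.to_bytes(4, "big")) for i in range(2)]
    for i in range(2, n_blocks):
        # data-dependent reference: a tradeoff attacker cannot predict
        # which earlier block is needed, so the buffer must stay resident
        ref = int.from_bytes(mem[i - 1][:8], "big") % i
        mem.append(H(mem[i - 1] + mem[ref]))
    for _ in range(passes - 1):
        for i in range(n_blocks):
            ref = int.from_bytes(mem[i - 1][:8], "big") % n_blocks
            mem[i] = H(mem[i - 1] + mem[ref])
    return H(b"".join(mem[-2:]))   # final tag over the last blocks

tag = toy_memory_hard(b"password||salt", n_blocks=1024)
```

Real designs like Argon2 replace the hash with a wide compression function tuned to CPU and memory-bus bandwidth, which is where the "fill gigabytes within a second" figure comes from.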
Last updated:  2015-05-06
Cryptanalysis of Round-Reduced LED
Ivica Nikolić, Lei Wang, Shuang Wu
In this paper we present known-plaintext single-key and chosen-key attacks on round-reduced LED-64 and LED-128. We show that with an application of the recently proposed slidex attacks, one immediately improves the complexity of the previous single-key 4-step attack on LED-128. Further, we explore the possibility of multicollisions and show single-key attacks on 6 steps of LED-128. A generalization of our multicollision attack leads to the statement that no 6-round cipher with two subkeys that alternate, or 2-round cipher with linearly dependent subkeys, is secure in the single-key model. Next, we exploit the possibility of finding pairs of inputs that follow a certain differential rather than a differential characteristic, and obtain chosen-key differential distinguishers for 5-step LED-64, as well as 8-step and 9-step LED-128. We provide examples of inputs that follow the 8-step differential, i.e. we are able to practically confirm our results on 2/3 of the steps of LED-128. We introduce a new type of chosen-key differential distinguisher, called random-difference distinguisher, and successfully penetrate 10 of the total 12 steps of LED-128. We show that this type of attack is generic in the chosen-key model, and can be applied to any 10-round cipher with two alternating subkeys.
Last updated:  2015-06-18
Dumb Crypto in Smart Grids: Practical Cryptanalysis of the Open Smart Grid Protocol
Philipp Jovanovic, Samuel Neves
This paper analyses the cryptography used in the Open Smart Grid Protocol (OSGP). The authenticated encryption (AE) scheme deployed by OSGP is a non-standard composition of RC4 and a home-brewed MAC, the ``OMA digest''. We present several practical key-recovery attacks against the OMA digest. The first and basic variant can achieve this with a mere $13$ queries to an OMA digest oracle and negligible time complexity. A more sophisticated version breaks the OMA digest with only $4$ queries and a time complexity of about $2^{25}$ simple operations. A different approach only requires one arbitrary valid plaintext-tag pair, and recovers the key in an average of $144$ \emph{message verification} queries, or one ciphertext-tag pair and $168$ \emph{ciphertext verification} queries. Since the encryption key is derived from the key used by the OMA digest, our attacks break both confidentiality and authenticity of OSGP.
Last updated:  2015-05-05
A High Reliability PUF Using Hot Carrier Injection Based Response Reinforcement
Mudit Bhargava, Ken Mai
Achieving high reliability across environmental variations and over aging in physical unclonable functions (PUFs) remains a challenge for PUF designers. The conventional method to improve PUF reliability is to use powerful error correction codes (ECC) to correct the errors in the raw response from the PUF core. Unfortunately, these ECC blocks generally have high VLSI overheads, which scale up quickly with the error correction capability. Alternately, researchers have proposed techniques to increase the reliability of the PUF core, and thus significantly reduce the required strength (and complexity) of the ECC. One method of increasing the reliability of the PUF core is to use normally detrimental IC aging effects to reinforce the desired (or "golden") response of the PUF by altering the PUF circuit characteristics permanently and hence making the PUF more reliable. In this work, we present a PUF response reinforcement technique based on hot carrier injection (HCI) which can reinforce the PUF golden response in short stress times (i.e., tens of seconds), without impacting the surrounding circuits, and that has high permanence (i.e., does not degrade significantly over aging). We present a self-contained HCI-reinforcement-enabled PUF circuit based on sense amplifiers (SA) which autonomously self-reinforces with minimal external intervention. We have fabricated a custom ASIC testchip in 65nm bulk CMOS with the proposed PUF design. Measured results show high reliability across environmental variations and accelerated aging, as well as good uniqueness and randomness. For example, 1600 SA elements, after being HCI stressed for 125s, show 100% reliability (zero errors) across ~20% voltage variations and a temperature range of -20C to 85C.
Last updated:  2015-05-05
Complementing Feistel Ciphers
Alex Biryukov, Ivica Nikolic
In this paper, we propose related-key differential distinguishers based on the complementation property of Feistel ciphers. We show that with relaxed requirements on the complementation, i.e. the property does not have to hold for all keys and the complementation does not have to be on all bits, one can obtain a variety of distinguishers. We formulate criteria sufficient for attacks based on the complementation property. To stress the importance of our findings we provide analysis of the \textit{full-round} primitives: * For the hash mode of Camellia-128 without $FL,FL^{-1}$ layers, differential multicollisions with $2^{112}$ time * For GOST, practical recovery of the full key with 31 related keys and $2^{38}$ time/data
Last updated:  2015-05-05
Smaller Keys for Code-Based Cryptography: QC-MDPC McEliece Implementations on Embedded Devices
Stefan Heyse, Ingo von Maurich, Tim Güneysu
In the last years code-based cryptosystems were established as promising alternatives for asymmetric cryptography since they base their security on well-known NP-hard problems and still show decent performance on a wide range of computing platforms. The main drawback of code-based schemes, including the popular proposals by McEliece and Niederreiter, are the large keys whose size is inherently determined by the underlying code. In a very recent approach, Misoczki et al. proposed to use quasi-cyclic MDPC (QC-MDPC) codes that allow for a very compact key representation. In this work, we investigate novel implementations of the McEliece scheme using such QC-MDPC codes tailored for embedded devices, namely a Xilinx Virtex-6 FPGA and an 8-bit AVR microcontroller. In particular, we evaluate and improve different approaches to decode QC-MDPC codes. Besides competitive performance for encryption and decryption on the FPGA, we achieved a very compact implementation on the microcontroller using only 4,800 and 9,600 bits for the public and secret key at 80 bits of equivalent symmetric security.
Last updated:  2015-05-05
FIDES: Lightweight Authenticated Cipher with Side-Channel Resistance for Constrained Hardware
Begül Bilgin, Andrey Bogdanov, Miroslav Knežević, Florian Mendel, Qingju Wang
In this paper, we present a novel lightweight authenticated cipher optimized for hardware implementations called FIDES. It is an online nonce-based authenticated encryption scheme with authenticated data whose area requirements are as low as 793 GE and 1001 GE for 80-bit and 96-bit security, respectively. This is at least two times smaller than its closest competitors Hummingbird-2 and Grain-128a. While being extremely compact, FIDES is both throughput and latency efficient, even in its most serial implementations. This is attained by our novel sponge-like design approach. Moreover, cryptographically optimal 5-bit and 6-bit S-boxes are used as basic nonlinear components while paying a special attention on the simplicity of providing first order side-channel resistance with threshold implementation.
Last updated:  2015-05-12
On the Implementation of Unified Arithmetic on Binary Huff Curves
Santosh Ghosh, Amit Kumar, Amitabh Das, Ingrid Verbauwhede
Unified formulae for computing elliptic curve point addition and doubling are considered to be resistant against simple power-analysis (SPA) attacks. A new elliptic curve formula known as the unified binary Huff curve appeared in the literature in 2011. This paper is devoted to analyzing the applicability of this elliptic curve in practice. Our paper has two contributions. First, we provide an efficient implementation of the unified Huff formula in projective coordinates on FPGA. Secondly, we point out its side-channel vulnerability and show the results of an actual attack. It is claimed that the formula is unified and that there will be no power consumption difference, observable with simple power analysis, when computing point addition and point doubling operations. In this paper, we contradict this claim by showing actual SPA results on an FPGA platform, and we propose a modified arithmetic and a suitable implementation technique to overcome the vulnerability.
Last updated:  2015-05-10
Efficient Server-Aided Secure Two-Party Function Evaluation with Applications to Genomic Computation
Marina Blanton, Fattaneh Bayatbabolghani
Computation based on genomic data is becoming increasingly popular today, be it for medical or other purposes such as ancestry or paternity testing. Non-medical uses of genomic data in a computation often take place in a server-mediated setting where the server offers the ability for joint genomic testing between the users. Undeniably, genomic data is highly sensitive, and in contrast to other types of biometric data it discloses a plethora of information not only about the data owner, but also about his or her relatives. Thus, there is an urgent need to protect genomic data, especially when it is used in computation for what we call recreational, non-health-related purposes. Towards this goal, in this work we put forward a framework for server-aided secure two-party computation with the security model motivated by genomic applications. One particular security setting that we treat in this work provides stronger security guarantees with respect to malicious users than the traditional malicious model. In particular, we incorporate certified inputs into secure computation based on garbled circuit evaluation to guarantee that a malicious user is unable to modify her inputs in order to learn unauthorized information about the other user's data. Our solutions are general in the sense that they can be used to securely evaluate arbitrary functions and offer attractive performance compared to the state of the art. We apply the general constructions to three specific types of genomic tests: paternity, genetic compatibility, and ancestry testing, and implement the constructions. The results show that all such private tests can be executed within a matter of seconds or less despite the large size of one's genomic data.
Last updated:  2015-05-05
VLSI Implementation of Double-Base Scalar Multiplication on a Twisted Edwards Curve with an Efficiently Computable Endomorphism
Zhe Liu, Husen Wang, Johann Großschädl, Zhi Hu, Ingrid Verbauwhede
The verification of an ECDSA signature requires a double-base scalar multiplication, an operation of the form $k \cdot G + l \cdot Q$ where $G$ is a generator of a large elliptic curve group of prime order $n$, $Q$ is an arbitrary element of said group, and $k$, $l$ are two integers in the range of $[1, n-1]$. We introduce in this paper an area-optimized VLSI design of a Prime-Field Arithmetic Unit (PFAU) that can serve as a loosely-coupled or tightly-coupled hardware accelerator in a system-on-chip to speed up the execution of double-base scalar multiplication. Our design is optimized for twisted Edwards curves with an efficiently computable endomorphism that allows one to reduce the number of point doublings by some 50% compared to a conventional implementation. An example for such a special curve is $-x^2 + y^2 = 1 + x^2y^2$ over the 207-bit prime field $F_p$ with $p = 2^{207} - 5131$. The PFAU prototype we describe in this paper features a ($16 \times 16$)-bit multiplier and has an overall silicon area of 5821 gates when synthesized with a $0.13\,\mu$m standard-cell library. It can be clocked with a frequency of up to 50 MHz and is capable of performing a constant-time multiplication in the mentioned 207-bit prime field in only 198 clock cycles. A complete double-base scalar multiplication has an execution time of some 365k cycles and requires the pre-computation of 15 points. Our design supports many trade-offs between performance and RAM requirements, which is a highly desirable property for future Internet-of-Things (IoT) applications.
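The trick that makes $k \cdot G + l \cdot Q$ cheaper than two separate scalar multiplications is Straus-Shamir joint processing: one shared doubling chain per bit, with $G$, $Q$, or the precomputed $G+Q$ added depending on the bit pair. The sketch below illustrates only that joint-doubling idea on a toy short-Weierstrass curve with made-up parameters; the paper's design uses a twisted Edwards curve with an endomorphism and dedicated hardware, neither of which is modeled here:

```python
# Toy double-base scalar multiplication k*G + l*Q via Straus-Shamir.
p, a, b = 97, 2, 3           # y^2 = x^3 + 2x + 3 over F_97 (toy parameters)
O = None                     # point at infinity

def add(P, Q):
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):
    R = O
    for bit in bin(k)[2:]:
        R = add(R, R)
        if bit == "1":
            R = add(R, P)
    return R

def shamir(k, G, l, Q):
    GQ = add(G, Q)           # precomputed once
    R = O
    for i in range(max(k.bit_length(), l.bit_length()) - 1, -1, -1):
        R = add(R, R)        # one shared doubling per bit
        kb, lb = (k >> i) & 1, (l >> i) & 1
        if kb and lb:   R = add(R, GQ)
        elif kb:        R = add(R, G)
        elif lb:        R = add(R, Q)
    return R

G = (3, 6)                   # on the curve: 6^2 = 36 = 27 + 6 + 3 mod 97
Q = mul(5, G)
k, l = 41, 27
print(shamir(k, G, l, Q) == add(mul(k, G), mul(l, Q)))
```

Counting operations makes the saving visible: two separate double-and-add chains cost roughly $2t$ doublings for $t$-bit scalars, while the joint chain costs $t$, which is the ~50% doubling reduction the abstract refers to (further improved there by the endomorphism).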
Last updated:  2015-05-05
What Information is Leaked under Concurrent Composition?
Vipul Goyal, Divya Gupta, Abhishek Jain
Achieving security under concurrent composition is notoriously hard. Indeed, in the plain model, far reaching impossibility results for concurrently secure computation are known. On the other hand, some positive results have also been obtained according to various weaker notions of security (such as by using a super-polynomial time simulator). This suggests that somehow, ``not all is lost in the concurrent setting." In this work, we ask what, and exactly how much, private information the adversary can learn by launching a concurrent attack. ``Can he learn all the private inputs in all the sessions? Or, can we preserve the security of some (or even most) of the sessions fully while compromising (a small fraction of) other sessions? Or is it the case that the security of all (or most) sessions is (at least partially) compromised? If so, can we restrict him to learn an arbitrarily small fraction of input in each session?" We believe the above questions to be fundamental to the understanding of concurrent composition. Indeed, despite a large body of work on the study of concurrent composition, in our opinion, the understanding of what exactly goes wrong in the concurrent setting and to what extent is currently quite unsatisfactory. Towards that end, we adopt the knowledge-complexity based approach of Goldreich and Petrank [STOC'91] to quantify information leakage in concurrently secure computation. We consider a model where the ideal world adversary (a.k.a simulator) is allowed to query the trusted party for some ``leakage'' on the honest party inputs. We obtain both positive and negative results, depending upon the nature of the leakage queries available to the simulator. Informally speaking, our results imply the following: in the concurrent setting, ``significant loss" of security (translating to high leakage in the ideal world) in some of the sessions is unavoidable if one wishes to obtain a general result. 
However, on the brighter side, one can make the fraction of such sessions an arbitrarily small polynomial (while fully preserving the security in all other sessions). Our results also have an implication on secure computation in the bounded concurrent setting [Barak-FOCS'01]: we show that there exist protocols which are secure as per the standard ideal/real world notion in the bounded concurrent setting. However, if the actual number of sessions happens to exceed the bound, there is a graceful degradation of security as the number of sessions increases. (In contrast, prior results do not provide any security once the bound is exceeded.) In order to obtain our positive result, we model concurrent extraction as the classical set-covering problem and develop, as our main technical contribution, a new sparse rewinding strategy. Specifically, unlike previous rewinding strategies which are very ``dense'', we rewind ``small intervals'' of the execution transcript and still guarantee extraction. This yields other applications as well, including improved constructions of precise concurrent zero-knowledge [Pandey et al.-Eurocrypt'08] and concurrently secure computation in the multiple ideal query model [Goyal et al.-Crypto'10]. In order to obtain our negative results, interestingly, we employ techniques from the regime of leakage-resilient cryptography [Dziembowski-Pietrzak-FOCS'08].
Last updated:  2015-05-05
Non-invasive Spoofing Attacks for Anti-lock Braking Systems
Yasser Shoukry, Paul Martin, Paulo Tabuada, Mani B. Srivastava
This work exposes a largely unexplored vector of physical-layer attacks with demonstrated consequences in automobiles. By modifying the physical environment around analog sensors such as Antilock Braking Systems (ABS), we exploit weaknesses in wheel speed sensors so that a malicious attacker can inject arbitrary measurements to the ABS computer which in turn can cause life-threatening situations. In this paper, we describe the development of a prototype ABS spoofer to enable such attacks and the potential consequences of remaining vulnerable to these attacks. The class of sensors sensitive to these attacks depends on the physics of the sensors themselves. ABS relies on magnetic-based wheel speed sensors which are exposed to an external attacker from underneath the body of a vehicle. By placing a thin electromagnetic actuator near the ABS wheel speed sensors, we demonstrate one way in which an attacker can inject magnetic fields to both cancel the true measured signal and inject a malicious signal, thus spoofing the measured wheel speeds. The mounted attack is of a non-invasive nature, requiring no tampering with ABS hardware and making it harder for failure and/or intrusion detection mechanisms to detect the existence of such an attack. This development explores two types of attacks: a disruptive, naive attack aimed to corrupt the measured wheel speed by overwhelming the original signal and a more advanced spoofing attack, designed to inject a counter-signal such that the braking system mistakenly reports a specific velocity. We evaluate the proposed ABS spoofer module using industrial ABS sensors and wheel speed decoders, concluding by outlining the implementation and lifetime considerations of an ABS spoofer with real hardware.
Last updated:  2015-05-05
Optimized Interpolation Attacks on LowMC
Itai Dinur, Yunwen Liu, Willi Meier, Qingju Wang
LowMC is a collection of block cipher families introduced at Eurocrypt 2015 by Albrecht et al. Its design is optimized for instantiations of multi-party computation, fully homomorphic encryption, and zero-knowledge proofs. A unique feature of LowMC is that its internal affine layers are chosen at random, and thus each block cipher family contains a huge number of instances. The Eurocrypt paper proposed two specific block cipher families of LowMC, having 80-bit and 128-bit keys. In this paper, we mount interpolation attacks (algebraic attacks introduced by Jakobsen and Knudsen) on LowMC, and show that a practically significant fraction of $2^{-38}$ of its 80-bit key instances could be broken $2^{23}$ times faster than exhaustive search. Moreover, essentially all instances that are claimed to provide 128-bit security could be broken about $1000$ times faster. In order to obtain these results, we had to develop novel techniques and optimize the original interpolation attack in new ways. While some of our new techniques exploit specific internal properties of LowMC, others are more generic and could be applied, in principle, to any block cipher.
Last updated:  2015-05-05
Order-Revealing Encryption and the Hardness of Private Learning
Mark Bun, Mark Zhandry
An order-revealing encryption scheme gives a public procedure by which two ciphertexts can be compared to reveal the ordering of their underlying plaintexts. We show how to use order-revealing encryption to separate computationally efficient PAC learning from efficient $(\epsilon, \delta)$-differentially private PAC learning. That is, we construct a concept class that is efficiently PAC learnable, but for which every efficient learner fails to be differentially private. This answers a question of Kasiviswanathan et al. (FOCS '08, SIAM J. Comput. '11). To prove our result, we give a generic transformation from an order-revealing encryption scheme into one with strongly correct comparison, which enables the consistent comparison of ciphertexts that are not obtained as the valid encryption of any message. We believe this construction may be of independent interest.
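The public comparison interface that order-revealing encryption provides can be made concrete with a toy scheme in the style of Chenette, Lewi, Shi, and Wu (FSE 2016). This is an assumption-laden illustration of the ORE interface only, not the construction or the transformation of this paper; the bit length and PRF choice below are arbitrary:

```python
import hmac, hashlib

# Toy CLWW-style ORE: ciphertext component i encodes bit i, masked by a
# PRF of the preceding bit prefix, so only the order of first difference
# leaks through the public comparison procedure.
NBITS = 16

def F(key, prefix):
    # PRF into Z_3, keyed by the secret key, applied to a bit prefix
    return hmac.new(key, prefix.encode(), hashlib.sha256).digest()[0] % 3

def encrypt(key, x):
    bits = format(x, f"0{NBITS}b")
    return [(F(key, bits[:i]) + int(bits[i])) % 3 for i in range(NBITS)]

def compare(ct_x, ct_y):
    # public procedure: returns -1, 0 or 1 without any secret key
    for u, v in zip(ct_x, ct_y):
        if u != v:
            return -1 if v == (u + 1) % 3 else 1
    return 0

key = b"demo-key"
print(compare(encrypt(key, 3), encrypt(key, 200)))
```

Correctness follows from the first differing bit position: the prefixes agree up to it, so the PRF masks agree, and the two masked bits differ by exactly 1 mod 3 in the direction of the plaintext order. The "strongly correct comparison" property the abstract introduces additionally pins down comparison behavior on ciphertexts that are not valid encryptions, which this toy makes no attempt to handle.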
Last updated:  2015-08-08
HETest: A Homomorphic Encryption Testing Framework
Mayank Varia, Sophia Yakoubov, Yang Yang
In this work, we present a generic open-source software framework that can evaluate the correctness and performance of homomorphic encryption software. Our framework, called HEtest, automates the entire process of a test: generation of data for testing (such as circuits and inputs), execution of a test, comparison of performance to an insecure baseline, statistical analysis of the test results, and production of a LaTeX report. To illustrate the capability of our framework, we present a case study of our analysis of the open-source HElib homomorphic encryption software. We stress though that HEtest is written in a modular fashion, so it can easily be adapted to test any homomorphic encryption software.
Last updated:  2015-06-30
STRIBOB / WHIRLBOB Security Analysis Addendum
Markku-Juhani O. Saarinen
This memo collects references to published cryptanalytic results which are directly relevant to the security evaluation of CAESAR first round algorithm STRIBOB and its second round tweaked variant, WHIRLBOB. During the first year after initial publication of STRIBOB and WHIRLBOB, no cryptanalytic breaks or other serious issues have emerged. The main difference in the security between the two variants is that WHIRLBOB allows easier creation of constant-time software implementations resistant to cache timing attacks.
Last updated:  2015-05-27
On the Optimality of Non-Linear Computations of Length-Preserving Encryption Schemes
Mridul Nandi
It is well known that three and four rounds of a balanced Feistel cipher, or Luby-Rackoff (LR) encryption, for two-block messages are a pseudorandom permutation (PRP) and a strong pseudorandom permutation (SPRP), respectively. A {\bf block} is $n$ bits long for some positive integer $n$, and a (possibly keyed) {\bf block-function} is a nonlinear function mapping all blocks to themselves, e.g. a blockcipher. XLS (eXtended Latin Square) with three blockcipher calls was claimed to be an SPRP, a claim that was later shown to be wrong. Motivated by these observations, we consider the following question in this paper: {\em What is the minimum number of invocations of block-functions required to achieve PRP or SPRP security over $\ell$-block inputs}? To answer this question, we consider all those length-preserving encryption schemes, called {\bf linear encryption modes}, for which the only nonlinear operations are block-functions. Here, we prove the following results for these encryption schemes: (1) At least $2\ell$ (or $2\ell-1$) invocations of block-functions are required to achieve SPRP (or PRP, respectively). These bounds are also tight. (2) To achieve the above bound for PRP over $\ell > 1$ blocks, either we need at least two keys, or the scheme cannot be {\em inverse-free} (i.e., it needs to apply the inverses of block-functions in decryption). In particular, we show that a single-keyed, block-function-based, inverse-free PRP needs $2\ell$ invocations. (3) We show that 3-round LR using a single-keyed pseudorandom function (PRF) is a PRP if we xor a block of the input with a masking key.
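The 3-round Luby-Rackoff construction the abstract builds on is small enough to sketch directly. Below is a minimal two-block Feistel network with HMAC-SHA256 standing in for the round PRFs; the key-derivation-by-round-index convention is an assumption for illustration, not part of the paper's constructions:

```python
import hmac, hashlib

# Minimal 3-round balanced Feistel (Luby-Rackoff) over two 16-byte blocks.
BLOCK = 16

def F(key, r, x):
    # round function: a PRF keyed by (key, round index) -- illustrative only
    return hmac.new(key + bytes([r]), x, hashlib.sha256).digest()[:BLOCK]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(key, msg):
    L, R = msg[:BLOCK], msg[BLOCK:]
    for r in range(3):
        # one Feistel round: (L, R) -> (R, L xor F_r(R))
        L, R = R, xor(L, F(key, r, R))
    return L + R

def decrypt(key, ct):
    L, R = ct[:BLOCK], ct[BLOCK:]
    for r in reversed(range(3)):
        # invert one round: given (R, L xor F_r(R)), recover (L, R)
        L, R = xor(R, F(key, r, L)), L
    return L + R

key = b"demo-key"
m = bytes(range(2 * BLOCK))
print(decrypt(key, encrypt(key, m)) == m)
```

Counting PRF calls here matches the paper's bounds for $\ell = 2$: three calls suffice for PRP (one short of $2\ell$), and the construction is not inverse-free-single-keyed, consistent with result (2).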
Last updated:  2017-03-08
A Study of Pair Encodings: Predicate Encryption in Prime Order Groups
Shashank Agrawal, Melissa Chase
Pair encodings and predicate encodings, recently introduced by Attrapadung (Eurocrypt 2014) and Wee (TCC 2014) respectively, greatly simplify the process of designing and analyzing predicate and attribute-based encryption schemes. However, they are still somewhat limited in that they are restricted to composite order groups, and the information theoretic properties are not sufficient to argue about many of the schemes. Here we focus on pair encodings, as the more general of the two. We first study the structure of these objects, then propose a new relaxed but still information theoretic security property. Next we show a generic construction for predicate encryption in prime order groups from our new property; it results in either semi-adaptive or full security depending on the encoding, and gives security under SXDH or DLIN. Finally, we demonstrate the range of our new property by using it to design the first semi-adaptively secure CP-ABE scheme with constant size ciphertexts.
Last updated:  2018-01-07
The Birth of Cryptographic Obfuscation -- A Survey
Máté Horváth, Levente Buttyán
The first candidate indistinguishability obfuscator (iO) of Garg et al. (FOCS 2013) changed the previously pessimistic attitude towards general-purpose cryptographic obfuscation. The potential realizability of such a powerful tool motivated a plethora of applications, including solutions for long-standing open problems, from almost all areas of cryptography. At the same time, the question of whether iO is realizable under standard assumptions is still open. In this work, we review the rapid development of candidate constructions and organize the results of the first four years since the breakthrough. Our goal is to give a bird's-eye view of the infancy of cryptographic obfuscation, providing insight into the most important ideas and techniques.
Last updated:  2015-05-01
Side-Channel Analysis of MAC-Keccak Hardware Implementations
Pei Luo, Yunsi Fei, Xin Fang, A. Adam Ding, David R. Kaeli, Miriam Leeser
As Keccak has been selected as the new SHA-3 standard, Message Authentication Code (MAC) (MAC-Keccak) using a secret key will be widely used for integrity checking and authenticity assurance. Recent works have shown the feasibility of side-channel attacks against software implementations of MAC-Keccak to retrieve the key, with the security assessment of hardware implementations remaining an open problem. In this paper, we present a comprehensive and practical side-channel analysis of a hardware implementation of MAC-Keccak on FPGA. Different from previous works, we propose a new attack method targeting the first round output of MAC-Keccak rather than the linear operation $\theta$ only. The results on sampled power traces show that the unprotected hardware implementation of MAC-Keccak is vulnerable to side-channel attacks, and attacking the nonlinear operation of MAC-Keccak is very effective. We further discuss countermeasures against side-channel analysis on hardware MAC-Keccak. Finally, we discuss the impact of the key length on side-channel analysis and compare the attack complexity between MAC-Keccak and other cryptographic algorithms.
Last updated:  2015-09-20
Efficient Ring-LWE Encryption on 8-bit AVR Processors
Zhe Liu, Hwajeong Seo, Sujoy Sinha Roy, Johann Großschädl, Howon Kim, Ingrid Verbauwhede
Public-key cryptography based on the ``ring-variant'' of the Learning with Errors (ring-LWE) problem is both efficient and believed to remain secure in a post-quantum world. In this paper, we introduce a carefully-optimized implementation of a ring-LWE encryption scheme for 8-bit AVR processors like the ATxmega128. Our research contributions include several optimizations for the Number Theoretic Transform (NTT) used for polynomial multiplication. More concretely, we describe the Move-and-Add (MA) and the Shift-Add-Multiply-Subtract-Subtract (SAMS2) technique to speed up the performance-critical multiplication and modular reduction of coefficients, respectively. We take advantage of incompletely-reduced intermediate results to minimize the total number of reduction operations and use a special coefficient-storage method to decrease the RAM footprint of NTT multiplications. In addition, we propose a byte-wise scanning strategy to improve the performance of a discrete Gaussian sampler based on the Knuth-Yao random walk algorithm. For medium-term security, our ring-LWE implementation needs 590k, 672k, and 276k clock cycles for key-generation, encryption, and decryption, respectively. On the other hand, for long-term security, the execution time of key-generation, encryption, and decryption amount to 2.2M, 2.6M, and 686k cycles, respectively. These results set new speed records for ring-LWE encryption on an 8-bit processor and outperform related RSA and ECC implementations by an order of magnitude.
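The NTT-based polynomial multiplication that the abstract's optimizations target operates in $Z_q[x]/(x^n + 1)$. The sketch below shows the underlying math on deliberately tiny, illustrative parameters ($n = 4$, $q = 257$, nowhere near the paper's medium/long-term security parameter sets), using the standard "twist by a $2n$-th root of unity" so a plain cyclic NTT yields the negacyclic product:

```python
# Toy NTT multiplication in Z_q[x]/(x^n + 1), the ring-LWE polynomial ring.
n, q = 4, 257
psi = 4                      # primitive 2n-th root of unity: 4^4 = -1 mod 257
omega = psi * psi % q        # primitive n-th root of unity
n_inv = pow(n, -1, q)

def ntt(a, root):
    # naive O(n^2) transform; real code uses the O(n log n) butterfly
    return [sum(a[i] * pow(root, i * j, q) for i in range(n)) % q
            for j in range(n)]

def negacyclic_mul(a, b):
    # twist by psi^i so the cyclic NTT computes the negacyclic product
    at = ntt([a[i] * pow(psi, i, q) % q for i in range(n)], omega)
    bt = ntt([b[i] * pow(psi, i, q) % q for i in range(n)], omega)
    ct = ntt([x * y % q for x, y in zip(at, bt)], pow(omega, -1, q))
    return [c * n_inv * pow(psi, -i, q) % q for i, c in enumerate(ct)]

def schoolbook(a, b):
    # reference: multiply then reduce mod x^n + 1 (so x^n = -1)
    c = [0] * n
    for i in range(n):
        for j in range(n):
            k, s = (i + j) % n, (1 if i + j < n else -1)
            c[k] = (c[k] + s * a[i] * b[j]) % q
    return c

a, b = [1, 2, 3, 4], [5, 6, 7, 8]
print(negacyclic_mul(a, b) == schoolbook(a, b))
```

The paper's MA and SAMS2 techniques speed up exactly the inner multiply-reduce steps of such a transform on an 8-bit datapath, where each coefficient operation decomposes into several byte-level instructions.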
Last updated:  2015-05-01
Improved Dual System ABE in Prime-Order Groups via Predicate Encodings
Jie Chen, Romain Gay, Hoeteck Wee
We present a modular framework for the design of efficient adaptively secure attribute-based encryption (ABE) schemes for a large class of predicates under the standard k-Lin assumption in prime-order groups; this is the first uniform treatment of dual system ABE across different predicates and across both composite and prime-order groups. Via this framework, we obtain concrete efficiency improvements for several ABE schemes. Our framework has three novel components over prior works: (i) new techniques for simulating composite-order groups in prime-order ones, (ii) a refinement of prior encodings framework for dual system ABE in composite-order groups, (iii) an extension to weakly attribute-hiding predicate encryption (which includes anonymous identity-based encryption as a special case).
Last updated:  2015-05-01
Revisiting Atomic Patterns for Scalar Multiplications on Elliptic Curves
Franck Rondepierre
This paper deals with the protection of elliptic curve scalar multiplications against side-channel analysis by using the atomicity principle. Unlike other atomic patterns, we investigate new formul\ae{} with the same cost for both doubling and addition. This choice is particularly well suited to evaluate double scalar multiplications with the Straus-Shamir trick. Since fixed point multiplications highly benefit from this trick, our pattern allows a huge improvement in this case as other atomic patterns cannot use it. Surprisingly, in other cases our choice remains very efficient. Besides, we also point out a security threat when the curve parameter $a$ is null and propose an even more efficient pattern in this case.
Last updated:  2016-08-25
Higher-Order Cryptanalysis of LowMC
Christoph Dobraunig, Maria Eichlseder, Florian Mendel
LowMC is a family of block ciphers developed particularly for use in multi-party computations and fully homomorphic encryption schemes, where the main performance penalty comes from non-linear operations. Thus, LowMC has been designed to minimize the total quantity of logical "and" operations, as well as the "and" depth. To achieve this, the LowMC designers opted for an incomplete S-box layer that does not cover the complete state, and compensate for it with a very dense, randomly chosen linear layer. In this work, we exploit this design strategy in a cube-like key-recovery attack. We are able to recover the secret key of a round-reduced variant of LowMC with 80-bit security, where the number of rounds is reduced from 11 to 9. Our attacks are independent of the actual instances of the used linear layers and therefore, do not exploit possible weak choices of them. From our results, we conclude that the resulting security margin of 2 rounds is smaller than expected.
Last updated:  2015-12-31
Cryptography for Parallel RAM from Indistinguishability Obfuscation
Yu-Chi Chen, Sherman S. M. Chow, Kai-Min Chung, Russell W. F. Lai, Wei-Kai Lin, Hong-Sheng Zhou
Since many cryptographic schemes are about performing computation on data, it is important to consider a computation model which captures the prominent features of modern system architecture. Parallel random access machine (PRAM) is such an abstraction which not only models multiprocessor platforms, but also new frameworks supporting massive parallel computation such as MapReduce. In this work, we explore the feasibility of designing cryptographic solutions for the PRAM model of computation to achieve security while leveraging the power of parallelism and random data access. We demonstrate asymptotically optimal solutions for a wide range of cryptographic tasks based on indistinguishability obfuscation. In particular, we construct the first publicly verifiable delegation scheme with privacy in the persistent database setting, which allows a client to privately delegate both computation and data to a server with optimal efficiency. Specifically, the server can perform PRAM computation on private data with parallel efficiency preserved (up to poly-logarithmic overhead). Our results also cover succinct randomized encoding, searchable encryption, functional encryption, secure multiparty computation, and indistinguishability obfuscation for PRAM. We obtain our results in a modular way through a notion of computational-trace indistinguishability obfuscation (CiO), which may be of independent interest.
Last updated:  2018-01-15
Feasibility and Infeasibility of Secure Computation with Malicious PUFs
Dana Dachman-Soled, Nils Fleischhacker, Jonathan Katz, Anna Lysyanskaya, Dominique Schröder
A recent line of work has explored the use of physically uncloneable functions (PUFs) for secure computation, with the goals of (1) achieving universal composability without additional setup, and/or (2) obtaining unconditional security (i.e., avoiding complexity-theoretic assumptions). Initial work assumed that all PUFs, even those created by an attacker, are honestly generated. Subsequently, researchers have investigated models in which an adversary can create malicious PUFs with arbitrary behavior. Researchers have considered both malicious PUFs that might be stateful, as well as malicious PUFs that can have arbitrary behavior but are guaranteed to be stateless. We settle the main open questions regarding secure computation in the malicious-PUF model: * We prove that unconditionally secure oblivious transfer is impossible, even in the stand-alone setting, if the adversary can construct (malicious) stateful PUFs. * If the attacker is limited to creating (malicious) stateless PUFs, then universally composable two-party computation is possible without computational assumptions.
Last updated:  2015-05-01
Zero-Knowledge Accumulators and Set Operations
Esha Ghosh, Olga Ohrimenko, Dimitrios Papadopoulos, Roberto Tamassia, Nikos Triandopoulos
Accumulators provide a way to succinctly represent a set with elements drawn from a given domain, using an \emph{accumulation value}. Subsequently, short proofs for the set-\emph{membership} (or \emph{non-membership}) of any element from the domain can be constructed and efficiently verified with respect to this accumulation value. Accumulators have been widely studied in the literature, primarily, as an \emph{authentication} primitive: a malicious prover (e.g., an untrusted server) should not be able to provide convincing proofs on false statements (e.g., successfully prove membership for a value not in the set) to a verifier that issues membership queries (of course, having no access to the set itself). In essence, in existing constructions the accumulation value acts as an (honestly generated) ``commitment'' to the set that allows selective ``opening'' as specified by membership queries---but with no ``hiding'' properties. In this paper we revisit this primitive and propose a privacy-preserving enhancement. We define the notion of a \emph{zero-knowledge accumulator} that provides the following very strong privacy notion: Accumulation values and proofs constructed during the protocol execution leak nothing about the set itself, or any subsequent updates to it (i.e., via element insertions/deletions). We formalize this property by a standard real/ideal execution game. An adversarial party that is allowed to choose the set and is given access to query and update oracles, cannot distinguish whether this interaction takes place with respect to the honestly executed algorithms of the scheme or with a simulator that is not given access to the set itself (and for updates, it does not even learn the type of update that occurred---let alone the inserted/deleted element).
We compare our new privacy definition with other recently proposed similar notions showing that it is strictly stronger: We give a concrete example of the update-related information that can be leaked by previous definitions. We provide a mapping of the relations between zero-knowledge accumulators and primitives that are either set in the same security model or solve the same problem. We formally show and discuss a number of implications among primitives, some of which are not immediately evident. We believe this contribution is interesting on its own, as the area has received considerable attention recently (e.g., with the works of [Naor et al., TCC~2015] and [Derler et al., CT-RSA~2015]). We then construct the first dynamic universal zero-knowledge accumulator. Our scheme is perfect zero-knowledge and is secure under the $q$-Strong Bilinear Diffie-Hellman assumption. Finally, building on our dynamic universal zero-knowledge accumulator, we define a \emph{zero-knowledge authenticated set collection} to handle more elaborate set operations (beyond set-membership). In particular, this primitive allows one to outsource a collection of sets to an untrusted server that is subsequently responsible for answering union, intersection and set difference queries over these sets issued by multiple clients. Our scheme provides proofs that are succinct and efficiently verifiable and, at the same time, leak nothing beyond the query result. In particular, it offers verification time that is asymptotically optimal (namely, the same as simply reading the answer), and proof construction that is asymptotically as efficient as existing state-of-the-art constructions--- that however, do not offer privacy.
Last updated:  2015-05-01
Sequential Secret Sharing as a New Hierarchical Access Structure
Mehrdad Nojoumian, Douglas R. Stinson
Due to the rapid growth of next-generation networking and system technologies, computer networks require new approaches to design and management. In this context, security, and more specifically, access structures have been one of the major concerns. As such, in this article, sequential secret sharing (SQS), as an application of dynamic threshold schemes, is introduced. In this new cryptographic primitive, different (but related) secrets with increasing thresholds are shared among a set of players who have different levels of authority. Subsequently, each subset of the players can only recover the secret in their own level. Finally, the master secret will be revealed if all the secrets in the higher levels are first recovered. We briefly review the existing threshold modification techniques. We then present our construction and compare it with other hierarchical secret sharing schemes such as disjunctive and conjunctive multilevel secret sharing protocols.
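The layered access structure can be illustrated with plain Shamir sharing at increasing thresholds. This is only an illustration of the idea of level-dependent thresholds, not the paper's SQS construction (which additionally links the secrets and uses dynamic threshold techniques):

```python
# Toy sketch: several secrets Shamir-shared with increasing thresholds, so
# that larger coalitions unlock deeper levels.  Illustration only, not the
# paper's construction.
import random

P = 2**61 - 1          # a prime field; illustrative choice

def share(secret, t, n):
    """Shamir-share `secret` with threshold t among n players."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

# Two levels: secret s1 needs any 2 players, master secret s2 needs any 3.
s1, s2 = 42, 1337
shares1 = share(s1, t=2, n=5)
shares2 = share(s2, t=3, n=5)
assert reconstruct(shares1[:2]) == s1      # level 1 opens with 2 shares
assert reconstruct(shares2[:3]) == s2      # level 2 needs 3 shares
assert reconstruct(shares2[:2]) != s2      # 2 shares do not suffice (w.h.p.)
```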
Last updated:  2015-05-04
Success through confidence: Evaluating the effectiveness of a side-channel attack
Adrian Thillard, Emmanuel Prouff, Thomas Roche
Side-channel attacks usually apply a divide-and-conquer strategy, separately recovering different parts of the secret. Their efficiency in practice relies on the adversary's ability to precisely assess the success or failure of each of these recoveries. This makes the study of the attack success rate a central problem in side-channel analysis. In this paper we tackle this issue in two different settings for the most popular attack, namely the Correlation Power Analysis (CPA). In the first setting, we assume that the targeted subkey is known and we compare the state-of-the-art formulae expressing the success rate as a function of the leakage noise and the algebraic properties of the cryptographic primitive. We also make the link between these formulae and the recent work of Fei et al. at CHES 2012. In the second setting, the subkey is no longer assumed to be known and we introduce the notion of confidence level in an attack result, allowing for the study of different heuristics. Through experiments, we show that the rank evolution of a subkey hypothesis can be exploited to compute a better confidence than considering only the final result.
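A minimal CPA on simulated traces illustrates the attack whose success rate is being studied; the S-box, noise level, and trace count below are illustrative choices, not the paper's experimental setup:

```python
# Minimal CPA sketch on simulated traces: leakage is the Hamming weight of a
# 4-bit S-box output plus Gaussian noise, and each key guess is scored by the
# Pearson correlation between predicted and observed leakage.
import random

SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]  # PRESENT
HW = [bin(v).count("1") for v in range(16)]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def cpa(plaintexts, traces):
    """Return key guesses ranked by |correlation| (best first)."""
    scores = {}
    for k in range(16):
        pred = [HW[SBOX[pt ^ k]] for pt in plaintexts]
        scores[k] = abs(pearson(pred, traces))
    return sorted(scores, key=scores.get, reverse=True)

random.seed(1)
key = 0xA
pts = [random.randrange(16) for _ in range(300)]
traces = [HW[SBOX[pt ^ key]] + random.gauss(0, 0.5) for pt in pts]
ranking = cpa(pts, traces)
assert ranking[0] == key          # correct subkey ranks first
```

The gap between the top score and the runner-up is what the paper's confidence heuristics quantify.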
Last updated:  2015-05-01
Simple Chosen-Ciphertext Security from Low-Noise LPN
Eike Kiltz, Daniel Masny, Krzysztof Pietrzak
Recently, Döttling et al. (ASIACRYPT 2012) proposed the first chosen-ciphertext (IND-CCA) secure public-key encryption scheme from the learning parity with noise (LPN) assumption. In this work we give an alternative scheme which is conceptually simpler and more efficient. At the core of our construction is a trapdoor technique originally proposed for lattices by Micciancio and Peikert (EUROCRYPT 2012), which we adapt to the LPN setting. The main technical tool is a new double-trapdoor mechanism, together with a trapdoor switching lemma based on a computational variant of the leftover hash lemma.
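For background, the samples underlying the assumption can be sketched directly: an LPN sample is a pair (a, &lt;a,s&gt; + e) over GF(2) with Bernoulli error e, and "low-noise" means the error rate tau is well below 1/2. A toy generator (of the assumption's samples only, not the encryption scheme):

```python
# Toy sketch: generating learning-parity-with-noise (LPN) samples
# (a, <a,s> + e mod 2) with Bernoulli(tau) noise.
import random

def lpn_samples(s, m, tau, rng):
    """Return m LPN samples for secret s with error rate tau."""
    n = len(s)
    out = []
    for _ in range(m):
        a = [rng.randrange(2) for _ in range(n)]
        e = 1 if rng.random() < tau else 0
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % 2
        out.append((a, b))
    return out

rng = random.Random(7)
s = [1, 0, 1, 1, 0, 1, 0, 0]
samples = lpn_samples(s, 1000, tau=0.05, rng=rng)
# with low noise, the vast majority of samples satisfy <a,s> = b
agree = sum((sum(ai * si for ai, si in zip(a, s)) % 2) == b
            for a, b in samples)
assert agree > 900
```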
Last updated:  2015-07-13
Expiration and Revocation of Keys for Attribute-based Signatures (Full Version)
Stephen R. Tate, Roopa Vishwanathan
Attribute-based signatures, introduced by Maji \emph{et al.}, are signatures that prove that an authority has issued the signer ``attributes'' that satisfy some specified predicate. In existing attribute-based signature schemes, keys are valid indefinitely once issued. In this paper, we initiate the study of incorporating time into attribute-based signatures, where a time instance is embedded in every signature, and attributes are restricted to producing signatures with times that fall in designated validity intervals. We provide three implementations that vary in granularity of assigning validity intervals to attributes, including a scheme in which each attribute has its own independent validity interval, a scheme in which all attributes share a common validity interval, and a scheme in which sets of attributes share validity intervals. All of our schemes provide anonymity to a signer, hide the attributes used to create the signature, and provide collusion-resistance between users.
Last updated:  2015-05-01
New attacks on RSA with Moduli $N=p^rq$
Abderrahmane Nitaj, Tajjeeddine Rachidi
We present three attacks on the Prime Power RSA with modulus $N=p^rq$. In the first attack, we consider a public exponent $e$ satisfying an equation $ex-\phi(N)y=z$ where $\phi(N)=p^{r-1}(p-1)(q-1)$. We show that one can factor $N$ if the parameters $|x|$ and $|z|$ satisfy $|xz|<N^{\frac{r(r-1)}{(r+1)^2}}$, thereby extending the recent results of Sarkar~\cite{SARKAR}. In the second attack, we consider two public exponents $e_1$ and $e_2$ and their corresponding private exponents $d_1$ and $d_2$. We show that one can factor $N$ when $d_1$ and $d_2$ share a suitable amount of their most significant bits, that is $|d_1-d_2|<N^{\frac{r(r-1)}{(r+1)^2}}$. The third attack enables us to factor two Prime Power RSA moduli $N_1=p_1^rq_1$ and $N_2=p_2^rq_2$ when $p_1$ and $p_2$ share a suitable amount of their most significant bits, namely, $|p_1-p_2|<\frac{p_1}{2rq_1q_2}$.
Last updated:  2015-05-01
Factoring RSA moduli with weak prime factors
Abderrahmane Nitaj, Tajjeeddine Rachidi
In this paper, we study the problem of factoring an RSA modulus $N=pq$ in polynomial time, when $p$ is a weak prime, that is, $p$ can be expressed as $ap=u_0+M_1u_1+\ldots+M_ku_k$ for some $k$ integers $M_1,\ldots,M_k$ and $k+2$ suitably small parameters $a$, $u_0,\ldots,u_k$. We further compute a lower bound for the set of weak moduli, that is, moduli made of at least one weak prime, in the interval $[2^{2n},2^{2(n+1)}]$ and show that this number is much larger than the set of RSA prime factors satisfying Coppersmith's conditions, effectively increasing the likelihood of factoring RSA moduli. We also extend our findings to moduli composed of two weak primes.
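The shape of a weak prime can be made concrete in the special case $a = 1$ and $M_i = M^i$ for a single $M$ (an illustrative special case of the abstract's form, chosen here for simplicity): then the base-$M$ digits of $p$ are all tiny. The lattice-based factoring attack itself is not reproduced:

```python
# Illustrative sketch: constructing a "weak" prime p whose base-M digits are
# all small (the a = 1, M_i = M^i special case of the paper's form).

def is_prime(n):
    """Miller-Rabin, deterministic for n < 3.3e24 with these bases."""
    if n < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for q in small:
        if n % q == 0:
            return n == q
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in small:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

M = 2**16
T = 17 * M + 3 * M**2 + 1 * M**3           # small high digits u1=17, u2=3, u3=1
u0 = next(c for c in range(1, M) if is_prime(T + c))
p = T + u0                                 # a weak prime (with a = 1)

digits, t = [], p
while t:
    digits.append(t % M)                   # recover the base-M digits
    t //= M
assert digits == [u0, 17, 3, 1]            # all digits far below M = 2^16
```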
Last updated:  2015-05-01
Relaxing Full-Codebook Security: A Refined Analysis of Key-Length Extension Schemes
Peter Gazi, Jooyoung Lee, Yannick Seurin, John Steinberger, Stefano Tessaro
We revisit the security (as a pseudorandom permutation) of cascading-based constructions for block-cipher key-length extension. Previous works typically considered the extreme case where the adversary is given the entire codebook of the construction, the only complexity measure being the number $q_e$ of queries to the underlying ideal block cipher, representing the adversary's secret-key-independent computation. Here, we initiate a systematic study of the more natural case of an adversary restricted to adaptively learning a number $q_c$ of plaintext/ciphertext pairs that is less than the entire codebook. For any such $q_c$, we aim to determine the highest number of block-cipher queries $q_e$ the adversary can issue without being able to successfully distinguish the construction (under a secret key) from a random permutation. More concretely, we show the following results for key-length extension schemes using a block cipher with $n$-bit blocks and $\kappa$-bit keys: - Plain cascades of length $\ell = 2r+1$ are secure whenever $q_c q_e^r \ll 2^{r(\kappa+n)}$, $q_c \ll 2^{\kappa}$ and $q_e \ll 2^{2\kappa}$. The bound for $r = 1$ also applies to two-key triple encryption (as used within Triple DES). - The $r$-round XOR-cascade is secure as long as $q_c q_e^r \ll 2^{r(\kappa+n)}$, matching an attack by Gazi (CRYPTO 2013). - We fully characterize the security of Gazi and Tessaro's two-call 2XOR construction (EUROCRYPT 2012) for all values of $q_c$, and note that the addition of a third whitening step strictly increases security for $2^{n/4} \le q_c \le 2^{3n/4}$. We also propose a variant of this construction without re-keying and achieving comparable security levels.
Last updated:  2015-05-01
Generalizing Homomorphic MACs for Arithmetic Circuits
Dario Catalano, Dario Fiore, Rosario Gennaro, Luca Nizzardo
Homomorphic MACs, introduced by Gennaro and Wichs in 2013, allow anyone to validate computations on authenticated data without knowledge of the secret key. Moreover, the secret-key owner can verify the validity of the computation without needing to know the original (authenticated) inputs. Beyond security, homomorphic MACs are required to produce short tags (succinctness) and to support composability (i.e., outputs of authenticated computations should be re-usable as inputs for new computations). At Eurocrypt 2013, Catalano and Fiore proposed two realizations of homomorphic MACs that support a restricted class of computations (arithmetic circuits of polynomial degree), are practically efficient, but fail to achieve both succinctness and composability at the same time. In this paper, we generalize the work of Catalano and Fiore in several ways. First, we abstract away their results using the notion of encodings with limited malleability, thus yielding new schemes based on different algebraic settings. Next, we generalize their constructions to work with graded encodings, and more abstractly with $k$-linear groups. The main advantage of this latter approach is that it allows for homomorphic MACs which are (somewhat) composable while retaining succinctness. Interestingly, our construction uses graded encodings in a generic way. Thus, all its limitations (limited composability and non-constant size of the tags) solely depend on the fact that currently known multilinear maps share similar constraints. This means, for instance, that our scheme would support arbitrary circuits (polynomial depth) if we had compact multilinear maps with an exponential number of levels.
Last updated:  2016-02-11
Efficient Unlinkable Sanitizable Signatures from Signatures with Re-Randomizable Keys
Nils Fleischhacker, Johannes Krupp, Giulio Malavolta, Jonas Schneider, Dominique Schröder, Mark Simkin
In a sanitizable signature scheme the signer allows a designated third party, called the sanitizer, to modify certain parts of the message and adapt the signature accordingly. Ateniese et al. (ESORICS 2005) introduced this primitive and proposed five security properties which were formalized by Brzuska et al.~(PKC 2009). Subsequently, Brzuska et al. (PKC 2010) suggested an additional security notion, called unlinkability, which says that one cannot link sanitized message-signature pairs of the same document. Moreover, the authors gave a generic construction based on group signatures that have a certain structure. However, the special structure required from the group signature scheme only allows for inefficient instantiations. Here, we present the first efficient instantiation of unlinkable sanitizable signatures. Our construction is based on a novel type of signature schemes with re-randomizable keys. Intuitively, this property allows one to re-randomize both the signing and the verification key separately but consistently. This allows us to sign the message with a re-randomized key and to prove in zero-knowledge that the derived key originates from either the signer or the sanitizer. We instantiate this generic idea with Schnorr signatures and efficient $\Sigma$-protocols, which we convert into non-interactive zero-knowledge proofs via the Fiat-Shamir transformation. Our construction is at least one order of magnitude faster than instantiating the generic scheme of Brzuska et al. with the most efficient group signature schemes.
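The key re-randomization property for Schnorr is easy to sketch: adding r to the signing key and multiplying the verification key by g^r keeps the pair consistent. The group below is a deliberately tiny toy (p = 23, q = 11), so it shows only the consistency of re-randomization, not security; a real instantiation would use a standard elliptic-curve group:

```python
# Sketch of "signatures with re-randomizable keys" with Schnorr over a toy
# group.  NOT secure -- parameters are tiny and chosen only for illustration.
import hashlib, random

P, Q, G = 23, 11, 2          # G has order Q in Z_P* (toy Schnorr group)

def H(*parts):
    h = hashlib.sha256(b"|".join(str(x).encode() for x in parts)).digest()
    return int.from_bytes(h, "big") % Q

def keygen():
    sk = random.randrange(1, Q)
    return sk, pow(G, sk, P)

def sign(sk, msg):
    k = random.randrange(1, Q)
    R = pow(G, k, P)
    e = H(R, msg)
    return e, (k + e * sk) % Q

def verify(pk, msg, sig):
    e, s = sig
    R = pow(G, s, P) * pow(pk, Q - e, P) % P    # g^s * pk^(-e), since pk^Q = 1
    return H(R, msg) == e

def rerand(sk, pk, r):
    """Re-randomize signing and verification key separately but consistently."""
    return (sk + r) % Q, pk * pow(G, r, P) % P

random.seed(0)
sk, pk = keygen()
assert verify(pk, "hello", sign(sk, "hello"))
sk2, pk2 = rerand(sk, pk, r=7)
assert pow(G, sk2, P) == pk2                    # re-randomized pair matches
assert verify(pk2, "hello", sign(sk2, "hello")) # and still signs/verifies
```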
Last updated:  2018-09-28
Augmented Secure Channels and the Goal of the TLS 1.3 Record Layer
Christian Badertscher, Christian Matt, Ueli Maurer, Phillip Rogaway, Björn Tackmann
Motivated by the wide adoption of authenticated encryption and TLS, we suggest a basic channel abstraction, an augmented secure channel (ASC), that allows a sender to send a receiver messages consisting of two parts, where one is privacy-protected and both are authenticity-protected. Working in the tradition of constructive cryptography, we formalize this idea and provide a construction of this kind of channel using authenticated encryption as the lower-level tool. We look at recent proposals on TLS 1.3 and suggest that the criterion by which their security can be judged is quite simple: do they construct an ASC? Due to this precisely defined goal, we are able to give a natural construction that comes with a rigorous security proof and directly leads to a proposal on TLS 1.3 that is provably secure.
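The two-part message format can be sketched with standard-library primitives: the private part is encrypted, the public part travels in the clear, and one MAC covers both. SHAKE-256 is used as a toy keystream here purely for self-containment; a real ASC would be built from a standard AEAD (e.g. the public part as associated data in AES-GCM), in line with the paper's authenticated-encryption construction:

```python
# Sketch of an augmented secure channel message: payload encrypted, "public"
# part only authenticated, one MAC over both.  Toy keystream; nonces must be
# unique per message.
import hashlib, hmac, os

def _keystream(key, nonce, n):
    return hashlib.shake_256(key + nonce).digest(n)

def asc_send(enc_key, mac_key, nonce, private, public):
    ct = bytes(a ^ b for a, b in
               zip(private, _keystream(enc_key, nonce, len(private))))
    tag = hmac.new(mac_key,
                   nonce + len(public).to_bytes(4, "big") + public + ct,
                   hashlib.sha256).digest()
    return nonce, public, ct, tag

def asc_recv(enc_key, mac_key, msg):
    nonce, public, ct, tag = msg
    exp = hmac.new(mac_key,
                   nonce + len(public).to_bytes(4, "big") + public + ct,
                   hashlib.sha256).digest()
    if not hmac.compare_digest(tag, exp):
        raise ValueError("authentication failed")
    private = bytes(a ^ b for a, b in
                    zip(ct, _keystream(enc_key, nonce, len(ct))))
    return private, public

ek, mk, n = os.urandom(32), os.urandom(32), os.urandom(12)
msg = asc_send(ek, mk, n, b"secret payload", b"visible header")
assert asc_recv(ek, mk, msg) == (b"secret payload", b"visible header")
# tampering with the public (authenticated-only) part must be detected
bad = (msg[0], b"Visible header", msg[2], msg[3])
try:
    asc_recv(ek, mk, bad)
    assert False, "forgery accepted"
except ValueError:
    pass
```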
Last updated:  2015-04-29
Biclique cryptanalysis of MIBS-80 and PRESENT-80
Mohammad Hossein Faghihi Sereshgi, Mohammad Dakhilalian, Mohsen Shakiba
In this paper we present the first biclique cryptanalysis of the MIBS block cipher and a new biclique cryptanalysis of the PRESENT block cipher. These attacks are performed on full-round MIBS-80 and full-round PRESENT-80. The attack on MIBS-80 uses the matching-without-matrix method and has a data complexity upper bounded by $2^{52}$ chosen plaintexts, reducing the security of this cipher by about 1 bit. The attack on PRESENT-80 has a data complexity of at most $2^{22}$ chosen plaintexts and a computational complexity of $2^{79.37}$ encryptions; both complexities are lower than those of other cryptanalyses of PRESENT-80 so far.
Last updated:  2016-04-11
Forgery Attacks on round-reduced ICEPOLE-128
Christoph Dobraunig, Maria Eichlseder, Florian Mendel
ICEPOLE is a family of authenticated encryption schemes submitted to the ongoing CAESAR competition and in addition presented at CHES 2014. To justify the use of ICEPOLE, or to point out potential weaknesses, third-party cryptanalysis is needed. In this work, we evaluate the resistance of ICEPOLE-128 against forgery attacks. By using differential cryptanalysis, we are able to create forgeries from a known ciphertext-tag pair with a probability of $2^{-60.3}$ for a round-reduced version of ICEPOLE-128, where the last permutation is reduced to 4 (out of 6) rounds. This is a noticeable advantage compared to simply guessing the right tag, which works with a probability of $2^{-128}$. As far as we know, this is the first published attack in a nonce-respecting setting on round-reduced versions of ICEPOLE-128.
Last updated:  2015-04-29
On the Communication Complexity of Secure Computation
Deepesh Data, Manoj M. Prabhakaran, Vinod M. Prabhakaran
Information theoretically secure multi-party computation (MPC) is a central primitive of modern cryptography. However, relatively little is known about the communication complexity of this primitive. In this work, we develop powerful information theoretic tools to prove lower bounds on the communication complexity of MPC. We restrict ourselves to a concrete setting involving 3-parties, in order to bring out the power of these tools without introducing too many complications. Our techniques include the use of a data processing inequality for {\em residual information} --- i.e., the gap between mutual information and Gács-Körner common information, a new {\em information inequality} for 3-party protocols, and the idea of {\em distribution switching} by which lower bounds computed under certain worst-case scenarios can be shown to apply for the general case. Using these techniques we obtain tight bounds on communication complexity by MPC protocols for various interesting functions. In particular, we show concrete functions that have ``communication-ideal'' protocols, which achieve the minimum communication simultaneously on all links in the network. Also, we obtain the first {\em explicit} example of a function that incurs a higher communication cost than the input length in the secure computation model of Feige, Kilian and Naor \cite{FeigeKiNa94}, who had shown that such functions exist. We also show that our communication bounds imply tight lower bounds on the amount of randomness required by MPC protocols for many interesting functions.
Last updated:  2015-06-15
Dual System Encryption Framework in Prime-Order Groups
Nuttapong Attrapadung
We propose a new generic framework for achieving fully secure attribute based encryption (ABE) in prime-order bilinear groups. It is generic in the sense that it can be applied to ABE for arbitrary predicate. All previously available frameworks that are generic in this sense are given only in composite-order bilinear groups. These consist of the frameworks proposed by Wee (TCC'14) and Attrapadung (Eurocrypt'14). Both frameworks provide abstractions of dual-system encryption techniques introduced by Waters (Crypto'09). Our framework can be considered as a prime-order version of Attrapadung's framework and works in a similar manner: it relies on a main component called pair encodings, and it generically compiles any secure pair encoding scheme for a predicate in consideration to a fully secure ABE scheme for that predicate. One feature of our new compiler is that although the resulting ABE schemes will be newly defined in prime-order groups, we require essentially the same security notions of pair encodings as before. Beside the security of pair encodings, our framework assumes only the Matrix Diffie-Hellman assumption, introduced by Escala et al. (Crypto'13), which is a weak assumption that includes the Decisional Linear assumption as a special case. As for its applications, we can plug in available pair encoding schemes and automatically obtain the first fully secure ABE realizations in prime-order groups for predicates of which only fully secure schemes in composite-order groups were known. These include ABE for regular languages, ABE for monotone span programs (and hence Boolean formulae) with short ciphertexts or keys, and completely unbounded ABE for monotone span programs. As a side result, we establish the first generic implication from ABE for monotone span programs to ABE for branching programs. Consequently, we obtain fully-secure ABE for branching programs in some new variants, namely, unbounded, short-ciphertext, and short-key variants. 
Previous ABE schemes for branching programs are bounded and require linear-size ciphertexts and keys.
Last updated:  2015-04-29
Keccak
Guido Bertoni, Joan Daemen, Michael Peeters, Gilles Van Assche
In October 2012, the American National Institute of Standards and Technology (NIST) announced the selection of Keccak as the winner of the SHA-3 Cryptographic Hash Algorithm Competition [10,11]. This concluded an open competition that was remarkable both for its magnitude and the involvement of the cryptographic community. Public review is of paramount importance to increase the confidence in the new standard and to favor its quick adoption. The SHA-3 competition explicitly took this into account by giving open access to the candidate algorithms and everyone in the cryptographic community could try to break them, compare their performance, or simply give comments.
Last updated:  2019-05-17
Succinct Garbled RAM
Ran Canetti, Justin Holmgren
We construct the first fully succinct garbling scheme for RAM programs, assuming the existence of indistinguishability obfuscation for circuits and one-way functions. That is, the size, space requirements, and runtime of the garbled program are the same as those of the input program, up to poly-logarithmic factors and a polynomial in the security parameter. The scheme can be used to construct indistinguishability obfuscators for RAM programs with comparable efficiency, at the price of requiring sub-exponential security of the underlying primitives. In particular, this opens the door to obfuscated computations that are sublinear in the length of their inputs. The scheme builds on the recent schemes of Koppula-Lewko-Waters and Canetti-Holmgren-Jain-Vaikuntanathan [STOC 15]. A key technical challenge here is how to combine the fixed-prefix technique of KLW, which was developed for deterministic programs, with randomized Oblivious RAM techniques. To overcome that, we develop a method for arguing about the indistinguishability of two obfuscated randomized programs that use correlated randomness. Along the way, we also define and construct garbling schemes that offer only partial protection. These may be of independent interest.
Last updated:  2015-04-29
Method to Protect Passwords in Databases for Web Applications
Scott Contini
Trying to make it more difficult to hack passwords has a long history. However, the research community has not addressed the change of context from traditional Unix mainframe systems to web applications, which face new threats (DoS) and have fewer constraints (client-side computation is allowed). In the absence of updated guidance, a variety of solutions are scattered all over the web, ranging from amateur to somewhat professional. However, even the best references have issues such as incomplete details, misuse of terminology, assertion of requirements that are not adequately justified, and too many options presented to the developer, opening the door to potential mistakes. The purpose of this research note is to present a solution with complete details and a concise summary of the requirements, which developers can readily implement with confidence, assuming that the solution is endorsed by the research community. The proposed solution involves client-side processing of a heavy computation in combination with a server-side hash computation. It follows a similar approach to a few other proposals on the web, but is more complete and justified than any that we found.
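The division of labor described above can be sketched as follows (parameter choices are illustrative, not the note's recommendation): the client runs the expensive, salted key-stretching step, and the server stores only a cheap hash of the result, so a stolen database still costs the heavy computation per guess while the server does O(1) work per login, limiting DoS exposure:

```python
# Sketch: client-side key stretching + server-side cheap hash.
import hashlib, hmac, os

def client_stretch(password, salt, iterations=200_000):
    """Heavy computation done on the client (e.g. in the browser)."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

def server_store(stretched):
    """Cheap final hash; only this value goes in the database."""
    return hashlib.sha256(stretched).digest()

def server_check(stretched, stored):
    return hmac.compare_digest(hashlib.sha256(stretched).digest(), stored)

salt = os.urandom(16)
record = server_store(client_stretch("correct horse", salt))
assert server_check(client_stretch("correct horse", salt), record)
assert not server_check(client_stretch("wrong guess", salt), record)
```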
Last updated:  2016-05-26
Privately Evaluating Decision Trees and Random Forests
David J. Wu, Tony Feng, Michael Naehrig, Kristin Lauter
Decision trees and random forests are common classifiers with widespread use. In this paper, we develop two protocols for privately evaluating decision trees and random forests. We operate in the standard two-party setting where the server holds a model (either a tree or a forest), and the client holds an input (a feature vector). At the conclusion of the protocol, the client learns only the model's output on its input and a few generic parameters concerning the model; the server learns nothing. The first protocol we develop provides security against semi-honest adversaries. We then give an extension of the semi-honest protocol that is robust against malicious adversaries. We implement both protocols and show that both variants are able to process trees with several hundred decision nodes in just a few seconds and a modest amount of bandwidth. Compared to previous semi-honest protocols for private decision tree evaluation, we demonstrate a tenfold improvement in computation and bandwidth.
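The functionality being computed privately can be stated in the clear: evaluating a binary decision tree on a feature vector is a walk of threshold comparisons. The protocols in the paper compute this comparison-walk under encryption; the node layout below is an illustrative choice, not the paper's encoding:

```python
# Plaintext decision-tree evaluation: the functionality the private protocols
# compute without revealing the model or the feature vector.

class Node:
    def __init__(self, feature=None, threshold=None,
                 left=None, right=None, label=None):
        self.feature, self.threshold = feature, threshold
        self.left, self.right, self.label = left, right, label

def evaluate(node, x):
    while node.label is None:          # internal node: compare and descend
        node = node.left if x[node.feature] <= node.threshold else node.right
    return node.label

# toy tree:  x[0] <= 5 ? (x[1] <= 2 ? "A" : "B") : "C"
tree = Node(0, 5,
            Node(1, 2, Node(label="A"), Node(label="B")),
            Node(label="C"))
assert evaluate(tree, [3, 1]) == "A"
assert evaluate(tree, [3, 9]) == "B"
assert evaluate(tree, [7, 0]) == "C"
```

A random forest simply runs this evaluation over several trees and aggregates the labels (e.g. by majority vote).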
Last updated:  2015-04-29
Feasibility and Completeness of Cryptographic Tasks in the Quantum World
Serge Fehr, Jonathan Katz, Fang Song, Hong-Sheng Zhou, Vassilis Zikas
It is known that cryptographic feasibility results can change by moving from the classical to the quantum world. With this in mind, we study the feasibility of realizing functionalities in the framework of universal composability, with respect to both computational and information-theoretic security. With respect to computational security, we show that existing feasibility results carry over unchanged from the classical to the quantum world; a functionality is “trivial” (i.e., can be realized without setup) in the quantum world if and only if it is trivial in the classical world. The same holds with regard to functionalities that are complete (i.e., can be used to realize arbitrary other functionalities). In the information-theoretic setting, the quantum and classical worlds differ. In the quantum world, functionalities in the class we consider are either complete, trivial, or belong to a family of simultaneous-exchange functionalities (e.g., XOR). However, other results in the information-theoretic setting remain roughly unchanged.
Last updated:  2015-04-28
Condensed Unpredictability
Maciej Skorski, Alexander Golovnev, Krzysztof Pietrzak
We consider the task of deriving a key with high HILL entropy (i.e., being computationally indistinguishable from a key with high min-entropy) from an unpredictable source. Prior to this work, the only known way to transform unpredictability into a key that is $\epsilon$-indistinguishable from having min-entropy was via pseudorandomness, for example by Goldreich-Levin (GL) hardcore bits. This approach has the inherent limitation that from a source with $k$ bits of unpredictability entropy one can derive a key of length (and thus HILL entropy) at most $k-2\log(1/\epsilon)$ bits. In many settings, e.g. when dealing with biometric data, such a $2\log(1/\epsilon)$-bit entropy loss is not an option. Our main technical contribution is a theorem stating that in the high-entropy regime, unpredictability implies HILL entropy. Concretely, any variable $K$ with $|K|-d$ bits of unpredictability entropy has the same amount of so-called metric entropy (against real-valued, deterministic distinguishers), which is known to imply the same amount of HILL entropy. The loss in circuit size in this argument is exponential in the entropy gap $d$, and thus this result only applies for small $d$ (i.e., where the size of the distinguishers considered is exponential in $d$). To overcome this restriction, we investigate whether it is possible to first ``condense'' unpredictability entropy and make the entropy gap small. We show that any source with $k$ bits of unpredictability can be condensed into a source of length $k$ with $k-3$ bits of unpredictability entropy. Our condenser simply ``abuses'' the GL construction and derives a $k$-bit key from a source with $k$ bits of unpredictability. The original GL theorem implies nothing when extracting that many bits, but we show that in this regime, GL still behaves like a ``condenser'' for unpredictability.
This result comes with two caveats: (1) the loss in circuit size is exponential in $k$, and (2) we require that the source we start with has \emph{no} HILL entropy (equivalently, one can efficiently check whether a guess is correct). We leave it as an intriguing open problem to overcome these restrictions or to prove they are inherent.
Last updated:  2015-04-28
Impossibility of VBB Obfuscation with Ideal Constant-Degree Graded Encodings
Rafael Pass, abhi shelat
A celebrated result by Barak et al. (JACM’12) shows the impossibility of general-purpose virtual black-box (VBB) obfuscation in the plain model. A recent work by Canetti, Kalai, and Paneth (TCC’15) extends this result to the random oracle model (assuming trapdoor permutations). In contrast, Brakerski-Rothblum (TCC’15) and Barak et al. (EuroCrypt’14) show that in idealized graded encoding models, general-purpose VBB obfuscation indeed is possible; these constructions require graded encoding schemes that enable evaluating high-degree (polynomial in the size of the circuit to be obfuscated) polynomials on encodings. We show a complementary impossibility of general-purpose VBB obfuscation in idealized graded encoding models that enable only evaluation of constant-degree polynomials (assuming trapdoor permutations).
Last updated:  2015-06-19
High-Performance Ideal Lattice-Based Cryptography on 8-bit ATxmega Microcontrollers
Thomas Pöppelmann, Tobias Oder, Tim Güneysu
In recent years, lattice-based cryptography has received much attention due to versatile average-case problems like Ring-LWE or Ring-SIS that appear to be intractable for quantum computers. But despite promising constructions, only a few results have been published on implementation issues on very constrained platforms. In this work we therefore study and compare implementations of Ring-LWE encryption and the bimodal lattice signature scheme (BLISS) on an 8-bit Atmel ATxmega128 microcontroller. Since the number theoretic transform (NTT) is one of the core components in implementations of lattice-based cryptosystems, we review the application of the NTT in previous implementations and present an improved approach that significantly lowers the runtime for polynomial multiplication. Our implementation of Ring-LWE encryption takes 27 ms for encryption and 6.7 ms for decryption. Our software takes 329 ms to compute a BLISS signature and 88 ms to verify it. These results outperform implementations on similar platforms and underline the feasibility of lattice-based cryptography on constrained devices.
Last updated:  2015-04-28
Financial Cryptography: Algorithmic Mechanisms for a Hedonic Game
Sumit Chakraborty
A selling agent (or a group of selling agents) wants to allocate and sell a parcel (or a set of parcels) of land optimally and fairly to a buying agent, within the capacity constraint of the selling agent and the budget constraint of the buying agent. This problem is solved by combining concepts from algorithmic cooperative game theory and financial cryptography. This is an approach for a group of decision-making agents to reach a mutually beneficial agreement through compromise and stable matching of preferences. The work presents a cooperative game and a set of algorithmic coordination mechanisms: SBSS, SBMS (for collective and non-collective bargaining in the holdout problem) and MBSS. The game is characterized by a set of agents, inputs, strategic moves, a revelation principle, a payment function and outputs. The coordination mechanisms are designed based on domain planning, rational fair data exchange and compensation negotiation. These mechanisms preserve the privacy of strategic data through secure multi-party computation (SMC), more specifically by solving Yao’s millionaire problem. The mechanisms are analyzed from the perspectives of the revelation principle, computational intelligence and communication complexity. The communication complexity depends on the time constraints of the negotiating agents, their information state and the number of negotiation issues. The computational complexity depends on the valuation of the pricing plan, compensation estimation and private comparison. It is a mixed-strategy game; both sequential and simultaneous moves can be applied intelligently to search a neighborhood space of core solutions.
Last updated:  2015-04-28
Protecting against Multidimensional Linear and Truncated Differential Cryptanalysis by Decorrelation
Céline Blondeau, Aslí Bay, Serge Vaudenay
The decorrelation theory provides a different point of view on the security of block cipher primitives. Results on some statistical attacks obtained in this context can support or provide new insight into the security of symmetric cryptographic primitives. In this paper, we study, for the first time, multidimensional linear attacks as well as truncated differential attacks in this context. We show that a cipher should be decorrelated of order two to be resistant against some multidimensional linear and truncated differential attacks. Previous results obtained with this theory for linear, differential, differential-linear and boomerang attacks are also revisited and improved in this paper.
Last updated:  2015-04-28
MMBcloud-tree: Authenticated Index for Verifiable Cloud Service Selection
Jingwei Li, Anna Squicciarini, Dan Lin, Smitha Sundareswaran, Chunfu Jia
Cloud brokers have been recently introduced as an additional computational layer to facilitate cloud selection and service management tasks for cloud consumers. However, existing brokerage schemes on cloud service selection typically assume that brokers are completely trusted, and do not provide any guarantee over the correctness of the service recommendations. It is then possible for a compromised or dishonest broker to easily take advantage of the limited capabilities of the clients and provide incorrect or incomplete responses. To address this problem, we propose an innovative Cloud Service Selection Verification (CSSV) scheme and index structures (MMBcloud-tree) to enable cloud clients to detect misbehavior of the cloud brokers during the service selection process. We demonstrate correctness and efficiency of our approaches both theoretically and empirically.
Last updated:  2016-02-13
PAC Learning of Arbiter PUFs
Fatemeh Ganji, Shahin Tajik, Jean-Pierre Seifert
The general concept of Physically Unclonable Functions (PUFs) has by now been widely accepted and adopted to meet the requirements of secure identification and key generation/storage for cryptographic ciphers. However, as demonstrated by different attacks, e.g., modeling attacks, the promised security features of arbiter PUFs, including unclonability and unpredictability, do not hold unconditionally. So far, though, the success of existing modeling attacks relies on pure trial-and-error estimates. This means that neither the probability of obtaining a useful model (confidence), nor the sufficient number of CRPs, nor the probability of correct prediction (accuracy) is guaranteed. To address these issues, this work presents a Probably Approximately Correct (PAC) learning algorithm. Based on a crucial discretization process, we are able to define a Deterministic Finite Automaton (of polynomial size) that exactly accepts the regular language corresponding to the challenges mapped by the given PUF to one of the responses.
Last updated:  2015-04-28
Cluster Computing in Zero Knowledge
Alessandro Chiesa, Eran Tromer, Madars Virza
Large computations, when amenable to distributed parallel execution, are often executed on computer clusters, for scalability and cost reasons. Such computations are used in many applications, including, to name but a few, machine learning, webgraph mining, and statistical machine translation. Oftentimes, though, the input data is private and only the result of the computation can be published. Zero-knowledge proofs would allow, in such settings, to verify correctness of the output without leaking (additional) information about the input. In this work, we investigate theoretical and practical aspects of *zero-knowledge proofs for cluster computations*. We design, build, and evaluate zero-knowledge proof systems for which: (i) a proof attests to the correct execution of a cluster computation; and (ii) generating the proof is itself a cluster computation that is similar in structure and complexity to the original one. Concretely, we focus on MapReduce, an elegant and popular form of cluster computing. Previous zero-knowledge proof systems can in principle prove a MapReduce computation's correctness, via a monolithic NP statement that reasons about all mappers, all reducers, and shuffling. However, it is not clear how to generate the proof for such monolithic statements via parallel execution by a distributed system. Our work demonstrates, by theory and implementation, that proof generation can be similar in structure and complexity to the original cluster computation. Our main technique is a bootstrapping theorem for succinct non-interactive arguments of knowledge (SNARKs) that shows how, via recursive proof composition and Proof-Carrying Data, it is possible to transform any SNARK into a *distributed SNARK for MapReduce* which proves, piecewise and in a distributed way, the correctness of every step in the original MapReduce computation as well as their global consistency.
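For reference, the structure the distributed SNARK mirrors, piecewise, is just mappers, a shuffle, and reducers. A hypothetical single-process sketch (illustrative only; each phase corresponds to a sub-proof plus a global consistency check on the shuffle):

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Single-process sketch of the MapReduce structure (not the SNARK)."""
    # map phase: every record yields (key, value) pairs
    pairs = [kv for rec in records for kv in mapper(rec)]
    # shuffle phase: group values by key
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    # reduce phase: one reducer call per key
    return {k: reducer(k, vs) for k, vs in groups.items()}

# toy word count
docs = ["a rose is a rose", "is a"]
counts = map_reduce(
    docs,
    mapper=lambda doc: [(w, 1) for w in doc.split()],
    reducer=lambda w, ones: sum(ones),
)
assert counts == {"a": 3, "rose": 2, "is": 2}
```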
Last updated:  2015-04-24
Cryptography from Post-Quantum Assumptions
Raza Ali Kazmi
In this thesis we present our contribution in the field of post-quantum cryptography. We introduce a new notion of {\em weakly Random-Self-Reducible} public-key cryptosystem and show how it can be used to implement secure Oblivious Transfer. We also show that two recent (Post-quantum) cryptosystems can be considered as {\em weakly Random-Self-Reducible}. We introduce a new problem called Isometric Lattice Problem and reduce graph isomorphism and linear code equivalence to this problem. We also show that this problem has a perfect zero-knowledge interactive proof with respect to a malicious verifier; this is the only hard problem in lattices that is known to have this property.
Last updated:  2018-05-07
Bounds on surmising remixed keys
Daniel R. L. Brown
A remixed key is derived from a secret source key by applying a public but unpredictable random function to the source key. A remixed key models a key derived from a shared secret and a public unpredictable salt, using a common, deterministic, pseudorandom function---which is somewhat like TLS record-layer keys, for example. This report tries to validate the intuition that remixed keys are not easy to surmise despite the publicity of the remixing function, in other words, that remixing does not introduce an exploitable spike in the probability distribution of the remixed key relative to the remix function. The report provides pencil-and-paper proofs of numerical bounds on the probability that an adversary can surmise a remixed key, assuming a uniformly random source key and remix function. Some proofs are derived from a proof of an asymptotic result in probability theory (a balls-in-bins bound) in a textbook by Shoup.
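As a concrete (hypothetical) instance of the model, not taken from the report: HMAC-SHA-256 as the common deterministic pseudorandom function, applied to a secret source key and a public salt.

```python
import hashlib
import hmac
import secrets

def remix(source_key: bytes, public_salt: bytes) -> bytes:
    """Remixed key: a deterministic pseudorandom function of the secret
    source key and a public but unpredictable salt (hypothetical instance)."""
    return hmac.new(source_key, public_salt, hashlib.sha256).digest()

source_key = secrets.token_bytes(32)   # uniformly random shared secret
salt = secrets.token_bytes(16)         # public, unpredictable salt

k1 = remix(source_key, salt)
assert k1 == remix(source_key, salt)           # deterministic given both inputs
assert remix(source_key, b"other salt") != k1  # fresh salt yields a fresh key
```

An adversary sees the salt and the remix function but not the source key; the report bounds how much this public information can help in guessing `k1`.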
Last updated:  2016-03-07
On the Impossibility of Tight Cryptographic Reductions
Christoph Bader, Tibor Jager, Yong Li, Sven Schäge
The existence of tight reductions in cryptographic security proofs is an important question, motivated by the theoretical search for cryptosystems whose security guarantees are truly independent of adversarial behavior and the practical necessity of concrete security bounds for the theoretically-sound selection of cryptographic parameters. At Eurocrypt 2002, Coron described a meta-reduction technique that makes it possible to prove the impossibility of tight reductions for certain digital signature schemes. This seminal result has found many further interesting applications. However, due to a technical subtlety in the argument, the applicability of this technique beyond digital signatures in the single-user setting has turned out to be rather limited. We describe a new meta-reduction technique for proving such impossibility results, which improves on known ones in several ways. First, it enables interesting novel applications. This includes a formal proof that for certain cryptographic primitives (including public-key encryption/key encapsulation mechanisms and digital signatures), the security loss incurred when the primitive is transferred from an idealized single-user setting to the more realistic multi-user setting is impossible to avoid, and a lower tightness bound for non-interactive key exchange protocols. Second, the technique allows us to rule out tight reductions from a very general class of non-interactive complexity assumptions. Third, the provided bounds are quantitatively and qualitatively better, yet simpler, than the bounds derived from Coron's technique and its extensions.
Last updated:  2015-12-07
Publicly Verifiable Software Watermarking
Aloni Cohen, Justin Holmgren, Vinod Vaikuntanathan
Software Watermarking is the process of transforming a program into a functionally equivalent ``marked'' program in such a way that it is computationally hard to remove the mark without destroying functionality. Barak, Goldreich, Impagliazzo, Rudich, Sahai, Vadhan and Yang (CRYPTO 2001) defined software watermarking and showed that the existence of indistinguishability obfuscation implies that software watermarking is impossible. Given the recent candidate constructions of indistinguishability obfuscation, this result paints a bleak picture for the possibility of meaningful watermarking. We show that slightly relaxing the functionality requirement gives us strong positive results for watermarking. Namely, instead of requiring the marked program to agree with the original unmarked program on {\em all inputs}, we require only that they agree on a large fraction of inputs. With this relaxation in mind, our contributions are as follows. 1. We define publicly verifiable watermarking where marking a program requires a secret key, but anyone can verify that a program is marked. The handful of existing watermarking schemes are secretly verifiable, and moreover, satisfy only a weak definition where the adversary is restricted in the type of unmarked programs it is allowed to produce (Naccache, Shamir and Stern, PKC 1999; Nishimaki, EUROCRYPT 2013). Moreover, our definition requires security against chosen program attacks, where an adversary has access to an oracle that marks programs of her choice. 2. We construct a publicly verifiable watermarking scheme for any family of puncturable pseudo-random functions (PPRF), assuming indistinguishability obfuscation and injective one-way functions. Complementing our positive result, we show that there are pseudo-random functions which cannot be watermarked, even in a very weak setting. As a corollary, we demonstrate the first family of PRFs that are not point-puncturable.
Last updated:  2015-04-24
Security Analysis of PRINCE
Jeremy Jean, Ivica Nikolic, Thomas Peyrin, Lei Wang, Shuang Wu
In this article, we provide the first third-party security analysis of the PRINCE lightweight block cipher, and the underlying PRINCE_core. First, while no claim was made by the authors regarding related-key attacks, we show that one can attack the full cipher with only a single pair of related keys, and then reuse the same idea to derive an attack in the single-key model for the full PRINCE_core for several instances of the $\alpha$ parameter (yet not the one randomly chosen by the designers). We also show how to exploit the structural linear relations that exist for PRINCE in order to obtain a key recovery attack that slightly breaks the security claims for the full cipher. We analyze the application of integral attacks to get the best known key-recovery attack on a reduced version of the PRINCE cipher. Finally, we provide time-memory-data tradeoffs, that require only known plaintext-ciphertext data, and that can be applied to full PRINCE.
Last updated:  2015-04-23
Constant-Round MPC with Fairness and Guarantee of Output Delivery
S. Dov Gordon, Feng-Hao Liu, Elaine Shi
We study the round complexity of multiparty computation with fairness and guaranteed output delivery, assuming existence of an honest majority. We demonstrate a new lower bound and a matching upper bound. Our lower bound rules out any two-round fair protocols in the standalone model, even when the parties are given access to a common reference string (CRS). The lower bound follows by a reduction to the impossibility result of virtual black box obfuscation of arbitrary circuits. Then we demonstrate a three-round protocol with guarantee of output delivery, which in general is harder than achieving fairness (since the latter allows the adversary to force a fair abort). We develop a new construction of a threshold fully homomorphic encryption scheme, with a new property that we call ``flexible'' ciphertexts. Roughly, our threshold encryption scheme allows parties to adapt flexible ciphertexts to the public keys of the non-aborting parties, which provides a way of handling aborts without adding any communication.
Last updated:  2015-04-23
Financial Cryptography: Discriminatory Pricing Mechanism
Sumit Chakraborty
This work presents an adaptive profitable discriminatory pricing mechanism for cloud computing based on secure function decomposition, cryptographic commitments and zero knowledge proof. Cloud computing is an emerging trend of enterprise resource planning where a selling agent or service provider (S) wants to allocate a set of computational resources and related IT services optimally and fairly among many buying agents or service consumers (B) within its capacity constraint. Each service consumer discloses its demand plan for an IT portfolio within its budget constraint and rank of preference. An IT portfolio may include SaaS, PaaS, IaaS, CaaS, DaaS and dSaaS. The basic objective of the service provider is to optimize its expected revenue within target profit margin. It is basically a problem of secure function evaluation where the concept of decomposition of a function is considered. It is a constrained non-linear optimization problem; the search is governed by a set of intelligent moves. The communication complexity of the pricing mechanism depends on the time constraint of the negotiating agents, their information state and the number of negotiation issues; it also depends on number of negotiation rounds and the complexity of IT portfolio. The computational cost depends on the complexity of function decomposition. The security and privacy of strategic data of the trading agents provides business intelligence to the pricing mechanism. The ultimate objective of the mechanism is to predict a profitable discriminatory pricing plan for each consumer.
Last updated:  2015-04-23
On Non-Black-Box Simulation and the Impossibility of Approximate Obfuscation
Nir Bitansky, Omer Paneth
The introduction of a non-black-box simulation technique by Barak (FOCS 2001) has been a major landmark in cryptography, breaking the previous barriers of black-box impossibility. Barak's technique has given rise to various powerful applications and it is a key component in all known protocols with non-black-box simulation. We present the first non-black-box simulation technique that does not rely on Barak's technique (or on nonstandard assumptions). Invoking this technique, we obtain new and improved protocols resilient to various resetting attacks. These improvements include weaker computational assumptions and better round complexity. A prominent feature of our technique is its compatibility with rewinding techniques from classic black-box zero-knowledge protocols. The combination of rewinding with non-black-box simulation has proven instrumental in coping with challenging goals such as: simultaneously-resettable zero-knowledge, proofs of knowledge, and resettable security from one-way functions. While previous works required tailored modifications to Barak's technique, we give a general recipe for combining our technique with rewinding. This yields simplified resettable protocols in the above settings, as well as improvements in round complexity and required computational assumptions. The main ingredient in our technique is a new impossibility result for general program obfuscation. The results extend the impossibility result of Barak et al. (CRYPTO 2001) to the case of obfuscation with approximate functionality, thus settling a question left open by Barak et al. In the converse direction, we show a generic transformation from any resettably-sound zero-knowledge protocol to a family of functions that cannot be obfuscated.
Last updated:  2015-04-23
Breaking the Rabin-Williams digital signature system implementation in the Crypto++ library
Evgeny Sidorov
This paper describes a bug in the implementation of the Rabin-Williams digital signature in the \texttt{Crypto++} framework. The bug lies in the misuse of the blinding technique, which is aimed at preventing timing attacks on the digital signature system implementation but eventually makes it possible to recover the private key from only two different signatures of the same message. The CVE identifier of the issue is \texttt{CVE-2015-2141}.
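The number-theoretic core of such a key recovery can be sketched with toy parameters (this illustrates the Rabin square-root structure, not the actual Crypto++ code path): a Rabin-Williams signature is essentially a square root of the message representative modulo n = p*q, and two essentially different square roots of the same value factor n.

```python
from math import gcd

# Toy parameters chosen for illustration only.
p, q = 61, 53
n = p * q
t = pow(1234, 2, n)                                   # a quadratic residue mod n
# its four square roots mod n, found by brute force in this toy setting
roots = sorted(s for s in range(n) if s * s % n == t)
s1, s2 = roots[0], roots[1]                           # not negatives of each other
# two signatures s1 != +-s2 of the same value reveal a factor of n
factor = gcd(abs(s1 - s2), n)
assert factor in (p, q)                               # private key recovered
```

A correct implementation only ever releases one consistent root per message, which is why the blinding flaw (producing two distinct roots) is fatal.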
Last updated:  2015-04-23
Improved Higher-Order Differential Attacks on MISTY1
Achiya Bar-On
MISTY1 is a block cipher designed by Matsui in 1997. It is widely deployed in Japan, and is recognized internationally as a European NESSIE-recommended cipher and an ISO standard. Since its introduction, MISTY1 has been subjected to extensive cryptanalytic efforts, yet no attack significantly faster than exhaustive key search is known on its full version. The best currently known attack is a higher-order differential attack presented by Tsunoo et al. in 2012 which breaks a reduced variant of MISTY1 that contains 7 of the 8 rounds and 4 of the 5 $FL$ layers in $2^{49.7}$ data and $2^{116.4}$ time. In this paper, we present improved higher-order differential attacks on reduced-round MISTY1. Our attack on the variant considered by Tsunoo et al. requires roughly the same amount of data and only $2^{100.4}$ time (i.e., is $2^{16}$ times faster). Furthermore, we present the first attack on a MISTY1 variant with 7 rounds and all 5 $FL$ layers, requiring $2^{51.4}$ data and $2^{121}$ time. To achieve our results, we use a new higher-order differential characteristic for 4-round MISTY1, as well as enhanced key recovery algorithms based on the {\it partial sums} technique.
Last updated:  2017-08-25
A random zoo: sloth, unicorn, and trx
Arjen K. Lenstra, Benjamin Wesolowski
Many applications require trustworthy generation of public random numbers. It is shown how this can be achieved using a hash function that is timed to be as slow as desired (sloth), while the correctness of the resulting hash can be verified quickly. It is shown how sloth can be used for uncontestable random number generation (unicorn), and how unicorn can be used for a new trustworthy random elliptic curves service (trx) and random-sample voting.
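One way such a slow-compute, fast-verify primitive can be built (a toy sketch with illustrative parameters, not the paper's full construction) uses modular square roots: for a prime p with p ≡ 3 (mod 4), exactly one of x and -x is a quadratic residue, so a square-root step is well defined. Taking the root costs a full modular exponentiation; checking it needs only one squaring.

```python
p = 2**127 - 1                        # Mersenne prime, p % 4 == 3

def slow_step(x: int) -> int:
    if pow(x, (p - 1) // 2, p) != 1:  # x is a non-residue ...
        x = -x % p                    # ... then -x is a residue (p = 3 mod 4)
    return pow(x, (p + 1) // 4, p)    # slow: one big modular exponentiation

def fast_verify(x: int, y: int) -> bool:
    return pow(y, 2, p) in (x % p, -x % p)  # fast: one squaring

x = 123456789
y = slow_step(x)
assert fast_verify(x, y)
```

Chaining many such steps gives a function that is as slow as desired to compute forwards, while each link of the chain can be checked cheaply by squaring.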
Last updated:  2015-09-13
On the (im)possibility of receiving security beyond 2^l using an l-bit PRNG: the case of Wang et al. protocol
Masoumeh Safkhani, Mehdi Hosseinzadeh, Mojtaba Eslamnezhad Namin, Samad Rostampour, Nasour Bagheri
Recently, Wang et al. analyzed the security of two EPC C1-G2 compliant RFID authentication protocols, called RAPLT and SRP^+, and proved that these protocols are vulnerable to de-synchronization and secret disclosure attacks. The time complexity of their attacks was $O(2^{16})$. In addition, they proposed an improved version of SRP^+, entitled SRP^{++}, for which they claim security of $O(2^{32})$. However, in this letter, we analyze the security of SRP^{++} and show that the complexity of retrieving all secret parameters of a given tag is $O(2^{16})$, similar to its predecessor protocol.
Last updated:  2015-04-23
Privacy-preserving Context-aware Recommender Systems: Analysis and New Solutions
Qiang Tang, Jun Wang
Nowadays, recommender systems have become an indispensable part of our daily life and provide personalized services for almost everything. However, nothing is for free -- such systems have also upset the society with severe privacy concerns because they accumulate a lot of personal information in order to provide recommendations. In this work, we construct privacy-preserving recommendation protocols by incorporating cryptographic techniques and the inherent data characteristics in recommender systems. We first revisit the protocols by Jeckmans et al. at ESORICS 2013 and show a number of security and usability issues. Then, we propose two privacy-preserving protocols, which compute predicted ratings for a user based on inputs from both the user's friends and a set of randomly chosen strangers. A user has the flexibility to retrieve either a predicted rating for an unrated item or the Top-N unrated items. The proposed protocols prevent information leakage from both protocol executions and the protocol outputs: a somewhat homomorphic encryption scheme is used to make all computations run in encrypted form, and inputs from the randomly-chosen strangers guarantee that the inputs of a user's friends will not be compromised even if this user's outputs are leaked. Finally, we use the well-known MovieLens 100k dataset to evaluate the performances for different parameter sizes.
Last updated:  2015-10-21
Optimally Secure Tweakable Blockciphers
Bart Mennink
We consider the generic design of a tweakable blockcipher from one or more evaluations of a classical blockcipher, in such a way that all input and output wires are of size n bits. As a first contribution, we show that any tweakable blockcipher with one primitive call and arbitrary linear pre- and postprocessing functions can be distinguished from an ideal one with an attack complexity of about 2^{n/2}. Next, we introduce the tweakable blockcipher tilde{F}[1]. It consists of one multiplication and one blockcipher call with tweak-dependent key, and achieves 2^{2n/3} security. Finally, we introduce tilde{F}[2], which makes two blockcipher calls, one of which with tweak-dependent key, and achieves optimal 2^n security. Both schemes are more efficient than all existing beyond-birthday-bound tweakable blockciphers known to date, as long as one blockcipher key renewal is cheaper than one blockcipher evaluation plus one universal hash evaluation.
Last updated:  2015-04-23
Oblivious Transfer from weakly Random Self-Reducible Public-Key Cryptosystem
Claude Crepeau, Raza Ali Kazmi
In this work, we define a new notion of weakly Random-Self-Reducible cryptosystems and show how it can be used to implement secure Oblivious Transfer. We also show that two recent (post-quantum) cryptosystems (based on Learning With Errors and Approximate Integer GCD) can be considered weakly Random-Self-Reducible.
Last updated:  2022-12-19
Computationally binding quantum commitments
Dominique Unruh
We present a new definition of computationally binding commitment schemes in the quantum setting, which we call "collapse-binding". The definition applies to string commitments, composes in parallel, and works well with rewinding-based proofs. We give simple constructions of collapse-binding commitments in the random oracle model, giving evidence that they can be realized from hash functions like SHA-3. We evidence the usefulness of our definition by constructing three-round statistical zero-knowledge quantum arguments of knowledge for all NP languages.
Last updated:  2015-10-08
Achieving Differential Privacy with Bias-Control Limited Source
Yanqing Yao, Zhoujun Li
In the design of differentially private mechanisms, it is usually assumed that a uniformly random source is available. However, in many situations this seems unrealistic, and one must deal with various imperfect random sources. Dodis et al. (CRYPTO’12) showed that differential privacy can be achieved with a Santha-Vazirani (SV) source via an additional, stronger property called SV-consistent sampling, and left open the question of whether differential privacy is possible with more realistic (i.e., less structured) sources. The Bias-Control Limited (BCL) source, introduced by Dodis (ICALP’01), is more realistic; it can be considered a generalization of the SV and sequential bit-fixing sources. Unfortunately, the natural extension of SV-consistent sampling to the BCL source cannot achieve differential privacy, mainly because SV-consistent sampling requires “consecutive” strings, while some strings cannot be generated from a “non-trivial” BCL source. Motivated by this problem, we introduce a new appealing property, called compact BCL-consistent sampling, whose degeneration differs from the SV-consistent sampling of Dodis et al. (CRYPTO’12). We prove that if a mechanism based on the BCL source satisfies this property, then it is differentially private. Even when the BCL source is degenerated into the SV source, our proof is much more intuitive and simpler than that of Dodis et al. Further, we construct explicit mechanisms using a new truncation technique as well as arithmetic coding, and give concrete results on their differential privacy and utility. While the results of Dodis and Yao (CRYPTO’15) imply that if differentially private mechanisms for imperfect randomness exist, then their parameters must satisfy certain constraints, we give an explicit construction of such mechanisms whose parameters match these constraints.
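For context, the standard mechanism whose randomness assumption this line of work relaxes is the textbook Laplace mechanism (a background sketch, not the paper's BCL-based construction):

```python
import math
import random

def laplace_mechanism(true_answer: float, sensitivity: float, eps: float) -> float:
    """Textbook Laplace mechanism: note the noise is sampled from a perfectly
    uniform random source -- exactly the assumption the paper relaxes."""
    u = random.random() - 0.5                  # uniform on [-1/2, 1/2)
    scale = sensitivity / eps
    # inverse-CDF sampling of the Laplace distribution
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_answer + noise

random.seed(2015)
noisy = laplace_mechanism(true_answer=42.0, sensitivity=1.0, eps=0.5)
assert math.isfinite(noisy)
```

With an imperfect (e.g. SV or BCL) source, the sampled `u` is no longer uniform, and the privacy guarantee of this mechanism breaks down unless extra structure such as consistent sampling is imposed.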
Last updated:  2015-04-23
Higher-Order Side Channel Security and Mask Refreshing
Jean-Sebastien Coron, Emmanuel Prouff, Matthieu Rivain, Thomas Roche
Masking is a widely used countermeasure to protect block cipher implementations against side-channel attacks. The principle is to split every sensitive intermediate variable occurring in the computation into d + 1 shares, where d is called the masking order and plays the role of a security parameter. A masked implementation is then said to achieve dth-order security if any set of d (or fewer) intermediate variables does not reveal key-dependent information. At CHES 2010, Rivain and Prouff proposed a higher-order masking scheme for AES that works for any arbitrary order d. This scheme, and its subsequent extensions, are based on an improved version of the shared multiplication processing published by Ishai et al. at CRYPTO 2003. This improvement enables better memory/timing performance, but its security relies on the refreshing of the masks at some points in the algorithm. In this paper, we show that the method proposed at CHES 2010 to perform such mask refreshing introduces a security flaw in the overall masking scheme. Specifically, we show that it is vulnerable to an attack of order d/2 + 1, whereas the scheme is supposed to achieve dth-order security. After exhibiting and analyzing the flaw, we propose a new solution which avoids the use of mask refreshing, and we prove its security. We also provide an implementation trick that makes our proposed solution not only secure, but also faster than the original scheme.
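To illustrate the sharing principle the abstract starts from, here is a toy sketch of Boolean masking of a byte into d + 1 shares, plus a simple share refresh. This is only the generic idea (split, recombine, re-randomize); it is not the paper's AES scheme, its flawed CHES 2010 refresh, or the proposed fix, and the function names are mine.

```python
import secrets
from functools import reduce

def mask(x, d):
    """Split byte x into d + 1 Boolean shares with x = s0 ^ s1 ^ ... ^ sd."""
    shares = [secrets.randbelow(256) for _ in range(d)]
    shares.append(x ^ reduce(lambda a, b: a ^ b, shares, 0))
    return shares

def unmask(shares):
    """Recombine the shares; any d of them alone are uniformly random."""
    return reduce(lambda a, b: a ^ b, shares, 0)

def refresh(shares):
    """Re-randomize a sharing without changing the masked value: add a
    fresh random r to share 0 and cancel it on share i."""
    out = list(shares)
    for i in range(1, len(out)):
        r = secrets.randbelow(256)
        out[0] ^= r
        out[i] ^= r
    return out
```

The point of the paper is precisely that how such a refresh interacts with shared multiplications matters: a refresh that is individually sound can still lower the effective order of the composed scheme.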
Last updated:  2015-04-23
On Generalized First Fall Degree Assumptions
Yun-Ju Huang, Christophe Petit, Naoyuki Shinohara, Tsuyoshi Takagi
The first fall degree assumption provides a complexity approximation of Gröbner basis algorithms when the degree of regularity of a polynomial system cannot be precisely evaluated. Most importantly, this assumption was recently used by Petit and Quisquater to conjecture that the elliptic curve discrete logarithm problem can be solved in subexponential time for binary fields (binary ECDLP). The validity of the assumption may however depend on the systems at play. In this paper, we theoretically and experimentally study the first fall degree assumption for a class of polynomial systems including those considered in Petit and Quisquater's analysis. In some cases, we show that the first fall degree assumption seems to hold, and we deduce complexity improvements on previous binary ECDLP algorithms. On the other hand, we also show that the assumption is unlikely to hold in other cases, where it would have very unexpected consequences. Our results shed light on a Gröbner basis assumption with major consequences on several cryptanalysis problems, including binary ECDLP.
Last updated:  2015-04-23
A Group-theory Method to The Cycle Structures of Feedback Shift Registers
Ming Li, Yupeng Jiang, Dongdai Lin
In this paper, we consider the cycle structures of feedback shift registers (FSRs). First, the cycle structures of two special classes of FSRs, pure circulating registers (PCRs) and pure summing registers (PSRs), are studied, and it is proved that no other FSR has the same cycle structure as a PCR (or PSR). Then, we regard $n$-stage FSRs as permutations over $2^n$ elements. By group theory, two permutations have the same cycle structure if and only if they are conjugate to each other. Since a conjugate of an FSR may no longer be an FSR, it is interesting to consider the permutations that always map an FSR to an FSR. It is proved that there are exactly two such permutations: the identity mapping and the mapping that sends every state to its dual. Furthermore, we prove that these are exactly the two permutations that map any maximum-length FSR to a maximum-length FSR.
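The view of an n-stage FSR as a permutation of its 2^n states can be made concrete with a small enumeration. The sketch below (my own illustration, not the paper's construction; states are integers whose bit i is stage i, and the register shifts toward bit 0) computes the cycle decomposition of an FSR given its feedback function, and checks it on a 3-stage PCR, whose feedback simply recirculates the outgoing bit.

```python
def fsr_next(state, n, f):
    """One step of an n-stage FSR: shift right, insert feedback bit f(state)
    at the top stage."""
    return (state >> 1) | (f(state) << (n - 1))

def cycle_structure(n, f):
    """Decompose the permutation s -> fsr_next(s, n, f) of {0, ..., 2^n - 1}
    into its cycles."""
    seen, cycles = set(), []
    for s in range(1 << n):
        if s in seen:
            continue
        cycle, t = [], s
        while t not in seen:
            seen.add(t)
            cycle.append(t)
            t = fsr_next(t, n, f)
        cycles.append(cycle)
    return cycles

# A PCR feeds the outgoing bit straight back in, i.e. a pure rotation:
pcr_feedback = lambda s: s & 1
```

For n = 3 the PCR's cycles are the rotation classes of 3-bit strings: the two constant states 000 and 111 are fixed points, and the remaining six states split into two cycles of length 3, so the cycle lengths are (1, 1, 3, 3). The dual mapping mentioned in the abstract is just bitwise complement, s -> s XOR (2^n - 1), and conjugating by it visibly preserves these lengths.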