All papers in 2012 (Page 3 of 733 results)

Last updated:  2012-09-20
Solving Hard Lattice Problems and the Security of Lattice-Based Cryptosystems
Thijs Laarhoven, Joop van de Pol, Benne de Weger
This paper is a tutorial introduction to the present state-of-the-art in the field of security of lattice-based cryptosystems. After a short introduction to lattices, we describe the main hard problems in lattice theory that cryptosystems base their security on, and we present the main methods of attacking these hard problems, based on lattice basis reduction. We show how to find shortest vectors in lattices, which can be used to improve basis reduction algorithms. Finally we give a framework for assessing the security of cryptosystems based on these hard problems.
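As an illustration of the kind of basis reduction surveyed here (and not an algorithm from the paper itself), the following minimal Python sketch implements Lagrange-Gauss reduction, the two-dimensional case in which basis reduction provably finds a shortest lattice vector.

```python
# Minimal sketch: Lagrange-Gauss reduction of a 2-dimensional lattice basis.
# This is the simplest instance of the basis-reduction algorithms surveyed in
# the paper; in dimension 2 it provably returns a shortest nonzero vector.

def gauss_reduce(u, v):
    """Reduce a 2D lattice basis (u, v); on return, u is a shortest vector."""
    def norm2(w):
        return w[0] * w[0] + w[1] * w[1]

    while True:
        if norm2(v) < norm2(u):
            u, v = v, u
        # Integer multiple of u that brings v closest to the origin.
        m = round((u[0] * v[0] + u[1] * v[1]) / norm2(u))
        if m == 0:
            return u, v
        v = (v[0] - m * u[0], v[1] - m * u[1])

# Example with an arbitrary "skewed" basis:
u, v = gauss_reduce((90, 123), (56, 76))
print(u, v)   # u is a shortest nonzero vector of the lattice
```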
Last updated:  2012-09-27
Pairing computation on Edwards curves with high-degree twists
Liangze Li, Hongfeng Wu, Fan Zhang
In this paper, we propose an elaborate geometry approach to explain the group law on twisted Edwards curves, which are seen as the intersection of quadric surfaces in space. Using the geometric interpretation of the group law we obtain the Miller function for Tate pairing computation on twisted Edwards curves. Then we present the explicit formulae for pairing computation on twisted Edwards curves. Our formulae for the doubling step are a little faster than those proposed by Arene et al. Finally, to improve the efficiency of pairing computation we present twists of degree 4 and 6 on twisted Edwards curves.
Last updated:  2013-05-06
Generic Construction of Trace and Revoke Schemes
Murat Ak, Aggelos Kiayias, Serdar Pehlivanoglu, Ali Aydin Selcuk
Broadcast encryption (BE) is a cryptographic primitive that allows a broadcaster to encrypt digital content to a privileged set of users and in this way prevent revoked users from accessing the content. In BE schemes, a group of users, called traitors, may leak their keys and enable an adversary to receive the content. Such malicious users can be detected through traitor tracing (TT) schemes. The ultimate goal in a content distribution system would be combining traitor tracing and broadcast encryption (resulting in a trace and revoke system) so that any receiver key found to be compromised in a tracing process would be revoked from future transmissions. In this paper, we propose a generic method to transform a broadcast encryption scheme into a trace and revoke scheme. This transformation involves the utilization of a fingerprinting code over the underlying BE transmission. While fingerprinting codes have been used for constructing traitor tracing schemes in the past, their usage has various shortcomings, such as the increase of the public key size by a factor linear in the length of the code. Instead, we propose a novel way to apply fingerprinting codes that allows for efficient parameters while retaining the traceability property. Our approach is based on a new property of fingerprinting codes we introduce, called public samplability. We have instantiated our generic transformation with the BE schemes of [4, 13, 20], something that enables us to produce trace and revoke schemes with novel properties. Specifically, we show (i) a trace and revoke scheme with constant private key size and short ciphertext size, (ii) the first ID-based trace and revoke scheme, (iii) the first publicly traceable scheme with constant private key size and (iv) the first trace and revoke scheme against pirate rebroadcasting attack in the public key setting.
Last updated:  2012-09-08
Dynamic Searchable Symmetric Encryption
Seny Kamara, Charalampos Papamanthou, Tom Roeder
Searchable symmetric encryption (SSE) allows a client to encrypt its data in such a way that this data can still be searched. The most immediate application of SSE is to cloud storage, where it enables a client to securely outsource its data to an untrusted cloud provider without sacrificing the ability to search over it. SSE has been the focus of active research and a multitude of schemes that achieve various levels of security and efficiency have been proposed. Any practical SSE scheme, however, should (at a minimum) satisfy the following properties: sublinear search time, security against adaptive chosen-keyword attacks, compact indexes and the ability to add and delete files efficiently. Unfortunately, none of the previously-known SSE constructions achieve all these properties at the same time. This severely limits the practical value of SSE and decreases its chance of deployment in real-world cloud storage systems. To address this, we propose the first SSE scheme to satisfy all the properties outlined above. Our construction extends the inverted index approach (Curtmola et al., CCS 2006) in several non-trivial ways and introduces new techniques for the design of SSE. In addition, we implement our scheme and conduct a performance evaluation, showing that our approach is highly efficient and ready for deployment.
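For readers unfamiliar with the inverted-index approach of Curtmola et al. that this construction extends, here is a hedged toy sketch in Python: HMAC stands in for the PRF, a hash-derived pad for the encryption, and none of the paper's dynamic (add/delete) machinery is reproduced.

```python
# Toy sketch of the static inverted-index SSE idea (Curtmola et al.) that the
# paper above extends; HMAC stands in for a PRF and XOR-with-a-pad for
# encryption. Illustrative only -- not the authors' dynamic construction.
import hmac, hashlib, os

def prf(key, msg):
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def build_index(key, docs):
    """docs: {doc_id: set_of_keywords} -> encrypted index {token: [masked ids]}."""
    index = {}
    for doc_id, words in docs.items():
        for w in words:
            token = prf(key, "search:" + w)
            pad = prf(key, "mask:%s:%d" % (w, len(index.get(token, []))))
            masked = doc_id ^ int.from_bytes(pad[:4], "big")
            index.setdefault(token, []).append(masked)
    return index

def search(key, index, w):
    """Client derives the token; the server could do the lookup without key."""
    token = prf(key, "search:" + w)
    out = []
    for i, masked in enumerate(index.get(token, [])):
        pad = prf(key, "mask:%s:%d" % (w, i))
        out.append(masked ^ int.from_bytes(pad[:4], "big"))
    return out

key = os.urandom(32)
idx = build_index(key, {1: {"cloud", "sse"}, 2: {"sse"}, 3: {"cloud"}})
print(search(key, idx, "sse"))    # -> [1, 2]
```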
Last updated:  2014-06-12
PRINCE - A Low-latency Block Cipher for Pervasive Computing Applications (Full version)
Julia Borghoff, Anne Canteaut, Tim Güneysu, Elif Bilge Kavun, Miroslav Knežević, Lars R. Knudsen, Gregor Leander, Ventzislav Nikov, Christof Paar, Christian Rechberger, Peter Rombouts, Søren S. Thomsen, Tolga Yalçın
This paper presents a block cipher that is optimized with respect to latency when implemented in hardware. Such ciphers are desirable for many future pervasive applications with real-time security needs. Our cipher, named PRINCE, allows encryption of data within one clock cycle with a very competitive chip area compared to known solutions. The fully unrolled fashion in which such algorithms need to be implemented calls for innovative design choices. The number of rounds must be moderate and rounds must have short delays in hardware. At the same time, the traditional need that a cipher has to be iterative with very similar round functions disappears, an observation that increases the design space for the algorithm. An important further requirement is that realizing decryption and encryption results in minimum additional costs. PRINCE is designed in such a way that the overhead for decryption on top of encryption is negligible. More precisely, for our cipher it holds that decryption for one key corresponds to encryption with a related key. This property, which we refer to as alpha-reflection, is of independent interest and we prove its soundness against generic attacks.
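The alpha-reflection property is easy to demonstrate on a toy construction: sandwich an involutive layer between key additions, and decryption under k becomes encryption under k XOR alpha. The sketch below is such a toy (the constant, the involution and the key size are arbitrary choices, and it is emphatically not PRINCE itself).

```python
# Toy illustration of the alpha-reflection property: decryption under key k
# equals encryption under k XOR ALPHA. This toy construction (an involutive
# middle layer sandwiched by key XORs) is NOT PRINCE itself; it only
# demonstrates why such a property can hold.
ALPHA = 0xC0AC29B7  # arbitrary 32-bit constant for the toy

def sigma(x):
    # An involution on 32 bits: bitwise complement (sigma(sigma(x)) == x).
    return x ^ 0xFFFFFFFF

def encrypt(k, x):
    return sigma(x ^ k) ^ k ^ ALPHA

def decrypt(k, y):
    # Invert encrypt directly: y = sigma(x ^ k) ^ k ^ ALPHA.
    return sigma(y ^ k ^ ALPHA) ^ k

k, msg = 0x12345678, 0xDEADBEEF
ct = encrypt(k, msg)
assert decrypt(k, ct) == msg
# Alpha-reflection: decrypting with k is encrypting with the related key k ^ ALPHA.
assert decrypt(k, ct) == encrypt(k ^ ALPHA, ct)
print("alpha-reflection holds")
```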
Last updated:  2012-09-16
An ID-Based Signcryption Scheme with Compartmented Secret Sharing for Unsigncryption
Graham Enos, Yuliang Zheng
In this paper the ID-based signcryption scheme of Li, Xin, and Hu is extended to a compartmented scheme. If an organization is partitioned into different compartments, this scheme allows any member of a specific compartment to participate in the unsigncryption; moreover, each member of a compartment has information unique to that individual. This construction is the first (to the authors’ knowledge) to combine identity-based encryption, Shamir’s threshold scheme, and signcryption into an implementable compartmented sharing scheme.
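For context, a minimal sketch of Shamir's threshold scheme, one of the three ingredients the construction combines, is given below; the field size and parameters are toy choices.

```python
# Minimal sketch of Shamir's (t, n) threshold scheme, one of the ingredients
# combined in the construction above. Toy field size; illustrative only.
import random

P = 2**31 - 1  # a Mersenne prime; we work in GF(P)

def share(secret, t, n):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
assert reconstruct(shares[1:4]) == 123456789
print("any 3 of 5 shares recover the secret")
```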
Last updated:  2012-09-07
Cryptanalysis of a recent two factor authentication scheme
Michael Scott
Very recently, Wang and Ma proposed a robust smart-card based password authentication scheme, which is claimed to be secure against a smart-card security breach. In this short note we attempt an initial cryptanalysis of this scheme.
Last updated:  2013-07-08
Invertible Polynomial Representation for Private Set Operations
Jung Hee Cheon, Hyunsook Hong, Hyung Tae Lee
In many private set operations, a set is represented by a polynomial over a ring $\Z_{\sigma}$ for a composite integer $\sigma$, where $\Z_\sigma$ is the message space of some additive homomorphic encryption. While it is useful for implementing set operations with polynomial additions and multiplications, a polynomial representation has a limitation due to the hardness of polynomial factorization over $\Z_\sigma$. That is, it is hard to recover a corresponding set from a resulting polynomial over $\Z_\sigma$ if $\sigma$ is not a prime. In this paper, we propose a new representation of a set by a polynomial over $\Z_\sigma$, in which $\sigma$ is a composite integer with {\em known factorization} but a corresponding set can be efficiently recovered from a polynomial, except with negligible probability in the security parameter. Note that $\Z_\sigma[x]$ is not a unique factorization domain, so a polynomial may be written as a product of linear factors in several ways. To exclude irrelevant linear factors, we introduce a special encoding function which supports an early abort strategy. As a result, our representation can be efficiently inverted by computing all the linear factors of a polynomial in $\Z_{\sigma}[x]$ whose roots lie in the image of the encoding function. When we consider group decryption as in most private set operation protocols, inverting polynomial representations should be done without a single party possessing the secret of the utilized additive homomorphic encryption. This is very hard for Paillier's encryption, whose message space is $\Z_N$ with unknown factorization of $N$. Instead, we circumvent this problem by using Naccache-Stern encryption with message space $\Z_\sigma$, where $\sigma$ is a smooth integer with public factorization. As an application of our representation, we obtain a constant round privacy-preserving set union protocol. Our construction improves on the complexity of the previous protocols without an honest majority. It can also be used for a constant round multi-set union protocol and a private set intersection protocol, even when decryptors do not possess a superset of the resulting set.
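The basic polynomial representation that the paper refines can be sketched in a few lines: a set becomes the monic polynomial whose roots are its elements, and set union becomes polynomial multiplication. The toy below works over a small prime field where root recovery is easy by brute force; the paper's contribution is precisely making recovery work over composite moduli, which this sketch sidesteps.

```python
# Toy version of the basic representation discussed above: a set is the monic
# polynomial whose roots are its elements. For simplicity this sketch works
# over Z_p for a small PRIME p (where root-finding is easy by brute force).
P = 10007

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def set_to_poly(s):
    poly = [1]                                  # coefficients, low degree first
    for e in s:
        poly = poly_mul(poly, [-e % P, 1])      # multiply by (x - e)
    return poly

def poly_to_set(poly):
    def ev(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(poly)) % P
    return {x for x in range(P) if ev(x) == 0}  # brute-force roots (toy sizes)

A, B = {3, 17, 42}, {17, 99}
union_poly = poly_mul(set_to_poly(A), set_to_poly(B))   # set union as product
print(poly_to_set(union_poly))   # -> {3, 17, 42, 99}
```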
Last updated:  2012-09-07
Computing endomorphism rings of abelian varieties of dimension two
Gaetan Bisson
Generalizing a method of Sutherland and the author for elliptic curves, we design a subexponential algorithm for computing the endomorphism ring structure of ordinary abelian varieties of dimension two over finite fields. Although its correctness and complexity bound rely on several assumptions, we report on practical computations showing that it performs very well and can easily handle previously intractable cases.
Last updated:  2012-09-07
Tahoe – The Least-Authority Filesystem
Zooko Wilcox-O'Hearn, Brian Warner
Tahoe is a system for secure, distributed storage. It uses capabilities for access control, cryptography for confidentiality and integrity, and erasure coding for fault-tolerance. It has been deployed in a commercial backup service and is currently operational. The implementation is Open Source.
Last updated:  2012-09-10
The Curious Case of Non-Interactive Commitments
Mohammad Mahmoody, Rafael Pass
It is well-known that one-way permutations (and even one-to-one one-way functions) imply the existence of non-interactive commitments. Furthermore, the construction is black-box (i.e., the underlying one-way function is used as an oracle to implement the commitment scheme, and an adversary attacking the commitment scheme is used as an oracle in the proof of security). We rule out the possibility of black-box constructions of non-interactive commitments from general (possibly not one-to-one) one-way functions. As far as we know, this is the first result showing a natural cryptographic task that can be achieved in a black-box way from one-way permutations but not from one-way functions. We next extend our black-box separation to constructions of non-interactive commitments from a stronger notion of one-way functions, which we refer to as \emph{hitting} one-way functions. Perhaps surprisingly, Barak, Ong, and Vadhan (Siam JoC '07) showed that there does exist a non-black-box construction of non-interactive commitments from hitting one-way functions. As far as we know, this is the first result to establish a ``separation'' between the power of black-box and non-black-box use of a primitive to implement a natural cryptographic task. We finally show that unless the complexity class NP has program checkers, the above separations extend also to non-interactive instance-based commitments, and 3-message public-coin honest-verifier zero-knowledge protocols with O(log n)-bit verifier messages. The well-known classical zero-knowledge proof for NP falls into this category.
Last updated:  2012-09-06
False Positive probabilities in q-ary Tardos codes: comparison of attacks
A. Simone, B. Skoric
We investigate False Positive (FP) accusation probabilities for q-ary Tardos codes in the Restricted Digit Model. We employ a computation method recently introduced by us, to which we refer as Convolution and Series Expansion (CSE). We present a comparison of several collusion attacks on q-ary codes: majority voting, minority voting, Interleaving, $\tilde\mu$-minimizing and Random Symbol (the q-ary equivalent of the Coin Flip strategy). The comparison is made by looking at the FP rate at approximately fixed False Negative rate. In nearly all cases we find that the strongest attack is either minority voting or $\tilde\mu$-minimizing, depending on the exact setting of parameters such as alphabet size, code length, and coalition size. Furthermore, we present results on the convergence speed of the CSE method, and we show how FP rate computations for the Random Symbol strategy can be sped up by a pre-computation step.
Last updated:  2012-09-06
Functional Encryption with Bounded Collusions via Multi-Party Computation
Sergey Gorbunov, Vinod Vaikuntanathan, Hoeteck Wee
We construct a functional encryption scheme secure against an a priori bounded polynomial number of collusions for the class of all polynomial-size circuits. Our constructions require only semantically secure public-key encryption schemes and pseudo-random generators computable by small-depth circuits (known to be implied by most concrete intractability assumptions). For certain special cases such as predicate encryption schemes with public index, the construction requires only semantically secure encryption schemes, which is clearly the minimal necessary assumption. Our constructions rely heavily on techniques from secure multiparty computation and randomized encodings. All our constructions are secure under a strong, adaptive simulation-based definition of functional encryption.
Last updated:  2012-09-06
Optimizing Segment Based Document Protection (Corrected Version)
Miroslaw Kutylowski, Maciej Gebala
In this paper we provide a corrected and generalized version of the scheme presented at SOFSEM'2012 in our paper ``Optimizing Segment Based Document Protection'' (SOFSEM 2012: Theory and Practice of Computer Science, LNCS 7147, pp. 566-575). We develop techniques for protecting documents with restricted access rights. In these documents so called \emph{segments} are encrypted. Different segments may be encrypted with different keys so that different users may be given different \emph{access rights}. The hierarchy of access rights is represented by means of a directed acyclic \emph{access graph}. The segments are encrypted with keys - where each key corresponds to one node in the access graph. The main feature of the access graph is that if there is an arc $\overrightarrow{AB}$ in the graph, then all segments labelled with $B$ can be decrypted with the key corresponding to node $A$. We show how to minimize the space overhead necessary for auxiliary keying information stored in the document. We provide an algorithm based on node-disjoint paths in the access graph and key derivation based on one-way functions. Our current solution, based on maximal weighted matchings, provides an optimal solution for creating subdocuments in the case when the frequency of creating each subdocument is known.
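The key-derivation ingredient can be sketched with a one-way hash chain: along an arc of the access graph, the child key is a hash of the parent key, so access propagates down paths but not up. The sketch below (labels and seed are illustrative) shows only this ingredient, not the paper's matching-based path optimization.

```python
# Hedged sketch of one-way key derivation along access-graph paths: if there
# is an arc A -> B, the key for B is derived from the key for A by a one-way
# function, so A's key grants access to B's segments but not vice versa.
import hashlib

def derive(parent_key, child_label):
    # One-way derivation: child key = H(parent key || child label).
    return hashlib.sha256(parent_key + child_label.encode()).digest()

# A toy path A -> B -> C in the access graph: one stored key serves the chain.
key_A = hashlib.sha256(b"master seed").digest()
key_B = derive(key_A, "B")
key_C = derive(key_B, "C")

# A user holding key_A recomputes key_B and key_C on the fly; a user holding
# only key_B can reach key_C but cannot invert the hash to recover key_A.
assert derive(derive(key_A, "B"), "C") == key_C
print("keys along the path derive from the single stored key for A")
```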
Last updated:  2013-12-28
Faster implementation of scalar multiplication on Koblitz curves
Diego F. Aranha, Armando Faz-Hernández, Julio López, Francisco Rodríguez-Henríquez
We design a state-of-the-art software implementation of field and elliptic curve arithmetic in standard Koblitz curves at the 128-bit security level. Field arithmetic is carefully crafted by using the best formulae and implementation strategies available, and the increasingly common native support for binary field arithmetic in modern desktop computing platforms. The i-th power of the Frobenius automorphism on Koblitz curves is exploited to obtain new and faster interleaved versions of the well-known $\tau$NAF scalar multiplication algorithm. The $\tau^{\lfloor m/3 \rfloor}$ and $\tau^{\lfloor m/4 \rfloor}$ maps are employed to create analogues of the 3- and 4-dimensional GLV decompositions and, in general, the $\lfloor m/s \rfloor$-th power of the Frobenius automorphism is applied as an analogue of an $s$-dimensional GLV decomposition. The effectiveness of these techniques is illustrated by timing the scalar multiplication operation for fixed, random and multiple points. To our knowledge, our library was the first to compute a random point scalar multiplication in less than $10^5$ clock cycles among all curves with or without endomorphisms defined over binary or prime fields. The results of our optimized implementation suggest a trade-off between speed, compliance with the published standards and side-channel protection. Finally, we estimate the performance of curve-based cryptographic protocols instantiated using the proposed techniques and compare our results to related work.
Last updated:  2012-12-17
Sequential Aggregate Signatures with Short Public Keys: Design, Analysis and Implementation Studies
Kwangsu Lee, Dong Hoon Lee, Moti Yung
The notion of aggregate signature has been motivated by applications and it enables any user to compress different signatures signed by different signers on different messages into a short signature. A sequential aggregate signature, in turn, is a special kind of aggregate signature that only allows a signer to add his signature to an aggregate signature in sequential order. This latter scheme has applications in diversified settings, such as reducing the bandwidth of certificate chains and secure routing protocols. Lu, Ostrovsky, Sahai, Shacham, and Waters presented the first sequential aggregate signature scheme in the standard (non-idealized, no random oracle) model. The size of their public key, however, is quite large (i.e., the number of group elements is proportional to the security parameter), and therefore they suggested as an open problem the construction of such a scheme with short keys. Schröder recently proposed a sequential aggregate signature (SAS) with short public keys using the Camenisch-Lysyanskaya signature scheme, but its security is only proven under an interactive assumption (which is considered a relaxed notion of security). In this paper, we propose the first sequential aggregate signature scheme with short public keys (i.e., a constant number of group elements) in prime order (asymmetric) bilinear groups which is secure under static assumptions in the standard model. Further, our scheme employs a constant number of pairing operations per message signing and verification operation. Technically, we start with a public key signature scheme based on the recent dual system encryption technique of Lewko and Waters. This technique cannot directly give an aggregate signature scheme since, as we observed, additional elements should be published in the public key to support aggregation. Thus, our construction is a careful augmentation of the dual system technique to allow it to support a sequential aggregate signature scheme via randomized verification. We further implemented our scheme and conducted a performance study and implementation optimization.
Last updated:  2013-08-02
Unconditionally Secure Asynchronous Multiparty Computation with Linear Communication Complexity
Ashish Choudhury, Martin Hirt, Arpita Patra
Unconditionally secure multiparty computation (MPC) allows a set of n mutually distrusting parties to securely compute an agreed function f over some finite field in the presence of a computationally unbounded adversary, who can actively corrupt any t out of the n parties. Designing an asynchronous MPC (AMPC) protocol with a communication complexity of O(n) field elements per multiplication gate is a long-standing open problem. We solve the open problem by presenting two AMPC protocols with the corruption threshold t < n / 4. Our first protocol is statistically secure (i.e. involves a negligible error in the protocol outcome) in a completely asynchronous setting and improves the communication complexity of the previous best AMPC protocol in the same setting by a factor of \Theta(n). Our second protocol is perfectly secure (i.e. error free) in a hybrid setting, where one round of communication is assumed to be synchronous, and improves the communication complexity of the previous best AMPC protocol in the hybrid setting by a factor of \Theta(n^2). Starting with the seminal work of Beaver (CRYPTO 1991), it is by now a well-known technique to evaluate multiplication gates in an MPC protocol using shared random multiplication triples. The central contribution common to both the presented protocols is a new and simple framework for generating shared random multiplication triples. All the existing protocols approach the problem by first producing shared pairs of random values, followed by computing the shared product of each pair of random values by invoking known protocols for multiplication. Our framework takes a completely different approach and avoids using the multiplication protocols that are typically communication intensive. Namely, we ask the parties to verifiably share random multiplication triples and then securely extract shared random multiplication triples unknown to the adversary. The framework is of independent interest and can be adapted to any honest majority setting.
Last updated:  2015-02-24
Garbling XOR Gates ``For Free'' in the Standard Model
Benny Applebaum
Yao's Garbled Circuit (GC) technique is a powerful cryptographic tool which allows to ``encrypt'' a circuit $C$ by another circuit $\hat{C}$ in a way that hides all information except for the final output. Yao's original construction incurs a constant overhead in both computation and communication per gate of the circuit $C$ (proportional to the complexity of symmetric encryption). Kolesnikov and Schneider (ICALP 2008) introduced an optimized variant that garbles XOR gates ``for free'' in a way that involves no cryptographic operations and no communication. This variant has become very popular and has been employed in several practical implementations leading to notable performance improvements. The security of the free-XOR optimization was originally proven in the random oracle model. In the same paper, Kolesnikov and Schneider also addressed the question of replacing the random oracle with a standard cryptographic assumption and suggested to use a hash function which achieves some form of security under correlated inputs. This claim was revisited by Choi et al. (TCC 2012) who showed that a stronger form of security is required, and proved that the free-XOR optimization can be realized based on a new primitive called \emph{circular 2-correlation hash function}. Unfortunately, it is currently unknown how to implement this primitive based on standard assumptions, and so the feasibility of realizing the free-XOR optimization in the standard model remains an open question. We resolve this question by showing that the free-XOR approach can be realized in the standard model under the \emph{learning parity with noise} (LPN) assumption. Our result is obtained in two steps: (1) We show that the hash function can be replaced with a symmetric encryption which remains secure under a combined form of related-key and key-dependent attacks; and (2) We show that such a symmetric encryption can be constructed based on the LPN assumption.
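The free-XOR invariant itself is a few lines of label algebra, sketched below: every wire's two labels differ by one global offset R, so XORing input labels yields the correct output label without any garbled ciphertexts. (The paper's actual contribution, realizing this securely from LPN, is not shown.)

```python
# Label algebra behind the Kolesnikov-Schneider free-XOR optimization: each
# wire w carries labels (W, W ^ R) for bits (0, 1) with one global secret
# offset R, so XORing the input labels yields a valid output label with no
# garbled ciphertexts at all. Illustrative only.
import os

R = int.from_bytes(os.urandom(16), "big")     # global offset, secret to garbler
A0 = int.from_bytes(os.urandom(16), "big")    # wire a, label for bit 0
B0 = int.from_bytes(os.urandom(16), "big")    # wire b, label for bit 0
C0 = A0 ^ B0                                  # output wire label for bit 0

def label(zero_label, bit):
    return zero_label ^ (R if bit else 0)

for a in (0, 1):
    for b in (0, 1):
        # The evaluator XORs whatever two labels it holds...
        out = label(A0, a) ^ label(B0, b)
        # ...and obtains exactly the output label encoding a XOR b.
        assert out == label(C0, a ^ b)
print("XOR gates garbled for free: output label = XOR of input labels")
```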
Last updated:  2012-09-05
Semantically-Secure Functional Encryption: Possibility Results, Impossibility Results and the Quest for a General Definition
Mihir Bellare, Adam O'Neill
This paper explains that SS1-secure functional encryption (FE) as defined by Boneh, Sahai and Waters implicitly incorporates security under key-revealing selective opening attacks (SOA-K). This connection helps intuitively explain their impossibility results and also allows us to prove stronger ones. To move us closer to the (laudable) goal of a general and achievable notion of FE security, we seek and provide two ``sans SOA-K'' definitions of FE security that we call SS2 and SS3. We prove various possibility results about these definitions. We view our work as a first step towards the challenging goal of a general, meaningful and achievable notion of FE security.
Last updated:  2013-04-09
RKA Security beyond the Linear Barrier: IBE, Encryption and Signatures
Mihir Bellare, Kenneth G. Paterson, Susan Thomson
We provide a framework enabling the construction of IBE schemes that are secure under related-key attacks (RKAs). Specific instantiations of the framework yield RKA-secure IBE schemes for sets of related key derivation functions that are non-linear, thus overcoming a current barrier in RKA security. In particular, we obtain IBE schemes that are RKA secure for sets consisting of all affine functions and all polynomial functions of bounded degree. Based on this we obtain the first constructions of RKA-secure schemes for the same sets for the following primitives: CCA-secure public-key encryption, CCA-secure symmetric encryption and signatures. All our results are in the standard model and hold under reasonable hardness assumptions.
Last updated:  2012-09-25
Commitments and Efficient Zero-Knowledge Proofs from Learning Parity with Noise
Abhishek Jain, Stephan Krenn, Krzysztof Pietrzak, Aris Tentes
We construct a perfectly binding string commitment scheme whose security is based on the learning parity with noise (LPN) assumption, or equivalently, the hardness of decoding random linear codes. Our scheme not only allows for a simple and efficient zero-knowledge proof of knowledge for committed values (essentially a $\Sigma$-protocol), but also for such proofs showing any kind of relation amongst committed values, i.e. proving that messages $m_0,\ldots,m_u$ are such that $m_0=C(m_1,\ldots,m_u)$ for any circuit $C$. To get soundness which is exponentially small in a security parameter $t$, and when the zero-knowledge property relies on the LPN problem with secrets of length $\ell$, our $3$ round protocol has communication complexity $O(t|C|\ell\log(\ell))$ and computational complexity of $O(t|C|\ell)$ bit operations. The hidden constants are small, and the computation consists mostly of computing inner products of bit-vectors.
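To fix ideas, here is a hedged sketch of the general shape of an LPN-style perfectly binding commitment, not the paper's exact scheme: commit as A*s XOR e XOR G*m over GF(2), with low-weight noise e and G a code whose minimum distance exceeds twice the noise weight (a repetition code in this toy). Hiding would rest on LPN, and the parameters below are far too small for security.

```python
# Hedged sketch of the *shape* of an LPN-style commitment (not the paper's
# scheme or parameters): c = A*s XOR e XOR G*m over GF(2), with s uniform,
# e of low Hamming weight, and G a code of large minimum distance (here a
# repetition code, distance N >> 2*TAU, which makes the commitment binding).
import random

N, K, TAU = 64, 16, 4   # toy: 64-bit commitments, 16-bit secrets, weight-4 noise

def parity(x):
    return bin(x).count("1") & 1

def matvec(rows, v):
    """Multiply a GF(2) matrix (list of row bitmasks) by a vector (bitmask)."""
    out = 0
    for i, row in enumerate(rows):
        out |= parity(row & v) << i
    return out

A = [random.getrandbits(K) for _ in range(N)]   # public LPN matrix

def commit(bit):
    s = random.getrandbits(K)
    e = sum(1 << p for p in random.sample(range(N), TAU))   # low-weight noise
    gm = (2**N - 1) if bit else 0     # repetition encoding of the message bit
    return matvec(A, s) ^ e ^ gm, (s, e)

def verify(c, bit, opening):
    s, e = opening
    gm = (2**N - 1) if bit else 0
    return c == matvec(A, s) ^ e ^ gm and bin(e).count("1") <= TAU

c, opening = commit(1)
assert verify(c, 1, opening)
print("commitment opens correctly; the weight bound on e enforces binding")
```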
Last updated:  2013-03-01
Constant-Overhead Secure Computation of Boolean Circuits using Preprocessing
Ivan Damgard, Sarah Zakarias
We present a protocol for securely computing a Boolean circuit $C$ in the presence of a dishonest and malicious majority. The protocol is unconditionally secure, assuming access to a preprocessing functionality that is not given the inputs to compute on. For a large number of players the work done by each player is the same as the work needed to compute the circuit in the clear, up to a constant factor. Our protocol is the first to obtain these properties for Boolean circuits. On the technical side, we develop new homomorphic authentication schemes based on asymptotically good codes with an additional multiplication property. We also show a new algorithm for verifying the product of Boolean matrices in quadratic time with exponentially small error probability, where previous methods would only give a constant error.
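The classical starting point for such probabilistic product verification is Freivalds' check, sketched below; it runs in O(n^2) time per random vector with error at most 1/2 per trial. The paper's algorithm achieves exponentially small error more cleverly and is not what is shown here.

```python
# Freivalds' check, the classical baseline for the matrix-verification result
# mentioned above: verify A*B == C over GF(2) in O(n^2) time per random
# vector, with error <= 1/2 per independent trial.
import random

def matvec(M, v):
    return [sum(M[i][j] & v[j] for j in range(len(v))) & 1 for i in range(len(M))]

def freivalds(A, B, C, trials=40):
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):   # compare A(Br) with Cr
            return False          # definitely A*B != C
    return True                   # A*B == C except with probability <= 2^-trials

A = [[1, 0], [1, 1]]
B = [[1, 1], [0, 1]]
C = [[1, 1], [1, 0]]              # the correct product of A and B over GF(2)
print(freivalds(A, B, C))                    # -> True
print(freivalds(A, B, [[1, 1], [1, 1]]))     # -> False (with overwhelming probability)
```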
Last updated:  2016-03-10
Entangled Cloud Storage
Giuseppe Ateniese, Özgür Dagdelen, Ivan Damgard, Daniele Venturi
Entangled cloud storage (Aspnes et al., ESORICS 2004) enables a set of clients to "entangle" their files into a single *clew* to be stored by a (potentially malicious) cloud provider. The entanglement makes it impossible to modify or delete a significant part of the clew without affecting *all* files encoded in the clew. A clew keeps the files in it private but still lets each client recover his own data by interacting with the cloud provider; no cooperation from other clients is needed. At the same time, the cloud provider is discouraged from altering or overwriting any significant part of the clew as this will imply that none of the clients can recover their files. We put forward the first simulation-based security definition for entangled cloud storage, in the framework of *universal composability* (Canetti, FOCS 2001). We then construct a protocol satisfying our security definition, relying on an *entangled encoding scheme* based on privacy-preserving polynomial interpolation; entangled encodings were originally proposed by Aspnes et al. as useful tools for the purpose of data entanglement. As a contribution of independent interest we revisit the security notions for entangled encodings, putting forward stronger definitions than previous work (that for instance did not consider collusion between clients and the cloud provider). Protocols for entangled cloud storage find application in the cloud setting, where clients store their files on a remote server and need to be ensured that the cloud provider will not modify or delete their data illegitimately. Current solutions, e.g., based on Provable Data Possession and Proof of Retrievability, require the server to be challenged regularly to provide evidence that the clients' files are stored *at a given time*. Entangled cloud storage provides an alternative approach where any single client operates implicitly on behalf of all others, i.e., as long as one client's files are intact, the entire remote database continues to be safe and unblemished.
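The entangled-encoding idea via polynomial interpolation can be illustrated with a toy: each client's file is a point, the clew is the unique polynomial through all points, and altering any coefficient disturbs essentially every client's recovery. The sketch below omits all of the paper's privacy and security machinery.

```python
# Toy sketch of polynomial-interpolation entanglement: each client contributes
# a point (x_i, file_i); the clew is the unique polynomial through all points;
# client i recovers its file by evaluating at its private x_i. Toy field only.
P = 2**31 - 1

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def interpolate(points):
    """Coefficients (low degree first) of the unique poly through `points` mod P."""
    n = len(points)
    coeffs = [0] * n
    for i, (xi, yi) in enumerate(points):
        num, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = poly_mul(num, [(-xj) % P, 1])
                denom = denom * (xi - xj) % P
        scale = yi * pow(denom, P - 2, P) % P
        for d in range(len(num)):
            coeffs[d] = (coeffs[d] + num[d] * scale) % P
    return coeffs

def evaluate(coeffs, x):
    return sum(c * pow(x, d, P) for d, c in enumerate(coeffs)) % P

# Three clients entangle their files (encoded as field elements) into one clew.
points = [(101, 11111), (202, 22222), (303, 33333)]   # (private x_i, file_i)
clew = interpolate(points)
assert all(evaluate(clew, x) == f for x, f in points)

# A server that alters even one coefficient corrupts every client's recovery.
tampered = list(clew)
tampered[0] = (tampered[0] + 1) % P
assert all(evaluate(tampered, x) != f for x, f in points)
print("all files intact iff the clew is intact")
```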
Last updated:  2012-09-03
Enabling 3-share Threshold Implementations for any 4-bit S-box
Sebastian Kutzner, Phuong Ha Nguyen, Axel Poschmann
Threshold Implementation (TI) is an elegant and widely accepted countermeasure against 1-st order Differential Power Analysis (DPA) in Side Channel Attacks. The 3-share TI is the most efficient version of TI, but so far, it can only be applied to 50\% of all 4-bit S-boxes. In this paper, we study the limitations of decomposition and introduce factorization to enable the 3-share TI for any optimal 4-bit S-box. We propose an algorithm which can decompose any optimal 4-bit S-box to quadratic vectorial boolean functions with a time complexity of $2^{19}$. Furthermore, we use our new methodology in combination with decomposition to optimize ciphers utilizing many different S-boxes, and, to highlight the strength of our new methodology, we construct a 3-share Threshold Implementation of SERPENT which was believed not to be possible until now. Last, we show how to implement all SERPENT S-boxes with only one mutual core.
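For readers new to TI, the classic 3-share realization of a single AND gate (in the style of Nikova et al., not this paper's decomposition or factorization algorithms) illustrates the two key properties: correctness, and non-completeness, i.e., each output share is computed without one of the input share indices.

```python
# Classic 3-share threshold implementation of one AND gate (Nikova et al.
# style), to make the TI idea concrete: each output share omits one input
# share index (non-completeness), yet the shares recombine to a AND b.
import random

def share3(x):
    """Split bit x into three shares with x1 ^ x2 ^ x3 == x."""
    x1, x2 = random.randint(0, 1), random.randint(0, 1)
    return x1, x2, x1 ^ x2 ^ x

def ti_and(a, b):
    (a1, a2, a3), (b1, b2, b3) = share3(a), share3(b)
    # Non-completeness: c1 never touches share index 1, c2 never index 2, etc.
    c1 = (a2 & b2) ^ (a2 & b3) ^ (a3 & b2)
    c2 = (a3 & b3) ^ (a3 & b1) ^ (a1 & b3)
    c3 = (a1 & b1) ^ (a1 & b2) ^ (a2 & b1)
    return c1 ^ c2 ^ c3          # recombine; in hardware the shares stay split

for a in (0, 1):
    for b in (0, 1):
        assert ti_and(a, b) == (a & b)
print("3-share AND is correct and each output share omits one input share")
```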
Last updated:  2012-09-03
On 3-share Threshold Implementations for 4-bit S-boxes
Sebastian Kutzner, Phuong Ha Nguyen, Axel Poschmann, Huaxiong Wang
One of the most promising lightweight hardware countermeasures against SCA attacks is the so-called Threshold Implementation (TI) countermeasure. In this work we resolve many of the remaining open issues towards its applicability. In particular, our contribution is three-fold: first we define which optimal (from a cryptographic point of view) S-boxes can be implemented with a 3-share TI. Second, we introduce two methodologies to efficiently implement these S-boxes. Third, as an example, we successfully apply these methodologies to PRESENT and are able to decrease the area requirements of its protected S-box by 57\%.
Last updated:  2016-07-19
On the Implausibility of Constant-Round Public-Coin Zero-Knowledge Proofs
Yi Deng, Juan Garay, San Ling, Huaxiong Wang, Moti Yung
We consider the problem of whether there exist non-trivial constant-round public-coin zero-knowledge (ZK) proofs. To date, in spite of high interest in the above, there is no definite answer to the question. We focus on the type of ZK proofs that admit a universal simulator (which handles all malicious verifiers), and show a connection between the existence of such proof systems and a seemingly unrelated “program understanding” problem: for a natural class of constant-round public-coin ZK proofs (which we call “canonical,” since all known ZK protocols fall into this category), a session prefix output by the universal simulator can actually be used to distinguish a non-trivial property of the next-step functionality of the verifier’s code. Our result can be viewed as extended new evidence against the existence of constant-round public-coin ZK proofs, since the existence of such a proof system will bring about either one of the following: (1) a positive result for the above program-understanding problem, a typical goal in reverse-engineering attempts, commonly believed to be notoriously hard, or (2) a rather unfathomable simulation strategy beyond the only known (straight-line simulation) technique for their argument counterpart, as we also argue. Earlier negative evidence on constant-round public-coin ZK proofs is Barak, Lindell and Vadhan [FOCS ’03]’s result, which was based on the incomparable assumption of the existence of certain entropy-preserving hash functions, now (due to further work) known not to be achievable from standard assumptions via black-box reduction. The core of our technical contribution is showing that there exists a single verifier step for constant-round public-coin ZK proofs whose functionality (rather than its code) is crucial for a successful simulation. This is proved by combining a careful analysis of the behavior of a set of verifiers in the above protocols and during simulation, with an improved structure-preserving version of the well-known Babai-Moran Speedup (de-randomization) Theorem, a key tool of independent interest.
Last updated:  2012-09-03
Compact Implementation and Performance Evaluation of Hash Functions in ATtiny Devices
Josep Balasch, Bariş Ege, Thomas Eisenbarth, Benoit Gérard, Zheng Gong, Tim Güneysu, Stefan Heyse, Stéphanie Kerckhof, François Koeune, Thomas Plos, Thomas Pöppelmann, Francesco Regazzoni, François-Xavier Standaert, Gilles Van Assche, Ronny Van Keer, Loïc van Oldeneel tot Oldenzeel, Ingo von Maurich
The pervasive diffusion of electronic devices in security and privacy sensitive applications has boosted research in cryptography. In this context, the study of lightweight algorithms has been a very active direction over the last years. In general, symmetric cryptographic primitives are good candidates for low-cost implementations. For example, several previous works have investigated the performances of block ciphers on various platforms. Motivated by the recent SHA3 competition, this paper extends these studies to another family of cryptographic primitives, namely hash functions. We implemented different algorithms on an ATMEL AVR ATtiny45 8-bit microcontroller, and provide their performance evaluation using different figures. All the implementations were carried out with the goal of minimizing the code size and memory utilization, and evaluated using a common interface. As part of our contribution, we additionally decided to make all the corresponding source codes available on a web page, under an open-source license. We hope that this paper provides a good basis for researchers and embedded system designers who need to include more and more functionalities in next generation smart devices.
Last updated:  2013-03-03
Succinct Malleable NIZKs and an Application to Compact Shuffles
Melissa Chase, Markulf Kohlweiss, Anna Lysyanskaya, Sarah Meiklejohn
Depending on the application, malleability in cryptography can be viewed as either a flaw or — especially if sufficiently understood and restricted — a feature. In this vein, Chase, Kohlweiss, Lysyanskaya, and Meiklejohn recently defined malleable zero-knowledge proofs, and showed how to control the set of allowable transformations on proofs. As an application, they construct the first compact verifiable shuffle, in which one such controlled-malleable proof suffices to prove the correctness of an entire multi-step shuffle. Despite these initial steps, a number of natural problems remained: (1) their construction of controlled-malleable proofs relies on the inherent malleability of Groth-Sahai proofs and is thus not based on generic primitives; (2) the classes of allowable transformations they can support are somewhat restrictive. In this paper, we address these issues by providing a generic construction of controlled-malleable proofs using succinct non-interactive arguments of knowledge, or SNARGs for short. Our construction can support very general classes of transformations, as we no longer rely on the transformations that Groth-Sahai proofs can support.
Last updated:  2012-09-03
On pseudorandomization of information-theoretically secure schemes without hardness assumptions
Koji Nuida
A recent work by Nuida and Hanaoka (ICITS 2009) provided a proof technique for the security of information-theoretically secure cryptographic schemes in which the random input tape is implemented by a pseudorandom generator (PRG). In this paper, we revisit their proof technique and generalize it by introducing a trade-off factor; the generalization contains the original proof technique as a special case and provides room for improving the preceding result. Secondly, we consider two issues of the preceding result: the first is the requirement of a hardness assumption in their proof; the second is the gap between non-uniform and uniform computational models that appears when transferring from the exact security formulation adopted in the preceding result to the usual asymptotic security. We point out that these two issues can be resolved by using a PRG proposed by Impagliazzo, Nisan and Wigderson (STOC 1994) against memory-bounded distinguishers, instead of the usual PRGs against time-bounded distinguishers. We also give a precise formulation of a computational model explained by Impagliazzo et al., and using this, we perform a numerical comparison showing that, despite the significant advantage of removing hardness assumptions, our result is still better than, or at least competitive with, the preceding result from a quantitative viewpoint. The results of this paper suggest a new motivation for using PRGs secure against distinguishers with computational constraints other than time complexity in practical situations rather than just theoretical works.
Last updated:  2012-09-03
Scalable Deniable Group Key Establishment
Kashi Neupane, Rainer Steinwandt, Adriana Suarez Corona
The popular Katz-Yung compiler from CRYPTO 2003 can be used to transform unauthenticated group key establishment protocols into authenticated ones. In this paper we present a modification of Katz and Yung's construction which maintains the round complexity of their compiler, but for "typical" unauthenticated group key establishments adds authentication in such a way that deniability is achieved as well. As an application, a deniable authenticated group key establishment with three rounds of communication can be constructed.
Last updated:  2012-09-03
Hierarchical Identity-Based (Lossy) Trapdoor Functions
Alex Escala, Javier Herranz, Benoit Libert, Carla Rafols
Lossy trapdoor functions, introduced by Peikert and Waters (STOC'08), have received a lot of attention in the last years, because of their wide range of applications in theoretical cryptography. The notion has been recently extended to the identity-based setting by Bellare \textit{et al.} (Eurocrypt'12). We provide one more step in this direction, by considering the notion of hierarchical identity-based (lossy) trapdoor functions (HIB-TDFs). Hierarchical identity-based cryptography has proved very useful both for practical applications and to establish theoretical relations with other cryptographic primitives. The notion of security for IB-TDFs put forward by Bellare \textit{et al.} easily extends to the hierarchical scenario, but an (H)IB-TDF secure in this sense is not known to generically imply other related primitives with security against adaptive-id adversaries, not even IND-ID-CPA secure encryption. Our first contribution is to define a new security property for (H)IB-TDFs. We show that functions satisfying this property imply secure cryptographic primitives in the adaptive identity-based setting: these include encryption schemes with semantic security under chosen-plaintext attacks, deterministic encryption schemes, and (non-adaptive) hedged encryption schemes that maintain some security when messages are encrypted using randomness of poor quality. We emphasize that some of these primitives were unrealized in the (H)IB setting previous to this work. As a second contribution, we describe the first pairing-based HIB-TDF realization. This is also the first example of hierarchical trapdoor function based on traditional number theoretic assumptions: so far, constructions were only known under lattice assumptions. Our HIB-TDF construction is based on techniques that differ from those of Bellare {\it et al.} in that it uses a hierarchical predicate encryption scheme as a key ingredient. The resulting HIB-TDF is proved to satisfy the new security definition, against either selective or, for hierarchies of constant depth, adaptive adversaries. In either case, we only need the underlying predicate encryption system to be selectively secure.
Last updated:  2012-09-03
Are We Compromised? Modelling Security Assessment Games
Viet Pham, Carlos Cid
Security assessments are an integral part of organisations' strategies for protecting their digital assets and critical IT infrastructure. In this paper we propose a game-theoretic modelling of a particular form of security assessment -- one which addresses the question ``are we compromised?''. We do so by extending the recently proposed game ``FlipIt'', which itself can be used to model the interaction between defenders and attackers under the Advanced Persistent Threat (APT) scenario. Our extension gives players the option to ``test'' the state of the game before making a move. This allows one to study the scenario in which organisations have the option to perform periodic security assessments of such nature, and the benefits they may bring.
Last updated:  2014-06-23
Privacy Amplification with Asymptotically Optimal Entropy Loss
Nishanth Chandran, Bhavana Kanukurthi, Rafail Ostrovsky, Leonid Reyzin
We study the problem of ``privacy amplification'': key agreement between two parties who both know a weak secret w, such as a password. (Such a setting is ubiquitous on the internet, where passwords are the most commonly used security device.) We assume that the key agreement protocol is taking place in the presence of an active computationally unbounded adversary Eve. The adversary may have partial knowledge about w, so we assume only that w has some entropy from Eve's point of view. Thus, the goal of the protocol is to convert this non-uniform secret w into a uniformly distributed string $R$ that is fully secret from Eve. R may then be used as a key for running symmetric cryptographic protocols (such as encryption, authentication, etc.). Because we make no computational assumptions, the entropy in R can come only from w. Thus such a protocol must minimize the entropy loss during its execution, so that R is as long as possible. The best previous results have entropy loss of $\Theta(\kappa^2)$, where $\kappa$ is the security parameter, thus requiring the password to be very long even for small values of $\kappa$. In this work, we present the first protocol for information-theoretic key agreement that has entropy loss LINEAR in the security parameter. The result is optimal up to constant factors. We achieve our improvement through a somewhat surprising application of error-correcting codes for the edit distance. The protocol can be extended to provide also ``information reconciliation,'' that is, to work even when the two parties have slightly different versions of w (for example, when biometrics are involved).
Last updated:  2012-09-03
Constant Ciphertext Length in CP-ABE
Nishant Doshi, Devesh Jinwala
Ciphertext policy attribute based encryption (CP-ABE) is a technique in which a user, whose secret key contains a set of attributes, is able to decrypt a ciphertext only if the attributes in the ciphertext's policy match the attributes in the secret key. The existing methods that use reasonably computable decryption policies produce ciphertexts whose size grows at least linearly with the number of attributes, with additional pairing operations during encryption and decryption. In this paper, we propose a scheme in which the ciphertext remains constant in length, irrespective of the number of attributes. Our scheme works for the threshold case: the attributes in a policy must be a subset of the attributes in a secret key. The security of the proposed scheme is based on the Decisional Bilinear Diffie-Hellman (DBDH) problem.
Last updated:  2015-02-27
Authenticity, Integrity and Proof of Existence for Long-Term Archiving: a Survey
Martín A. G. Vigil, Daniel Cabarcas, Alexander Wiesmaier, Johannes Buchmann
Electronic archives are increasingly being used to store information that needs to be available for a long time such as land register information and medical records. In order for the data in such archives to remain useful, their integrity and authenticity must be protected over their entire life span. Also, in many cases it must be possible to prove that the data existed at a certain point in time. In this paper we survey solutions that provide long-term integrity, authenticity, and proof of existence of archived data. We analyze which trust assumptions they require and compare their efficiency. Based on our analysis, we discuss open problems and promising research directions.
Last updated:  2014-01-14
Almost Perfect Algebraic Immune Functions with Good Nonlinearity
Meicheng Liu, Dongdai Lin
In the last decade, algebraic and fast algebraic attacks are regarded as the most successful attacks on LFSR-based stream ciphers. Since the notion of algebraic immunity was introduced, the properties and constructions of Boolean functions with maximum algebraic immunity have been researched in a large number of papers. However, there are few results with respect to Boolean functions with provable good immunity against fast algebraic attacks. In previous literature, only Carlet-Feng function, which is affine equivalent to discrete logarithm function, was proven to be optimal against fast algebraic attacks as well as algebraic attacks. In this paper, it is proven that a family of $2k$-variable Boolean functions, including the function recently constructed by Tang et al. [IEEE TIT 59(1): 653--664, 2013], are almost perfect algebraic immune for any integer $k\geq 3$. More exactly, they achieve optimal algebraic immunity and almost perfect immunity to fast algebraic attacks. The functions of such family are balanced and have optimal algebraic degree. A lower bound on their nonlinearity is obtained based on the work of Tang et al., which is better than that of Carlet-Feng function. It is also checked for $3\leq k\leq 9$ that the exact nonlinearity of such functions is very good, which is slightly smaller than that of Carlet-Feng function, and some functions of this family even have a slightly larger nonlinearity than Tang et al.'s function. To sum up, among the known functions with provable good immunity against fast algebraic attacks, the functions of this family make a trade-off between the exact value and the lower bound of nonlinearity.
Last updated:  2013-08-12
The low-call diet: Authenticated Encryption for call counting HSM users
Mike Bond, George French, Nigel P. Smart, Gaven J. Watson
We present a new mode of operation for obtaining authenticated encryption suited for use in banking and government environments where cryptographic services are only available via a Hardware Security Module (HSM) which protects the keys but offers a limited API. The practical problem is that despite the existence of better modes of operation, modern HSMs still provide nothing but a basic (unauthenticated) CBC mode of encryption, and since they mediate all access to the key, solutions must work around this. Our mode of operation makes only a single call to the HSM, yet provides a secure authenticated encryption scheme; authentication is obtained by manipulation of the plaintext being passed to the HSM via a call to an unkeyed hash function. The scheme offers a considerable performance improvement compared to more traditional authenticated encryption techniques which must be implemented using multiple calls to the HSM. Our new mode of operation is provided with a proof of security, on the assumption that the underlying block cipher used in the CBC mode is a strong pseudorandom permutation, and that the hash function is modelled as a random oracle.
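A hedged sketch of the single-call idea: hash the plaintext with an unkeyed hash, append the digest, and make one CBC encryption call to the HSM. In the toy below a keyed XOR pad stands in for the HSM's block cipher (the real scheme assumes a strong PRP such as AES), and the precise message formatting required by the security proof is not reproduced.

```python
# Hedged sketch of the single-HSM-call idea described above: append an unkeyed
# hash of the (padded) plaintext, then make ONE unauthenticated CBC call; the
# receiver decrypts and recomputes the hash. A toy keyed XOR pad stands in for
# the HSM's block cipher -- it is NOT secure, only structural illustration.
import hashlib, os

BLOCK = 16

def toy_block_cipher(key, block):
    # Stand-in "block cipher": XOR with a key-derived pad (an involution).
    pad = hashlib.sha256(b"toy-prp" + key).digest()[:BLOCK]
    return bytes(x ^ y for x, y in zip(block, pad))

def cbc_encrypt(key, iv, data):          # models the HSM's one exposed call
    out, prev = b"", iv
    for i in range(0, len(data), BLOCK):
        prev = toy_block_cipher(key, bytes(a ^ b for a, b in zip(data[i:i+BLOCK], prev)))
        out += prev
    return out

def ae_encrypt(key, msg):
    msg = msg + b"\x00" * (-len(msg) % BLOCK)            # toy zero padding
    tagged = msg + hashlib.sha256(msg).digest()[:BLOCK]  # unkeyed hash as "tag"
    iv = os.urandom(BLOCK)
    return iv, cbc_encrypt(key, iv, tagged)              # a single HSM call

def ae_decrypt(key, iv, ct):
    out, prev = b"", iv
    for i in range(0, len(ct), BLOCK):
        blk = ct[i:i+BLOCK]
        out += bytes(a ^ b for a, b in zip(toy_block_cipher(key, blk), prev))
        prev = blk
    msg, tag = out[:-BLOCK], out[-BLOCK:]
    if hashlib.sha256(msg).digest()[:BLOCK] != tag:
        raise ValueError("authentication failure")
    return msg

key = os.urandom(16)
iv, ct = ae_encrypt(key, b"wire transfer #1234")
assert ae_decrypt(key, iv, ct).rstrip(b"\x00") == b"wire transfer #1234"
print("decrypted and authenticated with one mock-HSM CBC call")
```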
Last updated:  2012-09-03
Updating attribute in CP-ABE: A New Approach
Nishant Doshi, Devesh Jinwala
In Ciphertext-Policy Attribute Based Encryption (CP-ABE), attributes are attached to the user's secret key and an access policy is attached to the ciphertext. Only if the attributes in a user's secret key satisfy the policy can that user decrypt the ciphertext. However, such a scenario also necessitates periodic updating of the secret key as attributes change. According to our observations, the existing attempts at doing so are not efficient. In this paper, we propose a new approach to add, update or delete the value of a particular attribute efficiently without knowledge of the other attributes.
Last updated:  2012-09-03
"Metaproofs" (and their Cryptographic Applications)
Alfredo De Santis, Moti Yung
We develop a non-interactive proof-system which we call "Metaproof" (mu-NIZK proof system); it provides a proof of "the existence of a proof to a statement". This meta-mathematical notion indeed seems redundant when we deal with proving NP statements, but in the context of zero-knowledge theory and cryptography it has a large variety of applications. Combined with another tool we develop which we call "on-line simulatable NIZK proof system", it is the key tool used to solve the open problem of the existence of a many-prover non-interactive zero-knowledge system (MP-NIZK proof system). This problem was presented by Micali when the important notion of non-interactive zero-knowledge proofs (NIZK) was first suggested and implemented for a sole prover. The solution immensely enlarges the domain of applications of the NIZK model. The work also provides a new connection between bounded (single-theorem) non-interactive zero-knowledge proofs and the unbounded (multi-theorem) one. This may help in reducing the complexity assumption upon which to base NIZK systems. Remark: This is a full version (with more details, more material, and with new proofs) of the Crypto 1990 paper on Metaproof. Over the years, the concept has been used and reinvented for specific settings beyond the original ones, by others, which has made it more useful. Recently, we were asked about this paper and about details, so here they are! For historical reasons, except for this remark, this version is presented as it was at the above-mentioned date under the above affiliations, though we did not pursue publication before!
Last updated:  2013-09-30
Protocol Misidentification Made Easy with Format-Transforming Encryption
Kevin P. Dyer, Scott E. Coull, Thomas Ristenpart, Thomas Shrimpton
Deep packet inspection (DPI) technologies provide much-needed visibility and control of network traffic using port-independent protocol identification, where a network flow is labeled with its application-layer protocol based on packet contents. In this paper, we provide the first comprehensive evaluation of a large set of DPI systems from the point of view of protocol misidentification attacks, in which adversaries on the network attempt to force the DPI to mislabel connections. Our approach uses a new cryptographic primitive called format-transforming encryption (FTE), which extends conventional symmetric encryption with the ability to transform the ciphertext into a format of our choosing. We design an FTE-based record layer that can encrypt arbitrary application-layer traffic, and we experimentally show that this forces misidentification for all of the evaluated DPI systems. This set includes a proprietary, enterprise-class DPI system used by large corporations and nation-states. We also show that using FTE as a proxy system incurs no latency overhead and as little as 16\% bandwidth overhead compared to standard SSH tunnels. Finally, we integrate our FTE proxy into the Tor anonymity network and demonstrate that it evades real-world censorship by the Great Firewall of China.
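The heart of FTE is ranking: treat the ciphertext as a large integer and emit the corresponding string of the target format's language. Real FTE unranks into regular languages via DFA-based counting; the toy below uses plain base conversion into a fixed-length alphabet and is illustrative only.

```python
# Toy of the core FTE encoding idea referenced above: treat ciphertext bytes
# as a large integer n and "unrank" it into the n-th string of a chosen
# format's language. Here the "format" is just fixed-length lowercase strings,
# handled by base conversion; real FTE unranks regular languages via DFAs.
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def unrank(n, length):
    """Return the n-th string of the language {a..z}^length."""
    out = []
    for _ in range(length):
        n, r = divmod(n, len(ALPHABET))
        out.append(ALPHABET[r])
    return "".join(out)

def rank(s):
    """Inverse of unrank: recover the integer from the formatted string."""
    n = 0
    for ch in reversed(s):
        n = n * len(ALPHABET) + ALPHABET.index(ch)
    return n

ciphertext = b"\x8f\x11\xa2\x07"              # output of any underlying cipher
n = int.from_bytes(ciphertext, "big")
encoded = unrank(n, 8)                        # 26^8 > 2^32, so 8 letters suffice
assert rank(encoded) == n
assert n.to_bytes(4, "big") == ciphertext
print(encoded, "->", ciphertext)              # wire bytes now match the format
```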
Last updated:  2012-09-03
Efficient Query Integrity for Outsourced Dynamic Databases
Qingji Zheng, Shouhuai Xu, Giuseppe Ateniese
As databases are increasingly outsourced to the cloud, data owners require various security assurances. This paper investigates one particular assurance, query integrity, by which a database querier (either the data owner or a third party) can verify that its queries were faithfully executed by the cloud server with respect to the outsourced database. Query integrity is investigated in the setting of dynamic databases, where the outsourced databases can be updated by the data owners as needed. We present a formal security definition of query integrity and a provably-secure efficient construction. Our solution improves upon the state-of-the-art solutions by additionally allowing aggregate queries and more flexible join queries. In addition, we provide better performance by eliminating a linear factor in the extra storage complexity for security purposes. Our solution also achieves a trade-off between computational and communication complexities.
Last updated:  2012-09-03
A Method for Generating Full Cycles by a Composition of NLFSRs
Elena Dubrova
Non-Linear Feedback Shift Registers (NLFSR) are a generalization of Linear Feedback Shift Registers (LFSRs) in which a current state is a non-linear function of the previous state. The interest in NLFSRs is motivated by their ability to generate pseudo-random sequences which are usually hard to break with existing cryptanalytic methods. However, it is still not known how to construct large $n$-stage NLFSRs which generate full cycles of $2^n$ possible states. This paper presents a method for generating full cycles by a composition of NLFSRs. First, we show that an $nk$-stage register with period $O(2^{2n})$ can be constructed from $k$ $n$-stage NLFSRs by adding to their feedback functions a logic block of size $O(nk)$. This logic block implements Boolean functions representing the set of pairs of states whose successors have to be exchanged in order to join cycles. Then, we show how to join all cycles into one by using one more logic block of size $O(nk^2)$ and an extra time step. The presented method is feasible for generating very large full cycles.
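The cycle-joining step can be demonstrated on a tiny register: flipping the feedback function on a conjugate pair of states (two states differing only in their first bit) exchanges their successors and merges the two cycles containing them. The brute-force toy below (feedback function and size chosen arbitrarily) shows one such join; the paper's composition of several NLFSRs is not reproduced.

```python
# Toy demonstration of cycle joining in an NLFSR: flipping the feedback on a
# conjugate state pair exchanges their successors and splices the two cycles
# containing them into one. Brute-force on a 4-stage register.
N = 4

def next_state(state, feedback):
    # state = (s0, ..., s_{N-1}); shift and insert the feedback bit.
    return state[1:] + (feedback(state),)

def cycles(feedback):
    """Enumerate the cycles of the register's state graph by brute force."""
    seen, cycs = set(), []
    for start in range(2 ** N):
        s = tuple((start >> i) & 1 for i in range(N))
        if s in seen:
            continue
        cyc = []
        while s not in seen:
            seen.add(s)
            cyc.append(s)
            s = next_state(s, feedback)
        cycs.append(cyc)
    return cycs

def f(s):
    # A non-singular nonlinear feedback: s0 XOR (s1 AND s3).
    return s[0] ^ (s[1] & s[3])

before = cycles(f)
where = {s: i for i, cyc in enumerate(before) for s in cyc}

# Find a conjugate pair (0,z...) / (1,z...) lying on two different cycles.
for k in range(2 ** (N - 1)):
    z = tuple((k >> i) & 1 for i in range(N - 1))
    if where[(0,) + z] != where[(1,) + z]:
        break

def f_joined(s, z=z):
    # Flipping f exactly on the conjugate pair exchanges their successors.
    return f(s) ^ (1 if s[1:] == z else 0)

after = cycles(f_joined)
print(len(before), "cycles ->", len(after), "cycles")
assert len(after) == len(before) - 1
```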
Last updated:  2012-09-03
On the Multiple Fault Attack on RSA Signatures with LSBs of Messages Unknown
Lidong Han, Wei Wei, Mingjie Liu
In CHES 2009, Coron, Joux, Kizhvatov, Naccache and Paillier (CJKNP) introduced a fault attack on RSA signatures with partially unknown messages. They factored the RSA modulus $N$ using a single faulty signature and increased the bound on unknown messages by a multiple fault attack; however, the complexity of the multiple fault attack is exponential in the number of faulty signatures. At RSA 2010, it was improved to run in polynomial time in the number of faults. Both previous multiple fault attacks deal with the general case in which the unknown part of the message is in the middle. This paper handles the special situation in which some least significant bits of messages are unknown. First, we describe a simple attack utilizing the technique of solving the simultaneous Diophantine approximation problem, for which the bound on the unknown message is $N^{\frac{1}{2}-\frac{1}{2\ell}}$, where $\ell$ is the number of faulty signatures. Our attacks are heuristic but very efficient in practice. Furthermore, the new bound can be extended up to $N^{\left(\frac{1}{2}\right)^{1+\frac{1}{\ell}}}$ by the Cohn-Heninger technique. A comparison between the previous attacks and the new attacks with LSBs of messages unknown is given by simulation tests.
Last updated:  2012-09-06
Desynchronization Attack on RAPP Ultralightweight Authentication Protocol
Zahra Ahmadian, Mahmoud Salmasizadeh, Mohammad Reza Aref
RAPP (RFID Authentication Protocol with Permutation) is a recently proposed efficient ultralightweight authentication protocol. The operations used in this protocol differ from those of other existing ultralightweight protocols due to the newly introduced data-dependent permutations and the avoidance of modular arithmetic and biased logical operations such as AND and OR. The designers of RAPP claimed that the protocol resists desynchronization attacks since the last message of the protocol is sent by the reader and not by the tag. This letter challenges this assumption and shows that RAPP is vulnerable to a desynchronization attack. The attack has a remarkable probability of success and is effective whether the protocol uses Hamming-weight-based or modular rotations.
Last updated:  2012-09-23
Recursive Linear and Differential Cryptanalysis of Ultralightweight Authentication Protocols
Zahra Ahmadian, Mahmoud Salmasizadeh, Mohammad Reza Aref
Privacy faces serious challenges in the ubiquitous computing world. To address this problem, several works in recent years have focused on the design and analysis of privacy-friendly ultralightweight authentication protocols. In less than a decade, many ultralightweight authentication protocols have been proposed. Although successful cryptanalyses have been published for almost all of them, most of these attacks are based on ad-hoc methods that do not extend to a large class of ultralightweight protocols, so this research area still suffers from a lack of structured cryptanalysis and evaluation methods. In this paper, we introduce new frameworks for full disclosure attacks on ultralightweight authentication protocols based on the new concepts of recursive linear and recursive differential cryptanalysis. Both exploit triangular functions in ultralightweight protocols and recover all secret data stored in the tag in a recursive manner. The recursive linear attack is applied to the Yeh et al. and SLMAP authentication protocols. This attack is passive, deterministic (i.e., the attacker retrieves all the secrets with probability one), and requires only a single authentication session. The recursive differential attack is more powerful and can be applied to protocols on which the linear attack may not work. We show the effectiveness of this attack on the LMAP++ and SASI authentication protocols. This differential attack is probabilistic, active only in the sense that the attacker needs to block some specific messages, and requires a few authentication sessions.
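To illustrate what makes triangular functions amenable to such recursive recovery, consider this minimal Python sketch (a toy, not the attack on any of the protocols above): for a triangular operation such as addition modulo $2^n$, bit $i$ of the output depends only on bits $0..i$ of the inputs, so an attacker can confirm one key bit at a time against observed input/output pairs. Plain modular addition could of course be inverted directly; the bit-by-bit recursion is what generalizes to compositions of triangular operations.

```python
import random

def recover_key(pairs, n):
    # Recover k from pairs (x, (x + k) mod 2^n), one bit per iteration:
    # bit i of the sum depends only on bits 0..i of x and k, so a candidate
    # is kept iff it reproduces the low i+1 bits of every observation.
    k = 0
    for i in range(n):
        for bit in (0, 1):
            cand = k | (bit << i)
            mod = 1 << (i + 1)
            if all(((x + cand) - y) % mod == 0 for x, y in pairs):
                k = cand
                break
    return k

n = 16
secret = random.randrange(1 << n)
pairs = [(x, (x + secret) % (1 << n)) for x in random.sample(range(1 << n), 4)]
assert recover_key(pairs, n) == secret
print("recovered key:", hex(recover_key(pairs, n)))
```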
Last updated:  2012-08-22
Designated Verifier Threshold Proxy Signature Scheme without Random Oracles
Mohammad Beheshti-Atashgah, Majid Bayat, Mahmoud Gardeshi, Mohammad Reza Aref
In a $(t,n)$ designated verifier threshold proxy signature scheme, an original signer can delegate his/her signing power to $n$ proxy signers such that any $t$ or more out of the $n$ proxy signers can sign messages on behalf of the original signer, but $t-1$ or fewer of the proxy signers cannot generate a valid proxy signature. The signature is issued for a designated receiver, and therefore only the designated receiver can validate the proxy signature. In this paper, we propose a new designated verifier threshold proxy signature scheme and show that the proposed scheme is provably secure in the standard model. The security of the proposed scheme is based on the GBDH assumption, and the scheme satisfies all the security requirements of threshold proxy signature schemes.
Last updated:  2012-08-22
Short communication: An interpretation of the Linux entropy estimator
Benjamin Pousse
The Linux random number generator (LRNG) aims to produce random numbers with all the limitations imposed by a deterministic machine. Two recent analyses exist for this generator, and both provide strong cryptographic details about the LRNG. However, both fail to give a mathematical explanation of the embedded entropy estimator. In this paper we propose an interpretation using Newton polynomial interpolation.
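For reference, the estimator in question credits entropy from first-, second- and third-order differences of event timestamps, which are exactly the forward differences a Newton polynomial extrapolation of the next timestamp would use. Below is a simplified Python sketch of this logic (modeled on the structure of the kernel's add_timer_randomness as we understand it, not the exact kernel code; the cap of 11 bits is likewise an assumption taken from the kernel's behavior):

```python
def entropy_estimate(times):
    # Entropy credit (bits) for the newest timestamp, from the minimum of
    # the absolute first, second and third differences (Newton forward
    # differences of the timestamp sequence).
    if len(times) < 4:
        return 0
    t3, t2, t1, t0 = times[-4:]
    d1 = t0 - t1                       # first difference
    d2 = d1 - (t1 - t2)                # second difference
    d3 = d2 - ((t1 - t2) - (t2 - t3))  # third difference
    m = min(abs(d1), abs(d2), abs(d3))
    return min(max(m, 1).bit_length() - 1, 11)  # floor(log2), capped

print(entropy_estimate([0, 100, 210, 290, 405]))  # prints 5
```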
Last updated:  2012-08-22
Computational Soundness without Protocol Restrictions
Michael Backes, Ankit Malik, Dominique Unruh
The abstraction of cryptographic operations by term algebras, called Dolev-Yao models, is essential in almost all tool-supported methods for verifying security protocols. Recently, significant progress was made in establishing computational soundness results: these results prove that Dolev-Yao style models can be sound with respect to actual cryptographic realizations and security definitions. However, these results came at the cost of imposing various constraints on the set of permitted security protocols: e.g., dishonestly generated keys must not be used, key cycles need to be avoided, and many more. In a nutshell, the cryptographic security definitions did not adequately capture these cases, but were considered carved in stone; in contrast, the symbolic abstractions were bent to reflect cryptographic features and idiosyncrasies, thereby requiring adaptations of existing verification tools. In this paper, we pursue the opposite direction: we consider a symbolic abstraction for public-key encryption and identify two cryptographic definitions called PROG-KDM (programmable key-dependent message) security and MKE (malicious-key extractable) security that we jointly prove to be sufficient for obtaining computational soundness without imposing assumptions on the protocols using this abstraction. In particular, dishonestly generated keys obtained from the adversary can be sent, received, and used. The definitions can be met by existing cryptographic schemes in the random oracle model. This yields the first computational soundness result for trace-properties that holds for arbitrary protocols using this abstraction (in particular permitting the sending and receiving of dishonestly generated keys), and that is accessible to all existing tools for reasoning about Dolev-Yao models without further adaptations.
Last updated:  2013-12-19
Exploiting Collisions in Addition Chain-based Exponentiation Algorithms Using a Single Trace
Neil Hanley, HeeSeok Kim, Michael Tunstall
Public key cryptographic algorithms are typically based on group exponentiation algorithms where the exponent is private. In a collision attack, an adversary typically seeks to determine whether two operations in an exponentiation have the same input. In this paper we extend this to an adversary who seeks to determine whether the output of one operation is used as the input to another. We describe implementations of these attacks against a 192-bit scalar multiplication over an elliptic curve that require only a single power consumption trace to succeed with high probability. Moreover, our attacks do not require any knowledge of the input to the exponentiation algorithm. They are therefore applicable to algorithms such as EC-DSA, where the exponent is ephemeral, or to implementations where the exponent is blinded. Moreover, we define attacks against exponentiation algorithms that are considered to be resistant to collision attacks, and prove that collision attacks are applicable to all addition chain-based exponentiation algorithms. Hence, we demonstrate that a side-channel resistant implementation of a group exponentiation algorithm will require countermeasures that introduce enough noise that an attack is not practical.
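The core decision in such attacks, namely whether two operations processed related data, can be illustrated with a small hedged sketch: below, two simulated power-trace segments are flagged as colliding when their Pearson correlation is high. Segment extraction, the noise model and the threshold are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def collides(seg_a, seg_b, threshold=0.8):
    # Flag a likely internal collision: two sub-traces that processed the
    # same intermediate value correlate strongly, unrelated ones do not.
    return np.corrcoef(seg_a, seg_b)[0, 1] > threshold

rng = np.random.default_rng(1)
shared = rng.normal(size=500)              # leakage of a shared intermediate
a = shared + 0.3 * rng.normal(size=500)    # two noisy observations of it
b = shared + 0.3 * rng.normal(size=500)
c = rng.normal(size=500)                   # an unrelated operation
print(collides(a, b), collides(a, c))      # True False (with high probability)
```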
Last updated:  2012-10-02
Cryptanalysis of Two Dynamic ID-based Remote User Authentication Schemes for Multi-Server Architecture
Ding Wang, Chun-guang Ma, De-li Gu, Zhen-shan Cui
Understanding security failures of cryptographic protocols is the key to both patching existing protocols and designing future schemes. In NSS'10, Shao and Chin showed that Hsiang and Shih's dynamic ID-based remote user authentication scheme for multi-server environments is vulnerable to a server spoofing attack and fails to preserve user anonymity, and they further proposed an improved version claimed to be efficient and secure. In this study, however, we demonstrate that, although Shao-Chin's scheme possesses many attractive features, it still cannot achieve the claimed security goals, and we report the following flaws: (1) it cannot withstand an offline password guessing attack under their non-tamper-resistance assumption on the smart card; (2) it fails to provide user anonymity; (3) it is prone to a user impersonation attack. More recently, Li et al. found that Sood et al.'s dynamic ID-based authentication protocol for multi-server architectures is still vulnerable to several kinds of attacks and presented a new scheme that attempts to overcome the identified weaknesses. Notwithstanding their intentions, Li et al.'s scheme has still been found vulnerable to various known attacks by researchers. In this study, we perform a further cryptanalysis and uncover two other vulnerabilities: (1) it cannot achieve user anonymity, the essential goal of a dynamic ID-based scheme; (2) it is susceptible to an offline password guessing attack. The proposed cryptanalysis discourages any use of the two schemes under investigation in practice and reveals some subtleties and challenges in designing this type of scheme.
Last updated:  2012-08-21
An Efficient Signcryption Scheme from q-Diffie-Hellman Problems
Jayaprakash Kar
Confidentiality and authenticity are two fundamental security requirements of public key cryptography, achieved by encryption schemes and digital signatures respectively. Here we present a provably secure signcryption scheme in the random oracle model, obtained by modifying Libert et al.'s scheme [2]. Tan [1] proved that Libert et al.'s scheme is not secure against non-adaptive chosen ciphertext attacks, and it has also been shown that the semantically secure symmetric encryption scheme used in it is not sufficient to guarantee security against adaptive chosen ciphertext attacks. Our modified version of Libert et al.'s scheme is more efficient and more secure. Its security is proven under two assumptions, namely the Strong Diffie-Hellman (SDH) and Diffie-Hellman Inversion (DHI) assumptions, in the random oracle model.
Last updated:  2012-08-22
Approaches for the Parallelization of Software Implementation of Integer Multiplication
Vladislav Kovtun, Andrew Okhrimenko
In this paper we consider several approaches for increasing the performance of software implementations of integer multiplication on 32-bit and 64-bit platforms via parallelization. The main idea is the delayed-carry mechanism which the authors proposed earlier [11]. Delaying carries removes the dependency between loop iterations that accumulate the sums of partial products, which allows the iterations to be executed in parallel in separate threads. Upon completion of the accumulation threads, the final result must be corrected by assimilating the carries. The first approach optimizes parallelization for two execution threads; the second approach is an evolution of the first and is oriented towards three or more execution threads. The proposed approaches increase the total computational complexity of the algorithm compared to a single execution thread, but decrease the total execution time on multi-core CPUs.
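The following minimal Python sketch illustrates the delayed-carry idea (our illustration, not the code of [11]): column sums of 32-bit limb products are accumulated with no carry propagation, so disjoint ranges of i can be handed to separate threads, and a single sequential carry-assimilation pass finishes the job. Python integers are unbounded; a C implementation would additionally have to bound how many products a 64-bit accumulator can absorb, which is part of what the delayed-carry scheme manages.

```python
def mul_delayed_carry(a, b):
    # a, b: little-endian lists of 32-bit limbs; returns limbs of a*b.
    cols = [0] * (len(a) + len(b))
    for i, ai in enumerate(a):        # iterations are independent: no carry
        for j, bj in enumerate(b):    # flows between them, so rows of i can
            cols[i + j] += ai * bj    # be accumulated in separate threads
    carry = 0                         # final sequential carry assimilation
    for k in range(len(cols)):
        v = cols[k] + carry
        cols[k] = v & 0xFFFFFFFF
        carry = v >> 32
    assert carry == 0                 # cols was sized to hold the product
    return cols

x = [0xFFFFFFFF, 0x12345678]          # 0x12345678FFFFFFFF, little-endian
y = [0x9ABCDEF0, 0x00000001]          # 0x19ABCDEF0
limbs = mul_delayed_carry(x, y)
value = sum(l << (32 * i) for i, l in enumerate(limbs))
assert value == 0x12345678FFFFFFFF * 0x19ABCDEF0
print([hex(l) for l in limbs])
```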
Last updated:  2012-12-07
Improved Security Bounds for Key-Alternating Ciphers via Hellinger Distance
John Steinberger
A $t$-round key-alternating cipher can be viewed as an abstraction of AES. It defines a cipher $E$ from $t$ fixed public permutations $P_1, \ldots, P_t : \{0,1\}^n \rightarrow \{0,1\}^n$ and a key $k = k_0\Vert \cdots \Vert k_t \in \{0,1\}^{n(t+1)}$ by setting $E_{k}(x) = k_t \oplus P_t(k_{t-1} \oplus P_{t-1}(\cdots k_1 \oplus P_1(k_0 \oplus x) \cdots))$. The indistinguishability of $E_k$ from a truly random permutation by an adversary who also has oracle access to the (public) random permutations $P_1, \ldots, P_t$ was investigated for $t = 1$ by Even and Mansour and, much later, by Bogdanov et al. The former proved indistinguishability up to $2^{n/2}$ queries for $t = 1$ while the latter proved indistinguishability up to $2^{2n/3}$ queries for $t \geq 2$ (ignoring low-order terms). Our contribution is to improve the analysis of Bogdanov et al$.$ by showing security up to $2^{3n/4}$ queries for $t \geq 3$. Given that security cannot exceed $2^{\frac{t}{t+1}n}$ queries, this in particular achieves a tight bound for the case $t = 3$, whereas, previously, tight bounds had only been achieved for $t = 1$ (by Even and Mansour) and for $t = 2$ (by Bogdanov et al$.$). Our main technique is an improved analysis of the elegant \emph{sample distinguishability} game introduced by Bogdanov et al. More specifically, we succeed in eliminating adaptivity by considering the Hellinger advantage of an adversary, a notion that we introduce here. To our knowledge, our result constitutes the first time Hellinger distance (a standard measure of ``distance'' between random variables, and a cousin of statistical distance) is used in a cryptographic indistinguishability proof.
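For concreteness, a minimal Python sketch of the construction just defined, on toy parameters ($n = 8$, $t = 3$, with pseudorandomly sampled public permutations standing in for the fixed public $P_i$):

```python
import random

n, t = 8, 3
rng = random.Random(0)
P = []
for _ in range(t):                     # t fixed public random permutations
    p = list(range(1 << n))
    rng.shuffle(p)
    P.append(p)

def E(keys, x):
    # keys = [k_0, ..., k_t]: E_k(x) = k_t xor P_t(... P_1(k_0 xor x) ...)
    y = x ^ keys[0]
    for i in range(t):
        y = P[i][y] ^ keys[i + 1]
    return y

k = [rng.randrange(1 << n) for _ in range(t + 1)]
print([E(k, x) for x in range(4)])
```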
Last updated:  2013-04-01
Short Signatures From Diffie-Hellman: Realizing Short Public Key
Jae Hong Seo
Efficient signature schemes whose security relies on reliable assumptions are important. There are few schemes in the standard model based on standard assumptions such as Diffie-Hellman (DH). We present a new approach for (hash-and-sign) DH-based signature schemes in the standard model. First, we combine two known techniques, programmable hashes and a tag-based signature scheme, to obtain a short signature scheme with a somewhat short public key of $\Theta(\frac{\lambda}{\log\lambda})$ group elements. Then, we develop a new technique for an {\em asymmetric trade} between the public key and the random tags, which are part of the signatures. Roughly speaking, we can dramatically reduce the public key size by adding one field element to each signature. More precisely, our proposal produces a public key of $\Theta(\sqrt{\frac{\lambda}{\log \lambda}})$ group elements, where $\lambda$ is the security parameter. The signature size is still short, requiring two elements in a group of order $p$ and two integers in $\mathbb{Z}_p$. In our approach, we can guarantee security against adversaries that make an a-priori bounded number of queries to the signing oracle (we call this {\em bounded CMA}); i.e., the maximum number $q$ of allowable signing queries is prescribed at parameter generation time. Note that for polynomial $q$, we limit ourselves to dealing with only polynomial-time reductions in all security proofs.
Last updated:  2012-08-21
Mix-Compress-Mix Revisited: Dispensing with Non-invertible Random Injection Oracles
Mohammad Reza Reyhanitabar, Willy Susilo
We revisit the problem of building dual-model secure (DMS) hash functions that are simultaneously provably collision resistant (CR) in the standard model and provably pseudorandom oracle (PRO) in an idealized model. Designing a DMS hash function was first investigated by Ristenpart and Shrimpton (ASIACRYPT 2007); they put forth a generic approach, called Mix-Compress-Mix (MCM), and showed the feasibility of the MCM approach with a secure (but inefficient) construction. An improved construction was later presented by Lehmann and Tessaro (ASIACRYPT 2009). The construction proposed by Ristenpart and Shrimpton requires a non-invertible (pseudo-) random injection oracle (PRIO), and the Lehmann-Tessaro construction requires a non-invertible random permutation oracle (NIRP). Despite showing the feasibility of realizing PRIO and NIRP objects in theory (using ideal ciphers and (trapdoor) one-way permutations), these constructions suffer from several efficiency and implementation issues, as pointed out by their designers and briefly reviewed in this paper. In contrast to the previous constructions, we show that constructing a DMS hash function does not require any PRIO or NIRP, and hence there is no need for additional (trapdoor) one-way permutations. In fact, Ristenpart and Shrimpton posed the question of whether MCM is secure under easy-to-invert mixing steps as an open problem in their paper. We resolve this question in the affirmative in the fixed-input-length (FIL) hash setting. More precisely, we show that one can sandwich a provably CR function, which is sufficiently compressing, between two random invertible permutations to build a provably DMS compression function. Any multi-property-preserving (MPP) domain extender that preserves CR and PRO can then be used to convert such a DMS compression function into a full-fledged DMS hash function. Interestingly, there are efficient off-the-shelf candidates for all three ingredients (provably CR compression functions, random invertible permutations, and MPP domain extenders) from which one can choose to implement such a DMS hash function in practice. Further, we also explain the implementation options as well as a concrete instantiation.
Last updated:  2012-08-21
Cryptanalysis on a novel unconditionally secure oblivious polynomial evaluation protocol
Wang Qinglong, Xu Li
Vanishree et al. proposed a novel unconditionally secure oblivious polynomial evaluation protocol and claimed that it fulfills both the sender's and the receiver's security. Here, this protocol is cryptanalyzed. We find that it has a fatal flaw: it cannot ensure the receiver's security at all. We present the detailed analysis.
Last updated:  2012-08-21
Improved Key Recovery Attacks on Reduced-Round AES in the Single-Key Setting
Patrick Derbez, Pierre-Alain Fouque, Jérémy Jean
In this paper, we revisit meet-in-the-middle attacks on AES in the single-key model and improve on the attacks of Dunkelman, Keller and Shamir from Asiacrypt 2010. We present the best attack on 7 rounds of AES-128, with data/time/memory complexities below $2^{100}$. Moreover, we are able to extend the number of rounds to reach attacks on 8 rounds for both AES-192 and AES-256. This gives the best attacks on those two versions, with a data complexity of $2^{107}$ chosen plaintexts, a memory complexity of $2^{96}$, and a time complexity of $2^{172}$ for AES-192 and $2^{196}$ for AES-256. Finally, we also describe the best attack on 9 rounds of AES-256, with $2^{120}$ chosen plaintexts and time and memory complexities of $2^{203}$. All these attacks have been found by carefully studying the number of reachable multisets in Dunkelman et al.'s attacks.
Last updated:  2012-08-21
A j-lanes tree hashing mode and j-lanes SHA-256
Shay Gueron
j-lanes hashing is a tree mode that splits an input message into j slices, computes j independent digests of the slices, and outputs the hash value of their concatenation. We demonstrate the performance advantage of j-lanes hashing on SIMD architectures by coding a 4-lanes SHA-256 implementation and measuring its performance on the latest 3rd Generation Intel® Core™. For messages ranging from 2KB to 132KB in length, the 4-lanes SHA-256 is between 1.5 and 1.97 times faster than the fastest publicly available implementation (that we are aware of), and between 1.9 and 2.5 times faster than OpenSSL 1.0.1c. For long messages, there is no significant performance difference between different choices of j. We show that the 4-lanes SHA-256 is faster than the two SHA3 finalists (BLAKE and Keccak) that have a published tree mode implementation. We explain why j-lanes hashing will be even faster on the future AVX2 architecture with 256-bit registers. This suggests that standardizing a tree mode for hash functions (SHA-256 in particular) would deliver significant performance benefits for a multitude of algorithms and usages.
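A minimal Python sketch of the mode follows. This is our illustration only: the slices here are contiguous and the final call carries no mode or length encoding, whereas the paper's SIMD implementation interleaves the message across lanes.

```python
import hashlib

def j_lanes_hash(msg: bytes, j: int = 4) -> bytes:
    # Split into j slices, hash each independently (these j compressions
    # are what SIMD lanes compute in parallel), then hash the concatenation.
    step = -(-len(msg) // j)          # ceil(len/j)
    slices = [msg[i * step:(i + 1) * step] for i in range(j)]
    digests = b"".join(hashlib.sha256(s).digest() for s in slices)
    return hashlib.sha256(digests).digest()

print(j_lanes_hash(b"x" * 10000).hex())
```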
Last updated:  2012-08-18
Efficient Signatures of Knowledge and DAA in the Standard Model
David Bernhard, Georg Fuchsbauer, Essam Ghadafi
Direct Anonymous Attestation (DAA) is one of the most complex cryptographic protocols deployed in practice. It allows an embedded secure processor known as a Trusted Platform Module (TPM) to attest to the configuration of its host computer without violating the owner's privacy. DAA has been standardized by the Trusted Computing Group. The security of the DAA standard and all existing schemes is analyzed in the random oracle model. We provide the first constructions of DAA in the standard model, that is, without relying on random oracles. As a building block for our schemes, we construct the first efficient standard-model signatures of knowledge, which have many applications beyond DAA.
Last updated:  2012-11-25
On the Semantic Security of Functional Encryption Schemes
Manuel Barbosa, Pooya Farshim
Functional encryption (FE) is a powerful cryptographic primitive that generalizes many asymmetric encryption systems proposed in recent years. Syntax and security definitions for general FE were recently proposed by Boneh, Sahai, and Waters (BSW) (TCC 2011) and independently by O'Neill (ePrint 2010/556). In this paper we revisit these definitions, identify several shortcomings in them, and propose a new definitional approach that overcomes these limitations. Our definitions display good compositionality properties and allow us to obtain new feasibility and impossibility results for adaptive token-extraction attack scenarios that shed further light on the potential reach of general FE for practical applications. The main contributions of the paper are the following: - We show that the BSW definition of semantic security fails to reject intuitively insecure FE schemes where a ciphertext leaks more about an encrypted message than that which can be recovered from an image under the supported functionality. Our definition (as O'Neill's) does not suffer from this problem. - We introduce an orthogonal notion of \emph{setup security} that rejects all FE schemes where the master secret key may give unwanted power to the TA, allowing the recovery of extra information from images under the supported functionality. We prove FE schemes supporting \emph{all-or-nothing} functionalities are intrinsically setup-secure and further show that many well-known functionalities \emph{are} all-or-nothing. - We extend the equivalence result of O'Neill between indistinguishability and semantic security to restricted \emph{adaptive} token-extraction attacks (the standard notion of security for, e.g., IBE and ABE schemes). We establish that this equivalence holds for the large class of all-or-nothing functionalities. Conversely, we show that the proof technique used to establish this equivalence cannot be applied to schemes supporting a one-way function. - We show that the notable \emph{inner-product} functionality introduced by Katz, Sahai, and Waters (EUROCRYPT 2008) can be used to encode a one-way function under the Small Integer Solution (SIS) problem, and hence natural approaches to prove its (restricted) adaptive security fail. This complements the equivalence result of O'Neill for the non-adaptive case, and leaves open the question of proving the semantic security of existing inner-product encryption schemes.
Last updated:  2013-01-28
Sender Equivocable Encryption Schemes Secure against Chosen-Ciphertext Attacks Revisited
Zhengan Huang, Shengli Liu, Baodong Qin
In Eurocrypt 2010, Fehr et al. proposed the first sender equivocable encryption scheme secure against chosen-ciphertext attack (NC-CCA) and proved that NC-CCA security implies security against selective opening chosen-ciphertext attack (SO-CCA). The NC-CCA security proof of the scheme relies on the security against substitution attacks of a new primitive, the ``cross-authentication code''. However, the security of the cross-authentication code cannot be guaranteed when all the keys used in the code are exposed. Our key observation is that in the NC-CCA security game, the randomness used in the generation of the challenge ciphertext is exposed to the adversary. This random information can be used to recover all the keys involved in the cross-authentication code and to forge a ciphertext (as in a substitution attack on the cross-authentication code) that is different from but related to the challenge ciphertext. The response of the decryption oracle to the forged ciphertext leaks information, and this leaked information can be employed by an adversary to spoil the NC-CCA security proof of Fehr et al.'s scheme encrypting multi-bit plaintexts. In this paper, we provide a security analysis of Fehr et al.'s scheme, showing that its NC-CCA security proof is flawed, by presenting an attack. We point out that Fehr et al.'s scheme encrypting single-bit plaintexts can be refined to achieve NC-CCA security, free of any cross-authentication code. We then introduce a strong notion of cross-authentication code, apply it to Fehr et al.'s scheme, and show that the new version of the scheme achieves NC-CCA security for multi-bit plaintexts.
Last updated:  2013-06-06
On the Simplicity of Converting Leakages from Multivariate to Univariate – Case Study of a Glitch-Resistant Masking Scheme –
Amir Moradi, Oliver Mischke
Several masking schemes to protect cryptographic implementations against side-channel attacks have been proposed. A few consider glitches and provide security proofs in the presence of this inherent phenomenon of logic circuits. One such scheme, based on multi-party computation protocols and utilizing Shamir's secret sharing scheme, was presented at CHES 2011. It aims at providing security for hardware implementations (mainly of AES) against sophisticated side-channel attacks that also take glitches into account. One part of this article deals with the practical issues and relevance of the aforementioned masking scheme. Following the recommendations given in the extended version of the mentioned article, we first provide a guideline on how to implement the scheme for the simplest settings. Constructing an exemplary design of the scheme, we provide practical side-channel evaluations based on a Virtex-5 FPGA. Our results demonstrate that the implemented scheme is indeed secure against univariate power analysis attacks given a basic measurement setup. In the second part of this paper we show how very simple changes to the measurement setup open the possibility of exploiting multivariate leakages while still performing a univariate attack. Using these techniques, the scheme under evaluation can be defeated using only a moderate number of measurements. This is applicable not only to the scheme showcased here, but also to most other known masking schemes in which the shares of sensitive values are processed in adjacent clock cycles.
Last updated:  2012-08-18
A Quasigroup Based Random Number Generator for Resource Constrained Environments
Matthew Battey, Abhishek Parakh
This paper proposes a pseudo-random number generator (PRNG) based on quasigroups. The proposed PRNG has low memory requirements, is autonomous, and the quality of its output stream of random numbers is better than that of other available standard PRNG implementations (commercial and open source) in the majority of the tests. Comparisons are done using the benchmark NIST Statistical Test Suite and compression tools. Results are presented for the quality of the raw stream of random numbers and for encryption results using these random numbers.
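As an illustration of the general approach (our sketch, not necessarily the paper's construction), the classic quasigroup string e-transformation can be turned into a byte-stream generator. The quasigroup below is a Latin square obtained by permuting the addition table of Z_256; the seed handling and output schedule are assumptions for brevity.

```python
import random

M = 256
_rng = random.Random(42)
sigma = list(range(M))
_rng.shuffle(sigma)

def q(a, b):
    # Quasigroup operation: sigma((a+b) mod M) is a Latin square, since
    # fixing either argument leaves a bijection in the other.
    return sigma[(a + b) % M]

def prng(seed: bytes, nbytes: int, leader: int = 1) -> bytes:
    state = list(seed)
    out = bytearray()
    while len(out) < nbytes:
        last = leader
        for i in range(len(state)):   # one e-transformation pass
            last = q(last, state[i])
            state[i] = last
        out.extend(state)
    return bytes(out[:nbytes])

print(prng(b"key material", 16).hex())
```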
Last updated:  2012-09-13
Some Connections Between Primitive Roots and Quadratic Non-Residues Modulo a Prime
Sorin Iftene
In this paper we present some interesting connections between primitive roots and quadratic non-residues modulo a prime. Using these connections, we propose some deterministic polynomial algorithms for generating primitive roots for primes of special forms (for example, for safe primes).
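One concrete instance of such a connection, matching the safe-prime case mentioned above: for a safe prime $p = 2q+1$, an element is a primitive root if and only if it is a quadratic non-residue other than $p-1$, so Euler's criterion yields a deterministic procedure. The sketch below is our illustration; the scan over small $g$ is an illustrative choice.

```python
def primitive_root_safe_prime(p):
    # For a safe prime p = 2q + 1 the element orders are 1, 2, q, 2q, so a
    # quadratic non-residue g != p-1 must have order 2q, i.e. be primitive.
    q = (p - 1) // 2
    for g in range(2, p - 1):
        if pow(g, q, p) == p - 1:     # Euler's criterion: g is a non-residue
            return g

print(primitive_root_safe_prime(23))  # 23 = 2*11 + 1; prints 5
```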
Last updated:  2012-08-18
Perfect Keyword Privacy in PEKS Systems
Mototsugu Nishioka
This paper presents a new security notion, called \emph{perfect keyword privacy (PKP)}, for non-interactive public-key encryption with keyword search (PEKS) \cite{bcop04}. Although the conventional security notion for PEKS guarantees that a searchable ciphertext leaks no information about keywords, it gives no guarantee concerning leakage of a keyword from the trapdoor. PKP is a notion for overcoming this fatal deficiency. Since the trapdoor has verification functionality, the popular concept of ``indistinguishability'' is inadequate for capturing the notion of keyword privacy from the trapdoor. Hence, our formalization of PKP depends on the idea of formalizing a perfectly one-way hash function \cite{can97,cmr98}. We also present \emph{IND-PKP security} as a useful notion for showing that a given PEKS scheme has PKP. Furthermore, we present PKP+ and IND-PKP+ as enhanced notions of PKP and IND-PKP, respectively. Finally, we present several instances of an IND-PKP or IND-PKP+ secure PEKS scheme, in either the random oracle model or the standard model.
Last updated:  2012-10-01
Functional Encryption: New Perspectives and Lower Bounds
Shweta Agrawal, Sergey Gorbunov, Vinod Vaikuntanathan, Hoeteck Wee
Functional encryption is an emerging paradigm for public-key encryption that enables fine-grained control of access to encrypted data. In this work, we present new perspectives on security definitions for functional encryption, as well as new lower bounds on what can be achieved. Our main contributions are as follows: * We show a lower bound for functional encryption that satisfies a weak (non-adaptive) simulation-based security notion, via pseudo-random functions. This is the first lower bound that exploits unbounded collusions in an essential way. * We put forth and discuss a simulation-based notion of security for functional encryption, with an unbounded simulator (called USIM). We show that this notion interpolates indistinguishability and simulation-based security notions, and has strong correlations to results and barriers in the zero-knowledge and multi-party computation literature.
Last updated:  2012-08-18
New results on nonexistence of generalized bent functions
Yupeng Jiang, Yingpu Deng
We obtain two kinds of new results on the nonexistence of generalized bent functions. The first is based on Feng's results and uses Schmidt's field descent method. For the second kind, considering a special property of the field $\mathbb{Q}(\zeta_{23^e})$, we obtain new nonexistence results for generalized bent functions of type $[3,2\cdot23^e]$.
Last updated:  2012-08-18
Computational Entropy and Information Leakage
Benjamin Fuller, Leonid Reyzin
We investigate how information leakage reduces the computational entropy of a random variable $X$. Recall that HILL and metric computational entropy are parameterized by quality (how distinguishable is $X$ from a variable $Z$ that has true entropy) and quantity (how much true entropy is there in $Z$). We prove an intuitively natural result: conditioning on an event of probability $p$ reduces the quality of metric entropy by a factor of $p$ and the quantity of metric entropy by $\log 1/p$ (note that the reduction in quantity and quality is the same, because the quantity of entropy is measured on a logarithmic scale). Our result improves previous bounds of Dziembowski and Pietrzak (FOCS 2008), where the loss in the \emph{quantity} of entropy was related to its original quality. The use of metric entropy also simplifies the analogous result of Reingold et al. (FOCS 2008) for HILL entropy. Further, we simplify dealing with information leakage by investigating conditional metric entropy. We show that, conditioned on leakage of $\lambda$ bits, metric entropy gets reduced by a factor of $2^\lambda$ in quality and by $\lambda$ bits in quantity.
Last updated:  2012-08-18
T-MATCH: Privacy-Preserving Item Matching for Storage-Only RFID Tags
Kaoutar Elkhiyaoui, Erik-Oliver Blass, Refik Molva
RFID-based tag matching allows a reader $R_k$ to determine whether two tags $T_i$ and $T_j$ store some attributes that jointly fulfill a Boolean constraint. The challenge in designing a matching mechanism is tag privacy. While cheap tags are unable to perform any computation, matching has to be achieved without revealing the tags' attributes. In this paper, we present T-MATCH, a protocol for secure and privacy-preserving RFID tag matching. T-MATCH involves a pair of tags $T_i$ and $T_j$, a reader $R_k$, and a backend server $S$. To ensure tag privacy against $R_k$ and $S$, T-MATCH employs a new technique based on secure two-party computation that prevents $R_k$ and $S$ from disclosing tag attributes. For tag privacy against eavesdroppers, each tag $T_i$ in T-MATCH stores an IND-CPA encryption of its attribute. Such an encryption allows $R_k$ to update the state of $T_i$ by merely re-encrypting $T_i$'s ciphertext. T-MATCH targets cheap tags that cannot perform any computation, but are only required to store 150 bytes.
Last updated:  2012-08-18
Finding Lower Bounds on the Complexity of Secret Sharing Schemes by Linear Programming
Carles Padro, Leonor Vazquez, An Yang
Optimizing the maximum, or average, length of the shares in relation to the length of the secret for every given access structure is a difficult and long-standing open problem in cryptology. Most of the known lower bounds on these parameters have been obtained by implicitly or explicitly using that every secret sharing scheme defines a polymatroid related to the access structure. The best bounds that can be obtained by this combinatorial method can be determined by using linear programming, and this can be effectively done for access structures on a small number of participants. By applying this linear programming approach, we improve some of the known lower bounds for the access structures on five participants and the graph access structures on six participants for which these parameters were still undetermined. Nevertheless, the lower bounds that are obtained by this combinatorial method are not tight in general. For some access structures, they can be improved by adding to the linear program non-Shannon information inequalities as new constraints. We obtain in this way new separation results for some graph access structures on eight participants and for some ports of non-representable matroids. Finally, we prove that, for two access structures on five participants, the combinatorial lower bound cannot be attained by any linear secret sharing scheme.
Last updated:  2012-08-14
Deterministic Public Key Encryption and Identity-Based Encryption from Lattices in the Auxiliary-Input Setting
Xiang Xie, Rui Xue, Rui Zhang
Deterministic public key encryption (D-PKE) provides an alternative to randomized public key encryption in various scenarios (e.g. search on encrypted data) where the latter exhibits inherent drawbacks. In CRYPTO'11, Brakerski and Segev formalized a framework for studying the security of deterministic public key encryption schemes with respect to auxiliary inputs. A trivial requirement is that the plaintext should not be efficiently recoverable from the auxiliary inputs. In this paper, we present an efficient deterministic public key encryption scheme in the auxiliary-input setting from lattices. The public key size, ciphertext size and ciphertext expansion factor are improved compared with the scheme proposed by Brakerski and Segev. Our scheme is also secure even in the multi-user setting where related messages may be encrypted under multiple public keys. In addition, the security of our scheme is based on the hardness of the learning with errors (LWE) problem which remains hard even for quantum algorithms. Furthermore, we consider deterministic identity-based public key encryption (D-IBE) in the auxiliary-input setting. The only known D-IBE scheme (without considering auxiliary inputs) in the standard model was proposed by Bellare et al. in EUROCRYPT'12. However, this scheme is only secure in the selective security setting, and Bellare et al. identified it as an open problem to construct adaptively secure D-IBE schemes. The second contribution of this work is to propose a D-IBE scheme from lattices that is adaptively secure.
Last updated:  2012-08-14
Perfect Ambiguous Optimistic Fair Exchange
Yang Wang, Man Ho Au, Willy Susilo
Protocols for fair exchange of digital signatures are essential in many applications including contract signing, electronic commerce, or even peer-to-peer file sharing. In such a protocol, two parties, Alice and Bob, would like to exchange digital signatures on some messages in a fair way. It is known that a trusted arbitrator is necessary in the realization of such a protocol. We identify that in some scenarios, it is required that prior to the completion of the protocol, no observer should be able to tell whether Alice and Bob are conducting such an exchange. Consider the following scenario in which Apple engages Intel in an exchange protocol to sign a contract that terminates their OEM agreement. The information would be of value to a third party (such as a stock broker, or other OEM companies). If the protocol transcript can serve as evidence that such a communication is in progress, any observer of this communication, including the employees of both companies, would be tempted to capture the transcript and sell it to outsiders. We introduce a new notion called \emph{perfect ambiguous optimistic fair exchange} (PAOFE), which is particularly suitable to the above scenario. PAOFE fulfils all traditional requirements of cryptographic fair exchange of digital signatures and, in addition, guarantees that the communication transcript cannot be used as a proof to convince others that the protocol is in progress. Specifically, we formalize the notion of PAOFE and present a rigorous security model in the multi-user setting under the chosen-key attack. We also present a generic construction of PAOFE from existing cryptographic primitives and prove that our proposal is secure with respect to our definition in the standard model.
Last updated:  2012-12-27
Succinct Arguments from Multi-Prover Interactive Proofs and their Efficiency Benefits
Nir Bitansky, Alessandro Chiesa
\emph{Succinct arguments of knowledge} are computationally-sound proofs of knowledge for NP where the verifier's running time is independent of the time complexity $t$ of the nondeterministic NP machine $M$ that decides the given language. Existing succinct argument constructions are, typically, based on techniques that combine cryptographic hashing and probabilistically-checkable proofs (PCPs). Yet, even when instantiating these constructions with state-of-the-art PCPs, the prover needs $\Omega(t)$ space in order to run in quasilinear time (i.e., time $t \poly(k)$), regardless of the space complexity $s$ of the machine $M$. We say that a succinct argument is \emph{complexity preserving} if the prover runs in time $t \poly(k)$ and space $s \poly(k)$ and the verifier runs in time $|x| \poly(k)$ when proving and verifying that a $t$-time $s$-space random-access machine nondeterministically accepts an input $x$. Do complexity-preserving succinct arguments exist? To study this question, we investigate the alternative approach of constructing succinct arguments based on multi-prover interactive proofs (MIPs) and stronger cryptographic techniques: (1) We construct a one-round succinct MIP of knowledge, where each prover runs in time $t \polylog(t)$ and space $s \polylog(t)$ and the verifier runs in time $|x| \polylog(t)$. (2) We show how to transform any one-round MIP protocol to a succinct four-message argument (with a single prover), while preserving the time and space efficiency of the original MIP protocol; using our MIP protocol, this transformation yields a complexity-preserving four-message succinct argument. As a main tool for our transformation, we define and construct a \emph{succinct multi-function commitment} that (a) allows the sender to commit to a vector of functions in time and space complexity that are essentially the same as those needed for a single evaluation of the functions, and (b) ensures that the receiver's running time is essentially independent of the function. The scheme is based on fully-homomorphic encryption (and no additional assumptions are needed for our succinct argument). (3) In addition, we revisit the problem of \emph{non-interactive} succinct arguments of knowledge (SNARKs), where known impossibilities prevent solutions based on black-box reductions to standard assumptions. We formulate a natural (but non-standard) variant of homomorphic encryption having a \emph{homomorphism-extraction property}. We show that this primitive essentially allows to squash our interactive protocol, while again preserving time and space efficiency, thereby obtaining a complexity-preserving SNARK. We further show that this variant is, in fact, implied by the existence of (complexity-preserving) SNARKs.
Last updated:  2015-07-29
Information-Theoretic Timed-Release Security: Key-Agreement, Encryption, and Authentication Codes
Yohei Watanabe, Takenobu Seito, Junji Shikata
In this paper, we study timed-release cryptography with information-theoretic security. As fundamental cryptographic primitives with information-theoretic security, we can consider key-agreement, encryption, and authentication codes; in this paper we therefore deal with information-theoretic timed-release security for all of these primitives. Specifically, we propose models and formalizations of security for information-theoretic timed-release key-agreement, encryption, and authentication codes; we derive tight lower bounds on the entities' memory sizes required for each of them; and we show optimal constructions of each. Furthermore, we investigate the relationship between the mechanisms of information-theoretic timed-release key-agreement and information-theoretic key-insulated key-agreement. It turns out that there exists a simple algorithm which converts the former into the latter, and vice versa. In this sense, we conclude that these two mechanisms are essentially close.
Last updated:  2012-12-14
Barriers in Cryptography with Weak, Correlated and Leaky Sources
Daniel Wichs
There has been much recent progress in constructing cryptosystems that maintain their security without requiring uniform randomness and perfect secrecy. These schemes are motivated by a diverse set of problems such as providing resilience to side-channel leakage, using weak physical sources of randomness as secret keys, and allowing deterministic encryption for high-entropy messages. The study of these problems has significantly deepened our understanding of how randomness is used in cryptographic constructions and proofs. Nevertheless, despite this progress, some basic and seemingly achievable security properties have eluded our reach. For example, we are unable to prove the security of basic tools for manipulating weak/leaky random sources, such as pseudo-entropy generators and seed-dependent computational condensers. We also do not know how to prove leakage-resilient security of any cryptosystem whose secret key is uniquely determined by its public key. In the context of deterministic encryption, we do not have a standard-model construction achieving the strongest notion of security proposed by Bellare, Boldyreva and O'Neill (CRYPTO '07), which would allow us to encrypt arbitrarily correlated messages of sufficiently large individual entropy. In this work, we provide broad black-box separation results, showing that the security of such primitives cannot be proven under virtually any standard cryptographic hardness assumption via a reduction that treats the adversary as a black box. We do so by formalizing the intuition that ``the only way that a reduction can simulate the correctly distributed view for an attacker is to know all the secrets, in which case it does not learn anything useful from the attack''. Such claims are often misleading, and clever ways of getting around them have allowed us to achieve a wealth of positive results with imperfect/leaky randomness. However, in this work we show that this intuition can be formalized and that it indeed presents a real barrier for the examples given above.
Last updated:  2012-09-20
Computing small discrete logarithms faster
Daniel J. Bernstein, Tanja Lange
Computations of small discrete logarithms are feasible even in "secure" groups, and are used as subroutines in several cryptographic protocols in the literature. For example, the Boneh--Goh--Nissim degree-2-homomorphic public-key encryption system uses generic square-root discrete-logarithm methods for decryption. This paper shows how to use a small group-specific table to accelerate these subroutines. The cost of setting up the table grows with the table size, but the acceleration also grows with the table size. This paper shows experimentally that computing a discrete logarithm in an interval of order l takes only 1.93*l^{1/3} multiplications on average using a table of size l^{1/3} precomputed with 1.21*l^{2/3} multiplications, and computing a discrete logarithm in a group of order l takes only 1.77*l^{1/3} multiplications on average using a table of size l^{1/3} precomputed with 1.24*l^{2/3} multiplications.
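The flavor of the trade-off can be seen with a plain baby-step giant-step table (our illustration; the paper's method is different and achieves better constants): the table is computed once per group and then amortized over many small discrete-log computations.

```python
def make_table(g, p, T):
    # One-time precomputation: map g^i -> i for i = 0 .. T-1.
    tbl, cur = {}, 1
    for i in range(T):
        tbl[cur] = i
        cur = cur * g % p
    return tbl

def small_dlog(h, g, p, tbl, T, bound):
    # Each instance costs only ~bound/T giant steps against the shared table.
    giant = pow(g, -T, p)             # g^(-T) mod p (Python 3.8+)
    for j in range(bound // T + 1):
        y = h * pow(giant, j, p) % p
        if y in tbl:
            return j * T + tbl[y]

p, g, T = 1000003, 2, 1000            # toy parameters for illustration
tbl = make_table(g, p, T)
x = 123456
assert small_dlog(pow(g, x, p), g, p, tbl, T, 10**6) == x
print("recovered", x)
```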
Last updated:  2012-08-13
Hush Functions Extended to Any Size Input versus Any Size Output
Gideon Samid
Traditional hush functions map a large number to a small number such that the reverse-hush has an infinity of solutions, and nonetheless a collision is hard to come by. This primitive is so abundantly useful that one is tempted to extend it such that any number, large or small, may be mapped to any number, larger or smaller, while maintaining the above conditions. This extension would increase the flexibility of the commodity hush primitive, expand its current applications, and likely suggest new ones. Additional generality may be achieved by allowing the input to determine the computational burden, and by involving Turing's Entscheidungsproblem. We propose an algorithm where a natural number $X$ is mapped to another natural number $Y$, referring to the mapping as a ``Crypto Square'' and to the reverse as a ``Crypto Square Root'': $Y = X^2|_c$ and $X = \sqrt{Y}|_c$. While the crypto-square mapping is a proper function, the square-root equation has infinitely many solutions. There exists a deterministic solution algorithm to find any desired number of solutions to a square-root equation. This asymmetry proves itself useful, since the mapping is $\mathbb{Z}^+ \rightarrow \mathbb{Z}^+$, and hence the chance of collision for any finite-size set is negligible. Unlike standard one-way functions, crypto-square shields the identity of the input $X$, not by the intractability of the reverse function, but by Vernam-like equivocation over the infinity of $X$ candidates. This prospect suggests further examination of this ``square'' algorithm for possible useful roles in various crypto protocols, especially protocols concerned with privacy, authentication and deniability.
Last updated:  2012-08-13
Crowd-Blending Privacy
Johannes Gehrke, Michael Hay, Edward Lui, Rafael Pass
We introduce a new definition of privacy called \emph{crowd-blending privacy} that strictly relaxes the notion of differential privacy. Roughly speaking, $k$-crowd blending private sanitization of a database requires that each individual $i$ in the database ``blends'' with $k$ other individuals $j$ in the database, in the sense that the output of the sanitizer is ``indistinguishable'' if $i$'s data is replaced by $j$'s. We demonstrate crowd-blending private mechanisms for histograms and for releasing synthetic data points, achieving strictly better utility than what is possible using differentially private mechanisms. Additionally, we demonstrate that if a crowd-blending private mechanism is combined with a ``pre-sampling'' step, where the individuals in the database are randomly drawn from some underlying population (as is often the case during data collection), then the combined mechanism satisfies not only differential privacy, but also the stronger notion of zero-knowledge privacy. This holds even if the pre-sampling is slightly biased and an adversary knows whether certain individuals were sampled or not. Taken together, our results yield a practical approach for collecting and privately releasing data while ensuring higher utility than previous approaches.
Last updated:  2012-08-13
Must you know the code of f to securely compute f?
Mike Rosulek
When Alice and Bob want to securely evaluate a function of their shared inputs, they typically first express the function as a (boolean or arithmetic) circuit and then securely evaluate that circuit, gate-by-gate. In other words, a secure protocol for evaluating $f$ is typically obtained in a {\em non-black-box-way} from $f$ itself. Consequently, secure computation protocols have high overhead (in communication \& computation) that is directly linked to the circuit-description complexity of $f$. In other settings throughout cryptography, black-box constructions invariably lead to better practical efficiency than comparable non-black-box constructions. Could secure computation protocols similarly be made more practical by eliminating their dependence on a circuit representation of the target function? Or, in other words, {\em must one know the code of $f$ to securely evaluate $f$?} In this work we initiate the theoretical study of this question. We show the following: 1. A complete characterization of the 2-party tasks which admit such security against semi-honest adversaries. The characterization is inspired by notions of {\em autoreducibility} from computational complexity theory. From this characterization, we show a class of pseudorandom functions that {\em cannot} be securely evaluated (when one party holds the seed and the other holds the input) without ``knowing'' the code of the function in question. On the positive side, we show a class of functions (related to blind signatures) that can indeed be securely computed without ``knowing'' the code of the function. 2. Sufficient conditions for such security against malicious adversaries, also based on autoreducibility. We show that it is not possible to prove membership in the image of a one-way function in zero-knowledge, without ``knowing'' the code of the one-way function.
Last updated:  2012-08-13
A Probabilistic Quantum Key Transfer Protocol
Abhishek Parakh
We propose a protocol to transfer a one-time-pad (in a probabilistic manner) from Alice to Bob, over a public channel. The proposed protocol is unique because Bob merely acts as the receiver of the pad (secret key), i.e. Bob does not need to send any message back to Alice unless he detects eavesdropping. Such a secure transfer of one-time-pad, over public channel, is not possible in classical cryptography and in quantum cryptography all previous protocols require Bob to send almost as many messages back to Alice as she does to Bob, to establish a key.
Last updated:  2014-01-14
New Leakage Resilient CCA-Secure Public Key Encryption
Kaoru Kurosawa, Ryo Nojima, Le Trieu Phong
This paper shows a generic method of constructing CCA-secure public key encryption schemes with leakage resilience on the secret key. It is based on a new kind of universal$_2$ hash proof system which accepts an auxiliary parameter. Specifically, two schemes are presented, based on the DCR and DLIN assumptions respectively.
Last updated:  2014-01-20
EPiC: Efficient Privacy-Preserving Counting for MapReduce
Erik-Oliver Blass, Guevara Noubir, Triet D. Vo-Huu
In the face of an untrusted cloud infrastructure, outsourced data needs to be protected. We present EPiC, a practical protocol for the privacy-preserving evaluation of a fundamental operation on data sets: frequency counting. In an encrypted outsourced data set, a cloud user can specify a pattern, and the cloud will count the number of occurrences of this pattern in an oblivious manner. A pattern is expressed as a Boolean formula on the fields of data records and can specify value counting, value comparison, range counting, and conjunctions/disjunctions of field values. We show how a general pattern, defined by a Boolean formula, is arithmetized into a multivariate polynomial and used in EPiC. To increase the performance of the system, we introduce a new somewhat homomorphic encryption scheme based on previous work on the Hidden Modular Group assumption. This scheme is highly efficient in our particular counting scenario. Besides a formal analysis where we prove EPiC's privacy, we also present implementation and evaluation results. We specifically target Google's prominent MapReduce paradigm as offered by major cloud providers. Our evaluation, performed both locally and in Amazon's public cloud with data set sizes of up to 1 TByte, shows only a modest overhead of 20% compared to non-private counting, attesting to EPiC's efficiency.
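The arithmetization step mentioned above follows the standard Boolean-to-polynomial encoding, which a short sketch makes concrete (the example pattern and field predicates are hypothetical; EPiC evaluates such polynomials under homomorphic encryption, which this plaintext sketch omits):

```python
def NOT(x):    return 1 - x
def AND(x, y): return x * y
def OR(x, y):  return x + y - x * y   # inclusion-exclusion over {0,1}

def pattern(age_is_30, city_is_boston):
    # Counts records matching (age == 30) OR NOT (city == Boston).
    return OR(age_is_30, NOT(city_is_boston))

# Each record is reduced to its 0/1 predicate values; summing the pattern
# polynomial over all records yields the frequency count.
records = [(1, 1), (0, 0), (1, 0), (0, 1)]
print(sum(pattern(a, c) for a, c in records))   # prints 3
```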
Last updated:  2012-08-08
Stam's Conjecture and Threshold Phenomena in Collision Resistance
John Steinberger, Xiaoming Sun, Zhe Yang
At CRYPTO 2008 Stam conjectured that if an $(m\!+\!s)$-bit to $s$-bit compression function $F$ makes $r$ calls to a primitive $f$ of $n$-bit input, then a collision for $F$ can be obtained (with high probability) using $r2^{(nr-m)/(r+1)}$ queries to $f$, which is sometimes less than the birthday bound. Steinberger (Eurocrypt 2010) proved Stam's conjecture up to a constant multiplicative factor for most cases in which $r = 1$ and for certain other cases that reduce to the case $r = 1$. In this paper we prove the general case of Stam's conjecture (also up to a constant multiplicative factor). Our result is qualitatively different from Steinberger's, moreover, as we show the following novel threshold phenomenon: that exponentially many (more exactly, $2^{s-2(m-n)/(r+1)}$) collisions are obtained with high probability after $O(1)r2^{(nr-m)/(r+1)}$ queries. This in particular shows that threshold phenomena observed in practical compression functions such as JH are, in fact, unavoidable for compression functions with those parameters. (This is the full version of the same-titled article that appeared at CRYPTO 2012.)
Last updated:  2014-02-20
Tweakable Blockciphers with Beyond Birthday-Bound Security
Will Landecker, Thomas Shrimpton, R. Seth Terashima
Liskov, Rivest and Wagner formalized the tweakable blockcipher (TBC) primitive at CRYPTO'02. The typical recipe for instantiating a TBC is to start with a blockcipher, and then build up a construction that admits a tweak. Almost all such constructions enjoy provable security only up to the birthday bound, and the one that does achieve security beyond the birthday bound (due to Minematsu) severely restricts the tweak size and requires per-invocation blockcipher rekeying. This paper gives the first TBC construction that simultaneously allows for arbitrarily “wide” tweaks, does not rekey, and delivers provable security beyond the birthday bound. Our construction is built from a blockcipher and an $\varepsilon$-AXU hash function. As an application of the TBC primitive, LRW suggest the TBC-MAC construction (similar to CBC-MAC but chaining through the tweak), but leave open the question of its security. We close this question, both for TBC-MAC as a PRF and as a MAC. Along the way, we find a nonce-based variant of TBC-MAC that has a tight reduction to the security of the underlying TBC, and also displays graceful security degradation when nonces are misused. This result is interesting on its own, but it also serves as an application of our new TBC construction, ultimately giving a variable input-length PRF with beyond birthday-bound security.
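As background, the classic LRW composition referred to above derives a TBC from a blockcipher $E$ and an AXU hash $h$ of the tweak via $\widetilde{E}_{k,h}(T, x) = E_k(x \oplus h(T)) \oplus h(T)$. The sketch below illustrates it with a toy 16-bit permutation and a stand-in hash (a real instantiation would use a true blockcipher and a genuinely AXU family, such as multiplication in GF($2^n$)). LRW itself stops at the birthday bound; the paper's construction is what pushes beyond it.

```python
import random

n = 16

def toy_blockcipher(key):
    # Stand-in for a real blockcipher: a key-seeded random permutation.
    perm = list(range(1 << n))
    random.Random(key).shuffle(perm)
    return perm

def h(h_key, tweak):
    # Placeholder tweak hash; a real AXU choice would be h_key * tweak
    # in GF(2^n), not multiplication mod 2^n.
    return (h_key * tweak) % (1 << n)

def tbc_encrypt(E, h_key, tweak, x):
    mask = h(h_key, tweak)            # LRW: mask in, encrypt, mask out
    return E[x ^ mask] ^ mask

E = toy_blockcipher(0xC0FFEE)
print(tbc_encrypt(E, h_key=0x1234, tweak=7, x=42))
```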
Last updated:  2012-09-13
Long Term Confidentiality: a Survey
Johannes Braun, Johannes Buchmann, Ciaran Mullan, Alex Wiesmaier
Sensitive electronic data may be required to remain confidential for long periods of time. Yet encryption under a computationally secure cryptosystem cannot provide a guarantee of long term confidentiality, due to potential advances in computing power or cryptanalysis. Long term confidentiality is ensured by information theoretically secure ciphers, but at the expense of impractical key agreement and key management. We overview known methods to alleviate these problems, whilst retaining some form of information theoretic security relevant for long term confidentiality.
Last updated:  2012-08-07
On the Impossibility of Constructing Efficient Key Encapsulation and Programmable Hash Functions in Prime Order Groups
Goichiro Hanaoka, Takahiro Matsuda, Jacob C. N. Schuldt
In this paper, we focus on the problem of minimizing ciphertext overhead, and discuss the (im)possibility of constructing key encapsulation mechanisms (KEMs) with low ciphertext overhead. More specifically, we rule out the existence of algebraic black-box reductions from the (bounded) CCA security of a natural class of KEMs to any non-interactive problem. The class of KEMs captures the structure of the currently most efficient KEMs defined in standard prime order groups, but restricts an encapsulation to consist of a single group element and a string. This result suggests that we cannot rely on existing techniques to construct a CCA secure KEM in standard prime order groups with a ciphertext overhead lower than two group elements. Furthermore, we show how the properties of an (algebraic) programmable hash function can be exploited to construct a simple, efficient and CCA secure KEM based on the hardness of the decisional Diffie-Hellman problem with a ciphertext overhead of just a single group element. Since this KEM construction is covered by the above mentioned impossibility result, this enables us to derive a lower bound on the hash key size of an algebraic programmable hash function, and rule out the existence of algebraic $({\sf poly},n)$-programmable hash functions in prime order groups for any integer $n$. The latter result answers an open question posed by Hofheinz and Kiltz (CRYPTO'08) in the case of algebraic programmable hash functions in prime order groups.
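For orientation, the following toy Python sketch shows the textbook hashed-ElGamal KEM, whose encapsulation is a single group element; as written it is only CPA secure, which is precisely the gap the paper's impossibility result and construction speak to. The parameters are illustrative and far too small for real use.

    import hashlib, secrets

    # toy DDH group: safe prime p = 2q + 1; g = 4 generates the order-q subgroup
    p, q, g = 2579, 1289, 4

    def keygen():
        x = secrets.randbelow(q - 1) + 1          # secret key in [1, q-1]
        return x, pow(g, x, p)                    # (sk, pk = g^x)

    def encapsulate(pk):
        r = secrets.randbelow(q - 1) + 1
        c = pow(g, r, p)                          # encapsulation: one group element
        k = hashlib.sha256(str(pow(pk, r, p)).encode()).digest()  # K = H(pk^r)
        return c, k

    def decapsulate(sk, c):
        return hashlib.sha256(str(pow(c, sk, p)).encode()).digest()

    sk, pk = keygen()
    c, k = encapsulate(pk)
    assert decapsulate(sk, c) == k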
Last updated:  2012-11-03
Multi-receiver Homomorphic Authentication Codes for Network Coding
Zhaohui Tang, Hoon Wei Lim
We investigate a new class of authentication codes (A-codes) that support verification by a group of message recipients in the network coding setting. That is, a sender generates an A-code over a message such that any intermediate node or recipient can check the authenticity of the message, typically to detect pollution attacks. We call such an A-code a multi-receiver homomorphic A-code (MRHA-code). In this paper, we first formally define an MRHA-code. We then derive some lower bounds on the security parameters and key sizes associated with our MRHA-codes. Moreover, we give efficient constructions of MRHA-code schemes that can be used to mitigate pollution attacks on network codes. Unlike prior works on computationally secure homomorphic signatures and MACs for network coding, our MRHA-codes achieve unconditional security.
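The homomorphic property at the heart of such codes can be seen in a toy Python example (a sketch of the generic idea, not the paper's MRHA-code): a linear tag over a prime field commutes with the linear combinations that network coding nodes compute.

    import random

    p = 7919                                  # toy prime field, illustrative only

    def keygen(n, rng=random.Random(1)):
        return [rng.randrange(1, p) for _ in range(n)]

    def tag(key, vec):
        # linear tag: inner product <key, vec> mod p
        return sum(k * v for k, v in zip(key, vec)) % p

    key = keygen(4)
    m1, m2 = [1, 2, 3, 4], [5, 6, 7, 8]
    a, b = 3, 5                               # network coding coefficients
    combo = [(a * x + b * y) % p for x, y in zip(m1, m2)]
    # a node combining packets can combine their tags the same way
    assert tag(key, combo) == (a * tag(key, m1) + b * tag(key, m2)) % p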
Last updated:  2012-08-06
Differential Fault Analysis of AES: Towards Reaching its Limits
Sk Subidh Ali, Debdeep Mukhopadhyay, Michael Tunstall
In this paper we present a theoretical analysis of the limits of Differential Fault Analysis (DFA) of AES by developing an interrelationship between conventional cryptanalysis of AES and DFAs. We show that the existing attacks have not reached these limits, and present techniques to reach them. More specifically, we propose optimal DFAs on the states of AES-128 and AES-256. We also propose attacks on the key schedules of the three versions of AES, and demonstrate that these are some of the most efficient attacks on AES to date. Our attack on the AES-128 key schedule is optimal, and the attacks on the AES-192 and AES-256 key schedules are very close to optimal. Detailed experimental results are provided for the developed attacks, and the work is compared both with other attacks and with the optimal limits of Differential Fault Analysis of AES.
Last updated:  2013-03-16
A note on ‘An efficient certificateless aggregate signature with constant pairing computations’
Debiao He, Jianhua Chen, Miaomiao Tian
Recently, Xiong et al. [H. Xiong, Z. Guan, Z. Chen, F. Li, An efficient certificateless aggregate signature with constant pairing computations, Information Sciences, 219, pp. 225–235, 2013] proposed an efficient certificateless signature (CLS) scheme and used it to construct a certificateless aggregate signature (CLAS) scheme with constant pairing computations. They also demonstrated that both schemes are provably secure in the random oracle model under the computational Diffie-Hellman assumption. Unfortunately, by giving concrete attacks, we point out that Xiong et al.'s schemes are not secure in their own security model.
Last updated:  2012-08-06
Factorization of a 1061-bit number by the Special Number Field Sieve
Greg Childers
I provide the details of the factorization of the Mersenne number $2^{1061}-1$ by the Special Number Field Sieve. Although this factorization is easier than the completed factorization of RSA-768, it represents a new milestone for factorization using publicly available software.
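To illustrate why SNFS applies so readily here (a standard textbook observation, not necessarily the parameter choice used in the paper): since $1061 = 5 \cdot 212 + 1$, we can write $2^{1061} - 1 = 2\,(2^{212})^5 - 1$, so the sparse polynomial $f(x) = 2x^5 - 1$ shares the root $m = 2^{212}$ with the target, i.e. $f(m) = 2^{1061} - 1 \equiv 0 \pmod{2^{1061}-1}$. Such tiny coefficients are exactly what makes SNFS substantially faster than the general number field sieve required for a number of no special form like RSA-768.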
Last updated:  2013-05-07
Improved CRT Algorithm for Class Polynomials in Genus 2
Kristin Lauter, Damien Robert
We present a generalization to genus~2 of the probabilistic algorithm of Sutherland for computing Hilbert class polynomials. The improvement over the Bröker-Gruenewald-Lauter algorithm for the genus~2 case is that we do not need to find a curve in the isogeny class whose endomorphism ring is the maximal order; rather, we present a probabilistic algorithm for ``going up'' to a maximal curve (a curve with maximal endomorphism ring), once we find any curve in the right isogeny class. Then we use the structure of the Shimura class group and the computation of $(\ell,\ell)$-isogenies to compute all isogenous maximal curves from an initial one. This is an extended version of the article published at ANTS~X.
Last updated:  2012-08-07
Group Signatures with Almost-for-free Revocation
Benoit Libert, Thomas Peters, Moti Yung
Group signatures are a central cryptographic primitive where users can anonymously and accountably sign messages in the name of a group they belong to. Several efficient constructions with security proofs in the standard model ({\it i.e.}, without the random oracle idealization) have appeared in recent years. However, like standard PKIs, group signatures need an efficient revocation system to be practical. Despite years of research, membership revocation remains a non-trivial problem: many existing solutions do not scale well due to either high overhead or constraining operational requirements (like the need for all users to update their keys after each revocation). Only recently, Libert, Peters and Yung (Eurocrypt'12) suggested a new scalable revocation method, based on the Naor-Naor-Lotspiech (NNL) broadcast encryption framework, that interacts nicely with techniques for building group signatures in the standard model. While promising, their mechanism imposes significant storage requirements on group members. Namely, membership certificates, which used to have constant size in existing standard model constructions, now have polylog size in the maximal cardinality of the group (NNL, after all, is a tree-based technique and such a dependency is naturally expected). In this paper we show how to obtain private keys of {\it constant} size. To this end, we introduce a new technique to leverage the NNL subset cover framework in the context of group signatures but, perhaps surprisingly, without a logarithmic relationship between the size of private keys and the group cardinality. Namely, we provide a way for users to efficiently prove their membership of one of the generic subsets in the NNL subset cover framework. This technique makes our revocable group signatures competitive with ordinary group signatures ({\it i.e.}, without revocation) in the standard model. Moreover, unrevoked members (as in PKIs) still do not need to update their keys at each revocation.
Last updated:  2012-08-05
Adaptively Secure Multi-Party Computation with Dishonest Majority
Sanjam Garg, Amit Sahai
Adaptively secure multiparty computation is an essential and fundamental notion in cryptography. In this work we focus on the basic question of constructing a multiparty computation protocol secure against a \emph{malicious}, \emph{adaptive} adversary in the \emph{stand-alone} setting without assuming an honest majority, in the plain model. It has been believed that this question can be resolved by composing known protocols from the literature. We show that in fact, this belief is fundamentally mistaken. In particular, we show: \begin{itemize} \item[-]\textbf{Round inefficiency is unavoidable when using black-box simulation:} There does not exist any $o(\frac{n}{\log{n}})$ round protocol that adaptively securely realizes a (natural) $n$-party functionality with a black-box simulator. Note that most previously known protocols in the adaptive security setting relied on black-box simulators. \item[-]\textbf{A constant round protocol using non-black-box simulation:} We construct a \emph{constant round} adaptively secure multiparty computation protocol in a setting without \emph{honest majority} that makes crucial use of non-black box techniques. \end{itemize} Taken together, these results give the first resolution to the question of adaptively secure multiparty computation protocols with a malicious dishonest majority in the plain model, open since the first formal treatment of adaptive security for multiparty computation in 1996.
Last updated:  2012-08-05
New Preimage Attacks Against Reduced SHA-1
Simon Knellwolf, Dmitry Khovratovich
This paper shows preimage attacks against reduced SHA-1 up to 57 steps. The best previous attack, presented at CRYPTO 2009, was for 48 steps, finding a two-block preimage with incorrect padding at the cost of $2^{159.3}$ evaluations of the compression function. For the same variant, our attacks find a one-block preimage at $2^{150.6}$ and a correctly padded two-block preimage at $2^{151.1}$ evaluations of the compression function. The improved results come out of a differential view on the meet-in-the-middle technique originally developed by Aoki and Sasaki. The new framework closely relates meet-in-the-middle attacks to differential cryptanalysis, which turns out to be particularly useful for hash functions with linear message expansion and weak diffusion properties.
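The meet-in-the-middle principle these attacks build on can be seen on a toy two-step "compression function" (a generic Python illustration, not the paper's differential framework): matching a forward set against a backward set finds a preimage with roughly $2 \cdot 2^{10}$ step evaluations, where naive search over the 16-bit digest would need about $2^{16}$.

    MASK = 0xFFFF
    MUL = 0x9E37                         # odd constant -> invertible mod 2^16

    def step(h, m):                      # toy step, invertible in h for fixed m
        return (((h ^ m) * MUL) + m) & MASK

    def step_inv(h2, m):                 # recover h from step(h, m)
        inv = pow(MUL, -1, 1 << 16)
        return (((h2 - m) * inv) & MASK) ^ m

    IV, target = 0x0123, 0xBEEF          # want (m1, m2): step(step(IV,m1),m2) == target
    fwd = {step(IV, m1): m1 for m1 in range(1 << 10)}   # forward from IV
    for m2 in range(1 << 10):                           # backward from target
        mid = step_inv(target, m2)
        if mid in fwd:                                  # meet in the middle
            m1 = fwd[mid]
            assert step(step(IV, m1), m2) == target
            print("preimage:", m1, m2)
            break
    else:
        print("no match in this range (retry with larger sets)")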
Last updated:  2018-10-09
Robust Smart Card based Password Authentication Scheme against Smart Card Security Breach
Ding Wang, Ping Wang, Chun-guang Ma, Zhong Chen
As the most prevailing two-factor authentication mechanism, smart card based password authentication has been a subject of intensive research in the past decade, and hundreds of schemes of this type have been proposed. However, most of them were found severely flawed, especially prone to the smart card loss problem, shortly after they were first put forward, regardless of whether their security was heuristically analyzed or formally proved. In SEC'12, Wang pointed out that the main cause of this issue is the lack of an appropriate security model to fully identify the practical threats. To address the issue, Wang presented three kinds of security models, namely Type I, II and III, and further proposed four concrete schemes, only two of which, i.e. PSCAV and PSCAb, are claimed to be secure under the harshest model, i.e. the Type III security model. However, in this paper, we demonstrate that PSCAV still cannot achieve the claimed security goals and is vulnerable to an offline password guessing attack and other attacks in the Type III security model, while PSCAb has several practical pitfalls. As our main contribution, a robust scheme is presented to cope with the aforementioned defects, and it is proven secure in the random oracle model. Moreover, the analysis demonstrates that our scheme meets all the proposed criteria and eliminates several hard security threats that have been difficult to tackle simultaneously in previous scholarship.
Last updated:  2012-08-05
Breaking and Repairing GCM Security Proofs
Tetsu Iwata, Keisuke Ohashi, Kazuhiko Minematsu
In this paper, we study the security proofs of GCM (Galois/Counter Mode of Operation). We first point out that a lemma, which is related to the upper bound on the probability of a counter collision, is invalid. Both the original privacy and authenticity proofs by the designers are based on the lemma. We further show that the observation can be translated into a distinguishing attack that invalidates the main part of the privacy proof. It turns out that the original security proofs of GCM contain a flaw, and hence the claimed security bounds are not justified. A very natural question is then whether the proofs can be repaired. We give an affirmative answer to the question by presenting new security bounds, both for privacy and authenticity. As a result, although the security bounds are larger than previously claimed, GCM maintains its provable security. We also show that, when the nonce length is restricted to 96 bits, GCM has better security bounds than in the general case of variable length nonces.
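For reference, the quantity at issue is GCM's initial counter block $J_0$: for 96-bit nonces it is structurally collision-free, while for other nonce lengths it is a GHASH output, and bounding the probability that two nonces yield colliding counter values is where the original lemma failed. Below is a minimal Python sketch of the $J_0$ derivation following the GCM specification (toy code, for illustration only):

    def gf128_mul(x, y):
        # multiplication in GF(2^128) with the GCM bit convention
        R = 0xE1 << 120
        z, v = 0, x
        for i in range(127, -1, -1):     # bits of y, most significant first
            if (y >> i) & 1:
                z ^= v
            v = (v >> 1) ^ R if v & 1 else v >> 1
        return z

    def ghash(h, data):
        y = 0
        for i in range(0, len(data), 16):
            block = int.from_bytes(data[i:i + 16].ljust(16, b"\0"), "big")
            y = gf128_mul(y ^ block, h)
        return y

    def initial_counter(h, nonce):
        if len(nonce) == 12:             # 96-bit nonce: J0 = N || 0^31 || 1
            return (int.from_bytes(nonce, "big") << 32) | 1
        # otherwise J0 = GHASH_H(N || 0-padding || [0]_64 || [bitlen(N)]_64);
        # collisions among these values are what the flawed lemma bounded
        s = nonce + b"\0" * ((16 - len(nonce) % 16) % 16)
        s += (8 * len(nonce)).to_bytes(16, "big")
        return ghash(h, s)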
Last updated:  2012-08-05
Dynamic Credentials and Ciphertext Delegation for Attribute-Based Encryption
Amit Sahai, Hakan Seyalioglu, Brent Waters
Motivated by the question of access control in cloud storage, we consider the problem of using Attribute-Based Encryption (ABE) in a setting where users' credentials may change and ciphertexts may be stored by a third party. We find that a comprehensive solution to our problem must simultaneously allow for the revocation of ABE private keys as well as for the ability to update ciphertexts to reflect the most recent updates. Our main result is obtained by pairing two contributions: - Revocable Storage. We ask how a third party can process a ciphertext to disqualify revoked users from accessing data that was encrypted in the past, while the user still had access. In applications, such storage may be with an untrusted entity and as such, we require that the ciphertext management operations can be done without access to any sensitive data (which rules out decryption and re-encryption). We define the problem of revocable storage and provide a fully secure construction. Our core tool is a new procedure that we call ciphertext delegation. One can apply ciphertext delegation on a ciphertext encrypted under a certain access policy to `re-encrypt' it to a more restrictive policy using only public information. We provide a full analysis of the types of delegation possible in a number of existing ABE schemes. - Protecting Newly Encrypted Data. We consider the problem of ensuring that newly encrypted data is not decryptable by a user's key if that user's access has been revoked. We give the first method for obtaining this revocation property in a fully secure ABE scheme. We provide a new and simpler approach to this problem that has minimal modifications to standard ABE. We identify and define a simple property called piecewise key generation which gives rise to efficient revocation. We build such solutions for Key-Policy and Ciphertext-Policy Attribute-Based Encryption by modifying an existing ABE scheme due to Lewko et al. to satisfy our piecewise property and prove security in the standard model. It is the combination of our two results that gives an approach for revocation. A storage server can update stored ciphertexts to disqualify revoked users from accessing data that was encrypted before the user's access was revoked. This is the full version of the Crypto 2012 paper.
Last updated:  2012-08-05
Secure Database Commitments and Universal Arguments of Quasi Knowledge
Melissa Chase, Ivan Visconti
In this work we focus on a simple database commitment functionality where besides the standard security properties, one would like to hide the size of the input of the sender. Hiding the size of the input of a player is a critical requirement in some applications, and relatively few works have considered it. Notable exceptions are the work on zero-knowledge sets introduced in~\cite{MRK03}, and recent work on size-hiding private set intersection~\cite{ADT11}. However, neither of these achieves a secure computation (i.e., a reduction of a real-world attack of a malicious adversary into an ideal-world attack) of the proposed functionality. The first result of this submission consists in defining ``secure'' database commitment and in observing that previous constructions do not satisfy this definition. This leaves open the question of whether there is any way this functionality can be achieved. We then provide an affirmative answer to this question by using new techniques that combined together achieve ``secure'' database commitment. Our construction is in particular optimized to require only a constant number of rounds, to provide non-interactive proofs on the content of the database, and to rely only on the existence of a family of CRHFs. This is the first result where input-size hiding secure computation is achieved for an interesting functionality and moreover we obtain this result with standard security (i.e., simulation in expected polynomial time against fully malicious adversaries, without random oracles, non-black-box extraction assumptions, hardness assumptions against super-polynomial time adversaries, or other controversial/strong assumptions). A key building block in our construction is a universal argument enjoying an improved proof of knowledge property, that we call quasi-knowledge. This property is significantly closer to the standard proof of knowledge property than the weak proof of knowledge property satisfied by previous constructions.
Last updated:  2012-08-05
Differential Privacy with Imperfect Randomness
Yevgeniy Dodis, Adriana Lopez-Alt, Ilya Mironov, Salil Vadhan
In this work we revisit the question of basing cryptography on imperfect randomness. Bosley and Dodis (TCC'07) showed that if a source of randomness R is "good enough" to generate a secret key capable of encrypting k bits, then one can deterministically extract nearly k almost uniform bits from R, suggesting that traditional privacy notions (namely, indistinguishability of encryption) require an "extractable" source of randomness. Other, even stronger impossibility results are known for achieving privacy under specific "non-extractable" sources of randomness, such as the $\gamma$-Santha-Vazirani (SV) source, where each next bit has fresh entropy, but is allowed to have a small bias $\gamma < 1$ (possibly depending on prior bits). We ask whether similar negative results also hold for a more recent notion of privacy called differential privacy (Dwork et al., TCC'06), concentrating, in particular, on achieving differential privacy with the Santha-Vazirani source. We show that the answer is no. Specifically, we give a differentially private mechanism for approximating arbitrary "low sensitivity" functions that works even with randomness coming from a $\gamma$-Santha-Vazirani source, for any $\gamma < 1$. This provides a somewhat surprising "separation" between traditional privacy and differential privacy with respect to imperfect randomness. Interestingly, the design of our mechanism is quite different from the traditional "additive-noise" mechanisms (e.g., the Laplace mechanism) successfully utilized to achieve differential privacy with perfect randomness. Indeed, we show that any (accurate and private) "SV-robust" mechanism for our problem requires a demanding property called consistent sampling, which is strictly stronger than differential privacy, and cannot be satisfied by any additive-noise mechanism.
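For contrast, here is the standard additive-noise mechanism the abstract alludes to, the Laplace mechanism, which assumes perfect uniform randomness (a minimal Python sketch; the paper's point is that no such additive-noise mechanism can be made robust to Santha-Vazirani sources, and its SV-robust mechanism is designed quite differently):

    import math, random

    def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
        # add Laplace(0, sensitivity/epsilon) noise, sampled by inverse CDF
        scale = sensitivity / epsilon
        u = rng.random() - 0.5               # uniform on [-0.5, 0.5)
        return true_value - scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

    # e.g. a counting query (sensitivity 1) released with epsilon = 0.1
    print(laplace_mechanism(42, 1, 0.1))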
Last updated:  2013-02-15
Algebraic (Trapdoor) One Way Functions and their Applications
Dario Catalano, Dario Fiore, Rosario Gennaro, Konstantinos Vamvourellis
In this paper we introduce the notion of {\em Algebraic (Trapdoor) One Way Functions}, which, roughly speaking, captures and formalizes many of the properties of number-theoretic one-way functions. Informally, a (trapdoor) one way function $F: X \to Y$ is said to be algebraic if $X$ and $Y$ are (finite) abelian cyclic groups, the function is {\em homomorphic}, i.e. $F(x)\cdot F(y) = F(x \cdot y)$, and is {\em ring-homomorphic}, meaning that it is possible to compute linear operations ``in the exponent'' over some ring (which may be different from $\mathbb{Z}_p$ where $p$ is the order of the underlying group $X$) without knowing the bases. Moreover, algebraic OWFs must be {\em flexibly one-way} in the sense that given $y = F(x)$, it must be infeasible to compute $(x', d)$ such that $F(x')=y^{d}$ (for $d \neq 0$). Interestingly, algebraic one way functions can be constructed from a variety of {\em standard} number theoretic assumptions, such as RSA, Factoring and CDH over bilinear groups. As a second contribution of this paper, we show several applications where algebraic (trapdoor) OWFs turn out to be useful. In particular: - {\em Publicly Verifiable Secure Outsourcing of Polynomials}: We present efficient solutions which work for rings of arbitrary size and characteristic. When instantiating our protocol with the RSA/Factoring based algebraic OWFs we obtain the first solution which supports small field size, is efficient and does not require bilinear maps to obtain public verifiability. - {\em Linearly-Homomorphic Signatures}: We give a direct construction of FDH-like linearly homomorphic signatures from algebraic (trapdoor) one way permutations. Our constructions support messages and homomorphic operations over {\em arbitrary} rings and in particular even small fields such as $\mathbb{F}_2$. While it was already known how to realize linearly homomorphic signatures over small fields from lattices in the random oracle model (Boneh-Freeman, Eurocrypt 2011), ours are the first schemes achieving this very efficiently from Factoring/RSA. - {\em Batch execution of Sigma protocols}: We construct a simple and efficient Sigma protocol for any algebraic OWP and show a ``batch'' version of it, i.e. a protocol where many statements can be proven at a cost only slightly higher than that of a single execution of the original protocol. Given our RSA/Factoring instantiations of algebraic OWP, this yields, to the best of our knowledge, the first batch verifiable Sigma protocol for groups of unknown order.
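As a quick sanity check of the homomorphic properties for the RSA instantiation mentioned above, the following Python snippet verifies them on toy parameters (illustrative only; real moduli would be 2048+ bits):

    # toy RSA modulus N = p*q with small primes
    p, q, e = 1009, 1013, 17
    N = p * q

    def F(x):
        # candidate algebraic OWF: the RSA function x -> x^e mod N
        return pow(x, e, N)

    x, y = 123, 456
    assert (F(x) * F(y)) % N == F((x * y) % N)   # homomorphic: F(x)F(y) = F(xy)
    assert pow(F(x), 5, N) == F(pow(x, 5, N))    # linear operations "in the exponent"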