All papers in 2013 (Page 8 of 881 results)

Last updated:  2013-05-07
On the evaluation of modular polynomials
Andrew V. Sutherland
We present two algorithms that, given a prime $\ell$ and an elliptic curve $E/\mathbb{F}_q$, directly compute the polynomial $\Phi_\ell(j(E),Y)\in\mathbb{F}_q[Y]$ whose roots are the $j$-invariants of the elliptic curves that are $\ell$-isogenous to $E$. We do not assume that the modular polynomial $\Phi_\ell(X,Y)$ is given. The algorithms may be adapted to handle other types of modular polynomials, and we consider applications to point counting and the computation of endomorphism rings. We demonstrate the practical efficiency of the algorithms by setting a new point-counting record, modulo a prime $q$ with more than 5,000 decimal digits, and by evaluating a modular polynomial of level $\ell=100{,}019$.
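As a point of reference for what the paper avoids, the classical route needs $\Phi_\ell$ itself. For $\ell=2$ the classical modular polynomial is small enough to write down, and a minimal sketch (toy prime and brute-force root finding, parameters illustrative only) evaluates $\Phi_2(j(E),Y)$ over $\mathbb{F}_p$:

```python
# Classical modular polynomial Phi_2(X, Y); its roots in Y at X = j(E)
# are the j-invariants of the curves 2-isogenous to E.  The paper's point
# is to avoid storing Phi_ell entirely; this is only the naive baseline.
def phi2(x, y, p):
    return (x**3 + y**3 - x**2 * y**2
            + 1488 * (x**2 * y + x * y**2)
            - 162000 * (x**2 + y**2)
            + 40773375 * x * y
            + 8748000000 * (x + y)
            - 157464000000000) % p

def two_isogenous_j_invariants(j, p):
    """Brute-force the roots of Phi_2(j, Y) over F_p (fine for tiny p)."""
    return [y for y in range(p) if phi2(j, y, p) == 0]

p = 1009                       # small prime, illustration only
print(two_isogenous_j_invariants(1728 % p, p))
```

Since $\Phi_2(j,Y)$ is a cubic in $Y$, the list has at most three entries; for large $\ell$ the polynomial has $O(\ell^3)$ coefficients, which is exactly the storage cost the paper's direct-evaluation algorithms sidestep.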
Last updated:  2013-04-01
A New Class of Product-sum Type Public Key Cryptosystem, K(V)$\Sigma\Pi$PKC, Constructed Based on Maximum Length Code
Masao KASAHARA
The author recently proposed a new class of knapsack-type PKC referred to as K(II)$\Sigma\Pi$PKC [1]. In K(II)$\Sigma\Pi$PKC with the old algorithm DA(I), Bob randomly constructs a very small subset of Alice's public-key set, whose order is very large, under the condition that the coding rate $\rho$ satisfies $0.01 < \rho < 0.2$. In K(II)$\Sigma\Pi$PKC, no secret sequence such as a super-increasing sequence or shifted-odd sequence is used; instead, the components of the sequence are constructed as products of the same number of many prime numbers of the same size. In this paper we present a new algorithm, DA(II), for decoding K(II)$\Sigma\Pi$PKC. We show that with the new decoding algorithm DA(II), K(II)$\Sigma\Pi$PKC yields a higher coding rate and a smaller public key compared with K(II)$\Sigma\Pi$PKC using the old decoding algorithm DA(I). We further present a generalized version of K(II)$\Sigma\Pi$PKC, referred to as K(V)$\Sigma\Pi$PKC. We finally present a new decoding algorithm DA(III) and show that, in K(V)$\Sigma\Pi$PKC with DA(III), the relation $r_F\simeq 0, \rho \simeq \frac{2}{3}$ holds, where $r_F$ is the factor ratio that will be defined in this paper. We show that K(V)$\Sigma\Pi$PKC yields higher security compared with K(II)$\Sigma\Pi$PKC.
Last updated:  2013-04-01
Malleable Signatures: Complex Unary Transformations and Delegatable Anonymous Credentials
Melissa Chase, Markulf Kohlweiss, Anna Lysyanskaya, Sarah Meiklejohn
A signature scheme is malleable if, on input a message m and a signature $\sigma$, it is possible to efficiently compute a signature $\sigma'$ on a related message $m' = T(m)$, for a transformation T that is allowable with respect to this signature scheme. Previous work considered various useful flavors of allowable transformations, such as quoting and sanitizing messages. In this paper, we explore a connection between malleable signatures and anonymous credentials, and give the following contributions: -We define and construct malleable signatures for a broad category of allowable transformation classes, with security properties that are stronger than those that have been achieved previously. Our construction of malleable signatures is generically based on malleable zero-knowledge proofs, and we show how to instantiate it under the Decision Linear assumption. -We construct delegatable anonymous credentials from signatures that are malleable with respect to an appropriate class of transformations; we also show that our construction of malleable signatures works for this class of transformations. The resulting concrete instantiation is the first to achieve security under a standard assumption (Decision Linear) while also scaling linearly with the number of delegations.
Last updated:  2013-04-01
Cryptanalysis of RC4(n,m) Stream Cipher
Mohammad Ali Orumiehchiha, Josef Pieprzyk, Elham Shakour, Ron Steinfeld
$RC4(n,m)$ is a stream cipher based on RC4 designed by G. Gong et al. It can be seen as a generalization of the famous RC4 stream cipher designed by Ron Rivest. The authors of $RC4(n,m)$ claim that the cipher resists all the attacks that are successful against the original RC4. This paper reveals cryptographic weaknesses of the $RC4(n,m)$ stream cipher. We develop two attacks. The first is based on non-randomness of the internal state and distinguishes the cipher from a truly random one using an algorithm that has access to $2^{4\cdot n}$ bits of the keystream. The second attack exploits the low diffusion of bits in the KSA and PRGA algorithms and recovers all bytes of the secret key. This attack works only if the initial value of the cipher can be manipulated. Apart from the secret key, the cipher uses two other inputs, namely the initial value and the initial vector. Although these inputs are fixed in the cipher specification, some applications may allow the inputs to be under the attacker's control. Assuming that the attacker can control the initial value, we show a distinguisher for the cipher and a secret-key recovery attack that, for an $L$-bit secret key, recovers it in about $(L/n)\cdot 2^n$ steps. The attack has been implemented on a standard PC and can reconstruct the secret key of RC4(8,32) in less than a second.
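The quoted key-recovery cost of $(L/n)\cdot 2^n$ steps is easy to tabulate; a quick sanity check (the 128-bit key length here is an assumed example, not a figure from the paper):

```python
# Key-recovery cost (L/n) * 2^n from the abstract, tabulated for a few
# word sizes n.  L = 128 bits is an assumption for illustration; the
# paper's implemented target is RC4(8, 32).
def recovery_steps(L_bits, n):
    return (L_bits // n) * 2**n

for n in (8, 16, 32):
    print(n, recovery_steps(128, n))
```

For $n=8$ this is $16\cdot 256 = 4096$ steps, consistent with the sub-second key reconstruction reported in the abstract.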
Last updated:  2013-05-21
A generic construction for voting correctness at minimum cost - Application to Helios
Veronique Cortier, David Galindo, Stephane Glondu, Malika Izabachene
Most voting schemes aim at providing verifiability: voters should be able to check that their ballots did contribute to the outcome (individual verifiability) and that the tallying authorities did their job properly (universal verifiability). Surprisingly, verifiability still does not answer a very simple and natural question: how can I be sure that the published result corresponds to the (sum of) intended votes of the voters? This property is called correctness by Juels, Catalano, and Jakobsson. Actually, even a prominent voting system like Helios does not achieve correctness in the case of a dishonest bulletin board, since it may add ballots. We generalize the aforementioned definition of correctness to account for a malicious bulletin board (full correctness) and we provide a generic construction that transforms a correct voting scheme into a fully correct voting scheme. This construction simply requires sending credentials to the voters, with no additional infrastructure. We further provide a simple and natural criterion that implies voting correctness, which can then be turned into full correctness via our construction. As an application, we build a variant of Helios that is fully correct, verifiable and private. Real-world elections often require threshold cryptosystems so that any t out of l trustees can proceed to tallying. We describe a fully distributed (with no dealer) threshold cryptosystem suitable for Helios (in particular, suitable for partial decryption). In doing so we happen to revisit the seminal multi-authority election system of Cramer, Gennaro and Schoenmakers. Altogether, we provide the first proof of privacy, verifiability and correctness for a fully distributed Helios voting scheme (and its enhanced version with credentials), together with its detailed description. This also implies, to our knowledge, the first formal proofs of privacy, verifiability and correctness for the scheme by Cramer et al.
Last but not least, we provide an open source implementation of our variant of Helios.
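The t-out-of-l mechanics the abstract refers to can be illustrated with a baseline threshold ElGamal sketch that uses a trusted dealer (the paper's contribution is precisely to remove the dealer; this only shows the Shamir-sharing and Lagrange-in-the-exponent machinery such schemes build on, over a toy subgroup):

```python
import random

# Baseline t-of-l threshold ElGamal WITH a trusted dealer -- only the
# partial-decryption / recombination mechanics, not the paper's dealerless
# construction.  Toy group: order-Q subgroup of Z_P^*, P = 2Q + 1.
Q, P = 1019, 2039
G = 4                                     # a square mod P, so of order dividing Q
rng = random.Random(7)

def deal_shares(x, t, l):
    """Shamir-share the secret key x over Z_Q among l trustees, threshold t."""
    coeffs = [x] + [rng.randrange(Q) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, j, Q) for j, c in enumerate(coeffs)) % Q)
            for i in range(1, l + 1)]

def encrypt(h, m):
    r = rng.randrange(1, Q)
    return pow(G, r, P), m * pow(h, r, P) % P

def threshold_decrypt(ct, subset):
    """Combine partial decryptions c1^s_i with Lagrange coefficients."""
    c1, c2 = ct
    d = 1
    for i, (xi, si) in enumerate(subset):
        lam = 1
        for j, (xj, _) in enumerate(subset):
            if i != j:
                lam = lam * (-xj) % Q * pow((xi - xj) % Q, Q - 2, Q) % Q
        d = d * pow(pow(c1, si, P), lam, P) % P
    return c2 * pow(d, P - 2, P) % P      # c2 / c1^x

x = rng.randrange(1, Q)                   # dealer's secret key
h = pow(G, x, P)                          # election public key
shares = deal_shares(x, t=3, l=5)
m = pow(G, 123, P)                        # message encoded in the subgroup
ct = encrypt(h, m)
print(threshold_decrypt(ct, shares[:3]) == m)  # -> True
```

Any 3 of the 5 shares recombine correctly; in a distributed key generation each trustee would instead contribute its own polynomial so that no single party ever knows x.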
Last updated:  2013-03-30
Distinguishing Attacks on RC4 and A New Improvement of the Cipher
Jing Lv, Bin Zhang, Dongdai Lin
RC4, designed by Rivest in 1987, is the most widely deployed stream cipher in practical applications. In this paper, two new classes of statistical biases inherent in RC4 are described, and it is shown that the RC4 keystream is distinguishable from random no matter how many initial bytes have been dumped. RC4A, proposed by Paul and Preneel at FSE 2004 to strengthen the security of RC4, is also found to be vulnerable to similar attacks. Instead, a new pseudorandom bit generator RC4B is proposed, which is believed to provide better immunity against the known attacks.
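For reference, the generator whose keystream biases such distinguishers exploit is textbook RC4 (this is plain RC4, not the proposed RC4B):

```python
def rc4_keystream(key, nbytes):
    """Textbook RC4: KSA then PRGA.  Reference implementation only; the
    paper's distinguishers show this keystream is biased no matter how
    many initial bytes are dropped."""
    S = list(range(256))
    j = 0
    for i in range(256):                       # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = []
    for _ in range(nbytes):                    # pseudorandom generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

print(rc4_keystream(b"Key", 10).hex())
```

The output for the key "Key" matches the widely published RC4 test vector, which is a quick way to validate the implementation before experimenting with bias measurements.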
Last updated:  2014-02-26
Machine-Generated Algorithms, Proofs and Software for the Batch Verification of Digital Signature Schemes
Joseph A. Akinyele, Matthew Green, Susan Hohenberger, Matthew W. Pagano
As devices everywhere increasingly communicate with each other, many security applications will require low-bandwidth signatures that can be processed quickly. Pairing-based signatures can be very short, but are often costly to verify. Fortunately, they also tend to have efficient batch verification algorithms. Finding these batching algorithms by hand, however, can be tedious and error prone. We address this by presenting AutoBatch, an automated tool for generating batch verification code in either Python or C++ from a high level representation of a signature scheme. AutoBatch outputs both software and, for transparency, a LaTeX file describing the batching algorithm and arguing that it preserves the unforgeability of the original scheme. We tested AutoBatch on over a dozen pairing-based schemes to demonstrate that a computer could find competitive batching solutions in a reasonable amount of time. Indeed, it proved highly competitive. In particular, it found an algorithm that is significantly faster than a batching algorithm from Eurocrypt 2010. Another novel contribution is that it handles cross-scheme batching, where it searches for a common algebraic structure between two distinct schemes and attempts to batch them together. In this work, we expand upon an extended abstract on AutoBatch appearing in ACM CCS 2012 in a number of ways. We add a new loop-unrolling technique and show that it helps cut the batch verification cost of one scheme by roughly half. We describe our pruning and search algorithms in greater detail, including pseudocode and diagrams. All experiments were also re-run using the RELIC pairing library. We compare those results to our earlier results using the MIRACL library, and discuss why RELIC outperforms MIRACL in all but two cases. Automated proofs of several new batching algorithms are also included. 
AutoBatch is a useful tool for cryptographic designers and implementors, and to our knowledge, it is the first attempt to outsource to machines the design, proof writing and implementation of signature batch verification schemes.
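The batch verifiers AutoBatch emits are pairing-based, but the core trick underneath is the classic small-exponents test of Bellare, Garay and Rabin: combine many verification equations with random short exponents so one multi-exponentiation check replaces many individual ones. A toy sketch over a small prime field (parameters illustrative, nowhere near cryptographic strength):

```python
import random

# Small-exponents batch test: instead of checking y_i == g^x_i one by one,
# check  g^(sum r_i * x_i) == prod y_i^r_i  for random short r_i.  A bad
# claim slips through with probability about 2^-ell.
def batch_verify(g, p, claims, ell=20, rng=random):
    """claims: list of (x, y) pairs purportedly satisfying y = g^x mod p."""
    rs = [rng.randrange(1, 2**ell) for _ in claims]
    lhs = pow(g, sum(r * x for r, (x, _) in zip(rs, claims)) % (p - 1), p)
    rhs = 1
    for r, (_, y) in zip(rs, claims):
        rhs = rhs * pow(y, r, p) % p
    return lhs == rhs

p, g = 1000003, 2                 # toy parameters, illustration only
claims = [(x, pow(g, x, p)) for x in (11, 22, 33, 44)]
print(batch_verify(g, p, claims))  # -> True
```

A valid batch always passes, since both sides equal $g^{\sum r_i x_i}$; the random exponents are what prevent two individually invalid claims from cancelling each other out.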
Last updated:  2013-04-05
Cryptanalysis of Some Double-Block-Length Hash Modes of Block Ciphers with $n$-Bit Block and $n$-Bit Key
Deukjo Hong, Daesung Kwon
In this paper, we present attacks on DBL (double-block-length) hash modes of block ciphers with $n$-bit key and $n$-bit block. Our preimage attack on the hash function of the MDC-4 scheme requires time complexity $2^{3n/2}$, which significantly improves on previous results. Our collision attack on the hash function of the MJH scheme has time complexity less than $2^{124}$ for $n = 128$. Our preimage attack on the compression function of the MJH scheme finds a preimage with time complexity $2^n$. It is converted to a preimage attack on the hash function with time complexity $2^{3n/2+2}$. Our preimage attack on the compression function of Mennink's scheme finds a preimage with time complexity $2^{3n/2}$. It is converted to a preimage attack on the hash function with time complexity $2^{7n/4+1}$. These attacks are helpful for understanding the security of the hash modes together with their security proofs.
Last updated:  2013-03-30
On the Classification of Differential Invariants for Multivariate Post-Quantum Cryptosystems
Ray Perlner, Daniel Smith-Tone
Multivariate Public Key Cryptography (MPKC) has become one of a few options for security in the quantum model of computing. Though a few multivariate systems have resisted years of effort from the cryptanalytic community, many such systems have fallen to a surprisingly small pool of techniques. There have been several recent attempts at formalizing more robust security arguments in this area, with varying degrees of applicability. We present an extension of one such recent measure of security against a differential adversary, which has the benefit of being immediately applicable in a general setting to unmodified multivariate schemes.
Last updated:  2013-03-30
On the Applicability of Time-Driven Cache Attacks on Mobile Devices (Extended Version)
Raphael Spreitzer, Thomas Plos
Cache attacks are known to be sophisticated attacks against cryptographic implementations on desktop computers. Recently, such attacks have also been investigated on testbeds with processors of the kind employed in mobile devices. In this work we investigate the applicability of Bernstein's timing attack and the cache-collision attack by Bogdanov et al. in real environments on three state-of-the-art mobile devices: an Acer Iconia A510, a Google Nexus S, and a Samsung Galaxy SIII. We show that T-table based implementations of the Advanced Encryption Standard (AES) leak enough timing information on these devices to recover parts of the secret key using Bernstein's timing attack. We also show that systems with a cache-line size larger than 32 bytes exacerbate the cache-collision attack by Bogdanov et al.
Last updated:  2014-02-04
Confined Guessing: New Signatures From Standard Assumptions
Florian Böhl, Dennis Hofheinz, Tibor Jager, Jessica Koch, Christoph Striecks
We put forward a new technique to construct very efficient and compact signature schemes. Our technique combines several instances of an only mildly secure signature scheme to obtain a fully secure scheme. Since the mild security notion we require is much easier to achieve than full security, we can combine our strategy with existing techniques to obtain a number of interesting new (stateless and fully secure) signature schemes. Concretely, we get: * A scheme based on the computational Diffie-Hellman (CDH) assumption in pairing-friendly groups. Signatures contain O(1) and verification keys O(log(k)) group elements, where k is the security parameter. Our scheme is the first CDH-based scheme with such compact verification keys. * A scheme based on the (non-strong) RSA assumption in which both signatures and verification keys contain O(1) group elements. Our scheme is significantly more efficient than existing RSA-based schemes. * A scheme based on the Short Integer Solutions (SIS) assumption. Signatures contain O(log(k) m) and verification keys O(n m) Z_p-elements, where p may be polynomial in k, and n, m denote the usual SIS matrix dimensions. Compared to state-of-the-art SIS-based schemes, this gives very small verification keys, at the price of slightly larger signatures. In all cases, the involved constants are small, and the arising schemes provide significant improvements upon state-of-the-art schemes. The only price we pay is a rather large (polynomial) loss in the security reduction. However, this loss can be significantly reduced at the cost of an additive term in signature and verification key size.
Last updated:  2013-03-30
Fast Collision Attack on MD5
Tao Xie, Fanbao Liu, Dengguo Feng
We presented the first single-block collision attack on MD5, with complexity of $2^{47}$ MD5 compressions, and posted the challenge for another completely new one in 2010. Last year, Stevens presented a single-block collision attack answering our challenge, with complexity of $2^{50}$ MD5 compressions. We really appreciate Stevens's hard work. However, it is a pity that he did not find a better solution than our original one, let alone a completely new one or the optimal solution that we preserved and have been hoping someone would find, whose collision complexity is about $2^{41}$ MD5 compressions. In this paper, we propose a method for choosing the optimal input difference for generating MD5 collision pairs. First, we divide the sufficient conditions into two classes, strong conditions and weak conditions, by the degree of difficulty of condition satisfaction. Second, we prove that strong conditions exist in only 24 steps (one and a half rounds) under specific conditions, by utilizing two weaknesses of MD5's compression function, namely difference inheriting and message expanding. Third, there should be no difference scaling after state word $q_{25}$, so that each differential path contains the least number of strong conditions; in this way we deduce the distribution of strong conditions for each input difference pattern. Finally, we choose the input difference with the least number of strong conditions and the largest number of free message words. We implement the most efficient 2-block MD5 collision attack, which needs only about $2^{18}$ MD5 compressions to find a collision pair, and show a single-block collision attack with complexity $2^{41}$.
Last updated:  2013-08-16
Highly-Scalable Searchable Symmetric Encryption with Support for Boolean Queries
David Cash, Stanislaw Jarecki, Charanjit Jutla, Hugo Krawczyk, Marcel Rosu, Michael Steiner
This work presents the design, analysis and implementation of the first sub-linear searchable symmetric encryption (SSE) protocol that supports conjunctive search and general Boolean queries on symmetrically-encrypted data and that scales to very large data sets and arbitrarily-structured data including free text search. To date, work in this area has focused mainly on single-keyword search. For the case of conjunctive search, prior SSE constructions required work linear in the total number of documents in the database and provided good privacy only for structured attribute-value data, rendering these solutions too slow and inflexible for large practical databases. In contrast, our solution provides a realistic and practical trade-off between performance and privacy by efficiently supporting very large databases at the cost of moderate and well-defined leakage to the outsourced server (leakage is in the form of data access patterns, never as direct exposure of plaintext data or searched values). A key aspect of our protocols is that they allow the searcher to pivot a conjunctive search on the estimated least frequent keyword in the conjunction. We show that a Decisional Diffie-Hellman (DDH) based pseudo-random function can be used not just to implement search tokens but also to hide the query access pattern of non-pivot, and hence possibly highly frequent, keywords in conjunctive queries. We present a formal cryptographic analysis of the privacy and security of our protocols and establish precise upper bounds on the allowed leakage. To demonstrate the real-world practicality of our approach, we provide performance results of a prototype applied to several large representative data sets.
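The standard DDH-based PRF is $F_k(w) = H(w)^k$ in a prime-order group, where $H$ hashes into the group; a toy sketch of this classic construction (illustrative subgroup parameters, and not necessarily the paper's exact instantiation):

```python
import hashlib

# DDH-style PRF F_k(w) = H(w)^k over the order-Q subgroup of Z_P^*,
# P = 2Q + 1 a safe prime.  Toy parameters for illustration only.
Q, P = 1019, 2039

def hash_to_group(w):
    """Hash a keyword into the order-Q subgroup (square the digest mod P)."""
    h = int.from_bytes(hashlib.sha256(w.encode()).digest(), "big") % P
    return pow(h, 2, P)

def prf(k, w):
    """Deterministic per (k, w); without k, outputs for different keywords
    are unlinkable under DDH -- the property used to hide the access
    pattern of non-pivot keywords."""
    return pow(hash_to_group(w), k, P)

k = 777 % Q                    # PRF key, an exponent in [0, Q)
print(prf(k, "keyword"), prf(k, "other"))
```

The same function doubles as a search-token generator: the server can evaluate tokens it is given but cannot compute $F_k$ on keywords of its own choosing.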
Last updated:  2013-03-28
On secure embedded token design (Long Version) -- Quasi-looped Yao circuits and bounded leakage
Simon Hoerder, Kimmo Järvinen, Dan Page
Within the broader context of mobile and embedded computing, the design of practical, secure tokens that can store and/or process security-critical information remains an ongoing challenge. One aspect of this challenge is the threat of information leakage through side-channel attacks, which is exacerbated by any resource constraints. Although any countermeasure can be of value, it seems clear that approaches providing robust guarantees are most attractive. Along these lines, this paper extends previous work on the use of Yao circuits via two contributions. First, we show how careful analysis can fix the maximum number of traces acquired during a DPA attack, effectively bounding leakage from a Yao-based token: for a low enough bound, the token can therefore be secured via conventional (potentially less robust) countermeasures. To achieve this we use modularised Yao circuits, which also support our second contribution: the first Yao-based implementation of a secure authentication payload, namely HMAC based on SHA.
Last updated:  2013-06-11
Single Password Authentication
Tolga Acar, Mira Belenkiy, Alptekin Küpçü
Users frequently reuse their passwords when authenticating to various online services. Combined with the use of weak passwords or honeypot/phishing attacks, this brings high risks to the security of the user's account information. In this paper, we propose several protocols that allow a user to use a single password to authenticate to multiple services securely. All our constructions provably protect the user from dictionary attacks on the password, and from cross-site impersonation or honeypot attacks by the online service providers. Our solutions assume the user has access to either an untrusted online cloud storage service (as per Boyen [14]), or a mobile storage device that is trusted until stolen. In the cloud storage scenario, we consider schemes that optimize for either storage server or online service performance, as well as for anonymity and unlinkability of the user's actions. In the mobile storage scenario, we minimize the assumptions we make about the capabilities of the mobile device: we do not assume synchronization, tamper resistance, special or expensive hardware, or extensive cryptographic capabilities. Most importantly, the user's password remains secure even after the mobile device is stolen. Our protocols provide another layer of security against malware and phishing. To the best of our knowledge, we are the first to propose such a varied set of provably secure password-based authentication schemes. Lastly, we argue that our constructions are relatively easy to deploy, especially if a few single sign-on services (e.g., Microsoft, Google, Facebook) adopt our proposal.
Last updated:  2013-03-28
On generalized semi-bent (and partially bent) Boolean functions
Brajesh Kumar Singh
In this paper, we obtain a characterization of generalized Boolean functions based on spectral analysis. We investigate a relationship between the Walsh-Hadamard spectrum and $\sigma_f$, the sum-of-squares-modulus indicator (SSMI), of a generalized Boolean function. It is demonstrated that $\sigma_f = 2^{2n + s}$ for every $s$-plateaued generalized Boolean function in $n$ variables. Two classes of generalized semi-bent Boolean functions are constructed. We construct a class of generalized semi-bent functions in $(n+1)$ variables from generalized semi-bent functions in $n$ variables and identify a subclass of it for which $\sigma_f$ and $\triangle_{f}$ both attain their optimal values. Finally, some constructions of generalized partially bent Boolean functions are given.
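For ordinary (non-generalized) Boolean functions the identity $\sigma_f = 2^{2n+s}$ can be checked directly from the Walsh-Hadamard spectrum, using $\sigma_f = 2^{-n}\sum_a W_f(a)^4$; a small sanity check on an $n=2$ bent function (which is 0-plateaued):

```python
# Sanity check of sigma_f = 2^(2n+s) for s-plateaued functions, in the
# classical (non-generalized) Boolean setting, where the sum-of-squares
# indicator satisfies sigma_f = 2^(-n) * sum_a W_f(a)^4.
def parity(a, x):
    return bin(a & x).count("1") & 1

def walsh_spectrum(f, n):
    return [sum((-1) ** (f(x) ^ parity(a, x)) for x in range(2**n))
            for a in range(2**n)]

def ssi(f, n):
    return sum(w**4 for w in walsh_spectrum(f, n)) >> n  # 2^-n * sum W^4

# f(x1, x2) = x1*x2 is bent (s = 0) in n = 2 variables:
print(ssi(lambda x: ((x >> 1) & x) & 1, 2))  # 2^(2n) = 16
```

An $s$-plateaued function has $|W_f(a)|\in\{0, 2^{(n+s)/2}\}$ with exactly $2^{n-s}$ nonzero values by Parseval, so $\sum_a W_f(a)^4 = 2^{3n+s}$ and $\sigma_f = 2^{2n+s}$; e.g. a linear function in $n=2$ variables is $n$-plateaued and gives $\sigma_f = 2^{3n} = 64$.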
Last updated:  2013-09-11
A New Security and Privacy Framework for RFID In Cloud Computing
Süleyman Kardas, Serkan Çelik, Muhammed Ali Bingöl, Albert Levi
RFID is a leading technology that has been rapidly deployed in several daily-life applications such as payment, access control, ticketing, and e-passports, which require strong security and privacy mechanisms. However, RFID systems commonly have limited computational capacity, poor resources and inefficient data management, so there is a pressing need to address these issues. Cloud computing is one of the fastest growing segments of the IT industry and can provide a cost-effective technology and information solution for handling and using data collected with RFID. As more and more information on individuals and companies is placed in the cloud, concerns are beginning to grow about just how safe an environment it is. Therefore, while integrating RFID into the cloud, the security and privacy of the tag owner must be considered. Motivated by this need, we first provide a security and privacy model for RFID technology in cloud computing. In this model, we define the capabilities of the adversary and then give the definitions of security and privacy. After that we propose an example of an RFID authentication protocol in cloud computing. We prove that the proposal is narrow strong private+ in our privacy model.
Last updated:  2014-04-10
Provably Secure LWE Encryption with Smallish Uniform Noise and Secret
Daniel Cabarcas, Florian Göpfert, Patrick Weiden
In this paper we present the (to the best of our knowledge) first LWE-based encryption scheme that removes the need for Gaussian sampling for the error: the discrete Gaussian distribution is replaced by the uniform distribution on a (small) set, while at the same time the underlying worst-case hardness is preserved. This shows that provable security and efficiency do not necessarily have to exclude each other. We give an asymptotic parameter instantiation for our scheme, as well as some hardness results for LWE which might be of independent interest.
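A minimal Regev-style sketch of the idea, with the error drawn uniformly from $\{-B,\dots,B\}$ rather than from a discrete Gaussian (toy parameters for illustration, not the paper's instantiation):

```python
import random

# Minimal Regev-style LWE encryption of one bit with UNIFORM small noise
# instead of a discrete Gaussian -- the flavor of scheme the abstract
# describes.  Toy parameters; not a secure instantiation.
n, q, B = 32, 3329, 2
rng = random.Random(1)

def keygen():
    return [rng.randrange(q) for _ in range(n)]            # secret s

def encrypt(s, bit):
    a = [rng.randrange(q) for _ in range(n)]
    e = rng.randrange(-B, B + 1)                           # uniform error
    b = (sum(ai * si for ai, si in zip(a, s)) + e + bit * (q // 2)) % q
    return a, b

def decrypt(s, ct):
    a, b = ct
    d = (b - sum(ai * si for ai, si in zip(a, s))) % q
    return 1 if q // 4 < d < 3 * q // 4 else 0             # round to 0 or q/2

s = keygen()
print([decrypt(s, encrypt(s, m)) for m in (0, 1, 1, 0)])   # -> [0, 1, 1, 0]
```

Decryption recovers $e + \text{bit}\cdot\lfloor q/2\rfloor \bmod q$, and since $|e| \le B \ll q/4$ the rounding is always correct; the paper's point is that this noise shape can be chosen while preserving worst-case hardness.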
Last updated:  2013-12-04
Search Pattern Leakage in Searchable Encryption: Attacks and New Construction
Chang Liu, Liehuang Zhu, Mingzhong Wang, Yu-an Tan
Searching on remote encrypted data (commonly known as \textit{searchable encryption}) has become an important issue in secure data outsourcing, since it allows users to outsource encrypted data to an untrusted third party while maintaining the capability of keyword search on the data. Searchable encryption can be achieved using the classical method called oblivious RAM, but the resultant schemes are too inefficient to be applied in real-world scenarios (e.g., cloud computing). Recently, a number of efficient searchable encryption schemes have been proposed under weaker security guarantees. Such schemes, however, still leak statistical information about the users' search pattern. In this paper, we first present two concrete attack methods to show that search pattern leakage allows an adversary with some auxiliary knowledge to uncover the underlying keywords of user queries. To address this issue, we then develop a grouping-based construction (GBC) to transform an existing searchable encryption scheme into a new scheme hiding the search pattern. Finally, experiments based on a real-world dataset demonstrate the effectiveness of our attack methods and the feasibility of our construction.
Last updated:  2013-03-26
A Non Asymptotic Analysis of Information Set Decoding
Yann Hamdaoui, Nicolas Sendrier
We propose here a non-asymptotic complexity analysis of some variants of information set decoding. In particular, we give this analysis for the two recent variants (published by May, Meurer and Thomae in 2011 and by Becker, Joux, May and Meurer in 2012) for which only an asymptotic analysis was available. The purpose is to provide a simple and accurate estimate of the complexity to facilitate the parameter selection for code-based cryptosystems. We implemented those estimates and give a comparison at the end of the paper.
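The simplest non-asymptotic estimate of this kind is for Prange's original information set decoding, the crude baseline that the MMT and BJMM analyses refine; a sketch, where the per-iteration cost model is an assumption for illustration:

```python
from math import comb, log2

# Non-asymptotic cost estimate for Prange's original information set
# decoding: expected iterations until a random size-k information set
# misses all w error positions, times a crude Gaussian-elimination cost.
# (The paper's subject, MMT/BJMM, is far more intricate than this.)
def prange_log2_cost(n, k, w):
    iterations = comb(n, k) / comb(n - w, k)   # 1 / success probability
    gauss = n * (n - k) ** 2                   # assumed per-iteration cost
    return log2(iterations * gauss)

# Classic McEliece-style toy parameters (illustrative, not recommended):
print(round(prange_log2_cost(1024, 524, 50), 1))
```

Estimates like this, evaluated exactly from binomial coefficients rather than asymptotics, are precisely what parameter selection for code-based cryptosystems needs.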
Last updated:  2014-05-14
Completeness Theorems for All Finite Stateless 2-Party Primitives
Daniel Kraschewski
Since Kilian showed in 1988 that oblivious transfer (OT) is complete in the sense that every secure multi-party computation can be realized from this primitive, cryptographers are working on reductions of OT to various other primitives. A long-standing open question in this context is the classification of finite stateless 2-party primitives (so-called "cryptogates"), i.e. trusted black boxes that can be jointly queried by two parties, have finite input and output alphabets, and do not change behavior depending on time or input history. Over the decades, completeness criteria have been found for deterministic cryptogates (i.e. primitives without internal randomness), noisy channels, and symmetric (i.e., both parties receive the same output) or asymmetric (i.e., only one party receives any output at all) randomized cryptogates. However, the known criteria for randomized primitives other than noisy channels only hold in presence of passive adversaries (i.e., even corrupted parties still follow the protocol). We complete this line of research by providing simple but comprehensive combinatorial completeness criteria for ALL finite stateless 2-party primitives. I.e., for the first time there are completeness criteria for randomized primitives that are neither symmetric nor asymmetric (but give different outputs to the querying parties), and we overcome the limitation that previous results for randomized primitives with input from BOTH parties only regarded passive adversaries. A fundamental tool of our approach is a powerful lemma from real algebraic geometry, which allows us to base a cryptographic security proof on a rather "game-theoretic" approach. As a corollary of our work, every non-complete example of a finite stateless 2-party primitive is essentially symmetric. This relationship between non-completeness and symmetric output behavior was previously only known for deterministic cryptogates.
Last updated:  2013-03-26
Interactive Coding, Revisited
Kai-Min Chung, Rafael Pass, Sidharth Telang
How can we encode a communication protocol between two parties to become resilient to adversarial errors on the communication channel? This question dates back to the seminal works of Shannon and Hamming from the 1940's, initiating the study of error-correcting codes (ECC). But, even if we encode each message in the communication protocol with a ``good'' ECC, the error rate of the encoded protocol becomes poor (namely $O(1/m)$ where $m$ is the number of communication rounds). Towards addressing this issue, Schulman (FOCS'92, STOC'93) introduced the notion of \emph{interactive coding}. We argue that whereas the method of separately encoding each message with an ECC ensures that the encoded protocol carries the same amount of information as the original protocol, this may no longer be the case if using interactive coding. In particular, the encoded protocol may completely leak a player's private input, even if it would remain secret in the original protocol. Towards addressing this problem, we introduce the notion of \emph{knowledge-preserving interactive coding}, where the interactive coding protocol is required to preserve the ``knowledge'' transmitted in the original protocol. Our main results are as follows. \begin{itemize} \item The method of separately applying ECCs to each message is essentially optimal: No knowledge-preserving interactive coding scheme can have an error rate of $1/m$, where $m$ is the number of rounds in the original protocol. \item If restricting to computationally-bounded (polynomial-time) adversaries, then assuming the existence of one-way functions (resp. subexponentially-hard one-way functions), for every $\epsilon>0$, there exist knowledge-preserving interactive coding schemes with constant error rate and information rate $n^{-\epsilon}$ (resp. $1/\mathrm{polylog}(n)$) where $n$ is the security parameter; additionally, achieving an error rate of even $1/m$ requires the existence of one-way functions. 
\item Finally, even if we restrict to computationally-bounded adversaries, knowledge-preserving interactive coding schemes with constant error rate can have an information rate of at most $o(1/\log n)$. This result applies even to \emph{non-constructive} interactive coding schemes. \end{itemize}
Last updated:  2013-03-26
Improving the Message-ciphertext Rate of Lewko's Fully Secure IBE Scheme
Dingding Jia, Bao Li, Yamin Liu, Qixiang Mei
In Eurocrypt 2012, Lewko presented a fully secure IBE scheme in the prime-order setting based on the decisional linear assumption. We note that a random factor involved in the ciphertext can further be used to hide yet another message, yielding a new fully secure IBE scheme with a better message-ciphertext rate. Similar to Lewko's scheme, we use dual pairing vector spaces in prime-order bilinear groups to simulate the canceling and parameter-hiding properties of the composite-order setting. The security of our scheme is based on the subspace assumption, which can be reduced to the decisional linear assumption. We employ the dual system encryption technique in our security proof.
Last updated:  2014-09-03
Efficient and Secure Algorithms for GLV-Based Scalar Multiplication and their Implementation on GLV-GLS Curves (Extended Version)
Armando Faz-Hernandez, Patrick Longa, Ana H. Sanchez
We propose efficient algorithms and formulas that improve the performance of side-channel protected elliptic curve computations, with special focus on scalar multiplication exploiting the Gallant-Lambert-Vanstone (CRYPTO 2001) and Galbraith-Lin-Scott (EUROCRYPT 2009) methods. Firstly, by adapting Feng et al.'s recoding to the GLV setting, we derive new regular algorithms for variable-base scalar multiplication that offer protection against simple side-channel and timing attacks. Secondly, we propose an efficient, side-channel protected algorithm for fixed-base scalar multiplication which combines Feng et al.'s recoding with Lim-Lee's comb method. Thirdly, we propose an efficient technique that interleaves ARM and NEON-based multiprecision operations over an extension field to improve the performance of GLS curves on modern ARM processors. Finally, we showcase the efficiency of the proposed techniques by implementing a state-of-the-art GLV-GLS curve in twisted Edwards form defined over GF(p^2), which supports a four-dimensional decomposition of the scalar and is fully protected against timing attacks. Analysis and performance results are reported for modern x64 and ARM processors. For instance, we compute a variable-base scalar multiplication in 89,000 and 244,000 cycles on an Intel Ivy Bridge and an ARM Cortex-A15 processor (resp.); using a precomputed table of 6KB, we compute a fixed-base scalar multiplication in 49,000 and 116,000 cycles (resp.); and using a precomputed table of 3KB, we compute a double scalar multiplication in 115,000 and 285,000 cycles (resp.). The proposed techniques represent an important improvement of the state-of-the-art performance of elliptic curve computations, and allow us to set new speed records on several modern processors. The techniques also reduce the cost of adding protection against timing attacks in the computation of GLV-based variable-base scalar multiplication to below 10%.
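The savings from GLV-style decomposition come from splitting one full-length scalar into two half-length ones that share a single doubling chain. A minimal sketch (illustrative only, not the paper's protected algorithms): the group is modeled additively as integers mod n, and the split uses a toy multiplier lam = 2^128 rather than a genuine endomorphism eigenvalue derived by lattice decomposition.

```python
def double_and_add(k, P, add, dbl):
    """Left-to-right double-and-add; returns (k*P, number of doublings)."""
    R, doubles = None, 0
    for bit in bin(k)[2:]:
        if R is not None:
            R = dbl(R); doubles += 1
        if bit == '1':
            R = P if R is None else add(R, P)
    return R, doubles

def interleaved(k1, P, k2, Q, add, dbl):
    """Straus-Shamir trick: k1*P + k2*Q on one shared doubling chain."""
    PQ, R, doubles = add(P, Q), None, 0
    for i in range(max(k1.bit_length(), k2.bit_length()) - 1, -1, -1):
        if R is not None:
            R = dbl(R); doubles += 1
        T = (None, P, Q, PQ)[((k1 >> i) & 1) + 2 * ((k2 >> i) & 1)]
        if T is not None:
            R = T if R is None else add(R, T)
    return R, doubles

# Stand-in group: integers mod n under addition (a real implementation
# would use curve points, and lam would be an endomorphism eigenvalue).
n = 2**255 - 19
add = lambda A, B: (A + B) % n
dbl = lambda A: 2 * A % n
P = 123456789
k = 2**256 - 12345                 # a full-length (256-bit) scalar
lam = 1 << 128                     # toy split: k = k1 + 2^128 * k2
k1, k2 = k & (lam - 1), k >> 128
Q = lam * P % n                    # models psi(P) = lam * P

full, d_full = double_and_add(k % n, P, add, dbl)
half, d_half = interleaved(k1, P, k2, Q, add, dbl)
```

The interleaved computation produces the same result with roughly half the doublings, which is where the GLV/GLS speedup originates.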
Last updated:  2013-03-26
The fragility of AES-GCM authentication algorithm
Shay Gueron, Vlad Krasnov
A new implementation of the GHASH function has been recently committed to a Git version of OpenSSL, to speed up AES-GCM. We identified a bug in that implementation, and made sure it was quickly fixed before trickling into an official OpenSSL trunk. Here, we use this (already fixed) bug as a real example that demonstrates the fragility of AES-GCM’s authentication algorithm (GHASH). One might expect that incorrect MAC tag generation would only cause legitimate message-tag pairs to fail authentication (which is already a serious problem). However, since GHASH is a “polynomial evaluation” MAC, the bug can be exploited for actual message forgery.
Last updated:  2013-03-19
Incentivizing Outsourced Computation
Mira Belenkiy, Melissa Chase, C. Chris Erway, John Jannotti, Alptekin Küpçü, Anna Lysyanskaya
We describe different strategies a central authority, the boss, can use to distribute computation to untrusted contractors. Our problem is inspired by volunteer distributed computing projects such as SETI@home, which outsource computation to large numbers of participants. For many tasks, verifying a task's output requires as much work as computing it again; additionally, some tasks may produce certain outputs with greater probability than others. A selfish contractor may try to exploit these factors, by submitting potentially incorrect results and claiming a reward. Further, malicious contractors may respond incorrectly, to cause direct harm or to create additional overhead for result-checking. We consider the scenario where there is a credit system whereby users can be rewarded for good work and fined for cheating. We show how to set rewards and fines that incentivize proper behavior from rational contractors, and mitigate the damage caused by malicious contractors. We analyze two strategies: random double-checking by the boss, and hiring multiple contractors to perform the same job. We also present a bounty mechanism when multiple contractors are employed; the key insight is to give a reward to a contractor who catches another worker cheating. Furthermore, if we can assume that at least a small fraction h of the contractors are honest (1%–10%), then we can provide graceful degradation for the accuracy of the system and the work the boss has to perform. This is much better than the Byzantine approach, which typically assumes h > 60%.
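The reward/fine trade-off for the double-checking strategy can be illustrated with a toy expected-utility calculation (assumed model: a double-checked cheat is always caught; the numerical values are made up, not from the paper):

```python
def min_fine(reward, cost, p_check):
    """Smallest fine that makes honest work at least as profitable as
    cheating, in a toy model where a double-checked cheat is always
    caught.  Honest utility: reward - cost.  Cheating: (1-p)*reward - p*fine."""
    # reward - cost >= (1 - p)*reward - p*fine   <=>   fine >= cost/p - reward
    return max(0.0, cost / p_check - reward)

# Illustrative numbers: a job costs 10 units of work, pays 12, and the
# boss re-computes a random 5% of submitted jobs.
fine = min_fine(reward=12, cost=10, p_check=0.05)
honest_utility = 12 - 10
cheat_utility = 0.95 * 12 - 0.05 * fine
```

At the threshold fine, a rational contractor is exactly indifferent between cheating and working; any larger fine makes honesty strictly preferable.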
Last updated:  2013-04-29
MiniLEGO: Efficient Secure Two-Party Computation From General Assumptions
Tore Kasper Frederiksen, Thomas Pelle Jakobsen, Jesper Buus Nielsen, Peter Sebastian Nordholt, Claudio Orlandi
One of the main tools to construct secure two-party computation protocols is Yao's garbled circuit technique. Using the cut-and-choose technique, one can get reasonably efficient Yao-based protocols with security against malicious adversaries. At TCC 2009, Nielsen and Orlandi suggested to apply cut-and-choose at the gate level, while previously cut-and-choose was applied on the circuit as a whole. This appealing idea allows for a speed up with practical significance (in the order of the logarithm of the size of the circuit) and has become known as the ``LEGO'' construction. Unfortunately the construction by Nielsen and Orlandi is based on a specific number-theoretic assumption and requires public-key operations per gate of the circuit. The main technical contribution of this work is a new XOR-homomorphic commitment scheme based on oblivious transfer, that we use to cope with the problem of connecting the gates in the LEGO construction. Our new protocol has the following advantages: \begin{enumerate} \item It maintains the efficiency of the LEGO cut-and-choose. \item After a number of seed oblivious transfers linear in the security parameter, the construction uses only primitives from Minicrypt (i.e., private-key cryptography) per gate in the circuit (hence the name MiniLEGO). \item Contrary to the original LEGO, MiniLEGO is compatible with all known optimizations for Yao garbled gates (row reduction, free-XORs, point-and-permute). \end{enumerate}
Last updated:  2013-05-06
Optimal Suspicion Functions for Tardos Traitor Tracing Schemes
Jan-Jaap Oosterwijk, Boris Skoric, Jeroen Doumen
We investigate alternative suspicion functions for Tardos traitor tracing schemes. In the simple decoder approach (computation of a score for every user independently) we derive suspicion functions that optimize a performance indicator related to the sufficient code length $\ell$ in the limit of large coalition size $c$. Our results hold for the Restricted-Digit Model as well as the Combined-Digit Model. The scores depend on information that is usually not available to the tracer -- the attack strategy or the tallies of the symbols received by the colluders. We discuss how such results can be used in realistic contexts. We study several combinations of coalition attack strategy vs. suspicion function optimized against some attack (another attack or the same). In many of these combinations the usual scaling $\ell \propto c^2$ is replaced by a lower power of $c$, e.g. $c^{3/2}$. We find that the interleaving strategy is an especially powerful attack, and the suspicion function tailored against interleaving is effective against all considered attacks.
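For context, the classical symmetric suspicion function (due to Škorić et al.), which work in this line takes as a baseline, can be sketched as follows. The code parameters, the majority-vote collusion strategy, and the uniform bias range below are illustrative assumptions, not the paper's optimized functions.

```python
import math, random

def symmetric_scores(codewords, y, p):
    """Symmetric Tardos suspicion function: score each user independently
    from the pirate copy y and the per-position biases p.  An innocent
    user's score has zero mean; colluders accumulate positive drift."""
    scores = []
    for x in codewords:
        s = 0.0
        for xi, yi, pi in zip(x, y, p):
            if xi == yi:
                s += math.sqrt((1 - pi) / pi) if yi == 1 else math.sqrt(pi / (1 - pi))
            else:
                s -= math.sqrt(pi / (1 - pi)) if yi == 1 else math.sqrt((1 - pi) / pi)
        scores.append(s)
    return scores

random.seed(7)
m, c, n_users = 4000, 3, 20          # code length, coalition size, users
p = [random.uniform(0.1, 0.9) for _ in range(m)]
code = [[1 if random.random() < pi else 0 for pi in p] for _ in range(n_users)]
colluders = list(range(c))
# Majority-vote collusion attack by the first c users.
y = [1 if sum(code[u][i] for u in colluders) * 2 > c else 0 for i in range(m)]
scores = symmetric_scores(code, y, p)
guilty_avg = sum(scores[u] for u in colluders) / c
innocent_avg = sum(scores[u] for u in range(c, n_users)) / (n_users - c)
```

With enough positions the colluders' average score clearly separates from the innocents', which is what accusation thresholds exploit.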
Last updated:  2013-03-15
On the security of a certificateless signature scheme in the standard model
Lin Cheng, Qiaoyan Wen, Zhengping Jin, Hua Zhang
Most certificateless signature schemes without random oracles cannot resist key replacement attacks. To overcome this security weakness, Yu et al. recently proposed a new certificateless signature scheme and claimed that their scheme is provably secure in the standard model. However, in this paper, we show that their scheme is still insecure against key replacement attacks, in which an adversary who replaces the public key of a signer can forge valid signatures on any message for that signer without knowing the signer's partial secret key. Moreover, we show that Yu et al.'s certificateless signature scheme is vulnerable to ``malicious-but-passive'' KGC attacks, in which a malicious KGC can forge valid signatures by embedding extra trapdoors in the system parameters.
Last updated:  2013-03-15
Policy-based Secure Deletion
Christian Cachin, Kristiyan Haralambiev, Hsu-Chun Hsiao, Alessandro Sorniotti
Securely deleting data from storage systems has become difficult today. Most storage space is provided as a virtual resource and traverses many layers between the user and the actual physical storage medium. Operations to properly erase data and wipe out all its traces are typically not foreseen. This paper introduces a cryptographic model for policy-based secure deletion of data in storage systems, whose security relies on the proper erasure of cryptographic keys. Deletion operations are expressed in terms of a deletion policy that describes data destruction through deletion attributes and protection classes. A protection class is first applied to the stored data. Later, a secure deletion operation takes attributes as parameters and triggers the destruction of all data whose protection class is deleted according to the policy. No stored data is ever re-encrypted. A cryptographic construction is presented for deletion policies given by directed acyclic graphs; it is built in a modular way by exploiting the fact that secure deletion schemes may be composed with each other. Finally, the paper describes a prototype implementation of a Linux filesystem with policy-based secure deletion.
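The key-erasure idea behind such constructions can be sketched in a few lines. This toy (hypothetical class and attribute names, single-attribute policies only, none of the paper's DAG composition) only illustrates that erasing one short attribute key renders the wrapped class key, and hence the data it protects, unrecoverable:

```python
import hashlib, secrets

def kdf(key, label):
    """Derive a 32-byte mask from a key and a label (toy KDF)."""
    return hashlib.sha256(key + label).digest()

class DeletionStore:
    """Toy policy-based deletion: a protection class key is wrapped under
    an attribute key; securely erasing the short attribute key 'deletes'
    all data in classes that depend on it."""
    def __init__(self):
        self.attr_keys = {}      # secret, erasable
        self.wrapped = {}        # may live on untrusted bulk storage
    def add_attribute(self, name):
        self.attr_keys[name] = secrets.token_bytes(32)
    def new_class_key(self, cls, attr):
        k = secrets.token_bytes(32)
        mask = kdf(self.attr_keys[attr], cls.encode())
        self.wrapped[cls] = bytes(a ^ b for a, b in zip(k, mask))
        return k
    def recover_class_key(self, cls, attr):
        if attr not in self.attr_keys:
            raise KeyError("attribute key was securely erased")
        mask = kdf(self.attr_keys[attr], cls.encode())
        return bytes(a ^ b for a, b in zip(self.wrapped[cls], mask))
    def secure_delete(self, attr):
        del self.attr_keys[attr]   # models physical erasure of a short key

store = DeletionStore()
store.add_attribute("project_x")            # hypothetical attribute
k1 = store.new_class_key("class_1", "project_x")
recovered = store.recover_class_key("class_1", "project_x")
store.secure_delete("project_x")
try:
    store.recover_class_key("class_1", "project_x")
    deleted = False
except KeyError:
    deleted = True
```

Note how only a short key is erased while the bulk (wrapped) material never needs re-encryption, mirroring the paper's design goal.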
Last updated:  2013-03-25
Some Fixes To SSH
Xu ZiJie
To defend against some known attacks on Secure Shell (SSH), I propose some fixes to SSH. The fixes include adding a key-producer function and revising the MAC.
Last updated:  2013-10-22
Practical (Second) Preimage Attacks on TCS_SHA-3
Gautham Sekar, Soumyadeep Bhattacharya
TCS\_SHA-3 is a family of four cryptographic hash functions that are covered by a US patent (US 2009/0262925). The digest sizes are 224, 256, 384 and 512 bits. The hash functions use bijective functions in place of standard compression functions. In this paper we describe first and second preimage attacks on the full hash functions. The second preimage attack requires negligible time and the first preimage attack requires $O(2^{36})$ time. In addition to these attacks, we also present a negligible-time second preimage attack on a strengthened variant of TCS\_SHA-3. All the attacks have negligible memory requirements.
Last updated:  2013-08-27
Secure and Constant Cost Public Cloud Storage Auditing with Deduplication
Jiawei Yuan, Shucheng Yu
Data integrity and storage efficiency are two important requirements for cloud storage. Proof of Retrievability (POR) and Proof of Data Possession (PDP) techniques assure data integrity for cloud storage. Proof of Ownership (POW) improves storage efficiency by securely removing unnecessarily duplicated data on the storage server. However, trivial combination of the two techniques, in order to achieve both data integrity and storage efficiency, results in non-trivial duplication of metadata (i.e., authentication tags), which contradicts the objectives of POW. Recent attempts at this problem introduce tremendous computational and communication costs and have also been proven insecure. This calls for a new solution to support efficient and secure data integrity auditing with storage deduplication for cloud storage. In this paper we solve this open problem with a novel scheme based on techniques including polynomial-based authentication tags and homomorphic linear authenticators. Our design allows deduplication of both files and their corresponding authentication tags. Data integrity auditing and storage deduplication are achieved simultaneously. Our proposed scheme is also characterized by constant realtime communication and computational cost on the user side. Public auditing and batch auditing are both supported. Hence, our proposed scheme outperforms existing POR and PDP schemes while providing the additional functionality of deduplication. We prove the security of our proposed scheme based on the Computational Diffie-Hellman problem, the Static Diffie-Hellman problem and the t-Strong Diffie-Hellman problem. Numerical analysis and experimental results on Amazon AWS show that our scheme is efficient and scalable.
Last updated:  2013-03-15
AES-like ciphers: are special S-boxes better than random ones? (Virtual isomorphisms again)
Alexander Rostovtsev
In [eprint.iacr.org/2012/663], the method of virtual isomorphisms of ciphers was applied to the differential/linear cryptanalysis of AES. It was shown that AES appears to be weak against those attacks. That result can be generalized to AES-like ciphers whose diffusion map is a block matrix with block size equal to the S-box size. An S-box is possibly weak if it is affine equivalent to a substitution that has the same cycle type as an affine substitution. The class of possibly weak S-boxes is very large; we do not know whether there exists an S-box that is not possibly weak. The strength of an AES-like cipher is determined by the virtual isomorphism and not by the differential/linear properties of the S-box. So we can assume that special S-boxes have little or no advantage compared to random nonlinear S-boxes. The conjecture is verified by experiments. If the conjecture is true, then searching for the best S-boxes that maximize the cipher's strength against differential and linear attacks combined with virtual isomorphisms is pointless.
Last updated:  2013-04-03
A note on the practical complexity of the NFS in the medium prime case: Smoothness of Norms
Naomi Benger, Manuel Charlemagne, Kefei Chen
During an ongoing examination of the behaviour, in practice, of the Number Field Sieve (NFS) in the medium prime case we have noticed numerous interesting patterns. In this paper we present findings on run-time observations of an aspect of the sieving stage. The contributions of these observations to the computational mathematics community are twofold: firstly, they bring us a step closer to understanding the true practical effectiveness of the algorithm and secondly, they enabled the development of a test for the effectiveness of the polynomials used in the NFS. The results of this work are of particular interest to cryptographers: the run-time of the NFS determines directly the security level of some discrete logarithm problem based protocols, such as those arising in pairing-based cryptography.
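A toy illustration of the underlying notion: a norm contributes a relation to the sieve exactly when it is smooth over the factor base, so comparing smoothness rates is a natural empirical test of a polynomial's quality. The sketch below (plain trial division, illustrative bounds, not the paper's methodology) shows the idea:

```python
def is_smooth(n, bound):
    """True iff every prime factor of n is <= bound (trial division)."""
    n = abs(n)
    for d in range(2, bound + 1):
        while n % d == 0:
            n //= d
    return n == 1

def smooth_fraction(lo, hi, bound):
    """Fraction of integers in [lo, hi) that are bound-smooth; a toy
    stand-in for measuring the smoothness of norms produced during the
    sieving stage."""
    return sum(is_smooth(v, bound) for v in range(lo, hi)) / (hi - lo)
```

Comparing `smooth_fraction` under different bounds (or, in the NFS setting, for norms generated by different polynomials) gives a crude but direct effectiveness measure.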
Last updated:  2013-03-14
High-Performance Scalar Multiplication using 8-Dimensional GLV/GLS Decomposition
Joppe W. Bos, Craig Costello, Huseyin Hisil, Kristin Lauter
This paper explores the potential for using genus~2 curves over quadratic extension fields in cryptography, motivated by the fact that they allow for an 8-dimensional scalar decomposition when using a combination of the GLV/GLS algorithms. Besides lowering the number of doublings required in a scalar multiplication, this approach has the advantage of performing arithmetic operations in a 64-bit ground field, making it an attractive candidate for embedded devices. We found cryptographically secure genus 2 curves which, although susceptible to index calculus attacks, aim for the standardized 112-bit security level. Our implementation results on both high-end architectures (Ivy Bridge) and low-end ARM platforms (Cortex-A8) highlight the practical benefits of this approach.
Last updated:  2013-03-13
Key Wrapping with a Fixed Permutation
Dmitry Khovratovich
We present an efficient key wrapping scheme that uses a single wide permutation and does not rely on block ciphers. The scheme is capable of wrapping keys up to 1400 bits long and processing arbitrarily long headers. Our scheme easily delivers the security level of 128 bits or higher with the master key of the same length. The permutation can be taken from the sponge hash functions such as SHA-3 (Keccak), Quark, Photon, Spongent. We also present a simple proof of security within the concept of Deterministic Authenticated Encryption (DAE) introduced by Rogaway and Shrimpton. We extend the setting by allowing the adversary to query the permutation, following the indifferentiability setting used in the security proof of the sponge construction.
Last updated:  2014-01-15
On Weak Keys and Forgery Attacks against Polynomial-based MAC Schemes
Gordon Procter, Carlos Cid
Universal hash functions are commonly used primitives for fast and secure message authentication in the form of Message Authentication Codes (MACs) or Authenticated Encryption with Associated Data (AEAD) schemes. These schemes are widely used and standardised, the most well known being McGrew and Viega's Galois/Counter Mode (GCM). In this paper we identify some properties of hash functions based on polynomial evaluation that arise from the underlying algebraic structure. As a result we are able to describe a general forgery attack, of which Saarinen's cycling attack from FSE 2012 is a special case. Our attack removes the requirement for long messages and applies regardless of the field in which the hash function is evaluated. Furthermore we provide a common description of all published attacks against GCM, by showing that the existing attacks are the result of these algebraic properties of the polynomial-based hash function. We also greatly expand the number of known weak GCM keys and show that almost every subset of the keyspace is a weak key class. Finally, we demonstrate that these algebraic properties and corresponding attacks are highly relevant to GCM/2+, a variant of GCM designed to increase the efficiency in software.
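The algebraic structure behind these forgeries is easy to exhibit in a toy model. The sketch below uses a polynomial-evaluation MAC over a prime field rather than GF(2^128) (an analogue chosen for readability; it is not GCM itself): when the hash key falls in a small-order subgroup, swapping blocks whose positions differ by the key's order preserves the tag, as in Saarinen's cycling attack.

```python
# Toy polynomial-evaluation MAC over a prime field; GHASH has the same
# structure over GF(2^128): tag(m) = sum_i m[i] * h^(i+1) mod p.
p = 2**61 - 1

def poly_mac(blocks, h):
    return sum(b * pow(h, i + 1, p) for i, b in enumerate(blocks)) % p

# A weak key: h of small multiplicative order t, so h^i = h^(i+t).
t = 7                          # t divides p - 1, and t is prime
g = 2
while pow(g, (p - 1) // t, p) == 1:
    g += 1                     # find a non-t-th-power to get order exactly t
h = pow(g, (p - 1) // t, p)

msg = list(range(1, 11))       # ten message blocks
forged = list(msg)
forged[0], forged[t] = forged[t], forged[0]   # swap blocks t positions apart
```

Because h^1 = h^(1+t), the original and forged messages evaluate to the same tag even though they differ, which is exactly the kind of structural forgery the paper generalizes.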
Last updated:  2013-10-04
An architecture for practical actively secure MPC with dishonest majority
Marcel Keller, Peter Scholl, Nigel P. Smart
We present a runtime environment for executing secure programs via a multi-party computation protocol in the preprocessing model. The runtime environment is general and allows arbitrary reactive computations to be performed. A particularly novel aspect is that it automatically determines the minimum number of rounds needed for a computation, and uses this to minimize the overall cost of the computation. We report on experiments with various non-trivial functionalities. We show how, by utilizing the ability of modern processors to execute multiple threads at a time, one can obtain various tradeoffs between latency and throughput.
Last updated:  2013-03-13
A New Method of Choosing Primitive Elements for Brezing-Weng Families of Pairing-Friendly Elliptic Curves
Kisoon YOON
In this paper we present a new method of choosing primitive elements for Brezing-Weng families of pairing-friendly elliptic curves with small rho-value, and we improve on the previously known best rho-values of families for the cases k=16, 22, 28 and 46. Our construction uses fixed discriminants.
Last updated:  2013-03-12
Non-isomorphic Biclique Cryptanalysis and Its Application to Full-Round mCrypton
M. Shakiba, M. Dakhilalian, H. Mala
The biclique attack is a new cryptanalytic technique that brings tools from the area of hash functions to block cipher cryptanalysis. To date, it is the only technique able to analyze the full-round AES cipher in a single-key scenario. In this paper, we introduce the non-isomorphic biclique attack, a modified version of the original biclique attack. In this attack we obtain isomorphic groups of bicliques, each group containing several non-isomorphic bicliques of different dimensions. These bicliques are the result of an asymmetric key partitioning performed according to two sets of key differences. Using this technique it is possible to expand the length of the bicliques or mount an attack with lower data complexity. We find that the lightweight block cipher mCrypton is an appropriate candidate for this technique, and bicliques of up to five rounds can be constructed for it. Furthermore, we use two additional minor techniques, namely pre-computation/re-computation in the biclique construction and the early-abort technique in the matching stage, together with a property observed in the diffusion layer of mCrypton, to further improve the complexity of our attacks on full-round mCrypton-96 and mCrypton-128.
Last updated:  2013-03-12
Limitations of the Meta-Reduction Technique: The Case of Schnorr Signatures
Marc Fischlin, Nils Fleischhacker
We revisit the security of Fiat-Shamir signatures in the non-programmable random oracle model. The well-known proof by Pointcheval and Stern for such signature schemes (Journal of Cryptology, 2000) relies on the ability to re-program the random oracle, and it has been unknown if this property is inherent. Paillier and Vergnaud (Asiacrypt 2005) gave some first evidence of the hardness by showing via meta-reduction techniques that algebraic reductions cannot succeed in reducing key-only attacks against unforgeability to the discrete-log assumptions. We also use meta-reductions to show that the security of Schnorr signatures cannot be proven equivalent to the discrete logarithm problem without programming the random oracle. Our result also holds under the one-more discrete logarithm assumption but applies to a large class of reductions, which we call *single-instance* reductions, subsuming those used in previous proofs of security in the (programmable) random oracle model. In contrast to algebraic reductions, our class allows arbitrary operations, but can only invoke a single resettable adversary instance, making our class incomparable to algebraic reductions. Our main result, however, is about meta-reductions and the question if this technique can be used to further strengthen the separations above. Our answer is negative. We present, to the best of our knowledge for the first time, limitations of the meta-reduction technique in the sense that finding a meta-reduction for general reductions is most likely infeasible. In fact, we prove that finding a meta-reduction against a potential reduction is equivalent to finding a ``meta-meta-reduction'' against the strong existential unforgeability of the signature scheme. This means that the existence of a meta-reduction implies that the scheme must be insecure (against a slightly stronger attack) in the first place.
Last updated:  2013-03-12
Rethinking Definitions of Security for Session Key Agreement
Wesley George, Charles Rackoff
We consider session key agreement (SKA) protocols operating in a public key infrastructure, with pre-specified peers, that take no session ID as input, and output only a session key. Despite much work on SKA, we argue that there is no good definition of security for this (very natural) protocol syntax. The difficulty is that in this setting the adversary may not be able to tell which processes share a key, and thus which session keys may be revealed without trivializing the security condition. We consider security against adversaries that control all network traffic, can register arbitrary public keys, and can retrieve session keys. We do not attempt to mitigate damage from hardware failures, such as session-state compromise, as we aim to improve our understanding of this simpler setting. We give two natural but substantially different game based definitions of security and prove that they are equivalent. Such proofs are rare for SKA. The bulk of this proof consists of showing that, for secure protocols, only compatible processes can be made to share a key. This property is very natural but surprisingly subtle. For comparison, we give a version of our definition in which processes output session IDs and we give strong theorems relating these two types of definitions.
Last updated:  2013-03-20
Multi-bit homomorphic encryption based on learning with errors over rings
Zhang Wei, Liu Shuguang, Yang Xiaoyuan
Based on the ring learning with errors (RLWE) assumption, we provide a new multi-bit somewhat homomorphic encryption scheme. We introduce the canonical embedding to transform a ring element into a vector, such that polynomial multiplication can be performed in $O(n \log n)$ scalar operations, while the ciphertext size is reduced at the same time. The CPA security of this scheme can be reduced to the RLWE assumption.
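The O(n log n) ring multiplication the abstract refers to can be illustrated with a negacyclic number-theoretic transform in Z_q[x]/(x^n + 1). This is a generic NTT sketch rather than the paper's construction, and the tiny parameters (n = 8, q = 17, psi = 3 a primitive 2n-th root of unity) are for illustration only; real schemes use much larger n and q.

```python
def ntt(a, omega, q):
    """Recursive radix-2 number-theoretic transform; len(a) a power of 2,
    omega a primitive len(a)-th root of unity mod q."""
    n = len(a)
    if n == 1:
        return list(a)
    even = ntt(a[0::2], omega * omega % q, q)
    odd = ntt(a[1::2], omega * omega % q, q)
    out, w = [0] * n, 1
    for i in range(n // 2):
        t = w * odd[i] % q
        out[i] = (even[i] + t) % q
        out[i + n // 2] = (even[i] - t) % q
        w = w * omega % q
    return out

def negacyclic_mul(a, b, q, psi):
    """Product in Z_q[x]/(x^n + 1) via the psi-twisted NTT, O(n log n)."""
    n, inv = len(a), lambda x: pow(x, q - 2, q)
    tw = [pow(psi, i, q) for i in range(n)]          # psi-powers for twisting
    fa = ntt([x * t % q for x, t in zip(a, tw)], psi * psi % q, q)
    fb = ntt([x * t % q for x, t in zip(b, tw)], psi * psi % q, q)
    fc = ntt([x * y % q for x, y in zip(fa, fb)], inv(psi * psi % q), q)
    n_inv = inv(n)
    return [c * n_inv % q * inv(t) % q for c, t in zip(fc, tw)]

def schoolbook_negacyclic(a, b, q):
    """Reference O(n^2) multiplication in Z_q[x]/(x^n + 1)."""
    n, c = len(a), [0] * len(a)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < n:
                c[i + j] = (c[i + j] + ai * bj) % q
            else:
                c[i + j - n] = (c[i + j - n] - ai * bj) % q
    return c

q, n, psi = 17, 8, 3      # 3 has order 16 mod 17, so psi^n = -1 mod q
a = [1, 2, 3, 4, 5, 6, 7, 8]
b = [3, 1, 4, 1, 5, 9, 2, 6]
```

The twisting by powers of psi turns the cyclic convolution computed by the NTT into the negacyclic one required by the quotient x^n + 1.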
Last updated:  2013-03-12
How to Hide Circuits in MPC: An Efficient Framework for Private Function Evaluation
Payman Mohassel, Saeed Sadeghian
We revisit the problem of general-purpose \emph{private function evaluation} (PFE) wherein a single party $P_1$ holds a circuit $\C$, while each $P_i$ for $1 \le i \leq n$ holds a private input $x_i$, and the goal is for a subset (or all) of the parties to learn $\C(x_1, \ldots, x_n)$ but nothing else. We put forth a general framework for designing PFE where the task of hiding the circuit and securely evaluating its gates are addressed independently: First, we reduce the task of hiding the circuit topology to oblivious evaluation of a mapping that encodes the topology of the circuit, which we refer to as \emph{oblivious extended permutation} (OEP) since the mapping is a generalization of the permutation mapping. Second, we design a subprotocol for private evaluation of a single gate (PFE for one gate), which we refer to as \emph{private gate evaluation} (PGE). Finally, we show how to naturally combine the two components to obtain efficient and secure PFE. We apply our framework to several well-known general-purpose MPC constructions, in each case, obtaining the most efficient PFE construction to date, for the considered setting. Similar to the previous work we only consider semi-honest adversaries in this paper. \begin{itemize} \item In the \emph{multiparty} case with dishonest majority, we apply our techniques to the seminal GMW protocol~\cite{GMW87} and obtain the first general-purpose PFE with \emph{linear complexity} in the circuit size. \item In the \emph{two-party} case, we transform Yao's garbled circuit protocol~\cite{yao86} into a constant-round two-party PFE. Depending on the instantiation of the underlying subprotocol, we either obtain a two-party PFE with linear complexity that improves on the only other work with similar asymptotic efficiency (Katz and Malka, ASIACRYPT 2011~\cite{katzpfe}), or a two-party PFE that provides the best concrete efficiency to date despite not being linear. \item The above two constructions are for boolean circuits. 
In the case of \emph{arithmetic circuits}, we obtain the first PFE with linear complexity based on any additively homomorphic encryption scheme. \end{itemize} Though each construction uses different techniques, a common feature in all three is that the overhead of hiding the circuit $\C$ is essentially equal to the cost of running the OEP protocol on a vector of size $|\C|$. As a result, to improve efficiency, one can focus on lowering the cost of the underlying OEP protocol. OEP can be instantiated using a singly homomorphic encryption or any general-purpose MPC but we introduce a new construction that we show is significantly more efficient than these alternatives, in practice. The main building block in our OEP construction is an efficient protocol for \emph{oblivious switching network evaluation} (OSN), a generalization of the previously studied oblivious shuffling problem which is of independent interest. Our results noticeably improve the efficiency of previous solutions to oblivious shuffling, yielding a factor of 25 or more gain in computation and communication.
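The extended-permutation abstraction can be made concrete in a few lines; the tiny circuit below is a made-up example showing why the mapping is "extended": fan-out lets one outgoing wire feed several incoming wires, so the mapping need not be a bijection.

```python
def apply_ep(mapping, outgoing):
    """Apply an extended permutation: incoming wire j reads outgoing wire
    mapping[j].  Unlike an ordinary permutation, an outgoing wire may be
    duplicated, so len(mapping) can exceed len(outgoing).  Evaluating
    such a mapping without revealing it is the OEP task."""
    return [outgoing[i] for i in mapping]

# Tiny circuit: inputs x0, x1; gate g0 takes (x0, x1); gate g1 takes
# (g0, x1).  Outgoing wires in order [x0, x1, g0]; incoming wires are
# the inputs of g0 followed by those of g1 -- wire x1 is duplicated.
ep = [0, 1, 2, 1]
incoming = apply_ep(ep, ["x0", "x1", "g0"])
```

Hiding `ep` hides which gate outputs feed which gate inputs, i.e. the circuit topology, while gate functionality is handled separately by the PGE subprotocol.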
Last updated:  2013-03-09
2048XKS-F & 4096XKS-F - Two Software Oriented High Security Block Ciphers
Dieter Schmidt
2048XKS-F (eXtended Key Schedule - Feistel) is a Feistel cipher with a block length of 2048 bits and a key size of 4096 or 8192 bits, respectively. It uses the round function of the Substitution-Permutation-Networks (SPN) 1024 [11] and 1024XKS [12] as the f-function. 4096XKS-F is a Feistel cipher with a block length of 4096 bits and a key size of 8192 or 16384 bits, respectively. It uses the round function of the Substitution-Permutation-Network (SPN) 2048XKS as the f-function. Both 2048XKS-F and 4096XKS-F have 32 rounds. Additionally, there are #define statements in the reference implementation to control which of the functions is compiled first, e.g. the diffusion layer or the S-box layer. In total, there are 6 #define statements in each reference implementation, making up 64 different ciphers. 2048XKS-F and 4096XKS-F are designed for 32-bit microprocessors with an integer hardware multiplier.
Last updated:  2013-03-07
An MQ/Code Cryptosystem Proposal
Leonard J. Schulman
We describe a new trap-door (and PKC) proposal. The proposal is ``multivariate quadratic'' (relies on the hardness of solving systems of quadratic equations); it is also code-based, and uses the code-scrambling technique of McEliece (1978). However, in the new proposal, the error-correcting code is not revealed in the public key, which protects against the leading attacks on McEliece's method.
Last updated:  2013-08-24
Hard-Core Predicates for a Diffie-Hellman Problem over Finite Fields
Nelly Fazio, Rosario Gennaro, Irippuge Milinda Perera, William E. Skeith III
A long-standing open problem in cryptography is proving the existence of (deterministic) hard-core predicates for the Diffie-Hellman problem defined over finite fields. In this paper, we make progress on this problem by defining a very natural variation of the Diffie-Hellman problem over $\mathbb{F}_{p^2}$ and proving the unpredictability of every single bit of one of the coordinates of the secret DH value. To achieve our result, we modify an idea presented at CRYPTO'01 by Boneh and Shparlinski [4] originally developed to prove that the LSB of the elliptic curve Diffie-Hellman problem is hard. We extend this idea in two novel ways: 1. We generalize it to the case of finite fields $\mathbb{F}_{p^2}$; 2. We prove that any bit, not just the LSB, is hard using the list decoding techniques of Akavia et al. [1] (FOCS'03) as generalized at CRYPTO'12 by Duc and Jetchev [6]. In the process, we prove several other interesting results: - Our result also holds for a larger class of predicates, called \emph{segment predicates} in [1]; - We extend the result of Boneh and Shparlinski to prove that every bit (and every segment predicate) of the elliptic curve Diffie-Hellman problem is hard-core; - We define the notion of \emph{partial one-way function} over finite fields $\mathbb{F}_{p^2}$ and prove that every bit (and every segment predicate) of one of the input coordinates for these functions is hard-core.
Last updated:  2015-02-06
New Lattice Based Signature Using The Jordan Normal Form
Hemlata Nagesh, Birendra Kumar Sharma
In this paper it is shown that using the Jordan normal form instead of the Hermite normal form substantially improves the efficiency and the security of the lattice-based signature scheme. In this scheme we also use a new hash function in such a way that the efficiency improvement is obtained without decreasing the security of the function.
Last updated:  2014-05-21
Yet Another Attack On the Chinese Remainder Theorem Based Hierarchical Access Control Scheme
Niu Liu, Shaohua Tang, Lingling Xu
The hierarchical access control scheme based on the Chinese Remainder Theorem [49] (CRTHACS) was supposed to be capable of hiding the hierarchical structure, but Geiselmann et al. [18] showed practical attacks on CRTHACS that reveal the hierarchies it hides. Zou et al. then modified it and gave a new CRTHACS [50] to resist those attacks. Nevertheless, we find that the modified version is still defective if it permits changes of structure, i.e. if the scheme works in a dynamic scenario. In this paper, we describe our attack on the modified version of CRTHACS. We restate the CRTHACS in a more suitable form, making it easier to examine the problem it has. We identify the key characteristic of the vulnerability, which we name double-invariance. We generalize our attack in an algebraic form and apply it to a series of hierarchical cryptographic access control schemes that share the same vulnerability with CRTHACS. We also give a countermeasure to fix this vulnerability.
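The CRT mechanism such schemes build on can be sketched with toy numbers (illustrative only: CRTHACS itself uses large secret moduli and encrypts the class key rather than embedding it directly; the snippet needs Python 3.8+ for modular inverses via pow(x, -1, m)):

```python
from math import prod

def crt(residues, moduli):
    """Chinese Remainder Theorem for pairwise coprime moduli."""
    N = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Ni = N // m
        x += r * Ni * pow(Ni, -1, m)
    return x % N

# Classes A and B (secret moduli 11 and 13) may derive the key K; the
# third residue is a blinding value under an extra modulus so the public
# value C does not equal K outright.
K = 5
C = crt([K, K, 7], [11, 13, 17])

# An authorized class derives K by reducing C modulo its secret modulus;
# a party holding an unrelated modulus (say 19) gets an unrelated value.
```

Each class's single modular reduction yields the key, which is the elegance of the approach; the attacks discussed in the paper exploit invariants of such public values across structure changes, not the CRT arithmetic itself.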
Last updated:  2014-01-31
Two is the fastest prime: lambda coordinates for binary elliptic curves
Thomaz Oliveira, Julio López, Diego F. Aranha, Francisco Rodríguez-Henríquez
In this work, we present new arithmetic formulas for a projective version of the affine point representation $(x,x+y/x),$ for $x\ne 0,$ which leads to an efficient computation of the scalar multiplication operation over binary elliptic curves. A software implementation of our formulas applied to a binary Galbraith-Lin-Scott elliptic curve defined over the field $\mathbb{F}_{2^{254}}$ allows us to achieve speed records for protected/unprotected single/multi-core random-point elliptic curve scalar multiplication at the 127-bit security level. When executed on a Sandy Bridge 3.4GHz Intel Xeon processor, our software is able to compute a single/multi-core unprotected scalar multiplication in $69,500$ and $47,900$ clock cycles, respectively; and a protected single-core scalar multiplication in $114,800$ cycles. These numbers are improved by around 2\% and 46\% on the newer Ivy Bridge and Haswell platforms, respectively, achieving in the latter a protected random-point scalar multiplication in 60,000 clock cycles.
Last updated:  2013-12-11
Blank Digital Signatures
Christian Hanser, Daniel Slamanig
In this paper we present a novel type of digital signatures, which we call blank digital signatures. The basic idea behind this scheme is that an originator can define and sign a message template, describing fixed parts of a message as well as multiple choices for exchangeable parts of a message. One may think of a form with blank fields, where for such fields the originator specifies all the allowed strings to choose from. Then, a proxy is given the power to sign an instantiation of the template signed by the originator by using some secret information. By an instantiation, the proxy commits to one allowed choice per blank field in the template. The resulting message signature can be publicly verified under the originator's and the proxy's signature verification keys. Thereby, no verifying party except the originator and the proxy learns anything about the ``unused'' choices from the message template given a message signature. Consequently, the template is hidden from verifiers. We discuss several applications, provide a formal definition of blank digital signature schemes and introduce a security model. Furthermore, we provide an efficient construction of such a blank digital signature scheme from any secure digital signature scheme, pairing-friendly elliptic curves and polynomial commitments, which we prove secure in our model. We also provide a detailed efficiency analysis of our proposed construction supporting its practicality. Finally, we outline several open issues and extensions for future work.
Last updated:  2013-03-07
An Ideal-Security Protocol for Order-Preserving Encoding
Raluca Ada Popa, Frank H. Li, Nickolai Zeldovich
Order-preserving encryption - an encryption scheme where the sort order of ciphertexts matches the sort order of the corresponding plaintexts - allows databases and other applications to process queries involving order over encrypted data efficiently. The ideal security guarantee for order-preserving encryption put forth in the literature is for the ciphertexts to reveal no information about the plaintexts besides order. Even though more than a dozen schemes have been proposed, all these schemes leak more information than order. This paper presents the first order-preserving scheme that achieves ideal security. Our main technique is mutable ciphertexts, meaning that over time, the ciphertexts for a small number of plaintext values change, and we prove that mutable ciphertexts are needed for ideal security. Our resulting protocol is interactive, with a small number of interactions. We implemented our scheme and evaluated it on microbenchmarks and in the context of an encrypted MySQL database application. We show that in addition to providing ideal security, our scheme achieves 1–2 orders of magnitude higher performance than the state-of-the-art order-preserving encryption scheme, which is less secure than our scheme.
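The "mutable ciphertexts" idea can be illustrated with a deliberately degenerate sketch (this is not the paper's interactive protocol; the class and method names are hypothetical): if an encoding is a value's rank in the server's sorted structure, then inserting a new plaintext forces the codes of some existing values to change, while order among codes is always preserved.

```python
import bisect

class MutableOrderCodes:
    """Toy illustration of mutable order-preserving codes (NOT the paper's
    scheme): a code is a value's rank in a sorted list, so inserting a new
    plaintext mutates the codes of all larger plaintexts."""

    def __init__(self):
        self._sorted = []

    def insert(self, x):
        if x not in self._sorted:
            bisect.insort(self._sorted, x)

    def code(self, x):
        return self._sorted.index(x)   # rank = order-preserving code

ope = MutableOrderCodes()
for v in (10, 5, 20):
    ope.insert(v)
assert ope.code(5) < ope.code(10) < ope.code(20)   # order preserved
before = ope.code(10)
ope.insert(7)                                      # mutation trigger
assert ope.code(10) != before                      # 10's code changed
```

The real scheme confines mutations to a small number of ciphertexts per insertion rather than re-ranking everything, which is what makes it practical.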
Last updated:  2013-06-07
Attribute-Based Encryption for Circuits from Multilinear Maps
Sanjam Garg, Craig Gentry, Shai Halevi, Amit Sahai, Brent Waters
In this work, we provide the first construction of Attribute-Based Encryption (ABE) for general circuits. Our construction is based on the existence of multilinear maps. We prove selective security of our scheme in the standard model under the natural multilinear generalization of the BDDH assumption. Our scheme achieves both Key-Policy and Ciphertext-Policy variants of ABE. Our scheme and its proof of security directly translate to the recent multilinear map framework of Garg, Gentry, and Halevi. This paper is the result of a merge of the works of Garg, Gentry, and Halevi and of Sahai and Waters, and subsumes both these works.
Last updated:  2015-06-04
Oblivious PAKE: Efficient Handling of Password Trials
Franziskus Kiefer, Mark Manulis
In this work we introduce the notion of Oblivious Password based Authenticated Key Exchange (O-PAKE) and a compiler to transform a large class of PAKE protocols into O-PAKE protocols. O-PAKE allows a client that shares one password with a server to use a set of passwords within one PAKE session. It succeeds if and only if one of those input passwords matches the one stored on the server side. The term oblivious is used to emphasise that no information about any password, input by the client, is made available to the server. Using special processing techniques, our O-PAKE compiler reaches nearly constant run time on the server side, independent of the size of the client's password set. We prove security of the O-PAKE compiler under standard assumptions using the latest game-based PAKE model by Abdalla, Fouque and Pointcheval (PKC 2005), tailored to our needs. We identify the requirements that PAKE protocols must satisfy in order to suit the compiler and give two concrete O-PAKE instantiations.
Last updated:  2013-03-05
Direct Proof of Security of Wegman-Carter Authentication with Partially Known Key
Aysajan Abidin, Jan-Åke Larsson
Information-theoretically secure (ITS) authentication is needed in Quantum Key Distribution (QKD). In this paper, we study security of an ITS authentication scheme proposed by Wegman & Carter, in the case of partially known authentication key. This scheme uses a new authentication key in each authentication attempt, to select a hash function from an Almost Strongly Universal$_2$ hash function family. The partial knowledge of the attacker is measured as the trace distance between the authentication key distribution and the uniform distribution; this is the usual measure in QKD. We provide direct proofs of security of the scheme, when using partially known key, first in the information-theoretic setting and then in terms of witness indistinguishability as used in the Universal Composability (UC) framework. We find that if the authentication procedure has a failure probability $\epsilon$ and the authentication key has an $\epsilon'$ trace distance to the uniform, then under ITS, the adversary's success probability conditioned on an authentic message-tag pair is only bounded by $\epsilon+|\mathcal{T}|\epsilon'$, where $|\mathcal{T}|$ is the size of the set of tags. Furthermore, the trace distance between the authentication key distribution and the uniform increases to $|\mathcal{T}|\epsilon'$ after having seen an authentic message-tag pair. Despite this, we are able to prove directly that the authenticated channel is indistinguishable from an (ideal) authentic channel (the desired functionality), except with probability less than $\epsilon+\epsilon'$. This proves that the scheme is ($\epsilon+\epsilon'$)-UC-secure, without using the composability theorem.
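As a concrete reminder of the Wegman-Carter paradigm, the sketch below tags a message with a polynomial hash over a prime field, masked by a fresh one-time key. This is a toy stand-in for the Almost Strongly Universal$_2$ family analysed above; the modulus and key values are illustrative, not taken from the paper.

```python
P = 2**61 - 1   # a Mersenne prime as toy field modulus

def wc_tag(blocks, k, b):
    """Toy Wegman-Carter tag: a keyed polynomial hash over F_P (standing in
    for an ASU_2 family member selected by k), masked with a one-time key b.
    A fresh b is used for every authentication attempt, as in the scheme."""
    h = 0
    for m in blocks:            # Horner's rule for sum_i m_i * k^(n-i)
        h = (h * k + m) % P
    return (h * k + b) % P      # extra factor k keys every block; then mask

tag = wc_tag([2, 3], 7, 13)     # tag of a two-block message
assert tag == (2 * 7**2 + 3 * 7 + 13) % P
```

The one-time mask `b` is what makes a forged message-tag pair succeed only with probability governed by the hash family's collision bound, which is exactly the quantity the partial-key analysis above perturbs by the trace distance $\epsilon'$.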
Last updated:  2018-03-11
Deterministic Public-Key Encryption for Adaptively Chosen Plaintext Distributions
Ananth Raghunathan, Gil Segev, Salil Vadhan
Bellare, Boldyreva, and O'Neill (CRYPTO '07) initiated the study of deterministic public-key encryption as an alternative in scenarios where randomized encryption has inherent drawbacks. The resulting line of research has so far guaranteed security only for adversarially-chosen plaintext distributions that are independent of the public key used by the scheme. In most scenarios, however, it is typically not realistic to assume that adversaries do not take the public key into account when attacking a scheme. We show that it is possible to guarantee meaningful security even for plaintext distributions that depend on the public key. We extend the previously proposed notions of security, allowing adversaries to adaptively choose plaintext distributions {\em after} seeing the public key, in an interactive manner. The only restrictions we make are that: (1) plaintext distributions are unpredictable (as is essential in deterministic public-key encryption), and (2) the number of plaintext distributions from which each adversary is allowed to adaptively choose is upper bounded by $2^{p}$, where $p$ can be any predetermined polynomial in the security parameter and plaintext length. For example, with $p = 0$ we capture plaintext distributions that are independent of the public key, and with $p = O(s \log s)$ we capture, in particular, all plaintext distributions that are samplable by circuits of size $s$. Within our framework we present both constructions in the random-oracle model based on any public-key encryption scheme, and constructions in the standard model based on lossy trapdoor functions (thus, based on a variety of number-theoretic assumptions). Previously known constructions heavily relied on the independence between the plaintext distributions and the public key for the purposes of randomness extraction. 
In our setting, however, randomness extraction becomes significantly more challenging once the plaintext distributions and the public key are no longer independent. Our approach is inspired by research on randomness extraction from seed-dependent distributions. Underlying our approach is a new generalization of a method for such randomness extraction, originally introduced by Trevisan and Vadhan (FOCS '00) and Dodis (PhD Thesis, MIT, '00).
Last updated:  2013-10-08
Tamper Resilient Cryptography Without Self-Destruct
Ivan Damgaard, Sebastian Faust, Pratyay Mukherjee, Daniele Venturi
We initiate a general study of schemes resilient to both tampering and leakage attacks. Tampering attacks are powerful cryptanalytic attacks where an adversary can change the secret state and observe the effect of such changes at the output. Our contributions are outlined below: (1) We propose a general construction showing that any cryptographic primitive where the secret key can be chosen as a uniformly random string can be made secure against bounded tampering and leakage. This holds in a restricted model where the tampering functions must be chosen from a set of bounded size after the public parameters have been sampled. Our result covers pseudorandom functions, and many encryption and signature schemes. (2) We show that standard ID and signature schemes constructed from a large class of Sigma-protocols (including the Okamoto scheme, for instance) are secure even if the adversary can arbitrarily tamper with the prover's state a bounded number of times and/or obtain some bounded amount of leakage. Interestingly, for the Okamoto scheme we can allow also independent tampering with the public parameters. (3) We show a bounded tamper and leakage resilient CCA secure public key cryptosystem based on the DDH assumption. We first define a weaker CPA-like security notion that we can instantiate based on DDH, and then we give a general compiler that yields CCA-security with tamper and leakage resilience. This requires a public tamper-proof common reference string. (4) Finally, we explain how to boost bounded tampering and leakage resilience (as in 2. and 3. above) to continuous tampering and leakage resilience, in the so-called floppy model where each user has a personal floppy (containing leak- and tamper-free information) which can be used to refresh the secret key (note that if the key is not updated, continuous tamper resilience is known to be impossible). 
For the case of ID schemes, we also show that if the underlying protocol is secure in the bounded retrieval model, then our compiler remains secure, even if the adversary can tamper with the computation performed by the device. In some earlier work, the implementation of the tamper resilient primitive was assumed to be aware of the possibility of tampering, in that it would switch to a special mode and, e.g., self-destruct if tampering was detected. None of our results require this assumption.
Last updated:  2013-03-05
Analysis and Improvement of Lindell's UC-Secure Commitment Schemes
Olivier Blazy, Céline Chevalier, David Pointcheval, Damien Vergnaud
In 2011, Lindell proposed an efficient commitment scheme, with a non-interactive opening algorithm, in the Universal Composability (UC) framework. He recently acknowledged a bug in its security analysis for the adaptive case. We analyze the proof of the original paper and propose a simple patch of the scheme. More interestingly, we then modify it and present a more efficient commitment scheme secure in the UC framework, with the same level of security as Lindell's protocol: adaptive corruptions, with erasures. The security is proven in the standard model (with a Common Reference String) under the classical Decisional Diffie-Hellman assumption. Our proposal is the most efficient UC-secure commitment proposed to date (in terms of computational workload and communication complexity).
Last updated:  2013-03-05
Practical collision attack on 40-step RIPEMD-128
Gaoli Wang
RIPEMD-128 is an ISO/IEC standard cryptographic hash function proposed in 1996 by Dobbertin, Bosselaers and Preneel. There are two different and independent parallel lines called the $line1$ operation and the $line2$ operation, and each operation has 64 steps. The results of the two line operations are combined at the end of every application of the compression function. In this paper, we present collision differential characteristics for both the $line1$ operation and the $line2$ operation by choosing a proper message difference. By carefully applying the message modification technique, we improve the probabilities of the differential characteristics so that we can give a collision attack on 40-step RIPEMD-128 with a complexity of $2^{35}$ computations.
Last updated:  2013-03-05
Succinct Non-Interactive Zero Knowledge Arguments from Span Programs and Linear Error-Correcting Codes
Helger Lipmaa
Recently, Gennaro, Gentry, Parno and Raykova~\cite{eprint2012:GennaroGPR} proposed an efficient non-interactive zero knowledge argument for Circuit-SAT, based on non-standard notions like conscientious and quadratic span programs. We propose a new non-interactive zero knowledge argument, based on a simple combination of \emph{standard} span programs (that verify the correctness of every individual gate) and high-distance linear error-correcting codes (that check the consistency of wire assignments). We simplify all steps of the argument. As one of the corollaries, we design an (optimal) wire checker, based on systematic Reed-Solomon codes, of size $8 n$ and degree $4 n$, while the wire checker from~\cite{eprint2012:GennaroGPR} has size $24 n$ and degree $76 n$, where $n$ is the circuit size. Importantly, the new argument has constant verifier's computation.
Last updated:  2013-03-05
An Attack Against Fixed Value Discrete Logarithm Representations
Gergely Alpár, Jaap-Henk Hoepman, Wouter Lueks
Attribute-based credentials (ABCs) are an important building block of privacy-enhancing identity management. Since non-identifying attributes can easily be abused as the anonymity they provide hides the perpetrator, cryptographic mechanisms need to be introduced to make them revocable. However, most of these techniques are not efficient enough in practice. ABCs with practical revocation have recently been proposed by Hajny and Malina~\cite{Hajny-Malina-2012}. Their ABCs make use of different discrete logarithm representations of a fixed value. Although this technique is attractive as the verification of a particular issuer's credentials is easy, it has an intrinsic weakness. Colluding users can efficiently forge new credentials that are indistinguishable from legally issued ones.
Last updated:  2015-06-26
Speeding up Ate Pairing Computation in Affine Coordinates
Duc-Phong Le, Chik How Tan
At Pairing 2010, Lauter et al.'s analysis showed that Ate pairing computation in affine coordinates may be much faster than projective coordinates at high security levels. In this paper, we further investigate techniques to speed up Ate pairing computation in affine coordinates. On the one hand, we improve Ate pairing computation over elliptic curves admitting an even twist by describing a $4$-ary Miller algorithm in affine coordinates. This technique allows us to trade one multiplication in the full extension field and one field inversion for several multiplications in a smaller field. On the other hand, we investigate pairing computations over elliptic curves admitting a twist of degree $3$. We propose new fast explicit formulas for the Miller function that are comparable to formulas over even twisted curves. We further analyze pairing computation on cubic twisted curves by proposing efficient subfamilies of pairing-friendly elliptic curves with embedding degrees $k = 9$ and $15$. These subfamilies allow us not only to obtain a very simple form of curve, but also lead to efficient arithmetic and final exponentiation.
Last updated:  2013-05-13
Throughput Optimized Implementations of QUAD
Jason R. Hamlet, Robert W. Brocato
We present several software and hardware implementations of QUAD, a recently introduced stream cipher designed to be provably secure and practical to implement. The software implementations target both a personal computer and an ARM microprocessor. The hardware implementations target field programmable gate arrays. The purpose of our work was to first find the baseline performance of QUAD implementations, then to optimize our implementations for throughput. Our software implementations perform comparably to prior work. Our hardware implementations are the first known implementations to use random coefficients, in agreement with QUAD’s security argument, and achieve much higher throughput than prior implementations.
Last updated:  2013-02-27
On r-th Root Extraction Algorithm in F_q For q=lr^s+1 (mod r^(s+1)) with 0 < l < r and Small s
Namhun Koo, Gook Hwa Cho, Soonhak Kwon
We present an r-th root extraction algorithm over a finite field F_q. Our algorithm precomputes a primitive r^s-th root of unity where s is the largest positive integer satisfying r^s| q-1, and is applicable for the cases when s is small. The proposed algorithm requires one exponentiation for the r-th root computation and is favorably compared to the existing algorithms.
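For the simplest case $s = 1$ (i.e., $r$ divides $q-1$ exactly once), the one-exponentiation idea can be made explicit. The sketch below covers only this special case, not the paper's algorithm for general small $s$:

```python
def rth_root_s1(c, r, q):
    """r-th root of an r-th-power residue c in F_q for the case s = 1,
    i.e. r divides q - 1 exactly once.  Then c = x^r lies in the subgroup
    of order m = (q - 1) // r, where gcd(r, m) = 1, so y = c^(r^{-1} mod m)
    satisfies y^r = c * (c^m)^k = c -- a single exponentiation."""
    m = (q - 1) // r
    assert (q - 1) % r == 0 and m % r != 0   # s = 1 precondition
    t = pow(r, -1, m)                        # modular inverse (Python >= 3.8)
    return pow(c, t, q)

q, r = 103, 3                  # q - 1 = 102 = 2 * 3 * 17, so 3 || q - 1
c = pow(11, r, q)              # some r-th power residue
assert pow(rth_root_s1(c, r, q), r, q) == c
```

The interesting (and harder) cases handled by the paper are those with $s > 1$, where a precomputed primitive $r^s$-th root of unity is needed to correct the exponentiation result.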
Last updated:  2013-02-27
The Algorithm of AAES
Shiyong Zhang, Gongliang Chen, Lei Fan
The Advanced Encryption Standard (AES) was specified in 2001 by the National Institute of Standards and Technology. This paper expands the method and makes it possible to realize a new AES-like algorithm with a fixed block size of 256 bits, which is named the AAES algorithm. We use Verilog to simulate the arithmetic and Lattice Diamond to simulate the hardware properties and behavior. We conclude that the algorithm can easily be used in industry and that it is more robust and secure than AES, while remaining on the same order of magnitude in hardware implementation cost.
Last updated:  2013-05-06
A Conditional Proxy Broadcast Re-Encryption Scheme Supporting Timed-Release
Kaitai Liang, Qiong Huang, Roman Schlegel, Duncan S. Wong, Chunming Tang
To allow a delegator not only to delegate the keyword-controlled decryption rights of a broadcast encryption to a set of specified recipients, but also to control when the decryption rights will be delegated, in this paper, for the first time, we introduce a new notion called Timed-Release Conditional Proxy Broadcast Re-Encryption (TR-CPBRE). We also propose a concrete construction for TR-CPBRE which can be proven selective identity adaptive CCA secure under the (P, Q, f)-general decisional Diffie-Hellman exponent assumption, and chosen-time period chosen-ciphertext secure under the bilinear Diffie-Hellman assumption. When compared with the existing CPBRE and Timed-Release Proxy Re-Encryption (TR-PRE) schemes, our scheme achieves better efficiency, and enables the delegator to make a fine-grained delegation of decryption rights to multiple delegatees.
Last updated:  2013-02-27
Public Key Exchange Using Matrices Over Group Rings
Delaram Kahrobaei, Charalambos Koupparis, Vladimir Shpilrain
We offer a public key exchange protocol in the spirit of Diffie-Hellman, but we use (small) matrices over a group ring of a (small) symmetric group as the platform. This ``nested structure" of the platform makes computation very efficient for legitimate parties. We discuss security of this scheme by addressing the Decision Diffie-Hellman (DDH) and Computational Diffie-Hellman (CDH) problems for our platform.
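The protocol follows the Diffie-Hellman blueprint: both parties raise a public platform element $M$ to secret exponents, and the exchanged values commute because they are powers of the same matrix. The sketch below is a toy stand-in that uses small integer matrices modulo a prime rather than the paper's group-ring platform; all values are illustrative.

```python
def mat_mul(A, B, p):
    """Multiply square matrices over Z_p."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p
             for j in range(n)] for i in range(n)]

def mat_pow(M, e, p):
    """Square-and-multiply exponentiation of a matrix over Z_p."""
    n = len(M)
    R = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while e:
        if e & 1:
            R = mat_mul(R, M, p)
        M = mat_mul(M, M, p)
        e >>= 1
    return R

p = 101
M = [[1, 2], [3, 4]]          # public platform matrix (toy)
a, b = 23, 41                 # Alice's and Bob's secret exponents
A = mat_pow(M, a, p)          # Alice -> Bob
B = mat_pow(M, b, p)          # Bob -> Alice
assert mat_pow(B, a, p) == mat_pow(A, b, p)   # shared key M^(ab)
```

The shared key works for the same reason as in classic Diffie-Hellman: $(M^b)^a = M^{ab} = (M^a)^b$, since powers of a fixed matrix commute. The paper's security analysis concerns the corresponding DDH/CDH problems over the group-ring platform, not over $\mathbb{Z}_p$ matrices.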
Last updated:  2013-03-03
Compact Hardware Implementations of ChaCha, BLAKE, Threefish, and Skein on FPGA
Nuray At, Jean-Luc Beuchat, Eiji Okamoto, Ismail San, Teppei Yamazaki
The cryptographic hash functions BLAKE and Skein are built from the ChaCha stream cipher and the tweakable Threefish block cipher, respectively. Interestingly enough, they are based on the same arithmetic operations, and the same design philosophy allows one to design lightweight coprocessors for hashing and encryption. The key element of our approach is to take advantage of the parallelism of the algorithms to deeply pipeline our Arithmetic and Logic Units, and to avoid data dependencies by interleaving independent tasks. We show for instance that a fully autonomous implementation of BLAKE and ChaCha on a Xilinx Virtex-6 device occupies 144 slices and three memory blocks, and achieves competitive throughputs. In order to offer the same features, a coprocessor implementing Skein and Threefish requires a substantially higher slice count.
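The shared arithmetic referred to above is ChaCha's add-rotate-xor quarter-round (from Bernstein's ChaCha specification), which also underlies BLAKE's G function; it is exactly these operations that get deeply pipelined in hardware. A reference quarter-round in Python:

```python
MASK = 0xFFFFFFFF  # all ChaCha state words are 32-bit

def rotl32(x, n):
    """Rotate a 32-bit word left by n bits."""
    return ((x << n) | (x >> (32 - n))) & MASK

def quarter_round(a, b, c, d):
    """ChaCha quarter-round: four add-rotate-xor steps with rotation
    amounts 16, 12, 8, 7 -- the arithmetic core shared with BLAKE's G."""
    a = (a + b) & MASK; d = rotl32(d ^ a, 16)
    c = (c + d) & MASK; b = rotl32(b ^ c, 12)
    a = (a + b) & MASK; d = rotl32(d ^ a, 8)
    c = (c + d) & MASK; b = rotl32(b ^ c, 7)
    return a, b, c, d

out = quarter_round(0x11111111, 0x01020304, 0x9B8D6F43, 0x01234567)
assert all(v <= MASK for v in out)
```

Because the four steps form a strict dependency chain within one quarter-round but distinct quarter-rounds of a ChaCha round are independent, interleaving them keeps a deep adder/rotator pipeline busy, which is the design point the coprocessors above exploit.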
Last updated:  2013-08-20
PUF Modeling Attacks on Simulated and Silicon Data
Ulrich Rührmair, Jan Sölter, Frank Sehnke, Xiaolin Xu, Ahmed Mahmoud, Vera Stoyanova, Gideon Dror, Jürgen Schmidhuber, Wayne Burleson, Srinivas Devadas
We show in this paper how several proposed Strong Physical Unclonable Functions (PUFs) can be broken by numerical modeling attacks. Given a set of challenge-response pairs (CRPs) of a Strong PUF, our attacks construct a computer algorithm which behaves indistinguishably from the original PUF on almost all CRPs. This algorithm can subsequently impersonate the PUF, and can be cloned and distributed arbitrarily. This breaks the security of almost all applications and protocols that are based on the respective PUF. The PUFs we attacked successfully include standard Arbiter PUFs and Ring Oscillator PUFs of arbitrary sizes, and XOR Arbiter PUFs, Lightweight Secure PUFs, and Feed-Forward Arbiter PUFs of up to a given size and complexity. The attacks are based upon various machine learning techniques, including a specially tailored variant of Logistic Regression and Evolution Strategies. Our results were obtained on a large number of CRPs coming from numerical simulations, as well as four million CRPs collected from FPGAs and ASICs. The performance on silicon CRPs is very close to simulated CRPs, confirming a conjecture from earlier versions of this work. Our findings lead to new design requirements for secure electrical PUFs, and will be useful to PUF designers and attackers alike.
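The modeling attacks exploit the fact that an $n$-stage Arbiter PUF, in the standard additive delay model, is a linear threshold function of a parity-transformed challenge, so any linear learner can clone it from CRPs. Below is a minimal sketch with a simulated Arbiter PUF and a plain perceptron; the paper's attacks use Logistic Regression and Evolution Strategies, and all sizes here are illustrative.

```python
import random
random.seed(1)

n, train_sz, test_sz = 16, 1500, 300

# Simulated n-stage Arbiter PUF: response = sign(w . phi(challenge)),
# where w models the secret stage delays (additive delay model).
w = [random.gauss(0, 1) for _ in range(n + 1)]

def phi(ch):
    """Parity feature map that linearizes the Arbiter PUF delay model."""
    feats = []
    for i in range(n):
        prod = 1
        for bit in ch[i:]:
            prod *= 1 - 2 * bit        # {0,1} -> {+1,-1}; parity of suffix
        feats.append(prod)
    feats.append(1)                    # bias feature
    return feats

def sign(z):
    return 1 if z > 0 else -1

# The attacker's view: a list of challenge-response pairs.
challenges = [[random.randint(0, 1) for _ in range(n)]
              for _ in range(train_sz + test_sz)]
crps = []
for ch in challenges:
    f = phi(ch)
    crps.append((f, sign(sum(wi * fi for wi, fi in zip(w, f)))))

# Modeling attack: fit a perceptron to the training CRPs.
model = [0.0] * (n + 1)
for _ in range(15):                    # training epochs
    for f, r in crps[:train_sz]:
        if sign(sum(m * x for m, x in zip(model, f))) != r:
            model = [m + r * x for m, x in zip(model, f)]

accuracy = sum(sign(sum(m * x for m, x in zip(model, f))) == r
               for f, r in crps[train_sz:]) / test_sz
```

After training, `model` behaves like the PUF on unseen challenges, which is precisely the "clonable algorithm" threat described in the abstract; XOR'd and feed-forward variants break this linearity and therefore need the more powerful learners used in the paper.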
Last updated:  2013-04-01
Message Authentication Codes Secure against Additively Related-Key Attacks
Keita Xagawa
Message Authentication Code (MAC) is one of the most basic primitives in cryptography. After Biham (EUROCRYPT 1993) and Knudsen (AUSCRYPT 1992) proposed related-key attacks (RKAs), RKAs have damaged MACs' security. To relieve MACs of RKA distress, Bellare and Cash proposed pseudo-random functions (PRFs) secure against multiplicative RKAs (CRYPTO 2010). They also proposed PRFs secure against additive RKAs, but their reduction requires sub-exponential time. Since a PRF directly implies a Fixed-Input-Length (FIL) MAC, their PRFs result in MACs secure against multiplicative RKAs. In this paper, we propose Variable-Input-Length (VIL) MACs secure against additive RKAs, whose reductions are polynomial time in the security parameter. Our construction stems from the MACs from number-theoretic assumptions proposed by Dodis, Kiltz, Pietrzak, and Wichs (EUROCRYPT 2012) and the public-key encryption schemes secure against additive RKAs proposed by Wee (PKC 2012).
Last updated:  2013-02-27
Full Characterization of Functions that Imply Fair Coin Tossing and Ramifications to Fairness
Gilad Asharov, Yehuda Lindell, Tal Rabin
It is well known that it is impossible for two parties to toss a coin fairly (Cleve, STOC 1986). This result implies that it is impossible to securely compute with fairness any function that can be used to toss a coin fairly. In this paper, we focus on the class of deterministic Boolean functions with finite domain, and we ask for which functions in this class is it possible to information-theoretically toss an unbiased coin, given a protocol for securely computing the function with fairness. We provide a \emph{complete characterization} of the functions in this class that imply and do not imply fair coin tossing. This characterization extends our knowledge of which functions cannot be securely computed with fairness. In addition, it provides a focus as to which functions may potentially be securely computed with fairness, since a function that cannot be used to fairly toss a coin is not ruled out by the impossibility result of Cleve (which is the \emph{only} known impossibility result for fairness). In addition to the above, we draw corollaries to the feasibility of achieving fairness in two possible fail-stop models.
Last updated:  2018-09-14
Shorter Quasi-Adaptive NIZK Proofs for Linear Subspaces
Charanjit S. Jutla, Arnab Roy
We define a novel notion of quasi-adaptive non-interactive zero knowledge (NIZK) proofs for probability distributions on parametrized languages. It is quasi-adaptive in the sense that the common reference string (CRS) generator can generate the CRS depending on the language parameters. However, the simulation is required to be uniform, i.e., a single efficient simulator should work for the whole class of parametrized languages. For distributions on languages that are linear subspaces of vector spaces over bilinear groups, we give quasi-adaptive computationally sound NIZKs that are shorter and more efficient than Groth-Sahai NIZKs. For many cryptographic applications quasi-adaptive NIZKs suffice, and our constructions can lead to significant improvements in the standard model. Our construction can be based on any k-linear assumption, and in particular under the eXternal Diffie Hellman (XDH) assumption our proofs are even competitive with Random-Oracle based Sigma-protocol NIZK proofs. We also show that our system can be extended to include integer tags in the defining equations, where the tags are provided adaptively by the adversary. This leads to applicability of our system to many applications that use tags, e.g. applications using Cramer-Shoup projective hash proofs. Our techniques also lead to the shortest known (ciphertext) fully secure identity based encryption (IBE) scheme under standard static assumptions (SXDH). Further, we also get a short publicly-verifiable CCA2-secure IBE scheme.
Last updated:  2015-03-03
Unconditionally Secure and Universally Composable Commitments from Physical Assumptions
Ivan Damgard, Alessandra Scafuro
We present a constant-round unconditional black-box compiler that transforms any ideal (i.e., statistically-hiding and statistically-binding) straight-line extractable commitment scheme into an extractable and equivocal commitment scheme, thereby yielding UC-security [9]. We exemplify the usefulness of our compiler by providing two (constant-round) instantiations of ideal straight-line extractable commitment based on (malicious) PUFs [36] and stateless tamper-proof hardware tokens [26], therefore achieving the first unconditionally UC-secure commitment with malicious PUFs and stateless tokens, respectively. Our constructions are secure for adversaries creating arbitrarily malicious stateful PUFs/tokens. Previous results with malicious PUFs used either computational assumptions to achieve UC-secure commitments or were unconditionally secure but only in the indistinguishability sense [36]. Similarly, with stateless tokens, UC-secure commitments are known only under computational assumptions [13, 24, 15], while the (not UC) unconditional commitment scheme of [23] is secure only in a weaker model in which the adversary is not allowed to create stateful tokens. Besides allowing us to prove feasibility of unconditional UC-security with (malicious) PUFs and stateless tokens, our compiler can be instantiated with any ideal straight-line extractable commitment scheme, thus allowing the use of various setup assumptions which may better fit the application or the technology available.
Last updated:  2013-02-27
On the Arithmetic Complexity of Strassen-Like Matrix Multiplications
Murat Cenk, M. Anwar Hasan
The Strassen algorithm for multiplying $2 \times 2$ matrices requires seven multiplications and 18 additions. The recursive use of this algorithm for matrices of dimension $n$ yields a total arithmetic complexity of $(7n^{2.81}-6n^2)$ for $n=2^k$. Winograd showed that using seven multiplications for this kind of multiplication is optimal, so any algorithm for multiplying $2 \times 2$ matrices with seven multiplications is therefore called a Strassen-like algorithm. Winograd also discovered an additively optimal Strassen-like algorithm with 15 additions. This algorithm is called Winograd's variant, whose arithmetic complexity is $(6n^{2.81}-5n^2)$ for $n=2^k$ and $(3.73n^{2.81}-5n^2)$ for $n=8\cdot 2^k$, which is the best-known bound for Strassen-like multiplications. This paper proposes a method that reduces the complexity of Winograd's variant to $(5n^{2.81}+0.5n^{2.59}+2n^{2.32}-6.5n^2)$ for $n=2^k$. It is also shown that the total arithmetic complexity can be improved to $(3.55n^{2.81}+0.148n^{2.59}+1.02n^{2.32}-6.5n^2)$ for $n=8\cdot 2^k$, which, to the best of our knowledge, improves the best-known bound for a Strassen-like matrix multiplication algorithm.
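For reference, the classical seven-multiplication scheme on a $2 \times 2$ split looks as follows (this is Strassen's original form with 18 additions, not the Winograd variant or the improvement proposed in the paper). Entries may themselves be matrix blocks, which is what gives the recursive $O(n^{2.81})$ bound:

```python
def strassen_2x2(A, B):
    """Multiply 2x2 matrices with Strassen's 7 multiplications and
    18 additions/subtractions (entries may themselves be matrices)."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return ((m1 + m4 - m5 + m7, m3 + m5),
            (m2 + m4, m1 - m2 + m3 + m6))

assert strassen_2x2(((1, 2), (3, 4)), ((5, 6), (7, 8))) == ((19, 22), (43, 50))
```

The paper's savings come from restructuring the additions around such a seven-product kernel, since by Winograd's lower bound the multiplication count itself cannot drop below seven.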
Last updated:  2013-03-04
URDP: General Framework for Direct CCA2 Security from any Lattice-Based PKE Scheme
Roohallah Rastaghi
Designing an efficient lattice-based cryptosystem secure against adaptive chosen-ciphertext attack (IND-CCA2) is a challenging problem. To date, full CCA2-security of all proposed lattice-based PKE schemes has been achieved by using generic transformations based on either strongly unforgeable one-time signature schemes (SU-OT-SS), or a message authentication code (MAC) and a weak form of commitment. The drawback of these schemes is that encryption requires "separate encryption". Therefore, the resulting encryption scheme is not efficient enough to be used in practice, and it is inappropriate for many applications such as small ubiquitous computing devices with limited resources, e.g., smart cards, active RFID tags, wireless sensor networks and other embedded devices. In this work, for the first time, we introduce an efficient universal random data padding (URDP) scheme, and show how it can be used to construct a "direct" CCA2-secure encryption scheme from "any" worst-case hardness problem in (ideal) lattices in the standard model, resolving a problem that has remained open to date. This novel approach is a "black-box" construction and leads to the elimination of separate encryption, as it avoids using a general transformation from a CPA-secure scheme to a CCA2-secure one. IND-CCA2 security of this scheme can be tightly reduced in the standard model to the assumption that the underlying primitive is a one-way trapdoor function.
Last updated:  2013-02-27
Lossy Chains and Fractional Secret Sharing
Yuval Ishai, Eyal Kushilevitz, Omer Strulovich
Motivated by the goal of controlling the amount of work required to access a shared resource or to solve a cryptographic puzzle, we introduce and study the related notions of {\em lossy chains} and {\em fractional secret sharing}. Fractional secret sharing generalizes traditional secret sharing by allowing a fine-grained control over the amount of uncertainty about the secret. More concretely, a fractional secret sharing scheme realizes a fractional access structure $f:2^{[n]}\to [m]$ by guaranteeing that from the point of view of each set $T\subseteq [n]$ of parties, the secret is {\em uniformly} distributed over a set of $f(T)$ potential secrets. We show that every (monotone) fractional access structure can be realized. For {\em symmetric} structures, in which $f(T)$ depends only on the size of $T$, we give an efficient construction with share size $poly(n,\log m)$. Our construction of fractional secret sharing schemes is based on the new notion of {\em lossy chains} which may be of independent interest. A lossy chain is a Markov chain $(X_0,\ldots,X_n)$ which starts with a random secret $X_0$ and gradually loses information about it at a rate which is specified by a {\em loss function} $g$. Concretely, in every step $t$, the distribution of $X_0$ conditioned on the value of $X_t$ should always be uniformly distributed over a set of size $g(t)$. We show how to construct such lossy chains efficiently for any possible loss function $g$, and prove that our construction achieves an optimal asymptotic information rate.
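As a toy illustration of the lossy-chain notion (not the paper's general construction), take the geometric loss function $g(t)=2^t$ over an $n$-bit secret: dropping one low-order bit per step leaves $X_0$ uniformly distributed over exactly $2^t$ candidates given $X_t$. The function names below are ours:

```python
import random

def lossy_chain(n, x0=None):
    """Toy lossy chain for g(t) = 2^t: X_t drops the t low-order bits of X_0.
    Since X_0 is uniform over n-bit values, conditioning on X_t leaves X_0
    uniform over an interval of exactly 2^t candidates."""
    if x0 is None:
        x0 = random.randrange(2 ** n)
    return [x0 >> t for t in range(n + 1)]

def candidates(xt, t):
    """All values of X_0 consistent with observing X_t = xt at step t."""
    return range(xt << t, (xt + 1) << t)
```

For a fractional access structure one would hand a party at "level" $t$ the state $X_t$, so its uncertainty about the secret is exactly $g(t)$.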
Last updated:  2013-02-28
A Tutorial on White-box AES
James A. Muir
White-box cryptography concerns the design and analysis of implementations of cryptographic algorithms engineered to execute on untrusted platforms. Such implementations are said to operate in a \emph{white-box attack context}. This is an attack model where all details of the implementation are completely visible to an attacker: not only do they see input and output, they see every intermediate computation that happens along the way. The goal of a white-box attacker when targeting an implementation of a cipher is typically to extract the cryptographic key; thus, white-box implementations have been designed to thwart this goal (i.e., to make key extraction difficult/infeasible). The academic study of white-box cryptography was initiated in 2002 in the seminal work of Chow, Eisen, Johnson and van Oorschot (SAC 2002). Here, we review the first white-box AES implementation proposed by Chow et al. and give detailed information on how to construct it. We provide a number of diagrams that summarize the flow of data through the various look-up tables in the implementation, which helps clarify the overall design. We then briefly review the impressive 2004 cryptanalysis by Billet, Gilbert and Ech-Chatbi (SAC 2004). The BGE attack can be used to extract an AES key from Chow et al.'s original white-box AES implementation with a work factor of about $2^{30}$, and this fact has motivated subsequent work on improved AES implementations.
Last updated:  2013-07-06
On the Complexity of Broadcast Setup
Martin Hirt, Pavel Raykov
Byzantine broadcast is a distributed primitive that allows a specific party (called ``sender'') to consistently distribute a value $v$ among $n$ parties in the presence of potential misbehavior of up to $t$ of the parties. Broadcast requires that correct parties always agree on the same value and if the sender is correct, then the agreed value is $v$. Broadcast without a setup (i.e., from scratch) is achievable from point-to-point channels if and only if $t < n/3$. In case $t \ge n/3$ a trusted setup is required. The setup may be assumed to be given initially or generated by the parties in a setup phase. It is known that generating setup for protocols with cryptographic security is relatively simple and only consists of setting up a public-key infrastructure. However, generating setup for information-theoretically secure protocols is much more involved. In this paper we study the complexity of setup generation for information-theoretic protocols using point-to-point channels and temporarily available broadcast channels. We optimize the number of rounds in which the temporary broadcast channels are used while minimizing the number of bits broadcast with them. We give the first information-theoretically secure broadcast protocol tolerating $t < n/2$ that uses the temporary broadcast channels during only 1 round in the setup phase. Furthermore, only $O(n^3)$ bits need to be broadcast with the temporary broadcast channels during that round, independently of the security parameter employed. The broadcast protocol presented in this paper allows us to construct the first information-theoretically secure MPC protocol which uses a broadcast channel during only one round. Additionally, the presented broadcast protocol supports refreshing, which allows broadcasting an a priori unknown number of times given a fixed-size setup.
Last updated:  2013-02-28
On the Negative Effects of Trend Noise and Its Applications in Side-Channel Cryptanalysis
Yuchen Cao, Yongbin Zhou, Zhenmei Yu
Side-channel information leaked during the execution of cryptographic modules usually contains various noises. Normally, these noises have negative effects on the performance of side-channel attacks exploiting noisy leakages. Therefore, reducing noise in leakages is usually an effective approach to enhance the performance of side-channel attacks. However, most existing noise reduction methods treat all noises as a whole, instead of identifying and dealing with each of them individually. Motivated by this, this paper investigates the feasibility and implications of separating trend noise from all other noises in side-channel acquisitions and then dealing with it accordingly. Specifically, we discuss the effectiveness of applying the least squares method (LSM for short) to remove inherent trend noise in side-channel leakages, and also clarify the limited capability of existing noise reduction methods in dealing with trend noise. For this purpose, we perform a series of correlation power analysis attacks, as a case study, against a set of real power traces from an unprotected FPGA implementation of AES encryption, published in the second stage of the international DPA contest, which provides a public set of original power traces without any preprocessing. The experimental results firmly confirm the soundness and validity of our analysis and observations.
Last updated:  2013-11-29
Notions of Black-Box Reductions, Revisited
Paul Baecher, Chris Brzuska, Marc Fischlin
Reductions are the common technique to prove security of cryptographic constructions based on a primitive. They take an allegedly successful adversary against the construction and turn it into a successful adversary against the underlying primitive. To a large extent, these reductions are black-box in the sense that they consider the primitive and/or the adversary against the construction only via the input-output behavior, but do not depend on internals like the code of the primitive or of the adversary. Reingold, Trevisan, and Vadhan~(TCC, 2004) provided a widely adopted framework, called the RTV framework from hereon, to classify and relate different notions of black-box reductions. Having precise notions for such reductions is very important when it comes to black-box separations, where one shows that black-box reductions cannot exist. An impossibility result, which clearly specifies the type of reduction it rules out, enables us to identify the potential leverages to bypass the separation. We acknowledge this by extending the RTV framework in several respects using a more fine-grained approach. First, we capture a type of reduction---frequently ruled out by so-called meta-reductions---which escapes the RTV framework so far. Second, we consider notions that are ``almost black-box'', i.e., where the reduction receives additional information about the adversary, such as its success probability. Third, we distinguish explicitly between efficient and inefficient primitives and adversaries, allowing us to determine how relativizing reductions in the sense of Impagliazzo and Rudich (STOC, 1989) fit into the picture.
Last updated:  2014-05-21
Attacks and Comments on Several Recently Proposed Key Management Schemes
Niu Liu, Shaohua Tang, Lingling Xu
In this paper, we review three problematic key management (KM) schemes proposed recently: Kayam's scheme for groups with hierarchy [9], Piao's group KM scheme [13], and Purushothama's group KM schemes [15]. We point out the problems in each scheme. Kayam's scheme is not secure against collusion attacks. Piao's group KM scheme is insecure and relies on a flawed primitive: the hard problem it is based on is not actually hard. Purushothama's scheme has a redundant design that consumes substantial resources without improving its security level or dynamic efficiency. We also briefly analyze the underlying reasons why these problems emerge.
Last updated:  2013-09-18
Constant-round secure two-party computation from a linear number of oblivious transfer
Samuel Ranellucci, Alain Tapp
We construct a protocol for constant-round Two-Party Secure Function Evaluation in the standard model which improves on previous protocols in several ways. We are able to reduce the number of calls to Oblivious Transfer by a factor proportional to the security parameter. In addition to being more efficient than previous instantiations, our protocol requires only black-box calls to OT and Commitment. This is achieved by the use of a faulty variant of Cut-and-Choose OT. The concepts of Garbling Schemes, faulty Cut-and-Choose Oblivious Transfer and Privacy Amplification are combined using the Cut-and-Choose paradigm to obtain the final protocol.
Last updated:  2013-02-27
Learning with Rounding, Revisited: New Reduction, Properties and Applications
Joel Alwen, Stephan Krenn, Krzysztof Pietrzak, Daniel Wichs
The learning with rounding (LWR) problem, introduced by Banerjee, Peikert and Rosen [BPR12] at EUROCRYPT '12, is a variant of learning with errors (LWE), where one replaces random errors with deterministic rounding. The LWR problem was shown to be as hard as LWE for a setting of parameters where the modulus and modulus-to-error ratio are super-polynomial. In this work we resolve the main open problem of [BPR12] and give a new reduction that works for a larger range of parameters, allowing for a polynomial modulus and modulus-to-error ratio. In particular, a smaller modulus gives us greater efficiency, and a smaller modulus-to-error ratio gives us greater security, which now follows from the worst-case hardness of GapSVP with polynomial (rather than super-polynomial) approximation factors. As a tool in the reduction, we show that there is a ``lossy mode'' for the LWR problem, in which LWR samples only reveal partial information about the secret. This property gives us several interesting new applications, including a proof that LWR remains secure with weakly random secrets of sufficient min-entropy, and very simple new constructions of deterministic encryption, lossy trapdoor functions and reusable extractors. Our approach is inspired by a technique of Goldwasser et al. [GKPV10] from ICS '10, which implicitly showed the existence of a ``lossy mode'' for LWE. By refining this technique, we also improve on the parameters of that work to only requiring a polynomial (instead of super-polynomial) modulus and modulus-to-error ratio.
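The deterministic rounding that distinguishes LWR from LWE can be sketched in a few lines (toy parameters and function names are ours; in LWE the rounding would be replaced by adding a small random error):

```python
import random

def round_to_p(x, q, p):
    """Deterministic rounding Z_q -> Z_p: round((p/q) * x), reduced mod p."""
    return ((x * p + q // 2) // q) % p

def lwr_sample(s, q, p):
    """One LWR sample (a, b): a is uniform in Z_q^n and
    b = round_p(<a, s> mod q). No random error term is added."""
    a = [random.randrange(q) for _ in range(len(s))]
    return a, round_to_p(sum(x * y for x, y in zip(a, s)) % q, q, p)
```

The modulus-to-error ratio discussed above corresponds to $q/p$ here: each rounded value hides roughly $\log_2(q/p)$ bits of the inner product.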
Last updated:  2013-02-27
Biclique Cryptanalysis of the Full-Round KLEIN Block Cipher
Zahra Ahmadian, Mahmoud Salmasizadeh, Mohammad Reza Aref
In this paper we present a biclique attack on the newly proposed block cipher KLEIN-64. We first introduce some weaknesses in the diffusion layer and key schedule of this algorithm. We then exploit them to present a full-round attack on KLEIN-64 using an asymmetric biclique. The (worst-case) computational and data complexities of this attack are $2^{62.84}$ and $2^{39}$, respectively. A modified version of this attack is also presented, which is slightly faster at the expense of requiring more data.
Last updated:  2013-02-27
State convergence in bit-based stream ciphers
Sui-Guan Teo, Harry Bartlett, Ali Alhamdan, Leonie Simpson, Kenneth Koon-Ho Wong, Ed Dawson
Well-designed initialisation and keystream generation processes for stream ciphers should ensure that each key-IV pair generates a distinct keystream. In this paper, we analyse some ciphers where this does not happen due to state convergence occurring either during initialisation, keystream generation or both. We show how state convergence occurs in each case and identify two mechanisms which can cause state convergence.
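A toy, hypothetical state-update function (not one of the ciphers analysed in the paper) shows why convergence is harmful: once the update map is not one-to-one, distinct loaded states can collide, after which their keystreams coincide forever:

```python
def update(state):
    """Toy state-update function that is NOT one-to-one: the low-order bit
    of the current state is discarded, so any two states differing only in
    that bit converge to the same successor state."""
    return state >> 1

def keystream(state, nbits):
    """Output the low bit of the state after each update (toy keystream)."""
    out = []
    for _ in range(nbits):
        state = update(state)
        out.append(state & 1)
    return out
```

Here the two distinct initial states `0b1011` and `0b1010` produce identical keystreams, which is exactly the key-IV distinctness failure described above.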
Last updated:  2013-07-24
A new index calculus algorithm with complexity $L(1/4+o(1))$ in very small characteristic
Antoine Joux
In this paper, we describe a new algorithm for discrete logarithms in small characteristic. This algorithm is based on index calculus and includes two new contributions. The first is a new method for generating multiplicative relations among elements of a small smoothness basis. The second is a new descent strategy that allows us to express the logarithm of an arbitrary finite field element in terms of the logarithm of elements from the smoothness basis. For a small characteristic finite field of size $Q=p^n$, this algorithm achieves heuristic complexity $L_Q(1/4+o(1)).$ For technical reasons, unless $n$ is already a composite with factors of the right size, this is done by embedding $\mathbb{F}_Q$ in a small extension $\mathbb{F}_{Q^e}$ with $e\leq 2\lceil \log_p n \rceil$.
Last updated:  2014-10-22
On-the-Fly Multiparty Computation on the Cloud via Multikey Fully Homomorphic Encryption
Adriana Lopez-Alt, Eran Tromer, Vinod Vaikuntanathan
We propose a new notion of secure multiparty computation aided by a computationally-powerful but untrusted "cloud" server. In this notion that we call on-the-fly multiparty computation (MPC), the cloud can non-interactively perform arbitrary, dynamically chosen computations on data belonging to arbitrary sets of users chosen on-the-fly. All users' input data and intermediate results are protected from snooping by the cloud as well as other users. This extends the standard notion of fully homomorphic encryption (FHE), where users can only enlist the cloud's help in evaluating functions on their own encrypted data. In on-the-fly MPC, each user is involved only when initially uploading his (encrypted) data to the cloud, and in a final output decryption phase when outputs are revealed; the complexity of both is independent of the function being computed and the total number of users in the system. When users upload their data, they need not decide in advance which function will be computed, nor who they will compute with; they need only retroactively approve the eventually-chosen functions and on whose data the functions were evaluated. This notion is qualitatively the best possible in minimizing interaction, since the users' interaction in the decryption stage is inevitable: we show that removing it would imply generic program obfuscation and is thus impossible. Our contributions are two-fold: 1. We show how on-the-fly MPC can be achieved using a new type of encryption scheme that we call multikey FHE, which is capable of operating on inputs encrypted under multiple, unrelated keys. A ciphertext resulting from a multikey evaluation can be jointly decrypted using the secret keys of all the users involved in the computation. 2. We construct a multikey FHE scheme based on NTRU, a very efficient public-key encryption scheme proposed in the 1990s. It was previously not known how to make NTRU fully homomorphic even for a single party.
We view the construction of (multikey) FHE from NTRU encryption as a main contribution of independent interest. Although the transformation to a fully homomorphic system deteriorates the efficiency of NTRU somewhat, we believe that this system is a leading candidate for a practical FHE scheme.
Last updated:  2013-02-20
On the security of a certificateless aggregate signature scheme
Lin Cheng, Qiaoyan Wen, Zhengping Jin, Hua Zhang, Liming Zhou
Aggregate signature can combine $n$ signatures on $n$ messages from $n$ users into a single short signature, and the resulting signature can convince the verifier that the $n$ users indeed signed the $n$ corresponding messages. This feature makes aggregate signatures very useful, especially in environments with low-bandwidth communication, low storage and limited computational power, since it greatly reduces the total signature length and verification cost. Recently, Xiong et al. presented an efficient certificateless aggregate signature scheme. They proved that their scheme is secure in a strengthened security model, where the “malicious-but-passive” KGC attack is considered. In this paper, we show that Xiong et al.’s certificateless aggregate signature scheme is not secure even in a weaker security model, the “honest-but-curious” KGC attack model.
Last updated:  2013-03-11
Man-in-the-Middle Secure Authentication Schemes from LPN and Weak PRFs
Vadim Lyubashevsky, Daniel Masny
We show how to construct, from any weak pseudorandom function, a 3-round symmetric-key authentication protocol that is secure against man-in-the-middle attacks. The construction is very efficient, requiring both the secret key and communication size to be only 3n bits long. Our techniques also extend to certain classes of randomized weak-PRFs, chiefly among which are those based on the classical LPN problem and its more efficient variants such as Toeplitz-LPN and Ring-LPN. Building a man-in-the-middle secure authentication scheme from any weak-PRF resolves a problem left open by Dodis et al. (Eurocrypt 2012), while building a man-in-the-middle secure scheme based on any variant of the LPN problem solves the main open question in a long line of research aimed at constructing a practical light-weight authentication scheme based on learning problems, which began with the work of Hopper and Blum (Asiacrypt 2001).
Last updated:  2013-02-22
Systematic Construction and Comprehensive Evaluation of Kolmogorov-Smirnov Test based Side-Channel Distinguishers
Hui Zhao, Yongbin Zhou, Francois-Xavier Standaert, Hailong Zhang
Generic side-channel distinguishers aim at revealing the correct key embedded in cryptographic modules even when few assumptions can be made about their physical leakages. In this context, Kolmogorov-Smirnov Analysis (KSA) and Partial Kolmogorov-Smirnov analysis (PKS) were proposed. Although both KSA and PKS are based on the Kolmogorov-Smirnov (KS) test, they differ considerably in their construction strategies. Inspired by this, we construct nine new variants by combining their strategies in a systematic way. Furthermore, we explore the effectiveness and efficiency of all twelve KS-test-based distinguishers under various simulated scenarios in a univariate setting within a unified comparison framework, and also investigate how these distinguishers behave in practical scenarios. For these purposes, we perform a series of attacks against both simulated and real traces. Evaluation metrics such as Success Rate (SR) and Guessing Entropy (GE) are used to measure the efficiency of key recovery attacks in our evaluation. Our experimental results not only show how to choose the most suitable KS-test-based distinguisher in a particular scenario, but also clarify the practical meaning of all these distinguishers in practice.
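At the core of every KS-test-based distinguisher is the two-sample KS statistic, the maximum distance between two empirical CDFs. A from-scratch sketch (in a real attack the two samples would be leakage traces partitioned according to the value predicted under a key guess):

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    |F_a(x) - F_b(x)| between the two empirical CDFs F_a and F_b."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sorted_sample, x):
        # fraction of observations <= x
        return bisect.bisect_right(sorted_sample, x) / len(sorted_sample)

    # the maximum is attained at one of the observed points
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in sorted(set(a) | set(b)))
```

A distinguisher would rank key candidates by how large (or small, depending on the construction strategy) this statistic is across the partitions induced by each guess.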
Last updated:  2013-03-20
Functional Encryption Supporting Recursive Languages
Somindu C. Ramanna, Palash Sarkar
We provide a construction for functional encryption over the set of recursive languages. In this scheme, a secret key $sk_{\mathcal{M}}$ encodes a halting double-stack deterministic pushdown automaton (2DPDA) $\mathcal{M}$ that accepts by final state. The encryption algorithm takes a message $m$ and a string $w$ as input and outputs a ciphertext $c$. A user possessing $sk_{\mathcal{M}}$ can decrypt $c$ only if $\mathcal{M}$ accepts $w$. Halting 2DPDAs can simulate halting deterministic Turing machines and hence our construction essentially covers all recursive languages. The construction is built upon Waters' bilinear pairing-based functional encryption scheme over regular languages. The main technical novelty is in handling stack contents and $\lambda$-transitions (i.e., transitions that do not advance the input pointer) of the automata. This is reflected both in the construction and the security arguments. The scheme is shown to be selectively secure based on the decision $\ell$-expanded bilinear Diffie-Hellman exponent assumption introduced by Waters.
Last updated:  2013-02-20
Filtered nonlinear cryptanalysis of reduced-round Serpent, and the Wrong-Key Randomization Hypothesis.
James McLaughlin, John A. Clark
We present a deterministic algorithm to find nonlinear S-box approximations, and a new nonlinear cryptanalytic technique; the “filtered” nonlinear attack, which achieves the lowest data complexity of any known-plaintext attack on reduced-round Serpent so far. We demonstrate that the Wrong-Key Randomization Hypothesis is not entirely valid for attacks on reduced-round Serpent which rely on linear cryptanalysis or a variant thereof, and survey the effects of this on existing attacks (including existing nonlinear attacks) on 11 and 12-round Serpent.
Last updated:  2013-06-13
Secure Signatures and Chosen Ciphertext Security in a Quantum Computing World
Dan Boneh, Mark Zhandry
We initiate the study of quantum-secure digital signatures and quantum chosen ciphertext security. In the case of signatures, we enhance the standard chosen message query model by allowing the adversary to issue quantum chosen message queries: given a superposition of messages, the adversary receives a superposition of signatures on those messages. Similarly, for encryption, we allow the adversary to issue quantum chosen ciphertext queries: given a superposition of ciphertexts, the adversary receives a superposition of their decryptions. These adversaries model a natural ubiquitous quantum computing environment where end-users sign messages and decrypt ciphertexts on a personal quantum computer. We construct classical systems that remain secure when exposed to such quantum queries. For signatures, we construct two compilers that convert classically secure signatures into signatures secure in the quantum setting and apply these compilers to existing post-quantum signatures. We also show that standard constructions such as Lamport one-time signatures and Merkle signatures remain secure under quantum chosen message attacks, thus giving signatures whose quantum security is based on generic assumptions. For encryption, we define security under quantum chosen ciphertext attacks and present both public-key and symmetric-key constructions.
Last updated:  2013-02-20
Square Root Algorithm in F_q for q=2^s+1 (mod 2^(s+1))
Namhun Koo, Gook Hwa Cho, Soonhak Kwon
We present a square root algorithm in F_q which generalizes Atkin's square root algorithm for q=5 (mod 8) and Kong et al.'s algorithm for q=9 (mod 16). Our algorithm precomputes a primitive 2^s-th root of unity, where s is the largest positive integer satisfying 2^s | q-1, and is applicable when s is small. The proposed algorithm requires one exponentiation for square root computation and compares favorably with the algorithms of Atkin, Muller and Kong et al.
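For reference, the base case generalized here, Atkin's algorithm for primes q=5 (mod 8), takes a single exponentiation (a sketch assuming a is a quadratic residue modulo the prime q):

```python
def atkin_sqrt(a, q):
    """Atkin's square root for a prime q = 5 (mod 8), using one exponentiation:
    b = (2a)^((q-5)/8), then i = 2*a*b^2 satisfies i^2 = -1 (mod q),
    and x = a*b*(i-1) satisfies x^2 = a (mod q) when a is a quadratic residue."""
    assert q % 8 == 5
    b = pow(2 * a, (q - 5) // 8, q)
    i = (2 * a * b * b) % q
    return (a * b * (i - 1)) % q
```

For example, with q = 13 and a = 3 this returns a value x with x^2 = 3 (mod 13).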
Last updated:  2014-01-30
Efficient Private File Retrieval by Combining ORAM and PIR
Travis Mayberry, Erik-Oliver Blass, Agnes Hui Chan
Recent research results on tree-based Oblivious RAM by Shi et al. [15] obtain communication complexity of O(l · log^3(N)) in the worst case for an N-capacity storage with block size l. The individual nodes in the tree, however, are constructed using traditional ORAMs, which have worst-case communication complexity linear in their capacity and block size. PIR protocols are able to provide better worst-case bounds (decoupling capacity from block size), but have traditionally been less practical than ORAM because they require O(N) computational complexity on the server. This paper presents Path-PIR, a hybrid ORAM construction using techniques from PIR that overcomes the individual weaknesses of each. Path-PIR significantly reduces communication complexity when the block size of the ORAM is large. Compared to existing work, this leads to smaller data transfer costs by orders of magnitude for practically sized databases and achieves worst-case communication complexity of O(l · log^2(N)) for large block sizes. Additionally, the typically high computational cost of PIR is negated by the tree structure of the ORAM, which requires only a small fraction of the database to be operated on for each query. We also investigate the concept of an ORAM’s latency, the amount of communication required before users receive the result of their query. We show that Path-PIR achieves lower latency than any existing scheme, only about four times the block size. Using Amazon EC2 as an example, we demonstrate that even with the additional cost of PIR computation, Path-PIR provides significant monetary savings compared to related work.
Last updated:  2013-12-02
Between a Rock and a Hard Place: Interpolating Between MPC and FHE
Ashish Choudhury, Jake Loftus, Emmanuela Orsini, Arpita Patra, Nigel P. Smart
We present a computationally secure MPC protocol for threshold adversaries which is parametrized by a value L. When L=2 we obtain a classical form of MPC protocol in which interaction is required for multiplications; as L increases, interaction is reduced, in that interaction is required only after computing a higher-degree function. When L approaches infinity one obtains the FHE-based protocol of Gentry, which requires no interaction. Thus one can trade communication for computation in a simple way. Our protocol is based on an interactive protocol for ``bootstrapping'' a somewhat homomorphic encryption scheme. The key contribution is that the presented protocol is highly communication-efficient, enabling us to obtain reduced communication compared to traditional MPC protocols for relatively small values of L.
Last updated:  2013-03-04
Security of Quantum-Readout PUFs against quadrature based challenge estimation attacks
Uncategorized
Boris Skoric, Allard P. Mosk, Pepijn W. H. Pinkse
Show abstract
Uncategorized
The concept of quantum-secure readout of Physical Unclonable Functions (PUFs) has recently been realized experimentally in an optical PUF system. We analyze the security of this system under the strongest type of classical attack: the challenge estimation attack. The adversary performs a measurement on the challenge quantum state in order to learn as much about it as he can. Using this knowledge he then tries to reconstruct the challenge and to emulate the PUF. We consider quadrature measurements, which are the most informative practical measurements known to us. We prove that even under this attack the expected number of photons detected in the verification mechanism is approximately a factor $S+1$ too low; here $S$ is the Quantum Security Parameter, defined as the number of modes in the optical system divided by the number of photons in the challenge. The photon count allows for a reliable distinction between an authentic PUF and a challenge estimation attack.
Last updated:  2013-09-04
A Security Framework for Analysis and Design of Software Attestation
Frederik Armknecht, Ahmad-Reza Sadeghi, Steffen Schulz, Christian Wachsmann
Software attestation has become a popular and challenging research topic at many established security conferences with an expected strong impact in practice. It aims at verifying the software integrity of (typically) resource-constrained embedded devices. However, for practical reasons, software attestation cannot rely on stored cryptographic secrets or dedicated trusted hardware. Instead, it exploits side-channel information, such as the time that the underlying device needs for a specific computation. As traditional cryptographic solutions and arguments are not applicable, novel approaches for the design and analysis are necessary. This is certainly one of the main reasons why the security goals, properties and underlying assumptions of existing software attestation schemes have been only vaguely discussed so far, limiting the confidence in their security claims. Thus, putting software attestation on a solid ground and having a founded approach for designing secure software attestation schemes is still an important open problem. We provide the first steps towards closing this gap. Our first contribution is a security framework that formally captures security goals, attacker models, and various system and design parameters. Moreover, we present a generic software attestation scheme that covers most existing schemes in the literature. Finally, we analyze its security within our framework, yielding sufficient conditions for provably secure software attestation schemes. We expect that such a consolidating work allows for a meaningful security analysis of existing schemes and supports the design of arguably secure software attestation schemes and will inspire new research in this area.
Last updated:  2015-11-27
Secret Sharing, Rank Inequalities, and Information Inequalities
Sebastia Martin, Carles Padro, An Yang
Beimel and Orlov proved that all information inequalities on four or five variables, together with all information inequalities on more than five variables that are known to date, provide lower bounds on the size of the shares in secret sharing schemes that are at most linear in the number of participants. We present here two further negative results about the power of information inequalities in the search for lower bounds in secret sharing. First, we prove that all information inequalities on a bounded number of variables can only provide lower bounds that are polynomial in the number of participants. Second, we prove that the rank inequalities derived from the existence of two common informations can provide only lower bounds that are at most cubic in the number of participants.