All papers (Page 220 of 21962 results)

Last updated:  1999-03-04
Lattice Based Cryptography: A Global Improvement
Daniele Micciancio
We describe a general technique to simplify as well as to improve several lattice-based cryptographic protocols. The technique is rather straightforward, is easily applied to the protocols, and gives both a simpler analysis and better performance than the original protocols. The improvement is global: the modified protocols are simpler, faster, require less storage, use less bandwidth and need fewer random bits than the originals. Moreover, the improvement is achieved without any loss in security: we formally prove that the modified protocols are at least as secure as the original ones. In fact, the modified protocols might even be more secure as the adversary gets less information. We exemplify our technique on the Goldreich-Goldwasser zero-knowledge proof systems for lattice problems and the GGH public key cryptosystem.
Last updated:  1999-02-14
Public-key cryptography and password protocols
Shai Halevi, Hugo Krawczyk
We study protocols for strong authentication and key exchange in asymmetric scenarios where the authentication server possesses a pair of private and public keys while the client has only a weak human-memorizable password as its authentication key. We present and analyze several simple password protocols in this scenario, and show that the security of these protocols can be formally proven based on standard cryptographic assumptions. Remarkably, our analysis shows optimal resistance to off-line password guessing attacks under the choice of suitable public key encryption functions. In addition to user authentication, we enhance our protocols to provide two-way authentication, authenticated key exchange, defense against server's compromise, and user anonymity. We complement these results with a proof that public key techniques are unavoidable for password protocols that resist off-line guessing attacks. As a further contribution, we introduce the notion of public passwords that enables the use of the above protocols in situations where the client's machine does not have the means to validate the server's public key. Public passwords serve as "hand-held certificates" that the user can carry without the need for special computing devices.
Last updated:  1999-02-10
An error in the mixed adversary protocol by Fitzi, Hirt and Maurer
Ivan Damgard
We point out an error in the multiparty computation protocol for mixed adversaries and zero error from the Crypto 98 paper by Fitzi, Hirt and Maurer. We show that the protocol only works under a stronger requirement on the adversary than the one claimed. Hence the bound on the adversary's corruption capability given there is not tight. Subsequent work has shown, however, a new bound which is indeed tight.
Last updated:  1999-02-08
Chinese Remaindering with Errors
Oded Goldreich, Dana Ron, Madhu Sudan
The Chinese Remainder Theorem states that a positive integer m is uniquely specified by its remainders modulo k relatively prime integers p_1,...,p_k, provided m < \prod_{i=1}^k p_i. Thus the residues of m modulo relatively prime integers p_1 < p_2 < ... < p_n form a redundant representation of m if m <= \prod_{i=1}^k p_i and k < n. This suggests a number-theoretic construction of an ``error-correcting code'' that has been implicitly considered often in the past. In this paper we provide a new algorithmic tool to go with this error-correcting code: namely, a polynomial-time algorithm for error-correction. Specifically, given n residues r_1,...,r_n and an agreement parameter t, we find a list of all integers m < \prod_{i=1}^k p_i such that (m mod p_i) = r_i for at least t values of i in {1,...,n}, provided t = Omega(sqrt{kn (log p_n)/(log p_1)}). We also give a simpler algorithm to decode from a smaller number of errors, i.e., when t > n - (n-k)(log p_1)/(log p_1 + \log p_n). In such a case there is a unique integer which has such agreement with the sequence of residues. One consequence of our result is a strengthening of the relationship between the average-case complexity of computing the permanent and its worst-case complexity. Specifically we show that if a polynomial-time algorithm is able to guess the permanent of a random n x n matrix on 2n-bit integers modulo a random n-bit prime with inverse polynomial success rate, then #P=BPP. Previous results of this nature typically worked over a fixed prime modulus or assumed very small (though non-negligible) error probability (as opposed to small but non-negligible success probability).
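For readers who want to see the redundant-residue encoding in action, here is a toy brute-force decoder; the primes, the corrupted position, and the exhaustive subset search are illustrative choices only and do not reflect the paper's polynomial-time list-decoding algorithm.

```python
# Toy illustration of the "CRT code" above: encode m as its residues modulo
# several pairwise coprime moduli, corrupt one residue, and recover m by
# brute force over k-subsets. The paper's contribution is a polynomial-time
# decoder; this exhaustive search only makes the agreement condition concrete.
from math import prod
from itertools import combinations

def crt(residues, moduli):
    """Reconstruct x mod prod(moduli) from its residues (moduli pairwise coprime)."""
    M = prod(moduli)
    x = 0
    for r, p in zip(residues, moduli):
        Mp = M // p
        x += r * Mp * pow(Mp, -1, p)
    return x % M

primes = [7, 11, 13, 17, 19]              # p_1 < ... < p_n with n = 5
k = 3                                     # messages satisfy m < 7*11*13 = 1001
m = 577
received = [m % p for p in primes]
received[1] = 3                           # one corrupted residue

t = 4                                     # required agreement
candidates = set()
for idx in combinations(range(len(primes)), k):
    cand = crt([received[i] for i in idx], [primes[i] for i in idx])
    if cand < prod(primes[:k]):
        agreement = sum(cand % p == r for p, r in zip(primes, received))
        if agreement >= t:
            candidates.add(cand)
print(candidates)                         # {577}
```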
Last updated:  1999-10-11
Signature Schemes Based on the Strong RSA Assumption
Ronald Cramer, Victor Shoup
We describe and analyze a new digital signature scheme. The new scheme is quite efficient, does not require the signer to maintain any state, and can be proven secure against adaptive chosen message attack under a reasonable intractability assumption, the so-called Strong RSA Assumption. Moreover, a hash function can be incorporated into the scheme in such a way that it is also secure in the random oracle model under the standard RSA Assumption.
Last updated:  1998-12-24
Comparing Entropies in Statistical Zero-Knowledge with Applications to the Structure of SZK
Oded Goldreich, Salil Vadhan
We consider the following (promise) problem, denoted ED (for Entropy Difference): The input is a pair of circuits, and YES instances (resp., NO instances) are such pairs in which the first (resp., second) circuit generates a distribution with noticeably higher entropy. On the one hand we show that any language having an (honest-verifier) statistical zero-knowledge proof is Karp-reducible to ED. On the other hand, we present a public-coin (honest-verifier) statistical zero-knowledge proof for ED. Thus, we obtain an alternative proof of Okamoto's result by which HVSZK (i.e., Honest-Verifier Statistical Zero-Knowledge) equals public-coin HVSZK. The new proof is much simpler than the original one. The above also yields a trivial proof that HVSZK is closed under complementation (since ED easily reduces to its complement). Among the new results obtained is an equivalence of a weak notion of statistical zero-knowledge to the standard one.
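One common way to formalize the promise gap described above is the following (the additive constant 1 is a conventional normalization, not taken from this abstract):

```latex
\[
  \mathrm{ED}_{\mathrm{YES}} = \{(C_1,C_2) : \mathrm{H}(C_1) \ge \mathrm{H}(C_2) + 1\},
  \qquad
  \mathrm{ED}_{\mathrm{NO}}  = \{(C_1,C_2) : \mathrm{H}(C_2) \ge \mathrm{H}(C_1) + 1\},
\]
where $\mathrm{H}(C)$ denotes the Shannon entropy of the output distribution of circuit $C$ on a uniformly random input.
```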
Last updated:  1998-12-10
Secure Distributed Storage and Retrieval
Juan A. Garay, Rosario Gennaro, Charanjit Jutla, Tal Rabin
In his well-known Information Dispersal Algorithm paper, Rabin showed a way to distribute information in n pieces among n servers in such a way that recovery of the information is possible in the presence of up to t inactive servers. An enhanced mechanism to enable construction in the presence of malicious faults, which can intentionally modify their pieces of the information, was later presented by Krawczyk. Yet, these methods assume that the malicious faults occur only at reconstruction time.

In this paper we address the more general problem of secure storage and retrieval of information (SSRI), and guarantee that the process of storing the information is also correct even when some of the servers fail. Our protocols achieve this while maintaining the (asymptotical) space optimality of the above methods.

We also consider SSRI with the added requirement of confidentiality, by which no party except for the rightful owner of the information is able to learn anything about it. This is achieved through novel applications of cryptographic techniques, such as the distributed generation of receipts, distributed key management via threshold cryptography, and ``blinding.''

An interesting byproduct of our scheme is the construction of a secret sharing scheme with shorter share size in the amortized sense. An immediate practical application of our work is a system for the secure deposit of sensitive data. We also extend SSRI to a ``proactive'' setting, where an adversary may corrupt all the servers during the lifetime of the system, but only a fraction during any given time interval.
Last updated:  1999-02-01
The Disparity between Work and Entropy in Cryptology
John Pliam
A brief theory of work is developed. In it, the work-factor and guesswork of a random variable are linked to intuitive notions of time complexity in a brute-force attack. Bounds are given for a specific work-factor called the minimum majority. Tight bounds are given for the guesswork in terms of variation distance. Differences between work-factor, guesswork and the entropy of a random variable are pointed out, calling into question a common misconception about entropy indicating work.
Last updated:  1998-08-31
Security amplification by composition: The case of doubly-iterated, ideal ciphers
William Aiello, Mihir Bellare, Giovanni Di Crescenzo, Ramarathnam Venkatesan
We investigate, in the Shannon model, the security of constructions corresponding to double and (two-key) triple DES. That is, we consider F_{k1}(F_{k2}(.)) and F_{k1}(F_{k2}^{-1}(F_{k1}(.))) with the component functions being ideal ciphers. This models the resistance of these constructions to ``generic'' attacks like meet in the middle attacks. We obtain the first proof that composition actually increases the security in some meaningful sense. We compute a bound on the probability of breaking the double cipher as a function of the number of computations of the base cipher made, and the number of examples of the composed cipher seen, and show that the success probability is the square of that for a single key cipher. The same bound holds for the two-key triple cipher. The first bound is tight and shows that meet in the middle is the best possible generic attack against the double cipher.
Last updated:  1998-08-12
Insecurity of Quantum Computations
Hoi-Kwong Lo
It had been widely claimed that quantum mechanics can protect private information during public decision making in, for example, the so-called two-party secure computation. If this were the case, quantum smart-cards could prevent fake teller machines from learning the PIN (Personal Identification Number) from the customers' input. Although such optimism has been challenged by the recent surprising discovery of the insecurity of the so-called quantum bit commitment, the security of quantum two-party computation itself remains unaddressed. Here I answer this question directly by showing that all *one-sided* two-party computations (which allow only one of the two parties to learn the result) are necessarily insecure. As corollaries to my results, quantum one-way oblivious password identification and the so-called quantum one-out-of-two oblivious transfer are impossible. I also construct a class of functions that cannot be computed securely in any *two-sided* two-party computation. Nevertheless, quantum cryptography remains useful in key distribution and can still provide partial security in ``quantum money'' proposed by Wiesner.
Last updated:  1998-06-17
Relations among Notions of Security for Public-Key Encryption Schemes
Mihir Bellare, Anand Desai, David Pointcheval, Phillip Rogaway
We compare the relative strengths of popular notions of security for public key encryption schemes. We consider the goals of indistinguishability and non-malleability, each under chosen plaintext attack and two kinds of chosen ciphertext attack. For each of the resulting pairs of definitions we prove either an implication (every scheme meeting one notion must meet the other) or a separation (there is a scheme meeting one notion but not the other, assuming the first notion can be met at all). We similarly treat plaintext awareness, a notion of security in the random oracle model. An additional contribution of this paper is a new definition of non-malleability which we believe is simpler than the previous one.
Last updated:  1998-06-06
Almost All Discrete Log Bits Are Simultaneously Secure
Claus P. Schnorr
Let G be a finite cyclic group with generator \alpha and with an encoding so that multiplication is computable in polynomial time. We study the security of bits of the discrete log x when given exp_\alpha(x), assuming that the exponentiation function exp_\alpha(x) = \alpha^x is one-way. We reduce the general problem to the case that G has odd order q. If G has odd order q the security of the least-significant bits of x and of the most significant bits of the rational number x/q \in [0,1) follows from the work of Peralta [P85] and Long and Wigderson [LW88]. We generalize these bits and study the security of consecutive shift bits lsb(2^{-i}x mod q) for i=k+1,...,k+j. When we restrict exp_\alpha to arguments x such that some sequence of j consecutive shift bits of x is constant (i.e., not depending on x) we call it a 2^{-j}-fraction of exp_\alpha. For groups of odd group order q we show that every two 2^{-j}-fractions of exp_\alpha are equally one-way by a polynomial time transformation: Either they are all one-way or none of them. Our key theorem shows that arbitrary j consecutive shift bits of x are simultaneously secure when given exp_\alpha(x) iff the 2^{-j}-fractions of exp_\alpha are one-way. In particular this applies to the j least-significant bits of x and to the j most-significant bits of x/q \in [0,1). For one-way exp_\alpha the individual bits of x are secure when given exp_\alpha(x) by the method of Hastad, Naslund [HN98]. For groups of even order 2^s q we show that the j least-significant bits of \lfloor x/2^s \rfloor, as well as the j most-significant bits of x/q \in [0,1), are simultaneously secure iff the 2^{-j}-fractions of exp_{\alpha'} are one-way for \alpha' := \alpha^{2^s}. We use and extend the models of generic algorithms of Nechaev (1994) and Shoup (1997). We determine the generic complexity of inverting fractions of exp_\alpha for the case that \alpha has prime order q. As a consequence, arbitrary segments of (1-\varepsilon)\lg q consecutive shift bits of random x are for constant \varepsilon > 0 simultaneously secure against generic attacks. Every generic algorithm using t generic steps (group operations) for distinguishing bit strings of j consecutive shift bits of x from random bit strings has at most advantage O((\lg q) j \sqrt{t} (2^j/q)^{1/4}).
Last updated:  1998-06-14
Many-to-one Trapdoor Functions and their Relation to Public-key Cryptosystems
Mihir Bellare, Shai Halevi, Amit Sahai, Salil Vadhan
The heart of the task of building public key cryptosystems is viewed as that of ``making trapdoors;'' in fact, public key cryptosystems and trapdoor functions are often discussed as synonymous. How accurate is this view? In this paper we endeavor to get a better understanding of the nature of ``trapdoorness'' and its relation to public key cryptosystems, by broadening the scope of the investigation: we look at general trapdoor functions; that is, functions that are not necessarily injective (i.e., one-to-one). Our first result is somewhat surprising: we show that non-injective trapdoor functions (with super-polynomial pre-image size) can be constructed from any one-way function (and hence it is unlikely that they suffice for public key encryption). On the other hand, we show that trapdoor functions with polynomial pre-image size are sufficient for public key encryption. Together, these two results indicate that the pre-image size is a fundamental parameter of trapdoor functions. We then turn our attention to the converse, asking what kinds of trapdoor functions can be constructed from public key cryptosystems. We take a first step by showing that in the random-oracle model one can construct injective trapdoor functions from any public key cryptosystem.
Last updated:  1999-08-01
Security and Composition of Multi-party Cryptographic Protocols
Ran Canetti
We present general definitions of security for multiparty cryptographic protocols and show that, using these definitions, security is preserved under a natural composition method. The definitions follow the general paradigm of known definitions; yet some substantial modifications and simplifications are introduced. In particular, `black-box simulation' is no longer required. The composition method is essentially the natural `subroutine substitution' method suggested by Micali and Rogaway. We first present the general definitional approach. Next we consider several settings for multiparty protocols. These include the cases of non-adaptive and adaptive adversaries, as well as the information-theoretic and the computational models.
Last updated:  1998-05-28
Making An Empty Promise With A Quantum Computer (Or, A Brief Review on the Impossibility of Quantum Bit Commitment)
H. F. Chau, H. -K. Lo
Alice has made a decision in her mind. While she does not want to reveal it to Bob at this moment, she would like to convince Bob that she is committed to this particular decision and that she cannot change it at a later time. Is there a way for Alice to get Bob's trust? Until recently, researchers had believed that the above task can be performed with the help of quantum mechanics, with the security of the quantum scheme resting on the uncertainty principle. Nevertheless, such optimism was recently shattered by Mayers and by us, who found that Alice can always change her mind if she has a quantum computer. Here, we survey this dramatic development and its implications on the security of other quantum cryptographic schemes.
Last updated:  1999-01-01
Quantum Computers Render Quantum Key Distribution Unconditionally Secure Over Arbitrarily Long Distances
Hoi-Kwong Lo, H. F. Chau
Quantum cryptography has long been claimed to be useful for achieving many tasks that are impossible from the perspective of conventional cryptography. Arguably, the most important problem in quantum cryptography has been a rigorous proof of the security of quantum key distribution, the most well-known application. This notoriously hard problem has eluded researchers for years and has become even more important after the recent surprising demonstration of the insecurity of many other quantum cryptographic schemes, including quantum bit commitment. Here, we solve this long-standing problem by showing that, given quantum computers, quantum key distribution over an arbitrarily long distance of a realistic noisy channel can be made unconditionally secure. The novel technique we use is a reduction from a quantum scheme to a classical scheme. The security in realistic noisy environments is then proven by using the recent theory of fault-tolerant quantum computation.
Last updated:  1998-05-04
More on Proofs of Knowledge
Shai Halevi, Silvio Micali
The notion of proofs of knowledge is central to cryptographic protocols, and many definitions for it have been proposed. In this work we explore a different facet of this notion, not addressed by prior definitions. Specifically, prior definitions concentrate on capturing the properties of the verifier, and do not pay much attention to the properties of the prover. Our new definition is strictly stronger than previous ones, and captures new and desirable properties. In particular, it guarantees prover feasibility, that is, it guarantees that the time spent by the prover in a proof of knowledge is comparable to that it spends in an "extraction" of this knowledge. Our definition also enables one to consider meaningfully the case of a single, specific prover.
Last updated:  1998-04-30
Randomness versus Fault-Tolerance
Ran Canetti, Eyal Kushilevitz, Rafail Ostrovsky, Adi Rosen
We investigate the relations between two major requirements of multiparty protocols: {\em fault tolerance} (or {\em resilience}) and {\em randomness}. Fault-tolerance is measured in terms of the maximum number of colluding faulty parties, t, that a protocol can withstand and still maintain the privacy of the inputs and the correctness of the outputs (of the honest parties). Randomness is measured in terms of the total number of random bits needed by the parties in order to execute the protocol. Previously, the upper bound on the amount of randomness required by general constructions for securely computing any non-trivial function f was polynomial both in $n$, the total number of parties, and the circuit-size C(f). This was the state of knowledge even for the special case t=1 (i.e., when there is at most one faulty party). In this paper, we show that for any linear-size circuit, and for any number t < n/3 of faulty parties, O(poly(t) * log n) randomness is sufficient. More generally, we show that for any function f with circuit-size C(f), we need only O(poly(t) * log n + poly(t) * C(f)/n) randomness in order to withstand any coalition of size at most t. Furthermore, in our protocol only t+1 parties flip coins and the rest of the parties are deterministic. Our results generalize to the case of adaptive adversaries as well.
Last updated:  1998-04-30
A Random Server Model for Private Information Retrieval (or How to Achieve Information Theoretic PIR Avoiding Data Replication)
Yael Gertner, Shafi Goldwasser, Tal Malkin
Private information retrieval (PIR) schemes enable users to obtain information from databases while keeping their queries secret from the database managers. We propose a new model for PIR, utilizing auxiliary random servers to provide privacy services for database access. In this model, prior to any on-line communication where users request queries, the database engages in an initial preprocessing setup stage with the random servers. Using this model we achieve the first PIR information theoretic solution in which the database does not need to give away its data to be replicated, and with minimal on-line computation cost for the database. This solves privacy and efficiency problems inherent to all previous solutions. In particular, all previous information theoretic PIR schemes required multiple replications of the database into separate entities which are not allowed to communicate with each other; and in all previous schemes (including ones which do not achieve information theoretic security), the amount of computation performed by the database on-line for every query is at least linear in the size of the database. In contrast, in our solutions the database does not give away its contents to any other entity; and after the initial setup stage, which costs at most O(n log n) in computation, the database needs to perform only O(1) amount of computation to answer questions of users on-line. All the extra on-line computation is done by the auxiliary random servers.
Last updated:  1998-04-22
Maintaining Authenticated Communication in the Presence of Break-ins
Ran Canetti, Shai Halevi, Amir Herzberg
We study the problem of maintaining authenticated communication over untrusted communication channels, in a scenario where the communicating parties may be occasionally and repeatedly broken into for transient periods of time. Once a party is broken into, its cryptographic keys are exposed and perhaps modified. Yet, we want parties whose security is thus compromised to regain their ability to communicate in an authenticated way aided by other parties. In this work we present a mathematical model for this highly adversarial setting, exhibiting salient properties and parameters, and then describe a practically-appealing protocol for the task of maintaining authenticated communication in this model. A key element in our solution is devising {\em proactive distributed signature (PDS) schemes} in our model. Although PDS schemes are known in the literature, they are all designed for a model where authenticated communication and broadcast primitives are available. We therefore show how these schemes can be modified to work in our model, where no such primitives are available a-priori. In the process of devising the above schemes, we also present a new definition of PDS schemes (and of distributed signature schemes in general). This definition may be of independent interest.
Last updated:  2003-08-04
The Random Oracle Methodology, Revisited
Ran Canetti, Oded Goldreich, Shai Halevi
We take a critical look at the relationship between the security of cryptographic schemes in the Random Oracle Model, and the security of the schemes that result from implementing the random oracle by so called "cryptographic hash functions". The main result of this paper is a negative one: There exist signature and encryption schemes that are secure in the Random Oracle Model, but for which any implementation of the random oracle results in insecure schemes. In the process of devising the above schemes, we consider possible definitions for the notion of a "good implementation" of a random oracle, pointing out limitations and challenges.
Last updated:  1998-03-17
Chameleon Hashing and Signatures
Hugo Krawczyk, Tal Rabin
We introduce CHAMELEON SIGNATURES, which provide an undeniable commitment of the signer to the contents of the signed document (as regular digital signatures do) but, at the same time, do not allow the recipient of the signature to disclose the contents of the signed information to any third party without the signer's consent. These signatures are closely related to Chaum's "undeniable signatures", but chameleon signatures allow for simpler and more efficient realizations than the latter. In particular, they are essentially non-interactive and do not involve the design and complexity of zero-knowledge proofs on which traditional undeniable signatures are based. Instead, chameleon signatures are generated under the standard method of hash-then-sign. Yet, the hash functions which are used are CHAMELEON HASH FUNCTIONS. These hash functions are characterized by the non-standard property of being collision-resistant for the signer but collision tractable for the recipient. We present simple and efficient constructions of chameleon hashing and chameleon signatures. The former can be constructed based on standard cryptographic assumptions (such as the hardness of factoring or discrete logarithms) and have efficient realizations based on these assumptions. For the signature part we can use any digital signature (such as RSA or DSS) and prove the unforgeability property of the resultant chameleon signatures solely based on the unforgeability of the underlying digital signature in use.
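As a concrete toy illustration of a discrete-log-based chameleon hash of the kind referred to above, the following sketch uses CH(m, r) = g^m h^r in a small prime-order group; the parameters are illustrative assumptions, not the paper's recommended instantiation.

```python
# A minimal sketch of a discrete-log-based chameleon hash: CH(m, r) = g^m * h^r
# mod p with public key h = g^x. Anyone can hash, but only the holder of the
# trapdoor x can find collisions. The tiny parameters below are toy values.
import secrets

q = 1019                 # prime order of the subgroup
p = 2 * q + 1            # 2039, also prime
g = 4                    # generator of the order-q subgroup of Z_p^*

x = secrets.randbelow(q - 1) + 1   # trapdoor (recipient's secret key)
h = pow(g, x, p)                   # public hash key

def ch(m, r):
    """Chameleon hash of message m with randomness r (both reduced mod q)."""
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p

def collide(m, r, m_new):
    """Trapdoor holder computes r_new with ch(m_new, r_new) == ch(m, r)."""
    return (r + (m - m_new) * pow(x, -1, q)) % q

m, r = 123, secrets.randbelow(q)
m2 = 456
r2 = collide(m, r, m2)
assert ch(m, r) == ch(m2, r2)      # a collision, findable only with the trapdoor
```

The recipient, who knows x, can open the hash to any message of its choice, which is exactly the collision-tractability property described above.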
Last updated:  1998-03-13
A Modular Approach to the Design and Analysis of Authentication and Key Exchange Protocols
Mihir Bellare, Ran Canetti, Hugo Krawczyk
We present a general framework for constructing and analyzing authentication protocols in realistic models of communication networks. This framework provides a sound formalization for the authentication problem and suggests simple and attractive design principles for general authentication and key exchange protocols. The key element in our approach is a modular treatment of the authentication problem in cryptographic protocols; this applies to the definition of security, to the design of the protocols, and to their analysis. In particular, following this modular approach, we show how to systematically transform solutions that work in a model of idealized authenticated communications into solutions that are secure in the realistic setting of communication channels controlled by an active adversary. Using these principles we construct and prove the security of simple and practical authentication and key-exchange protocols. In particular, we provide a security analysis of some well-known key exchange protocols (e.g. authenticated Diffie-Hellman key exchange), and of some of the techniques underlying the design of several authentication protocols that are currently being deployed on a large scale for the Internet Protocol and other applications.
Last updated:  1998-03-10
An Efficient Non-Interactive Statistical Zero-Knowledge Proof System for Quasi-Safe Prime Products
Rosario Gennaro, Daniele Micciancio, Tal Rabin
We present efficient zero-knowledge proof systems for quasi-safe prime products and other related languages. Quasi-safe primes are a relaxation of safe primes, a class of prime numbers useful in many cryptographic applications. Our proof systems achieve higher security and better efficiency than all previously known ones. In particular, all our proof systems are perfect or statistical zero-knowledge, meaning that even a computationally unbounded adversary cannot extract any information from the proofs. Moreover, our proof systems are extremely efficient because they do not use general reductions to NP-complete problems, can be easily parallelized preserving zero-knowledge, and are non-interactive for computationally unbounded provers. The prover can also be efficiently implemented given some trapdoor information and using very little interaction. We demonstrate the applicability of quasi-safe primes by showing how they can be effectively used in the context of RSA based undeniable signatures to enforce the use of ``good'' public keys, i.e., keys such that if a signer can convince a recipient of the validity of a signature, then he won't be able to subsequently deny the same signature in case of a dispute.
Last updated:  1998-06-16
Fast Batch Verification for Modular Exponentiation and Digital Signatures
Mihir Bellare, Juan A. Garay, Tal Rabin
Many tasks in cryptography (e.g., digital signature verification) call for verification of a basic operation like modular exponentiation in some group: given (g,x,y) check that g^x = y. This is typically done by re-computing g^x and checking we get y. We would like to do it differently, and faster. The approach we use is batching. Focusing first on the basic modular exponentiation operation, we provide some probabilistic batch verifiers, or tests, that verify a sequence of modular exponentiations significantly faster than the naive re-computation method. This yields speedups for several verification tasks that involve modular exponentiations. Focusing specifically on digital signatures, we then suggest a weaker notion of (batch) verification which we call ``screening.'' It seems useful for many usages of signatures, and has the advantage that it can be done very fast; in particular, we show how to screen a sequence of RSA signatures at the cost of one RSA verification plus hashing.
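To make the batching idea concrete, here is a minimal sketch of a probabilistic batch test that combines the claims y_i = g^{x_i} with random exponents and checks a single relation; the toy group and exponent length are illustrative assumptions, and the paper should be consulted for the actual tests and their analysis.

```python
# Minimal sketch of a random-exponent batch test for claims y_i = g^{x_i} (mod p):
# combine all claims with random multipliers s_i and verify one equation instead
# of n. A batch containing a false claim passes only with small probability
# (roughly 1/q for this toy subgroup of prime order q).
import secrets

def batch_verify(g, p, q, claims, bits=64):
    """claims: list of (x_i, y_i) allegedly satisfying y_i = g^x_i mod p; g has order q."""
    s = [secrets.randbits(bits) for _ in claims]
    exponent = sum(si * xi for si, (xi, _) in zip(s, claims)) % q
    lhs = pow(g, exponent, p)
    rhs = 1
    for si, (_, yi) in zip(s, claims):
        rhs = rhs * pow(yi, si, p) % p
    return lhs == rhs

p, q, g = 2039, 1019, 4                       # toy group: g generates the order-q subgroup
claims = [(x, pow(g, x, p)) for x in (12, 345, 678, 901)]
print(batch_verify(g, p, q, claims))          # True
claims[2] = (678, 5)                          # inject a false claim
print(batch_verify(g, p, q, claims))          # False, except with small probability
```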
Last updated:  1998-03-04
A Practical Public Key Cryptosystem Provably Secure against Adaptive Chosen Ciphertext Attack
Ronald Cramer, Victor Shoup
A new public key cryptosystem is presented that is provably secure against adaptive chosen ciphertext attack. The scheme is quite practical, and the proof of security relies only on standard intractability assumptions.
Last updated:  1998-02-26
On the possibility of basing Cryptography on the assumption that $P \neq NP$
Oded Goldreich, Shafi Goldwasser
Recent works by Ajtai and by Ajtai and Dwork bring to light the old (general) question of whether it is at all possible to base the security of cryptosystems on the assumption that $P \neq NP$. We discuss this question and in particular review and extend a two-decade old result of Brassard regarding this question. Our conclusion is that the question remains open.
Last updated:  1998-02-22
Universal Service Providers for Database Private Information Retrieval
Giovanni Di-Crescenzo, Yuval Ishai, Rafail Ostrovsky
We consider the question of private information retrieval in the so-called ``commodity-based'' model. This model was recently proposed by Beaver for practically-oriented service-provider internet applications. In this paper, we show the following, somewhat surprising, results regarding this model for the problem of private-information retrieval: (1) the service-provider model makes it possible to dramatically reduce the overall communication involving the user, using off-line pre-processing messages from ``service-providers'' to databases, where the service-providers do not need to know the database contents, nor the future user's requests; (2) our service-provider solutions are resilient against more than a majority (in fact, all-but-one) coalitions of service-providers; and (3) these results hold for {\em both} the computational and the information-theoretic setting. More specifically, we exhibit a service-provider algorithm which can ``sell'' (i.e., generate and send) ``commodities'' to users and databases, that subsequently allow users to retrieve database contents in a way which hides from the database which particular item the user retrieves. The service-providers need not know anything about the contents of the databases nor the nature of the user's requests in order to generate commodities. Our commodity-based solution significantly improves communication complexity of the users (i.e., counting both the size of commodities bought by the user from the service-providers and the subsequent communication with the databases) compared to all previously known on-line private information retrieval protocols (i.e., without the help of the service-providers). Moreover, we show how commodities from different service-providers can be {\em combined} in such a way that even if ``all-but-one'' of the service-providers collude with the database, the user's privacy remains intact. Finally, we show how to re-use commodities in case of multiple requests (i.e., in the amortized sense), how to ``check'' commodity-correctness, and how some of the solutions can be extended to the related problem of {\em Private Information Storage}.
Last updated:  1998-02-03
Private Information Retrieval by Keywords
Benny Chor, Niv Gilboa, Moni Naor
Private information retrieval (PIR) schemes enable a user to access one or more servers that hold copies of a database and {\em privately} retrieve parts of the $n$ bits of data stored in the database. This means that the queries give each individual database no partial information (in the information theoretic or computational sense) on the identity of the item retrieved by the user. All known PIR schemes assume that the user knows the {\em physical address} of the sought item. This is usually not the case when accessing a public database that is not managed by the user. Such databases are typically presented with keywords, which are then internally translated (at the database end) to physical addresses, using an appropriate search structure (for example, a hash table or a binary tree). In this note we describe a simple, modular way to privately access data by keywords. It combines {\em any} conventional search structure with {\em any} underlying PIR scheme (including single server schemes). The transformation requires no modification in the way that the search structure is maintained. Therefore the same database will support both private and regular (non private) searches.
Last updated:  1998-01-27
The Graph Clustering Problem has a Perfect Zero-Knowledge Proof
A. De Santis, G. Di Crescenzo, O. Goldreich, G. Persiano.
The input to the Graph Clustering Problem consists of a sequence of integers $m_1,...,m_t$ and a sequence of $\sum_{i=1}^{t}m_i$ graphs. The question is whether the equivalence classes, under the graph isomorphism relation, of the input graphs have sizes which match the input sequence of integers. In this note we show that this problem has a (perfect) zero-knowledge interactive proof system. This result improves over record 96-14, where a parametrized (by the sequence of integers) version of the problem was studied.
Last updated:  1998-01-08
On Protocol Divertibility
Gerrit Bleumer
In this paper, we establish the notion of divertibility as a protocol property as opposed to the existing notion as a language property (see Okamoto, Ohta). We give a definition of protocol divertibility that applies to arbitrary 2-party protocols and is compatible with Okamoto and Ohta's definition in the case of interactive zero-knowledge proofs. Other important examples falling under the new definition are blind signature protocols. A sufficient criterion for divertibility is presented and found to be satisfied by many examples of protocols in the literature. The generality of the definition is further demonstrated by examples from protocol classes that have not been considered for divertibility before. We show diverted El-Gamal encryption and diverted Diffie-Hellman key exchange.
Last updated:  1997-12-05
Optimistic fair Exchange of Digital Signatures
N. Asokan, V. Shoup, M. Waidner
We present a new protocol that allows two players to exchange digital signatures (including RSA and DSS) over the Internet in a fair way, so that either each player gets the other's signature, or neither player does. One obvious application is where the signatures represent items of value, for example, an electronic check or airline ticket; the protocol can also be adapted to exchange encrypted data. The protocol relies on a trusted third party, but is "optimistic," in that the third party is only needed in cases where one player attempts to cheat or simply crashes. This is an important property, as it greatly reduces the load on the third party, which in particular facilitates a more robust and secure implementation of the third party.
Last updated:  1997-11-09
Generalized Diffie-Hellman Modulo a Composite is not Weaker than Factoring
Eli Biham, Dan Boneh, Omer Reingold
The Diffie-Hellman key-exchange protocol may naturally be extended to k>2 parties. This gives rise to the generalized Diffie-Hellman assumption (GDH-Assumption). Naor and Reingold have recently shown an efficient construction of pseudo-random functions and reduced the security of their construction to the GDH-Assumption. In this note, we prove that breaking this assumption modulo a composite would imply an efficient algorithm for factorization. Therefore, the security of both the key-exchange protocol and the pseudo-random functions can be reduced to factoring.
Last updated:  1997-10-06
Visual Authentication and Identification
Moni Naor, Benny Pinkas.
The problems of authentication and identification have received wide interest in cryptographic research. However, there has been no satisfactory solution for the problem of authentication by a human recipient who does not use any trusted computational device. The problem of authentication arises for example in the context of smartcard--human interaction, in particular in the context of electronic wallets. The problem of identification is ubiquitous in communication over insecure networks. This paper introduces visual authentication and visual identification methods, which are authentication and identification methods for human users based on visual cryptography. These methods are very natural and easy to use, and can be implemented using very common ``low tech'' technology. The methods we suggest are efficient in the sense that a single transparency can be used for several authentications or for several identifications. The visual authentication methods we suggest are not limited to authenticating textual messages, and can be used to authenticate any image. An important contribution of this paper is the introduction of a framework for proving the security of protocols in which humans take an active part. We rigorously prove the security of our schemes using this framework.
Last updated:  1998-08-01
Self-Delegation with Controlled Propagation - or - What If You Lose Your Laptop.
Oded Goldreich, Birgit Pfitzmann, Ronald L. Rivest
We introduce delegation schemes wherein a user may delegate rights to himself, i.e., to other public keys he owns, but may not safely delegate those rights to others, i.e., to their public keys. In our motivating application, a user has a primary (long-term) key that receives rights, such as access privileges, that may not be delegated to others, yet the user may reasonably wish to delegate these rights to new secondary (short-term) keys he creates to use on his laptop when traveling, to avoid having to store his primary secret key on the vulnerable laptop. We propose several cryptographic schemes, both generic and practical, that allow such self-delegation while providing strong motivation for the user not to delegate rights that he only obtained for personal use to other parties.
Last updated:  1997-08-15
Identity Escrow
Joe Kilian, Erez Petrank
We introduce the notion of escrowed identity, an application of key-escrow ideas to the problem of identification. In escrowed identity, one party, A, does not give his identity to another party B, but rather gives him information that would allow an authorized third party, E, to determine A's identity. However, B receives a guarantee that E can indeed determine A's identity. We give protocols for escrowed identity based on the El-Gamal (signature and encryption) schemes and on the RSA function. A useful feature of our protocol is that after setting up A to use the system, E is only involved when it is actually needed to determine A's identity.
Last updated:  1997-08-15
CBC MAC for Real-Time Data Sources
Erez Petrank, Charles Rackoff
The Cipher Block Chaining (CBC) Message Authentication Code (MAC) is an authentication method which is widely used in practice. It is well known that the naive use of CBC MAC for variable length messages is not secure, and a few rules of thumb for the correct use of CBC MAC are known by folklore. The first rigorous proof of the security of CBC MAC, when used on fixed length messages, was given only recently by Bellare, Kilian and Rogaway. They also suggested variants of CBC MAC that handle variable length messages but in these variants the length of the message has to be known in advance (i.e., before the message is processed). We study CBC authentication of real time applications in which the length of the message is not known until the message ends, and furthermore, since the application is real-time, it is not possible to start processing the authentication only after the message ends. We first present a variant of CBC MAC, called {\em double MAC} (DMAC) which handles messages of variable unknown lengths. Computing DMAC on a message is virtually as simple and as efficient as computing the standard CBC MAC on the message. We provide a rigorous proof that its security is implied by the security of the underlying block cipher. Next, we argue that the basic CBC MAC is secure when applied to prefix free message space. A message space can be made prefix free by authenticating also the (usually hidden) last character which marks the end of the message.
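The following is a minimal sketch of the overall shape of such a scheme: CBC-chain the padded message under one key and re-encrypt the final chaining value under an independent second key. The HMAC-based stand-in primitive and the padding rule are illustrative assumptions made so the example runs with the standard library only; they are not the paper's DMAC specification.

```python
# Sketch of a two-key CBC-style MAC for a stream whose length is unknown until it
# ends: process blocks as they arrive, then re-encrypt the final chaining value
# under an independent key. E() is an HMAC-based stand-in for a block cipher so
# the sketch is self-contained; it is not a real block cipher.
import hmac, hashlib

BLOCK = 16

def E(key: bytes, block: bytes) -> bytes:
    return hmac.new(key, block, hashlib.sha256).digest()[:BLOCK]

def pad(msg: bytes) -> bytes:
    msg += b"\x80"                          # unambiguous end-of-message marker
    return msg + b"\x00" * (-len(msg) % BLOCK)

def mac(k1: bytes, k2: bytes, msg: bytes) -> bytes:
    state = bytes(BLOCK)
    data = pad(msg)
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        state = E(k1, bytes(a ^ b for a, b in zip(state, block)))
    return E(k2, state)                     # second, independent key applied once at the end

k1, k2 = b"k" * 16, b"K" * 16
print(mac(k1, k2, b"a message whose length was not known in advance").hex())
```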
Last updated:  1997-07-11
Collision-Resistant Hashing: Towards Making UOWHFs Practical
Mihir Bellare, Phillip Rogaway
Recent attacks on the cryptographic hash functions MD4 and MD5 make it clear that (strong) collision-resistance is a hard-to-achieve goal. We look towards a weaker notion, the universal one-way hash functions (UOWHFs) of Naor and Yung, and investigate their practical potential. The goal is to build UOWHFs not based on number theoretic assumptions, but from the primitives underlying current cryptographic hash functions like MD5 and SHA. Pursuing this goal leads us to new questions. The main one is how to extend a compression function to a full-fledged hash function in this new setting. We show that the classic Merkle-Damgard method used in the standard setting fails for these weaker kinds of hash functions, and we present some new methods that work. Our main construction is the "XOR tree." We also consider the problem of input length-variability and present a general solution.
Last updated:  1997-06-13
Factoring via Strong Lattice Reduction Algorithms
Harald Ritter, Carsten Roessner
We address the problem of factoring a large composite number using lattice reduction algorithms. Schnorr has shown that under reasonable number-theoretic assumptions this problem can be reduced to a simultaneous diophantine approximation problem. The latter in turn can be solved by finding sufficiently many l_1-short vectors in a suitably defined lattice. Using lattice basis reduction algorithms, Schnorr and Euchner applied Schnorr's reduction technique to 40-bit long integers. Their implementation needed several hours to compute a 5% fraction of the solution, i.e., 6 out of 125 congruences which are necessary to factorize the composite. In this report we describe a more efficient implementation using stronger lattice basis reduction techniques incorporating ideas of Schnorr, Hoerner and Ritter. For 60-bit long integers our algorithm yields a complete factorization in less than 3 hours.
Last updated:  1997-06-02
Towards realizing random oracles: Hash functions that hide all partial information
Ran Canetti
The random oracle model is a very convenient setting for designing cryptographic protocols. In this idealized model all parties have access to a common, public random function, called a random oracle. Protocols in this model are often very simple and efficient; also the analysis is often clearer. However, we do not have a general mechanism for transforming protocols that are secure in the random oracle model into protocols that are secure in real life. In fact, we do not even know how to meaningfully specify the properties required from such a mechanism. Instead, it is a common practice to simply replace - often without mathematical justification - the random oracle with a `cryptographic hash function' (e.g., MD5 or SHA). Consequently, the resulting protocols have no meaningful proofs of security.
Last updated:  1997-05-04
Protecting Data Privacy in Private Information Retrieval Schemes
Yuval Ishai, Eyal Kushilevitz
Private Information Retrieval (PIR) schemes allow a user to retrieve the i-th bit of a data string x, replicated in k>=2 databases, while keeping the value of i private. The main cost measure for such a scheme is its communication complexity. We study PIR schemes where in addition to the user's privacy we require data privacy. That is, in every invocation of the retrieval protocol the user learns exactly a single physical bit of x and no other information. Further, we require that even a dishonest user would not learn more than a single physical data bit. We present general transformations that allow translating PIR schemes satisfying certain properties into PIR schemes that respect data privacy as well, with a small penalty in the communication complexity. Using our machinery we are able to translate currently known PIR solutions into schemes satisfying the newly introduced, stronger privacy constraint. In particular we get: a k-database scheme of complexity O(log(n) n^{1/(2k-1)}) for every k>=2; an O(log(n))-database scheme of poly-logarithmic complexity; a 2-database computational PIR of complexity O(n^c), for every constant c>0. All these require only a single round of interaction.
Last updated:  1997-04-21
A Probabilistic Error-Correcting Scheme
S. Decatur, O. Goldreich, D. Ron
In the course of research in Computational Learning Theory, we found ourselves in need of an error-correcting encoding scheme for which few bits in the codeword yield no information about the plain message. Being unaware of a previous solution, we came up with the scheme presented here. Since this scheme may be of interest to people working in Cryptography, we thought it may be worthwhile to ``publish'' this part of our work within the Cryptography community. Clearly, a scheme as described above cannot be deterministic. Thus, we introduce a probabilistic coding scheme which, in addition to the standard coding-theoretic requirements, has the feature that any constant fraction of the bits in the (randomized) codeword yields no information about the message being encoded. This coding scheme is also used to obtain efficient constructions for the Wire-Tap Channel Problem.
Last updated:  2002-10-07
A note on negligible functions
Mihir Bellare
In theoretical cryptography, one formalizes the notion of an adversary's success probability being ``too small to matter'' by asking that it be a negligible function of the security parameter. We argue that the issue that really arises is what it might mean for a collection of functions to be ``negligible.'' We consider (and define) two such notions, and prove them equivalent. Roughly, this enables us to say that any cryptographic primitive has a specific associated ``security level.'' In particular we say this for any one-way function. We also reconcile different definitions of negligible error arguments and computational proofs of knowledge that have appeared in the literature. Although the motivation is cryptographic, the main result is purely about negligible functions.
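For reference, the standard single-function notion that the above discussion generalizes to collections of functions can be stated as:

```latex
\[
  \nu : \mathbb{N} \to [0,1] \text{ is negligible}
  \iff
  \forall c > 0 \;\; \exists N_c \;\; \forall n > N_c : \; \nu(n) < n^{-c}.
\]
```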
Last updated:  1997-03-05
Efficient Cryptographic Protocols Based on Noisy Channels.
Claude Crepeau
The Wire-Tap Channel of Wyner shows that a Binary Symmetric Channel may be used as a basis for exchanging a secret key. Later, Crepeau and Kilian showed how a BSC may be used to implement Oblivious Transfer. Unfortunately, this result is rather impractical as it requires $n^{11}$ bits to be sent through the BSC to accomplish a single OT. The current paper provides efficient protocols to achieve Bit Commitment and Oblivious Transfer based on the existence of a BSC. Our protocols respectively use the BSC $n$ times and $n^3$ times. These results are based on a technique known as Generalized Privacy Amplification.
Last updated:  1997-02-26
Round-Optimal Zero-Knowledge Arguments Based on any One-Way Function
Mihir Bellare, Markus Jakobsson, Moti Yung
We fill a gap in the theory of zero-knowledge protocols by presenting NP-arguments that achieve negligible error probability and computational zero-knowledge in four rounds of interaction, assuming only the existence of a one-way function. This result is optimal in the sense that four rounds and a one-way function are each individually necessary to achieve a negligible error zero-knowledge argument for NP.
Last updated:  1997-02-26
A New Paradigm for Collision-free Hashing: Incrementality at Reduced Cost
Mihir Bellare, Daniele Micciancio
We present a simple, new paradigm for the design of collision-free hash functions. Any function emanating from this paradigm is incremental. (This means that if a message x which I have previously hashed is modified to x' then rather than having to re-compute the hash of x' from scratch, I can quickly ``update'' the old hash value to the new one, in time proportional to the amount of modification made in x to get x'.) Also any function emanating from this paradigm is parallelizable, useful for hardware implementation.
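As a toy illustration of the incrementality property (not of the paper's concrete, collision-free construction), the sketch below hashes indexed blocks independently and combines the results with addition modulo a fixed modulus, so an update touches only the modified block.

```python
# Toy illustration of incrementality: hash each (index, block) pair and combine
# with a group operation (addition mod M). Changing one block only requires
# subtracting the old term and adding the new one. This is an illustration of
# the update property only, not a secure instantiation.
import hashlib

M = 2 ** 256

def h(index: int, block: bytes) -> int:
    d = hashlib.sha256(index.to_bytes(8, "big") + block).digest()
    return int.from_bytes(d, "big")

def hash_blocks(blocks):
    return sum(h(i, b) for i, b in enumerate(blocks)) % M

def update(old_hash, index, old_block, new_block):
    """Incremental update: cost independent of the number of blocks."""
    return (old_hash - h(index, old_block) + h(index, new_block)) % M

blocks = [b"block-%d" % i for i in range(1000)]
full = hash_blocks(blocks)
blocks[7] = b"patched"
assert hash_blocks(blocks) == update(full, 7, b"block-7", b"patched")
```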
Last updated:  1996-12-10
Public-Key Cryptosystems from Lattice Reduction Problems
Oded Goldreich, Shafi Goldwasser, Shai Halevi
We present a new proposal for a trapdoor one-way function, from which we derive public-key encryption and digital signatures. The security of the new construction is based on the conjectured computational difficulty of lattice-reduction problems, providing a possible alternative to existing public-key encryption algorithms and digital signatures such as RSA and DSS.
Last updated:  1996-12-06
Verifiable Partial Key Escrow
Mihir Bellare, Shafi Goldwasser
One of the main objections to existing proposals for key escrow is that the individual's privacy relies on too high a level of trust in the law enforcement agencies. In particular, even if the government is trustworthy today, it may be replaced by an un-trustworthy government tomorrow which could immediately and suddenly recover the secret keys of all users.
Last updated:  1996-11-03
The Graph Clustering Problem has a Perfect Zero-Knowledge Proof
Oded Goldreich
The Graph Clustering Problem is parameterized by a sequence of positive integers, $m_1,...,m_t$. The input is a sequence of $\sum_{i=1}^{t}m_i$ graphs, and the question is whether the equivalence classes under the graph isomorphism relation have sizes which match the sequence of parameters. In this note we show that this problem has a (perfect) zero-knowledge interactive proof system.
Last updated:  1996-09-25
On the Contrast in Visual Cryptography Schemes
Carlo Blundo, Alfredo De Santis, Douglas R. Stinson
A visual cryptography scheme is a method to encode a secret image SI into shadow images called shares such that certain qualified subsets of shares enable the ``visual'' recovery of the secret image. The ``visual'' recovery consists of xeroxing the shares onto transparencies, and then stacking them. The shares of a qualified set will reveal the secret image without any cryptographic computation. In this paper we analyze the contrast of the reconstructed image in k out of n visual cryptography schemes. (In such a scheme any k shares will reveal the image, but no set of k-1 shares gives any information about the image.) In the case of 2 out of n threshold schemes we give a complete characterization of schemes having optimal contrast and minimum pixel expansion in terms of certain balanced incomplete block designs. In the case of k out of n threshold schemes with k>2 we obtain upper and lower bounds on the optimal contrast.
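To make the contrast and pixel-expansion measures concrete, here is the classical 2-out-of-2 scheme with pixel expansion 2 (an illustration only; the paper analyzes general k-out-of-n schemes).

```python
# Toy 2-out-of-2 visual secret sharing: each secret pixel becomes two subpixels
# per share; stacking (Boolean OR) reconstructs a white pixel as one black
# subpixel ("grey") and a black pixel as two black subpixels.
import secrets

def share_pixel(bit):
    """bit: 0 = white, 1 = black. Returns the subpixel pairs for the two shares."""
    a = [(0, 1), (1, 0)][secrets.randbelow(2)]   # random pattern for share 1
    b = a if bit == 0 else (1 - a[0], 1 - a[1])  # same pattern for white, complement for black
    return a, b

def stack(a, b):
    return tuple(x | y for x, y in zip(a, b))    # transparency stacking = OR

for bit in (0, 1):
    s1, s2 = share_pixel(bit)
    print(bit, s1, s2, stack(s1, s2), "->", sum(stack(s1, s2)), "black subpixels")
# white pixels reconstruct to 1 black subpixel, black pixels to 2
```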
Last updated:  1996-08-05
Proactive RSA
Yair Frankel, Peter Gemmell, Philip D. MacKenzie, Moti Yung
We consider a "mobile adversary" which may corrupt all participants throughout the lifetime of the system in a non-monotonic fashion (i.e. recoveries are possible) but the adversary is unable to corrupt too many participants during any short time period. Schemes resiliant to such adverasry are called proactive. We present a proactive RSA system in which a threshold of servers applies the RSA signature (or decryption) function in a distributed manner. Employing new combinatorial and elementary number theoretic techniques, our protocol enables the dynamic updating of the servers (which hold the RSA key distributively); it is secure even when a linear number of the servers are corrupted during any time period; it efficiently "self-maintains" the security of the function and its messages (ciphertexts or signatures); and it enables continuous availability, namely, correct function application using the shared key is possible at any time.
Last updated:  1997-02-17
On the Construction of Pseudo-Random Permutations: Luby-Rackoff Revisited
Moni Naor, Omer Reingold
Luby and Rackoff showed a method for constructing a pseudo-random permutation from a pseudo-random function. The method is based on composing four (or three for weakened security) so called Feistel permutations each of which requires the evaluation of a pseudo-random function. We reduce somewhat the complexity of the construction and simplify its proof of security by showing that two Feistel permutations are sufficient together with initial and final pair-wise independent permutations. The revised construction and proof provide a framework in which similar constructions may be brought up and their security can be easily proved. We demonstrate this by presenting some additional adjustments of the construction that achieve the following: 1. Reduce the success probability of the adversary. 2. Provide a construction of pseudo-random permutations with large input size using pseudo-random functions with small input size. 3. Provide a construction of a pseudo-random permutation using a single pseudo-random function.
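The following sketch shows the generic Feistel construction the above abstract builds on: each round, keyed by an independent (pseudo-random) function, turns functions on half-blocks into a permutation on full blocks. The HMAC-based round function is a stand-in, and the sketch omits the pairwise-independent initial and final permutations of the revised construction.

```python
# Generic Feistel rounds: each round maps (L, R) -> (R, L xor f_k(R)), which is
# invertible even though f_k itself need not be. Composing several rounds keyed
# by independent pseudo-random functions is the construction discussed above.
import hmac, hashlib

N = 16  # half-block size in bytes

def prf(key: bytes, x: bytes) -> bytes:
    return hmac.new(key, x, hashlib.sha256).digest()[:N]

def feistel_encrypt(keys, block: bytes) -> bytes:
    L, R = block[:N], block[N:]
    for k in keys:                       # one Feistel round per key
        L, R = R, bytes(a ^ b for a, b in zip(L, prf(k, R)))
    return L + R

def feistel_decrypt(keys, block: bytes) -> bytes:
    L, R = block[:N], block[N:]
    for k in reversed(keys):
        L, R = bytes(a ^ b for a, b in zip(R, prf(k, L))), L
    return L + R

keys = [b"round-key-%d" % i for i in range(4)]
pt = bytes(range(2 * N))
assert feistel_decrypt(keys, feistel_encrypt(keys, pt)) == pt
```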
Last updated:  1996-07-26
Oblivious Transfers and Intersecting Codes
Uncategorized
Gilles Brassard, Claude Crepeau, Miklos Santha
Show abstract
Uncategorized
Assume A owns t secret k-bit strings. She is willing to disclose one of them to B, at his choosing, provided he does not learn anything about the other strings. Conversely, B does not want A to learn which secret he chose to learn. A protocol for the above task is said to implement One-out-of-t String Oblivious Transfer. An apparently simpler task corresponds to the case k=1 and t=2 of two one-bit secrets: this is known as One-out-of-two Bit OT. We address the question of implementing the former assuming the existence of the latter. In particular, we prove that the general protocol can be implemented from O(tk) calls to One-out-of-two Bit OT. This is optimal up to a small multiplicative constant. Our solution is based on the notion of self-intersecting codes. Of independent interest, we give several efficient new constructions for such codes. Another contribution of this paper is a set of information-theoretic definitions for correctness and privacy of unconditionally-secure oblivious transfer.
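For orientation, the Python sketch below shows only the naive bitwise reduction for the case t=2 with an honest-but-curious receiver, modeling One-out-of-two Bit OT as an ideal functionality; a cheating receiver could mix choice bits across positions, which is precisely the problem the paper's self-intersecting codes address.

```python
import random

def bit_ot(b0, b1, choice):
    """Ideal 1-out-of-2 Bit OT: the receiver learns b_choice and nothing else,
    and the sender learns nothing about `choice` (modeled as a trusted party)."""
    return b1 if choice else b0

def string_ot_naive(s0, s1, choice):
    """Naive 1-out-of-2 String OT: one Bit OT per bit position, the receiver
    reusing the same choice bit throughout.  This is only meaningful for an
    honest-but-curious receiver; a cheating receiver could vary `choice` per
    position and learn bits of both strings."""
    return [bit_ot(a, b, choice) for a, b in zip(s0, s1)]

s0 = [random.randint(0, 1) for _ in range(8)]
s1 = [random.randint(0, 1) for _ in range(8)]
assert string_ot_naive(s0, s1, 0) == s0
assert string_ot_naive(s0, s1, 1) == s1
```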
Last updated:  1996-07-26
Collision-Free Hashing from Lattice Problems
Uncategorized
Oded Goldreich, Shafi Goldwasser, Shai Halevi
Show abstract
Uncategorized
Recently Ajtai described a construction of one-way functions whose security is equivalent to the difficulty of some well known approximation problems in lattices. We show that essentially the same construction can also be used to obtain collision-free hashing.
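A toy Python rendering of the construction, with illustratively small parameters that offer no real security: the compression function is $h_A(x) = Ax \bmod q$ for a random matrix $A$ and a binary input $x$, and a collision yields a short nonzero vector in the lattice $\{z : Az = 0 \bmod q\}$.

```python
import random

# Toy parameters (far too small for security); m > n*log2(q), so h compresses.
n, m, q = 4, 64, 257

A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]

def ajtai_hash(x):
    """h_A(x) = A x mod q for a binary input vector x of length m."""
    assert len(x) == m and all(b in (0, 1) for b in x)
    return tuple(sum(A[i][j] * x[j] for j in range(m)) % q for i in range(n))

x = [random.randint(0, 1) for _ in range(m)]
print(ajtai_hash(x))
# A collision x != x' with h_A(x) = h_A(x') gives the short nonzero vector
# x - x' in the lattice {z : A z = 0 mod q}, which is what connects collision
# resistance to worst-case lattice approximation problems.
```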
Last updated:  1996-07-02
Access Control and Signatures via Quorum Secret Sharing
Uncategorized
Moni Naor, Avishai Wool
Show abstract
Uncategorized
We suggest a method of controlling the access to a secure database via quorum systems. A quorum system is a collection of sets (quorums) every two of which have a nonempty intersection. Quorum systems have been used for a number of applications in the area of distributed systems. We propose a separation between access servers which are protected and trustworthy, but may be outdated, and the data servers which may all be compromised. The main paradigm is that only the servers in a complete quorum can collectively grant (or revoke) access permission. The method we suggest ensures that after authorization is revoked, a cheating user Alice will not be able to access the data even if many access servers still consider her authorized, and even if the complete raw database is available to her. The method has a low overhead in terms of communication and computation. It can also be converted into a distributed system for issuing secure signatures.
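For concreteness, here is a minimal Python sketch of one standard quorum construction, the grid quorum system (chosen for illustration; the paper does not prescribe this particular system): with $n = d^2$ servers arranged in a $d \times d$ grid, a quorum is a full row plus a full column, and any two quorums intersect.

```python
from itertools import product

d = 4                       # grid side; n = d*d servers, numbered 0..n-1

def quorum(row, col):
    """A quorum = all servers in one row plus all servers in one column."""
    return {row * d + j for j in range(d)} | {i * d + col for i in range(d)}

quorums = [quorum(r, c) for r, c in product(range(d), range(d))]

# The defining property: every two quorums intersect, so a decision granted by
# one complete quorum is always visible to any other quorum consulted later.
assert all(q1 & q2 for q1 in quorums for q2 in quorums)
print(sorted(quorums[0]), len(quorums[0]))   # each quorum has 2d - 1 servers
```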
Last updated:  1996-06-14
Visual Cryptography II: Improving the Contrast Via the Cover Base
Uncategorized
Moni Naor, Adi Shamir
Show abstract
Uncategorized
In Eurocrypt'94 we proposed a a new type of cryptographic scheme, which can decode concealed images without any cryptographic computations, by placing two transparencies on top of each other and using the decoder's (human) visual systems. One of the drawback of that proposal was a loss in contrast: a black pixel is translated in the reconstruction into a black region, but a white pixel is translated into a grey region (half black and half white). In this paper we propose am alternative model for reconstruction with a different set of operation (which we call the ``Cover" semi-group) is proposed. In this model we are able to obtain a better contrast than is possible in the previous one. We prove tight bounds on the contrast as a function of the complexity of the scheme. We also show that for constructing k-out-n secret sharing schemes when n and k are greater than 2 the new method is not applicable.
Last updated:  1996-05-23
Upper bound on the communication complexity of private information retrieval
Uncategorized
Andris Ambainis
Show abstract
Uncategorized
Private information retrieval was introduced by Chor, Goldreich, Kushilevitz and Sudan. It is the problem of reading a bit from the database so that it remains secret which bit we need. If the database exists in several identical copies, it is possible to ask queries so that each copy alone does not get any information about the address of the needed bit. We construct a scheme for private information retrieval with k databases and $O(n^{1/(2k-1)})$ bits of communication.
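As background, the Python sketch below shows the basic two-database XOR scheme with linear communication (due to Chor et al.), which illustrates how replication yields privacy; the point of this paper is to push the communication down to $O(n^{1/(2k-1)})$, which the sketch does not attempt.

```python
import random

n = 16
database = [random.randint(0, 1) for _ in range(n)]   # both servers hold a copy
i = 5                                                 # index the user wants

# User: send a random subset S to server 1 and S xor {i} to server 2.
S1 = {j for j in range(n) if random.random() < 0.5}
S2 = S1 ^ {i}

def server_answer(query):
    """Each server returns the XOR of the database bits indexed by the query;
    a single query is a uniformly random subset, independent of i."""
    answer = 0
    for j in query:
        answer ^= database[j]
    return answer

a1, a2 = server_answer(S1), server_answer(S2)
assert a1 ^ a2 == database[i]      # every position except i cancels out
```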
Last updated:  1996-05-16
Private Information Storage
Uncategorized
Rafail Ostrovsky, Victor Shoup
Show abstract
Uncategorized
We consider the setting of hiding information through the use of multiple databases that do not interact with one another. Previously, in this setting solutions for efficient retrieval of data were given, where a user achieves this by interacting with all the databases. We consider the case of both writing and reading. While the case of reading was well studied before, the case of writing was previously completely open. In this paper, we show how to implement both read and write operations. As in the previous papers, we measure, as a function of k and n, the amount of communication required between a user and all the databases for a single read/write operation, and achieve efficient read/write schemes. Moreover, we show a general reduction from a reading-only database scheme to a reading-and-writing database scheme, with the following guarantee: for any k, given a retrieval-only k-database scheme with communication complexity R(k,n), we show a (k+1)-database reading-and-writing scheme with total communication complexity O(R(k,n) * (log n)^{O(1)}). It should be stressed that prior to the current paper no non-trivial (i.e. sub-linear) bounds for private information storage were known.
Last updated:  1996-05-14
Linear Zero-Knowledge - A note on Efficient Zero-Knowledge Proofs and Arguments
Uncategorized
Ronald Cramer, Ivan Damgaard
Show abstract
Uncategorized
We present a zero-knowledge proof system for any NP language L, which allows showing that x is in L using communication corresponding to $O(|x|^c)+k$ bit commitments, with error probability $2^{-k}$, and where c is a constant depending only on L. The proof can be based on any bit commitment scheme with a particular set of properties. We suggest an efficient implementation based on factoring. The protocol allows showing that a Boolean formula of size n is satisfiable, with error probability $2^{-n}$, using O(n) commitments. This is the first protocol for SAT that is linear in this sense. [The rest of the abstract was truncated and appears below -- the library.]
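The protocol only needs a bit commitment scheme with suitable properties; the Python sketch below shows a generic hash-based stand-in for the commit/open interface (hiding and binding only heuristically), not the factoring-based implementation suggested in the paper.

```python
import hashlib, os

def commit(bit):
    """Commit to a bit with a fresh random opening; returns (commitment, opening)."""
    r = os.urandom(16)
    c = hashlib.sha256(r + bytes([bit])).digest()
    return c, (r, bit)

def verify(c, opening):
    r, bit = opening
    return bit in (0, 1) and hashlib.sha256(r + bytes([bit])).digest() == c

c, opening = commit(1)
assert verify(c, opening)                 # honest opening accepted
assert not verify(c, (opening[0], 0))     # binding: cannot open to the other bit
```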
Last updated:  1996-05-14
On Monotone Function Closure of Statistical Zero-Knowledge
Uncategorized
Ronald Cramer, Ivan Damgaard
Show abstract
Uncategorized
Assume we are given a language L with an honest verifier perfect zero-knowledge proof system. Assume also that the proof system is an Arthur-Merlin game with at most 3 moves. The class of such languages includes all random self-reducible languages, and also any language with a perfect zero-knowledge non-interactive proof. We show that such a language satisfies a certain closure property, namely that languages constructed from L by applying certain monotone functions to statements on membership in L have perfect zero-knowledge proof systems. The new set of languages we can build includes L itself, but also for example languages consisting of n words of which at least t are in L. A similar closure property is shown to hold for the complement of L and for statistical zero-knowledge. The property we need for the monotone functions used to build the new languages is that there are efficient secret sharing schemes for their associated access structures. This includes (but is not necessarily limited to) all monotone functions with polynomial size monotone formulas.
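The required ingredient is a secret sharing scheme for the access structure of the monotone function. The Python sketch below, with an illustrative formula encoding, shows the classical formula-based construction (splitting the secret additively at AND gates and duplicating it at OR gates) for a small monotone formula.

```python
import random

M = 2**31 - 1    # share arithmetic modulo a public prime (illustrative)

def share(secret, formula, shares):
    """Formula-based monotone secret sharing.  A formula is ('leaf', party),
    ('and', [subformulas]) or ('or', [subformulas])."""
    kind = formula[0]
    if kind == 'leaf':
        shares.setdefault(formula[1], []).append(secret)
    elif kind == 'or':                      # every branch can recover alone
        for sub in formula[1]:
            share(secret, sub, shares)
    elif kind == 'and':                     # all branches needed: additive split
        subs = formula[1]
        parts = [random.randrange(M) for _ in subs[:-1]]
        parts.append((secret - sum(parts)) % M)
        for part, sub in zip(parts, subs):
            share(part, sub, shares)
    return shares

# (P1 AND P2) OR (P2 AND P3): either pair {P1,P2} or {P2,P3} reconstructs.
f = ('or', [('and', [('leaf', 'P1'), ('leaf', 'P2')]),
            ('and', [('leaf', 'P2'), ('leaf', 'P3')])])
s = 123456789
shares = share(s, f, {})
assert (shares['P1'][0] + shares['P2'][0]) % M == s
assert (shares['P2'][1] + shares['P3'][0]) % M == s
```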
Last updated:  1996-05-10
Deniable Encryption
Uncategorized
Ran Canetti, Cynthia Dwork, Moni Naor, Rafi Ostrovsky
Show abstract
Uncategorized
Consider a situation in which the transmission of encrypted messages is intercepted by an adversary who can later ask the sender to reveal the random choices (and also the secret key, if one exists) used in generating the ciphertext, thereby exposing the cleartext. An encryption scheme is deniable if the sender can generate `fake random choices' that will make the ciphertext `look like' an encryption of a different cleartext, thus keeping the real cleartext private. Analogous requirements can be formulated with respect to attacking the receiver and with respect to attacking both parties. In this paper we introduce deniable encryption and propose constructions of schemes with polynomial deniability. In addition to being interesting by itself, and having several applications, deniable encryption provides a simplified and elegant construction of adaptively secure multiparty computation.
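The paper's constructions are public-key and come with quantitative deniability guarantees; the toy Python sketch below only conveys the basic intuition in the shared-key one-time-pad setting, where a coerced sender can exhibit a fake key under which the intercepted ciphertext opens to any cleartext of the same length.

```python
import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

real_msg = b'attack at dawn!!'
key = os.urandom(len(real_msg))
ciphertext = xor(real_msg, key)        # what the coercer intercepted

# Under coercion, claim an innocuous cleartext and hand over a fake key
# that is consistent with the intercepted ciphertext.
fake_msg = b'buy more tulips!'
fake_key = xor(ciphertext, fake_msg)
assert xor(ciphertext, fake_key) == fake_msg
```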
Last updated:  1996-08-07
Incoercible Multiparty Computation
Uncategorized
Ran Canetti, Rosario Gennaro
Show abstract
Uncategorized
Current secure multiparty protocols have the following deficiency. The public transcript of the communication can be used as an involuntary commitment of the parties to their inputs and outputs. Thus parties can be later coerced by some authority to reveal their private data. Previous work that pointed out this interesting problem contained only a partial treatment. In this work we present the first general and rigorous treatment of the coercion problem in secure computation. First we present a general definition of protocols that provide resilience to coercion. Our definition constitutes a natural extension of the general paradigm used for defining secure multiparty protocols. Next we show that if trapdoor permutations exist then any function can be incoercibly computed (i.e., computed by a protocol that provides resilience to coercion) in the presence of computationally bounded adversaries and only public communication channels. This holds as long as less than half the parties are coerced (or corrupted). In particular, ours are the first incoercible protocols without physical assumptions. Also, our protocols constitute an alternative solution to the recently solved adaptive security problem.