The Rectangle Attack - Rectangling the Serpent
Serpent is one of the 5 AES finalists. The best attack published so far
analyzes up to 9 rounds. In this paper we present attacks on 7-round,
8-round, and 10-round variants of Serpent. We attack the 7-round variant of
Serpent with all key lengths, and the 8- and 10-round variants with 256-bit keys.
The 10-round attack on the 256-bit key variant is the best published
attack on the cipher. The attack enhances the amplified boomerang attack
and uses better differentials. We also present the best 3-round, 4-round,
5-round and 6-round differential characteristics of Serpent.
Some observations on the theory of cryptographic hash functions
In this paper, we study several issues related to the notion of
``secure'' hash functions. Several necessary conditions
are considered, as well as a popular sufficient condition
(the so-called random oracle model). We study the security of
various problems that are motivated by the notion of a secure
hash function.
These problems are analyzed in the random oracle model,
and we prove that
the obvious trivial algorithms are optimal.
As well, we look closely at
reductions between various problems. In particular,
we consider the important question ``does preimage resistance imply
collision resistance?''. Finally, we study the relationship
of the security of
hash functions built using the Merkle-Damgard construction
to the security of the underlying compression function.
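The ``obvious trivial algorithms'' referred to above are the generic search attacks. As a rough illustration (not taken from the paper; the names and the toy 24-bit size are ours), the following Python sketch runs both of them against a truncated SHA-256 standing in for an n-bit random oracle: preimage search costs about 2^n evaluations, birthday collision search about 2^(n/2).

    import hashlib, itertools

    NBITS = 24  # toy output size so the demo finishes quickly

    def h(x: bytes) -> int:
        # Truncated SHA-256, standing in for an NBITS-bit random oracle.
        return int.from_bytes(hashlib.sha256(x).digest(), "big") >> (256 - NBITS)

    def find_preimage(target: int) -> bytes:
        # Trivial preimage search: about 2^NBITS evaluations expected.
        for i in itertools.count():
            x = i.to_bytes(8, "big")
            if h(x) == target:
                return x

    def find_collision():
        # Trivial birthday search: about 2^(NBITS/2) evaluations expected.
        seen = {}
        for i in itertools.count():
            x = i.to_bytes(8, "big")
            y = h(x)
            if y in seen:
                return seen[y], x
            seen[y] = x

    x1, x2 = find_collision()
    assert x1 != x2 and h(x1) == h(x2)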
An Efficient System for Non-transferable Anonymous Credentials with Optional Anonymity Revocation
A credential system is a system in which users can obtain
credentials from organizations and demonstrate possession of these
credentials. Such a system is anonymous when transactions carried out by the
same user cannot be linked. An anonymous credential system is of significant
practical relevance because it is the best means of providing privacy for
users. In this paper we propose a practical anonymous credential system that
is based on the strong RSA assumption and the decisional Diffie-Hellman
assumption modulo a safe prime product and is considerably superior to
existing ones:
(1) We give the first practical solution that allows
a user to unlinkably demonstrate possession of a credential as many times as
necessary without involving the issuing organization.
(2) To prevent misuse of anonymity, our scheme is the first to offer optional
anonymity revocation for particular transactions.
(3) Our scheme offers separability: all organizations can choose their
cryptographic keys independently of each other.
Moreover, we suggest more effective means of preventing users from sharing their
credentials, by introducing {\em all-or-nothing} sharing: a user who allows a
friend to use one of her credentials once gives him the ability to use all of
her credentials, i.e., to take over her identity. This is implemented by a new
primitive, called {\em circular encryption}, which is of independent interest,
and can be realized from any semantically secure cryptosystem in the random
oracle model.
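For intuition about the last point, here is a minimal random-oracle sketch of a symmetric scheme under which a key cycle is plausibly harmless; it is not claimed to be the paper's construction, and all names in it are ours. Encrypting as $(r, H(K \| r) \oplus m)$ hides the message behind an oracle output that reveals nothing about $K$, so two keys can safely encrypt each other.

    import os, hashlib

    def H(key: bytes, r: bytes, n: int) -> bytes:
        # Random-oracle stand-in, stretched to n bytes by counting over SHA-256.
        out, c = b"", 0
        while len(out) < n:
            out += hashlib.sha256(key + r + c.to_bytes(4, "big")).digest()
            c += 1
        return out[:n]

    def enc(key: bytes, m: bytes):
        r = os.urandom(16)
        return r, bytes(a ^ b for a, b in zip(H(key, r, len(m)), m))

    def dec(key: bytes, ct):
        r, c = ct
        return bytes(a ^ b for a, b in zip(H(key, r, len(c)), c))

    kA, kB = os.urandom(16), os.urandom(16)
    # A key cycle: each key encrypted under the other.
    c1, c2 = enc(kA, kB), enc(kB, kA)
    assert dec(kA, c1) == kB and dec(kB, c2) == kA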
Analysis of a Subset Sum Randomizer
In [5] an efficient pseudo-random number generator (PRNG) with
provable security is described. Its security is based on the
hardness of the subset sum or knapsack problem. In this paper
we refine these ideas to design a PRNG with independent seed and
output generation. This independence allows for greater
parallelism, design flexibility, and possibly greater security.
On adaptive vs. non-adaptive security of multiparty protocols
Security analysis of multiparty cryptographic protocols distinguishes
between two types of adversarial settings: In the non-adaptive
setting, the set of corrupted parties is chosen in advance, before the
interaction begins. In the adaptive setting, the adversary chooses
who to corrupt during the course of the computation. We study the
relations between adaptive security (i.e., security in the adaptive
setting) and non-adaptive security, according to two definitions and
in several models of computation. While affirming some prevailing
beliefs, we also obtain some unexpected results. Some highlights of
our results are:
o According to the definition of Dodis-Micali-Rogaway (which is set in
the information-theoretic model), adaptive and non-adaptive security
are equivalent. This holds for both honest-but-curious and Byzantine
adversaries, and for any number of parties.
o According to the definition of Canetti, for honest-but-curious
adversaries, adaptive security is equivalent to non-adaptive
security when the number of parties is logarithmic, and is strictly
stronger than non-adaptive security when the number of parties is
super-logarithmic. For Byzantine adversaries, adaptive security is
strictly stronger than non-adaptive security, for any number of
parties.
Efficient Traitor Tracing Algorithms using List Decoding
We apply powerful, recently discovered techniques for the list decoding of error-correcting codes to the problem of efficiently tracing traitors. Traitor tracing schemes have been extensively studied for use as a piracy deterrent. In a widely studied model for protecting digital content, each user in the system is associated with a unique set of symbols. For example, the sets may be used to install a software CD or decrypt pay-TV content. The assignment of sets is done in such a way that if a bounded collection of sets is used to form a new set to enable piracy, at least one of the traitor sets can be identified by applying a traitor tracing algorithm to the newly formed set. Much work has focused on methods for constructing such traceability schemes, but the complexity of the traitor tracing algorithms has received little attention. A widely used traitor tracing algorithm, the TA algorithm, has a running time of $O(N)$ in general, where $N$ is the number of sets in the system (e.g., the number of copies of the CD), and therefore is inefficient for large populations. In this paper we use a coding theoretic approach to produce traceability schemes for which the TA algorithm is very fast. We show that when suitable error-correcting codes are used to construct traceability schemes, and fast list decoding algorithms are used to trace, the run time of the TA algorithm is polynomial in the codeword length. We also use the strength of the error-correcting code approach to construct traceability schemes with more efficient algorithms for finding all possible traitor coalitions. Finally, we provide evidence that amongst traceability schemes in general, TA traceability schemes are the most likely to be amenable to efficient tracing methods.
An observation regarding Jutla's modes of operation
Recently, Jutla suggested two new modes of operation for block ciphers. These modes build on traditional CBC and ECB
modes, respectively, but add to them masking of the outputs and inputs. Jutla proved that these masking operations considerably
strengthen CBC and ECB modes. In particular, together with a simple checksum, the modified modes ensure not only confidentiality, but
also authenticity. Similar modes were also suggested by Gligor and Donescu and by Rogaway.
In Jutla's proposal (as well as in some of the other proposals), the masks themselves are derived from an IV via the same block
cipher as used for the encryption (perhaps with a different key). In this work we note, however, that the function for deriving these masks
need not be cryptographic at all. In particular, we prove that a universal hash function (a-la-Carter-Wegman) is sufficient for this
purpose.
Timed-Release Cryptography
Let $n$ be a large composite number. Without factoring $n$, the
validation of $a^{2^t} (\bmod \, n)$ given $a$, $t$ with $\gcd(a, n) =
1$ and $t < n$ can be done in $t$ squarings modulo $n$. For $t \ll n$
(e.g., $n > 2^{1024}$ and $t < 2^{100}$), no lower complexity than $t$
squarings is known to fulfill this task (even considering massive
parallelisation). Rivest et al. suggested using such constructions as
good candidates for realising timed-release crypto problems.
We argue the necessity for zero-knowledge proof of the correctness of
such constructions and propose the first practically efficient
protocol for a realisation. Our protocol proves, in $\log_2 t$
standard crypto operations, the correctness of $(a^e)^{2^t}
(\bmod\,n)$ with respect to $a^e$ where $e$ is an RSA encryption
exponent. With such a proof, a {\em Timed-release RSA Encryption} of a
message $M$ can be given as $a^{2^t} M (\bmod \,n)$ with the
assertion that the correct decryption of the RSA ciphertext $M^e
(\bmod \, n)$ can be obtained by performing $t$ squarings modulo $n$
starting from $a$. {\em Timed-release RSA signatures} can be
constructed analogously.
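A minimal sketch of the underlying puzzle, with toy parameters of our own choosing: a solver without the factorization of $n$ must perform the $t$ squarings sequentially, while the puzzle creator, knowing $\phi(n)$, takes a shortcut.

    # Toy parameters only; real puzzles use moduli of 1024 bits or more.
    p, q = 999983, 1000003
    n, phi = p * q, (p - 1) * (q - 1)
    a, t = 5, 100_000

    # Solver without the factorization: t sequential squarings (inherently serial).
    x = a % n
    for _ in range(t):
        x = x * x % n

    # Creator who knows phi(n): one shortcut exponentiation (needs gcd(a, n) = 1).
    y = pow(a, pow(2, t, phi), n)
    assert x == y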
Digitally Watermarking RSA Moduli
The moduli used in RSA (see \cite{rsa}) can be generated by many different sources.
The generator of a modulus knows its factorization, and therefore has the
ability to forge signatures or break any system based on that modulus.
If a modulus and the RSA parameters associated with it were generated by a reputable source,
the system would have higher value than if the parameters were generated by an unknown entity.
An RSA modulus is digitally marked, or digitally trade marked, if the generator and other
identifying features of the modulus can be identified and possibly verified by the modulus itself.
The basic concept of digitally marking an RSA modulus is to fix the upper bits of the
modulus to an identifying tag.
Thus anyone who sees the public modulus can tell who generated the modulus and who the generator
believes the intended user/owner of the modulus is.
Two types of trade marking will be described here.
The first is simpler but does not give verifiable trade marking.
The second is more complex, but allows for verifiable trade marking of RSA moduli.
The second part of this paper describes how to generate an RSA modulus with fixed upper bits.
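The following is a hedged sketch, in the spirit of the second part of the paper (the exact procedure there may differ; tagged_modulus and the toy sizes are ours), of the standard way to fix upper bits: aim the product at the middle of the window $[T \cdot 2^k, (T+1) \cdot 2^k)$ and choose $q$ as the next prime after $\lfloor target/p \rfloor$, so that the deviation, at most about $p$ times a prime gap, stays far below the window width.

    import random

    def is_probable_prime(m: int, rounds: int = 20) -> bool:
        # Miller-Rabin primality test.
        if m < 2:
            return False
        for sp in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
            if m % sp == 0:
                return m == sp
        d, s = m - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        for _ in range(rounds):
            a = random.randrange(2, m - 1)
            x = pow(a, d, m)
            if x in (1, m - 1):
                continue
            for _ in range(s - 1):
                x = x * x % m
                if x == m - 1:
                    break
            else:
                return False
        return True

    def next_prime(m: int) -> int:
        m |= 1
        while not is_probable_prime(m):
            m += 2
        return m

    def tagged_modulus(tag: int, tag_bits: int, n_bits: int = 512):
        # Toy: find N = p*q whose top tag_bits equal `tag` (hypothetical helper).
        shift = n_bits - tag_bits
        target = (tag << shift) | (1 << (shift - 1))  # middle of the tag's window
        while True:
            p = next_prime(random.getrandbits(n_bits // 2) | (1 << (n_bits // 2 - 1)))
            q = next_prime(target // p)
            N = p * q
            if N >> shift == tag:
                return N, p, q

    tag = int.from_bytes(b"ACME", "big")  # a 32-bit "trademark"
    N, p, q = tagged_modulus(tag, 32)
    assert N >> (512 - 32) == tag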
Ciphers with Arbitrary Finite Domains
We introduce the problem of enciphering members of a finite set $M$
where $k=|M|$ is arbitrary (in particular, it need not be a power
of two). We want to achieve this goal starting from a block cipher
(which requires a message space of size $N=2^n$, for some $n$).
We look at a few solutions to this problem, focusing on the case
when $M=[0, k-1]$.
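One folklore solution of the kind studied in such work is ``cycle walking'': encipher on the smallest convenient power-of-two domain containing $M$ and re-encipher until the value lands back in $[0, k-1]$; since the underlying cipher is a permutation, so is the walk. A minimal sketch with a toy Feistel cipher of our own (illustration only, not secure):

    import hashlib

    W = 10  # half-block width; the Feistel permutes the 2^(2W)-point domain

    def feistel(x: int, key: bytes, rounds: int = 8) -> int:
        # Toy balanced Feistel permutation on 2W-bit values.
        mask = (1 << W) - 1
        l, r = x >> W, x & mask
        for i in range(rounds):
            f = hashlib.sha256(key + bytes([i]) + r.to_bytes(4, "big")).digest()
            l, r = r, l ^ (int.from_bytes(f[:4], "big") & mask)
        return (l << W) | r

    def encipher(x: int, k: int, key: bytes) -> int:
        # Cycle walking: re-encipher until we land back inside [0, k).
        assert 0 <= x < k <= 1 << (2 * W)
        y = feistel(x, key)
        while y >= k:
            y = feistel(y, key)
        return y

    k, key = 1_000_000, b"demo-key"  # 2^20 = 1048576 is the enclosing power of two
    ys = {encipher(x, k, key) for x in range(1000)}
    assert len(ys) == 1000 and all(0 <= y < k for y in ys)

The expected number of walking steps is (domain size)/k, so the power-of-two domain is chosen as tight as possible.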
New Zero-knowledge Undeniable Signatures - Forgery of Signature Equivalent to Factorisation
We propose a new zero-knowledge undeniable signature scheme which is
based on the intractability of computing high-order even powers modulo
a composite. The new scheme has a number of desirable properties: (i)
forgery of a signature (including existential forgery) is proven to be
equivalent to factorisation, (ii) perfect zero-knowledge, (iii)
efficient protocols for signature verification and non-signature
denial: both measured by $O(\log k)$ (multiplications) where $1/k$
bounds the probability of error. For a denial protocol, this
performance is unprecedented.
How to achieve a McEliece-based Digital Signature Scheme
McEliece is one of the oldest known public key cryptosystems.
Though it was less widely studied than RSA, it is remarkable that
all known attacks are still exponential. It is widely believed that
code-based cryptosystems like McEliece do not allow practical
digital signatures. In the present paper we disprove this belief
and show a way to build a practical signature scheme based on coding
theory. Its security can be reduced in the random oracle model to
the well-known {\em syndrome decoding problem} and the
distinguishability of permuted binary Goppa codes from a random
code. For example, we propose a scheme with signatures of $81$ bits
and a binary security workfactor of $2^{83}$.
Robust key-evolving public key encryption schemes
We propose a key-evolving paradigm to deal with the key
exposure problem of public key encryption schemes.
The key evolving paradigm is like the one used for
forward-secure digital signature schemes.
Let time be divided into time periods such that
at time period $j$, the decryptor holds the secret key
$SK_j$, while the public key $PK$ is fixed during its
lifetime.
At time period $j$, a sender encrypts a message $m$ as
$\langle j, c\rangle$, which can be decrypted only
with the private key $SK_j$.
When time transitions from period $j$ to $j+1$, the
decryptor updates its private key from $SK_j$ to $SK_{j+1}$
and deletes $SK_j$ immediately.
The key-evolving paradigm assures that compromise of the
private key $SK_j$ does not jeopardize messages encrypted
at other time periods.
We propose two key-evolving public key encryption schemes
with $z$-resilience such that compromise of $z$ private keys
does not affect confidentiality of messages encrypted in
other time periods.
Assuming that the DDH problem is hard,
we show that one scheme is semantically secure against passive
adversaries and that the other is semantically secure against
adaptive chosen-ciphertext attack in the random oracle
model.
Fully Distributed Threshold RSA under Standard Assumptions
The aim of the present article is to propose a fully distributed environment for the RSA scheme.
What we have in mind are highly sensitive applications, and even if we are ready to pay a price in
terms of efficiency, we do not want any compromise in the security assumptions that we make.
Recently Shoup proposed a practical RSA threshold signature scheme that allows sharing the
ability to sign between a set of players. This scheme can be used for decryption as well.
However, Shoup's protocol assumes a trusted dealer to generate and distribute the keys.
This comes from the fact that the scheme needs a special assumption on the RSA modulus and
this kind of RSA moduli cannot be easily generated in an efficient way with many players.
Of course, it is still possible to appeal to theoretical results on multiparty computation, but we
cannot hope to design efficient protocols that way. The only practical result for generating RSA moduli
in a distributed manner is the Boneh-Franklin protocol, but this protocol cannot be easily
modified to generate the kind of RSA moduli that Shoup's protocol requires.
The present work takes a different path by proposing a method to enhance the key generation
with some additional properties and revisits the proof of Shoup to work with the resulting
RSA moduli. Both of these enhancements decrease the performance of the basic protocols.
However, we think that in the applications that we target, these enhancements provide
practical solutions. Indeed, the key generation protocol is usually run only once, and the
players have time to perform their task, so that the communication and time complexity
are not overly important.
Are 'Strong' Primes Needed for RSA?
We review the arguments in favor of using so-called ``strong primes''
in the RSA public-key cryptosystem. There are two types of such
arguments: those that say that strong primes are needed to protect
against factoring attacks, and those that say that strong primes are
needed to protect against ``cycling'' attacks (based on repeated
encryption).
We argue that, contrary to common belief, it is unnecessary to use
strong primes in the RSA cryptosystem. That is, by using strong
primes one gains a negligible increase in security over what is
obtained merely by using ``random'' primes of the same size. There
are two parts to this argument. First, the use of strong primes
provides no additional protection against factoring attacks, because
Lenstra's method of factoring based on elliptic curves (ECM) circumvents any
protection that might have been offered by using strong primes.
The methods that 'strong' primes are intended to guard against, as well as ECM, are
probabilistic in nature, but ECM succeeds with higher probability. For RSA key sizes being proposed now, the probability
of success of these methods is very low.
Additionally, the newer Number Field Sieve algorithm can factor RSA keys with certainty in
less time than these methods.
Second, a simple group-theoretic argument shows that
cycling attacks are extremely unlikely to be effective, as long as the
primes used are large. Indeed, even probabilistic factoring attacks will succeed much more
quickly and with higher probability than cycling attacks.
Secure and Efficient Asynchronous Broadcast Protocols
Reliable broadcast protocols are a fundamental building block for
implementing replication in fault-tolerant distributed systems. This paper
addresses secure service replication in an asynchronous environment with a
static set of servers, where a malicious adversary may corrupt up to a
threshold of servers and controls the network. We develop a formal model
using concepts from modern cryptography, present modular definitions for
several broadcast problems, including reliable, atomic, and secure causal
broadcast, and present protocols implementing them. Reliable broadcast is a
basic primitive, also known as the Byzantine generals problem, providing
agreement on a delivered message. Atomic broadcast imposes additionally a
total order on all delivered messages. We present a randomized atomic
broadcast protocol based on a new, efficient multi-valued asynchronous
Byzantine agreement primitive with an external validity condition.
Apparently, no such efficient asynchronous atomic broadcast protocol
maintaining liveness and safety in the Byzantine model has appeared
previously in the literature. Secure causal broadcast extends atomic
broadcast by encryption to guarantee a causal order among the delivered
messages. Threshold-cryptographic protocols for signatures, encryption, and
coin-tossing also play an important role.
A Note on Cryptanalysis of the Preliminary Version of the NTRU Signature Scheme
In this paper a preliminary version of the NTRU signature scheme
is cryptanalyzed. The attack exploits a correlation between some
bits of a signature and coefficients of a secret random
polynomial. The attack does not apply to the next version of the
signature scheme.
MinRank problem and Zero-knowledge authentication
A zero-knowledge protocol provides provably secure authentication based on a computational problem.
Several such schemes have been proposed since 1984, and the most practical ones rely on
problems such as factoring that are, unfortunately, only subexponentially hard.
Still, there are several other (practical) schemes based on NP-complete problems.
Among them, the problems of coding theory
remain, in spite of some 20 years of significant research effort, exponential to solve.
The problem MinRank recently arose in the cryptanalysis of the HFE (Crypto'99) and TTM (Asiacrypt'2000) public key cryptosystems.
It happens to be a strict generalization of the hard problems of (decoding) error-correcting codes.
We propose a new Zero-knowledge scheme based on MinRank.
We prove it to be Zero-knowledge by black-box simulation.
We compare it to other practical schemes based on NP-complete problems.
The MinRank scheme also allows an easy setup for a public key shared by a few users, and thus allows anonymous group signatures.
Separating Decision Diffie-Hellman from Diffie-Hellman in cryptographic groups
In many cases, the security of a cryptographic scheme based on Diffie--Hellman does in fact rely on the hardness of the
Diffie--Hellman Decision problem. In this paper, we show that the
hardness of Decision Diffie--Hellman is a much stronger hypothesis than the hardness of the regular Diffie--Hellman problem. Indeed, we
describe a reasonable-looking cryptographic group where Decision
Diffie--Hellman is easy while Diffie--Hellman is equivalent to a --
presumably hard -- Discrete Logarithm Problem. This shows that care
should be taken when dealing with Decision Diffie--Hellman, since its
security cannot be taken for granted.
The One-More-RSA-Inversion Problems and the Security of Chaum's Blind Signature Scheme
We introduce a new class of computational problems which we
call the ``one-more-RSA-inversion'' problems. Our main result is that
two problems in this class, which we call the chosen-target and known-target
inversion problems respectively, have polynomially-equivalent computational
complexity. We show how this leads to a proof of security for Chaum's RSA-based
blind signature scheme in the random oracle model based on the assumed hardness
of either of these problems. We define and prove analogous results for
``one-more-discrete-logarithm'' problems. Since the appearance of the
preliminary version of this paper, the new problems we have introduced
have found other uses as well.
Efficient Algorithms for Computing Differential Properties of Addition
In this paper we systematically study the differential properties of
addition modulo $2^n$. We derive $\Theta(\log n)$-time algorithms
for most of the properties, including differential probability of
addition. We also present log-time algorithms for finding good
differentials. Despite the apparent simplicity of modular addition,
the best known algorithms require naive exhaustive computation. Our
results represent a significant improvement over them. In the most
extreme case, we present a complexity reduction from
$\Omega(2^{4n})$ to $\Theta(\log n)$.
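To make the object of study concrete, here is the closed form for the differential probability of addition as we read the paper's result (stated with a helper eq(x, y, z) marking the bit positions where all three masks agree), together with an exhaustive cross-check on 6-bit words:

    import random

    def xdp_add(a: int, b: int, g: int, n: int) -> float:
        # Differential probability of addition mod 2^n for (a, b -> g).
        mask = (1 << n) - 1
        eq = lambda x, y, z: ~(x ^ y) & ~(x ^ z) & mask  # bits where x = y = z
        if eq(a << 1, b << 1, g << 1) & (a ^ b ^ g ^ (b << 1)) & mask:
            return 0.0  # the differential is impossible
        w = bin(~eq(a, b, g) & (mask >> 1)).count("1")
        return 2.0 ** -w

    def xdp_add_naive(a: int, b: int, g: int, n: int) -> float:
        # Exhaustive O(2^(2n)) computation, for cross-checking only.
        mask = (1 << n) - 1
        hits = sum(1 for x in range(1 << n) for y in range(1 << n)
                   if ((x + y) ^ ((x ^ a) + (y ^ b))) & mask == g)
        return hits / 4 ** n

    n = 6
    for _ in range(200):
        a, b, g = (random.randrange(1 << n) for _ in range(3))
        assert abs(xdp_add(a, b, g, n) - xdp_add_naive(a, b, g, n)) < 1e-12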
New constructions of resilient Boolean functions with maximal nonlinearity
In this paper we develop a technique that allows us to obtain new effective constructions of highly resilient Boolean functions with high nonlinearity. In particular, we prove that the upper bound
$2^{n-1}-2^{m+1}$ on nonlinearity of m-resilient n-variable Boolean
functions is achieved for $0.6n-1\le m\le n-2$.
A Content Certified E-mail Protocol with a Public Mailbox
This short note presents some ideas for a content certified e-mail protocol with a public mailbox, applying the Diffie-Hellman key-exchange style to reflect the agreement between the sender and receiver.
The notion of certified e-mail is similar to the certification procedures for regular mail in real-life post offices. Yet the post office only certifies whether the mail has been sent or received by the appropriate parties, not its content. This insufficient verification in paper-based authentication can easily be remedied by digital signature schemes.
A certified e-mail protocol must have the following security properties:
1. The sender must have some way of proving that the receiver received the mail, should the receiver later try to deny it.
2. The receiver must have some way of proving that the sender did not send the mail, should the sender later try to claim that she did.
Universally Composable Security: A New Paradigm for Cryptographic Protocols
We present a general framework for representing cryptographic protocols and analyzing their security. The framework
allows specifying the security requirements of practically any
cryptographic task in a unified and systematic way.
Furthermore, in this framework the security of protocols
is maintained under a general composition operation, called
universal composition.
The proposed framework, with its security-preserving composition property, allows for modular design and analysis of complex cryptographic protocols from relatively simple building blocks.
Moreover, within this framework, protocols are guaranteed to maintain their security within any context, even in the presence of an unbounded number of arbitrary protocol instances that run concurrently in an adversarially controlled manner.
This is a useful guarantee that allows arguing about the security of
cryptographic protocols in complex and unpredictable environments such
as modern communication networks.
A Model for Asynchronous Reactive Systems and its Application to Secure Message Transmission
We present the first rigorous model for secure reactive systems in
asynchronous networks with a sound cryptographic semantics, supporting
abstract specifications and the composition of secure systems. This
enables modular proofs of security, which is essential in bridging
the gap between the rigorous proof techniques of cryptography and
tool-supported formal proof techniques.
The model follows the general simulatability approach of modern
cryptography. A variety of network structures and trust models can
be described, such as static and adaptive adversaries.
As an example of our specification methodology we provide the first
abstract and complete specification for Secure Message Transmission,
improving on recent results by Lynch, and verify one
concrete implementation. Our proof is based on a general theorem on
the security of encryption in a reactive multi-user setting,
generalizing a recent result by Bellare et al.
How to Encrypt Long Messages without Large Size Symmetric/Asymmetric Encryption Schemes
Suppose that we wish to encrypt long messages
with small overhead by a public key encryption scheme
which is secure against adaptive chosen ciphertext attack (IND-CCA2).
Then the previous schemes require either
a large size one-way trapdoor permutation (OAEP)
or both a large size symmetric encryption scheme
and a small size asymmetric encryption scheme (hybrid encryption).
In this paper,
we show a scheme which requires only a small size
asymmetric encryption scheme satisfying IND-CCA2
for our purpose.
Therefore, the proposed scheme is very efficient.
A hash function and
a pseudorandom bit generator are used as random oracles.
On the Security of Modular Exponentiation with Application to the Construction of Pseudorandom Generators
Assuming the intractability of factoring, we show that
the output of the exponentiation modulo a composite function
$f_{N,g}(x)=g^x\bmod N$ (where $N=P\cdot Q$) is pseudorandom,
even when its input is restricted to be half the size.
This result is equivalent to the simultaneous hardness of the upper
half of the bits of $f_{N,g}$,
proven by Håstad, Schrift and Shamir.
Yet, we supply a different proof that is significantly simpler
than the original one. In addition, we suggest a pseudorandom generator
which is more efficient than all previously known factoring
based pseudorandom generators.
Candidate One-Way Functions Based on Expander Graphs
We suggest a candidate one-way function using combinatorial
constructs such as expander graphs. These graphs are used to
determine a sequence of small overlapping subsets of input bits,
to which a hard-wired random predicate is applied.
Thus, the function is extremely easy to evaluate:
all that is needed is to take multiple projections of the
input bits, and to use these as entries to a look-up table.
It is feasible for the adversary to scan the look-up table,
but we believe it would be infeasible to find an input that fits a given
sequence of values obtained for these overlapping projections.
The conjectured difficulty of inverting the suggested function
does not seem to follow from any well-known assumption.
Instead, we propose the study of the complexity of inverting
this function as an interesting open problem,
with the hope that further research will provide evidence to our
belief that the inversion task is intractable.
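A minimal sketch of a function of this shape, with random subsets standing in for the expander-derived ones (all parameter choices below are ours):

    import random

    random.seed(1)
    n, d = 64, 5  # input length and locality (size of each subset)
    # In the paper the subsets come from an expander graph; random ones stand in here.
    subsets = [random.sample(range(n), d) for _ in range(n)]
    tables = [[random.getrandbits(1) for _ in range(1 << d)] for _ in range(n)]

    def f(x):
        # Each output bit is a hard-wired predicate (a lookup table)
        # applied to a small projection of the input bits.
        out = []
        for S, T in zip(subsets, tables):
            idx = 0
            for b in S:
                idx = (idx << 1) | x[b]
            out.append(T[idx])
        return out

    x = [random.getrandbits(1) for _ in range(n)]
    y = f(x)  # evaluation is trivial; finding any preimage of y is the conjectured hard task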
Non-Deforming Digital Watermarks
TaKE cryptography offers subliminal marking of a digital stream so that any tampering induces an unacceptable distortion of the primary information. Encrypted audio and video streams are decrypted by one key to the original content (e.g. music), and through another key to the digital watermark (e.g. name of the legitimate user). Unlike the prevailing methods, which are based on distorting the protected contents or locking it through a digital signature, TaKE -- Tailored Key Encryption -- preserves the integrity of the original stream, and its digital watermarks are inconspicuous. Daniel (tm) is a particular TaKE cryptography which also offers an instant and flexible trade-off between security level and speed and convenience. The described method is fast and suitable for both high-capacity streams and secrecy-sensitive streams.
RSA-OAEP is Secure under the RSA Assumption
Recently Victor Shoup noted that there is a gap in
the widely-believed security result of OAEP against adaptive
chosen-ciphertext attacks. Moreover, he showed that,
presumably,
OAEP cannot be proven secure from the {\it one-wayness}
of the underlying trapdoor permutation.
This paper establishes another result on the security
of OAEP. It proves that OAEP offers semantic security
against adaptive chosen-ciphertext attacks,
in the random oracle model, under the {\it partial-domain}
one-wayness of the underlying permutation.
Admittedly, this is a formally stronger assumption.
Nevertheless, since partial-domain one-wayness of the RSA function
is equivalent to its (full-domain) one-wayness, it follows that
the security of RSA--OAEP can actually
be proven under the sole RSA assumption, although
the reduction is not tight.
OAEP Reconsidered
The OAEP encryption scheme was introduced by Bellare and Rogaway
at Eurocrypt '94.
It converts any trapdoor permutation scheme into a public-key
encryption scheme.
OAEP is widely believed to provide
resistance against adaptive chosen ciphertext attack.
The main justification for this belief is a supposed proof of security
in the random oracle model, assuming the underlying
trapdoor permutation scheme is one way.
This paper shows conclusively that this justification is invalid.
First, it observes that there appears to be a non-trivial gap in the
OAEP security proof.
Second,
it proves that this gap cannot be filled,
in the sense that
there can be no standard "black box" security reduction
for OAEP.
This is done by proving that there exists an oracle
relative to which the general OAEP scheme is insecure.
The paper also presents a new scheme OAEP+, along with
a complete proof of security in the random oracle model.
OAEP+ is essentially just as efficient as OAEP,
and even has a tighter security reduction.
It should be stressed that these results do not
imply that a particular instantiation of OAEP, such as RSA-OAEP, is insecure.
They simply undermine the original justification for its security.
In fact, it turns out -- essentially by accident,
rather than by design -- that RSA-OAEP is secure in the random oracle
model; however, this fact relies on special algebraic properties
of the RSA function, and not on the security
of the general OAEP scheme.
Essential Shannon Security with Keys Smaller Than the Encrypted Message
To a cryptographer the claim that “Shannon Security was achieved with keys smaller than the encrypted message” appears unworthy of attention, much as the claim of a “perpetuum mobile” is to a physicist. Albeit, from an engineering point of view, solar cells which power satellites exhibit an “essential perpetuum mobile” and are of great interest. Similarly for Shannon Security, as explored in this article. We discuss encryption schemes designed to confound a diligent cryptanalyst who works his way from a captured ciphertext to a disappointing endpoint where more than one otherwise-plausible plaintext is found to be associated with keys that encrypt it to that ciphertext. Unlike some previous researchers, who explored this equivocation as a special case of existing schemes, this approach is aimed at devising a symmetric encryption for that purpose per se.
Graph-Based Authentication of Digital Streams
We consider the authentication of digital streams over a lossy
network. The overall approach taken is graph-based, as this yields
simple methods for controlling overhead, delay, and the ability to
authenticate, while serving to unify many previously known hash- and
MAC-based techniques. The loss pattern of the network is defined
probabilistically, allowing both bursty and random packet loss to be
modeled. Our authentication schemes are customizable by the
sender of the stream; that is, within reasonable constraints on
the input parameters, we provide schemes that achieve the desired
authentication probability while meeting the input upper bound on the
overhead per packet. In addition, we demonstrate that some of the
shortcomings of previously known schemes correspond to easily
identifiable properties of a graph, and hence, may be more easily
avoided by taking a graph-based approach to designing authentication
schemes.
Session-Key Generation using Human Passwords Only
We present session-key generation protocols in a model where the
legitimate parties share {\em only} a human-memorizable
password, and there is no additional setup assumption in the
network. Our protocol is proven secure under the assumption that
trapdoor permutations exist. The security guarantee holds with
respect to probabilistic polynomial-time adversaries that control
the communication channel (between the parties), and may omit,
insert and modify messages at their choice. Loosely speaking, the
effect of such an adversary that attacks an execution of our
protocol is comparable to an attack in which an adversary is only
allowed to make a constant number of queries of the form ``is $w$
the password of Party $A$''. We stress that the result holds also
in case the passwords are selected at random from a small
dictionary so that it is feasible (for the adversary) to scan the
entire dictionary. We note that prior to our result, it was not
known whether or not such protocols were attainable without the
use of random oracles or additional setup assumptions.
A Complete Problem for Statistical Zero Knowledge
We present the first complete problem for SZK, the class of
(promise) problems
possessing statistical zero-knowledge proofs (against an
honest verifier).
The problem, called STATISTICAL DIFFERENCE,
is to decide whether two efficiently samplable distributions
are either statistically close or far apart. This
gives a new characterization of
SZK that makes no reference to interaction or zero knowledge.
We propose the use of complete problems to unify and
extend the study of statistical zero knowledge. To this end,
we examine several
consequences of our Completeness Theorem and its proof, such as:
(1) A way to make every (honest-verifier) statistical
zero-knowledge proof very communication efficient,
with the prover sending only one bit
to the verifier (to achieve soundness error 1/2).
(2) Simpler proofs of many of the previously known
results about statistical zero knowledge, such as
the Fortnow and Aiello--Håstad upper bounds on the complexity of SZK
and
Okamoto's result that SZK is closed under complement.
(3) Strong closure properties of SZK which amount to
constructing statistical zero-knowledge proofs for complex assertions
built out of simpler assertions already shown to be in SZK.
(4) New results about the various measures of "knowledge complexity,"
including a collapse in the hierarchy corresponding
to knowledge complexity in the "hint" sense.
(5) Algorithms for manipulating the statistical difference
between efficiently samplable distributions, including transformations
which "polarize" and "reverse" the statistical relationship
between a pair of distributions.
Multiparty Computation from Threshold Homomorphic Encryption
We introduce a new approach to multiparty computation (MPC)
basing it on homomorphic
threshold crypto-systems. We show that given keys for any
sufficiently efficient
system of this type, general MPC protocols for $n$ players can be
devised which are
secure against an active adversary that corrupts any minority of the
players.
The total number of bits sent is $O(nk|C|)$, where $k$ is the
security parameter and $|C|$ is
the size of a (Boolean) circuit computing the function to be
securely evaluated.
An earlier proposal by Franklin and Haber with the same complexity
was only secure
for passive adversaries, while all earlier protocols with active
security had complexity at
least quadratic in $n$. We give two examples of threshold
cryptosystems that can support our
construction and lead to the claimed complexities.
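The key ingredient is additive homomorphism: multiplying ciphertexts adds plaintexts, and raising a ciphertext to a public constant multiplies the plaintext by it. A toy single-key Paillier demo of exactly these two operations (Paillier is one standard example of a suitable cryptosystem; the threshold sharing of the decryption key that the construction requires is not shown):

    import math, random

    # Toy Paillier keypair (illustration only; real use needs large primes).
    p, q = 999983, 1000003
    n = p * q
    n2 = n * n
    g = n + 1
    lam = math.lcm(p - 1, q - 1)
    L = lambda u: (u - 1) // n
    mu = pow(L(pow(g, lam, n2)), -1, n)

    def enc(m: int) -> int:
        r = random.randrange(1, n)
        return pow(g, m, n2) * pow(r, n, n2) % n2

    def dec(c: int) -> int:
        return L(pow(c, lam, n2)) * mu % n

    a, b = 1234, 5678
    # Multiplying ciphertexts adds the plaintexts...
    assert dec(enc(a) * enc(b) % n2) == a + b
    # ...and exponentiation by a public constant multiplies the plaintext by it.
    assert dec(pow(enc(a), 7, n2)) == 7 * a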
Correlation Immune Boolean Functions with Very High Nonlinearity
Here we provide a construction method for unbalanced, first order
correlation immune Boolean functions on an even number of variables
$n \geq 6$. These functions achieve the currently best known
nonlinearity $2^{n-1} - 2^{\frac{n}{2}} + 2^{\frac{n}{2} - 2}$.
Then we provide a simple modification of these functions to get
unbalanced correlation immune Boolean functions on an even number of
variables $n$, with nonlinearity
$2^{n-1} - 2^{\frac{n}{2}} + 2^{\frac{n}{2} - 2} - 2$ and maximum
possible algebraic degree $n-1$. Moreover, we present a detailed
study on the Walsh spectra of these functions.
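For readers new to this line of work, the spectral quantities are easy to compute directly. A small sketch (all of it standard; the function below is our example, not one of the paper's constructions): the Walsh transform is $W(u) = \sum_x (-1)^{f(x) \oplus \langle u, x\rangle}$, nonlinearity is $2^{n-1} - \max_u |W(u)|/2$, and correlation immunity of order $m$ means $W(u) = 0$ whenever $1 \leq wt(u) \leq m$.

    def walsh(f, n):
        # Fast Walsh-Hadamard transform of (-1)^f, O(n 2^n).
        W = [(-1) ** f[x] for x in range(1 << n)]
        h = 1
        while h < (1 << n):
            for i in range(0, 1 << n, 2 * h):
                for j in range(i, i + h):
                    a, b = W[j], W[j + h]
                    W[j], W[j + h] = a + b, a - b
            h *= 2
        return W

    def nonlinearity(f, n):
        return (1 << (n - 1)) - max(abs(w) for w in walsh(f, n)) // 2

    def ci_order(f, n):
        # Largest m with W(u) = 0 for every u of Hamming weight between 1 and m.
        W = walsh(f, n)
        m = 0
        while m < n and all(W[u] == 0 for u in range(1, 1 << n)
                            if bin(u).count("1") == m + 1):
            m += 1
        return m

    # Example: the bent function f(x1..x4) = x1 x2 XOR x3 x4.
    n = 4
    f = [((x >> 3) & (x >> 2) & 1) ^ ((x >> 1) & x & 1) for x in range(1 << n)]
    print(nonlinearity(f, n), ci_order(f, n))  # 6 0: maximal nonlinearity, not correlation immune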
A Construction of Resilient Functions with High Nonlinearity
The relationship between nonlinearity and
resiliency for a function $F:\mathbb{F}_2^n \mapsto
\mathbb{F}_2^m$ is considered. We give a construction of resilient
functions with high nonlinearity. The construction leads to the
problem of finding a set of linear codes with a fixed minimum
distance, having the property that the intersection
between any two codes is the all zero codeword only. This problem is
considered, and existence results are provided. The constructed
functions obtain a nonlinearity superior to previous construction
methods.
CRYPTANALYSIS OF THE A5/2 ALGORITHM
An attack on the A5/2 stream cipher algorithm is described that
determines the linear relations among the output sequence bits.
The vast majority of the unknown output bits can be reconstructed.
The time complexity of the attack is proportional to $2^{17}$.
Reducing the Gate Count of Bitslice DES
This paper describes various techniques to reduce the number of logic gates needed to implement the DES S-boxes in bitslice software. Using standard logic gates, an average of 56 gates per S-box was achieved, while an average of 51 was produced when non-standard gates were utilized. This is an improvement over the previous best result, which used an average of 61 non-standard gates.
Spectral Analysis of High Order Correlation Immune Functions
We use the recent results on the spectral structure of
correlation immune and resilient Boolean functions for the
investigation of high order correlation immune functions.
First, we give simple proofs of some theorems for which only
long proofs were known. Next, we introduce the matrix of
nonzero Walsh coefficients and establish important properties
of this matrix. We use these properties to prove the nonexistence
of some high order correlation immune functions. Finally, we
establish the order of magnitude for the number of (n-4)th
order correlation immune functions of n variables.
Spectral Domain Analysis of Correlation Immune and Resilient Boolean Functions
In this paper we prove a general result on the Walsh Transform
of an arbitrary Boolean function. As a consequence, we obtain several
divisibility results on the Walsh Transform of correlation immune and
resilient Boolean functions. This allows us to improve upper bounds
on the nonlinearity of correlation immune and resilient Boolean
functions. Also we provide new necessary conditions on the algebraic
normal form of correlation immune/resilient functions attaining the
maximum possible nonlinearity.
New Constructions of Resilient and Correlation Immune Boolean Functions achieving Upper Bounds on Nonlinearity
Recently weight divisibility results on resilient and correlation
immune Boolean functions have received a lot of attention. These
results have direct consequences towards the upper bound on nonlinearity
of resilient and correlation immune Boolean functions of certain order.
Now the clear benchmark in the design of resilient Boolean functions
(which optimizes Siegenthaler's inequality) is to provide results
which attain the upper bound on nonlinearity. Here we construct a
7-variable, 2-resilient Boolean function with nonlinearity 56. This
solves the maximum nonlinearity issue for 7-variable functions with
any order of resiliency. Using this 7-variable function, we also
construct a 10-variable, 4-resilient Boolean function with nonlinearity
480. Constructions of these two functions were posed as important
open questions in Crypto 2000. Also we provide methods to generate an
infinite sequence of Boolean functions on $n = 7 + 3i$ variables
$(i \geq 0)$ with order of resiliency $m = 2 + 2i$, algebraic degree
$4 + i$ and nonlinearity $2^{n-1} - 2^{m+1}$, which were not known
earlier. We conclude with a few interesting construction results
on unbalanced correlation immune functions of 5 and 6 variables.
Highly Nonlinear Balanced Boolean Functions with very good Autocorrelation Property
Constructing highly nonlinear balanced Boolean functions with very good
autocorrelation property is an interesting open question. In this direction
we use the measure $\Delta_f$ for a function $f$ proposed by Zhang and
Zheng (1995). We provide balanced functions $f$ with currently best known
nonlinearity and $\Delta_f$ values together. Our results for 15-variable
functions disprove the conjecture proposed by Zhang and Zheng (1995);
our constructions are based on modifications of
Patterson-Wiedemann (1983) functions. Also we propose a simple
bent-based construction technique to get functions with very good
$\Delta_f$ values for an odd number of variables. This construction has
its roots in Kerdock codes. Moreover, our construction on an even number
of variables is a recursive one, and we conjecture (similarly to Dobbertin's
conjecture (1994) with respect to nonlinearity) that it provides the
minimum possible value of $\Delta_f$ for a function $f$ on an even number
of variables.
The Saturation Attack - a Bait for Twofish
We introduce the notion of a saturation attack and present attacks on
reduced-round versions of the Twofish block cipher. Our attack for all
generic key sizes of Twofish (i.e., for 128-bit, 192-bit and 256-bit
keys) improves on exhaustive key search for seven rounds of Twofish
with full whitening, and for eight rounds of Twofish without whitening
at the end. The core of the attack is a key-independent
distinguisher for six rounds of Twofish. The distinguisher is used to
attack up to 7 rounds of Twofish with full whitening and 8 rounds
of Twofish with prewhitening only - half of the cipher. The attacks
take up to 2^127 chosen plaintexts (half of the codebook!) and are 2-4
times faster than exhaustive search.
Efficient Zero-Knowledge Proofs of Knowledge Without Intractability Assumptions
We initiate the investigation of the class of relations
that admit extremely efficient perfect zero knowledge
proofs of knowledge: constant number of rounds, communication
linear in the length of the statement and the witness, and
negligible knowledge error. In its most general incarnation,
our result says that for relations that have a particular
three-move honest-verifier zero-knowledge (HVZK) proof of knowledge,
and which admit a particular three-move HVZK proof of knowledge for
an associated commitment relation, perfect zero knowledge
(against a general verifier) can be achieved essentially for free,
even when proving statements on several instances combined
under monotone function composition. In addition,
perfect zero-knowledge is achieved in an optimal four moves.
Instantiations of our main protocol lead to efficient perfect
ZK proofs of knowledge of discrete logarithms and RSA-roots,
or more generally, $q$-one-way group homomorphisms.
None of our results rely on intractability assumptions.
Provably Secure Password-Authenticated Key Exchange Using Diffie-Hellman
When designing password-authenticated key exchange protocols (as
opposed to key exchange protocols authenticated using
cryptographically secure keys), one must not allow any information
to be leaked that would allow verification of the password (a weak
shared key), since an attacker who obtains this information may be
able to run an off-line dictionary attack to determine the
correct password. Of course, it may be extremely difficult to hide
all password information, especially if the attacker may pose as one
of the parties in the key exchange. Nevertheless, we present a new
protocol called PAK which is the first Diffie-Hellman-based
password-authenticated key exchange protocol to provide a formal
proof of security (in the random oracle model) against both passive
and active adversaries. In
addition to the PAK protocol that provides mutual explicit
authentication, we also show a more efficient protocol called PPK that
is provably secure in the implicit-authentication model. We then
extend PAK to a protocol called PAK-X, in which one side (the
client) stores a plaintext version of the password, while the other
side (the server) only stores a verifier for the password. We
formally prove security of PAK-X, even when the server is
compromised. Our formal model for password-authenticated key
exchange is new, and may be of independent interest.
Constructions and Bounds for Unconditionally Secure Commitment Schemes
Commitment schemes have been extensively studied since they
were introduced by Blum in 1982. Rivest recently
showed how to construct unconditionally secure commitment schemes,
assuming the existence of a trusted initializer. In this paper, we present a
formal mathematical model for such schemes, and analyze their
binding and concealing properties. In particular, we
show that such schemes cannot be perfectly concealing: there is necessarily
a small probability that Alice can cheat Bob by committing to one value
but later revealing a different value. We prove several
bounds on Alice's cheating probability, and present constructions
of schemes that achieve optimal cheating probabilities. We also
show a close link between commitment schemes and the classical
``affine resolvable designs''.
Constructing Pseudo-Random Permutations with a Prescribed Structure
We show how to construct pseudo-random permutations that satisfy a
certain cycle restriction, for example that the permutation be
cyclic (consisting of one cycle containing all the elements) or an
involution (a self-inverse permutation) with no fixed points. The
construction can be based on any (unrestricted) pseudo-random
permutation. The resulting permutations
are defined succinctly and their
evaluation at a given point is efficient. Furthermore, they enjoy
a {\em fast forward} property, i.e. it is possible to iterate
them at a very small cost.
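One natural construction matching this description (a sketch of the flavor, not necessarily the paper's exact scheme) conjugates the canonical $N$-cycle $x \mapsto x+1 \bmod N$ by an unrestricted permutation $P$: the result $C = P \circ (+1) \circ P^{-1}$ is a single $N$-cycle, and its $k$-th iterate costs the same as one evaluation.

    import random

    N = 10_000
    rng = random.Random(42)
    P = list(range(N))
    rng.shuffle(P)        # stand-in for an unrestricted pseudo-random permutation
    Pinv = [0] * N
    for x, y in enumerate(P):
        Pinv[y] = x

    def C(x: int, k: int = 1) -> int:
        # k-th iterate of C = P o (+1 mod N) o P^(-1), in O(1): the fast-forward property.
        return P[(Pinv[x] + k) % N]

    # C is cyclic: starting anywhere, all N points are visited before returning.
    y, steps = C(0), 1
    while y != 0:
        y, steps = C(y), steps + 1
    assert steps == N

    # Fast forward agrees with step-by-step iteration.
    assert C(C(C(7))) == C(7, 3)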
On Symmetrically Private Information Retrieval
In this paper we give a single-round, single-server symmetrically private information retrieval (SPIR) scheme, in which the privacy of the user follows from the intractability of the quadratic residuosity problem and the privacy of the database follows from the number theoretic XOR assumption introduced in this paper. The proposed scheme achieves communication complexity $O(n^{\epsilon})$ for any $\epsilon > 0$, where $n$ is the number of bits in the database. We also present an efficient block retrieval SPIR scheme. Interestingly, we show that an $O(K \log{n})$ SPIR scheme is possible if there exists a probabilistic bit encryption scheme on which certain operators can be defined with the desired properties. Finally we go on to generalize the SPIR scheme to private retrieval of secrets and sharing by a group of users. It can also be viewed as an extended secret sharing scheme. We also discover and prove certain properties related to quadratic residuosity in particular and probabilistic bit encryption in general.
Decimation Attack of Stream Ciphers
This paper presents a new attack, called the {\em Decimation Attack},
applicable to most stream ciphers. It exploits the property that multiple clocking
(or equivalently, $d$-th decimation) of an LFSR can simulate the behavior
of many other LFSRs of possibly shorter length. It then yields significant
improvements over all previously known correlation and fast correlation attacks,
provided a new criterion is satisfied. This criterion on the length of the feedback
polynomial is then defined to resist the decimation attack. Simulation results and
complexity comparisons are detailed for a ciphertext-only attack.
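A small demonstration of the phenomenon the attack exploits, with parameters of our choosing: decimating a length-6 LFSR sequence (period 63) by $d = 9$ yields a sequence of period $63/\gcd(9,63) = 7$ whose linear complexity, measured by Berlekamp-Massey, is that of a length-3 LFSR -- the shorter register a correlation attack would then target.

    def lfsr(init, taps, length):
        # Bits satisfy s_t = XOR of s_{t-j} for j in taps.
        s = list(init)
        for t in range(len(init), length):
            bit = 0
            for j in taps:
                bit ^= s[t - j]
            s.append(bit)
        return s

    def linear_complexity(s):
        # Berlekamp-Massey over GF(2): length of the shortest LFSR generating s.
        n = len(s)
        c, b = [0] * n, [0] * n
        c[0] = b[0] = 1
        L, m = 0, -1
        for i in range(n):
            d = s[i]
            for j in range(1, L + 1):
                d ^= c[j] & s[i - j]
            if d:
                t = c[:]
                shift = i - m
                for j in range(n - shift):
                    c[j + shift] ^= b[j]
                if 2 * L <= i:
                    L, m, b = i + 1 - L, i, t
        return L

    s = lfsr([0, 0, 0, 0, 0, 1], (1, 6), 400)  # primitive length-6 LFSR, period 63
    print(linear_complexity(s[:120]))          # 6
    d = [s[9 * i] for i in range(40)]          # 9th decimation: period drops to 7
    print(linear_complexity(d))                # 3: it behaves like a much shorter LFSR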
Encryption Modes with Almost Free Message Integrity
We define a new mode of operation for block ciphers which in addition to providing confidentiality also ensures message integrity. In contrast, previously for message integrity a separate pass was required to compute a cryptographic message authentication code (MAC). The new mode of operation, called Integrity Aware Parallelizable Mode (IAPM),
requires a total of m+1 block cipher evaluations on a plain-text of length m blocks. For comparison, the well known CBC (cipher block chaining) encryption mode requires m block cipher evaluations, and the second pass of computing the CBC-MAC essentially requires additional m+1 block cipher evaluations. As the name suggests, the new mode is also highly parallelizable.
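A hedged toy rendering of the masking idea (not the exact IAPM specification; the toy Feistel cipher and all names are ours): each of the m plaintext blocks and one final checksum block pass once through the block cipher, whitened before and after with per-block masks derived from the IV. Deriving the masks very cheaply is the scheme's contribution; a hash stands in for that here.

    import hashlib, os

    W = 32  # half-block width; blocks are 64-bit integers
    MASK = (1 << W) - 1

    def _round(key: bytes, i: int, r: int) -> int:
        h = hashlib.sha256(key + bytes([i]) + r.to_bytes(4, "big")).digest()
        return int.from_bytes(h[:4], "big") & MASK

    def E(key: bytes, x: int, rounds: int = 8) -> int:
        # Toy 64-bit Feistel block cipher (illustration only, not secure).
        l, r = x >> W, x & MASK
        for i in range(rounds):
            l, r = r, l ^ _round(key, i, r)
        return (l << W) | r

    def D(key: bytes, y: int, rounds: int = 8) -> int:
        l, r = y >> W, y & MASK
        for i in reversed(range(rounds)):
            l, r = r ^ _round(key, i, l), l
        return (l << W) | r

    def masks(key: bytes, iv: bytes, n: int):
        # Per-block whitening masks; a hash stands in for the cheaper derivations.
        return [int.from_bytes(hashlib.sha256(key + iv + i.to_bytes(4, "big")).digest()[:8], "big")
                for i in range(n)]

    def encrypt(key, iv, blocks):
        S = masks(key, iv, len(blocks) + 1)
        cts = [E(key, m ^ S[i]) ^ S[i] for i, m in enumerate(blocks)]
        checksum = 0
        for m in blocks:
            checksum ^= m
        cts.append(E(key, checksum ^ S[-1]) ^ S[-1])  # the (m+1)-th call authenticates
        return cts

    def decrypt(key, iv, cts):
        S = masks(key, iv, len(cts))
        ms = [D(key, c ^ S[i]) ^ S[i] for i, c in enumerate(cts[:-1])]
        checksum = 0
        for m in ms:
            checksum ^= m
        if D(key, cts[-1] ^ S[-1]) ^ S[-1] != checksum:
            raise ValueError("integrity check failed")
        return ms

    key, iv = b"k" * 16, os.urandom(8)
    pt = [1, 2, 3, 0xDEADBEEF]
    assert decrypt(key, iv, encrypt(key, iv, pt)) == pt  # m+1 cipher calls, both goals met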
On the Complexity of Verifiable Secret Sharing and Multi-Party Computation
We first study the problem of doing Verifiable Secret Sharing (VSS)
information theoretically secure for a general access structure. We
do it in the model where private channels between players and a
broadcast channel are given, and where an active, adaptive adversary
can corrupt any set of players not in the access structure. In
particular, we consider the complexity of protocols for this
problem, as a function of the access structure and the number of
players. For all access structures where VSS is possible at all, we
show that, up to a polynomial time black-box reduction, the
complexity of adaptively secure VSS is the same as that of ordinary
secret sharing (SS), where security is only required against a
passive, static adversary. Previously, such a connection was only
known for linear secret sharing and VSS schemes.
We then show an impossibility result indicating that a similar
equivalence does not hold for Multiparty Computation (MPC): we show
that even if protocols are given black-box access for free to an
idealized secret sharing scheme secure for the access structure in
question, it is not possible to handle all relevant access
structures efficiently, not even if the adversary is passive and
static. In other words, general MPC can only be black-box reduced
efficiently to secret sharing if extra properties of the secret
sharing scheme used (such as linearity) are assumed.
General Secure Multi-Party Computation from any Linear Secret Sharing Scheme
We show that verifiable secret sharing (VSS) and secure multi-party
computation (MPC) among a set of $n$ players can efficiently be based
on {\em any} linear secret sharing scheme (LSSS) for the players,
provided that the access structure of the LSSS allows MPC or VSS at
all. Because an LSSS neither guarantees reconstructability when some
shares are false, nor verifiability of a shared value, nor allows for
the multiplication of shared values, an LSSS is an apparently much weaker
primitive than VSS or MPC.
Our approach to secure MPC is generic and applies to both the
information-theoretic and the cryptographic setting. The
construction is based on 1) a formalization of the special
multiplicative property of an LSSS that is needed to perform a
multiplication on shared values, 2) an efficient generic construction
to obtain from any LSSS a multiplicative LSSS for the same access
structure, and 3) an efficient generic construction to build
verifiability into every LSSS (always assuming that the adversary
structure allows for MPC or VSS at all).
The protocols are efficient. In contrast to all previous
information-theoretically secure protocols, the field size is not
restricted (e.g., to be greater than $n$). Moreover, we exhibit
adversary structures for which our protocols are polynomial in $n$
while all previous approaches to MPC for non-threshold adversaries
provably have super-polynomial complexity.
Using fewer Qubits in Shor's Factorization Algorithm via Simultaneous Diophantine Approximation
While quantum computers might in principle speed up certain
computations dramatically, in practice quantum computing
technology is still in its infancy.
We cannot even clearly envision at present what the
hardware of such a machine will be like.
Nevertheless, we can be quite confident that it will be
much easier to build a practical quantum computer
operating on a small number of quantum bits than one operating
on a huge number of quantum bits.
It is therefore of great practical importance to use the resource
of quantum bits sparingly,
i.e., to find quantum algorithms which use as few
quantum bits as possible.
Here, we present a method to reduce the number of actually needed qubits
in Shor's algorithm to factor a composite number $N$.
Exploiting the inherent probabilism of quantum computation, we are able to
replace the continued fraction algorithm used to find a certain unknown
fraction with a simultaneous Diophantine approximation.
While the continued fraction algorithm is able to find a Diophantine
approximation to a single known fraction with a denominator greater than
$N^2$, our simultaneous Diophantine approximation method computes in
polynomial time unusually good approximations to known fractions with a
denominator of size $N^{1+\varepsilon}$, where $\varepsilon$ is allowed to be
an arbitrarily small positive constant.
As these unusually good approximations are almost unique we are able to
recover an unknown denominator using fewer qubits in the quantum part of our
algorithm.
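For contrast, the vanilla continued-fraction step that this method replaces fits in a few lines (toy numbers ours): given an approximation of an unknown fraction $s/r$ to precision better than $1/(2N^2)$, the nearby fraction with denominator at most $N$ is unique, and Python's Fraction.limit_denominator performs exactly this continued-fraction search.

    from fractions import Fraction

    N = 1_000_003                  # bound on the unknown denominator
    r, s = 57_347, 31_337          # the hidden fraction s/r (gcd(s, r) = 1)

    # The "measurement": a 48-bit fixed-point approximation, error < 1/(2*N**2).
    approx = Fraction(round(Fraction(s, r) * 2**48), 2**48)

    # Continued-fraction recovery of the unique close fraction with denominator <= N.
    rec = approx.limit_denominator(N)
    assert rec == Fraction(s, r)
    print(rec.denominator)         # recovers r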
Electronic Jury Voting Protocols
This work highlights the fact that all current
proposals for electronic voting schemes disclose the final
tally of the votes.
In certain situations, like jury voting, this may be undesirable.
We present a robust and universally verifiable
Membership Testing Scheme (MTS) that allows, among other things,
a collection of voters to cast votes and determine whether
their tally belongs to some pre--specified set (e.g., exceeds a
given threshold)
--- our scheme discloses no additional information than that implied
from the knowledge of such membership.
We discuss several extensions of our basic MTS.
All the constructions presented
combine features of two parallel lines of research
concerning electronic voting schemes: those based on MIX-networks
and those based on homomorphic encryption.
Random Oracles in Constantinople: Practical Asynchronous Byzantine Agreement using Cryptography
Byzantine agreement requires a set of parties in a distributed system to
agree on a value even if some parties are corrupted. A new protocol for
Byzantine agreement in a completely asynchronous network is presented that
makes use of cryptography, specifically of threshold signatures and
coin-tossing protocols. These cryptographic protocols have practical and
provably secure implementations in the ``random oracle'' model. In
particular, a coin-tossing protocol based on the Diffie-Hellman problem is
presented and analyzed.
The resulting asynchronous Byzantine agreement protocol is both practical
and theoretically nearly optimal because it tolerates the maximum number of
corrupted parties, runs in constant expected time, has message
and communication complexity close to the optimum, and uses a trusted dealer
only in a setup phase, after which it can process a virtually unlimited
number of transactions.
The protocol is formulated as a transaction processing service in a
cryptographic security model, which differs from the standard
information-theoretic formalization and may be of independent interest.
The Complete Distribution of Linear Probabilities of MARS' s-box
This paper shows the complete linear probability distribution of MARS' s-box. The best bias is $\dfrac{84}{2^9}$ ($=2^{-2.61}$), while the designers' estimation is $\dfrac{64}{2^9}$ and the best previously known bias is $\dfrac{82}{2^9}$.
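For readers unfamiliar with the quantity tabulated, a sketch of how such a table is computed: for masks $(a, b)$, count the inputs where $\langle a, x\rangle = \langle b, S[x]\rangle$ and take the deviation from half. Scanning all 32-bit output masks of the real 9-to-32-bit MARS s-box is expensive, so a toy 4-bit s-box is used below.

    def parity(x: int) -> int:
        return bin(x).count("1") & 1

    def best_bias(sbox):
        m = (len(sbox) - 1).bit_length()   # input width; len(sbox) = 2^m
        size = 1 << m
        best = 0
        for a in range(1, size):           # input mask
            for b in range(1, size):       # output mask
                agree = sum(parity(a & x) == parity(b & sbox[x]) for x in range(size))
                best = max(best, abs(agree - size // 2))
        return best, size                  # reported as best/size, e.g. 84/2^9 for MARS

    # A toy 4-bit s-box (the first row of DES S1), just to exercise the code.
    sbox = [14, 4, 13, 1, 2, 15, 11, 8, 3, 10, 6, 12, 5, 9, 0, 7]
    dev, size = best_bias(sbox)
    print(f"best bias = {dev}/{size}")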
Anonymous Fingerprinting with Direct Non-Repudiation
Fingerprinting schemes support copyright protection by enabling the merchant of a data item to identify the original buyer of a
redistributed copy. In asymmetric schemes, the merchant can also convince an arbiter of this fact. Anonymous fingerprinting schemes enable buyers to purchase digital items anonymously; however, identification is possible if they redistribute the data item.
Recently, a concrete and reasonably efficient construction based on digital coins was proposed. A disadvantage is that the accused
buyer has to participate in any trial protocol to deny charges. Trials with direct non-repudiation, i.e., the merchant alone
holds enough evidence to convince an arbiter, are more useful in real life. This is similar to the difference between ``normal'' and ``undeniable'' signatures.
In this paper, we present an equally efficient anonymous fingerprinting scheme with direct non-repudiation. The main technique
we use, delayed verifiable encryption, is related to coin tracing in escrowed cash systems. However, there are technical differences, mainly to provide an unforgeable link to license conditions.
Forward Security in Threshold Signature Schemes
We consider the usage of forward security with threshold
signature schemes. This means that even if more than the
threshold number of players are compromised, some security remains:
it is not possible to forge signatures relating to the past. In
this paper, we describe the first forward-secure threshold
signature schemes whose parameters (other than signing or verifying
time) do not vary in length with the number of time periods in the
scheme. Both are threshold versions of the Bellare-Miner
forward-secure signature scheme, which is Fiat-Shamir-based. One
scheme uses multiplicative secret sharing, and tolerates mobile
eavesdropping adversaries. The second scheme is based on polynomial
secret sharing, and we prove it forward-secure based on the security
of the Bellare-Miner scheme. We then sketch modifications which
would allow this scheme to tolerate malicious adversaries. Finally,
we give several general constructions which add forward security to
any existing threshold scheme.
Last updated: 2001-03-16
Secure Multiparty Computation of Approximations
Approximation algorithms can sometimes be used to obtain efficient
solutions where no efficient exact computation is known. In
particular, approximations are often useful in a distributed setting
where the inputs are held by different parties and are extremely
large. Furthermore, for some applications, the parties want to
cooperate to compute a function of their inputs without revealing more
information than they have to.
Suppose the function $\fhat$ is an approximation to the function $f$.
Secure multiparty computation of $f$ allows the parties to compute $f$
without revealing more than they have to, but requires some additional
overhead in computation and communication. Hence, if $f$ is
inefficient or just efficient enough to be practical, a secure
computation of $f$ may be impractically expensive. A secure
computation of $\fhat$ may be efficient enough, but a secure
computation of $\fhat$ is not necessarily as private as a secure
computation of $f$, because the output of $\fhat$ may reveal more
information than the output of $f$. In this paper, we present
definitions and protocols of secure multiparty approximate computation
that show how to realize most of the cost savings available by using
$\fhat$ instead of $f$ without losing the privacy of a secure
computation of $f$.
We make three contributions in this paper. First, we give formal
definitions of secure multiparty approximate computations. Second, we
introduce some general techniques for constructing secure multiparty
approximations. Finally, we present an efficient,
sublinear-communication, secure approximate computation for the
Hamming and $L^{1}$ distances.
Concrete Security Characterizations of PRFs and PRPs: Reductions and Applications
We investigate, in a concrete security setting, several alternate
characterizations of pseudorandom functions (PRFs) and pseudorandom
permutations (PRPs). By analyzing the concrete complexity of the
reductions between the standard notions and the alternate ones, we
show that the latter, while equivalent under polynomial-time
reductions, are weaker in the concrete security sense. With these
alternate notions, we argue that it is possible to get better
concrete security bounds for certain PRF/PRP-based schemes. As an
example, we show how using an alternate characterization of a PRF
could result in tighter security bounds for a certain class of
message authentication codes. We also apply these techniques to
give a simple concrete security analysis of the counter mode of
encryption. In addition, our results provide some insight into how
injectivity impacts pseudorandomness.
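Since the counter mode analysis is mentioned above, a minimal sketch of counter-mode encryption from a PRF may help fix ideas (Python; the hash-based F is a toy stand-in PRF, not anything from the paper):

    import hashlib

    def F(key: bytes, x: bytes) -> bytes:
        # Toy PRF built from SHA-256 (illustration only).
        return hashlib.sha256(key + x).digest()

    def ctr_encrypt(key: bytes, nonce: int, msg: bytes) -> bytes:
        # Counter mode: the i-th 32-byte pad block is F(key, nonce+i);
        # the ciphertext is the message XORed with the pad stream.
        out = bytearray()
        for i in range(0, len(msg), 32):
            pad = F(key, (nonce + i // 32).to_bytes(16, "big"))
            out += bytes(m ^ s for m, s in zip(msg[i:i + 32], pad))
        return bytes(out)

    # Decryption is the same operation with the same key and nonce.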
An Information-Theoretic Model for Steganography
An information-theoretic model for steganography with a passive adversary is
proposed. The adversary's task of distinguishing between an innocent cover
message $C$ and a modified message $S$ containing hidden information is
interpreted as a hypothesis testing problem. The security of a
steganographic system is quantified in terms of the relative entropy (or
discrimination) between the distributions of $C$ and $S$, which yields
bounds on the detection capability of any adversary. It is shown that
secure steganographic schemes exist in this model provided the covertext
distribution satisfies certain conditions. A universal stegosystem is
presented in this model that needs no knowledge of the covertext
distribution, except that it is generated from independently repeated
experiments.
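For finite distributions, the relative-entropy security measure is straightforward to evaluate; a minimal sketch (Python, with a hypothetical two-letter covertext alphabet and our own toy numbers):

    import math

    def relative_entropy(p: dict, q: dict) -> float:
        # D(P || Q) = sum_x P(x) log2(P(x)/Q(x)); a stegosystem with
        # D(P_C || P_S) = 0 is undetectable in this model.
        return sum(px * math.log2(px / q[x]) for x, px in p.items() if px > 0)

    cover = {"a": 0.5, "b": 0.5}   # distribution of innocent covertext C
    stego = {"a": 0.6, "b": 0.4}   # distribution of modified message S
    print(relative_entropy(cover, stego))  # small value => weak detectability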
Accountable Certificate Management using Undeniable Attestations
This paper initiates a study of accountable certificate management
methods, necessary to support long-term authenticity of digital
documents. Our main contribution is a model for accountable
certificate management, where clients receive attestations
confirming inclusion/removal of their certificates from the database
of valid certificates. We explain why accountability depends on the
inability of the third parties to create contradictory attestations.
After that we define an undeniable attester as a primitive that
provides efficient attestation creation, publishing and
verification, so that it is intractable to create contradictory
attestations. We introduce authenticated search trees and build an
efficient undeniable attester upon them. The proposed system is the
first accountable long-term certificate management system.
Moreover, authenticated search trees can be used in many
security-critical applications instead of the (sorted) hash trees to
reduce trust in the authorities, without decrease in efficiency.
Therefore, the undeniable attester promises to be a very useful
cryptographic primitive with a wide range of applications.
Authentication and Key Agreement via Memorable Password
This paper presents a new password authentication and key agreement protocol, AMP, based on the amplified password idea.
The intrinsic problems with password authentication are that the password itself has low entropy and that the password file is very hard
to protect.
We present the amplified password proof and the amplified password file for solving these problems.
A party commits to high-entropy information and amplifies her password with that information in the amplified password proof.
She never reveals any information except the fact that she knows her password.
In that sense, our amplified password proof idea is very similar to a zero-knowledge proof.
We add one more idea: the amplified password file, for password file protection.
A server stores the amplified verifiers in the amplified password file, which is secure against server file compromise and dictionary
attack.
AMP provides password-verifier-based authentication
and Diffie-Hellman-based key agreement, securely and efficiently.
AMP generalizes easily to other cyclic groups.
Despite these plentiful properties, AMP is actually the most efficient of the related protocols,
thanks to the simultaneous multiple exponentiation method (sketched below).
Several variants such as AMP^i, AMP^n, AMP^n+, AMP+, AMP++, and AMP^c are also proposed.
Among them, AMP^n is actually the basic protocol of this paper, which embodies the amplified password proof idea,
while AMP is the most complete protocol, which adds the amplified password file.
AMP^i simply removes the amplified password file from AMP.
In the end, we give a comparison with the related protocols in terms of efficiency.
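For reference, the simultaneous multiple exponentiation method invoked above computes a product of powers such as $g^a h^b \bmod p$ in one square-and-multiply pass over the bits of both exponents; a minimal, generic sketch (Python; toy parameters, not AMP itself):

    def simul_exp(g: int, a: int, h: int, b: int, p: int) -> int:
        # Precompute the four possible multipliers for each pair of bits.
        table = {(0, 0): 1, (1, 0): g % p, (0, 1): h % p, (1, 1): g * h % p}
        result = 1
        for i in range(max(a.bit_length(), b.bit_length()) - 1, -1, -1):
            result = result * result % p                  # one squaring per bit
            result = result * table[(a >> i) & 1, (b >> i) & 1] % p
        return result

    p = 1000003  # toy prime modulus
    assert simul_exp(2, 123, 3, 456, p) == pow(2, 123, p) * pow(3, 456, p) % p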
Authenticated Encryption: Relations among notions and analysis of the generic composition paradigm
An authenticated encryption scheme is a symmetric encryption scheme whose goal is to provide both privacy and integrity. We consider two possible notions of authenticity for such schemes, namely integrity of plaintexts and integrity of ciphertexts, and relate them (when coupled with IND-CPA) to the standard notions of privacy (IND-CCA, NM-CPA) by presenting implications and separations between all notions considered. We then analyze the security of authenticated encryption schemes designed by ``generic composition,'' meaning making black-box use of a given symmetric encryption scheme and a given MAC. Three composition methods are considered, namely Encrypt-and-MAC, MAC-then-encrypt, and Encrypt-then-MAC. For each of these, and for each notion of security, we indicate whether or not the resulting scheme meets the notion in question assuming the given symmetric encryption scheme is secure against chosen-plaintext attack and the given MAC is unforgeable under chosen-message attack. We provide proofs for the cases where the answer is ``yes'' and counter-examples for the cases where the answer is ``no.''
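To make the composition methods concrete, here is a minimal sketch of Encrypt-then-MAC, the method the paper identifies as generically secure (Python; the hash-based stream cipher is a toy stand-in for a generic IND-CPA scheme, and all names are ours, not the paper's):

    import hashlib, hmac, os

    def toy_encrypt(key: bytes, msg: bytes) -> bytes:
        # Toy stream cipher: keystream derived from key and a fresh nonce.
        nonce = os.urandom(16)
        stream = b""
        while len(stream) < len(msg):
            stream += hashlib.sha256(key + nonce + len(stream).to_bytes(8, "big")).digest()
        return nonce + bytes(m ^ s for m, s in zip(msg, stream))

    def encrypt_then_mac(ke: bytes, km: bytes, msg: bytes) -> bytes:
        # Encrypt first, then MAC the *ciphertext* under an independent key.
        c = toy_encrypt(ke, msg)
        return c + hmac.new(km, c, hashlib.sha256).digest()

    def verify_then_strip(km: bytes, blob: bytes) -> bytes:
        c, tag = blob[:-32], blob[-32:]
        if not hmac.compare_digest(tag, hmac.new(km, c, hashlib.sha256).digest()):
            raise ValueError("invalid tag")  # reject before touching the ciphertext
        return c  # the caller then decrypts c under ke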
Security of the Most Significant Bits of the Shamir Message Passing Scheme
Boneh and Venkatesan have recently proposed a polynomial time
algorithm for recovering a ``hidden'' element $\alpha$ of a
finite field $\F_p$ of $p$ elements from rather short
strings of the most significant bits of the remainder
modulo $p$ of $\alpha t$ for several values of $t$ selected uniformly
at random from $\F_p^*$. Unfortunately the applications to the
computational security of most significant bits
of private keys of some finite field exponentiation based cryptosystems
given by Boneh and Venkatesan are not quite correct. For the Diffie-Hellman
cryptosystem the
result of Boneh and Venkatesan has been corrected and
generalized in our recent paper.
Here a similar analysis is given for the Shamir message passing scheme.
The results depend on some bounds
of exponential sums.
Security of Polynomial Transformations of the Diffie--Hellman Key
D. Boneh and R. Venkatesan have recently proposed an approach to proving that reasonably small portions of the most significant bits of the Diffie-Hellman key modulo a prime are as secure as the whole key. Some further improvements and generalizations have been obtained by I. M. Gonzales Vasco and I. E. Shparlinski. E. R. Verheul has obtained certain analogues of these results in the case of Diffie--Hellman keys in extensions of finite fields, when an oracle is given to compute a certain polynomial function of the key, for example, the trace in the background field. Here we obtain some new results in this direction concerning the case of so-called "unreliable" oracles.
ACE: The Advanced Cryptographic Engine
This document describes
the Advanced Cryptographic Engine (ACE).
It specifies a public key encryption
scheme as well as a
digital signature scheme
with enough detail to ensure interoperability between different
implementations.
These schemes are almost as efficient as commercially used schemes,
yet unlike such schemes, can be proven secure under reasonable
and well-defined
intractability assumptions.
A concrete security analysis of both schemes is presented.
An Efficient Identification Scheme Based on Permuted Patterns
This paper proposes a new identification scheme based on a hard
partition problem rather than factoring or discrete logarithm
problems. The new scheme simultaneously minimizes the communication
complexity and the computational cost required by the parties.
Since only simple operations are needed for an identification,
our scheme is well suited for smart cards with very limited
processing power. With a "good" implementation, the scheme is much
faster than the Fiat-Shamir or Shamir's PKP schemes.
On the Security of Diffie--Hellman Bits
Boneh and Venkatesan have recently proposed a polynomial time
algorithm for recovering a "hidden" element $\alpha$ of a finite field $\F_p$ of $p$ elements from rather short strings of the most significant bits of the remainder modulo $p$ of $\alpha t$ for several
values of $t$ selected uniformly at random from $\F_p^*$. We use some
recent bounds of exponential sums to generalize this algorithm to the case when $t$ is selected from a quite small subgroup of $\F_p^*$.
Namely, our results apply to subgroups of size at least
$p^{1/3+ \varepsilon}$ for all primes $p$ and to subgroups of size at
least $p^{\varepsilon}$ for almost all primes $p$, for any fixed
$\varepsilon >0$.
We also use this generalization to improve (and correct)
one of the statements of the aforementioned work about the
computational security of the most significant bits of the
Diffie--Hellman key.
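For reference, the hidden number problem underlying this and the two related abstracts above can be stated as follows (our paraphrase): given $t_1,\ldots,t_d$ and the values $\mathrm{MSB}_k(\alpha t_i \bmod p)$ for $i=1,\ldots,d$, where $\mathrm{MSB}_k(x)$ denotes the $k$ most significant bits of $x$, recover $\alpha\in\F_p$. In the original formulation the $t_i$ are uniform in $\F_p^*$; the generalization above draws them from a small subgroup.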
Threshold Cryptography Secure Against the Adaptive Adversary, Concurrently
A threshold cryptosystem or signature scheme is a system with $n$ participants
where an honest majority can successfully decrypt a message or issue a
signature, but where the security and functionality properties of the
system are retained even as
the adversary corrupts up to $t$ players.
We present the novel technique of a committed proof,
which is a new general tool that enables security of threshold
cryptosystems in the presence of the adaptive adversary.
We also put forward a new measure of security for threshold schemes
secure in the adaptive adversary model: security under concurrent
composition.
Using committed proofs, we construct concurrently and adaptively secure
threshold protocols for a variety of cryptographic applications.
In particular, based on the recent scheme by Cramer-Shoup, we
construct adaptively secure threshold cryptosystems secure against
adaptive chosen ciphertext attack under the DDH intractability
assumption.
Last updated: 2000-07-09
Fast Verification of Any Remote Procedure Call: Short Witness-Indistinguishable One-Round Proofs for NP
The paper
is withdrawn.
As communicated to us by C. Dwork, M. Langberg,
M. Naor and K. Nissim [1] the protocol as presented in the paper
is not sufficient to prove the claims. We gratefully acknowledge
the authors of [1] for pointing out this error to us.
REFERENCES:
[1] C. Dwork, M. Langberg, M. Naor, and K. Nissim,
"Succinct Proofs for NP and Spooky Interactions," private communication,
July 4, 2000.
Lower Bounds on the Efficiency of Generic Cryptographic Constructions
We present lower bounds on the efficiency of
constructions for Pseudo-Random Generators (PRGs) and
Universal One-Way Hash Functions (UOWHFs) based on
black-box access to one-way permutations. Our lower bounds are tight as
they match the efficiency of known constructions.
A PRG (resp. UOWHF) construction based on black-box access is
a machine that is given oracle access to a permutation. Whenever
the permutation is hard to invert, the construction is hard to break.
In this paper we give lower bounds on the
number of invocations to
the oracle by the construction.
If $S$ is the assumed security of the oracle permutation $\pi$
(i.e. no adversary of size $S$ can invert $\pi$ on a fraction
larger than $1/S$ of its inputs)
then a PRG (resp. UOWHF) construction that
stretches (resp. compresses) its input by $k$ bits must query $\pi$
in $q=\Omega(k/\log S)$ points. This matches known constructions.
Our results are given in an extension of the Impagliazzo-Rudich
model. That is, we prove that a proof of the existence of PRG (resp. UOWHF)
black-box constructions that beat our lower bound would imply
a proof of the unconditional existence of such a construction
(which would also imply $P \neq NP$).
Last updated: 2001-06-19
Cryptanalysis of RSA with small prime difference
We show that choosing an RSA modulus with a small difference of its prime factors
yields improvements on the small private exponent attacks of Wiener and Boneh-Durfee.
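A small prime difference is dangerous even before these exponent attacks: if $|p-q|$ is small, Fermat's method factors $N$ almost instantly. A minimal sketch of that underlying weakness (Python; toy primes, and not the paper's Wiener/Boneh-Durfee improvements):

    import math

    def fermat_factor(n: int):
        # Write n = a^2 - b^2 = (a-b)(a+b); when the prime factors are
        # close, a = ceil(sqrt(n)) succeeds after very few iterations.
        a = math.isqrt(n)
        if a * a < n:
            a += 1
        while True:
            b2 = a * a - n
            b = math.isqrt(b2)
            if b * b == b2:
                return a - b, a + b
            a += 1

    p, q = 1000003, 1000033  # toy primes with a small difference
    assert fermat_factor(p * q) == (p, q)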
Identification Protocols Secure Against Reset Attacks
We provide identification protocols that are secure even
when the adversary can reset the internal state and/or randomization source of
the user identifying itself, and when executed in an asynchronous environment
like the Internet that gives the adversary concurrent access to instances of
the user. These protocols are suitable for use by devices (like smartcards)
which when under adversary control may not be able to reliably maintain their
internal state between invocations.
Authenticated Key Exchange Secure Against Dictionary Attacks
This paper gives definitions and results about password-based
protocols for authenticated key exchange (AKE), mutual authentication
(MA), and the combination of these goals (AKE,MA).
Such protocols are designed to work despite interference by an active
adversary and despite the use of passwords drawn from a space so small
that an adversary might well enumerate, off line, all the
possible passwords.
While several such password-based protocols have been suggested,
the underlying theory has been lagging, and
some of the protocols don't actually work.
This is an area strongly in need of foundations,
but definitions and theorems here can get overwhelmingly complex.
To help manage this complexity we begin by defining a model, one rich enough
to deal with password guessing, forward secrecy,
server compromise, and loss of session keys.
The one model can be used to
define various goals.
We take AKE (with implicit authentication---no one besides
your intended partner could possibly get the key, though he may or may
not actually get it) as the basic goal.
Then we prove that any secure
AKE protocol can be
embellished (in a simple and generic way)
to also provide for MA.
This approach turns out to be simpler than trying to
augment an MA protocol to also distribute a session key.
Next we prove correctness for the idea at the center
of the Encrypted Key-Exchange (EKE) protocol
of Bellovin and Merritt:
we prove (in an ideal-cipher model) that
the two-flow protocol at the core of EKE is
a secure AKE.
Combining with the result above we have a
simple 3-flow protocol for AKE,MA which is
proven secure against dictionary attack.
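For orientation, the two-flow core of EKE masks otherwise-standard Diffie-Hellman flows under a password-derived key. A minimal sketch follows (Python; the XOR pad is a crude stand-in for the ideal cipher of the analysis, the group is a toy, and key confirmation is omitted, so this is an illustration, not the proven protocol):

    import hashlib, secrets

    p = 2**127 - 1   # toy prime modulus (illustration only)
    g = 3

    def pw_mask(pw: bytes, label: bytes, value: int) -> int:
        # Stand-in for the ideal cipher: XOR with a password-derived pad.
        # XOR is an involution, so the same call also unmasks.
        pad = int.from_bytes(hashlib.sha256(pw + label).digest()[:16], "big")
        return value ^ pad

    pw = b"shared password"

    # Flow 1 (client -> server): X* = E_pw(g^x mod p)
    x = secrets.randbelow(p - 2) + 1
    X_star = pw_mask(pw, b"client", pow(g, x, p))

    # Flow 2 (server -> client): Y* = E_pw(g^y mod p)
    y = secrets.randbelow(p - 2) + 1
    Y_star = pw_mask(pw, b"server", pow(g, y, p))

    # Each side unmasks the other's flow and derives K = g^(xy) mod p.
    assert pow(pw_mask(pw, b"server", Y_star), x, p) == \
           pow(pw_mask(pw, b"client", X_star), y, p)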
Concurrent Zero-Knowledge in Poly-logarithmic Rounds
A proof is concurrent zero-knowledge if it remains zero-knowledge when run in an asynchronous environment, such as
the Internet. It is known that zero-knowledge is not necessarily preserved in such an environment; Kilian, Petrank and Rackoff have
shown that any {\bf 4}-round zero-knowledge interactive proof (for a non-trivial language) is not concurrent zero-knowledge. On the
other hand, Richardson and Kilian have shown that there exists a concurrent zero-knowledge argument for all languages in NP, but it
requires a {\bf polynomial} number of rounds. In this paper, we present a concurrent zero-knowledge proof for all languages in NP
with a drastically improved complexity: our proof requires only a poly-logarithmic number of rounds, specifically $\omega(\log^2 k)$.
Thus, we narrow the huge gap between the known upper and lower bounds on the number of rounds required for a zero-knowledge
proof that is robust for asynchronous composition.
Last updated: 2003-03-26
Chosen Message Attack Against Goldreich-Goldwasser-Halevi's Signature Scheme from Crypto'97
The Goldreich-Goldwasser-Halevi (GGH) signature scheme from Crypto '97, which is based on a well-known lattice problem, is cryptanalyzed. We mount a chosen message attack on the signature scheme and show that it is vulnerable to the attack. We collect $n$ lattice points that are linearly independent of each other, and construct a new basis that generates a sub-lattice of the original lattice. The sub-lattice is shown to be sufficient to generate a valid signature. Empirical results are presented to show the effectiveness of the attack. Finally, we show that the cube-like parameter used for private-key generation is harmful to the security of the scheme.
Tailored Key Encryption (TaKE) Tailoring a key for a given pair of plaintext/ciphertext
The prevailing cryptographies are attacked on the basis of
the fact that only a single element in the key space will match a
plausible plaintext with a given ciphertext. Any cryptography that
violates this unique-key assumption achieves added security
through deniability (akin to a One Time Pad). Such a cryptography is
described here. It is achieved by breaking away from the prevailing notion
that the key is a binary string of a fixed length. The described key is
a random-size non-linear array: a graph constructed from vertices and
edges. The binary naming of the vertices and edges, and the
configuration, are all part of the key. Such keys can take on most of
the necessary complexity, which allows the algorithm itself to be
exceedingly simple (a la Turing Machine).
The Security of Chaffing and Winnowing
This paper takes a closer look at Rivest's
chaffing-and-winnowing paradigm for data privacy. We begin with a
\textit{definition} which enables one to determine clearly whether a
given scheme qualifies as ``chaffing-and-winnowing.'' We then analyze
Rivest's schemes to see what quality of data privacy they provide. His
simplest scheme is easily proven secure but is inefficient. The security
of his more efficient scheme ---based on all-or-nothing transforms
(AONTs)--- is however more problematic. It can be attacked under
Rivest's definition of security of an AONT, and even under stronger
notions does not appear provable. We show however that by using OAEP
as the AONT one can prove security. We also present a different scheme,
still using AONTs, that is equally efficient and easily proven secure
even under the original weak notion of security of AONTs.
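For readers new to the paradigm, a minimal sketch of basic bit-level chaffing and winnowing (Python with HMAC; a toy rendering of Rivest's idea, not the AONT-based schemes analyzed in the paper):

    import hashlib, hmac, os, random

    def tag(key: bytes, seq: int, bit: int) -> bytes:
        return hmac.new(key, bytes([seq, bit]), hashlib.sha256).digest()

    def chaff(key: bytes, bits: list) -> list:
        # For each position, send the real bit with a valid MAC ("wheat")
        # and the complementary bit with a random MAC ("chaff").
        packets = []
        for seq, bit in enumerate(bits):
            packets.append((seq, bit, tag(key, seq, bit)))
            packets.append((seq, 1 - bit, os.urandom(32)))
        random.shuffle(packets)
        return packets

    def winnow(key: bytes, packets: list) -> list:
        # Keep only the packets whose MAC verifies, in sequence order.
        wheat = [(s, b) for s, b, t in packets
                 if hmac.compare_digest(t, tag(key, s, b))]
        return [b for _, b in sorted(wheat)]

    key = os.urandom(32)
    assert winnow(key, chaff(key, [1, 0, 1, 1])) == [1, 0, 1, 1]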
New Directions in Design of Resilient Boolean Functions
There has been a recent upsurge of research in the design of resilient
Boolean functions for use in stream cipher systems. The existing
research concentrates on maximum degree resilient functions and tries
to obtain as high nonlinearity as possible. In sharp contrast to this
approach we identify the class of functions with {\em provably best}
possible trade-off among the parameters: number of variables,
resiliency, nonlinearity and algebraic degree. We first prove a
sharper version of McEliece theorem for Reed-Muller codes as applied
to resilient functions, which also generalizes the well known
Xiao-Massey characterization. As a consequence a nontrivial upper
bound on the nonlinearity of resilient functions is obtained. This
result coupled with Siegenthaler's inequality naturally leads to
the notion of provably best resilient functions. We further show that
such best functions can be constructed by the Maiorana-McFarland
like technique. In cases where this method fails, we provide new ideas
to construct best functions. We also briefly discuss efficient
implementation of these functions in hardware.
Efficient Protocols based on Probabilistic Encryption using Composite Degree Residue Classes
We study various applications and variants of Paillier's probabilistic
encryption scheme. First, we propose a threshold variant of the scheme,
and also zero-knowledge protocols for proving that a given ciphertext
encodes a given plaintext, and for verifying multiplication of encrypted values.
We then show how these building blocks can be used for applying the
scheme to efficient electronic voting. This dramatically reduces the work needed to compute the final result of an election, compared to the previously best known schemes. We show how the
basic scheme for a yes/no vote can be easily adapted to casting a
vote for up to $t$ out of $L$ candidates. The same basic building blocks can also be adapted to provide receipt-free elections, under appropriate physical assumptions. The scheme for 1 out of $L$ elections can be optimised such that for a certain range of parameter values, a ballot has size only $O(\log L)$ bits.
Finally, we propose a variant of the encryption scheme, that allows
reducing the expansion factor of Paillier's scheme from 2 to almost 1.
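To make the homomorphic tallying concrete, here is a minimal sketch of plain Paillier encryption and an additive yes/no tally (Python; tiny toy primes, and none of the paper's threshold decryption, zero-knowledge proofs, or robustness machinery):

    import math, secrets

    # Toy Paillier keypair (real primes are hundreds of digits long).
    p, q = 1000003, 1000033
    n, n2 = p * q, (p * q) ** 2
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    mu = pow((pow(n + 1, lam, n2) - 1) // n, -1, n)     # Python >= 3.8

    def encrypt(m: int) -> int:
        while True:
            r = secrets.randbelow(n - 1) + 1
            if math.gcd(r, n) == 1:
                break
        return pow(n + 1, m, n2) * pow(r, n, n2) % n2

    def decrypt(c: int) -> int:
        return (pow(c, lam, n2) - 1) // n * mu % n

    # Each voter encrypts 0 or 1; multiplying ciphertexts adds the votes.
    votes = [1, 0, 1, 1, 0, 1]
    tally = 1
    for v in votes:
        tally = tally * encrypt(v) % n2
    assert decrypt(tally) == sum(votes)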
Public Electronic Contract Protocol
The notion of a Public Electronic Contract (PEC) protocol is presented in this paper. The idea is that PECs are published in a public directory (of certain groups), letting all members review the true (raw) transaction information. The collected PEC information reflects market trends more reliably than data provided by particular agencies for estimation. The goal is to eliminate opportunities for such agencies to manipulate the data and persuade investors into inappropriate purchase or investment decisions; a perfectly open market with open facts should eventually be established. The PEC also has the property of public witnesses, so that transactions are more secure. To keep the protocol simple, its implementation is based mainly on the RSA public key scheme.
An Encryption Algorithm and Key-stream Generator for Chinese Text Messages by Character Internal Code Structure
This paper proposes an algorithm to encipher Chinese plaintext messages written in Big-5/GB Chinese character internal codes. Unlike ordinary ciphers, the crypto-key of the proposed algorithm consists of a pair of formulas and a set of parameter values. Senders can formulate and compose their own unique formulas and parameters for encryption. According to the character internal code structure, we apply the formulas in a key-stream generator to encipher the Chinese plaintext message. Since the proposed stream generator does not use fixed encryption and decryption operations, opponents are unable to predict the form of its output (ciphertext). Experimental results show that the proposed algorithm achieves data secrecy.
On Resilient Boolean Functions with Maximal Possible Nonlinearity
It is proved that the maximal possible nonlinearity of an $n$-variable
$m$-resilient Boolean function is $2^{n-1}-2^{m+1}$ for
${2n-7\over 3}\le m\le n-2$. This value can be achieved only by
optimized functions (i.e., functions with algebraic degree $n-m-1$).
For ${2n-7\over 3}\le m\le n-\log_2{n-2\over 3}-2$ we suggest a method
to construct an $n$-variable $m$-resilient function with maximal possible
nonlinearity $2^{n-1}-2^{m+1}$ such that each variable appears in the ANF of this
function in some term of maximal possible length $n-m-1$.
For $n\equiv 2\pmod 3$, $m={2n-7\over 3}$,
we give a hardware implementation scheme for such a function that
requires approximately $2n$ XOR gates and $(2/3)n$ AND gates.
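As a worked instance of the bound (our arithmetic): for $n=8$, $m=3$ the condition ${2n-7\over 3}\le m\le n-2$ reads $3\le 3\le 6$, and the maximal possible nonlinearity is
$$2^{n-1}-2^{m+1}=2^{7}-2^{4}=112,$$
attained only by functions of algebraic degree $n-m-1=4$.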
Combinatorial Properties of Frameproof and Traceability Codes
In order to protect copyrighted material, codes may be
embedded in the content or codes may be associated with the
keys used to recover the content. Codes can offer protection
by providing some form of traceability for pirated
data. Several researchers have studied different notions of
traceability and related concepts in recent years. "Strong"
versions of traceability allow at least one member of a
coalition that constructs a "pirate decoder" to be
traced. Weaker versions of this concept ensure that no
coalition can "frame" a disjoint user or group of users. All
these concepts can be formulated as codes having certain
combinatorial properties.
In this paper, we study the relationships between the various
notions, and we discuss equivalent formulations using
structures such as perfect hash families. We use methods from
combinatorics and coding theory to provide bounds (necessary
conditions) and constructions (sufficient conditions) for the
objects of interest.
Last updated: 2000-03-10
Implications of the Nontriviality of Entropy Approximation
The paper was withdrawn because it contained a fatal flaw.
A New Forward-Secure Digital Signature Scheme
We improve the Bellare-Miner (Crypto '99) construction of signature
schemes with forward security in the random oracle model. Our scheme
has significantly shorter keys and is, therefore, more practical. By
using a direct proof technique not used for forward-secure schemes
before, we are able to provide better security bounds for the original
construction as well as for our scheme.
Bellare and Miner also presented a method for constructing such schemes
without the use of the random oracle. We conclude by proposing an
improvement to their method and an additional, new method for accomplishing
this.
On Security Preserving Reductions -- Revised Terminology
Many of the results in Modern Cryptography are actually
transformations of a basic computational phenomenon (i.e., a
basic primitive, tool or assumption) to a more complex
phenomenon (i.e., a higher level primitive or
application). The transformation is explicit and is always
accompanied by an explicit reduction of the violation of the
security of the former phenomenon to the violation of the
latter. A key aspect is the efficiency of the reduction. We
discuss and slightly modify the hierarchy of reductions
originally suggested by Levin.
A tool for obtaining tighter security analyses of pseudorandom function based constructions, with applications to PRP to PRF conversion
We present a general probabilistic lemma that can be applied
to upper bound the advantage of an adversary in distinguishing between two
families of functions. Our lemma reduces the task of upper bounding the
advantage to that of upper bounding the ratio of two probabilities associated
to the adversary, when this ratio is viewed as a random variable. It
enables us to obtain significantly tighter analyses than more conventional
methods.
In this paper we apply the technique to the problem of PRP to PRF conversion.
We present a simple, new construction of a PRF from a PRP that makes only
two invocations of the PRP and has insecurity linear in the number of
queries made by the adversary. We also improve the analysis of the
truncation construction.
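The flavor of a two-call PRP-to-PRF conversion is captured by the well-known XOR construction sketched below (one standard instantiation, which may differ in detail from the paper's construction): two permutation calls on domain-separated inputs are XORed, destroying the no-collision structure that distinguishes a permutation from a random function.

    import random

    def prf_from_prp(prp, x: int, width: int) -> int:
        # Two invocations of a width-bit PRP on domain-separated inputs,
        # XORed to yield a PRF on (width-1)-bit inputs.
        assert 0 <= x < 2 ** (width - 1)
        return prp(x << 1) ^ prp((x << 1) | 1)

    # Toy 8-bit "PRP": a fixed random permutation table (not secure).
    table = list(range(256))
    random.Random(0).shuffle(table)
    print(prf_from_prp(lambda v: table[v], 5, 8))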
Concurrent Zero-Knowledge
One of the toughest challenges in designing
cryptographic protocols is to design them so that they will remain
secure even when composed. For example, concurrent executions of a
zero-knowledge protocol by a single prover (with one or more
verifiers) may leak information and may not be zero-knowledge in
toto. In this work we:
(1) Suggest time as a mechanism to design concurrent cryptographic
protocols and in particular maintaining zero-knowledge under
concurrent execution.
(2) Introduce the notion of Deniable Authentication
and connect it to the problem of concurrent zero-knowledge.
We do not assume global synchronization; however, we assume an
$(\alpha,\beta)$ timing constraint: for any two processors $P_1$
and $P_2$, if $P_1$ measures $\alpha$ elapsed time on its local
clock and $P_2$ measures $\beta$ elapsed time on its local clock, and
$P_2$ starts after $P_1$ does, then $P_2$ will finish after
$P_1$ does. We show that for an adversary controlling all the
processors' clocks (as well as their communication channels) but
constrained by an $(\alpha,\beta)$ constraint
there exist four-round almost concurrent zero-knowledge interactive proofs
and perfect concurrent zero-knowledge arguments for every language in NP.
We also address the more specific problem of Deniable Authentication,
for which we propose several particularly efficient solutions.
Deniable Authentication is of independent interest, even in the
sequential case; our concurrent solutions yield sequential
solutions, without recourse to timing, i.e., in the standard model.
Resettable Zero-Knowledge
We introduce the notion of Resettable Zero-Knowledge
(rZK), a new security measure for cryptographic protocols which
strengthens the classical notion of zero-knowledge. In essence, an
rZK protocol is one that remains zero knowledge even if an adversary
can interact with the prover many times, each time resetting the
prover to its initial state and forcing him to use the same random
tape.
Under general complexity assumptions, which hold for example if the
Discrete Logarithm Problem is hard, we construct (1) rZK proof-systems
for NP; (2) constant-round resettable witness-indistinguishable
proof-systems for NP; and (3) constant-round rZK arguments for NP in
the public key model where verifiers have fixed, public keys
associated with them.
In addition to shedding new light on what makes zero knowledge
possible (by constructing ZK protocols that use randomness in a
dramatically weaker way than before), rZK has great relevance to
applications. Firstly, we show that rZK protocols are closed under
parallel and concurrent execution and thus are guaranteed to be secure
when implemented in fully asynchronous networks, even if an adversary
schedules the arrival of every message sent. Secondly, rZK protocols
enlarge the range of physical ways in which provers of ZK protocols
can be securely implemented, including devices which cannot reliably
toss coins on line or keep state between invocations. (For
instance, because ordinary smart cards with secure hardware are
resettable, they could not be used to implement securely the provers
of classical ZK protocols, but can now be used to implement securely
the provers of rZK protocols.)
Public-Key Cryptography and Password Protocols: The Multi-User Case
The problem of password authentication over an insecure network
when the user holds only a human-memorizable password has
received much attention in the literature. The first rigorous
treatment was provided by Halevi and Krawczyk (ACM CCS, 1998),
who studied off-line password guessing attacks in the scenario in
which the authentication server possesses a pair of private and
public keys. HK's definition of security concentrates
on the single-user (and single-server) case.
In this work we:
(1) Show the inadequacy of both the Halevi-Krawczyk formalization
and protocol in the case where there is more than a single user:
using a simple and realistic attack, we prove failure of the HK
solution in the two-user case.
(2) Propose a new definition of security for the multi-user case,
expressed in terms of transcripts of the entire system, rather
than individual protocol executions.
(3) Suggest several ways of achieving this security against both
static and dynamic adversaries.
In a recent revision of their paper, Halevi and Krawczyk attempted
to handle the multi-user case. We expose a weakness in their approach.
Improving the Exact Security of Digital Signature Schemes
We provide two contributions to exact security analysis of
digital signatures:
We put forward a new method of constructing Fiat-Shamir-like
signature schemes that yields better "exact security" than the original
Fiat-Shamir method; and
we extend exact security analysis to "exact cost-security analysis" by
showing that digital signature schemes with "loose security" may be
preferable for reasonable measures of cost.
Security of all RSA and Discrete Log Bits
We study the security of individual bits in an RSA
encrypted message E_N(x). We show that given E_N(x), predicting any
single bit in x with only a non-negligible advantage over the trivial
guessing strategy is (through a polynomial time reduction) as hard as
breaking RSA. Moreover, we prove that blocks of O(log log N) bits of x
are computationally indistinguishable from random bits. The results
carry over to the Rabin encryption scheme.
For the discrete exponentiation function g^x modulo p, the analogous
results are demonstrated with probability 1-o(1) over random choices
of the prime p. Finally, we prove that the bits of ax+b
modulo p give hard core predicates for any one-way function f.
Non-Malleable Encryption: Equivalence between Two Notions, and an Indistinguishability-Based Characterization
We prove the equivalence of two definitions of non-malleable
encryption appearing in the literature--- the original one of Dolev, Dwork
and Naor and the later one of Bellare, Desai, Pointcheval and Rogaway. The
equivalence relies on a new characterization of non-malleable encryption in
terms of the standard notion of indistinguishability of Goldwasser and
Micali. We show that non-malleability is equivalent to indistinguishability
under a ``parallel chosen ciphertext attack,'' this being a new kind of
chosen ciphertext attack we introduce, in which the adversary's decryption
queries are not allowed to depend on answers to previous queries, but must be
made all at once. This characterization simplifies both the notion of
non-malleable encryption and its usage, and enables one to see more easily
how it compares with other notions of encryption. The results here apply to
non-malleable encryption under any form of attack, whether chosen-plaintext,
chosen-ciphertext, or adaptive chosen-ciphertext.
A Composition Theorem for Universal One-Way Hash Functions
In this paper we present a new scheme for constructing
universal one-way hash functions that hash arbitrarily long messages
out of universal one-way hash functions that hash fixed-length messages.
The new construction is extremely simple and is also very efficient,
yielding shorter keys than previously proposed composition
constructions.
A forward-secure digital signature scheme
We describe a digital signature scheme in which the
public key is fixed but the secret signing key is updated at regular
intervals so as to provide a <i>forward security</i> property:
compromise of the current secret key does not enable an adversary to
forge signatures pertaining to the past. This can be useful to
mitigate the damage caused by key exposure without requiring
distribution of keys. Our construction uses ideas from the
Fiat-Shamir and Ong-Schnorr identification and
signature schemes, and is proven to be forward secure based
on the hardness of factoring, in the random oracle model. The
construction is also quite efficient.
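The heart of the forward security property is a one-way key update. In the factoring-based setting, updating by squaring modulo a composite is the canonical illustration: recovering an old key from a new one means extracting square roots, which is as hard as factoring. A minimal sketch of just such an update step (Python; toy modulus, our own simplification, with signing and verification omitted):

    # Toy illustration of one-way key evolution (not the full scheme).
    p, q = 1000003, 1000033   # stand-ins; real schemes use large Blum primes
    N = p * q

    def update(sk: int) -> int:
        # sk_{t+1} = sk_t^2 mod N; going backwards requires square roots
        # modulo N, which is as hard as factoring N.
        return sk * sk % N

    sk = 123456789
    for period in range(3):
        sk = update(sk)       # the old sk is erased after each update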
Interleaved Zero-Knowledge in the Public-Key Model
We introduce the notion of Interleaved Zero-Knowledge (iZK), a new
security measure for cryptographic protocols which strengthens
the classical notion of zero-knowledge, in a way suitable for
multiple concurrent executions in an asynchronous environment
like the internet. We prove that iZK protocols are robust:
they are ``parallelizable'', and preserve security when run
concurrently in a fully asynchronous network. Furthermore,
this holds even if the prover's random-pads in all these
concurrent invocations are identical. Thus, iZK protocols are
ideal for smart-cards and other devices which cannot reliably
toss coins or keep state between invocations.