Diffie-Hellman Problems and Bilinear Maps
We investigate relations among the discrete logarithm (DL)
problem, the Diffie-Hellman (DH) problem and the bilinear
Diffie-Hellman (BDH) problem when an efficiently computable
non-degenerate bilinear map $e:G\times G \rightarrow H$ is available. Under a
certain assumption on the order of $G$, we show that the DH
problem on $H$ implies the DH problem on $G$, and both of them are
equivalent to the BDH problem when $e$ is {\it weak-invertible}.
Moreover, we show that given the bilinear map $e$ an injective
homomorphism $f:H\rightarrow G$ enables us to solve the DH problem
on $G$ efficiently, which implies the non-existence of a {\it
self-bilinear} map $e:G\times G \rightarrow G$ when the DH problem
on $G$ is hard. Finally, we introduce a sequence of bilinear maps
and its applications.
How to convert any ID-based Signature Schemes
This paper describes how any Identity Based Signature scheme
can be used to implement a Group Signature scheme.
The performance of the generated Group Signature scheme is similar to
the performance of the underlying ID-based Signature scheme.
This makes our proposal very attractive, since most existing group
signature schemes proposed so far are grossly inefficient. In contrast,
ID-based signature schemes can be very efficient, especially if they
use elliptic curves and pairings.
Universal Padding Schemes for RSA
A common practice to encrypt with RSA is to first apply a padding scheme to the message and then to exponentiate the result with the public exponent; an example of this is OAEP. Similarly, the usual way of signing with RSA is to apply some padding scheme and then to exponentiate the result with the private exponent, as for example in PSS. Usually, the RSA modulus used for encrypting is different from the one used for signing. The goal of this paper is to simplify this common setting. First, we show that PSS can also be used for encryption, yielding an encryption scheme that is semantically secure against adaptive chosen-ciphertext attacks in the random oracle model. As a result, PSS can be used indifferently for encryption or signature. Moreover, we show that PSS allows the same RSA key pair to be safely used for both encryption and signature, in a concurrent manner. More generally, we show that using PSS the same set of keys can be used for both encryption and signature for any trapdoor partial-domain one-way permutation. The practical consequences of our result are important: PKIs and public-key implementations can be significantly simplified.
Point Multiplication on Ordinary Elliptic Curves over Fields of Characteristic Three
In this paper we investigate the efficiency of cryptosystems based on
ordinary elliptic curves over fields of characteristic three.
We look at different representations for curves and consider some of
the algorithms necessary to perform efficient point multiplication.
We give example timings for our operations and compare them with
timings for curves in characteristic two of a similar level of security.
We show that using the Hessian form in characteristic three
produces a point multiplication algorithm less than $50$ percent slower
than the equivalent system in characteristic two.
Thus it is conceivable that curves in characteristic three could
offer greater performance than currently perceived by the community.
A Note on the Bilinear Diffie-Hellman Assumption
The Bilinear Diffie-Hellman (BDH) intractability assumption is required to establish the security of new Weil-pairing based cryptosystems. BDH is reducible to most of the older believed-to-be-hard discrete-log and DH problems, but there is no known reduction from any of those problems to BDH. Let the bilinear mapping be $e:G_1 \times G_1 \rightarrow G_2$, where $G_1$ and $G_2$ are cyclic groups. We show that a many-one reduction from any of the relevant problems to BDH has to include an efficient mapping $\phi:G_2 \rightarrow G_1$ with $\phi(g^{x})=f(x)P$, where $g$ and $P$ are generators of the corresponding cyclic groups. The function $\phi$ must be used in the reduction either before or after the call to the BDH oracle. We show that if $f(x)=ax^n+b$ for any constants $a,b,n$, then $\phi$ could be used as an oracle for a probabilistic polynomial-time solution of Decision Diffie-Hellman in $G_2$. Thus such a reduction is unlikely.
An Efficient Procedure to Double and Add Points on an Elliptic Curve
We present an algorithm that speeds exponentiation on a
general elliptic curve by an estimated 3.8% to 8.5% over the best
known general exponentiation methods when using affine coordinates.
This is achieved by eliminating a field multiplication when we compute 2P + Q from given points P, Q on the curve. We give
applications to simultaneous multiple exponentiation and to the
Elliptic Curve Method of factorization. We show how this
improvement together with another idea can speed the
computation of the Weil and Tate pairings by up to 7.8%.
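As a baseline for the saving described above, the following Python sketch computes 2P + Q naively as (P + Q) + P with plain affine formulas; the curve, prime and points are made-up toy values, and this is not the paper's optimized composite formula.

    # Illustrative sketch only: plain affine arithmetic on y^2 = x^3 + a*x + b
    # over GF(p). The paper's improvement removes one field multiplication from
    # the composite 2P + Q step; here it is done as two ordinary additions.

    def ec_add(P, Q, a, p):
        if P is None:
            return Q
        if Q is None:
            return P
        x1, y1 = P
        x2, y2 = Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None                                   # point at infinity
        if P == Q:
            lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
        x3 = (lam * lam - x1 - x2) % p
        y3 = (lam * (x1 - x3) - y1) % p
        return (x3, y3)

    def double_and_add_point(P, Q, a, p):
        # Naive 2P + Q as (P + Q) + P; each affine addition costs one field
        # inversion plus a few multiplications.
        return ec_add(ec_add(P, Q, a, p), P, a, p)

    # Toy curve y^2 = x^3 + 2x + 3 over GF(97), with points on the curve.
    p, a = 97, 2
    P, Q = (3, 6), (0, 10)
    print(double_and_add_point(P, Q, a, p))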
On Linear Redundancy in the AES S-Box
We show the existence of a previously unknown linear redundancy
property of the only nonlinear component of the AES block cipher.
It is demonstrated that the outputs of the $8\times 8$ Rijndael S-box
(based on inversion in a finite field) are all equivalent under
affine transformation. The method used to discover these affine
relations is novel and exploits a new fundamental result on the
invariance properties of the local connection structure of affine
equivalence classes. As well as increasing existing concerns about
the security of the AES, these results may also have serious
consequences for many other ciphers recently proposed for
standardisation.
The GGM Construction does NOT yield Correlation Intractable Function Ensembles
We consider the function ensembles emerging from the
construction of Goldreich, Goldwasser and Micali (GGM),
when applied to an arbitrary pseudorandom generator.
We show that, in general, such functions
fail to yield correlation intractable ensembles.
Specifically, it may happen that, given a description of such a function,
one can easily find an input that is mapped to zero under this function.
A New Class of Unsafe Primes
In this paper,
a new special-purpose factorization algorithm is presented,
which finds a prime factor $p$ of an integer $n $
in polynomial time, if $4p-1$ has the form
$d b^2$ where $d \in \{3, 11, 19, 43, 67, 163\}$
and $b$ is an integer.
Hence such primes should be avoided when selecting
RSA secret keys. Some generalizations of the algorithm are
discussed in the paper as well.
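For illustration, a short Python sketch that merely tests the stated condition on a candidate prime (it checks whether $4p-1$ has the form $d b^2$; it is not the factorization algorithm itself):

    from math import isqrt

    def is_unsafe_prime(p: int) -> bool:
        # Test whether 4p - 1 = d * b^2 for d in {3, 11, 19, 43, 67, 163}.
        m = 4 * p - 1
        for d in (3, 11, 19, 43, 67, 163):
            if m % d:
                continue
            b = isqrt(m // d)
            if b * b == m // d:
                return True
        return False

    # Example: p = 7 gives 4*7 - 1 = 27 = 3 * 3^2, so 7 falls into the class.
    assert is_unsafe_prime(7)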
Clock-Controlled Alternating Step Generator
A new construction of a pseudorandom generator based on a simple combination of three feedback shift registers (FSRs) is introduced. The main characteristic of its structure is that the output of one of the three FSRs controls the clocking of the other two FSRs.
This construction allows users to generate a large family of sequences using the same initial states and the same feedback functions of the three combined FSRs. The construction is related to the Alternating Step Generator that is a special case of this construction.
The period and lower and upper bounds on the linear complexity of the output sequences are established for the case in which the control FSR generates a de Bruijn sequence and the other two FSRs generate m-sequences. Furthermore, it is shown that short patterns in these output sequences occur equally often and that the sequences are secure against correlation attacks. All these properties make the construction a suitable crypto-generator for stream cipher applications.
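For readers unfamiliar with the special case mentioned above, here is a toy Python sketch of the classic Alternating Step Generator; the feedback taps and seeds are arbitrary illustrative values, not parameters from the paper.

    # Toy Alternating Step Generator: one register's output decides which of
    # the other two registers is clocked; the output bit is the XOR of the
    # most recent output bits of the two data registers.

    class LFSR:
        def __init__(self, taps, state):
            self.taps = taps              # tap positions into the state list
            self.state = list(state)      # bits; index 0 is the output stage
            self.last = self.state[0]     # most recently produced output bit

        def clock(self):
            fb = 0
            for t in self.taps:
                fb ^= self.state[t]
            self.last = self.state.pop(0)
            self.state.append(fb)
            return self.last

    def alternating_step(control, a, b, nbits):
        out = []
        for _ in range(nbits):
            # The control register decides which of A and B advances;
            # the other register simply repeats its previous output bit.
            if control.clock():
                a.clock()
            else:
                b.clock()
            out.append(a.last ^ b.last)
        return out

    control = LFSR(taps=(0, 2), state=(1, 0, 1))
    a = LFSR(taps=(0, 3), state=(1, 1, 0, 1))
    b = LFSR(taps=(0, 2), state=(0, 1, 1, 1, 0))
    print(alternating_step(control, a, b, 16))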
Efficient Arithmetic on Hyperelliptic Curves
Using the Frobenius endomorphism, the operation of computing
scalar multiples in the Jacobian of a hyperelliptic curve is sped up
considerably. The curves considered are Koblitz (i.e. subfield)
curves, defined over a small finite field and then considered
over a large extension field. We deal with computation of
the group order over various extension fields, algorithms to obtain
the mentioned speed-up, and experimental results concerning both
issues. Additionally, an alternative set-up is treated which uses arithmetic in the finite field only and allows shorter code for similar security.
Furthermore, explicit formulae to perform the arithmetic in the ideal class group are derived and can thus be used for implementation in hardware; in software they are also faster than the generic Cantor algorithm. As a second group suitable for cryptographic applications, the trace-zero variety is considered. Here we investigate the group operation and deal with security issues.
Secret sharing schemes on access structures with intersection number equal to one
The characterization of ideal access structures and
the search for bounds on the optimal information rate
are two important problems in secret sharing.
These problems are studied in this paper for access structures
with intersection number equal to one, that is, access structures
such that there is at most one participant in the intersection
of any two different minimal qualified subsets.
The main result in this work is the complete characterization
of the ideal access structures with intersection number equal to one.
In addition, bounds on the optimal information rate
are provided for the non-ideal case.
An Extension of Kedlaya's Algorithm to Hyperelliptic Curves in Characteristic 2
We present an algorithm for computing the zeta function of an arbitrary hyperelliptic curve
over a finite field $\FF_q$ of characteristic 2, thereby extending the algorithm of Kedlaya
for odd characteristic.
For a genus $g$ hyperelliptic curve defined over $\FF_{2^n}$,
the average-case time complexity is $O(g^{4 + \varepsilon} n^{3 + \varepsilon})$
and the average-case space complexity is $O(g^{3} n^{3})$, whereas the worst-case time and space
complexities are $O(g^{5 + \varepsilon} n^{3 + \varepsilon})$ and $O(g^{4} n^{3})$ respectively.
Forward-Secure Signatures with Fast Key Update
In regular digital signatures, once the secret key is compromised, all signatures, even those that were issued by the honest signer before the compromise, will not be trustworthy any more. Forward-secure signatures have been proposed to address this major shortcoming.
We present a new forward-secure signature scheme, called KREUS, with several advantages. It has the most efficient Key Update of all known schemes, requiring just a single modular squaring. Our scheme thus enables more frequent Key Update and hence allows shorter time periods, enhancing security: fewer signatures might become invalid as a result of key compromise. In addition, the on-line component of signing is also very efficient, consisting of a single multiplication. We precisely analyze the total signer costs and show that they are lower when the number of signatures per time period is small; the advantage of our scheme increases considerably as the number of time periods grows.
Our scheme's security relies on the Strong-RSA assumption and the random-oracle-based Fiat-Shamir transform.
On the Power of Claw-Free Permutations
Probabilistic Signature Scheme (PSS), Full Domain Hash (FDH) and
several of their variants are widely used signature schemes, which can
be formally analyzed in the random oracle model. These schemes output
a signature of the form (f^{-1}(y),pub), where y somehow depends
on the message signed (and pub) and f is some public trapdoor
permutation (typically RSA). Interestingly, all these signature
schemes can be proven {\em asymptotically} secure for an {\em
arbitrary} trapdoor permutation f, but their {\em exact} security
seems to be significantly better for {\em special} trapdoor
permutations like RSA. This leads to two natural questions:
(1) can the asymptotic security analysis be improved with general
trapdoor permutations?; and, if not, (2) what {\em general
cryptographic assumption} on f --- enjoyed by specific functions
like RSA --- is ``responsible'' for the improved security?
We answer both these questions. First, we show that if f is a
``black-box'' trapdoor permutation, then the poor exact security is
unavoidable. More specifically, the ``security loss'' for general
trapdoor permutations is \Omega(q_hash), where q_hash is the
number of random oracle queries made by the adversary (which could be
quite large). On the other hand, we show that all the security
benefits of the RSA-based variants come into effect once f comes
from a family of {\em claw-free permutation} pairs. Our results
significantly narrow the current ``gap'' between general trapdoor
permutations and RSA to the ``gap'' between trapdoor permutations and
claw-free permutations. Additionally, they can be viewed as the first
{\em security/efficiency} separation between these basic cryptographic
primitives. In other words, while it was already believed that certain
cryptographic objects {\em can} be built from claw-free permutations
but {\em not} from general trapdoor permutations, we show that certain
important schemes (like FDH and PSS) provably work with {\em either},
but enjoy a much better tradeoff between security and efficiency when
deployed with claw-free permutations.
Applying General Access Structure to Metering Schemes
In order to decide on advertisement fees for web servers, Naor and Pinkas introduced metering schemes secure against coalitions of corrupt servers and clients. In their schemes any server is able to construct a proof to be sent to an audit agency if and only if it has been
visited by at least a certain number of clients. Several researchers have generalized the idea of Naor and Pinkas: first, metering schemes with pricing and dynamic multi-threshold metering schemes were proposed; later the solution was extended to allow for
general access structures, and an approach based on linear algebra was introduced. In this paper we are interested in the efficiency of applying general access structures and linear algebra techniques to metering schemes. We propose a new model considering general access structures for clients, corrupted clients and servers. Then we bind the
access structures for clients and corrupted clients into one. We propose a new metering scheme, which is more efficient w.r.t.\ communication complexity and memory requirements than the scheme of Blundo \textit{et al.}
An Upper Bound on the Size of a Code with the $k$-Identifiable Parent Property
The paper gives an upper bound on the size of a $q$-ary code of length
$n$ that has the $k$-identifiable parent property. One consequence of
this bound is that the optimal rate of such a code is determined in
many cases when $q\rightarrow\infty$ with $k$ and $n$ fixed.
Encryption-Scheme Security in the Presence of Key-Dependent Messages
Encryption that is only semantically secure should not be used on messages that depend on the underlying secret key; all bets are off when, for example, one encrypts the value of a shared key K under K itself.
Here we introduce a new notion of security, KDM security, appropriate for key-dependent messages. The notion makes sense in both the public-key and shared-key settings. For the latter we show that KDM security is easily achievable within the random-oracle model.
By developing and achieving stronger notions of encryption-scheme security it is hoped that protocols which are proven secure under ``formal'' models of security can, in time, be safely realized by generically instantiating their primitives.
A New Statistical Testing for Symmetric Ciphers and Hash Functions
This paper presents a new, powerful statistical test for symmetric
ciphers and hash functions which allowed us to detect biases in both
of these systems where previously known tests failed. We first give a
complete characterization of the Algebraic Normal Form (ANF) of
random Boolean functions by means of the Möbius transform. Then we
build a new test based on comparing the structure of the Algebraic
Normal Forms of the Boolean functions characterizing symmetric ciphers
and hash functions with that of purely random Boolean functions.
Detailed testing results on several cryptosystems
are presented. As a main result we show that AES, DES, Snow and
Lili-128 fail all or part of the tests and thus exhibit strong biases.
Identity-Based Signcryption
A signcryption scheme is a scheme that provides private and authenticated delivery of messages between two parties.
It does this in a more efficient manner than a straightforward composition of an encryption scheme with a signature scheme.
An identity-based cryptosystem is one in which the public key may be any string (or may be derived from any string).
In this paper we propose an identity-based signcryption scheme.
We give a security model for such a scheme and sketch the details of how our scheme may be proved secure in this model.
A new public key encryption scheme provably secure against adaptive chosen cipher-text attack
We present a new public key cryptosystem based on
the square decisional Diffie-Hellman problem. The
scheme is provably secure against adaptive chosen cipher-text
attack under the hardness assumption of the square decisional
Diffie-Hellman problem. Compared with Cramer and Shoup's notable
public key scheme, our scheme enjoys several nice features:
(1) both schemes are provably secure against adaptive chosen
cipher-text attack under the intractability paradigm (the security
of the Cramer-Shoup scheme is based on the standard decisional
Diffie-Hellman problem while ours is based on the square decisional
Diffie-Hellman problem); (2) the computational and communication
complexity of our scheme is equivalent to that of the Cramer-Shoup
scheme; however, the test function of the Cramer-Shoup scheme is
linear while ours is non-linear, therefore our reduction is
more efficient.
Generating Large Non-Singular Matrices over an Arbitrary Field with Blocks of Full Rank
This note describes a technique for generating
large non-singular matrices with blocks of full rank.
While this may be of independent interest, our motivation
arises in the white-box implementation of cryptographic algorithms with S-boxes.
The (a, b)-Shrinking Generator
A new construction of a pseudorandom generator based on a simple combination of two LFSRs is introduced. This construction allows users to generate a large family of sequences using the same initial states and the same characteristic feedback polynomials of the two combined LFSRs. The construction is related to the so-called shrinking generator that is a special case of this construction.
The construction has attractive properties such as exponential period, exponential linear complexity, good statistical properties and security against correlation attacks. All these properties make it a suitable crypto-generator for stream cipher applications.
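As with the previous entry, a toy Python sketch of the classic shrinking generator (the special case referred to above) may help fix ideas; the taps and seeds are arbitrary and not taken from the paper.

    # Classic shrinking generator: LFSRs A and S are clocked together, and
    # the A-bit is kept only when the S-bit is 1.

    def lfsr_bits(state, taps, n=8):
        # Fibonacci LFSR over GF(2); yields one output bit per clock.
        while True:
            out = state & 1
            fb = 0
            for t in taps:
                fb ^= (state >> t) & 1
            state = (state >> 1) | (fb << (n - 1))
            yield out

    def shrinking_generator(a_bits, s_bits, nbits):
        out = []
        while len(out) < nbits:
            a, s = next(a_bits), next(s_bits)
            if s:                      # keep the A-bit only when S outputs 1
                out.append(a)
        return out

    A = lfsr_bits(0b10110101, taps=(0, 2, 3, 4))
    S = lfsr_bits(0b01101110, taps=(0, 4, 5, 6))
    print(shrinking_generator(A, S, 16))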
Building curves with arbitrary small MOV degree over finite prime fields
We investigate the possibility of building elliptic curves over finite prime fields having given small MOV-degrees. Using complex multiplication, we give many examples of such curves.
A Fuzzy Vault Scheme
We describe a simple and novel cryptographic construction that we
refer to as a {\em fuzzy vault}. A player Alice may place a secret value $\kappa$ in a fuzzy vault and ``lock'' it using a set $A$ of elements from some public universe $U$. If Bob tries to ``unlock'' the vault using a set $B$ of similar length, he obtains $\kappa$ only if $B$ is close to $A$, i.e., only if $A$ and $B$ overlap substantially. In contrast to previous constructions of this flavor, ours possesses the useful feature of {\em order invariance}, meaning that the ordering of $A$ and $B$ is immaterial to the functioning of the vault. As we show, our scheme enjoys provable security against a computationally unbounded attacker.
TMAC: Two-Key CBC MAC
In this paper, we propose TMAC, Two-Key CBC Message Authentication Code. TMAC is a refinement of XCBC (which is a variant of CBC MAC) shown by Black and Rogaway. We use only $(k+n)$-bit key for TMAC while XCBC uses $(k+2n)$-bit key, where $k$ is the key length of the underlying block cipher and $n$ is its block length. The cost for reducing the size of secret keys is almost negligible; only one shift and one conditional XOR. Similarly to XCBC, our algorithm correctly and efficiently handles messages of arbitrary bit length.
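The "one shift and one conditional XOR" cost refers to multiplying an n-bit key by the constant u = x in GF(2^n). A minimal Python sketch of such a doubling step is shown below; the reduction constant 0x87 is the one conventionally used for n = 128 in later CMAC-style constructions and is given here only as an illustrative assumption, not as TMAC's exact specification.

    # Multiplying an n-bit value by u = x in GF(2^n): one shift plus one
    # conditional XOR of the reduction constant (0x87 is the usual choice
    # for n = 128; treat it as an illustrative assumption here).

    def times_u(value: int, n: int = 128, reduction: int = 0x87) -> int:
        value <<= 1                       # the shift
        if value >> n:                    # the conditional XOR on carry-out
            value = (value ^ reduction) & ((1 << n) - 1)
        return value

    k2 = 0x0123456789ABCDEF0123456789ABCDEF
    print(hex(times_u(k2)))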
Multiplicative Masking and Power Analysis of AES
The recently proposed multiplicative masking countermeasure against power
analysis attacks on AES is interesting as it does not require the costly recomputation and RAM storage
of S-boxes for every run of AES. This is important for applications where the
available space is very limited, such as smart card applications.
Unfortunately, it is here shown that this method is
in fact inherently vulnerable to differential power analysis.
Other possible random masking methods are also discussed.
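A short Python illustration of the inherent weakness alluded to above, under the common observation that a multiplicative mask in GF(2^8) can never blind the value zero:

    # Toy demonstration: masking an S-box input x by a random nonzero r in
    # GF(2^8) spreads every nonzero x over the whole multiplicative group,
    # but always maps x = 0 to 0, which is the kind of value-dependent leak
    # differential power analysis can exploit.

    def gf_mul(a: int, b: int) -> int:
        # Multiplication in GF(2^8) modulo the AES polynomial x^8+x^4+x^3+x+1.
        p = 0
        for _ in range(8):
            if b & 1:
                p ^= a
            carry = a & 0x80
            a = (a << 1) & 0xFF
            if carry:
                a ^= 0x1B
            b >>= 1
        return p

    # A nonzero value takes all 255 nonzero masked values ...
    print({gf_mul(0x53, r) for r in range(1, 256)} == set(range(1, 256)))
    # ... but zero is never masked at all.
    print(all(gf_mul(0x00, r) == 0 for r in range(1, 256)))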
Efficient and Concurrent Zero-Knowledge from any public coin HVZK protocol
We show how to efficiently transform any public coin honest verifier zero knowledge proof system into a proof system that is concurrent zero-knowledge with respect to any (possibly cheating) verifier via
black box simulation. By efficient we mean that our transformation incurs only an additive overhead, both in terms of the number of rounds and the computational and communication complexity of each round, independently of the complexity of the original protocol. Moreover, the transformation preserves (up to negligible additive terms) the soundness and completeness error probabilities. The new proof system is proved secure based on the Decisional Diffie-Hellman (DDH) assumption, in the standard model of computation, i.e., no random oracles, shared random strings, or public key infrastructure is assumed. In addition to the introduction of a practical protocol, this construction provides yet another example of ideas in plausibility results that turn into ideas in the construction of practical protocols.
We prove our main result by developing a mechanism for simulatable commitments that may be of independent interest. In particular, it allows a weaker result that is interesting as well. We present
an efficient transformation of any honest verifier public-coin
computational zero-knowledge proof into a (public coin) computational zero-knowledge proof secure against any verifier. The overhead of this second transformation is minimal: we only increase the number of rounds by 3, and increase the computational cost by 2 public key operations for each round of the original protocol. The cost of the more general transformation leading to concurrent zero knowledge is also close to optimal (for black box simulation), requiring only omega(log n) additional rounds, where n is a security parameter and omega(log n) can be any superlogarithmic function of n (e.g., log(n)log^*(n)), and omega(log n) additional public key operations for each round of the original protocol.
On Chosen Ciphertext Security of Multiple Encryptions
We consider the security of multiple and possibly related
plaintexts in the context of a chosen ciphertext attack.
That is, in addition to (and concurrently with) obtaining encryptions
of multiple plaintexts under the same key,
the attacker may issue encryption, decryption, and
partial-information queries.
Loosely speaking, an encryption scheme is considered
secure under such attacks if all that the adversary can learn from
such attacks about the selected plaintexts can be obtained from the
corresponding partial information queries.
The above definition extends the definition of semantic security
under chosen ciphertext attacks (CCAs),
which is also formulated in this work.
The extension is in considering the security of multiple plaintexts
rather than the security of a single plaintext. We prove that both these
formulations are equivalent to the standard formulation of CCA,
which refers to indistinguishability of encryptions. The good news
is that any encryption scheme that is secure in the standard CCA
sense is in fact secure in the extended model.
The treatment holds both for public-key and private-key
encryption schemes.
Constructing Elliptic Curves with Prescribed Embedding Degrees
Pairing-based cryptosystems depend on the existence of groups where
the Decision Diffie-Hellman problem is easy to solve, but the
Computational Diffie-Hellman problem is hard. Such is the case of
elliptic curve groups whose embedding degree is large enough to
maintain a good security level, but small enough for arithmetic
operations to be feasible. However, the embedding degree is usually
enormous, and the few previously known suitable elliptic curve groups
had embedding degree $k \leqslant 6$. In this note, we examine
criteria for curves with larger $k$ that generalize prior work by
Miyaji et al. based on the properties of cyclotomic
polynomials, and propose efficient representations for the
underlying algebraic structures.
Higher Order Correlation Attacks, XL algorithm and Cryptanalysis of Toyocrypt
There is abundant literature on how to use linear approximations to break various stream ciphers. In this paper we show that it is possible to design an efficient attack based on higher degree approximations. We reduce the attack to solving an overdefined system of multivariate equations and use the XL algorithm from Eurocrypt 2000. The complexity of the XL algorithm is sometimes controversial; however, in practice and for the cases relevant here (many more equations than variables), we show that the behaviour of XL is predictable with the utmost precision and 100% accuracy.
Our new attack allows us to efficiently break stream ciphers that are known to be immune to all previously known attacks. It also has surprisingly small and loose requirements on the keystream needed, giving powerful attack scenarios, leading even to (almost) ciphertext-only attacks.
For example, the new attack breaks the stream cipher Toyocrypt submitted to the Japanese government Cryptrec call for cryptographic primitives, and one of only two candidates accepted to the second phase of the Cryptrec evaluation process. Toyocrypt is a 128-bit stream cipher and at the time of submission it was claimed to resist all known attacks. Later, Mihaljevic and Imai showed that the effective key length in Toyocrypt is only 96 bits. Still, Toyocrypt may be easily modified to avoid this attack and have a full 128-bit key. Our best attack breaks both Toyocrypt and the modified versions, taking exactly 2^92 CPU clocks for a 128-bit cipher. Moreover, it works under much less restrictive conditions than the previous attack, for example knowing ONLY that the plaintext is in English.
Adapting the weaknesses of the Random Oracle model to the Generic Group model.
This paper presents results showing that there exist
problems that are provably hard in the generic group model but
easy to solve whenever the random encoding function is replaced
with a specific encoding function (or one drawn from a specific
set of encoding functions). We also show that there exist
cryptographic schemes that are provably hard in the generic group
model but easy to break in practice.
Efficient and Player-Optimal Strong Consensus
In the {\em strong consensus} problem,
$n$ players attempt to reach agreement on a value initially held
by {\em one of the good players}, despite the (malicious) behavior
of up to $t$ of them. Although the problem is closely related to the
standard consensus problem (aka Byzantine agreement),
the only known solution with the optimal number of players requires exponential
computation and communication in the unconditional setting.
In this paper we study this problem, and present {\em efficient} protocols
and tight lower bounds for several standard distributed computation models
--- unconditional, computational, synchronous, and asynchronous.
Towards Provably-Secure Timed E-Commerce: The Trusted Delivery Layer
Certified exchange of messages is an essential mechanism for e-commerce; the timing aspects (timeouts and timestamps) are very important for practical applications. However, existing formal methods for security analysis assume simplified, completely synchronous or completely asynchronous models, and cannot deal with the timing aspects of these (and other e-commerce) protocols. We present a model for realistic, Δ-synchronized adversarial settings. We then present a simple, efficient and provably-secure protocol for certified, time-stamped message delivery, providing precise guarantees of delay and timestamps. Our model and analysis use concrete (rather than asymptotic) notions of security.
A semantically secure elliptic curve RSA scheme with small expansion factor
We propose an elliptic curve scheme over the ring $\entq$, which is efficient and semantically secure in the standard model, and has expansion
factor 2 (previous schemes with similar features have expansion factors greater than or equal
to 4).
Demytko's RSA type scheme has been used as an underlying primitive
to obtain efficiency and probabilistic encryption. Semantic security of the scheme is based on a new decisional assumption, namely,
the Decisional Small Root Assumption. Confidence in this assumption is also discussed.
Authentication of Quantum Messages
Authentication is a well-studied area of classical cryptography: a sender A and a receiver B sharing a classical private key want to exchange a classical message with the guarantee that the message has not been modified or replaced by a dishonest party with control of the communication line. In this paper we study the authentication of messages composed of quantum states.
We give a formal definition of authentication in the quantum setting. Assuming A and B have access to an insecure quantum channel and share a private, classical random key, we provide a non-interactive scheme that both enables A to encrypt and authenticate (with unconditional security) an m qubit message by encoding it into m+s qubits, where the error probability decreases exponentially in the security parameter s. The scheme requires a private key of size 2m+O(s). To achieve this, we give a highly efficient protocol for testing the purity of shared EPR pairs.
It has long been known that learning information about a general quantum state will necessarily disturb it. We refine this result to show that such a disturbance can be done with few side effects, allowing it to circumvent cryptographic protections. Consequently, any scheme to authenticate quantum messages must also encrypt them. In contrast, no such constraint exists classically: authentication and encryption are independent tasks, and one can authenticate a message while leaving it publicly readable.
This reasoning has two important consequences: On one hand, it allows us to give a lower bound of 2m key bits for authenticating m qubits, which makes our protocol asymptotically optimal. On the other hand, we use it to show that digitally signing quantum states is impossible, even with only computational security.
Some Applications of Threshold Signature Schemes to Distributed Protocols
In a threshold signature scheme, a group of players share a secret information in such a way that only those subsets with a minimum number of players can compute a valid signature. We propose methods to construct some useful and computationally secure distributed protocols from threshold signature schemes satisfying some suitable properties.
Namely, we prove that any threshold signature scheme which is non-interactive can be used to construct a metering scheme. We also design a distributed key distribution scheme from any deterministic threshold signature scheme. The security of these new schemes is reduced to the security of the corresponding threshold signature schemes. Furthermore, the constructed protocols achieve some desirable properties.
Applications of Multilinear Forms to Cryptography
We study the problem of finding efficiently computable non-degenerate multilinear maps from $G_1^n$ to $G_2$, where $G_1$ and $G_2$ are groups of the same prime order, and where computing discrete logarithms in $G_1$ is hard. We present several applications to cryptography, explore directions for building such maps, and give some reasons to believe that finding examples with $n>2$ may be difficult.
On the efficiency of the Clock Control Guessing Attack
Many bitstream generators are based on linear feedback shift registers. A widespread technique for the cryptanalysis of those generators is the linear consistency test (LCT). In this paper, we consider an application of the LCT in cryptanalysis of clock-controlled bitstream generators, called \textsl{clock control guessing}. We give a general and very simple method for estimating the efficiency of clock control guessing, yielding an upper bound on the effective key length of a whole group of bitstream generators. Finally, we apply the technique against a number of clock-controlled generators, such as the A5/1, alternating step generator, step1-step2 generator, cascade generator, and others.
Breaking and Provably Repairing the SSH Authenticated Encryption Scheme: A Case Study of the Encode-then-Encrypt-and-MAC Paradigm
The Secure Shell (SSH) protocol is one of the most popular
cryptographic protocols on the Internet. Unfortunately, the current
SSH authenticated encryption mechanism is insecure. In this paper, we
propose several fixes to the SSH protocol and, using techniques from
modern cryptography, we prove that our modified versions of SSH meet
strong new chosen-ciphertext privacy and integrity requirements.
Furthermore, our proposed fixes will require relatively little
modification to the SSH protocol and to SSH implementations. We
believe that our new notions of privacy and integrity for encryption
schemes with stateful decryption algorithms will be of independent
interest.
Key-Insulated Public-Key Cryptosystems
Cryptographic computations (decryption, signature generation, etc.)
are often performed on a relatively insecure device (e.g., a mobile
device or an Internet-connected host) which cannot be trusted to
maintain secrecy of the private key. We propose and investigate the
notion of \emph{key-insulated security} whose goal is to minimize the damage
caused by secret-key exposures. In our model, the secret key(s)
stored on the insecure device are refreshed at discrete time periods
via interaction with a physically-secure --- but
computationally-limited --- device which stores a ``master key''. All
cryptographic computations are still done on the insecure device, and
the public key remains unchanged. In a (t, N)-key-insulated scheme, an
adversary who compromises the insecure device and obtains secret keys
for up to t periods of his choice is unable to violate the security
of the cryptosystem for \emph{any} of the remaining N-t periods.
Furthermore, the scheme remains secure (for \emph{all} time periods)
against an adversary who compromises \emph{only} the physically-secure
device.
We notice that key-insulated schemes significantly improve the security
guarantee of forward-secure schemes [A97,BM99], in which exposure
of the secret key at even a single time period (necessarily)
compromises the security of the system for all future time
periods. This improvement is achieved with minimal cost: infrequent
key updates with a (possibly untrusted) secure device.
We focus primarily on key-insulated public-key encryption. We construct a
(t,N)-key-insulated encryption scheme based on any (standard) public-key
encryption scheme, and give a more efficient construction based on the
DDH assumption. The latter construction is then extended to achieve
chosen-ciphertext security.
Attack on Private Signature Keys of the OpenPGP Format, PGP(TM) Programs and Other Applications Compatible with OpenPGP
The article describes an attack on the OpenPGP format which leads to disclosure of the private signature keys of the DSA and RSA algorithms. The OpenPGP format is used in a number of applications including PGP, GNU Privacy Guard and other programs specified on the list of products compatible with OpenPGP, which is available at http://www.pgpi.org/products. Therefore all these applications must undergo the same revision as the actual PGP program. The success of the attack was practically verified and demonstrated on the PGP program, version 7.0.3, with a combination of AES and DH/DSS algorithms. As the private signature key is the basic information of the whole system which is kept secret, it is encrypted using a strong cipher. However, we show that this protection is illusory, as the attacker has to attack neither this cipher nor the user's secret passphrase. A modification of the private key file in a certain manner and subsequent capturing of one signed message is sufficient for a successful attack. Insufficient protection of the integrity of both the public and private parts of signature keys in the OpenPGP format is analyzed for the DSA and RSA algorithms, and on the basis of this, a procedure of attacks on both private signature keys is shown. The attacks apply to all lengths of parameters (moduli, keys) of RSA and DSA. Finally, cryptographic measures for correcting the OpenPGP format as well as the PGP format are proposed.
Fault based cryptanalysis of the Advanced Encryption Standard
In this paper we describe several fault attacks on the
Advanced Encryption Standard (AES).
First, using optical fault induction attacks as recently
publicly presented by Skorobogatov and Anderson \cite{SA}, we
present an implementation independent fault attack on AES.
This attack is able to determine the complete $128$-bit
secret key of a sealed tamper-proof smartcard by
generating $128$ faulty ciphertexts.
Second, we present
several implementation-dependent fault attacks on AES.
These attacks
rely on the observation that, due to the AES's known timing analysis
vulnerability (as pointed out by Koeune and Quisquater \cite{KQ}),
any implementation of the AES must ensure data-independent timing
behavior for the so-called {\tt xtime} operation of the AES. We present
fault attacks on AES based on various timing-analysis-resistant
implementations of the {\tt xtime} operation.
Our strongest attack in this direction
uses a very liberal fault model and requires only $256$ faulty
encryptions to determine a $128$-bit key.
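For context, xtime is multiplication by x in GF(2^8) as used in AES's MixColumns. The sketch below contrasts the classic branchy formulation with a generic branch-free one; it is an illustrative example of a timing-analysis-resistant style, not one of the specific implementations attacked in the paper.

    # xtime: multiplication by x in GF(2^8), reduced modulo x^8+x^4+x^3+x+1.

    def xtime_branchy(a: int) -> int:
        # Data-dependent branch: the classic timing-leaky pattern.
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        return a & 0xFF

    def xtime_branchfree(a: int) -> int:
        # Same result, no branch: the carry bit selects the reduction constant.
        return ((a << 1) ^ ((a >> 7) * 0x1B)) & 0xFF

    assert all(xtime_branchy(a) == xtime_branchfree(a) for a in range(256))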
How to repair ESIGN
The ESIGN signature scheme was provided with an inadequate proof
of security. We propose two techniques to repair the scheme,
which we name ESIGN-D and ESIGN-R.
Another improvement of ESIGN is encouraged, where the public key
is hashed together with the message.
This allows a security proof in the multi-key setting.
Additionally, the lower security of ESIGN compared to RSA-PSS
leads us to suggest that a common public key be used for ESIGN and RSA-PSS,
leaving to the signer the choice between fast signing and better
security.
Fault attacks on RSA with CRT: Concrete Results and Practical Countermeasures
This article describes concrete results and practically validated countermeasures concerning differential fault attacks
on RSA using the CRT. It especially investigates smartcards with an RSA coprocessor where all hardware countermeasures to
defeat such fault attacks have been switched off.
This scenario was chosen in order to completely analyze the resulting effects
and errors occurring inside the hardware. Using the results of this kind of physical
stress attack enables the development of completely reliable software countermeasures.
Although {\em successful\/} RSA attacks on the investigated hardware were only possible with expensive enhanced
laboratory equipment, the effects on the unprotected hardware were tremendous. This caused a great deal of analysis effort to
investigate what really happened during the attack; this is addressed in this paper.
We first report on the nature of the resulting errors within the hardware due to the physical stress applied to the
smartcard. Thereafter, we describe the experiments and results with a very efficient and prominent software RSA-CRT DFA
countermeasure. This method was shown to be insufficient, i.e., it detected nearly no errors when we introduced
stress at the right position during the computation.
Naturally, a detailed error analysis model followed, specifying every failure point during the RSA-CRT operation.
This model finally allowed us to develop and present here new, very practically oriented software countermeasures against
the observed and characterized fault attacks.
Finally, we present the security analysis of our newly developed software RSA-CRT DFA countermeasures.
Thanks to their careful specification according to the observed and analyzed errors, they resisted all kinds of physical
stress attacks and were able to detect any subtle computation error, thus preventing the smartcard from being broken by fault attacks.
Nevertheless, we stress that although our software countermeasures have been fully validated by practical experiments,
we are convinced that only sophisticated hardware countermeasures such as sensors and filters, in combination with
software countermeasures, will be able to provide a secure and comfortable basis to properly defeat any conceivable
fault attack scenario on smartcards.
Authenticated Identity-Based Encryption
Suppose Alice wishes to send a message to Bob using an identity-based
encryption scheme (recall such a scheme is a public key cryptosystem where
any string is a valid public key), but desires integrity as well as
security. In other words, Alice wants Bob to know that only she could have
sent the message. Furthermore, suppose
she does not want the non-repudiation property
that would necessarily be present if she simply used an identity-based
signature scheme i.e. she does not want Bob to be able to prove
to a third party
that she is the sender.
We augment the system of Boneh and Franklin
to allow communication with integrity without nonrepudiation.
We formalize notions of security and integrity for our scheme, and show that
the new encryption and decryption
algorithms are more efficient, despite being equally secure
and authenticated.
Further Results and Considerations on Side Channel Attacks on RSA
This paper contains three parts. In the first part we present a new side channel attack on plaintext encrypted by EME-OAEP PKCS#1 v.2.1. In contrast with Manger's attack, we attack that part of the plaintext which is shielded by the OAEP method. In the second part we show that Bleichenbacher's and Manger's attacks on the RSA encryption schemes PKCS#1 v.1.5 and EME-OAEP PKCS#1 v.2.1 can be converted into an attack on the RSA signature scheme with any message encoding (not only PKCS). This is a new threat for those implementations of PKI in which the roles of signature and encryption keys are not strictly separated. This situation is often encountered in the SSL protocol used to secure access to web servers. In the third part we deploy a general idea of fault-based attacks on the RSA-KEM scheme and present two particular attacks as examples. The result is the private key instead of the plaintext, as with the attacks on PKCS#1 v.1.5 and v.2.1. These attacks should highlight the fact that the RSA-KEM scheme is not an entirely universal solution to the problems of RSAES-OAEP implementation and that even here the manner of implementation is significant.
Weak Keys in MST1
The public key cryptosystem $MST_1$ has been introduced in~\cite{MaStTr00}. Its security relies on the hardness of factoring with respect to wild logarithmic signatures. To identify `wild-like' logarithmic signatures, the criterion of being totally-non-transversal has been proposed.
We give tame totally-non-transversal logarithmic signatures for the alternating and symmetric groups of degree $\ge 5$. Hence, basing a key generation procedure on the assumption that totally-non-transversal logarithmic signatures are `wild like' seems critical. We also discuss the problem of recognizing `weak' totally-non-transversal
logarithmic signatures, and demonstrate that another proposed key generation procedure based on permutably transversal logarithmic signatures may produce weak keys.
A Distributed and Computationally Secure Key Distribution Scheme
In 1999, Naor, Pinkas and Reingold introduced schemes in which some groups of servers distribute keys among a set of users in a
distributed way. They gave some specific proposals both in the unconditional and in the computational security framework. Their computationally secure
scheme is based on the Decisional Diffie-Hellman Assumption. This model assumes secure communication between users and servers. Furthermore it
requires users to do some expensive computations in order to obtain a key.
In this paper we modify the model introduced by Naor et al., requiring authenticated channels instead of assuming the existence of secure channels. Our model makes the users' computations easier, because most computations of the protocol are carried out by servers, which corresponds to a more realistic situation. We propose a basic scheme that makes use of the ElGamal cryptosystem and fits this model in the case of a passive adversary. We then add zero-knowledge proofs and verifiable secret sharing to protect against an active adversary. We consider general structures (not only threshold ones) for those subsets of servers that can provide a key to a user and for those tolerated subsets of servers that can be corrupted by the adversary. We find necessary combinatorial conditions on these structures in order to provide security for our scheme.
Improved key recovery of level 1 of the Bluetooth Encryption System
The encryption system \(E_{0}\), which is the encryption system used
in the Bluetooth specification, is a two level system where a key
and a packet nonce is given to a level 1 key stream generator, which
produces the key for a level 2 key stream generator, whose output is
used to encrypt.
We give a method for recovering the key for the level 1 key stream
generator given the internal keys for two or three
level 2 key stream generators. This method, combined with published
methods for recovering keys for the level 2 key stream generator,
can be used to recover the \(E_{0}\) second key with $O(2^{65})$
work, and $O(2^{80})$ precomputation time.
Although this attack is of no advantage if \(E_{0}\) is used with
the recommended security parameters (64 bit encryption key), it
shows that no additional security would be gained by enlarging
the encryption key, as discussed in the Bluetooth specification.
(Not So) Random Shuffles of RC4
Most guidelines for implementation of the RC4 stream cipher recommend discarding the first 256 bytes of its output. This recommendation is based on the empirical fact that known attacks can either cryptanalyze RC4 starting at any point, or become harmless after these initial bytes are dumped. The motivation for this paper is to find a conservative estimate for the number of bytes that should be discarded in order to be safe. To this end we propose an idealized model of RC4 and analyze it applying the theory of random shuffles. Based on our analysis of the model we recommend dumping at least 512 bytes.
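A minimal Python sketch of the recommendation, assuming the standard RC4 key schedule and output generation and simply discarding the first 512 keystream bytes:

    # RC4 with the first `drop` keystream bytes discarded, as recommended above.

    def rc4_keystream(key: bytes, nbytes: int, drop: int = 512) -> bytes:
        # Key-scheduling algorithm (KSA)
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        # Pseudo-random generation algorithm (PRGA), dropping the first bytes
        out = bytearray()
        i = j = 0
        for k in range(drop + nbytes):
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            if k >= drop:
                out.append(S[(S[i] + S[j]) % 256])
        return bytes(out)

    print(rc4_keystream(b"example key", 16).hex())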
Black-Box Analysis of the Block-Cipher-Based Hash-Function Constructions from PGV
Preneel, Govaerts, and Vandewalle
considered the 64 most basic ways
to construct a hash function
$H:\{0,1\}^*\rightarrow\{0,1\}^n$ from a block cipher
$E:\{0,1\}^n\times\{0,1\}^n\rightarrow\{0,1\}^n$.
They regarded 12 of these 64 schemes as secure,
though no proofs or formal claims were given.
The remaining 52 schemes were shown to be subject to
various attacks.
Here we provide a formal and quantitative treatment of
the 64 constructions considered by PGV.
We prove that, in a black-box model, the 12
schemes that PGV singled out as secure really \textit{are} secure: we
give tight upper and lower bounds
on their collision resistance.
Furthermore, by stepping outside of
the Merkle-Damgård approach to analysis,
we show that an additional 8 of the 64 schemes
are just as collision resistant (up to a small constant) as the first group
of schemes.
Nonetheless, we are able to differentiate among the 20
collision-resistant
schemes by bounding their security as one-way functions.
We suggest that proving black-box bounds,
of the style given here, is a feasible and useful step
for understanding the security of any
block-cipher-based hash-function construction.
Secure Channels based on Authenticated Encryption Schemes: A Simple Characterization
We consider communication sessions in which a pair of parties begin by running an authenticated key-exchange protocol to obtain a shared session key, and then secure successive data transmissions between them via an authenticated encryption scheme based on the session key. We show that such a communication session meets the notion of a secure channel protocol proposed by Canetti and Krawczyk if and only if the underlying authenticated encryption scheme meets two new, simple definitions of security that we introduce, and the key-exchange protocol is secure. In other words, we reduce the secure channel requirements of Canetti and Krawczyk to easier to use, stand-alone security requirements on the underlying authenticated encryption scheme.
Protecting against Key Exposure: Strongly Key-Insulated Encryption with Optimal Threshold
A new framework for protection against key exposure was
recently suggested by Dodis et al. We take its realization
further towards practice by presenting simple new schemes that provide benefits
over previous ones in terms of scalability, performance and security. Our first
contribution is a simple, practical, scalable scheme called SKIE-OT that
achieves the best possible security in their framework. SKIE-OT is based on
the Boneh-Franklin identity-based encryption (IBE) scheme and
exploits algebraic properties of the latter. We also show that the role of
identity-based encryption is not coincidental by proving that IBE is equivalent
to (not strongly) key-insulated encryption with optimal threshold and allowing
random-access key updates.
On some Attacks on Multi-prime RSA
Using more than two factors in the modulus of the RSA cryptosystem
has the arithmetic advantage that the private key computations can be
sped up using Chinese remaindering. At the same time, with a proper
choice of parameters, one does not have to work with a larger
modulus to achieve the same level of security in terms of the difficulty of the integer factorization problem.
However, numerous attacks on specific instances of the RSA cryptosystem are known that apply if, for example, the decryption or encryption exponent is chosen too small, or if partial knowledge of the private key is available. Little is known about how such attacks perform in the multi-prime case.
It turns out that for most of these attacks it is crucial that the
modulus contains exactly two primes. They become much less effective, or fail, when the modulus factors into more than two distinct primes.
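For concreteness, a hedged Python sketch of the CRT speed-up mentioned above for a three-prime modulus; the parameters are toy values and the code ignores all padding and side-channel concerns.

    from math import prod

    def crt_decrypt(c, d, primes):
        # Decrypt prime-by-prime with reduced exponents, then recombine via CRT.
        n = prod(primes)
        m = 0
        for p in primes:
            m_p = pow(c % p, d % (p - 1), p)      # cheap exponentiation mod p
            n_p = n // p                          # product of the other primes
            m += m_p * n_p * pow(n_p, -1, p)      # CRT recombination term
        return m % n

    primes = [1009, 1013, 1019]                   # toy primes, not a real key
    n = prod(primes)
    e = 65537
    d = pow(e, -1, prod(p - 1 for p in primes))
    c = pow(123456, e, n)
    assert crt_decrypt(c, d, primes) == 123456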
ABC - A Block Cipher
The author proposes a block cipher which is easy to implement in
software on modern 32-bit microprocessors. The building blocks of the
cipher are taken from the block ciphers MMB and SAFER. The cipher may be
expanded for use with future 64-bit processors. Also, a new diffusion
layer, developed from the SAFER diffusion layer, is proposed. It has
complexity $\mathcal{O}(n \log n)$ and the author
conjectures that it is MDS. Diffusion layers currently known to be MDS
are based on matrices and thus have complexity $\mathcal{O}(n^2)$.
Strengthened Encryption in the CBC Mode
Vaudenay [1] has presented an attack on the CBC mode of block ciphers which uses padding according to the PKCS#5 standard. One of the countermeasures he considered consisted of encrypting the message M' = M || padding || hash(M || padding) instead of the original M. This can increase the length of the message by several blocks compared with the present padding. Moreover, Wagner [1] showed a security weakness in this proposal. The next correction that Vaudenay proposed ("A Fix Which May Work") has a general character and doesn't solve practical problems with the real cryptographic interfaces used in contemporary applications. In this article we propose three variants of the CBC mode. From the external point of view they behave the same as the present CBC mode with PKCS#5 padding, but they prevent Vaudenay's attack.
A Forward-Secure Public-Key Encryption Scheme
Cryptographic computations are often carried out on insecure devices for which the threat of key exposure represents a serious and realistic concern.
In an effort to mitigate the damage caused by exposure of secret data stored on such devices, the paradigm of \emph{forward security} was introduced.
In this model, secret keys are updated at regular intervals throughout the lifetime of the system; furthermore, exposure of a secret key corresponding to a given interval does not enable an adversary to ``break'' the system (in the appropriate sense) for any \emph{prior} time period.
A number of constructions of forward-secure digital signature schemes and symmetric-key schemes are known.
We present the first construction of a forward-secure public-key encryption scheme whose security is based on the bilinear Diffie-Hellman assumption in the random oracle model.
Our scheme can be extended to achieve chosen-ciphertext security at minimal additional cost.
The construction we give is quite efficient: all parameters of the scheme grow (at most) poly-logarithmically with the total number of time periods.
Universally Composable Notions of Key Exchange and Secure Channels
Recently, Canetti and Krawczyk (Eurocrypt 2001) formulated a notion of
security for key-exchange (KE) protocols, called SK-security, and
showed that this notion suffices for constructing secure channels.
Their model and proofs, however, do not suffice for proving more general
composability properties of SK-secure KE protocols.
We show that while the notion of SK-security is strictly
weaker than a fully-idealized notion of key exchange security, it
is sufficiently robust for providing secure composition with
arbitrary protocols. In particular, SK-security guarantees the security
of the key for any application that desires to set up secret keys
between pairs of parties. We also provide new definitions of
secure-channels protocols with similarly strong composability properties,
and show that SK-security suffices for obtaining these definitions.
To obtain these results we use the recently proposed framework of
"universally composable (UC) security."
We also use a new tool, called "non-information
oracles," which will probably find applications beyond the present case.
These tools allow us to bridge between seemingly limited
indistinguishability-based definitions such as SK-security and
more powerful, simulation-based definitions, such as UC-security,
where general composition theorems can be proven.
Furthermore, based on such composition theorems we reduce the
analysis of a full-fledged multi-session key-exchange protocol to the
(simpler) analysis of individual, stand-alone,
key-exchange sessions.
Construction of UOWHF: Tree Hashing Revisited
We present a binary tree based parallel algorithm for extending the domain of a UOWHF.
The key length expansion is $2m$ bits for $t=2$; $m(t+1)$ bits for $3\leq t\leq 6$ and
$m\times(t+\lfloor\log_2 (t-1)\rfloor)$ bits for $t\geq 7$, where $m$ is the length
of the message digest and $t\geq 2$ is the height of the binary tree.
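The key-length expansion formula above can be tabulated directly; a small Python helper (restating the stated formula, nothing more):

    def key_expansion_bits(m: int, t: int) -> int:
        # m is the digest length in bits, t >= 2 the height of the binary tree.
        if t == 2:
            return 2 * m
        if 3 <= t <= 6:
            return m * (t + 1)
        return m * (t + (t - 1).bit_length() - 1)   # floor(log2(t-1)) for t >= 7

    for t in range(2, 11):
        print(t, key_expansion_bits(128, t))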
A Simpler Construction of CCA2-Secure Public-Key Encryption Under General Assumptions
In this paper we present a simpler construction of an encryption scheme that achieves adaptive chosen ciphertext security (CCA2), assuming the existence of trapdoor permutations. We build on previous works of Sahai and De Santis et al. and construct a scheme that we believe is the easiest to understand to date. In particular, it is only slightly more involved than the Naor-Yung encryption scheme that is secure against passive chosen-ciphertext attacks (CCA1). We stress that the focus of this paper is on simplicity only.
Hierarchical ID-Based Cryptography
We present hierarchical identity-based encryption schemes and signature schemes that have total collusion resistance on an arbitrary number of levels and that have chosen ciphertext security in the random oracle model assuming the difficulty of the Bilinear Diffie-Hellman problem.
Concurrent Zero Knowledge Proofs with Logarithmic Round-Complexity
We consider the problem of constructing Concurrent Zero Knowledge
Proofs, in which the fascinating and useful ``zero
knowledge'' property is guaranteed even in situations where
multiple concurrent proof sessions are executed with many
colluding dishonest verifiers. Canetti et al.
show that black-box concurrent zero knowledge proofs for
non-trivial languages require $\tilde\Omega(\log k)$ rounds where
$k$ is the security parameter. Until now, the best known upper
bound on the number of rounds for NP languages was $\omega(\log^2
k)$, due to Kilian, Petrank and Richardson. We establish an
upper bound of $\omega(\log k)$ on the number of rounds for NP
languages, thereby closing the gap between the upper and lower
bounds, up to a $\omega(\log\log k)$ factor.
SiBIR: Signer-Base Intrusion-Resilient Signatures
We propose a new notion of intrusion-resilient signature schemes, which generalizes and improves upon both forward-secure [And97,BM99] and key-insulated [DKXY02] signature schemes.
Specifically, as in the prior notions, time is divided into predefined time periods (e.g., days); each signature includes the number of the time period in which it was generated; while the public key remains the same, the secret keys evolve with time. Also, as in key-insulated schemes, the user has two modules, signer and home base: the signer generates signatures on his own, and the base is needed only to help update the signer's key from one period to the next.
The main strength of intrusion-resilient schemes, as opposed to prior notions, is that they remain secure even after arbitrarily many compromises of both modules, as long as the compromises are not simultaneous. Moreover, even if the intruder does compromise both modules simultaneously, she will still be unable to generate any signatures for the previous time periods.
We provide an efficient intrusion-resilient signature scheme, provably secure in the random oracle model based on the strong RSA assumption.
We also discuss how such schemes can eliminate the need for certificate revocation in the case of on-line authentication.
Extended Validity and Consistency in Byzantine Agreement
A broadcast protocol allows a sender to distribute a value among a set of
players such that it is guaranteed that all players receive the same
value (consistency), and if the sender is honest, then all players
receive the sender's value (validity). Classical broadcast protocols for
$n$ players provide security with respect to a fixed threshold $t<n/3$,
where both consistency and validity are guaranteed as long as at most $t$
players are corrupted, and no security at all is guaranteed as soon as
$t+1$ players are corrupted. Depending on the environment, validity or
consistency may be the more important property.
We generalize the notion of broadcast by introducing an additional
threshold $t^+\ge t$. In a {\em broadcast protocol with extended
validity}, both consistency and validity are achieved when no more than
$t$ players are corrupted, and validity is achieved even when up to $t^+$
players are corrupted. Similarly, we define {\em broadcast with extended
consistency}. We prove that broadcast with extended validity as well as
broadcast with extended consistency is achievable if and only if
$t+2t^+<n$ (or $t=0$).
For example, six players can achieve broadcast when at most one player is
corrupted (this result was known to be optimal), but they can even
achieve consistency (or validity) when two players are corrupted.
Furthermore, our protocols achieve {\em detection} in case of failure,
i.e., if at most $t$ players are corrupted then broadcast is achieved,
and if at most $t^+$ players are corrupted then broadcast is achieved or
every player learns that the protocol failed. This protocol can be
employed in the precomputation of a secure multi-party computation
protocol, resulting in {\em detectable multi-party computation}, where up
to $t$ corruptions can be tolerated and up to $t^+$ corruptions can
either be tolerated or detected in the precomputation, for any $t,t^+$
with $t+2t^+<n$.
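The achievability bound $t+2t^+<n$ (or $t=0$) is easy to check mechanically; the small Python sketch below (our own illustration) reproduces the six-player example from the abstract.

def extended_broadcast_achievable(n: int, t: int, t_plus: int) -> bool:
    # Broadcast with extended validity/consistency is achievable
    # iff t + 2*t_plus < n, or t == 0 (the bound stated in the abstract).
    assert 0 <= t <= t_plus
    return t == 0 or t + 2 * t_plus < n

# Six players: classical broadcast tolerates t = 1 corruption, and
# extended consistency (or validity) is still achievable for t_plus = 2.
print(extended_broadcast_achievable(6, 1, 2))   # True: 1 + 4 < 6
print(extended_broadcast_achievable(6, 1, 3))   # False: 1 + 6 >= 6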
A Variant of the Cramer-Shoup Cryptosystem for Groups with Unknown Order
The Cramer-Shoup cryptosystem for groups of prime order is a
practical public-key cryptosystem, provably secure in the standard
model under standard assumptions. This paper extends the
cryptosystem to groups of unknown order, namely the group of
quadratic residues modulo a composite N. Two security
results are as follows:
In the standard model, the scheme is provably secure if both the
Decisional Diffie-Hellman assumption for QR_N *and* the
factorisation assumption for N hold. In the random oracle model, the
security of the scheme is provable by a quite efficient reduction.
Fully Distributed Proxy Signature Schemes
In a proxy signature scheme, a potential signer delegates his signing capability to a proxy entity, who signs a message on behalf of the original signer. All the proposals of proxy signature schemes made until now have been based on Schnorr's signature scheme. Threshold versions of these schemes have also been proposed, in which the power of the proxy signer is distributed among a group of players, in such a way that any subset with a minimum number (threshold) of players can sign a message on behalf of the original signer.
We consider a model that is fully distributed, because we want to distribute not only the power of the proxy signer, but also the original signer's ability to delegate his signing capability. Furthermore, we consider general structures, instead of only the threshold ones, for both the tolerated subsets of dishonest players and the subsets of honest players authorized to execute a valid instance of the protocol, in both the original signer and the proxy signer entities. We find sufficient combinatorial conditions that these structures must satisfy in order to design a fully distributed, secure and robust proxy signature scheme for this general scenario.
We propose such a scheme for this setting. It is also based on Schnorr's signature scheme.
Secret sharing schemes with three or four minimal qualified subsets
In this paper we study secret sharing schemes whose access structure
has three or four minimal qualified subsets. The ideal case is
completely characterized and for the non-ideal case
we provide bounds on the optimal information rate.
Tensor Transform of Boolean Functions and Related Algebraic and Probabilistic Properties
We introduce a tensor transform for Boolean functions that covers
the algebraic normal and Walsh transforms but which also allows
for the definition of new, probabilistic and weight transforms,
relating a function to its bias polynomial and to the weights of
its subfunctions respectively. Our approach leads to easy proofs
for some known results and to new properties of the aforecited
transforms. Several new results about algebraic and correlation
properties that depend on the weight transform of Boolean
functions are proved. Finally, we present a new probabilistic
characteristic of a Boolean function that is defined by its
algebraic normal and probabilistic transforms over the reals.
Towards a Uniform Description of Several Group Based Cryptographic Primitives
The public key cryptosystems $MST_1$ and $MST_2$ make use of certain kinds of factorizations of finite groups. We show that generalizing such factorizations to infinite groups allows a uniform description of several proposed cryptographic primitives. In particular, a generalization of $MST_2$ can be regarded as a unifying framework for several suggested cryptosystems including the ElGamal public key system, a public key system based on braid groups and the MOR cryptosystem.
Universal Composition with Joint State
Cryptographic systems often involve running multiple concurrent instances of some protocol, where the instances have some amount of joint state and randomness. (Examples include systems where multiple protocol instances use the same public-key infrastructure, or the same common reference string.) Rather than attempting to analyze the entire system as a single unit, we would like to be able to analyze each such protocol instance as stand-alone, and then use a general composition theorem to deduce the security of the entire system. However, no known composition theorem applies in this setting, since they all assume that the composed protocol instances have disjoint internal states, and that the internal random choices in the various instances are independent.
We propose a new composition operation that can handle the case where
different components have some amount of joint state and randomness,
and demonstrate sufficient conditions for when the new operation preserves security. The new operation, which is called {\em universal composition with joint state} (and is based on the recently proposed universal composition operation), turns out to be very useful in a number of quite different scenarios such as those mentioned above.
On the Security of Joint Signature and Encryption
We formally study the notion of a joint signature and encryption in
the public-key setting. We refer to this primitive as {\em
signcryption}, adapting the terminology of Zheng [Zhe97]. We present
two definitions for the security of signcryption depending on whether
the adversary is an outsider or a legal user of the system. We then
examine generic sequential composition methods of building
signcryption from a signature and encryption scheme. Contrary to
what recent results in the symmetric setting [BN00,Kra01] might
lead one to expect, we show that classical ``encrypt-then-sign''
(EtS) and ``sign-then-encrypt'' (StE) methods are both {\em secure}
composition methods in the public-key setting.
We also present a new composition method which we call
``commit-then-encrypt-and-sign'' (CtE&S). Unlike the generic
sequential composition methods, CtE&S applies the expensive
signature and encryption operations {\em in parallel}, which could
imply a gain in efficiency over the StE and EtS schemes. We
also show that the new CtE&S method elegantly combines with the
recent ``hash-sign-switch'' technique of Shamir and Tauman [ST01],
leading to efficient {\em on-line/off-line} signcryption.
Finally and of independent interest, we discuss the {\em definitional}
inadequacy of the standard notion of chosen-ciphertext (CCA)
security. Motivated by our applications to signcryption, we show
that the notion of CCA-security is syntactically ill-defined, and
leads to artificial examples of ``secure'' encryption schemes which
do not meet the formal definition of CCA-security. We suggest a
natural and very slight relaxation of CCA-security, which we call
generalized CCA-security (gCCA). We show that gCCA-security suffices
for all known uses of CCA-secure encryption, while no longer
suffering from the definitional shortcomings of the latter.
Cryptanalysis of S-DES
This paper describes an effort to attack S-DES using differential cryptanalysis and linear cryptanalysis. S-DES is a reduced version of
the Data Encryption Standard (DES). The paper also includes a discussion of the subject of cryptology and a literature survey of useful papers on cryptography and cryptanalysis. It is meant as a tutorial on the fundamentals of differential cryptanalysis and
linear cryptanalysis of a Feistel cipher.
Cryptanalysis of Block Ciphers with Overdefined Systems of Equations
Several recently proposed ciphers are built with layers of small S-boxes, interconnected by linear key-dependent layers. Their security relies on the fact that the classical methods of cryptanalysis (e.g., linear or differential attacks) are based on probabilistic characteristics, which makes their security grow exponentially with the number of rounds Nr.
In this paper we study the security of such ciphers under an additional hypothesis: the S-box can be described by an overdefined system of algebraic equations (true with probability 1). We show that this hypothesis is true for both Serpent (due to a small size of S-boxes) and Rijndael (due to unexpected algebraic properties).
We study general methods known for solving overdefined systems of equations, such as XL from Eurocrypt'00, and show their inefficiency. Then we introduce a new method called XSL that uses the sparsity of the equations and their specific structure.
The XSL attack is very powerful, but heuristic and it is very difficult to evaluate its complexity. The XSL attack has a parameter P, and in theory we show that P should be a constant. The XSL attack would then be polynomial in Nr, with a huge constant that is double-exponential in the size of the S-box.
We demonstrate by computer simulations that the XSL attack works well enough on a toy cipher. It seems, however, that in practice P will increase, though very slowly, with Nr. More simulations are needed for bigger ciphers.
Our optimistic evaluation shows that the XSL attack might be able to break Rijndael 256 bits and Serpent for key lengths 192 and 256 bits. However if only $P$ is increased by 2 (respectively 4) the XSL attack on Rijndael (respectively Serpent) would become slower than the exhaustive search. At any rate, it seems that the security of these ciphers does NOT grow exponentially with the number of rounds.
Strict Polynomial-time in Simulation and Extraction
The notion of efficient computation is
usually identified in cryptography and complexity with (strict)
probabilistic polynomial time. However, until recently, in order
to obtain *constant-round* zero-knowledge proofs and proofs
of knowledge, one had to allow simulators and knowledge-extractors
to run in time that is only polynomial *on the average*
(i.e., *expected* polynomial time). Recently Barak gave the
first constant-round zero-knowledge argument with a
*strict* (in contrast to expected) polynomial-time
simulator. The simulator in his protocol is a
*non-black-box* simulator (i.e., it makes inherent use of
the description of the *code* of the verifier).
In this paper, we further address the question of strict
polynomial-time in constant-round zero-knowledge proofs and
arguments of knowledge. First, we show that there exists a
constant-round zero-knowledge *argument of knowledge* with
a *strict* polynomial-time *knowledge extractor*. As in
the simulator of Barak's zero-knowledge protocol, the extractor
for our argument of knowledge is not black-box and makes inherent
use of the code of the prover. On the negative side, we show that
non-black-box techniques are *essential* for both strict
polynomial-time simulation and extraction. That is, we show that
no (non-trivial) constant-round zero-knowledge proof or argument
can have a strict polynomial-time *black-box* simulator.
Similarly, we show that no (non-trivial) constant-round
zero-knowledge proof or argument of knowledge can have a strict
polynomial-time *black-box* knowledge extractor.
A Unified Methodology For Constructing Public-Key Encryption Schemes Secure Against Adaptive Chosen-Ciphertext Attack
We introduce a new methodology for achieving security against
adaptive chosen-ciphertext attack (CCA) for
public-key encryption schemes, which we call
the {\em oblivious decryptors model}. The oblivious decryptors model
generalizes both the two-key model of Naor and Yung,
as well as the Cramer--Shoup encryption schemes.
The key ingredient in our new paradigm is Sahai's notion of
Simulation-Sound NIZK proofs.
Our methodology is easy to use: First, construct an
encryption scheme which satisfies the ``bare'' oblivious-decryptors
model; this can be done quite easily, with simple proofs
of security. Then, by adding a Simulation-Sound NIZK proof,
the scheme becomes provably CCA-secure. Note that this paradigm
allows for the use of {\em efficient} special-purpose Simulation-Sound
NIZK proofs, such as those recently put forward by Cramer and Shoup.
We also show how to present all known
efficient (provably secure) CCA-secure public-key encryption schemes
as special cases of our model.
New Results on Boomerang and Rectangle Attacks
The boomerang attack is a new and very powerful cryptanalytic
technique. However, due to the adaptive chosen plaintext and
ciphertext nature of the attack, boomerang
key recovery attacks
that retrieve key material on both sides of the
boomerang distinguisher are hard to mount.
In this paper we present a method for using a boomerang distinguisher
which enables retrieving subkey bits on both sides of the distinguisher.
The rectangle attack evolved from the boomerang attack. We also present
a new algorithm which improves the results of the
rectangle attack.
Using these improvements we can attack 3.5-round SC2000 with $2^{67}$
adaptive chosen plaintexts and ciphertexts, and
10-round Serpent
with time complexity of $2^{173.8}$ memory accesses (which are
equivalent to $2^{165.3}$ Serpent encryptions) with data complexity of
$2^{126.3}$ chosen plaintexts.
Secure Computation Without Agreement
It has recently been shown that authenticated Byzantine agreement,
in which more than a third of the parties are corrupted, cannot be
securely realized under concurrent or parallel (stateless)
composition. This result puts into question any usage of
authenticated Byzantine agreement in a setting where many
executions take place. In particular, this is true for the whole
body of work of secure multi-party protocols in the case that a
third or more of the parties are corrupted. This is because these
protocols strongly rely on the extensive use of a broadcast
channel, which is in turn realized using authenticated Byzantine
agreement. We remark that it was accepted folklore that the use of
a broadcast channel (or authenticated Byzantine agreement) is
actually essential for achieving meaningful secure multi-party
computation whenever a third or more of the parties are corrupted.
In this paper we show that this folklore is false. We present a
mild relaxation of the definition of secure computation allowing
abort. Our new definition captures all the central security issues
of secure computation, including privacy, correctness and
independence of inputs. However, the novelty of the definition is
in {\em decoupling} the issue of agreement from these issues. We
then show that this relaxation suffices for achieving secure
computation in a point-to-point network. That is, we show that
secure multi-party computation for this definition can be achieved
for {\em any} number of corrupted parties and {\em without} a
broadcast channel (or trusted preprocessing phase as required for
running authenticated Byzantine agreement). Furthermore, this is
achieved by just replacing the broadcast channel in known
protocols with a very simple and efficient echo-broadcast
protocol. An important corollary of our result is the ability to
obtain multi-party protocols that remain secure under composition,
without assuming a broadcast channel.
Partial Key Escrow Monitoring Scheme
In (partial) key escrow, how to monitor a user securely and efficiently is
a very important problem. This paper proposes a monitoring scheme for
a typical partial key escrow scheme. In this scheme, the escrowed key is not compromised
even if the user is monitored many times.
A Distributed RSA Signature Scheme for General Access Structures
In a distributed digital signature scheme, a set of participants shares a secret information that allows them to compute a valid signature for a given message. These systems are said to be robust if they can tolerate the presence of some dishonest players.
Up to now, all the proposed schemes consider only threshold structures: the tolerated subsets of corrupted players as well as the subsets of players who can sign a message are defined according to their cardinality.
We propose a framework that is more general than the threshold one, considering a general access structure of players allowed to sign and a general family of dishonest players that the scheme can tolerate. If these general structures satisfy some combinatorial conditions, we can design a distributed and secure RSA signature scheme for this setting. Our construction is based on the threshold scheme of Shoup.
An efficient semantically secure elliptic curve cryptosystem based on KMOV
We propose an elliptic curve scheme over the ring $\entq$, which is efficient and semantically secure in the standard model. There appears to be no previous elliptic curve cryptosystem based on factoring that enjoys both of these properties. The KMOV scheme is used as an underlying primitive
to obtain efficiency and probabilistic encryption. Semantic security of the scheme is based on a new decisional assumption, namely, the Decisional Small-$x$ $e$-Multiples Assumption. Confidence in this assumption is also discussed.
Optimal Black-Box Secret Sharing over Arbitrary Abelian Groups
A {\em black-box} secret sharing scheme for the threshold
access structure $T_{t,n}$ is one which works over any finite Abelian group $G$.
Briefly, such a scheme differs from an ordinary linear secret sharing
scheme (over, say, a given finite field) in that distribution matrix
and reconstruction vectors are defined over the integers and are designed {\em
independently} of the group $G$ from which the secret and the shares
are sampled. This means that perfect completeness and perfect
privacy are guaranteed {\em regardless} of which group $G$ is chosen. We define
the black-box secret sharing problem as the problem of devising, for
an arbitrary given $T_{t,n}$, a scheme with minimal expansion factor,
i.e., where the length of the full vector of shares divided by the
number of players $n$ is minimal.
Such schemes are relevant for instance in the context of distributed
cryptosystems based on groups with secret or hard to compute group
order. A recent example is secure general multi-party computation over
black-box rings.
In 1994, Desmedt and Frankel proposed an
elegant approach to the black-box secret sharing problem
based in part on polynomial interpolation over
cyclotomic number fields. For arbitrary given $T_{t,n}$ with
$0<t<n-1$, the expansion factor of their scheme is $O(n)$. This is
the best previous general approach to the problem.
Using low degree integral extensions of the integers over which there exists a
pair of sufficiently large Vandermonde matrices with co-prime
determinants, we construct, for arbitrary given $T_{t,n}$ with
$0<t<n-1$, a black-box secret sharing scheme with expansion factor
$O(\log n)$, which we show is minimal.
Tripartite Authenticated Key Agreement Protocols from Pairings
Joux's protocol is a one-round tripartite key
agreement protocol that is more bandwidth-efficient than any
previous three-party key agreement protocol. But it is insecure,
suffering from a simple man-in-the-middle attack. This paper shows
how to make Joux's protocol secure, presenting several tripartite,
authenticated key agreement protocols that still require only one
round of communication. A pass-optimal authenticated and key
confirmed tripartite protocol that generalises the
station-to-station protocol is also presented. The security
properties of the new protocols are studied using provable
security methods and heuristic approaches. Applications for the
protocols are also discussed.
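For context, the unauthenticated one-round protocol of Joux that the paper takes as its starting point is usually summarised as follows (a standard textbook description, included here for orientation and not taken from the paper itself):
\[
A \to B, C:\ aP, \qquad B \to A, C:\ bP, \qquad C \to A, B:\ cP,
\]
\[
K_A = e(bP, cP)^a \;=\; K_B = e(aP, cP)^b \;=\; K_C = e(aP, bP)^c \;=\; e(P,P)^{abc},
\]
where $e: G_1 \times G_1 \to G_2$ is a bilinear pairing and $P$ generates $G_1$; since nothing ties the broadcast values to the parties' identities, a man-in-the-middle can substitute its own values, which is the weakness the authenticated variants address.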
An OAEP Variant With a Tight Security Proof
We introduce the OAEP++ encoding method, which is an adaptation of the OAEP encoding method, replacing the last step of the encoding operation with an application of a block cipher such as AES. We demonstrate that if $f$ is a one-way trapdoor function that is hard to invert, then OAEP++ combined with $f$ is secure against an IND-CCA2 adversary in the random oracle model. Moreover, the security reduction is tight; an adversary against $f$-OAEP++ can be extended to an $f$-inverter with a running time linear in the number of oracle queries.
Equivalence between semantic security and indistinguishability against chosen ciphertext attacks
The aim of this work is to examine the relation between the notions of
semantic security and indistinguishability against chosen ciphertext attacks.
For this purpose, a new security notion called non-dividability is introduced
independently of the attack model, and is shown to be equivalent to both notions.
This result is expected to provide a clearer understanding of the equivalence between
semantic security and indistinguishability under any form of attack.
Supersingular Hyperelliptic Curves of Genus 2 over Finite Fields
In this paper we describe an elementary criterion to determine
supersingular hyperelliptic curves of genus $2$, using
only the given Weierstrass equation.
Furthermore, we show that the discrete logarithm problem defined on
any supersingular abelian variety of dimension $2$ over
${\mathbb F}_p$, $p>16$, can be embedded into that over the extension field
${\mathbb F}_{p^{k}}$, with $k \leq 6$.
This implies that supersingular hyperelliptic curves are cryptographically
weaker than the general case due to the Frey-Rück attack.
Families of hyperelliptic curves $H/{\mathbb F}_p$ of the types $v^2=u^5+a$ and
$v^2 = u^5 + au$ are studied, and further examples are listed.
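The relevant notion here is the embedding degree: the smallest $k$ with $r \mid p^k - 1$, where $r$ is the order of the subgroup carrying the discrete logarithm, since the Frey-Rück (Tate pairing) reduction maps the problem into ${\mathbb F}_{p^k}^*$. The small Python sketch below (ours, with illustrative names) computes it by brute force.

def embedding_degree(p: int, r: int, max_k: int = 50) -> int:
    # Smallest k such that r divides p**k - 1, i.e. the degree of the
    # extension field F_{p^k} into which a Frey-Rueck style reduction
    # maps an order-r subgroup.
    acc = 1
    for k in range(1, max_k + 1):
        acc = (acc * p) % r
        if acc == 1:
            return k
    raise ValueError("embedding degree exceeds max_k")

# Toy example with small numbers: r = 7 divides 2**3 - 1, so k = 3.
print(embedding_degree(2, 7))   # 3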
A Parallelizable Design Principle for Cryptographic Hash Functions
We describe a parallel design principle for hash functions. Given a secure hash
function $h:\{0,1\}^n\rightarrow \{0,1\}^m$ with $n\geq 2m$, and a binary tree of
$2^t$ processors we show how to construct a secure hash function $h^{*}$ which can hash
messages of lengths less than $2^{n-m}$ and a secure hash function $h^{\infty}$ which can
hash messages of arbitrary length. The number of parallel rounds required to hash a
message of length $L$ is $\lfloor\frac{L}{2^t}\rfloor+t+2$. Further, our algorithm is incrementally
parallelizable in the following sense: given a digest produced using a binary tree of $2^t$
processors, we show that the same digest can also be produced using a binary tree of
$2^{t^{\prime}}$ $(0\leq t^{\prime}\leq t)$ processors.
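The stated round count is straightforward to evaluate; the Python sketch below (illustrative names, not from the paper) computes the number of parallel rounds for a message of length $L$ hashed with $2^t$ processors.

def parallel_rounds(L: int, t: int) -> int:
    # Number of parallel rounds, floor(L / 2**t) + t + 2, as stated
    # in the abstract, for a message of length L and 2**t processors.
    return L // (2 ** t) + t + 2

# Example: a message of length 10**6 with 2**8 = 256 processors.
print(parallel_rounds(10**6, 8))   # 3906 + 8 + 2 = 3916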
Adaptive chi-square test and its application to some cryptographic problems.
We address the problem of testing the hypothesis H_0 that the
letters from some alphabet A = {a_1, a_2, ..., a_k} are
distributed uniformly
against the alternative hypothesis H_1 that the true
distribution is not uniform, in the case where k is large. (This is typical
for random number testing and some cryptographic problems, where
k = 2^{10} to 2^{30} and more.) In such
a case it is difficult to use the chi-square test because the
sample size must be greater than k.
We suggest an adaptive chi-square test which can be
successfully applied for testing some kinds of H_1 even when
the sample size is much less than k. This statement is
confirmed theoretically and experimentally. The theoretical proof
is based on the consideration of one kind of alternative
hypothesis H_1 for which the suggested test rejects the null
hypothesis when the sample size is O( \sqrt{k} ) (instead of
const \cdot k for the usual chi-square test).
For an experimental
investigation of the suggested test we consider the problem of
testing enciphered Russian texts. It turns out that the suggested
test can distinguish the enciphered texts from random sequences
based on a sample which is much smaller than that required for the
usual chi-square test.
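For context, the classical chi-square statistic against the uniform hypothesis is sketched below in Python; the abstract's point is that using it reliably needs a sample noticeably larger than the alphabet size k, which the adaptive variant avoids for certain alternatives. The sketch is ours and shows only the classical statistic, not the adaptive grouping.

from collections import Counter
import random

def chi_square_uniform(sample, k):
    # Classical chi-square statistic for testing uniformity over the
    # alphabet {0, ..., k-1}; reliable use normally requires a sample
    # larger than k, which motivates the adaptive variant.
    n = len(sample)
    expected = n / k
    counts = Counter(sample)
    return sum((counts.get(a, 0) - expected) ** 2 / expected
               for a in range(k))

# Toy usage over a small alphabet.
data = [random.randrange(16) for _ in range(1000)]
print(chi_square_uniform(data, 16))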
Efficient Computation Modulo a Shared Secret with Application to the Generation of Shared Safe-Prime Products
We present a new protocol for efficient distributed computation modulo a
shared secret. We further present a protocol to distributively generate a
random shared prime or safe prime that is much more efficient than
previously known methods. This allows shared RSA
keys, where the modulus is the product of two safe primes, to be computed
distributively much more efficiently than was previously known.
A Universal Forgery of Hess's Second ID-based Signature against the Known-message Attack
In this paper we propose a universal forgery attack on Hess's second
ID-based signature scheme under a known-message attack.
Efficient and Non-Malleable Proofs of Plaintext Knowledge and Applications
We describe very efficient protocols for non-malleable (interactive)
proofs of plaintext knowledge for the RSA, Rabin, Paillier, and
El-Gamal encryption schemes whose security can be proven in the
standard model. We also highlight some important applications of
these protocols, where we take care to ensure that our protocols
remain secure when run in an asynchronous, concurrent environment:
--- Chosen-ciphertext-secure, interactive encryption: In some settings
where both parties are on-line (e.g., SSL), an interactive encryption
protocol may be used. We construct chosen-ciphertext-secure interactive
encryption schemes based on any of the schemes above. In each case,
the improved scheme requires only a small overhead beyond the original,
semantically-secure scheme.
--- Password-based authenticated key exchange: We provide efficient
protocols for password-based authenticated key exchange in the public-
key model \cite{HK98,B99}. Security of our protocols may be based on
any of the cryptosystems mentioned above.
--- Deniable authentication: We demonstrate deniable authentication
protocols satisfying the strongest notion of security. These are the
first efficient constructions based on, e.g., the RSA or computational Diffie-Hellman assumptions.
Our techniques provide a general methodology for constructing efficient
\emph{non-malleable} (zero-knowledge) proofs of knowledge when shared
parameters are available (for our intended applications, these
parameters can simply be included as part of users' public keys). Thus,
non-malleable proofs of knowledge are easy to achieve ``in practice''.
Generic Groups, Collision Resistance, and ECDSA
Proved here is the sufficiency of certain conditions to ensure that the
Elliptic Curve Digital Signature Algorithm (ECDSA) is existentially
unforgeable under adaptive chosen-message attacks. The sufficient
conditions include (i) a uniformity property and collision-resistance
for the underlying hash function, (ii) pseudo-randomness in the
private key space for the ephemeral private key generator, (iii)
generic treatment of the underlying group, and (iv) a further
condition on how the ephemeral public keys are mapped into the private
key space. For completeness, a brief survey of necessary security
conditions is also given. Some of the necessary conditions are weaker
than the corresponding sufficient conditions used in the security
proofs here, but others are identical. Despite the similarity between
DSA and ECDSA, the main result is not appropriate for DSA, because the
fourth condition above seems to fail for DSA. (The corresponding
necessary condition is plausible for DSA, but is not proved here nor
is the security of DSA proved assuming this weaker condition.)
Brickell et al., Jakobsson et al. and Pointcheval et al. only consider
signature schemes that include the ephemeral public key in the hash
input, which ECDSA does not do, and moreover, assume a condition on
the hash function stronger than the first condition above. This work
seems to be the first advance in the provable security of ECDSA.
Making Mix Nets Robust For Electronic Voting By Randomized Partial Checking
We propose a new technique for making mix nets robust, called
randomized partial checking (RPC). The basic idea is that
rather than providing a proof of completely correct operation, each
server provides strong evidence of its correct operation by revealing a pseudo-randomly selected subset of its input/output relations.
Randomized partial checking is exceptionally efficient compared to
previous proposals for providing robustness; the evidence provided at
each layer is shorter than the output of that layer, and producing the
evidence is easier than doing the mixing. It works with mix nets
based on any encryption scheme (i.e., on public-key alone, and on hybrid schemes using public-key/symmetric-key combinations). It also works both with Chaumian mix nets where the messages are successively encrypted with each server's key, and with mix nets based on a single public key with randomized re-encryption at each layer.
Randomized partial checking is particularly well suited for voting
systems, as it ensures voter privacy and provides assurance of correct
operation. Voter privacy is ensured (either probabilistically or
cryptographically) with appropriate design and parameter selection. Unlike previous work, our work provides voter privacy as a global property of the mix net rather than as a property ensured by a single honest server. RPC-based mix nets also provide very high assurance of a correct election result, since a corrupt server is very likely to be caught if it attempts to tamper with even a couple of ballots.
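A minimal Python sketch of the revelation step described above: a mix server holding a secret permutation discloses its input/output correspondence only on a pseudo-randomly selected subset of positions, with the selection seeded by a public challenge. This is our own toy illustration of the idea, not the paper's full protocol (which must also handle the accompanying evidence and the privacy analysis).

import hashlib, random

def rpc_reveal(permutation, public_challenge: bytes, fraction: float = 0.5):
    # Reveal the input/output relation of a mix server on a
    # pseudo-randomly selected subset of positions, seeded by a public
    # challenge (toy illustration of randomized partial checking).
    rng = random.Random(hashlib.sha256(public_challenge).digest())
    n = len(permutation)
    subset = rng.sample(range(n), int(fraction * n))
    # Disclose only where the selected inputs were sent.
    return {i: permutation[i] for i in subset}

# Toy usage: a server that permuted 8 ciphertexts.
pi = [3, 0, 7, 1, 5, 2, 6, 4]
print(rpc_reveal(pi, b"election-42-round-1"))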
Timed Release of Standard Digital Signatures
In this paper we investigate the timed release of standard digital signatures, and demonstrate how to do it for RSA, Schnorr and DSA signatures. Such signatures, once released, cannot be distinguished from signatures of the same type obtained without a timed
release, making the timed release transparent to an observer of the end result.
While previous work has allowed timed release of signatures,
these have not been standard, but special-purpose signatures.
Building on the recent work by Boneh and Naor on timed commitments,
we introduce the notion of a reusable time-line, which,
besides allowing the release of standard signatures, lowers the session costs of existing timed applications.
Almost Optimal Hash Sequence Traversal
We introduce a novel technique for computation of
consecutive preimages of hash chains. Whereas traditional
techniques have a memory-times-computation complexity of
$O(n)$ per output generated, the complexity of our technique
is only $O(\log^2 n)$, where $n$ is the length of the chain.
Our solution is based on the same amortization principle
as \cite{J01}, and has the same asymptotic behavior as this solution.
However, our solution decreases the real complexity by approximately
a factor of two. Thus, the computational costs of our solution are approximately $\frac{1}{2}\log_2 n$ hash function applications, using only a little more than $\log_2 n$ storage cells.
A result of independent interest is the lower
bounds we provide for the optimal (but to us unknown)
solution to the problem we study. The bounds show that
our proposed solution is very close to optimal. In particular,
we show that there exists no improvement on our scheme that reduces
the complexity by more than an approximate factor of two.
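To see what is being amortised, here is a naive hash-chain traversal in Python (our own illustration): outputs are released from the end of the chain backwards, and with only the seed stored each output costs up to $n$ hash evaluations, i.e. a memory-times-computation cost of $O(n)$ per output; the paper's technique brings this cost down to roughly $\frac{1}{2}\log_2 n$ hash evaluations per output with a little more than $\log_2 n$ stored values.

import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def chain_value(seed: bytes, i: int) -> bytes:
    # i-fold iteration of H starting from the seed (v_0 = seed).
    v = seed
    for _ in range(i):
        v = H(v)
    return v

def naive_traversal(seed: bytes, n: int):
    # Release v_n, v_{n-1}, ..., v_1 keeping only the seed in memory.
    # Output j is recomputed from the seed and costs j hash calls:
    # O(n) computation per output with O(1) storage.
    for j in range(n, 0, -1):
        yield chain_value(seed, j)

# Toy usage on a short chain.
outputs = list(naive_traversal(b"seed", 8))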
From Identification to Signatures via the Fiat-Shamir Transform: Minimizing Assumptions for Security and Forward-Security
The Fiat-Shamir paradigm for transforming
identification schemes into signature schemes has been popular since
its introduction because it yields efficient signature schemes, and
has been receiving renewed interest of late as the main tool in
deriving forward-secure signature schemes.
We find minimal (meaning
necessary and sufficient) conditions on the identification scheme to
ensure security of the signature scheme in the random oracle model,
in both the usual and the forward-secure cases.
Specifically we show that the signature scheme is secure
(resp. forward-secure) against chosen-message attacks in the random
oracle model if and only if the underlying identification
scheme is secure (resp. forward-secure) against impersonation under
passive (i.e., eavesdropping only) attacks, and has its
commitments drawn at random from a large space. An extension is
proven incorporating a random seed into the Fiat-Shamir transform so
that the commitment space assumption may be removed.
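To make the object of study concrete, here is a toy Python sketch of the Fiat-Shamir transform applied to Schnorr identification: the prover's commitment is hashed together with the message to derive the challenge, turning the identification scheme into a signature scheme. The group parameters below are tiny and insecure, chosen only so the sketch runs; it illustrates the transform in general, not the specific analysis of the paper.

import hashlib
import random

# Toy Schnorr group (insecure, illustration only): q | p - 1 and g has order q mod p.
p, q, g = 607, 101, 64

def keygen():
    x = random.randrange(1, q)        # secret key
    return x, pow(g, x, p)            # (sk, pk = g^x mod p)

def H(*parts) -> int:
    data = b"|".join(str(s).encode() for s in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def sign(x, msg):
    r = random.randrange(1, q)        # ephemeral secret
    commitment = pow(g, r, p)         # identification commitment
    c = H(commitment, msg)            # Fiat-Shamir: challenge = hash(commitment, message)
    s = (r + c * x) % q               # response
    return commitment, s

def verify(pk, msg, sig):
    commitment, s = sig
    c = H(commitment, msg)
    return pow(g, s, p) == (commitment * pow(pk, c, p)) % p

sk, pk = keygen()
assert verify(pk, "hello", sign(sk, "hello"))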
Spectral Analysis of Boolean Functions under Non-uniformity of Arguments
For independent binary random variables x_1,...,x_n and a Boolean function f(x), x=(x_1,...,x_n), we suppose that |1/2 - P{x_i = 1}| <= e for 1 <= i <= n. Under these conditions we introduce new characteristics D_F(f,e) = max |1/2 - P{y=1}| of the probabilistic properties of Boolean functions, where y = F(x) and F(x) equals 1) F(x)=f(x), 2) F(x)=f(x)+(a,x), or 3) F(x)=f(x)+f(x+a), and we investigate their properties.
Special attention is paid to the classes of balanced and correlation-immune functions, bent functions, and second-order functions, for which upper estimates of D_F(f,e) are found and statements
on the behaviour of sequences f^{(n)}(x) of functions of n arguments are made.
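For small n the quantity bounded above can be computed exactly by brute force. The Python sketch below (ours, with hypothetical names) evaluates |1/2 - P{F(x)=1}| for the case F(x)=f(x), when bit i equals 1 with probability 1/2 + e_i and the bits are independent.

from itertools import product

def output_bias(f, biases):
    # |1/2 - P{f(x) = 1}| when bit i is 1 with probability 1/2 + biases[i],
    # the bits being independent (brute force over all 2^n inputs).
    n = len(biases)
    p_one = 0.0
    for x in product((0, 1), repeat=n):
        px = 1.0
        for xi, ei in zip(x, biases):
            px *= (0.5 + ei) if xi == 1 else (0.5 - ei)
        p_one += px * f(x)
    return abs(0.5 - p_one)

# Example: XOR of 3 bits stays nearly balanced under small input biases,
# while a majority function does not.
xor3 = lambda x: x[0] ^ x[1] ^ x[2]
maj3 = lambda x: 1 if sum(x) >= 2 else 0
print(output_bias(xor3, [0.1, 0.1, 0.1]))   # 4 * 0.1**3 = 0.004
print(output_bias(maj3, [0.1, 0.1, 0.1]))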
Cryptanalysis of stream ciphers with linear masking
We describe a cryptanalytical technique for distinguishing some stream ciphers from a truly random process. Roughly, the ciphers to which this method applies consist of a ``non-linear process'' (say, akin to a round function in block ciphers), and a ``linear process'' such as an LFSR (or even fixed tables). The output of the cipher can be the linear sum of both processes. To attack such ciphers, we look for any property of the ``non-linear process'' that can be distinguished from random. In addition, we look for a linear combination of the linear process that vanishes. We then consider the same linear combination applied to the cipher's output, and try to find traces of the distinguishing property.
In this report we analyze two specific ``distinguishing properties''. One is a linear approximation of the non-linear process, which we demonstrate on the stream cipher SNOW. This attack needs roughly $2^{95}$ words of output, with work-load of about $2^{100}$. The other is a ``low-diffusion'' attack, that we apply to the cipher Scream-0. The latter attack needs only about $2^{43}$ bytes of output, using roughly $2^{50}$ space and $2^{80}$ time.
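As a concrete instance of a ``linear combination of the linear process that vanishes'': for an LFSR, the feedback recurrence itself is such a combination. The toy Python sketch below (our own, with small illustrative parameters) generates an LFSR stream and checks that the combination is identically zero; applied to the cipher's output, the same combination cancels the linear part and leaves only the non-linear contribution.

def lfsr_stream(state, taps, nbits):
    # Fibonacci LFSR over GF(2): s_{t+n} = XOR of s_{t+i} for i in taps.
    # `state` is [s_0, ..., s_{n-1}]; yields nbits output bits.
    s = list(state)
    for _ in range(nbits):
        yield s[0]
        new = 0
        for i in taps:
            new ^= s[i]
        s = s[1:] + [new]

bits = list(lfsr_stream([1, 0, 0, 1, 1], taps=(0, 2), nbits=200))

# The linear combination s_{t+5} + s_{t+2} + s_t vanishes on the whole stream.
assert all(bits[t + 5] ^ bits[t + 2] ^ bits[t] == 0 for t in range(len(bits) - 5))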
Scream: a software-efficient stream cipher
We report on the design of Scream, a new software-efficient stream
cipher, which was designed to be a ``more secure SEAL''. Following SEAL, the design of Scream resembles in many ways a block-cipher design. The new cipher is roughly as fast as SEAL, but we believe that it offers a significantly higher security level. In the process of designing this cipher, we re-visit the SEAL design paradigm, exhibiting some tradeoffs and limitations.
An Identity-Based Signature from Gap Diffie-Hellman Groups
In this paper we propose an identity(ID)-based signature scheme using gap Diffie-Hellman (GDH) groups. Our scheme is proved secure against existential forgery on adaptively chosen message and ID attack under the random oracle model. Using GDH groups obtained from bilinear pairings, as a special case of our scheme, we obtain an ID-based signature scheme that shares the same system parameters and the same private/public key pairs with the ID-based encryption scheme (BF-IBE) by Boneh and Franklin, and is as efficient as the BF-IBE. Combining our signature scheme with the BF-IBE yields a complete solution of an ID-based public key system. It can be an alternative for certificate-based public key infrastructures, especially when efficient key management and moderate security are required.