Efficient ID-based Signature Without Trusted PKG
In this paper, we introduce the concept of ID-based signature without a trusted Private Key Generator (PKG), which solves the key escrow problem by binding two partial public keys to the same identity. In this scheme, the PKG is prevented from forging a legitimate user’s signature because it generates only a partial private key. Using Gap Diffie-Hellman (GDH) groups, we construct an efficient ID-based signature scheme without trusted PKG whose security relies on the hardness of the Computational Diffie-Hellman Problem (CDHP). More precisely, in the random oracle model, our scheme is proved secure against existential forgery under adaptively chosen message and ID attacks, a natural ID-based version of the standard adaptively chosen message attack, assuming CDHP is intractable. Our scheme not only eliminates the inherent key escrow problem but is also more efficient than existing schemes.
Estimation of keys stored in CMOS cryptographic device after baking by using the charge shift
The threshold voltage VT of EEPROM cells is a very important technological parameter for storing data and keys in a cryptographic device such as a smartcard. The main objective of this paper is to check whether it is possible to obtain the key stored in an EEPROM cell by measuring the current consumption of the cryptographic device during a read-key command, before and after baking at a certain temperature. This stress (baking) shifts the threshold voltage by disturbing the charge in the floating gate of the cells. In particular, we consider whether an unknown key in the EEPROM cells can be estimated from this charge shift in the floating gate. Test labs might need to check, during an evaluation procedure for smartcards, whether part or all of the key can be estimated by stressing the threshold parameter VT. The result of this evaluation serves as an input for countermeasures against possible attacks, and as an additional input for future design structures aimed at avoiding information leakage after baking the EEPROM cells at a certain temperature.
New Communication-Efficient Oblivious Transfer Protocols Based on Pairings
We construct two simple families of two-message $(n,1)$-oblivious transfer protocols based on degree-$t$ homomorphic cryptosystems with communication of $1+\lceil n/t \rceil$ and $3+\lceil n/(t+1) \rceil$ ciphertexts, respectively. The construction of both families relies on efficient cryptocomputable conditional disclosure of secrets protocols; the way this is done may be of independent interest. The currently most interesting case $t=2$ can be based on the Boneh-Goh-Nissim cryptosystem. As an important application, we show how to reduce the communication of virtually any existing oblivious transfer protocol by proposing a related communication-efficient generic transformation from computationally-private information retrieval protocols to oblivious transfer protocols.
Equivocal Blind Signatures and Adaptive UC-Security
We study the design of practical blind signatures in the universal
composability (UC) setting against adaptive adversaries. We
introduce a new property for blind signature schemes that is
fundamental for managing adaptive adversaries: an {\em equivocal
blind signature} is a blind signature protocol where a simulator
can construct the internal state of the client so that it matches
a simulated transcript even after a signature was released.
We present a general construction methodology for building
practical adaptively secure blind signatures: the starting point
is a 2-move ``lite blind signature'', a lightweight 2-party
signature protocol that we formalize and implement both
generically as well as number theoretically: formalizing a
primitive as ``lite'' means that the adversary is required to show
all private tapes of adversarially controlled parties; this
enables us to conveniently separate zero-knowledge (ZK) related
security requirements from the remaining security properties in
the primitive's design methodology.
We then focus on the exact ZK requirements for building blind
signatures. To this effect, we formalize two special ZK ideal
functionalities, single-verifier-ZK (SVZK) and single-prover-ZK
(SPZK) and we investigate the requirements for realizing them in a
commit-and-prove fashion as building blocks for adaptively secure
UC blind signatures. SVZK can be realized without relying on a
multi-session UC commitment; as a result, we realize SVZK in a
very efficient manner using number theoretic mixed commitments
while employing a constant size common reference string and
without the need to satisfy non-malleability. Regarding SPZK we
find the rather surprising result that realizing it only for
static adversaries is sufficient to obtain adaptive security for
UC blind signatures. This important observation simplifies blind
signature design substantially as one can realize SPZK very
efficiently in a commit-and-prove fashion using merely an
extractable commitment.
We instantiate all the building blocks of our design methodology
efficiently thus presenting the first practical UC blind signature
that is secure against adaptive adversaries in the common
reference string model. In particular, we present (1) a lite
equivocal blind signature protocol that is based on elliptic
curves and the 2SDH assumption of Okamoto, (2) efficient
implementations of SPZK, SVZK for the required relations.
Our construction also takes advantage of a round optimization
method we discuss and it results in a protocol that has an overall
communication overhead of as little as 3Kbytes, employing six
communication moves and a constant length common reference string.
We also present alternative implementations for our equivocal lite
blind signature thus demonstrating the generality of our approach.
Finally we count the exact cost of realizing blind signatures
with our protocol design by presenting the distance between the
$\Fbsig$-hybrid world and the $\Fcrs$-hybrid world as a function
of environment parameters. The distance calculation is facilitated
by a basic lemma we prove about structuring UC proofs that may be
of independent interest.
Noninteractive Manual Channel Message Authentication Based On eTCR Hash Functions
We present a new non-interactive message authentication protocol in the manual channel model
(NIMAP, for short) using the weakest assumption on the manual channel (i.e., assuming the
strongest adversary). Our protocol uses an enhanced target collision resistant (eTCR) hash
family and is provably secure in the standard model. We compare our protocol with
protocols with similar properties and show that the new NIMAP has the same security level
as the best previously known NIMAP whilst it is more practical. In particular, to
authenticate a message such as a 1024-bit public key, we require an eTCR hash family that
can be constructed from any off-the-shelf Merkle-Damgård hash function using
randomized hashing mode. The underlying compression function must be {\em evaluated
second preimage resistant} (eSPR), which is a strictly weaker security property than
collision resistance. We also revisit some closely related security notions for hash
functions and study their relationships to help understand our protocol.
Some Results on Anonymity in Hybrid Encryption
Anonymity (key-privacy) and security (data-privacy) are both important features in public-key encryption applications. In this paper two new and general concepts, named “relevant anonymity” and “relevant security”, are defined. Based upon these concepts, some general results on anonymity in public-key encryption are proved, which fall into three categories. The first is a pair of general relationships between anonymity and security; the second is a necessary and sufficient condition for chosen-plaintext anonymity of the Fujisaki-Okamoto hybrid construction and a sufficient condition for its chosen-ciphertext anonymity; the third is a sufficient condition for chosen-ciphertext anonymity of the Okamoto-Pointcheval hybrid construction (REACT). All these conditions are also easy-to-use criteria in practice. Through examples, these general results are applied to some specific schemes, and as a result the anonymity of several well-known schemes is re-established in a simpler way. Furthermore, the chosen-ciphertext anonymity of the NESSIE schemes PSEC-1/2/3 is proved.
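To fix ideas, one standard form of the Fujisaki-Okamoto hybrid construction analyzed above encrypts a random seed under the asymmetric scheme (with coins derived from the seed and message) and uses the seed to mask the message symmetrically. The Python sketch below shows only that generic pattern; the asymmetric interface `pk_encrypt` and the hash-based `H`/`G` instantiations are placeholder assumptions, not components taken from the paper.

```python
# Schematic sketch of the Fujisaki-Okamoto hybrid pattern (encryption side only).
# `pk_encrypt(pk, seed, coins)` is a hypothetical asymmetric interface.
import hashlib
import os

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"H" + b"".join(parts)).digest()

def G(seed: bytes, length: int) -> bytes:
    # Simple hash-based mask generation for the symmetric part.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(b"G" + seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def fo_hybrid_encrypt(pk, message: bytes, pk_encrypt):
    sigma = os.urandom(32)                     # random seed
    coins = H(sigma, message)                  # derandomized asymmetric coins
    c1 = pk_encrypt(pk, sigma, coins)          # asymmetric part
    c2 = bytes(m ^ k for m, k in zip(message, G(sigma, len(message))))  # masked message
    return c1, c2
```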
An Algebraic Analysis of Trivium Ciphers based on the Boolean Satisfiability Problem
Trivium is a stream cipher candidate of the eSTREAM project.
It has successfully moved into phase three of the selection process under
the hardware category. No attacks faster than exhaustive search have
been reported on Trivium so far.
Bivium-A and Bivium-B are simplified versions of Trivium
that are built on the same design principles but with two registers.
The simplified design is useful in investigating Trivium type ciphers
with a reduced complexity and provides insight into effective
attacks which could be extended to Trivium.
This paper focuses on an algebraic analysis which
uses the Boolean satisfiability problem in propositional logic.
For reduced variants of the cipher,
this analysis recovers the internal state with
a minimal amount of keystream observations.
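For reference, the structure such an algebraic analysis encodes into SAT is the Trivium state-update function from the published eSTREAM specification. The Python sketch below shows keystream generation only (key/IV loading omitted, state assumed already initialized); it is illustrative background, not material from the paper.

```python
# Minimal sketch of Trivium's keystream generation (eSTREAM specification).
# State bits are s[1..288] (s[0] unused); the state is assumed initialized.
def trivium_keystream(state, nbits):
    s = list(state)
    out = []
    for _ in range(nbits):
        t1 = s[66] ^ s[93]
        t2 = s[162] ^ s[177]
        t3 = s[243] ^ s[288]
        out.append(t1 ^ t2 ^ t3)                 # keystream bit
        t1 ^= (s[91] & s[92]) ^ s[171]
        t2 ^= (s[175] & s[176]) ^ s[264]
        t3 ^= (s[286] & s[287]) ^ s[69]
        s[1:94]    = [t3] + s[1:93]              # register A (93 bits)
        s[94:178]  = [t1] + s[94:177]            # register B (84 bits)
        s[178:289] = [t2] + s[178:288]           # register C (111 bits)
    return out
```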
Computationally Sound Mechanized Proofs of Correspondence Assertions
We present a new mechanized prover for showing correspondence
assertions for cryptographic protocols in the computational model.
Correspondence assertions are useful in particular for establishing
authentication.
Our technique produces proofs by sequences of games, as standard in
cryptography. These proofs are valid for a number of sessions
polynomial in the security parameter, in the presence of an active
adversary. Our technique can handle a wide variety of cryptographic
primitives, including shared- and public-key encryption, signatures,
message authentication codes, and hash functions. It has been
implemented in the tool CryptoVerif and successfully tested on
examples from the literature.
CCA2-Secure Threshold Broadcast Encryption with Shorter Ciphertexts
In a threshold broadcast encryption scheme, a sender chooses (ad-hoc) a set of $n$ receivers and a threshold $t$, and then encrypts a message by using the public keys of all the receivers, in such a way that the original plaintext can be recovered only if at least $t$ receivers cooperate. Previously proposed threshold broadcast encryption schemes have ciphertexts whose length is $\O(n)$.
In this paper, we propose new schemes, for both PKI and identity-based scenarios, where the ciphertexts' length is $\O(n-t)$. The construction uses secret sharing techniques and the Canetti-Halevi-Katz transformation to achieve chosen-ciphertext security. The security of our schemes is formally proved under the Decisional Bilinear Diffie-Hellman (DBDH) Assumption.
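Since the construction above builds on secret sharing, the following minimal Python sketch of Shamir's $(t,n)$ threshold scheme over a prime field may help fix ideas. It is a generic illustration of that ingredient only, not the paper's scheme; the field modulus is an arbitrary illustrative choice.

```python
# Minimal (t, n) Shamir secret sharing over a prime field (generic sketch).
import random

P = 2**127 - 1  # a Mersenne prime used as the field modulus for illustration

def share(secret, t, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 using any t of the shares.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```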
An Interesting Member ID-based Group Signature
We propose an efficient member ID-based group signature, i.e., one in
which verifying the output of the algorithm OPEN run by the group
manager does not require referring to a registration table (acting as
a certification list).
The proposal is free of GM-frameability, i.e., a member's secret key
is not escrowed to the GM, which, as far as we know, is unique among
all known member ID-based group signatures.
The proposal also has two notable extra features: first, in contrast
to other schemes, the group manager does not have to maintain a
registration table to obtain the real identity of the signer; second,
if a registration table is nevertheless maintained, the scheme
provides an alternative countermeasure against a tampered registration
table besides applying integrity techniques to the table.
Attacking the IPsec Standards in Encryption-only Configurations
At Eurocrypt 2006, Paterson and Yau demonstrated how flaws in the Linux implementation of IPsec could be exploited to break encryption-only configurations of ESP, the IPsec encryption protocol. Their work highlighted the dangers of not using authenticated encryption in fielded systems, but did not constitute an attack on the actual IPsec standards themselves; in fact, the attacks of Paterson and Yau should be prevented by any standards-compliant IPsec implementation. In contrast, this paper describes new attacks which break any RFC-compliant implementation of IPsec making use of encryption-only ESP. The new attacks are both efficient and realistic: they are ciphertext-only and need only the capability to eavesdrop on ESP-encrypted traffic and to inject traffic into the network. The paper also reports our experiences in applying the attacks to a variety of implementations of IPsec, and reflects on what these experiences tell us about how security standards should be written so as to simplify the task of software developers.
Rebuttal of overtaking VEST
VEST is a set of four stream cipher families targeted at semiconductor applications. All VEST family members support efficient encryption, single-pass authenticated encryption, and collision-resistant hashing in a single low-area module. VEST was submitted by Synaptic
Laboratories to the ECRYPT NoE eSTREAM project in 2005. Recently, a single-digit typographical error was identified in the VEST counter-diffuser description. Shortly afterwards, Antoine Joux and Jean-René Reinhard published collisions in the counter diffuser based upon the erroneous description. By extending these collisions across the entire cipher state, they were able to explore various attack scenarios. We prove that correcting the typographical error removes all the exploitable collisions in the counter diffuser during key and IV loading operations, thereby establishing that the Joux-Reinhard attacks are an artefact of the erroneous description. Complete test vectors are included.
Obtaining a secure and efficient key agreement protocol from (H)MQV and NAXOS
LaMacchia, Lauter and Mityagin recently presented a strong security definition for authenticated key agreement strengthening the well-known Canetti-Krawczyk definition. They also described a protocol, called NAXOS, that enjoys a simple security proof in the new model. Compared to MQV and HMQV, NAXOS is less efficient and cannot be readily modified to obtain a one-pass protocol. On the other hand MQV does not have a security proof, and the HMQV security proof is extremely complicated.
This paper proposes a new authenticated key agreement protocol, called
CMQV (`Combined' MQV), which incorporates design principles from MQV,
HMQV and NAXOS. The new protocol achieves the efficiency of HMQV and admits a natural one-pass variant. Moreover, we present a simple and intuitive proof that CMQV is secure in the LaMacchia-Lauter-Mityagin model.
On the Security of three Versions of the WAI Protocol in Chinese WLAN Implementation Plan
In this paper we investigate the security properties of three versions of the WAI protocol in the Chinese WLAN implementation plan. We first revisit the security analysis that has been done on versions 1 and 2: we show that the security proof given by Li, Moon, and Ma is incorrect and that the alternative protocol EWAP of Zhang and Ma is insecure. We further analyse the third version of the WAI protocol and prove its security in the Canetti-Krawczyk model. In addition, we also provide some practical security analysis of this version.
Certificateless Encryption Schemes Strongly Secure in the Standard Model
This paper presents the first constructions for certificateless encryption (CLE) schemes that are provably secure against strong adversaries in the standard model. It includes both a generic construction for a strongly secure CLE scheme from any passively secure scheme as well as a concrete construction based on the Waters identity-based encryption scheme.
Breaking 104 bit WEP in less than 60 seconds
We demonstrate an active attack on the WEP protocol that is able to recover a 104-bit WEP key using fewer than 40,000 frames in 50% of all cases. The IVs of these packets can be randomly chosen. This improves the number of required frames by more than an order of magnitude over the best known key-recovery attacks on WEP. On an IEEE 802.11g network, the required number of frames can be obtained by re-injection in less than a minute. The required computational effort is approximately 2^{20} RC4 key setups, which is negligible on current desktop and laptop CPUs.
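The work factor above is counted in RC4 key setups; for reference, the Python sketch below shows the standard RC4 key-scheduling algorithm (in WEP, the per-packet RC4 key is the 3-byte IV followed by the root key). This is illustrative background only, not the attack itself.

```python
# The RC4 key-scheduling algorithm ("key setup") counted in the attack's work factor.
def rc4_ksa(key: bytes):
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    return S
```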
Rerandomizable RCCA Encryption
We give the first perfectly rerandomizable, Replayable-CCA (RCCA) secure encryption scheme, positively answering an open problem of Canetti et al. [CRYPTO 2003]. Our encryption scheme, which we call the Double-strand Cramer-Shoup scheme, is a non-trivial extension of the popular Cramer-Shoup encryption. Its security is based on the standard DDH assumption. To justify our definitions, we define a powerful "Replayable Message Posting" functionality in the Universally Composable (UC) framework, and show that any encryption scheme that satisfies our definitions of rerandomizability and RCCA security is a UC-secure implementation of this functionality. Finally, we enhance the notion of rerandomizable RCCA security by adding a receiver-anonymity (or key-privacy) requirement, and show that it results in a correspondingly enhanced UC functionality. We leave open the problem of constructing a scheme that achieves this enhancement.
Smooth Projective Hashing and Two-Message Oblivious Transfer
We present a general framework for constructing two-message oblivious transfer protocols using a modification of Cramer and Shoup's notion of smooth projective hashing (2002). This framework is an abstraction of the two-message oblivious transfer protocols of Naor and Pinkas (2001) and Aiello et al. (2001), whose security is based on the Decisional Diffie Hellman Assumption. In particular, we give two new oblivious transfer protocols. The security of one is based on the Quadratic Residuosity Assumption, and the security of the other is based on the $N$'th Residuosity Assumption. Compared to other applications of smooth projective hashing, in our context we must deal also with maliciously chosen parameters, which raises new technical difficulties.
We also improve on prior constructions of factoring-based smooth universal hashing, in that our constructions *do not require that the underlying RSA modulus is a product of safe primes*. (This holds for the schemes based on the Quadratic Residuosity Assumption as well as the ones based on the $N$'th Residuosity Assumption.) In fact, we observe that the safe-prime requirement is unnecessary for many prior constructions. In particular, the factoring-based CCA secure encryption schemes due to Cramer-Shoup, Gennaro-Lindell, and Camenisch-Shoup remain secure even if the underlying RSA modulus is not a product of safe primes.
Improving the lower bound on the higher order nonlinearity of Boolean functions with prescribed algebraic immunity
The recent algebraic attacks have received a lot of attention in
cryptographic literature. The algebraic immunity of a Boolean
function quantifies its resistance to the standard algebraic attacks
on the pseudo-random generators using it as a nonlinear filtering or
combining function. Very few results have been found concerning its
relation to the other cryptographic parameters or to the $r$-th
order nonlinearity. As recalled by Carlet at Crypto'06, many papers
have illustrated the importance of the $r$th-order nonlinearity
profile (which includes the first-order nonlinearity). The role of
this parameter relative to the currently known attacks has also been
shown for block ciphers. Recently, two lower bounds involving
the algebraic immunity on the $r$th-order nonlinearity have been
shown by Carlet \emph{et al.} Neither of them improves upon the other
one in all situations. In this paper, we prove a new lower bound on
the $r$th-order nonlinearity profile of Boolean functions, given
their algebraic immunity, that improves significantly upon one of
these lower bounds for all orders and upon the other one for low
orders.
A Zero-Knowledge Identification and Key Agreement Protocol
In this paper, we propose a zero-knowledge authenticated key agreement protocol with key confirmation (AKC) in the asymmetric setting. The protocol has several desirable security attributes shared with classical AKCs such as STS and MQV. One highlight of our protocol is its zero-knowledge property, which enables succinct proofs of the claimed security attributes, while the overhead in communication and computation resulting from the special design needed to achieve zero-knowledge is insignificant.
Quadratic Almost Perfect Nonlinear Functions With Many Terms
We introduce a new infinite family of multiterm functions that
are APN on $GF(2^{2k})$ for odd $k$.
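For reference, the (standard) property assumed here is the following: a function $F: GF(2^n) \rightarrow GF(2^n)$ is almost perfect nonlinear (APN) if, for every nonzero $a$ and every $b$ in $GF(2^n)$,
$$ \#\{x \in GF(2^n) : F(x+a)+F(x)=b\} \le 2. $$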
High Efficiency Feedback Shift Register: $\sigma-$LFSR
We introduce a new kind of word-oriented linear feedback shift
register called the $\sigma$-LFSR, which is constructed from the
instructions of modern processors and has a fast software
implementation. We offer an algorithm to search for good primitive
$\sigma$-LFSRs. In particular, we give two examples, HHZ-1 and HHZ-2,
and compare their efficiency and security with those of the LFSRs
appearing in stream ciphers such as SNOW, SOBER and Turing. Our
results show that replacing the LFSRs in SNOW, SOBER and Turing with
HHZ-1 improves both security and the efficiency of fast software
implementation.
An Enhanced ID-based Deniable Authentication Protocol on Pairings
Deniability is a privacy property which enables protocol principals to deny their involvement after having taken part in a particular protocol run. Recently, Chou et al. proposed an ID-based deniable authentication protocol after demonstrating the vulnerability of Cao et al.'s protocol to the Key-Compromise Impersonation (KCI) attack. In addition, they claimed that their protocol is not only secure but also achieves both the authenticity and the deniability properties. However, in this paper, we demonstrate that Chou et al.'s protocol is not flawless, as it remains insecure due to its susceptibility to the KCI attack. Based on this, we propose an enhanced scheme which preserves authenticity and deniability and resists the KCI attack.
Decomposed Attack for the Jacobian of a Hyperelliptic Curve over an Extension Field
We study the solution of the discrete logarithm problem for
the Jacobian of a curve of genus $g$ defined over an extension field $\mathbb{F}_{q^n}$, by
the decomposed attack, considering a set of external elements $B_0$ given by points
of the curve whose x-coordinates are defined in $\mathbb{F}_q$. In the decomposed
attack, an element of the group which can be written as a sum of
external elements is called (potentially) decomposed, and the
set of the terms that appear in the sum is called its decomposed factor. In
order to run the decomposed attack, a test for (potential)
decomposability and the computation of the decomposed factor are
needed. Here, we show that the test of whether an element of the
Jacobian (i.e., a reduced divisor) can be written as a sum of $ng$
external elements, as well as the computation of the decomposed factor, are
reduced to the problem of solving a multivariate polynomial system
of equations, by using the Riemann-Roch theorem. In particular, in the
case of a hyperelliptic curve, we construct a concrete system of equations
which satisfies these properties and consists of $(n^2-n)g$ quadratic
equations. Moreover, in the cases $(g, n) = (1, 3), (2, 2)$ and $(3, 2)$, we
give examples of the concrete computation of the decomposed factors
using the computer algebra system Magma.
Privacy-Preserving Distributed Set Intersection
With the growing demand for database outsourcing and its attendant security concerns, we investigate privacy-preserving set intersection in a distributed scenario. We propose a one-round protocol for privacy-preserving set intersection based on a combination of a secret sharing scheme and homomorphic encryption. We then show that, with an extra permutation performed by each of the contacted servers, the cardinality of the set intersection can be computed efficiently. All protocols constructed in this paper are provably secure against a semi-honest adversary under the Decisional Diffie-Hellman assumption.
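As an illustration of the homomorphic-encryption ingredient (and not of the paper's protocol), the toy Python sketch below shows a standard additively homomorphic scheme, Paillier, in which multiplying ciphertexts adds the underlying plaintexts; the parameters are deliberately tiny and insecure.

```python
# Toy Paillier instance illustrating additively homomorphic encryption.
# Not the paper's protocol; parameters are for illustration only and insecure.
import math, random

p, q = 293, 433                          # toy primes
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)                     # valid because g = n + 1

def enc(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

c1, c2 = enc(7), enc(35)
assert dec((c1 * c2) % n2) == 42         # ciphertext product = plaintext sum
```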
Construction of Pairing-Friendly Elliptic Curves
We explain a method of finding the polynomials representing $\sqrt{-D}$ and $\zeta_k$ over the field containing $\sqrt{-D}$ and $\zeta_k$, and how to construct pairing-friendly elliptic curves over the cyclotomic fields containing ${\mathbb Q} (\zeta_k, \sqrt{-D})$ for arbitrary $k$ and $D$ by the CP method. By using the factorization of the cyclotomic polynomial combined with some polynomial, we extend the construction over cyclotomic fields to a construction over some extensions of the cyclotomic fields containing ${\mathbb Q} (\zeta_k, \sqrt{-D})$. We explain the limitation in finding more families of pairing-friendly elliptic curves with embedding degree 10. For all computations, we use PARI-GP \cite{GP}.
How to Enrich the Message Space of a Cipher
Given (deterministic) ciphers $\calE$ and~$E$ that can encipher messages of $\el$ and $n$ bits, respectively, we construct a cipher~$\calE^*=XLS[\calE,E]$ that can encipher messages of $\el+s$ bits for any $s<n$. Enciphering such a string will take one call to~$\calE$ and two calls to~$E$. We prove that~$\calE^*$ is a strong pseudorandom permutation as long as~$\calE$ and~$E$ are. Our construction works even in the tweakable and VIL (variable-input-length) settings. It makes use of a multipermutation (a pair of orthogonal Latin squares), a combinatorial object not previously used to get a provable-security result.
An Improved Distinguisher for Dragon
The Dragon stream cipher is one of the Focus ciphers that have reached Phase 2 of the eSTREAM project.
In this paper, we present a new method of building a linear distinguisher for Dragon.
The distinguisher is constructed by exploiting
the biases of two S-boxes and the modular addition
which are basic components of the nonlinear function $F$.
The bias of the distinguisher is estimated to be around $2^{-75.32}$ which is
better than the bias of the distinguisher
presented by Englund and Maximov.
We have shown that Dragon is distinguishable from a random cipher
by using around $2^{150.6}$ keystream words and $2^{59}$ memory.
In addition, we present a very efficient algorithm for computing the bias of linear approximation
of modular addition.
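The paper's efficient algorithm for this bias computation is not reproduced here; purely as a point of reference, the brute-force Python sketch below computes, for small word sizes, the bias of approximating addition modulo $2^w$ by XOR at a given bit position.

```python
# Brute-force computation (not the paper's efficient algorithm) of the bias of
# approximating modular addition by XOR at a given bit, for small word sizes.
def addition_xor_bias(bit, width=8):
    matches, total = 0, (1 << width) ** 2
    mask = (1 << width) - 1
    for x in range(1 << width):
        for y in range(1 << width):
            s = (x + y) & mask
            matches += ((s >> bit) & 1) == (((x ^ y) >> bit) & 1)
    return matches / total - 0.5          # probability of agreement minus 1/2

# addition_xor_bias(0) == 0.5 (bit 0 is exact); addition_xor_bias(1) == 0.25.
```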
Knapsack Public-Key Cryptosystem Using Chinese Remainder Theorem
The realization of the quantum computer would make it possible to break
public-key cryptosystems based on the factoring problem
and the discrete logarithm problem.
It is believed that even the quantum computer cannot solve NP-hard problems
in polynomial time.
The subset sum problem is known to be NP-hard.
Merkle and Hellman proposed a knapsack cryptosystem using the subset sum problem.
However, it was broken by Shamir and by Adleman
because of the linearity of the modular transformation
and the special structure of the secret keys.
It can also be broken with the low-density attack because its density is not sufficiently high.
In this paper, we propose a new class of knapsack schemes without a modular transformation.
The special structure and the linearity are avoided by using
the Chinese remainder theorem as the trapdoor.
The proposed scheme has a high density and a large dimension,
making it sufficiently secure against a practical low-density attack.
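For contrast with the CRT trapdoor proposed above, here is a toy Python sketch of the classical (and, as the abstract notes, broken) Merkle-Hellman knapsack, whose superincreasing secret sequence and modular transformation are exactly the structure the Shamir/Adleman and low-density attacks exploit. It is illustrative only, not the proposed scheme.

```python
# Toy Merkle-Hellman knapsack: superincreasing secret sequence + modular transform.
import math, random

def keygen(k=8):
    w, total = [], 0
    for _ in range(k):                                   # superincreasing sequence
        nxt = random.randrange(total + 1, 2 * total + 2)
        w.append(nxt)
        total += nxt
    q = random.randrange(total + 1, 2 * total + 1)       # modulus > sum(w)
    r = next(x for x in range(2, q) if math.gcd(x, q) == 1)
    b = [(r * wi) % q for wi in w]                       # public knapsack
    return b, (w, q, r)

def encrypt(bits, b):
    return sum(bi for bit, bi in zip(bits, b) if bit)

def decrypt(c, priv):
    w, q, r = priv
    c = (c * pow(r, -1, q)) % q                          # undo the modular transform
    bits = []
    for wi in reversed(w):                               # greedy superincreasing solve
        bits.append(1 if c >= wi else 0)
        c -= wi * bits[-1]
    return list(reversed(bits))

# b, priv = keygen(); bits = [1,0,1,1,0,0,1,0]
# assert decrypt(encrypt(bits, b), priv) == bits
```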
A generalization of Secret Sharing Scheme on the Basis of Recovering Algorithm, K-RA
Extensive studies have been made of secret sharing schemes (SSS). In this paper new classes of SSS, referred to as K-SSS, $\rm{K_I}$-SSS, $\rm{K_{I\hspace{-.1em}I}}$-SSS and $\tilde{{\rm K}}$-SSS,
are presented on the basis of the recovering algorithm K-RA. As an application, we also present a method for recovering secret information memorized only by heart, based on a particular class of K-SSS, namely $\rm{K_I}$-SSS.
Isodual Reduction of Lattices
We define a new notion of a reduced lattice, based on a quantity
introduced in the LLL paper. We show that lattices reduced in this
sense are simultaneously reduced in both their primal and dual. We
show that the definition applies naturally to blocks, and therefore
gives a new hierarchy of polynomial time algorithms for lattice
reduction with fixed blocksize. We compare this hierarchy of
algorithms to previous ones. We then explore algorithms to provably
minimize the associated measure, and also some more efficient
heuristics. Finally we comment on the initial investigations of
applying our technique to the NTRU family of lattices.
Cryptanalysis of White-Box DES Implementations with Arbitrary External Encodings
At DRM 2002, Chow et al. presented a method for implementing the DES block cipher such that it becomes hard to extract the embedded secret key in a white-box attack context. In such a context, an attacker has full access to the implementation and its execution environment. In order to provide an extra level of security, an implementation shielded with external encodings was introduced by Chow et al. and improved by Link and Neumann. In this paper, we present an algorithm to extract the secret key from such white-box DES implementations. The cryptanalysis is a differential attack on obfuscated rounds, and works regardless of the shielding external encodings that are applied. The cryptanalysis has an average time complexity of $2^{14}$ and a negligible space complexity.
Another Look at Square Roots and Traces (and Quadratic Equations) in Fields of Even Characteristic
We discuss irreducible polynomials that can be
used to speed up square root extraction in fields of characteristic two.
We call such polynomials \textit{square root friendly}.
The obvious applications are to point halving methods for elliptic curves
and divisor halving methods for hyperelliptic curves.
We note the existence of square root friendly trinomials of a given
degree when we already know that an irreducible trinomial of the same
degree exists, and formulate a conjecture on the degrees of the terms of
square root friendly polynomials.
We also give a partial result
that goes in the direction of the conjecture.
Irreducible polynomials $p(X)$ such that the square root
$\zeta$ of a zero $x$ of $p(X)$ is a sparse polynomial are considered
and those for which $\zeta$ has minimal degree are characterized.
In doing this we discover a surprising connection between
these polynomials
and those defining polynomial bases
with an extremal number of trace-one elements.
We also show how to improve the speed of solving quadratic equations and
that the increase in the time required to perform modular reduction is
marginal and does not affect performance adversely.
Experimental results confirm that the new polynomials maintain their
promise. These results generalize work by Fong et al.\ to polynomials
other than trinomials. Point halving gets a speed-up of $20\%$ and the
performance of scalar multiplication based on point halving is improved
by at least $11\%$.
On the Role of Scheduling in Simulation-Based Security
In a series of papers, Küsters et al. investigated the
relationships between various notions of simulation-based
security. Two main factors, the placement of a ``master
process'' and the existence of ``forwarder processes'',
were found to affect the relationship between different
definitions. In this extended abstract, we add a new
dimension to the analysis of simulation-based security,
namely, the scheduling of concurrent processes. We show
that, when we move from sequential scheduling (as used in
previous studies) to task-based nondeterministic scheduling,
the same syntactic definition of security gives rise to
incomparable semantic notions of security. Under task-based scheduling, the hierarchy based on placement of ``master
process'' is no longer relevant, because no such designation
is necessary to obtain meaningful runs of a system. On
the other hand, the existence of ``forwarder processes''
remains an important factor.
Practical Password Recovery on an MD5 Challenge and Response
This paper shows an attack against the APOP protocol, a challenge-and-response protocol. We utilize Wang's attack for constructing collisions in MD5 and apply it to the APOP protocol. We confirm that the first 3 octets of the secret key can be recovered with several hundred queries in a man-in-the-middle setting.
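For context, the challenge-and-response mechanism being attacked is APOP as defined in RFC 1939: the client returns the MD5 digest of the server's challenge string concatenated with the shared secret. The Python sketch below shows that computation only; the banner and secret in the comment are the ones used in the RFC's own example.

```python
# The APOP challenge-response (RFC 1939): MD5 of the server challenge
# concatenated with the shared secret (the password being attacked).
import hashlib

def apop_response(challenge: bytes, password: bytes) -> str:
    return hashlib.md5(challenge + password).hexdigest()

# e.g. apop_response(b"<1896.697170952@dbc.mtview.ca.us>", b"tanstaaf")
# reproduces the digest in the RFC 1939 example.
```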
Practical Identity-Based Encryption (IBE) in Multiple PKG Environments and Its Applications
Identity-based encryption (IBE) schemes are usually used in multiple-PKG environments --- on the one hand, each administrative domain (e.g., a relatively small and closed organization) maintains its own private key generator (PKG); on the other hand, encryption across domains becomes a prevalent requirement. In this paper, we present a new IBE scheme using bilinear pairings. Compared with the famous IBE scheme of Boneh and Franklin, we show that ours is more practical in the multiple-PKG environment. We prove that our scheme achieves chosen-ciphertext security in the random oracle model, assuming the intractability of the standard Bilinear Diffie-Hellman (BDH) problem. As an application of our IBE scheme, we also propose an escrowed ElGamal scheme which possesses certain good properties in practice.
Inferring sequences produced by a linear congruential generator on elliptic curves missing high--order bits
Let $p$ be a prime and let $E(\F_p)$ be an elliptic curve defined over
the finite field $\F_p$ of $p$ elements. For a given point $G \in
E(\F_p)$ the linear congruential generator on elliptic curves (EC-LCG)
is a sequence $(U_n)$ of pseudorandom numbers defined by the relation
$$ U_n=U_{n-1}\oplus G=nG\oplus U_0,\quad n=1,2,\ldots,$$ where
$\oplus$ denotes the group operation in $E(\F_p)$ and $U_0 \in E(\F_p)$
is the initial value or seed. We show that if $G$ and sufficiently
many of the most significant bits of two consecutive values $U_n,
U_{n+1}$ of the EC-LCG are given, one can recover the seed $U_0$ (even
in the case where the elliptic curve is private) provided that the
former value $U_n$ does not lie in a certain small subset of
exceptional values. We also estimate limits of a heuristic approach
for the case where $G$ is also unknown. This suggests that for
cryptographic applications EC-LCG should be used with great care. Our
results are somewhat similar to those known for linear and
non-linear congruential pseudorandom number generators.
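A small Python sketch of the EC-LCG recurrence $U_n = U_{n-1} \oplus G$ may help make the definition concrete. The curve below, in short Weierstrass form over a tiny prime field, is an arbitrary illustrative choice, and the points $U_0$ and $G$ must be supplied by the caller; none of this is taken from the paper.

```python
# Toy generator for the EC-LCG sequence U_n = U_{n-1} (+) G on
# y^2 = x^3 + a*x + b over F_p (illustrative, insecure parameters).
p, a, b = 97, 2, 3

def ec_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                    # point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_lcg(U0, G, n):
    seq, U = [], U0
    for _ in range(n):
        U = ec_add(U, G)
        seq.append(U)
    return seq

# e.g. seq = ec_lcg(U0, G, 10) for any points U0, G on the curve above.
```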
Classes of Quadratic APN Trinomials and Hexanomials and Related Structures
A method for constructing differentially 4-uniform quadratic hexanomials has been recently introduced by J. Dillon. We give various generalizations of this method and we deduce the constructions of new infinite classes of almost perfect nonlinear quadratic trinomials and hexanomials from $\mathbb{F}_{2^{2m}}$ to $\mathbb{F}_{2^{2m}}$. We check for $m=3$ that some of these functions are CCZ-inequivalent to power functions.
Large Cyclic Subgroups of Jacobians of Hyperelliptic Curves
In this paper we obtain conditions on the divisors of the group order of the Jacobian of a hyperelliptic genus 2 curve, generated by the complex multiplication method described by Weng (2003) and Gaudry (2005). Examples, where these conditions imply that the Jacobian has a large cyclic subgroup, are given.
Somos Sequence Near-Addition Formulas and Modular Theta Functions
We have discovered conjectural near-addition formulas for Somos sequences. We have preliminary evidence suggesting the existence of modular theta functions.
Generic Certificateless Encryption in the Standard Model
Despite the large number of certificateless encryption schemes
recently proposed, many of them have been found to be insecure under
a practical attack called the \emph{malicious-but-passive} KGC attack,
since they all follow the same key generation procedure as
the one proposed by Al-Riyami and Paterson in ASIACRYPT 2003. The
only provably secure certificateless encryption scheme against this
attack is due to Libert and Quisquater (PKC 2006). However, the
security can only be shown in the random oracle model.
In this paper, we first show that a scheme which has a different key
generation procedure from that of Al-Riyami and Paterson also suffers
from the malicious-but-passive KGC attack. Our attacking techniques
are different from the previous attacks and may cause damage to a
greater extent than the previous ones. We also propose a generic
construction of certificateless encryption which can be proven secure
against this attack \emph{in the standard model}. This generic scheme
is not only the first one proven secure in the standard model, but is
also very efficient to instantiate. We also describe how to use short
signature and hybrid encryption to construct highly efficient
instantiations of this generic scheme.
Mesh Signatures : How to Leak a Secret with Unwitting and Unwilling Participants
We introduce the mesh signature primitive as an anonymous signature that borrows from ring signatures, but with added modularity and a much richer language for expressing signer ambiguity. The language can represent complex access structures, and in particular allows individual signature components to be replaced with modular certificate chains. As a result, withholding one's public key from view is no longer a shield against being named as a possible cosignatory; and hence, a mesh signature may be used as a ring signature substitute with compulsory enrollment.
We give an efficient construction based on bilinear maps in the common random string model. Our mesh signatures have linear size, achieve everlasting perfect anonymity, and as a special case induce the most efficient and first unconditionally anonymous ring signatures without random oracles or trusted setup authorities. We prove non-repudiation from a mild extension of the SDH assumption, which we introduce and justify meticulously.
HAPADEP: Human-Assisted Pure Audio Device Pairing
The number and diversity of electronic gadgets has been steadily increasing and they are becoming indispensable to more and more professionals and non-professionals alike. At the same time, there has been fairly little progress in secure pairing of such devices. The pairing challenge revolves around establishing on-the-fly secure communication without any trusted (on- or off-line) third parties between devices that have no prior association. The main security issue is the danger of so-called Man-in-the-Middle (MiTM) attacks, whereby an adversary impersonates one of the devices by inserting itself into the pairing protocol. One basic approach to countering these MiTM attacks is to involve the user in the pairing process. Therein lies the usability challenge since it is natural to minimize user burden. Previous research yielded some interesting secure pairing techniques, some of which ask too much of the human user, while others
assume the availability of specialized equipment (e.g., wires, photo or video cameras) on devices. Furthermore, all prior methods assumed the existence of a common digital (human-imperceptible) communication medium, such as Infrared, 802.11 or Bluetooth.
In this paper we introduce a very simple technique called HAPADEP (Human-Assisted Pure Audio Device Pairing). It places very little burden on the human user and requires no common means of electronic communication. Instead, HAPADEP uses the audio channel to exchange both data and verification information among devices. It makes secure
pairing possible even if devices are equipped only with a microphone and a speaker. Despite its simplicity, a number of interesting issues arise in the design of HAPADEP. We discuss design and implementation highlights as well as usability features and limitations.
PRIME POINTS ON ELLIPTIC CURVES AND THEIR IMPACT ON ECDLP
In this paper we show that some statistical properties of points on an elliptic curve can be used to form new equivalence classes. This can have an impact on solving the discrete logarithm problem (ECDLP), owing to the reduction of the set of points among which a logarithm is searched to points with particular features. It should lead to an improvement of the Pollard rho algorithm.
Arithmetic Operators for Pairing-Based Cryptography
Since their introduction in constructive cryptographic applications,
pairings over (hyper)elliptic curves are at the heart of an ever
increasing number of protocols. Software implementations being rather
slow, the study of hardware architectures became an active research
area. In this paper, we first study an accelerator for the $\eta_T$
pairing over $\mathbb{F}_3[x]/(x^{97}+x^{12}+2)$. Our architecture is
based on a unified arithmetic operator which performs addition,
multiplication, and cubing over $\mathbb{F}_{3^{97}}$. This design
methodology allows us to design a compact coprocessor ($1888$ slices
on a Virtex-II Pro~$4$ FPGA) which compares favorably with other
solutions described in the open literature. We then describe ways to
extend our approach to any characteristic and any extension field.
On the security of an image encryption scheme
This paper studies the security of a recently-proposed image encryption scheme based on chaos, and points out the following problems: 1) there exist a number of invalid keys and weak keys, and some keys are partially equivalent for the encryption/decryption processes; 2) given one chosen plain-image, a sub-key $K_{10}$ can be guessed with a smaller computational complexity than that of the simple brute-force attack; 3) given $O(10)$ (at most 128) chosen plain-images, a chosen-plaintext attack may be able to break the following part of the secret key: $(\{K_i\bmod 128\}_{i=4}^{10})$, which works very well when $K_{10}$ is not too large; 4) when $K_{10}$ is relatively small, a known-plaintext attack can be mounted with only one known plain-image to recover some visual information of other plain-images encrypted by the same key.
Black-Box Extension Fields and the Inexistence of Field-Homomorphic One-Way Permutations
The black-box field (BBF) extraction problem is, for a given field
$\F$, to determine a secret field element hidden in a black-box which
allows one to add and multiply values in $\F$ in the box and which reports
only equalities of elements in the box. This problem is of
cryptographic interest for two reasons. First, for $\F=\F_p$ it
corresponds to the generic reduction of the discrete logarithm problem
to the computational Diffie-Hellman problem in a group of prime order
$p$. Second, an efficient solution to the BBF problem proves the
inexistence of certain field-homomorphic encryption schemes whose
realization is an interesting open problem in algebra-based
cryptography. BBFs are also of independent interest in computational
algebra.
In the previous literature, BBFs had only been considered for the
prime field case. In this paper we consider a generalization of the
extraction problem to BBFs that are extension fields. More precisely
we discuss the representation problem defined as follows: For given
generators $g_1,\ldots,g_d$ algebraically generating a BBF and an
additional element $x$, all hidden in a black-box, express $x$
algebraically in terms of $g_1,\ldots,g_d$. We give an efficient
algorithm for this representation problem and related problems for
fields with small characteristic (e.g. $\F=\F_{2^n}$ for some $n$). We
also consider extension fields of large characteristic and show how to
reduce the representation problem to the extraction problem for the
underlying prime field.
These results imply the inexistence of field-homomorphic (as opposed
to only group-homomorphic, like RSA) one-way permutations for fields
of small characteristic.
An Algorithm for Finding Small Roots of Multivariate Polynomials over the Integers
In this paper we present a new algorithm for finding small roots of a system of multivariate polynomials over the integers based on lattice reduction techniques. Our simpler heuristic
method is inspired by algorithms for predicting pseudorandom numbers, and it can be considered another variant of Coppersmith's method for finding small solutions of integer bivariate polynomials.
We also apply the method to the well known problem of factoring an integer when we know the high-order bits of one of the factors.
Improvement on a Digital Signature Scheme without using One-way Hash and Message Redundancy
Digital signature schemes based on public-key cryptosystems generally permit existential forgery unless they are equipped with some message formatting mechanism, such as a hash function or padded redundancy. In 2004, Chang et al. proposed a new digital signature scheme and claimed that, without using any hash function or padding any redundancy, the scheme can resist forgery attacks. However, many attacks on Chang et al.'s scheme were subsequently presented. Kang et al. then gave an effective improvement to resist these forgery attacks. In this letter, we give a further improvement that shortens the signature. Our improvement keeps the security of Kang et al.'s scheme and makes it more efficient in computation and communication.
Non-Interactive Proofs for Integer Multiplication
We present two universally composable and practical
protocols by which a dealer can, verifiably and
non-interactively, secret-share an integer among a set of players.
Moreover, at small extra cost and using a distributed verifier proof,
it can be shown in zero-knowledge
that three shared integers $a,b,c$ satisfy $ab =c$. This implies by
known reductions non-interactive zero-knowledge proofs that a shared
integer is in a given interval, or that one secret integer is larger
than another. Such primitives are useful, e.g., for supplying
inputs to a multiparty computation protocol, such as an auction or
an election. The protocols use various set-up assumptions, but do
not require the random oracle model.
MultiCollision Attack on the Compression Functions of MD4 and 3-Pass HAVAL
In this paper, we present a new type of MultiCollision attack on the compression functions of both MD4 and 3-Pass HAVAL. For MD4, we utilize two feasible distinct collision differential paths to find a 4-collision with 2^{19} MD4 computations. For 3-Pass HAVAL, we present three near-collision differential paths to find an 8-near-collision with 2^{9} HAVAL computations.
Constant Size Ciphertext HIBE in the Augmented Selective-ID Model and its Extensions
At Eurocrypt 2005, Boneh, Boyen and Goh presented a constant size ciphertext hierarchical identity based encryption (HIBE) protocol. Our main contribution is to present a variant of the BBG-HIBE. The new HIBE is proved to be secure (without any degradation) in an extension of the sID model (denoted the s$^+$-ID model) and the components of the identities are from $\bbbz_p$, where $p$ is a suitable large prime. The BBG-HIBE is proved to be secure in the selective-ID (sID) security model and the components of the identities are from
$\bbbz_p^*$. In the s$^+$-ID model the adversary is allowed to vary the length of the challenge identity whereas this is not allowed in the sID model. The new HIBE shares all the good features of the BBG-HIBE. The drawback is that the public parameters and the private key are longer than that of the BBG-HIBE. We also provide two more extensions of the basic constant size ciphertext HIBE. The first is a constant size ciphertext HIBE secure in the generalised selective-ID model $\clsM_2$. The second one is a product construction composed of two HIBEs and a trade-off is possible between the private key size and the ciphertext size.
Public Key Encryption Which is Simultaneously a Locally-Decodable Error-Correcting Code
In this paper, we introduce the notion of a Public-Key Encryption Scheme that is also a Locally-Decodable Error-Correcting Code (PKLDC). In essence, this is a protocol that is semantically-secure in the standard sense, but possesses the additional property that it is a binary error-correcting locally-decodable code against any polynomial-time Adversary. That is, we allow a polynomial-time Adversary to read the entire ciphertext, perform any polynomial-time computation and change an arbitrary (i.e. adversarially chosen) constant fraction of {\em all} bits of the ciphertext. The goal of the Adversary is to cause error in decoding any bit of the plaintext. Nevertheless, the decoding algorithm can decode {\bf all} bits of the plaintext (given the corrupted ciphertext) while making a mistake on \emph{any} bit of the plaintext with only a negligible in $k$ error probability. In addition, the decoding algorithm has a {\bf Local Decodability} property. That is, given a corrupted ciphertext of $E(x)$ the decoding algorithm, for any $1 \le i \le n$, can recover the $i$'th bit of the plaintext $x$ with overwhelming probability reading a sublinear (in $|x|$) number of bits of the corrupted ciphertext and performing computation polynomial in the security parameter $k$.
We present a general reduction from any semantically-secure encryption protocol and any computational Private Information Retrieval (PIR) protocol to a PKLDC. In particular, since it was shown that homomorphic encryption implies PIR, we give a general reduction from any semantically-secure homomorphic encryption protocol to a PKLDC. Applying our construction to the best known PIR protocol (that of Gentry and Ramzan), we obtain a PKLDC, which for messages of size $n$ and security parameter $k$ achieves ciphertexts of size $\O(n)$, public key of size $\O(n+k)$, and locality of size $\O(k^2)$. This means that for messages of length $n = \omega(k^{2+\epsilon})$, we can decode bits of the plaintext from a corrupted ciphertext while doing computation sublinear in $n$. We emphasize that this protocol achieves codewords that are only a \emph{constant} times larger than the underlying plaintext, while the best known locally-decodable codes (due to Yekhanin) have codewords that are only slightly subexponential in the length of the plaintext. In addition, we believe that the tools and techniques developed in this paper will be of independent interest in other settings as well.
Deniable Authentication on the Internet
Deniable authentication is a technique that allows one party to
send messages to another while the latter cannot prove to a third
party the fact of communication. In this paper, we first formalize a
natural notion of deniable security and naturally extend the
basic authenticator theorem by Bellare et al. \cite{bck98} to the
setting of deniable authentication. Of independent interest, this
extension is achieved by defining a deniable MT-authenticator
via a game. This game is essentially borrowed from the notion
of universal composition \cite{can01} although we do not assume any
result or background about it. Then we construct two
deniable MT-authenticators: one based on an uncontrollable random
oracle and one based on a PKI, both of which are
just 3-round protocols. The second construction assumes that
the receiver owns a secret key. Such a setup assumption is very
popular in the real world. Without this assumption, none of the
previous protocols achieves widely satisfactory performance when
applied in an Internet-like environment. Finally, as our application, we
obtain key exchange protocols that are deniably secure in the real
world.
Revisiting an Efficient Elliptic Curve Key Agreement Protocol
A recent paper by Wang \emph{et al.} has revealed a vulnerability in the ECKE-1 key agreement protocol. In particular, contrary to the author's claims, protocol ECKE-1 is shown to be susceptible to a key-compromise impersonation attack. This attack was also independently pointed out by the author in another recent paper published in the EURASIP Journal on Embedded Systems. Here we present a revised version of the protocol, ECKE-1R, that is key-compromise impersonation resilient at the expense of a higher computational workload and communication complexity with respect to the original protocol ECKE-1.
Weakly only Unforgeable Signature and Its Application in Group Signature
If a signature scheme is secure in the sense that no forgery on
any new message (i.e., a message that has never been signed) is
available to any computationally restricted adversary, it is said to be
weakly unforgeable (wUF), in contrast to strongly unforgeable
(sUF), meaning that no new signature on any old message (i.e., a valid
signature on the message is already known) is available to such
adversaries. sUF signatures are generally considered advantageous
over wUF ones because of the preference for a higher level of security. But
the case may be different when they are employed to construct
group signatures.
Signatures that are wUF but not sUF, called WoUF signatures, are
investigated in this paper. It is found that by applying a
generic construction to WoUF signatures with indirect signability
and perfect unlinkability (both also defined in this paper), we can
regenerate many efficient group signatures from the literature.
We also propose improvements to the group signature schemes of CL04,
NSN04, KY05, in line with our generic construction.
How To Find Many Collisions of 3-Pass HAVAL
The hash function HAVAL is an Australian extension
of well known Merkle-Damgård hash functions such as MD4 and MD5.
It has three variants, $3$-, $4$- and $5$-pass HAVAL.
On $3$-pass HAVAL,
the best known attack finds a collision pair
with $2^{7}$ computations of the compression function.
To find $k$ collision pairs,
it requires $2^{7}k$ computations.
In this paper,
we present a better collision attack on $3$-pass HAVAL,
which can find $k$ collision pairs with only $2k+33$ computations.
Further,
our message differential is different from the previous ones.
(It is important to find collisions for different message differentials.)
MPC vs. SFE: Perfect Security in a Unified Corruption Model
Secure function evaluation (SFE) allows a set of players to compute
an arbitrary agreed function of their private inputs, even if an
adversary may corrupt some of the players. Secure multi-party
computation (MPC) is a generalization allowing one to perform an
arbitrary on-going (also called reactive or stateful) computation
during which players can receive outputs and provide new inputs at
intermediate stages.
At Crypto~2006, Ishai \emph{et al.} considered mixed threshold
adversaries that either passively corrupt some fixed number of players,
or, alternatively, actively corrupt some (smaller) fixed number of
players, and showed that for certain thresholds, cryptographic SFE is
possible, whereas cryptographic MPC is not.
However, this separation does not occur when one considers
\emph{perfect} security. Actually, past work suggests that no such
separation exists, as all known general protocols for perfectly secure SFE
can also be used for MPC. Also, such a separation does not show up with
\emph{general adversaries}, characterized by a collection of corruptible
subsets of the players, when considering passive and active corruption.
In this paper, we study the most general corruption model where the
adversary is characterized by a collection of adversary classes, each
specifying the subset of players that can be actively, passively, or
fail-corrupted, respectively, and show that in this model, perfectly
secure MPC separates from perfectly secure SFE. Furthermore, we derive
the exact conditions on the adversary structure for the existence of
perfectly secure SFE resp.~MPC, and provide efficient protocols for both
cases.
On bent functions with zero second derivatives
It is proved that a bent function has zero second derivative with respect to $a$, $b$, $a \ne b$, if and only if it is affine on all the flats parallel to the two dimensional subspace $V = \langle a, b \rangle$.
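For concreteness, the second derivative referred to here is, in the usual notation,
$$ D_aD_b f(x) = f(x) \oplus f(x\oplus a) \oplus f(x\oplus b) \oplus f(x\oplus a\oplus b), $$
so the statement says that $D_aD_b f$ vanishes identically if and only if $f$ is affine on every flat parallel to $\langle a, b \rangle$.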
Almost Secure (1-Round, n-Channel) Message Transmission Scheme
It is known that perfectly secure ($1$-round, $n$-channel) message transmission (MT) schemes exist if and only if $n \geq 3t+1$,
where $t$ is the number of channels that the adversary can corrupt.
Then does there exist an {\it almost} secure MT scheme for $n=2t+1$? In this paper, we first sum up a number of flaws of the previous {\it almost} secure MT scheme presented at Crypto 2004. (The authors already noted in their presentation at Crypto 2004 that their scheme was flawed.) We next show an equivalence between almost secure MT schemes and secret sharing schemes with cheaters. By using our equivalence, we derive a lower bound on the communication complexity
of almost secure MT schemes. Finally, we present a near-optimal scheme which meets our bound approximately. This is the first construction of provably secure almost secure ($1$-round, $n$-channel) MT schemes for $n=2t+1$.
Weaknesses in the Pseudorandom Bit Generation Algorithms of the Stream Ciphers TPypy and TPy
The stream ciphers Py and Py6 were designed by Biham and Seberry for the ECRYPT-eSTREAM
project in 2005. However, due to several recent cryptanalytic attacks on them, a
strengthened version Pypy was proposed to rule out those attacks. The ciphers have been
promoted to the `Focus' ciphers of Phase II of the eSTREAM project. The impressive
speed of the ciphers makes them forerunners in the competition. Unfortunately, even the
new cipher Pypy was found to retain weaknesses, forcing the designers to again go for
modifications. As a result, three new ciphers TPypy, TPy and TPy6 were built. Among all the
members of the Py-family of ciphers, TPypy is conjectured to be the strongest. So far,
there is no known attack on TPypy. This paper shows that the security of TPypy does not
grow exponentially with the key size. The main achievement of the paper is the detection of
input-output correlations of TPypy that allow us to build a distinguisher with $2^{281}$
randomly chosen key/IVs and as many output words (each key generating one output word). The
cipher TPypy was claimed by the designers to be secure with keysize up to 256 bytes, i.e.,
2048 bits. Our results establish that the TPypy fails to provide adequate security if the
keysize is longer than 35 bytes, i.e., 280 bits. Note that the distinguisher is built
within the design specifications of the cipher. Because of remarkable similarities between
the TPypy and the TPy, our attacks are shown to be effective for TPy also. The paper also
points out how the other members of the Py-family (i.e., TPy6, Py6, Pypy and Py) are also
weak against the current and some existing attacks.
A Cramer-Shoup Encryption Scheme from the Linear Assumption and from Progressively Weaker Linear Variants
We describe a CCA-secure public-key encryption scheme, in the
Cramer-Shoup paradigm, based on the Linear assumption of Boneh,
Boyen, and Shacham. Through a comparison to the Kiltz
tag-encryption scheme from TCC 2006, our scheme gives evidence
that the Cramer-Shoup paradigm yields CCA encryption with shorter
ciphertexts than the Canetti-Halevi-Katz paradigm.
We present a generalization of the Linear assumption into a
family of progressively weaker assumptions and show how to
instantiate our Linear Cramer-Shoup encryption using the
progressively weaker members of this family.
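For readers who want the base assumption spelled out (this is the standard statement, included here as a reminder rather than taken from the paper): the Decision Linear assumption of Boneh, Boyen, and Shacham says that for random generators $u, v, h$ of a group $G$ and random exponents $a, b$, the tuple
$$(u,\ v,\ h,\ u^{a},\ v^{b},\ h^{a+b})$$
is computationally indistinguishable from $(u, v, h, u^{a}, v^{b}, Z)$ with $Z$ uniform in $G$.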
Public Key Encryption that Allows PIR Queries
Consider the following problem: Alice wishes to maintain her email
using a storage-provider Bob (such as a Yahoo! or Hotmail e-mail
account). This storage-provider should provide for Alice the ability
to collect, retrieve, search and delete emails but, at the same
time, should learn neither the content of messages sent from the
senders to Alice (with Bob as an intermediary), nor the search
criteria used by Alice. A trivial solution is that messages will be
sent to Bob in encrypted form and Alice, whenever she wants to
search for some message, will ask Bob to send her a copy of the
entire database of encrypted emails. This however is highly
inefficient. We will be interested in solutions that are communication-efficient and, at the same time, respect the privacy of Alice. In this paper, we show how to create a public-key encryption scheme for Alice that allows PIR searching over encrypted documents. Our solution provides a theoretical answer to an open problem posed by Boneh, DiCrescenzo, Ostrovsky and Persiano on ``Public-key Encryption with Keyword Search'', providing the first scheme that does not reveal any partial information regarding the user's search (including the access pattern) in the public-key setting and with non-trivially
small communication complexity.
The main technique of our solution also allows for Single-Database PIR writing with sub-linear communication complexity, which we consider of independent interest.
Last updated: 2007-06-05
A Hybrid Approach to Concurrent Error Detection for a Compact ASIC Implementation of the Advanced Encryption Standard
In this paper, we investigate the application of concurrent error detection circuitry to a compact application-specific integrated circuit (ASIC) implementation of the Advanced Encryption Standard (AES). The specific objective of the design is to develop a method suitable for compact ASIC implementations targeted to embedded systems such that the system is resistant to fault attacks. To provide the error detection, recognizing that previously proposed schemes are not well suited to compact implementations, it is proposed to adopt a hybrid approach consisting of parity codes in combination with partial circuit redundancy. For compact ASIC implementations, taking such an approach gives a better ability to detect faults than simple parity codes, with less area cost than proposed schemes which use full hardware redundancy. The results of the implementation analysis in this paper show that it is possible to implement an error detection scheme that is robust to multiple faults in a compact AES design such that about 39% of the overall system is devoted to the error detection functionality.
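A toy sketch of the parity-code idea mentioned above may help; the Python fragment below is an editorial illustration only (it uses a made-up substitution table, TOY_SBOX, rather than the AES S-box, and it is not the paper's hybrid scheme). One parity bit is predicted per state byte, and a mismatch after the substitution step signals a fault; a single parity bit only catches faults that flip an odd number of bits, which is one reason the paper combines parity with partial redundancy.

    # Editorial sketch: byte-wise parity prediction for fault detection.
    def parity(byte):
        return bin(byte).count("1") & 1

    TOY_SBOX = [((x * 7) + 99) % 256 for x in range(256)]   # stand-in for the AES S-box
    PRED_PARITY = [parity(y) for y in TOY_SBOX]             # parities stored alongside the table

    def sub_bytes_checked(state, fault_mask=None):
        out = []
        for i, b in enumerate(state):
            y = TOY_SBOX[b]
            if fault_mask is not None:
                y ^= fault_mask[i]                          # simulate an injected fault
            if parity(y) != PRED_PARITY[b]:                 # parity mismatch => fault detected
                raise RuntimeError("fault detected at byte %d" % i)
            out.append(y)
        return out

    state = [0x00, 0x3A, 0x7F, 0xC4]
    sub_bytes_checked(state)                                # clean run passes
    try:
        sub_bytes_checked(state, fault_mask=[0, 0x01, 0, 0])
    except RuntimeError as err:
        print(err)                                          # the single-bit fault is caught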
Knowledge-Binding Commitments with Applications in Time-Stamping (Full Version)
We prove in a non-black-box way that every bounded list and set commitment scheme is knowledge-binding. This is a new and rather strong security condition, which makes the security definitions for time-stamping much more natural compared to the previous definitions, which assume unpredictability of adversaries. As a direct consequence, list and set commitment schemes with partial opening property are sufficient for secure time-stamping if the number of elements has an explicit upper bound $N$. On the other hand, white-box reductions are in a sense strictly weaker than black-box reductions. Therefore, we also extend and generalize the previously known reductions. The corresponding new reductions are $\Theta(\sqrt{N})$ times more efficient, which is important for global-scale time-stamping schemes where $N$ is very large.
Two Linear Distinguishing Attacks on VMPC and RC4A and Weakness of RC4 Family of Stream Ciphers (Corrected)
At FSE 2004, two new stream ciphers, VMPC and RC4A, were proposed.
VMPC is a generalisation of the stream cipher RC4, whereas RC4A is an
attempt to increase the security of RC4 by introducing an additional
permuter in the design. This paper is the first work
presenting attacks on VMPC and RC4A. We propose two linear
distinguishing attacks, one on VMPC of complexity $2^{39.97}$, and
one on RC4A of complexity $2^{58}$. We investigate the RC4 family of
stream ciphers and show some theoretical weaknesses of such constructions.
Nominative Signature: Application, Security Model and Construction
Since the introduction of nominative signatures in 1996, only a few schemes have been proposed and all of them have already been found flawed. In addition, no formal security model has been defined. Even more problematic, no convincing application has been proposed. Due to these problems, research on nominative signatures has almost stalled, and it is unknown whether a secure nominative signature scheme can be built or whether an application for it exists. In this paper, we give positive answers to these problems. First, we illustrate that a nominative signature is a better tool for building user certification systems, which were originally believed to be best implemented using a universal designated-verifier signature. Second, we propose a formal definition and a rigorous set of adversarial models for nominative signature. Third, we show that Chaum's undeniable signature can be transformed efficiently to a nominative signature and prove its security.
Last updated: 2007-11-02
Efficient Hierarchical Identity Based Signature in the Standard Model
The only known constructions of Hierarchical Identity Based Signatures that are proven secure in the strongest model without random oracles are based on the approach of attaching certificate chains or a hierarchical authentication tree with one-time signatures. Both construction methods lead to schemes that are somewhat inefficient and leave open the problem of an efficient direct construction. In this paper, we propose the first direct construction of a Hierarchical Identity Based Signature scheme that is
proven secure under the strongest model without relying on random oracles, using the more standard $q$-SDH assumption. It is computationally efficient and the signature size is constant.
When the number of hierarchical levels is set to one, our scheme is a normal identity based signature scheme. It enjoys the shortest public parameters and signatures when compared with others in the literature at the same security level.
Last updated: 2007-09-26
withdrawn
Low-Density Attack Revisited
The low-density attack proposed by Lagarias and Odlyzko is a powerful algorithm against the subset sum problem. The improved algorithm due to Coster et al. solves almost all problems of density $< 0.9408\ldots$ in the asymptotic sense. On the other hand, the subset sum problem itself is known to be NP-hard, and much effort has been devoted to establishing public-key cryptosystems based on the problem. In these cryptosystems, the densities of the subset sum problems should be higher than $0.9408\ldots$ in order to avoid the low-density attack. For example, the Chor-Rivest cryptosystem adopted subset sum problems with relatively high densities. In this paper, we further improve the low-density attack by incorporating the idea that integral lattice points can be covered with polynomially many spheres of shorter radius and lower dimension. As a result, the success probability of our attack can be higher than that of Coster et al.'s attack for fixed dimensions. The density bound is also improved for fixed dimensions. Moreover, we numerically show that our improved low-density attack achieves a higher success probability in the case of low Hamming-weight solutions, such as in the Chor-Rivest cryptosystem, if we assume SVP oracle calls.
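For context, the density referred to above is the standard quantity (not something introduced in this paper): for a subset sum instance $a_1, \ldots, a_n$,
$$d = \frac{n}{\log_2 \max_i a_i},$$
and the Lagarias-Odlyzko and Coster et al. attacks reduce low-density instances to finding short vectors in a suitable lattice, which is why the $0.9408\ldots$ bound is stated in terms of $d$.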
How to Derive Lower Bound on Oblivious Transfer Reduction
Suppose that we are given an ideal oblivious transfer protocol (OT). We wish to construct a larger OT by using the given OT as a black box. How many instances of the given ideal OT must then be invoked? For this problem, some lower bounds were derived using entropy. In this paper, we show tighter lower bounds
by using combinatorial techniques. Roughly speaking, our lower bounds are twice as large as the previous bounds.
Algebraic Lower Bounds for Computing on Encrypted Data
In cryptography, there has been tremendous success in building
primitives out of homomorphic semantically-secure encryption
schemes, using homomorphic properties in a black-box way. A few
notable examples of such primitives include private
information retrieval schemes and collision-resistant hash functions. In this paper, we illustrate a general
methodology for determining what types of protocols can be
implemented in this way and which cannot. This is accomplished by
analyzing the computational power of various algebraic structures
which are preserved by existing cryptosystems. More precisely, we
demonstrate lower bounds for algebraically generating generalized
characteristic vectors over certain algebraic structures, and
subsequently we show how to directly apply these abstract algebraic
results to obtain lower bounds on algebraic constructions of a number of
cryptographic protocols, including PIR-writing and private keyword
search protocols. We hope that this work will provide a simple
``litmus test'' of feasibility for use by other cryptographic
researchers attempting to develop new protocols that require
computation on encrypted data. Additionally, a precise mathematical
language for reasoning about such problems is developed in this
work, which may be of independent interest.
Constructing new APN functions from known ones
We present a method for constructing new quadratic APN functions from known ones. Applying this method to the Gold power functions we construct an APN function $x^3+\tr(x^9)$ over $\F_{2^n}$. It is proven that in general this function is CCZ-inequivalent to the Gold functions (and therefore EA-inequivalent to power functions), to the inverse and Dobbertin mappings, and in the case $n=7$ it is CCZ-inequivalent to all power mappings.
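As a reminder of the terminology (standard, not specific to this paper): a function $F:\F_{2^n} \to \F_{2^n}$ is called almost perfect nonlinear (APN) if for every nonzero $a$ and every $b$ the equation $F(x+a)+F(x)=b$ has at most two solutions $x \in \F_{2^n}$.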
Algebraic and Slide Attacks on KeeLoq
KeeLoq is a block cipher used in wireless devices that unlock doors in cars manufactured by Chrysler, Daewoo, Fiat, GM, Honda, Jaguar, Toyota, Volvo, Volkswagen, etc. It was designed in the 1980s by Willem Smit from South Africa and in 1995 was sold to Microchip Technology Inc. for more than 10 million USD. Though no attack on this cipher has been found thus far, its 64-bit key size makes it no longer secure. Hackers and car thieves exploit this to recover the key by brute force with FPGAs.
From our point of view however, this cipher is interesting for other reasons. Compared to typical block ciphers that have a few carefully designed rounds, this cipher has 528 extremely simple rounds with extremely few intermediate variables (one per round). This seems a perfect target to study algebraic attacks on block ciphers. The cipher also has a periodic structure with period of 64 rounds, and an unusually small block size of 32 bits.
We present several slide-algebraic attacks on KeeLoq, the best of which allows one to recover the full key for the full cipher within 2^48 CPU clocks.
Until now, algebraic attacks have not given interesting results
on block ciphers, and most researchers seriously doubted whether any block cipher would ever be broken by such attacks.
In this paper, however, we show that, for the first time in history,
a full-round real-life block cipher is broken by an algebraic attack.
Moreover, our attacks are easy to implement,
have been tested experimentally, and the full key
can be recovered in practice on a PC.
Accelerating SSL using the Vector processors in IBM's Cell Broadband Engine for Sony's Playstation 3
Recently the major performance chip manufacturers have turned to multi-core technology as a more cost-effective alternative to ever-increasing clock speeds. IBM has introduced the Cell Broadband Engine (Cell) as its next-generation CPU to feed the insatiable appetite that modern multimedia and number-crunching applications have for processing power.
The Cell is the technology at the heart of Sony's PlayStation 3. The Cell contains a number of specialist synergistic processor units (SPUs) optimised for multimedia processing, which offer a rich programming interface to applications that can make use of the vector processing capabilities. Multiprecision number manipulation for use in cryptography is one such application. This paper explores the implementation and performance gains when using these capabilities for SSL.
Security Against Covert Adversaries: Efficient Protocols for Realistic Adversaries
In the setting of secure multiparty computation, a set of mutually distrustful parties wish to securely compute some joint function of their private inputs. The computation should be carried out in a secure way, meaning that no coalition of corrupted parties should be able to learn more than specified or somehow cause the result to be ``incorrect''. Typically, corrupted parties are either assumed to be semi-honest (meaning that they follow the protocol specification) or malicious (meaning that they may deviate arbitrarily from the protocol). However, in many settings, the assumption regarding semi-honest behavior does not suffice and security in the presence of malicious adversaries is excessive and expensive to achieve.
In this paper, we introduce the notion of {\em covert adversaries}, which we believe faithfully models the adversarial behavior in many commercial, political, and social settings. Covert adversaries have the property that they may deviate arbitrarily from the protocol specification in an attempt to cheat, but do not wish to be ``caught'' doing so. We provide a definition of security for covert adversaries and show that it is possible to obtain highly efficient protocols that are secure against such adversaries. We stress that in our definition, we quantify over all (possibly malicious) adversaries and do not assume that the adversary behaves in any particular way. Rather, we guarantee that if an adversary deviates from the protocol in a way that would enable it to ``cheat'' (meaning that it can achieve something that is impossible in an ideal model where a trusted party is used to compute the function), then the honest parties are guaranteed to detect this cheating with good probability. We argue that this level of security is sufficient in many settings.
A Survey of Single Database PIR: Techniques and Applications
In this paper we survey the notion of Single-Database Private
Information Retrieval (PIR). The first Single-Database PIR was
constructed in 1997 by Kushilevitz and Ostrovsky and since then
Single-Database PIR has emerged as an important cryptographic
primitive. For example, Single-Database PIR turned out to be
intimately connected to collision-resistant hash functions,
oblivious transfer and public-key encryptions with additional
properties. In this survey, we give an overview of many of the
constructions for Single-Database PIR (including an abstract
construction based upon homomorphic encryption) and describe some of
the connections of PIR to other primitives.
The simplest method for constructing APN polynomials EA-inequivalent to power functions
The first APN polynomials EA-inequivalent to power functions have been constructed in [1,2] by applying CCZ-equivalence to the Gold APN functions. It is a natural question whether it is possible to construct APN polynomials EA-inequivalent to power functions by using only EA-equivalence and inverse transformation on a power APN function: this would be the simplest method to construct APN polynomials EA-inequivalent to power functions. In the present paper we prove that the answer to this question is positive. By this method we construct a class of APN polynomials EA-inequivalent to power functions. On the other hand it is shown that the APN polynomials from [1,2] cannot be obtained by the introduced method.
[1] L. Budaghyan, C. Carlet, A. Pott. New Classes of Almost Bent and Almost Perfect Nonlinear Functions. IEEE Trans. Inform. Theory, vol. 52, no. 3, pp. 1141-1152, March 2006.
[2] L. Budaghyan, C. Carlet, A. Pott. New Constructions of Almost Bent and Almost Perfect Nonlinear Functions. Proceedings of the Workshop on Coding and Cryptography 2005, pp. 306-315, 2005.
Constructing pairing-friendly genus 2 curves over prime fields with ordinary Jacobians
We provide the first explicit construction of genus 2 curves over finite fields whose Jacobians are ordinary, have large prime-order subgroups, and have small embedding degree. Our algorithm works for arbitrary embedding degrees $k$ and prime subgroup orders $r$. The resulting abelian surfaces are defined over prime fields $\F_q$ with $q \approx r^4$. We also provide an algorithm for constructing genus 2 curves over prime fields $\F_q$ with ordinary Jacobians $J$ having the property that $J[r] \subset J(\F_{q})$ or $J[r] \subset J(\F_{q^k})$ for any even $k$.
Enforcing Semantic Integrity on Untrusted Clients in Networked Virtual Environments
In recent years, large-scale simulations of realistic physical
environments which support the interaction of multiple participants
over the Internet have become increasingly available and
economically viable, most notably in the computer gaming
industry. Such systems, commonly called networked virtual
environments (NVEs), are usually based on a client-server
architecture where for performance reasons and bandwidth
restrictions, the simulation is partially delegated to the clients.
This inevitable architectural choice renders the simulation
vulnerable to attacks against the semantic integrity of the
simulation: malicious clients may attempt to compromise the physical
and logical rules governing the simulation, or to alter the causality
of events a posteriori.
In this paper, we initiate the systematic study of semantic
integrity in NVEs from a security point of view. We argue that
naive policies to enforce semantic integrity involve intolerable
network load, and are therefore not practically feasible. We present
a new provably secure semantic integrity protocol based on
cryptographic primitives which enables the server system to audit
the local computations of the clients on demand. Our approach
keeps network and CPU load low, incurs reasonable
engineering overhead, and maximally decouples the auditing process
from the soft real-time constraints of the simulation.
Cryptanalysis of the KeeLoq block cipher
KeeLoq is a block cipher used in numerous widespread passive entry and remote keyless entry systems as well as in various component identification applications. The KeeLoq algorithm has a 64-bit key and operates on 32-bit blocks. It is based on an NLFSR with a nonlinear feedback function of 5 variables.
In this paper a key recovery attack with complexity of about $2^{52}$ steps is proposed (one step is equivalent to a single KeeLoq encryption operation). In our attack we use the techniques of guess-and-determine, slide, and distinguishing attacks. Several real-world applications are vulnerable to the attack. To the best of our knowledge this is the first paper to describe and cryptanalyze the KeeLoq block cipher.
Cryptanalysis of Stream Ciphers Based on Arrays and Modular Addition
In modern cryptography, stream ciphers are most useful in applications where information
needs to be encrypted/decrypted at high speed (e.g. high resolution streaming video data)
or when low footprint (gates/memory) encryption is required. In the literature, there exist
plenty of stream ciphers whose internal states are based on arrays and that use
modular additions to generate output streams. The abundance of array-based stream ciphers
with modular additions can be attributed to the fact that, when implemented in software
skillfully, they are able to produce outputs at a very high speed. The main contribution of
this thesis is a unified analysis of stream ciphers based on arrays and modular addition.
During the process, we detect cryptographic weaknesses in the designs of 9 widely known
stream ciphers or pseudorandom bit generators (PRBGs).
At first, we show some theoretical results on solving an important class of equations known
as \emph{differential equations of addition} (DEA) that combine modular additions over two
different algebraic groups such as GF(2) and GF($2^{32}$). The results include (i) a proof
of the fact that the satisfiability of an arbitrary set of DEA is in the complexity
class P, and (ii) a method for deriving all the solutions of an arbitrary set of DEA. Next, we apply
these results to attack a practical stream cipher named Helix (designed by Ferguson
\emph{et al.}) with both chosen plaintexts and adaptive chosen plaintexts.
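To make the notion of a differential equation of addition concrete (an editorial example, not reproduced from the thesis): such an equation relates the XOR difference of two modular sums to the XOR differences of their inputs, for instance $\big((x+y) \oplus ((x\oplus a)+(y\oplus b))\big) = c$ over 32-bit words. The Python snippet below samples one such equation:

    import random

    MASK = 0xFFFFFFFF  # 32-bit words, i.e. addition modulo 2^32

    def dea_holds(x, y, a, b, c):
        """Does ((x + y) mod 2^32) XOR (((x ^ a) + (y ^ b)) mod 2^32) equal c?"""
        return (((x + y) & MASK) ^ (((x ^ a) + (y ^ b)) & MASK)) == c

    # This particular triple (MSB input differences, zero output difference) is
    # satisfied by every (x, y): most-significant-bit differences pass through
    # modular addition unchanged, one of the structural facts such analyses exploit.
    a, b, c = 0x80000000, 0x80000000, 0x00000000
    trials = 10000
    hits = sum(dea_holds(random.getrandbits(32), random.getrandbits(32), a, b, c)
               for _ in range(trials))
    print("fraction of random (x, y) satisfying the DEA:", hits / trials)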
In the second phase, the thesis closely scrutinizes a number of array-based stream ciphers
(or PRBGs) in order to estimate their resistance against distinguishing attacks. We
eventually discover, counter-intuitively, that the correlations between the array-indices
and their associated array-elements, which apparently seem to be useful from the point of
view of implementation purposes, can be exploited to mount distinguishing attacks on such
type of ciphers if adequate precautions are not taken. In support of our theoretical
findings, we point out distinguishing attacks on 8 practical array-based stream ciphers (or
PRBGs), namely RC4 (designed by Rivest), RC4A (designed by Paul and Preneel), Py, Py6
(designed by Biham and Seberry), IA, ISAAC (designed by Jenkins Jr.), GGHN, NGG (by Gong
\emph{et al.}); our attacks are based on the dependence of array-elements on array-indices.
In all the cases we work under the assumption that the key-setup algorithms of the ciphers
produce uniformly distributed internal states. We detect flaws in the mixing of bits in the
keystream generation algorithms. Our analysis can be explained as the extension,
development, adaptation and deeper characterization of the \emph{fortuitous states attacks}
on the RC4 cipher by Fluhrer and McGrew in 2000.
Compiler Assisted Elliptic Curve Cryptography
Although cryptographic implementation tasks are often undertaken by expert programmers, a plethora of performance- and security-driven options, as well as more mundane software engineering issues, still make this a challenge. In an attempt to transfer expert knowledge into automated tools, we investigate the use of domain-specific language and compilation techniques for cryptographic software, focusing on ECC in particular. Specifically, we describe experiments on the specialisation of finite field arithmetic from general purpose code, and on the description and optimisation of ECC point arithmetic using a cryptography-aware language and compiler. Our main results show that it is possible to allow description of ECC-based software in a manner close to the original mathematics, while allowing the automatic production of an executable whose performance is close to that of a hand-optimised implementation.
Forward-Secure Sequential Aggregate Authentication
Wireless sensors are employed in a wide range of applications. One common feature of most sensor settings is the need to communicate sensed data to some collection point or sink. This communication can be direct (to a mobile collector) or indirect -- via other sensors towards a remote sink. In either case, a sensor might not be able to communicate to a sink at will. Instead it collects data and waits (for a potentially long time) for a signal to upload accumulated data directly.
In a hostile setting, a sensor may be compromised and its post-compromise data can be manipulated. One important issue is Forward Security -- how to ensure that pre-compromise data cannot be manipulated? Since a typical sensor is limited in storage and communication facilities, another issue is how to minimize resource consumption due to accumulated data. It turns out that current techniques are insufficient to address both challenges. To this end, we explore the notion of Forward-Secure Sequential Aggregate (FssAgg) Authentication Schemes. We consider FssAgg authentication schemes in the contexts of both conventional and public key cryptography and construct a FssAgg MAC scheme and a FssAgg signature scheme, each suitable under different assumptions. This work represents the initial investigation of Forward-Secure Aggregation and, although the proposed schemes are not optimal, it opens a new direction for follow-on research.
Forward-secure RFID Authentication and Key Exchange
Security and privacy in RFID systems is an important and active research area. A number of challenges arise due to the extremely limited computational, storage and communication abilities of a typical RFID tag. This work describes two families of simple, inexpensive, and untraceable identification protocols for RFID tags. The proposed protocols involve minimal interaction between a tag and a reader and place low computational burden on the tag, requiring only a pseudo-random generator. They also impose low computational load on the back-end server.
The paper also describes a universally composable security model tuned for RFID applications. By making specific setup, communication, and concurrency assumptions that are realistic in the RFID application setting, we arrive at a model that guarantees strong security and availability properties, while still permitting the design of practical RFID protocols. We show that our protocols are provably secure within the new security model. The security properties supported include availability, authentication, forward-secure anonymity, key exchange, and modularity. The last attribute is most appropriate for ubiquitous applications.
Special block cipher family DN and new generation SNMAC-type hash function family HDN
A special block cipher is a new cryptographic primitive designed to be a building block of the new generation of hash functions SNMAC [Kl06]. Contrary to classical block ciphers, it is deliberately designed with a focus on those properties that are expected to be merely a “side effect” of usual cipher constructions. Its design anticipates that an attacker could exploit or know its key, or even tamper with the key at will. The design criteria of SNMAC hash functions are publicly known. In the limit, these functions approach a random oracle, they are computationally resistant against pre-image and collision attacks, and different special block cipher instances can be used in their design.
In this paper, we present special block cipher family Double Net DN(n,k)-rho with n-bit block, k-bit key and rho rounds, their building blocks construction principles and design criteria. Based on DN, we define hash functions family HDN(n,k)-rho with n-bit hash code working on blocks of k-n bits.
We introduce and propose to use DN(512,8192)-10 and HDN(512,8192)-10 as example instances. It turns out these are not just theoretical concepts, but practically employable functions with speeds only 2-3 times lower than SHA-512 and Whirlpool.
The basic idea behind the special block cipher DN is simple: contrary to the classical block cipher approach, the same attention is paid to key processing as to plaintext processing. One SP network ensures key mixing, while the second one mixes the plaintext with the key.
Once the special block cipher concept is examined and accepted in hash functions, it can be used for its original purpose as well: data encryption. We suggest a transition from classical block ciphers to more secure special block ciphers in the future. Their advantage is resistance to various attacks on the secret key, attacks which have recently started to emerge in classical block cipher cryptanalysis. Among others, these include side-channel attacks, related-key attacks and rectangle attacks (see e.g. [Bi93], [Bi03], [Ki04], [Ho05], [Ki05], [Bi05], and [Bi06]). With the expansion of cryptographic instruments and cryptanalytic methods, these attacks will appear more and more frequently. Their common trait is that they attempt to exploit the usual assumption of the attacker's limited power over, or knowledge of, the secret key. The defence against these attacks is illustrated by the evolution of the functions processing the secret key, starting with the simple copy-type functions used in DES and TripleDES and moving to the weak non-linear functions in AES. We believe that this trend will continue towards strong non-linear functions (similar to the ones used in DN). The employment of these stronger functions in encryption might not seem a necessity at present, but it probably will be in the future. In hash functions, it is a necessity already today.
Security Arguments for a Class of ID-based Signatures
Provable security based on complexity theory provides an efficient way of establishing convincing evidence of security. In this paper, we present a definition of generic ID-based signature schemes (GIBSS) by extending the definition of generic signature schemes, and prove the Forking Lemma for GIBSS; that is, we provide a Forking Lemma for ID-based signature schemes. This theoretical result can be viewed as an extension of the Forking Lemma due to Pointcheval and Stern to ID-based signature schemes, and can help to understand and simplify the security proofs. We then propose a new and efficient ID-based signature scheme built upon bilinear maps, and prove its security under the k-CAA computational assumption in the random oracle model.
A Security Analysis of the NIST SP 800-90 Elliptic Curve Random Number Generator
An elliptic curve random number generator (ECRNG) has been approved
in a NIST standard and proposed for ANSI and SECG draft standards.
This paper proves that, if three
conjectures are true, then the ECRNG is secure. The three
conjectures are the hardness of the elliptic curve decisional
Diffie-Hellman problem and the hardness of two newer problems, the
x-logarithm problem and the truncated point problem.
The x-logarithm problem is shown to be hard if the decisional
Diffie-Hellman problem is hard, although the reduction is not tight.
The truncated point problem is shown to be solvable when the minimum
number of bits allowed in the NIST standard is truncated, thereby
making the generator unsuitable for applications such as stream
ciphers. Nevertheless, it is argued that for nonce and key
generation this distinguishability is harmless.
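For orientation, the generator analysed here has the following informal shape (details such as truncation lengths are specified in SP 800-90 itself, and the indexing is left deliberately loose): with fixed curve points $P$ and $Q$ and a secret state $s$, the state is updated via $s \mapsto x(sP)$ and each output block is $\mathrm{trunc}(x(sQ))$, where $x(\cdot)$ denotes the $x$-coordinate and $\mathrm{trunc}$ discards some leading bits; the truncated point problem above concerns precisely this final truncation step.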
New Constructions of Fuzzy Identity-Based Encryption
In this paper we construct two new fuzzy identity-based encryption (IBE) schemes in the random oracle model. Not only do our schemes provide public parameters whose size is independent of the number of attributes in each identity (used as public key) but they also
have useful structures which result in more efficient key extraction and/or encryption than the random oracle version of Sahai and Waters' fuzzy IBE scheme, considered recently by Pirretti et al. We prove that the confidentiality of the proposed schemes relies on the Decisional Bilinear Diffie-Hellman problem.
Direct Reduction of String (1,2)-OT to Rabin's OT
It is known that string $(1,2)$-OT and Rabin's OT are equivalent. However,
two steps are required to construct a string $(1,2)$-OT from Rabin's OT.
The first step is a construction of a bit $(1,2)$-OT from Rabin's OT, and
the second step is a construction of a string $(1,2)$-OT from the bit $(1,2)$-OT. No direct reduction is known. In this paper, we show a direct reduction of string $(1,2)$-OT to Rabin's OT by using a deterministic randomness extractor. Our reduction is much more efficient than the previous two-step reduction.
A Coprocessor for the Final Exponentiation of the $\eta_T$ Pairing in Characteristic Three
Since the introduction of pairings over (hyper)elliptic curves in
constructive cryptographic applications, an ever increasing number of
protocols based on pairings have appeared in the literature. Software
implementations being rather slow, the study of hardware architectures
became an active research area. Beuchat et al., for instance, proposed
a coprocessor which computes the characteristic-three
$\eta_T$ pairing, from which the Tate pairing can easily be derived,
in $33$\,$\mu$s on a Cyclone II FPGA. However, a final exponentiation
is required to ensure a unique output value and the authors proposed
to supplement their $\eta_T$ pairing accelerator with a coprocessor
for exponentiation. Thus, the challenge consists in designing the
smallest possible piece of hardware able to perform this task in less
than $33$\,$\mu$s on a Cyclone~II device. In this paper, we propose a
novel arithmetic operator implementing addition, cubing, and
multiplication over $\mathbb{F}_{3^{97}}$ and show that a coprocessor
based on a single such operator meets this timing constraint.
Design and Primitive Specification for Shannon
Shannon is a synchronous stream cipher with message authentication functionality, designed according to the ECRYPT NoE call for stream cipher primitives, profile 1A (but well after the call).
Shannon is named in memory of Claude E. Shannon [20] of Bell Labs and MIT, founder of Information Theory. Shannon is an entirely new design, influenced by members of the SOBER family of stream ciphers, Helix/Phelix, Trivium, Scream, and SHA-256. It consists of a single
32-bit wide, 16-element nonlinear feedback shift register and an extra word, which is supplemented for message authentication with 32 parallel CRC-16 registers.
Shannon is free to use for any purpose, and reference source code can be found at http://www.qualcomm.com.au/Shannon.html .
Reflection Attacks on Product Ciphers
In this paper we describe a novel attack method on product ciphers, the {\em reflection attack}. The attack method exploits certain similarities among round functions which have not been utilized in previous self similarity attacks. We give practical examples illustrating the power of the reflection attack on several ciphers such as GOST, DEAL and some variants of DES and Magenta. Many interesting and exceptional properties of the attack are also presented in these examples. In addition, we discuss new design criteria that make product ciphers resistant to self similarity attacks and introduce a definition of similarity degree.
Authorship Proof for Textual Document
In this paper, we investigate the problem of how to prove the authorship of textual documents. First we define the basic functionalities of an authorship proof scheme (APS) based on natural language watermarking, and identify two essential security requirements for an APS to be secure against various attacks. We review existing natural language watermarking schemes, and we propose two new schemes with improved security.
Symmetric Tardos fingerprinting codes for arbitrary alphabet sizes
Fingerprinting provides a means of tracing unauthorized redistribution
of digital data by individually marking each authorized copy with a personalized serial
number. In order to prevent a group of users from collectively escaping identification,
collusion-secure fingerprinting codes have been proposed. In this
paper, we introduce a new construction of a collusion-secure fingerprinting
code which is similar to a recent construction by Tardos but achieves
shorter code lengths and allows for codes over arbitrary alphabets.
For binary alphabets, $n$ users and a false accusation probability of $\eta$,
a code length of $m\approx \pi^2 c_0^2\ln(n/\eta)$ is provably sufficient to
withstand collusion attacks of at most $c_0$ colluders. This improves Tardos'
construction by a factor of $10$. Furthermore, invoking the Central Limit Theorem
we show that even a code length of $m\approx \frac{1}{2}\pi^2 c_0^2\ln(n/\eta)$ is
sufficient in most cases. For $q$-ary alphabets, assuming the
restricted digit model, the code size can be further reduced.
Numerical results show that a reduction of 35\% is achievable for $q=3$ and 80\% for~$q=10$.
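Plugging illustrative parameters into the code-length formula quoted above (plain evaluation of that formula, not figures reported in the paper):

    import math

    def code_length(c0, n, eta, constant=math.pi ** 2):
        """m ~ constant * c0^2 * ln(n / eta); constant = pi^2 for the provable
        bound, pi^2 / 2 under the Central Limit Theorem argument."""
        return constant * c0 ** 2 * math.log(n / eta)

    # Example: 20 colluders, one million users, false accusation probability 10^-3
    c0, n, eta = 20, 10 ** 6, 1e-3
    print(round(code_length(c0, n, eta)))                    # roughly 82,000 symbols
    print(round(code_length(c0, n, eta, math.pi ** 2 / 2)))  # roughly 41,000 with the CLT bound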
Efficient Quintuple Formulas for Elliptic Curves and Efficient Scalar Multiplication Using Multibase Number Representation
In the current work we propose two efficient formulas for computing the $5$-fold ($5P$) of an elliptic curve point $P$. One formula is for curves over finite fields of even characteristic and the other is for curves over prime fields. Double-base number systems (DBNS) have been gainfully exploited to compute scalar multiplication efficiently in ECC. Using the proposed point quintupling formulas one can use 2,5 and 3,5 (besides 2,3) as bases of the double-base number system. In the current work we propose a scalar multiplication algorithm which expands the scalar using the three bases 2, 3 and 5 and computes the scalar multiplication very efficiently. The proposed scheme is faster than all sequential scalar multiplication algorithms reported in the literature.
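The following small Python sketch illustrates the general multibase idea (an editorial toy, not the recoding algorithm proposed in the paper): a scalar is greedily written as a sum of $\{2,3,5\}$-smooth terms, and each term $2^a 3^b 5^c$ then costs $a$ doublings, $b$ triplings and $c$ quintuplings plus one addition on the curve.

    def largest_235_leq(k):
        """Largest integer of the form 2^a * 3^b * 5^c that is <= k."""
        best = 1
        a = 0
        while 2 ** a <= k:
            b = 0
            while 2 ** a * 3 ** b <= k:
                c = 0
                while 2 ** a * 3 ** b * 5 ** c <= k:
                    best = max(best, 2 ** a * 3 ** b * 5 ** c)
                    c += 1
                b += 1
            a += 1
        return best

    def greedy_multibase(k):
        """Greedy expansion of k as a sum of {2,3,5}-smooth terms."""
        terms = []
        while k > 0:
            t = largest_235_leq(k)
            terms.append(t)
            k -= t
        return terms

    print(greedy_multibase(2007))   # [2000, 6, 1], i.e. 2^4*5^3 + 2*3 + 1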
New Branch Prediction Vulnerabilities in OpenSSL and Necessary Software Countermeasures
Software based side-channel attacks allow an unprivileged spy
process to extract secret information from a victim
(cryptosystem) process by exploiting some indirect leakage of
``side-channel'' information. It has been realized that some
components of modern computer microarchitectures leak certain
side-channel information and can create unforeseen security
risks. An example of such MicroArchitectural Side-Channel
Analysis is the Cache Attack --- a group of attacks that exploit
information leaks from cache latencies.
Public awareness of Cache Attack vulnerabilities led the software
developers of OpenSSL (version 0.9.8a and subsequent versions) to
incorporate countermeasures to prevent these attacks.
In this paper, we present a new and yet unforeseen side channel
attack that is enabled by the recently published Simple Branch
Prediction Analysis (SBPA) which is another type of
MicroArchitectural Analysis. We
show that modular inversion --- a critical primitive in public
key cryptography --- is a natural target of SBPA attacks because
it typically uses the Binary Extended Euclidean algorithm whose
nature is an input-centric sequence of conditional branches. Our
results show that SBPA can be used to extract secret parameters
during the execution of the Binary Extended Euclidean algorithm.
This poses a new potential risk to crypto-applications such as
OpenSSL, which already employs Cache Attack countermeasures.
Thus, it is necessary to develop new software mitigation
techniques for BPA and incorporate them with cache analysis
countermeasures in security applications.
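The branch structure at issue is visible in a textbook formulation of the binary extended Euclidean algorithm. The Python sketch below is editorial (it follows the classical description, not OpenSSL's code); the comments mark the operand-dependent branches whose taken/not-taken pattern an SBPA-style spy process can observe. Constant-flow alternatives, such as replacing the branches by arithmetic selects or using exponentiation-based inversion, remove this leak, which is the general direction of the mitigations discussed here.

    def binary_modinv(a, m):
        """Inverse of a modulo an odd m with gcd(a, m) = 1, via the classical
        binary extended Euclidean algorithm. Every `while`/`if` below branches
        on operand bits; that branch pattern is the side channel SBPA exploits."""
        u, v = a % m, m
        x1, x2 = 1, 0
        while u != 1 and v != 1:
            while u % 2 == 0:                       # operand-dependent branch
                u //= 2
                x1 = x1 // 2 if x1 % 2 == 0 else (x1 + m) // 2
            while v % 2 == 0:                       # operand-dependent branch
                v //= 2
                x2 = x2 // 2 if x2 % 2 == 0 else (x2 + m) // 2
            if u >= v:                              # operand-dependent comparison
                u, x1 = u - v, x1 - x2
            else:
                v, x2 = v - u, x2 - x1
        return x1 % m if u == 1 else x2 % m

    assert (binary_modinv(10, 13) * 10) % 13 == 1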
To mitigate this new risk in full generality, we apply a
security-aware algorithm design methodology and propose some
changes to the CRT-RSA algorithm flow. These changes either avoid
some of the steps that require modular inversion, or remove the
critical information leak from this procedure.
In addition, we also show by example that, independently of the
required changes in the algorithms, careful software analysis is
also required in order to assure that the software implementation
does not inadvertently introduce branches that may expose the
application to SBPA attacks.
Together, these measures offer several simple ways of modifying OpenSSL in order to
mitigate Branch Prediction Attacks.
Multiple Modular Additions and Crossword Puzzle Attack on NLSv2
NLS is a stream cipher which was submitted to the eSTREAM project.
A linear distinguishing attack against NLS was presented by Cho and Pieprzyk,
which was called Crossword Puzzle (CP) attack.
NLSv2 is a tweaked version of NLS which aims mainly at avoiding the CP attack.
In this paper, a new distinguishing attack against NLSv2 is presented.
The attack exploits high correlation amongst neighboring bits of the cipher.
The paper first shows that modular addition preserves pairwise correlations,
as demonstrated by the existence of linear approximations with large biases.
Next it shows how to combine these results with the existence of high correlation
between bits 29 and 30 of the S-box to obtain a distinguisher whose bias is around $2^{-37}$.
Consequently, we claim that NLSv2 is distinguishable from a random process after
observing around $2^{74}$ keystream words.
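The relation between the last two figures is the usual one for linear distinguishers (standard reasoning, not an additional claim of the paper): distinguishing a distribution with bias $\varepsilon$ from uniform requires on the order of $\varepsilon^{-2}$ samples, so a bias of roughly $2^{-37}$ translates into roughly $(2^{-37})^{-2} = 2^{74}$ keystream words.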
Best Quadratic Approximations of Cubic Boolean Functions
The problem of computing best low-order approximations of Boolean functions is treated in this paper. We focus on the case of best quadratic approximations of a wide class of cubic functions of an arbitrary number of variables, and provide formulas for their efficient calculation. Our methodology is developed upon Shannon's expansion formula and properties of best affine approximations of quadratic functions, for which we prove formulas for their direct computation, without use of the Walsh-Hadamard transform. The notion of nonquadricity is introduced, as the minimum distance from all quadratic functions, and cubic functions that achieve the maximum possible nonquadricity are determined, leading to a lower bound for the covering radius of the second-order Reed-Muller code $\mathcal{R}(2,n)$ in $\mathcal{R}(3,n)$.
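For reference, the Shannon expansion used there is the standard identity
$$f(x_1, \ldots, x_n) = x_n\, f(x_1, \ldots, x_{n-1}, 1) \oplus (x_n \oplus 1)\, f(x_1, \ldots, x_{n-1}, 0),$$
which reduces questions about a cubic $f$ to questions about its two subfunctions in $n-1$ variables.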
Chosen-Ciphertext Secure Key-Encapsulation Based on Gap Hashed Diffie-Hellman
We propose a practical key encapsulation mechanism with a simple and intuitive design concept. Security against chosen-ciphertext attacks can be proved in the standard model under a new assumption, the Gap Hashed Diffie-Hellman (GHDH) assumption. The security reduction is tight and simple.
Secure key encapsulation, combined with an appropriately secure symmetric encryption scheme, yields a hybrid public-key encryption scheme which is secure against chosen-ciphertext attacks. The implied encryption scheme is very efficient: compared to the previously most efficient scheme by Kurosawa and Desmedt [Crypto 2004] it has ciphertexts that are 128 bits shorter, public/secret keys that are 25% to 50% shorter, and it is slightly more efficient in terms of encryption/decryption speed. Furthermore, our scheme enjoys (the option of) public verifiability of the ciphertexts and it inherits all practical advantages of secure hybrid encryption.
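To illustrate the general shape of a Diffie-Hellman-based key encapsulation mechanism (an editorial toy over a tiny group; it omits, among other things, the ciphertext consistency check that the actual construction needs for chosen-ciphertext security):

    import hashlib
    import secrets

    # Toy group: the multiplicative group mod 23 with generator 5
    # (illustration only; real instantiations use large prime-order groups).
    P, G = 23, 5

    def keygen():
        x = secrets.randbelow(P - 2) + 1                    # secret exponent
        return x, pow(G, x, P)                              # (sk, pk = g^x)

    def encapsulate(pk):
        r = secrets.randbelow(P - 2) + 1
        c = pow(G, r, P)                                    # ciphertext c = g^r
        k = hashlib.sha256(str(pow(pk, r, P)).encode()).digest()     # K = H(pk^r)
        return c, k

    def decapsulate(sk, c):
        return hashlib.sha256(str(pow(c, sk, P)).encode()).digest()  # K = H(c^x)

    sk, pk = keygen()
    c, k_sender = encapsulate(pk)
    assert k_sender == decapsulate(sk, c)   # both sides derive the same symmetric key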