Deterministic Authenticated-Encryption: A Provable-Security Treatment of the Key-Wrap Problem
Standards bodies have been addressing the key-wrap problem, a
cryptographic goal that has never received a provable-security
treatment. In response, we provide one, giving definitions,
constructions, and proofs. We suggest that key-wrap’s goal is
security in the sense of deterministic authenticated-encryption
(DAE), a notion that we put forward. We also provide an alternative
notion, a pseudorandom injection (PRI), which we prove to be
equivalent. We provide a DAE construction, SIV, analyze its concrete
security, develop a blockcipher-based instantiation of it, and
suggest that the method makes a desirable alternative to the
schemes of the X9.102 draft standard. The construction incorporates
a method to turn a PRF that operates on a string into an equally
efficient PRF that operates on a vector of strings, a problem of
independent interest. Finally, we consider IV-based authenticated-
encryption (AE) schemes that are maximally forgiving of repeated
IVs, a goal we formalize as misuse-resistant AE. We show that a DAE
scheme with a vector-valued header, such as SIV, directly realizes
this goal.
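As background for readers unfamiliar with the SIV composition, the following is a minimal Python sketch of the deterministic-AE pattern described above: a PRF over the vector of header and message produces a synthetic IV, which then drives an IV-based encryption of the message. HMAC-SHA256 and a hash-counter keystream are stand-ins chosen only for self-containment; the actual SIV construction uses a CMAC-based vector PRF (S2V) and AES in CTR mode, and the names `k1`/`k2`/`prf`/`keystream` are illustrative.

```python
import hmac, hashlib

def prf(key, *parts):
    # Stand-in PRF over a vector of strings; the real SIV uses the CMAC-based S2V construction.
    mac = hmac.new(key, digestmod=hashlib.sha256)
    for p in parts:
        mac.update(len(p).to_bytes(8, "big") + p)   # length-prefix keeps the vector encoding injective
    return mac.digest()[:16]

def keystream(key, iv, n):
    # Toy IV-based encryption layer: hash-counter keystream; the real SIV uses AES in CTR mode.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + iv + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def siv_encrypt(k1, k2, header, plaintext):
    iv = prf(k1, header, plaintext)                 # synthetic IV = PRF(header, message)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(k2, iv, len(plaintext))))
    return iv + ct

def siv_decrypt(k1, k2, header, blob):
    iv, ct = blob[:16], blob[16:]
    pt = bytes(a ^ b for a, b in zip(ct, keystream(k2, iv, len(ct))))
    if not hmac.compare_digest(iv, prf(k1, header, pt)):   # authenticity: recompute and compare the IV
        raise ValueError("authentication failed")
    return pt

k1, k2 = b"prf-key", b"enc-key"
blob = siv_encrypt(k1, k2, b"header", b"attack at dawn")
assert siv_decrypt(k1, k2, b"header", blob) == b"attack at dawn"
```

Decryption recomputes the PRF over the recovered plaintext and rejects on mismatch, which is where the authenticity and the tolerance of repeated IVs come from.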
Multi-Dimensional Montgomery Ladders for Elliptic Curves
Montgomery's ladder algorithm for elliptic curve scalar multiplication uses only the x-coordinates of points. Avoiding calculation of the y-coordinates saves time for certain curves. Montgomery introduced his method to accelerate Lenstra's elliptic curve method for integer factoring. Bernstein extended Montgomery's ladder algorithm by computing integer combinations of two points, thus accelerating signature verification over certain curves. This paper modifies and extends Bernstein's algorithm to integer combinations of two or more points.
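For readers who have not seen it, below is a minimal sketch of the one-dimensional Montgomery ladder's control flow, written over an abstract group supplied through `add`/`dbl`/`identity` callbacks. The x-only curve arithmetic that makes the ladder attractive, and Bernstein's and this paper's multi-dimensional extensions, are not reproduced; the callback names are illustrative.

```python
def montgomery_ladder(k, P, add, dbl, identity):
    # Compute k*P processing the bits of k from most to least significant.
    # Real x-only ladders replace add/dbl with Montgomery's x-coordinate formulas,
    # relying on the invariant R1 - R0 == P maintained in every iteration.
    R0, R1 = identity, P
    for i in reversed(range(k.bit_length())):
        if (k >> i) & 1:
            R0, R1 = add(R0, R1), dbl(R1)
        else:
            R0, R1 = dbl(R0), add(R0, R1)
    return R0

# Toy check in the additive group of integers modulo 1009.
n = 1009
assert montgomery_ladder(123, 7, lambda a, b: (a + b) % n, lambda a: 2 * a % n, 0) == 123 * 7 % n
```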
Cryptographically Sound Security Proofs for Basic and Public-Key Kerberos
We present a computational analysis of basic Kerberos with and without its public-key extension PKINIT in which we consider authentication and key secrecy properties. Our proofs rely on the Dolev--Yao-style model of Backes, Pfitzmann, and Waidner, which allows for mapping results obtained symbolically within this model to cryptographically sound proofs if certain assumptions are met. This work was the first verification at the computational level of such a complex fragment of an industrial protocol. By considering a recently fixed version of PKINIT, we extend symbolic correctness results we previously attained in the Dolev--Yao model to cryptographically sound results in the computational model.
Computationally Sound Symbolic Secrecy in the Presence of Hash Functions
The standard symbolic, deducibility-based notions of secrecy are in
general insufficient from a cryptographic point of view, especially in
the presence of hash functions. In this paper we devise and motivate a
more appropriate secrecy criterion which exactly captures a standard
cryptographic notion of secrecy for protocols involving public-key
encryption and hash functions: protocols that satisfy it are
computationally secure while any violation of our criterion directly
leads to an attack. Furthermore, we prove that our criterion is
decidable via an NP decision procedure. Our results hold for standard
security notions for encryption and hash functions modeled as random
oracles.
Statistical Analysis of the MARS Block Cipher
This work contains a statistical investigation of the MARS block cipher, one of the AES finalists.
It is shown that 8-round MARS ciphertext can be distinguished from a uniform
distribution with the help of the ``Book Stack'' test, provided that $2^{18}$ blocks of plaintext
and $2^{20}$ bytes of memory are available. The previously published attacks on this
cipher were only theoretical, with unrealistic resource requirements.
Fast and Secure Elliptic Curve Scalar Multiplication Over Prime Fields Using Special Addition Chains
In this paper, we propose a new fast and secure point multiplication algorithm. It is based on a particular kind of addition chain involving only additions (no doublings), providing natural protection against side-channel attacks. Moreover, we propose new addition formulae that take into account the specific structure of those chains, making point multiplication very efficient.
Cryptanalysis of an Image Scrambling Scheme without Bandwidth Expansion
Recently, a novel image scrambling (i.e., encryption) scheme without bandwidth expansion was proposed based on two-dimensional (2-D) discrete prolate spheroidal sequences (DPSS). This paper gives a comprehensive cryptanalysis of the image scrambling scheme and concludes that it is not sufficiently secure against various cryptographic attacks, including ciphertext-only attack, known/chosen-plaintext attack and chosen-ciphertext attack. The cryptanalytic results suggest that the image scrambling scheme can only be used to realize perceptual encryption, rather than to provide content protection for digital images.
Password-Authenticated Group Key Establishment from Smooth Projective Hash Functions
Password-authenticated key exchange (PAKE) protocols allow users sharing a password to agree upon a high-entropy secret. In this paper, a provably secure password-authenticated protocol for group key establishment in the common reference string (CRS) model is presented. Our protocol is quite efficient, as regardless of the number of involved participants it can be implemented with only three communication rounds. We use a (by now classical) trick of Burmester and Desmedt for deriving group key exchange protocols using a two-party construction as the main building block. In our case, the two-party PAKE used as a base is a one-round protocol by Katz and Vaikuntanathan, which in turn builds upon a special kind of smooth projective hash functions (KV-SPHFs). As evidenced by Benhamouda et al., KV-SPHFs can be instantiated on Cramer-Shoup ciphertexts, thus yielding very efficient (and pairing-free) constructions.
Luby-Rackoff Ciphers from Weak Round Functions?
The Feistel-network is a popular structure underlying many block-ciphers where the cipher is constructed from many simpler rounds, each defined by some function which is derived from the secret key.
Luby and Rackoff showed that the three-round Feistel-network -- each round instantiated with a pseudorandom function secure against adaptive chosen plaintext attacks (CPA) -- is a CPA secure pseudorandom permutation, thus giving some confidence in the soundness of using a Feistel-network to design block-ciphers.
But the round functions used in actual block-ciphers are -- for efficiency reasons -- far from being pseudorandom. We investigate the security of the Feistel-network against CPA distinguishers when the only security guarantee we have for the round functions is that they are secure against non-adaptive chosen plaintext attacks (NCPA). We show that in the information-theoretic setting, four rounds with NCPA secure round functions are sufficient (and necessary) to get a CPA secure permutation. Unfortunately, this result does not translate into the more interesting pseudorandom setting. In fact, under the so-called Inverse Decisional Diffie-Hellman assumption the Feistel-network with four rounds, each instantiated with a NCPA secure pseudorandom function, is in general not a CPA secure pseudorandom permutation.
We also consider other relaxations of the Luby-Rackoff construction and prove their (in)security against different classes of attacks.
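As a concrete reminder of the object being analyzed, here is a small Python sketch of a Feistel network with keyed round functions; truncated HMAC-SHA256 stands in for the pseudorandom round function, and four rounds are used to match the setting discussed above. This illustrates only the structure, not the paper's distinguishers or proofs; all names and key values are illustrative.

```python
import hmac, hashlib

def round_fn(key, x):
    # Stand-in keyed round function F_key: 8-byte input -> 8-byte output.
    return hmac.new(key, x, hashlib.sha256).digest()[:8]

def feistel_encrypt(round_keys, block):
    # Feistel network on a 16-byte block; one round per key: (L, R) -> (R, L xor F(R)).
    L, R = block[:8], block[8:]
    for k in round_keys:
        L, R = R, bytes(a ^ b for a, b in zip(L, round_fn(k, R)))
    return L + R

def feistel_decrypt(round_keys, block):
    # Invert the rounds in reverse order: (L', R') -> (R' xor F(L'), L').
    L, R = block[:8], block[8:]
    for k in reversed(round_keys):
        L, R = bytes(a ^ b for a, b in zip(R, round_fn(k, L))), L
    return L + R

keys = [b"k1", b"k2", b"k3", b"k4"]          # four rounds, as in the setting above
c = feistel_encrypt(keys, b"0123456789abcdef")
assert feistel_decrypt(keys, c) == b"0123456789abcdef"
```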
Reverse SSL: Improved Server Performance and DoS Resistance for SSL Handshakes
The common occurrence of server overload and the threat of denial-of-service (DoS) attacks make it highly desirable to improve the performance and DoS resistance of SSL handshakes. In this paper, we tackle these two related problems by proposing reverse SSL, an extension in which the server is relieved from the heavy public key decryption operation and is authenticated by means of a digital signature instead. On the server side, reverse SSL employs online/offline signatures to minimize the online computation required to generate the signature, and on the client side, RSA key generation computation can be used as a client puzzle when clients do not have a public key certificate. The preliminary performance results show that reverse SSL is a promising technique for improving the performance and DoS resistance of SSL servers.
A Survey of Certificateless Encryption Schemes and Security Models
This paper surveys the literature on certificateless encryption schemes. In particular, we examine the (large number of) security models that have been proposed to prove the security of certificateless encryption schemes and propose a new nomenclature for these models. This allows us to "rank" the notions of security for a certificateless encryption scheme against an outside attacker and a passive key generation centre, and we suggest which of these notions should be regarded as the "correct" model for a secure certificateless
encryption scheme.
We also examine the security models that aim to provide security against an actively malicious key generation centre and against an outside attacker who attempts to deceive a legitimate sender into using an incorrect public key (with the intention of denying the legitimate receiver the ability to decrypt the ciphertext). We note that the existing malicious key generation centre model fails to capture realistic attacks that a malicious key generation centre might make and propose a new model.
Lastly, we survey the existing certificateless encryption schemes and compare their security proofs. We show that few schemes provide the correct notion of security without appealing to the random oracle model. The few schemes that do provide sufficient security guarantees are comparatively inefficient. Hence, we conclude that more research is needed before certificateless encryption schemes can be thought to
be a practical technology.
Searchable Symmetric Encryption: Improved Definitions and Efficient Constructions
Searchable symmetric encryption (SSE) allows a party to outsource the storage of his data to another party in a private manner, while maintaining the ability to selectively search over it. This problem has been the focus of active research and several security definitions and constructions have been proposed. In this paper we begin by reviewing existing notions of security and propose new and stronger security definitions. We then present two constructions that we show secure under our new definitions. Interestingly, in addition to satisfying stronger security guarantees, our constructions are more efficient than all previous constructions.
Further, prior work on SSE only considered the setting where only the owner of the data is capable of submitting search queries. We consider the natural extension where an arbitrary group of parties other than the owner can submit search queries. We formally define SSE in this multi-user setting, and present an efficient construction.
Minimal Weight and Colexicographically Minimal Integer Representations
Redundant number systems (e.g. signed binary representations) have
been utilized to efficiently implement algebraic operations required
by public-key cryptosystems, especially those based on elliptic
curves. Several families of integer representations have been
proposed that have a minimal number of nonzero digits (so-called
\emph{minimal weight} representations). We observe that many of the
constructions for minimal weight representations actually work by
building representations which are minimal in another sense. For a
given set of digits, these constructions build
\emph{colexicographically minimal} representations; that is, they
build representations where each nonzero digit is positioned as far
left (toward the most significant digit) as possible. We utilize
this strategy in a new algorithm which constructs a very general
family of minimal weight dimension-$d$ \emph{joint} representations
for any $d \geq 1$. The digits we use are from the set $\{a \in
\ZZ: \ell \leq a \leq u\}$ where $\ell \leq 0$ and $u \geq 1$ are
integers. By selecting particular values of $\ell$ and $u$, it is
easily seen that our algorithm generalizes many of the minimal
weight representations previously described in the literature. From
our algorithm, we obtain a syntactical description of a particular
family of dimension-$d$ joint representations; any representation
which obeys this syntax must be both colexicographically minimal and
have minimal weight; moreover, every vector of integers has exactly
one representation that satisfies this syntax. We utilize this
syntax in a combinatorial analysis of the weight of the
representations.
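A familiar special case may help orient the reader: the non-adjacent form (NAF) is the classical minimal-weight signed-binary representation, corresponding to the digit set with $\ell = -1$, $u = 1$ and dimension $d = 1$. The sketch below computes it; the paper's algorithm handles general digit ranges and joint $d$-dimensional representations, which this toy does not.

```python
def naf(n):
    # Non-adjacent form of n: digits in {-1, 0, 1}, least significant first.
    # Classical minimal-weight signed-binary representation (the l = -1, u = 1 case).
    digits = []
    while n:
        if n & 1:
            d = 2 - (n & 3)      # choose +1 or -1 so that the next bit becomes 0
            n -= d
        else:
            d = 0
        digits.append(d)
        n >>= 1
    return digits

d = naf(239)
print(d)                                            # [-1, 0, 0, 0, -1, 0, 0, 0, 1]
assert sum(di * 2**i for i, di in enumerate(d)) == 239
```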
Private Information Retrieval Using Trusted Hardware
Many theoretical PIR (Private Information Retrieval) constructions have been proposed in the past
years. Though information theoretically secure, most of them are impractical to deploy due to the
prohibitively high communication and computation complexity. The recent trend in outsourcing
databases fuels the research on practical PIR schemes. In this paper, we propose a new PIR system
by making use of trusted hardware. Our system is proven to be information theoretically secure.
Furthermore, we derive the computational complexity lower bound for hardware-based PIR schemes and
show that our construction meets the lower bounds for both communication and computation costs.
The Kurosawa-Desmedt Key Encapsulation is not Chosen-Ciphertext Secure
At CRYPTO 2004, Kurosawa and Desmedt presented a hybrid public-key encryption scheme that is chosen-ciphertext secure in the standard model. Until now it was unknown if the key-encapsulation part of the Kurosawa-Desmedt scheme by itself is still chosen-ciphertext secure or not. In this short note we answer this question to the negative, namely we present a simple chosen-ciphertext attack on the Kurosawa-Desmedt key encapsulation mechanism.
On the Provable Security of an Efficient RSA-Based Pseudorandom Generator
Pseudorandom Generators (PRGs) based on the RSA inversion
(one-wayness) problem have been extensively studied in the
literature over the last 25 years. These generators have the
attractive feature of provable pseudorandomness security assuming
the hardness of the RSA inversion problem. However, despite
extensive study, the most efficient provably secure RSA-based
generators output asymptotically only at most $O(\log n)$ bits per
multiply modulo an RSA modulus of bitlength $n$, and hence are too
slow to be used in many practical applications.
To bring theory closer to practice, we present a simple
modification to the proof of security by Fischlin and Schnorr of
an RSA-based PRG, which shows that one can obtain an RSA-based PRG
which outputs $\Omega(n)$ bits per multiply and has provable
pseudorandomness security assuming the hardness of a well-studied
variant of the RSA inversion problem, where a constant fraction of
the plaintext bits are given. Our result gives a positive answer to an open question posed by Gennaro (J. of Cryptology, 2005) regarding finding a PRG beating the rate $O(\log n)$ bits per multiply at the cost of a reasonable assumption on RSA inversion.
Last updated: 2007-11-02
ID-Based Ring Signature Scheme secure in the Standard Model
The only known construction of ID-based ring signature schemes which
may be secure in the standard model is to attach certificates to
non-ID-based ring signatures. This method leads to schemes that are
somewhat inefficient and it is an open problem to find more
efficient and direct constructions. In this paper, we propose two
such constructions. Our first scheme, with signature size linear in
the cardinality of the ring, is secure in the standard model under
the computational Diffie-Hellman assumption. The second scheme,
achieving constant signature size, is secure in a weaker attack
model (the selective ID and weak chosen message model), under the
Diffie-Hellman Inversion assumption.
Towards Minimizing Memory Requirement for Implementation of Hyperelliptic Curve Cryptosystems
Elliptic (ECC) and hyperelliptic curve cryptosystems (HECC) have emerged as cryptosystems of choice for small handheld and mobile devices. A lot of research has been devoted to their secure and efficient implementation on these devices. As such devices come with a very limited amount of resources, efficient memory management is an important issue in all such implementations. HECC arithmetic is now generally performed using so-called explicit formulae. In the literature, there is no result which focuses on the exact memory requirement for implementing these formulae. This is the first work to report such minimal memory requirements. Also, in this work we provide a general methodology for realizing explicit formulae with a minimal number of registers. Applying this methodology, the work settles the issue for some important explicit formulae available in the literature. This is an attempt to experimentally solve, for particular instances based on HECC explicit formulae, the so-called ``Register Sufficiency Problem'', which is an NP-complete problem.
Generalization of the Selective-ID Security Model for HIBE Protocols
We generalize the selective-ID security model for HIBE by introducing two new
security models. Broadly speaking, both these models allow the adversary to commit
to a set of identities and in the challenge phase choose any one of the previously
committed identities. Two constructions of HIBE are presented which are
secure in the two models. Further, we show that the HIBEs can be
modified to obtain a multiple receiver IBE which is secure in the selective-ID
model without the random oracle assumption.
Ate pairing for $y^{2}=x^{5}-\alpha x$ in characteristic five
Recently, the authors proposed a method for computing the Tate pairing using a distortion map for $y^{2}=x^{5} -\alpha x$
($\alpha = \pm2$) over finite fields of characteristic five.
In this paper, we show that the Ate pairing, a variant of the Tate pairing, can be applied to this curve.
This leads to about $50\%$ computational cost-saving
over the Tate pairing.
Efficient Tate Pairing Computation Using Double-Base Chains
Pairing-based cryptosystems have been developing very fast in the last few years. The efficiency of these cryptosystems is largely determined by the computation of the Tate pairing. In this paper a new efficient algorithm based on double-base chains for computing the Tate pairing is proposed for odd characteristic $p>3$. The inherent sparseness of the double-base number system noticeably reduces the computational cost of computing the Tate pairing. It is $9\%$ faster than the previous fastest method for MOV degree $k=6$.
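For context, a double-base representation writes an integer as a sum of terms $2^a 3^b$, usually found greedily; the sketch below shows that greedy expansion. How such chains are woven into Miller's algorithm for the Tate pairing, and the restricted or signed chains the paper actually uses, are not reproduced here.

```python
def double_base_greedy(n):
    # Greedy {2,3}-integer expansion: repeatedly subtract the largest 2^a * 3^b <= n.
    # Returns a list of (a, b) exponent pairs with n == sum(2**a * 3**b).
    terms = []
    while n > 0:
        best, a, b = 1, 0, 0
        pow3, bb = 1, 0
        while pow3 <= n:
            rem = n // pow3
            aa = rem.bit_length() - 1          # largest power of 2 fitting in rem
            if (1 << aa) * pow3 > best:
                best, a, b = (1 << aa) * pow3, aa, bb
            pow3 *= 3
            bb += 1
        terms.append((a, b))
        n -= best
    return terms

t = double_base_greedy(314159)
print(t)
assert sum(2**a * 3**b for a, b in t) == 314159
```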
Improvement of recently proposed Remote User Authentication Schemes
Recently Manik et al. [13] proposed a novel remote user authentication scheme using bilinear pairings. Chou et al. [14] identified a weakness in Manik et al.'s scheme and made an improvement. Thulasi et al. [15] showed that both Manik et al.'s and Chou et al.'s schemes are insecure against forgery and replay attacks, but did not propose an improvement. In this paper, we propose an improvement to overcome the flaws.
Identity-based Key Agreement Protocols From Pairings
In recent years, a large number of identity-based key agreement
protocols from pairings have been proposed. Some of them are
elegant and practical. However, the security of this type of
protocols has been surprisingly hard to prove. The main issue is
that a simulator is not able to deal with reveal queries, because
it requires solving either a computational problem or a decisional
problem, both of which are generally believed to be hard (i.e.,
computationally infeasible). The best security proof
published so far uses the gap assumption, which means assuming
that the existence of a decisional oracle does not change the
hardness of the corresponding computational problem. The
disadvantage of using this solution to prove the security for this
type of protocols is that such decisional oracles, on which the
security proof relies, cannot be performed by any polynomial time
algorithm in the real world, because of the hardness of the
decisional problem. In this paper we present a method
incorporating a built-in decisional function in this type of
protocols. The function transfers a hard decisional problem in the
proof to an easy decisional problem. We then discuss the resulting
efficiency of the schemes and the relevant security reductions in
the context of different pairings one can use. We pay particular
attention, unlike most other papers in the area, to the issues
which arise when using asymmetric pairings.
Cryptographically Private Support Vector Machines
We study the problem of private classification using kernel methods. More specifically, we propose private protocols implementing the Kernel Adatron and Kernel Perceptron learning algorithms, give private classification protocols and private polynomial kernel computation protocols. The new protocols return their outputs---either the kernel value, the classifier or the classifications---in encrypted form so that they can be decrypted only by a common agreement by the protocol participants. We also show how to use the encrypted classifications to privately estimate many properties of the data and the classifier. The new SVM classifiers are the first to be proven private according to the standard cryptographic definitions.
A Novel Algorithm for Solving the LPN Problem and its Application to Security Evaluation of the HB Protocol for RFID Authentication
A novel algorithm for solving the LPN problem is proposed and analyzed. The algorithm originates from the recently proposed advanced fast correlation attacks, and it employs the concepts of decimation, linear combining, hypothesizing and minimum distance decoding. The proposed algorithm appears to be more powerful than the best previously reported one, the BKW algorithm. In fact, the BKW algorithm is shown to be a special instance of the proposed algorithm, but without optimized parameters. An improved security evaluation of the HB protocol for RFID authentication is then developed. Employing the proposed algorithm, the security of the HB protocol is reevaluated, implying that the previously reported security margins appear to be overestimated.
On ZK-Crypt, Book Stack, and Statistical Tests
The algorithms submitted to the ECRYPT Stream Cipher Project (eSTREAM)
were tested using the recently suggested statistical test named ``Book Stack''.
All the ciphers except ZK-Crypt have passed the tests.
The paper briefly describes the essence of the test.
A computer implementation of the test in the C++ language is supplied.
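Since the abstract only names the test, here is a rough Python sketch of the counting idea usually attributed to the ``Book Stack'' (move-to-front) test: symbols are kept in a recency-ordered list, and one counts how often the next symbol falls in the upper part of the list, which under a uniform source happens with a predictable frequency. The partition into parts, the exact chi-square statistic, and the supplied C++ implementation differ from this toy; `top_fraction` is an illustrative parameter.

```python
import random

def book_stack_counts(symbols, alphabet_size, top_fraction=0.5):
    # Move-to-front ("book stack") counting: under a uniform i.i.d. source the current
    # symbol lands in the top part of the recency list with probability ~ top_fraction.
    # Returns (observed hits in the top part, expected hits under uniformity); a real
    # test turns several such counts into a chi-square statistic.
    stack = list(range(alphabet_size))          # recency-ordered "book stack"
    top = max(1, int(alphabet_size * top_fraction))
    hits = 0
    for s in symbols:
        pos = stack.index(s)
        if pos < top:
            hits += 1
        stack.insert(0, stack.pop(pos))         # move the symbol to the front
    expected = len(symbols) * top / alphabet_size
    return hits, expected

data = [random.randrange(256) for _ in range(10000)]   # replace with cipher output bytes
print(book_stack_counts(data, 256))
```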
An Efficient ID-based Digital Signature with Message Recovery Based on Pairing
Signature schemes with message recovery have been widely investigated
over the past decade in the literature, but the first ID-based signature with message recovery did not appear until 2005. In this paper, we first point out and revise one small but important problem
which occurs in the previous ID-based signature with message recovery scheme. Then, using a completely different setting, we propose a new ID-based signature scheme with message recovery. Our scheme is much more efficient than the previous scheme. In our scheme (as well as other signature schemes with message recovery), the message itself is not required to be transmitted together with the signature,
so it has the smallest communication cost compared with generic (non-short) signature schemes. Although the communication overhead is still larger than Boneh et al.'s short signature (which is not ID-based), the computational cost of our scheme is lower than that of Boneh et al.'s scheme in the verification phase. We will also prove that the proposed scheme is provably secure in the random oracle model under the CDH assumption.
Last updated: 2006-10-27
Self-Generated-Certificate Public Key Cryptosystem
Certificateless Public Key Cryptography (CL-PKC) enjoys a number of features of Identity-Based
Cryptography (IBC) without having the problem of key escrow. However, it \textit{does} suffer from
an attack where the adversary, Carol, replaces Alice's public key with someone else's public key so that
Bob, who wants to send an encrypted message to Alice, uses Alice's identity and the other party's public
key as the inputs to the encryption function. As a result, Alice cannot decrypt the message
while Bob is unaware of this. We call this a \textit{Denial-of-Decryption (DoD) Attack}
as its nature is similar to the well known Denial-of-Service (DoS) Attack. Based on CL-PKC,
we propose a new paradigm called \textit{Self-Generated-Certificate Public Key Cryptography (SGC-PKC)}
that captures the DoD Attack. We also provide a generic construction of
a self-generated-certificate public key encryption scheme
in the standard model. In addition, we further
propose a certificateless signature and a certificateless encryption scheme with concrete implementation.
They are all provably secure in the standard model and are the first such schemes in the literature,
apart from the generic constructions by Yum and Lee, which may contain security weaknesses as pointed out by others.
We believe these concrete implementations are of independent interest.
(Hierarchical Identity-Based) Threshold Ring Signatures
We construct the first several efficient threshold ring signatures (TRS) without random oracles. Specializing to a threshold of one, they are the first several efficient ring signatures without random oracles after the only earlier instantiation of Chow, Liu, Wei, and Yuen. Further specializing to a ring of just one user, they are the short (ordinary) signatures without random oracles summarized in Wei and Yuen.
We also construct the first hierarchical identity-based threshold ring signature without random oracles. The signature size is $O(n\lambda_s)$ bits, where $\lambda_s$ is the security parameter and $n$ is the number of users in the ring. Specializing to a threshold of one, it is the first hierarchical identity-based ring signature without random oracles. Further specializing to a ring of one user, it is the constant-size hierarchical identity-based signature (HIBS) without random oracles in Yuen-Wei - the signature size is $O(\lambda_s)$ bits which is independent of the number of levels in the hierarchy.
DPA attacks on keys stored in CMOS cryptographic devices through the influence of the leakage behavior
This paper describes the influence of the threshold voltage VT on the leakage behavior of dice after a fabrication process. By measuring the current consumption (leakage) of a CMOS cryptographic device such as a smartcard security controller and using DPA analysis, it is possible to reveal the key used during a cryptographic operation. This paper therefore shows not only the security risks of using smartcard security controllers on which no DPA analysis has been performed. Furthermore, it is shown that the results of a DPA analysis performed only on an arbitrarily selected die cannot be representative of the whole production. Rather, the DPA analysis must be performed on a specially selected die with the smallest VT parameter (the worst case in leakage behavior), so that the result can be considered relevant for all other dice on the wafer (or for the whole production). Thus, it is shown that test labs must use different methods for DPA analysis in order to cover the leakage behavior of all wafers of a production. For further re-evaluation of smartcards, it is important that the manufacturer and the test labs can save time and costs by performing DPA measurements on the specially selected worst-case die.
A PUBLIC KEY CRYPTOSYSTEM BASED ON PELL EQUATION
RSA-type public key cryptosystems based on the Pell equation are proposed in honor of the Indian mathematician Brahmagupta, who studied the Pell equation long before European mathematicians came to know about it. Three RSA-type schemes are proposed; the first two are not semantically secure, whereas the third is semantically secure. The decryption speed of the proposed schemes is about two times as fast as RSA for a 2 log n-bit message. It is shown that the proposed schemes are more secure than the RSA scheme when purely common plaintexts are encrypted in the broadcast application, and are as secure as the RSA scheme against ciphertext attack. In addition, the proposed schemes are also secure against partially known plaintext attack.
Cryptanalysis of the Dual Elliptic Curve Pseudorandom Generator
The Dual Elliptic Curve Pseudorandom Generator (DEC PRG) is
proposed by Barker and Kelsey in a draft NIST Special Publication.
It is claimed that the pseudorandom generator is
secure unless the adversary can solve the elliptic curve discrete
logarithm problem (ECDLP) for the corresponding elliptic curve.
The claim is supported only by an informal discussion. No security
reduction is given, that is, it is not shown that an adversary
that breaks the pseudorandom generator implies a solver for the
ECDLP.
Our experimental results and an empirical argument show that the
DEC PRG is insecure. The attack does not require
solving the ECDLP for the corresponding elliptic curve. The attack
is very efficient.
Unconditionally secure chaffing and winnowing with short authentication tags
Rivest proposed the idea of a chaffing-and-winnowing scheme, in which confidentiality is achieved through the use of an authentication code. Thus it would still be possible to have confidential communications even if conventional encryption schemes were outlawed. Hanaoka et al. constructed unconditionally secure chaffing-and-winnowing schemes which achieve perfect secrecy in the sense of Shannon. Their schemes are constructed from unconditionally secure authentication codes.
In this paper, we construct unconditionally secure chaffing-and-winnowing schemes from unconditionally secure authentication codes in which the authentication tags are very short. This could be a desirable feature, because certain types of unconditionally secure authentication codes can provide perfect secrecy if the length of an authentication tag is at least as long as the length of the plaintext. The use of such a code might be prohibited if encryption schemes are made illegal, so it is of interest to construct chaffing-and-winnowing schemes based on ``short'' authentication tags.
New Blockcipher Modes of Operation with Beyond the Birthday Bound Security
In this paper, we define and analyze a new blockcipher mode of operation for encryption, CENC, which stands for Cipher-based ENCryption. CENC has the following advantages: (1) beyond the birthday bound security, (2) security proofs with the standard PRP assumption, (3) highly efficient, (4) single blockcipher key, (5) fully parallelizable, (6) allows precomputation of keystream, and (7) allows random access. CENC is based on the new construction of ``from PRPs to PRF conversion,'' which is of independent interest. Based on CENC and a universal hash-based MAC (Wegman-Carter MAC), we also define a new authenticated-encryption with associated-data scheme, CHM, which stands for CENC with Hash-based MAC. The security of CHM is also beyond the birthday bound.
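The following is a toy sketch of the XOR-based ``from PRPs to PRF'' keystream idea that the abstract attributes to CENC: within each frame, one blockcipher output is used as a mask and XORed with the following blockcipher outputs. An 8-bit seeded random permutation stands in for the blockcipher, and the frame `width` and counter handling are illustrative rather than the mode's actual parameters or security bound.

```python
import random

def make_toy_prp(key):
    # Toy 8-bit "blockcipher": a fixed, key-seeded random permutation (stand-in for AES).
    table = list(range(256))
    random.Random(key).shuffle(table)
    return lambda x: table[x % 256]

def cenc_style_keystream(E, nonce, frames, width):
    # Per frame: L = E(ctr) is XORed with the next `width` blockcipher outputs
    # (the PRP-to-PRF conversion). Toy block size: 1 byte.
    ctr, out = nonce, []
    for _ in range(frames):
        L = E(ctr)
        for i in range(1, width + 1):
            out.append(L ^ E(ctr + i))
        ctr += width + 1
    return bytes(out)

E = make_toy_prp(key=42)
print(cenc_style_keystream(E, nonce=0, frames=4, width=3).hex())
```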
On the Security of HMAC and NMAC Based on HAVAL, MD4, MD5, SHA-0 and SHA-1
HMAC is a widely used message authentication code and a
pseudorandom function generator based on cryptographic hash
functions such as MD5 and SHA-1. It has been standardized by ANSI,
IETF, ISO and NIST. HMAC is proved to be secure as long as the
compression function of the underlying hash function is a
pseudorandom function. In this paper we devise two new
distinguishers of the structure of HMAC, called {\em differential}
and {\em rectangle distinguishers}, and use them to discuss the
security of HMAC based on HAVAL, MD4, MD5, SHA-0 and SHA-1. We
show how to distinguish HMAC with reduced or full versions of
these cryptographic hash functions from a random function or from
HMAC with a random function. We also show how to use our
differential distinguisher to devise a forgery attack on HMAC. Our
distinguishing and forgery attacks can also be mounted on NMAC
based on HAVAL, MD4, MD5, SHA-0 and SHA-1. Furthermore, we show
that our differential and rectangle distinguishers can lead to
second-preimage attacks on HMAC and NMAC.
Deterministic and Efficiently Searchable Encryption
We present as-strong-as-possible definitions of privacy, and constructions achieving them, for public-key encryption schemes where the encryption algorithm is \textit{deterministic}. We obtain as a consequence database encryption methods that permit fast (i.e.~sub-linear, and in fact logarithmic, time) search while provably providing privacy that is as strong as possible subject to this fast search constraint. One of our constructs, called RSA-DOAEP, has the added feature of being length preserving, so that it is the first example of a public-key cipher. We generalize this to obtain a notion of efficiently-searchable encryption schemes which permit more flexible privacy to search-time trade-offs via a technique called bucketization. Our results answer much-asked questions in the database community and provide foundations for work done there.
Statistical Zero-Knowledge Arguments for NP from Any One-Way Function
We show that every language in NP has a *statistical* zero-knowledge
argument system under the (minimal) complexity assumption that
one-way functions exist. In such protocols, even a computationally
unbounded verifier cannot learn anything other than the fact that the
assertion being proven is true, whereas a polynomial-time prover
cannot convince the verifier to accept a false assertion except with
negligible probability. This resolves an open question posed by Naor,
Ostrovsky, Venkatesan, and Yung (CRYPTO `92, J. Cryptology `98).
Departing from previous works on this problem, we do not construct
standard statistically hiding commitments from any one-way function.
Instead, we construct a relaxed variant of commitment schemes called
"1-out-of-2-binding commitments," recently introduced by Nguyen and
Vadhan (STOC `06).
On Signatures of Knowledge
In a traditional signature scheme, a signature $\sigma$ on a message $m$ is issued under a public key $\pk$, and can be interpreted as follows: "The owner of the public key $\pk$ and its corresponding secret key has signed message $m$." In this paper we consider schemes that allow one to issue signatures on behalf of any NP statement, that can be interpreted as follows: "A person in possession of a witness $w$ to the statement that $x \in L$ has signed message $m$." We refer to such schemes as \emph{signatures of knowledge}.
We formally define the notion of a signature of knowledge. We begin by extending the traditional definition of digital signature schemes, captured by Canetti's ideal signing functionality, to the case of signatures of knowledge. We then give an alternative definition in terms of games that also seems to capture the necessary properties one
may expect from a signature of knowledge. We then gain additional
confidence in our two definitions by proving them equivalent.
We construct signatures of knowledge under standard complexity assumptions in the common-random-string model.
We then extend our definition to allow signatures of knowledge to be \emph{nested} i.e., a signature of knowledge (or another accepting
input to a UC-realizable ideal functionality) can itself serve as a
witness for another signature of knowledge. Thus, as a corollary, we obtain the first \emph{delegatable} anonymous credential system, i.e., a system in which one can use one's anonymous credentials as a secret key for issuing anonymous credentials to others.
Information-Theoretic Conditions for Two-Party Secure Function Evaluation
The standard security definition of unconditional secure function evaluation, which is based on the ideal/real model paradigm, has the disadvantage of being overly complicated to work with in practice. On the other hand, simpler ad-hoc definitions tailored to special scenarios have often been flawed. Motivated by this unsatisfactory situation, we give an information-theoretic security definition of secure function evaluation which is very simple yet provably equivalent to the standard, simulation-based definitions.
On the Limits of Point Function Obfuscation
We study the problem of circuit obfuscation, i.e., transforming a circuit in a way that hides everything except its input-output behavior. Barak et al. showed that a universal obfuscator that obfuscates every circuit class cannot exist, leaving open the possibility of special-purpose obfuscators. Known positive results for obfuscation are limited to point functions (Boolean functions that return 1 on exactly one input) and simple extensions thereof in the random oracle model, i.e., assuming black-box access to a truly random function. It was also shown by Wee how to instantiate random oracles so as to achieve a slightly weaker form of point function obfuscation. Two natural questions arise: (i) which circuits have obfuscators whose security can be reduced in a black-box way to point function obfuscation? and (ii) for which circuits obfuscatable in the random oracle model can we instantiate the random oracles to build obfuscators in the plain model?
We give partial answers to these questions: there is a circuit in AC^0 which can be obfuscated in the random oracle model, but which is not secure when the random oracles are instantiated with Wee's construction. More generally, we present evidence for the impossibility of a black-box reduction of the obfuscatability of this circuit to point function obfuscation. Conversely, this result shows that to instantiate random oracles in general obfuscators, one needs to utilize properties of the instantiation that are not satisfied by point function obfuscators.
There exist Boolean functions on $n$ (odd) variables having nonlinearity $> 2^{n-1} - 2^{\frac{n-1}{2}}$ if and only if $n > 7$
For the first time we find Boolean functions on 9 variables having nonlinearity 241, that remained as an open question in literature for almost three decades. Such functions are discovered using a suitably modified steepest-descent based iterative heuristic search in the class of rotation symmetric Boolean functions (RSBFs). This shows that there exist Boolean functions on $n$ (odd) variables having nonlinearity $> 2^{n-1} - 2^{\frac{n-1}{2}}$ if and only if $n > 7$. Using the same search method, we also find several other important functions and we study the autocorrelation, propagation characteristics and resiliency of the RSBFs (using proper affine transformations, if required). The results show that it is possible to get balanced Boolean functions on $n=10$ variables having autocorrelation spectra with maximum absolute value $< 2^{\frac{n}{2}}$, which was not known earlier. In certain cases the functions can be affinely transformed to get first order propagation characteristics. We also obtain 10-variable functions having first order resiliency and nonlinearity 492 which was posed as an open question in Crypto 2000.
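The quantity searched for above is the standard notion of nonlinearity, computable from the Walsh-Hadamard spectrum as $nl(f) = 2^{n-1} - \max_a |W_f(a)|/2$. A small sketch follows; the rotation-symmetric search heuristic that actually found the 9-variable, nonlinearity-241 functions is not reproduced.

```python
def nonlinearity(truth_table):
    # Nonlinearity from the Walsh-Hadamard spectrum.
    # truth_table: list of 2**n bits; returns 2**(n-1) - max|W_f| / 2.
    n = len(truth_table).bit_length() - 1
    assert len(truth_table) == 1 << n
    w = [1 - 2 * b for b in truth_table]        # (-1)^f(x)
    step = 1
    while step < len(w):                        # in-place fast Walsh-Hadamard transform
        for i in range(0, len(w), 2 * step):
            for j in range(i, i + step):
                w[j], w[j + step] = w[j] + w[j + step], w[j] - w[j + step]
        step *= 2
    return (1 << (n - 1)) - max(abs(v) for v in w) // 2

# Example: f(x1, x2, x3) = x1*x2 ^ x3 has nonlinearity 2 (the maximum for n = 3).
tt = [(((x >> 2) & (x >> 1)) & 1) ^ (x & 1) for x in range(8)]
print(nonlinearity(tt))   # 2
```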
Divisibility of the Hamming Weight by $2^k$ and Monomial Criteria for Boolean Functions
In this paper we consider the notions of the Hamming weight and the algebraic normal form.
We solve an open problem concerning checking divisibility of the weight
by $2^k$. We generalize the criterion for checking the evenness of the weight in two ways. Our main result states that for checking whether the Hamming weight of $f$ is divisible by $2^k, \,k>1$, it is necessary and sufficient to know its algebraic normal form up to an additive constant.
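For orientation, the classical $k=1$ case states that the Hamming weight of $f$ is odd exactly when the coefficient of the top monomial $x_1 x_2 \cdots x_n$ in its algebraic normal form equals 1. The sketch below computes the ANF by the binary Moebius transform and checks that fact; the paper's criterion for divisibility by $2^k$ with $k>1$ is not reproduced.

```python
def anf(truth_table):
    # Binary Moebius transform: truth table -> ANF coefficients.
    # The coefficient at index m corresponds to the monomial of the variables
    # whose bits are set in m.
    a = list(truth_table)
    n = len(a).bit_length() - 1
    for i in range(n):
        for m in range(len(a)):
            if m & (1 << i):
                a[m] ^= a[m ^ (1 << i)]
    return a

tt = [0, 1, 1, 1, 0, 0, 1, 0]          # some 3-variable function
coeffs = anf(tt)
# Classical k = 1 case: the weight of f is odd iff the coefficient of x1*x2*x3 is 1.
assert (sum(tt) % 2 == 1) == (coeffs[-1] == 1)
```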
FPGA Accelerated Tate Pairing Based Cryptosystems over Binary Fields
Though the implementation of the Tate pairing is commonly believed to be computationally more intensive than other cryptographic operations, such as ECC point multiplication, there has been substantial progress in speeding up the Tate pairing computations. Because of their inherent parallelism, the existing Tate pairing algorithms are very suitable for hardware implementation aimed at achieving a high operation speed. Supersingular elliptic curves over binary fields are good candidates for hardware implementation due to their simple underlying algorithms and binary arithmetic. In this paper we propose efficient Tate pairing implementations over binary fields $\mathbb F_{2^{239}}$ and $\mathbb F_{2^{283}}$ via FPGA. Though our field sizes are larger than those used in earlier architectures with the same security strength based on cubic elliptic curves or binary hyperelliptic curves, fewer multiplications in the underlying field
are required, so that the computational latency for one pairing can
be reduced. As a result, our pairing accelerators implemented via
FPGA can run 15-to-25 times faster than other FPGA realizations at
the same level of security strength, and at the same time achieve
a lower latency-area product.
A New Cryptosystem Based On Hidden Order Groups
Let $G_1$ be a cyclic multiplicative group of order $n$. It is known that the Diffie-Hellman problem is random self-reducible in $G_1$ with respect to a fixed generator $g$ if $\phi(n)$ is known. That is, given $g, g^x\in G_1$ and having oracle access to a ``Diffie-Hellman Problem solver'' with fixed generator $g$, it is possible to compute $g^{1/x} \in G_1$ in polynomial time (see Theorem 3.2). On the other hand, it is not known if such a reduction exists when $\phi(n)$ is unknown (see Conjecture 3.1). We exploit this ``gap'' to construct a cryptosystem based on hidden order groups and present a practical implementation of a novel cryptographic primitive called an \emph{Oracle Strong Associative One-Way Function} (O-SAOWF). O-SAOWFs have applications in multiparty protocols. We demonstrate this by presenting a key agreement protocol for dynamic ad-hoc groups.
On the (Im-)Possibility of Extending Coin Toss
We consider the cryptographic two-party protocol task of extending a given coin toss. The goal is to generate n common random coins from a single use of an ideal functionality which gives m<n common random coins to the parties. In the framework of Universal Composability we show the impossibility of securely extending a coin toss for statistical and perfect security. On the other hand, for computational security the existence of a protocol for coin toss extension depends on the number m of random coins which can be obtained "for free".
For the case of stand-alone security, i.e., a simulation based security definition without an environment, we present a novel protocol for unconditionally secure coin toss extension. The new protocol works for superlogarithmic m, which is optimal as we show the impossibility of statistically secure coin toss extension for smaller m.
Combining our results with already known results, we obtain a (nearly) complete characterization under which circumstances coin toss extension is possible.
Counting points on elliptic curves in medium characteristic
In this paper, we revisit the problem of computing the kernel of a separable isogeny of degree $\ell$ between two elliptic curves
defined over a finite field $\GF{q}$ of characteristic $p$. We
describe an algorithm the asymptotic time complexity of which is
equal to $\SoftO(\ell^2(1+\ell/p)\log q)$ bit operations. This
algorithm is particularly useful when $\ell > p$ and as a
consequence, we obtain an improvement of the complexity of the SEA
point counting algorithm for small values of $p$. More precisely, we
obtain a heuristic time complexity $\SoftO(\log^{4} q)$ and a space
complexity $O(\log^{2} q)$, in the previously unfavorable case where
$p \simeq \log q$. Compared to the best previous algorithms, the
memory requirements of our SEA variation are smaller by a $\log^2 q$
factor.
Tight Bounds for Unconditional Authentication Protocols in the Manual Channel and Shared Key Models
We address the message authentication problem in two seemingly
different communication models. In the first model, the sender and
receiver are connected by an insecure channel and by a
low-bandwidth auxiliary channel, that enables the sender to
``manually'' authenticate one short message to the receiver (for
example, by typing a short string or comparing two short strings).
We consider this model in a setting where no computational
assumptions are made, and prove that for any $0 < \epsilon < 1$
there exists a $\log^* n$-round protocol for authenticating
$n$-bit messages, in which only $2 \log(1 / \epsilon) + O(1)$ bits
are manually authenticated, and any adversary (even
computationally unbounded) has probability of at most $\epsilon$
to cheat the receiver into accepting a fraudulent message.
Moreover, we develop a proof technique showing that our protocol
is essentially optimal by providing a lower bound of $2 \log(1 /
\epsilon) - O(1)$ on the required length of the manually
authenticated string.
The second model we consider is the traditional message
authentication model. In this model the sender and the receiver
share a short secret key; however, they are connected only by an
insecure channel. We apply the proof technique above to obtain a
lower bound of $2 \log(1 / \epsilon) - 2$ on the required Shannon
entropy of the shared key. This settles an open question posed by
Gemmell and Naor (CRYPTO '93).
Finally, we prove that one-way functions are {\em necessary} (and
sufficient) for the existence of protocols breaking the above
lower bounds in the computational setting.
Last updated: 2006-06-10
Frobenius expansion and the Diffie Hellman problem
This paper investigates special sessions of the Diffie-Hellman (DH) key exchange scheme on elliptic curves for which the shared key can be computed by a polynomial time algorithm. Such sessions are called \emph{singular}. The existence of singular sessions is demonstrated using the Frobenius expansion and a polynomial representation of public keys, which lead to an expression for the shared key. When the Weil pairing, along with a modified pairing defined by a distortion map, can be computed efficiently on the elliptic curve, a sufficient condition is obtained for sessions to be singular which can be verified in polynomial time. Hence this condition identifies sessions whose singular nature can be determined in polynomial time. A single-round three-party key exchange scheme is proposed using singular sessions, in which efficient computation of the shared key of a pair of users by the third party is a necessary requirement. This scheme is thus a positive application of singular sessions and offers a possible alternative to the need for using supersingular curves on which pairings can be computed efficiently.
Some Practical Public-Key Encryption Schemes in both Standard Model and Random Oracle Model
In this paper, we present some more results about the security of the Kurosawa-Desmedt encryption scheme and a variant of it. We prove that after a modification, those schemes are secure against adaptive chosen-ciphertext attack not only under the decisional Diffie-Hellman assumption in standard model as before but also under the computational Diffie-Hellman assumption in the random oracle model. These results ensure that both the Kurosawa-Desmedt scheme and the variant have similar security merits as the Cramer-Shoup encryption scheme, which is proposed as a standard.
On Computing Products of Pairings
In many pairing-based protocols, the evaluation of a product
of many pairings is often required. In this paper we consider
methods to compute such products efficiently. Focusing on pairing-friendly fields
in particular, we evaluate methods for the Weil, Tate and Ate pairing algorithms
for ordinary elliptic curves at various security levels. Our operation counts
indicate that the minimal cost of each additional pairing relative to the cost of
one is $\approx 0.61$, $0.45$, and $0.43$, for each of these pairings
respectively at the 128-bit security level.
For larger security levels the Ate pairing can have a relative
additional cost of as low as $0.13$ for each additional pairing.
These estimates allow implementors to make optimal algorithm choices for
given scenarios, in which the number of pairings in the product,
the security level, and the embedding degree are
factors under consideration.
Key confirmation and adaptive corruptions in the protocol security logic
Cryptographic security for key exchange and secure session
establishment protocols is often defined in the so called
``adaptive corruptions'' model. Even if the adversary corrupts
one of the participants in the middle of the protocol execution
and obtains the victim's secrets such as the private signing key,
the victim must be able to detect this and abort the protocol.
This is usually achieved by adding a key confirmation message
to the protocol. Conventional symbolic methods for protocol
analysis assume unforgeability of digital signatures, and thus
cannot be used to reason about security in the adaptive corruptions
model.
We present a symbolic protocol logic for reasoning about
authentication and key confirmation in key exchange protocols.
The logic is cryptographically sound: a symbolic proof of
authentication and secrecy implies that the protocol is secure
in the adaptive corruptions model. We illustrate our method by
formally proving security of an authenticated Diffie-Hellman
protocol with key confirmation.
Visual Cryptography Schemes with Optimal Pixel Expansion
A visual cryptography scheme encodes a black&white secret image
into n shadow images called shares which are distributed to the n
participants. The shares are such that only qualified subsets of
participants can ``visually'' recover the secret image.
Usually, the reconstructed image will be darker than the background of
the image itself. In this paper we consider visual cryptography schemes
satisfying the model introduced by Tzeng and Hu (Designs, Codes and
Cryptography, Vol. 27, No. 3, pp. 207-227, 2002). In such a model
the recovered secret image can be darker or lighter than the background.
We prove a lower bound on the pixel expansion of the scheme and,
for (2,n)-threshold visual cryptography schemes, we provide schemes
achieving the bound. Our schemes improve on the ones proposed by
Tzeng and Hu.
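As background, the classical Naor-Shamir $(2,2)$-scheme with pixel expansion 2 is sketched below: every secret pixel becomes two subpixels in each share, and stacking the shares (bitwise OR) leaves white pixels lighter than black ones. The Tzeng-Hu model and the $(2,n)$-threshold schemes meeting the bound in this paper are more general and are not shown.

```python
import random

def share_22(secret_bits):
    # Classical (2,2) visual cryptography with pixel expansion 2.
    # Each secret pixel (0 = white, 1 = black) becomes two subpixels per share;
    # stacking the shares shows 1 dark subpixel for white, 2 for black.
    s1, s2 = [], []
    for bit in secret_bits:
        pattern = random.choice([(0, 1), (1, 0)])
        s1.append(pattern)
        s2.append(pattern if bit == 0 else tuple(1 - p for p in pattern))
    return s1, s2

def stack(s1, s2):
    # Physical stacking of transparencies corresponds to a bitwise OR of subpixels.
    return [tuple(a | b for a, b in zip(p1, p2)) for p1, p2 in zip(s1, s2)]

secret = [0, 1, 1, 0, 1]
a, b = share_22(secret)
print(stack(a, b))   # white pixels -> one dark subpixel, black pixels -> two
```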
Simplified pairing computation and security implications
Recent progress on pairing implementation has made certain pairings
extremely simple and fast to compute. Hence, it is natural to examine if there are consequences for the security of pairing-based cryptography.
This paper gives a method to compute eta pairings in a way which avoids the requirement for a final exponentiation. The method does not lead to any improvement in the speed of pairing implementation. However, it seems appropriate to re-evaluate the security of pairing based cryptography in light of these new ideas. A multivariate attack on the pairing inversion problem is proposed and analysed. Our findings support the belief that pairing inversion is a hard computational problem.
How Fast can be Algebraic Attacks on Block Ciphers?
In this paper we give a specification of a new block cipher
that can be called the Courtois Toy Cipher (CTC).
It is quite simple, and yet very much like any other known block cipher. If the parameters are large enough, it should evidently be
secure against all known attack methods.
However, we are not proposing a new method for encrypting sensitive data, but rather a research tool that should allow us (and other researchers) to experiment with algebraic attacks on block ciphers
and obtain interesting results using a PC with a reasonable quantity of RAM. For this reason the S-box of this cipher has only 3 bits,
which is quite small.
Ciphers with very small S-boxes are believed to be quite secure;
for example the Serpent S-box has only 4 bits,
and in DES all the S-boxes have 4 output bits.
The AES S-box is not quite as small but can be described
(in many ways) by very small systems of equations
with only a few monomials (and this fact can also be exploited in algebraic cryptanalysis).
We believe that results on algebraic cryptanalysis of this cipher
will have very deep implications for the security of ciphers in general.
Towards Trustworthy e-Voting using Paper Receipts
Current electronic voting systems are not sufficient for
trustworthy elections, as they do not provide any proof or
confirming evidence of their honesty. This lack of trustworthiness
is the main reason why e-voting is not widespread, even though
e-voting is expected to be more efficient than current plain
paper voting. Many experts believe that the only way to assure
voters that their intended votes have been cast is to use paper
receipts. In this paper, we propose an efficient scheme for issuing
receipts to voters in e-voting using the well-known
divide-and-choose method. Our scheme does not require any special
printers or scanners, nor frequent observation of voting machines.
In addition to that, our scheme is more secure than the previous
ones.
General Secret Sharing Based on the Chinese Remainder Theorem
In this paper we extend the threshold secret sharing schemes
based on the Chinese remainder theorem
in order to deal with more general access structures.
Aspects like verifiability, secret sharing homomorphisms
and multiplicative properties are also discussed.
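A minimal sketch of the threshold case may be useful: Mignotte's scheme, a classical CRT-based $(t,n)$-threshold scheme, distributes the residues of a secret modulo pairwise coprime moduli chosen so that any $t$ of them determine the secret by the Chinese remainder theorem. The concrete moduli and secret below are toy values, and the paper's extension to general access structures, verifiability and homomorphic properties is not reproduced.

```python
from math import prod

def crt(residues, moduli):
    # Chinese Remainder Theorem for pairwise coprime moduli.
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

# Mignotte-style (3, 5)-threshold sharing with moduli 11 < 13 < 17 < 19 < 23.
moduli = [11, 13, 17, 19, 23]
t = 3
beta = prod(moduli[:t])          # product of the t smallest moduli  = 2431
alpha = prod(moduli[-(t - 1):])  # product of the t-1 largest moduli = 437
secret = 1000                    # must satisfy alpha < secret < beta
shares = [secret % m for m in moduli]

# Any t shares recover the secret via CRT; fewer than t leave it undetermined modulo a large range.
assert crt(shares[:3], moduli[:3]) == secret
assert crt([shares[0], shares[2], shares[4]], [moduli[0], moduli[2], moduli[4]]) == secret
```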
Pairings for Cryptographers
Many research papers in pairing based cryptography
treat pairings as a ``black box''. These papers build
cryptographic schemes making use of various properties of
pairings.
If this approach is taken, then it is easy for authors to
make invalid assumptions concerning the properties of pairings.
The cryptographic schemes developed
may not be realizable in practice, or may not be
as efficient as the authors assume.
The aim of this paper is to outline, in as simple a fashion
as possible, the basic choices that are available when
using pairings in cryptography. For each choice, the main
properties and efficiency issues are summarized. The paper is
intended to be of use to non-specialists who are interested
in using pairings to design cryptographic schemes.
Classification of Signature-only Signature Models
We introduce a set of criteria for classifying signature-only
signature models. Using these criteria, we classify signature models into 5 basic types and 69 general classes. Theoretically, 21140 kinds of signature models can be deduced by appropriately combining different general classes. The result covers almost all existing signature models. We also contribute many new signature models. Moreover, we find that three signature models, i.e., group-nominee signature, multi-nominee signature and threshold-nominee
signature, are of great importance in light of our classification.
Achieving a log(n) Speed Up for Boolean Matrix Operations and Calculating the Complexity of the Dense Linear Algebra step of Algebraic Stream Cipher Attacks and of Integer Factorization Methods
The purpose of this paper is to calculate the running time of dense boolean matrix operations,
as used in stream cipher cryptanalysis and integer factorization. Several variations of Gaussian
Elimination, Strassen's Algorithm and the Method of Four Russians are analyzed. In particular,
we demonstrate that Strassen's Algorithm is actually slower than the Four Russians algorithm for
matrices of the sizes encountered in these problems. To accomplish this, we introduce a new model
for tabulating the running time, tracking matrix reads and writes rather than field operations, and
retaining the coefficients rather than dropping them. Furthermore, we introduce an algorithm, the
``Modified Method of Four Russians'', which has heretofore been known only orally and has not
appeared in the literature before. This algorithm is $\log n$ times faster than Gaussian Elimination
for dense Boolean matrices. Finally, we list rough estimates for the running time of several recent
stream cipher cryptanalysis attacks.
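As a concrete point of reference for the algorithms analyzed above, here is a minimal Python sketch of the (unmodified) Method of Four Russians for matrix multiplication over GF(2). Rows are packed into Python integers; the function name m4rm_multiply, the lookup width k and the tiny test matrices are illustrative choices made here, not details taken from the paper.

```python
# Minimal sketch of the Method of Four Russians over GF(2) (illustrative only).
# Matrices are lists of n row bitmasks: bit j of A[r] is the entry A[r][j].
def m4rm_multiply(A, B, n, k=8):
    """Return C = A * B over GF(2); k is the lookup-table width."""
    C = [0] * n
    for start in range(0, n, k):
        width = min(k, n - start)
        # Precompute all 2^width XOR combinations of rows B[start .. start+width-1].
        table = [0] * (1 << width)
        for i in range(width):
            bit, row = 1 << i, B[start + i]
            for t in range(bit):
                table[bit | t] = table[t] ^ row
        mask = (1 << width) - 1
        for r in range(n):
            # One table lookup replaces up to `width` row additions.
            C[r] ^= table[(A[r] >> start) & mask]
    return C

# Tiny sanity check: multiplying by the identity returns B unchanged.
n = 4
identity = [1 << i for i in range(n)]
B = [0b1010, 0b0110, 0b1111, 0b0001]
assert m4rm_multiply(identity, B, n, k=2) == B
```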
A Summary of McEliece-Type Cryptosystems and their Security
In this paper we give an overview of some of the cryptographic
applications which were derived from the proposal of R.J. McEliece to
use error correcting codes for cryptographic purposes.
Code based cryptography is an interesting alternative to number theoretic cryptography. Many basic cryptographic functions like encryption, signing, hashing, etc. can be
realized using code theoretic concepts.
In this paper we briefly show how to correct errors in transmitted data by employing Goppa codes and describe possible applications to public key cryptography.
The main focus of this paper is to provide detailed insight
into the state of the art of cryptanalysis of the McEliece
cryptosystem and its effect on different cryptographic applications.
We conclude that, for code-based cryptography, a public key of $88$KB
offers sufficient security for encryption, while a public key
of at least $597$KB is needed for secure signing.
Cryptanalysis of 4-Pass HAVAL
Uncategorized
Uncategorized
HAVAL is a cryptographic hash function proposed by Zheng et al. Rompay et al. and Wang et al. found collisions of full 3-Pass HAVAL. In this paper, we study the security of 4-Pass HAVAL. We find collisions of full versions of 4-Pass HAVAL. The attack is similar to the two-block attack on MD5 proposed by Wang et al. The computational complexity of the attack is about 2^30-2^32 for the first block and 2^27-2^29 for the second block. We use this attack to find 256-bit collisions of 4-Pass HAVAL in 3-4 hours on a common PC.
Last updated: 2006-05-04
A Built-in Decisional Function and Security Proof of ID-based Key Agreement Protocols from Pairings
In recent years, a large number of identity-based key agreement
protocols from pairings have been proposed. Some of them are elegant
and practical. However, the security of protocols of this type has
been surprisingly hard to prove. The main issue is that a simulator
cannot answer reveal queries, because doing so requires solving
either a computational problem or a decisional problem, both of
which are generally believed to be hard (i.e., computationally
infeasible). The best security proof published so far uses the gap
assumption, i.e., it assumes that the existence of a decisional
oracle does not change the hardness of the corresponding
computational problem. The disadvantage of this approach for
protocols of this type is that the decisional oracles on which the
security proof relies cannot be implemented by any polynomial-time
algorithm in the real world, because of the hardness of the
decisional problem. In this paper we present a method that
incorporates a built-in decisional function into protocols of this
type. The function transforms a hard decisional problem in the proof
into an easy one.
We then discuss the resulting efficiency of the schemes and the
relevant security reductions in the context of different pairings
one can use.
Last updated: 2006-05-04
Repairing a Security-Mediated Certificateless Encryption Scheme from PKC 2006
At PKC 2006, Chow, Boyd, and Nieto introduced the concept of security-mediated certificateless (SMC) cryptography. This notion can be considered as a variant of certificateless cryptography with the property of instantaneous key revocation, or a variant of mediated cryptography without full key escrow. They presented a definition of security for SMC encryption, which covers (fully-adaptive) chosen ciphertext attack with public key replacement considered as a strong but essential attack on certificateless cryptographic schemes. They proposed two SMC encryption schemes, one is a generic construction based on any public key encryption, identity-based encryption and one-time signature schemes and the other is a concrete construction based on bilinear pairings, which were shown to be secure under their security definition. In this note, we, however, present two types of attacks demonstrating that their generic construction for SMC encryption fails to meet their security requirement. We then discuss how to repair the scheme and provide a provably-secure solution.
An Efficient ID-based Proxy Signature Scheme from Pairings
This paper proposes a new ID-based proxy signature scheme based on bilinear pairings. The verification procedure of our scheme involves only one pairing operation, so our scheme is comparatively efficient. The new scheme can be proved secure,
in the random oracle model, under the hardness assumption of the
k-Bilinear Diffie-Hellman Inverse problem.
An efficient way to access an array at a secret index
We propose cryptographic primitives for reading and assigning the
(shared) secret found at a secret index in a vector of secrets. The
problem can also be solved in a constant number of rounds with existing general
techniques based on arithmetic circuits and the ``equality test''
of [Damgard.et.al 05]. However, the proposed technique requires fewer
bits to be exchanged. The proposed primitives require a number of rounds
that is independent of the size N of the vector, and only depends
(linearly) on the number t of computing servers. A previously known
primitive for reading a vector at a secret index works only for
2-party computations. Our primitives work for any number of computing
participants/servers.
The proposed techniques are secure against passive attackers, and zero
knowledge proofs are provided to show that exactly one index of the array is read/written. The techniques work both with multiparty computations based on secret sharing and with multiparty computations based on threshold homomorphic encryption.
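To make the flavour of the above concrete, here is a toy Python illustration of reading a vector at a secret index with additive secret sharing: the client shares the one-hot indicator of the index among a few servers, each server returns an inner product, and the replies sum to the selected entry. This is only a didactic sketch under simplifying assumptions (a public vector, passive servers, no zero-knowledge proofs); all names and parameters below are chosen here and are not the paper's protocol.

```python
import random

P = 2**61 - 1   # prime modulus for additive sharing (illustrative choice)
T = 3           # number of computing servers

def share(value):
    """Split value into T additive shares modulo P."""
    parts = [random.randrange(P) for _ in range(T - 1)]
    parts.append((value - sum(parts)) % P)
    return parts

v = [10, 20, 30, 40, 50]   # vector known to every server (public here for simplicity)
j = 3                      # secret index held by the client

# Share the one-hot indicator of j coordinate-wise among the T servers.
indicator_shares = list(zip(*(share(1 if i == j else 0) for i in range(len(v)))))

# Each server replies with the inner product of its indicator shares and v.
replies = [sum(s * x for s, x in zip(server_share, v)) % P
           for server_share in indicator_shares]

# The replies reveal nothing individually, but they sum to v[j].
assert sum(replies) % P == v[j]
```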
The Hardness of the DHK Problem in the Generic Group Model
In this note we prove that the controversial Diffie-Hellman Knowledge (DHK) assumption holds in the generic group model. This appears to be the first paper that presents any evidence as to whether the Diffie-Hellman Knowledge assumption is true or false.
Independent Zero-Knowledge Sets
We define and construct Independent Zero-Knowledge Sets (ZKS) protocols. In a ZKS protocol, a Prover commits to a set $S$ and, for any $x$, proves non-interactively to a Verifier whether $x \in S$ or $x \notin S$ without revealing any other information about $S$.
In the {\em independent} ZKS protocols we introduce, the adversary is
prevented from successfully correlating her set with that of an honest prover. Our notion of independence in particular implies that the
resulting ZKS protocol is non-malleable.
On the way to this result we define the notion of {\em independence} for commitment schemes. It is shown that this notion implies non-malleability, and we argue that this new notion has the potential to
simplify the design and security proofs of non-malleable commitment schemes.
Efficient implementations of ZKS protocols are based on the notion of mercurial commitments. Our efficient constructions of independent
ZKS protocols require the design of {\em new} commitment schemes that are simultaneously independent (and thus non-malleable) and mercurial.
New Public Key Authentication Frameworks with Lite Certification Authority
Two variants of the CA-based public key authentication framework are
proposed in this paper. One is termed public key cryptosystem
without certificate management center (PKCwCMC) and the other
proxy signature based authentication framework (PS-based
AF). Moreover, we give an implementation of the former based on
quadratic residue theory and an implementation of the latter from
RSA. Both variants can be viewed as lite-CA based
authentication frameworks, since the workload and deployment of CAs
in these systems are much lighter and easier than those in the
traditional CA-based PKC.
On the Relationships Between Notions of Simulation-Based Security
Several compositional forms of simulation-based security have been proposed in the literature, including universal composability, black-box simulatability, and variants thereof. These relations between a protocol and an ideal functionality are similar enough that they can be ordered from strongest to weakest according to the logical form of their definitions. However, determining whether two relations are in fact identical depends on some subtle features that have not been brought out in previous studies. We identify the position of a ``master process" in the distributed system, and some limitations on transparent message forwarding within computational complexity bounds, as two main factors. Using a general computational framework, called Sequential Probabilistic Process Calculus (SPPC), we clarify the relationships between the simulation-based security conditions.
We also prove general composition theorems in SPPC. Many of the proofs are carried out based on a small set of equivalence principles involving processes and distributed systems. This gives us results that carry over to a variety of computational models.
Pairing based Mutual Authentication Scheme Using Smart Cards
A mutual authentication scheme based on bilinear pairings and using smart cards is presented. We propose a novel technique of using two different servers, one for registration and the other for authentication. The
scheme is resilient to replay, forgery, man-in-the-middle and insider attacks.
Simulation-Based Security with Inexhaustible Interactive Turing Machines
Recently, there has been much interest in extending models for simulation-based security in such a way that the runtime of protocols may depend on the length of their input. Finding such extensions has turned out to be a non-trivial task. In this work, we propose a simple, yet expressive general computational model for systems of Interactive Turing Machines (ITMs) where the runtime of the ITMs may be polynomial per activation and may depend on the length of the input received. One distinguishing feature of our model is that the systems of ITMs that we consider involve a generic mechanism for addressing dynamically generated copies of ITMs. We study properties of such systems and, in particular, show that systems satisfying a certain acyclicity condition run in polynomial time. Based on our general computational model, we state different notions of simulation-based security in a uniform and concise way, study their relationships, and prove a general composition theorem for composing a polynomial number of copies of protocols, where the polynomial is determined by the environment. The simplicity of our model is demonstrated by the fact that many of our results can be proved by mere equational reasoning based on a few equational principles on systems.
Demonstrating data possession and uncheatable data transfer
We observe that a certain RSA-based secure hash function is homomorphic. We describe a protocol based on this hash function which prevents `cheating' in a data transfer transaction, while placing little burden on the trusted third party that oversees the protocol. We also describe a cryptographic protocol based on similar principles, through which a prover can demonstrate possession of an arbitrary set of data known to the verifier. The verifier isn't required to have this data at hand during the protocol execution, but rather only a small hash of it. The protocol is also provably as secure as integer factoring.
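The homomorphic property mentioned above can be illustrated with a toy example of an exponentiation hash $h(x) = g^x \bmod N$, for which $h(x+y) = h(x)\,h(y) \bmod N$. The tiny modulus and base below are illustrative only and say nothing about the paper's parameter choices or protocol details.

```python
# Toy illustration of the multiplicative homomorphism of h(x) = g^x mod N.
N = 3233   # 61 * 53, an insecure toy modulus, for illustration only
g = 2      # toy base

def h(x):
    return pow(g, x, N)

x, y = 123456, 987654
assert h(x + y) == (h(x) * h(y)) % N   # h(x + y) = h(x) * h(y) mod N
```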
A method of construction of balanced functions with optimum algebraic immunity
Because of the recent algebraic attacks, a high algebraic immunity is now an absolutely necessary (but not sufficient) property for Boolean functions used in stream ciphers. A difference of only 1 between the algebraic immunities of two functions can make a crucial difference with respect to algebraic attacks. Very few examples of (balanced) functions with high algebraic immunity have been found so far. These examples seem to be isolated and no method for obtaining such functions is known. In this paper, we introduce a general method for proving that a given function, in any number of variables, has a prescribed algebraic immunity. We deduce an algorithm for generating balanced functions in any odd number of variables, with optimum algebraic immunity. We also give an algorithm, valid for any even number of variables, for constructing (possibly) balanced functions with optimum (or, if this can be useful, with high but not optimal) algebraic immunity. We also give a new example of an infinite class of such functions. We study their Walsh transforms. To this aim, we completely characterize the Walsh transform of the majority function.
Computational Indistinguishability between Quantum States and Its Cryptographic Application
We introduce a computational problem of distinguishing between two specific quantum states as a new cryptographic problem to design a quantum cryptographic scheme that is ``secure'' against any polynomial-time quantum adversary. Our problem QSCDff is to distinguish between two types of random coset states with a hidden permutation over the symmetric group of finite degree. This naturally generalizes the commonly-used distinction problem between two probability distributions in computational cryptography. As our major contribution, we show three cryptographic properties: (i) QSCDff has the trapdoor property; (ii) the average-case hardness of QSCDff coincides with its worst-case hardness; and (iii) QSCDff is computationally at least as hard in the worst case as the graph automorphism problem. These cryptographic properties enable us to construct a quantum public-key cryptosystem, which is likely to withstand any chosen plaintext attack of a polynomial-time quantum adversary. We further discuss a generalization of QSCDff, called QSCDcyc, and introduce a multi-bit encryption scheme relying on the cryptographic properties of QSCDcyc.
New Integrated proof Method on Iterated Hash Structure and New Structures
Uncategorized
Uncategorized
In this paper, we give an integrated proof
method for the security proofs of iterated hash structures. Based on the
proof method, we can distinguish the security of the Merkle-Damgård
structure, wide-pipe hash, double-pipe hash and 3C hash, and identify the
requirements on the design of the compression function; we also give a new
recommended structure. Finally, we give a new hash structure, a MAC
structure and an encryption model, all of which use the same block cipher round
function and key schedule algorithm and are based on the Feistel structure; security proofs for these structures are also given.
Completeness of Formal Hashes in the Standard Model
We study an extension of the well-known Abadi-Rogaway logic with hashes. Previously, we have given a sound computational interpretation of this extension using Canetti's oracle hashing. This paper extends Micciancio and Warinschi's completeness result for the original logic to this setting.
PUBLIC-KEY CRYPTOSYSTEM BASED ON ISOGENIES
A new general mathematical problem, suitable for public-key cryptosystems, is proposed: morphism computation in a category of Abelian groups. In connection with elliptic curves over finite fields, the problem becomes the following: compute an isogeny (an algebraic homomorphism) between the given elliptic curves. The problem appears to be hard to solve even with a quantum computer. ElGamal public-key encryption and Diffie-Hellman key agreement are proposed for an isogeny cryptosystem. The paper describes the theoretical background and a public-key encryption technique, followed by security analysis and considerations for cryptosystem parameter selection. A demonstrative example of encryption is included as well.
Implementing Cryptographic Pairings on Smartcards
Pairings on elliptic curves are fast coming of age as cryptographic primitives for deployment in new security applications, particularly in the context of implementations of Identity-Based Encryption (IBE). In this paper we describe the implementation of various pairings on a contemporary 32-bit smart-card, the Philips Hi{P}er{S}mart\texttrademark , an instantiation of the MIPS-32 based Smart{MIPS}\texttrademark architecture. Three types of pairing are considered, first the standard Tate pairing on a nonsupersingular curve $E(\F_p)$, second the Ate pairing, also on a nonsupersingular curve $E(\F_p)$, and finally the $\eta_T$ pairing on a supersingular curve
$E(\F_{2^m})$. We demonstrate that pairings can be calculated as efficiently as classic cryptographic primitives on this architecture, with a calculation time of as little as 0.15 seconds.
Blinded Fault Resistant Exponentiation
As the core operation of many public key cryptosystems, group exponentiation is central to cryptography. Attacks on its implementation in the embedded-device setting are hence of great concern. Recently, implementations resisting both simple side-channel analysis and fault attacks have been proposed.
In this paper, we go further and present an algorithm that also inherently thwarts differential side-channel attacks in any finite abelian group with only limited time and storage overhead.
Rational Secret Sharing, Revisited
We consider the problem of secret sharing among $n$ rational players. This problem was introduced by Halpern and Teague (STOC 2004), who claim that a solution is impossible for $n=2$ but show a solution for the case $n\geq 3$. Contrary to their claim, we show a protocol for rational secret sharing among $n=2$ players; our protocol extends to the case $n\geq 3$, where it is simpler than the Halpern-Teague solution and also offers a number of other advantages. We also show how to avoid the continual involvement of the dealer, in either our own protocol or that of Halpern and Teague.
Our techniques extend to the case of rational players trying to securely compute an arbitrary function, under certain assumptions on the utilities of the players.
Linear Sequential Circuit Approximation of Grain and Trivium Stream Ciphers
Grain and Trivium are two hardware-oriented synchronous stream ciphers proposed as the simplest candidates to the ECRYPT Stream Cipher Project, both using 80-bit secret keys. In this paper we apply the linear sequential circuit approximation method to evaluate the strength of these stream ciphers against distinguishing attacks. In this approximation method, initially introduced by Golic in 1994, linear models are effectively determined for autonomous finite-state machines. We derive linear functions of consecutive keystream bits which hold with correlation coefficients of about 2^-63.7 and 2^-126 for Grain and Trivium, respectively. Then, using the concept of the so-called generating function, we turn them into linear functions with correlation coefficients of 2^-29 for Grain and 2^-72 for Trivium. This shows that the Grain output sequence can be distinguished from a purely random sequence using about 2^58 bits of the output sequence, with the same time complexity. However, our attempt fails to find a successful distinguisher for Trivium.
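For orientation, the keystream figures quoted above follow from the standard heuristic that a linear approximation holding with correlation coefficient $c$ can be detected from roughly $c^{-2}$ samples; the back-of-the-envelope check below is this rule of thumb, not the paper's exact analysis.

```latex
% Rule of thumb: about c^{-2} keystream samples detect a bias of correlation c.
\[
  c_{\mathrm{Grain}} = 2^{-29} \;\Longrightarrow\; c^{-2} = 2^{58}\ \text{bits},
  \qquad
  c_{\mathrm{Trivium}} = 2^{-72} \;\Longrightarrow\; c^{-2} = 2^{144}\ \text{bits}.
\]
```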
GVG-RP: A Net-centric Negligibility-based Security Model for Self-organizing Networks
We present a rigorous approach to building a secure self-organizing
mobile ad hoc network (MANET). In a highly dynamic environment like
MANET, it is impossible to ensure absolute security to protect
everything. We have to speak of the "infeasibility" of breaking the
security system rather than the "impossibility" of breaking the same
system. More formally, security is defined on the concept of
"negligible", which is asymptotically sub-polynomial with respect to a
pre-defined system parameter $n$. Intuitively, the parameter $n$ in
modern cryptography is the key length. The crypto-system's security is
broken if the adversary's capability is of exponentials of $n$, and the
efficiency of all related algorithms is measured in polynomials of $n$.
We adopt the same formal security notion in ad hoc network security
research. In network security, the network scale (i.e., number of
network members) $N$ replaces the role of key length $n$ in
cryptography. If a security scheme can be devised to ensure that the
probability of security failure is negligible, then the larger the
network scale is or the more complex the network system is, the more
secure the network is. In other words, given a negligibility-based
protection against a specific security attack, larger or more complex
systems are favored over smaller or simpler systems. Intuitively, this
is consistent with evolutionary theory, where more complex entities
probabilistically emerge from, and likely survive longer than, their less
complex counterparts.
In this paper, we use ``rushing attack'' as the exemplary security
attack to disrupt mobile ad hoc routing. We show that ``rushing
attack'' is a severe attack against on-demand ad hoc routing schemes.
Fortunately, ``localized forwarding community area'' is an available
countermeasure to ensure that the failure probability of packet
forwarding is negligible. This demonstrates the usefulness of our
negligibility-based network security model. We expect to augment the
pool of negligibility-based protections and explore the general notion
in other types of networks.
\emph{Keywords}: Net-centric Security = Negligibility + Scalability
A Unified Framework for the Analysis of Side-Channel Key Recovery Attacks (extended version)
Uncategorized
Uncategorized
The fair evaluation and comparison of side-channel attacks and countermeasures has been a long standing open question, limiting further developments in the field. Motivated by this challenge, this work makes a step in this direction and proposes a framework for the analysis of cryptographic implementations that includes a theoretical model and an application methodology. The model is based on commonly accepted hypotheses about side-channels that computations give rise to. It allows quantifying the effect of practically relevant leakage functions with a combination of information theoretic and security metrics, measuring the quality of an implementation and the strength of an adversary, respectively. From a theoretical point of view, we demonstrate formal connections between these metrics and discuss their intuitive meaning. From a practical point of view, the model implies a unified methodology for the analysis of side-channel key recovery attacks. The proposed solution allows getting rid of most of the subjective parameters that were limiting previous specialized and often ad hoc approaches in the evaluation of physically observable devices. It typically determines the extent to which basic (but practically essential) questions such as "How to compare two implementations?" or "How to compare two side-channel adversaries?" can be answered in a sound fashion.
Trace-Driven Cache Attacks on AES
Uncategorized
Uncategorized
Cache-based side-channel attacks have recently attracted significant attention due to new developments in the field. In this paper, we present efficient trace-driven cache attacks on a widely used implementation of the AES cryptosystem. We also evaluate the cost of the proposed attacks in detail under the assumption of a noiseless environment. We develop an accurate mathematical model that we use in the cost analysis of our attacks. We use two different metrics, specifically, the expected number of necessary traces and the cost of the analysis phase, for the cost evaluation purposes. Each of these metrics represents the cost of a different phase of the attack.
Defining Strong Privacy for RFID
In this work, we consider privacy in Radio Frequency IDentification (RFID) systems. Our contribution is threefold: (1) We propose a simple, formal definition of strong privacy useful for basic analysis of RFID systems, as well as a different (weaker) definition applicable to multi-verifier systems; (2) We apply our definition to reveal vulnerabilities in several proposed privacy-enhancing RFID protocols; and (3) We formally analyze and suggest improvements to ``Hash-Locks,'' one of the first privacy-enhancing RFID protocols in the literature.
A Challenging but Feasible Blockwise-Adaptive Chosen-Plaintext Attack on SSL
This paper introduces a chosen-plaintext vulnerability in the Secure
Sockets Layer (SSL) and Transport Layer Security (TLS) protocols which
enables recovery of low entropy strings such as can be guessed from a
likely set of 2--1000 options. SSL and TLS are widely used for
securing communication over the Internet. When utilizing block ciphers
for encryption, the SSL and TLS standards mandate the use of the
cipher block chaining (CBC) mode of encryption which requires an
initialization vector (IV) in order to encrypt. Although the first IV
used by SSL is a (pseudo)random string which is generated and shared
during the initial handshake phase, subsequent IVs used by SSL are
chosen in a deterministic, predictable pattern; in particular, the IV
of a message is taken to be the final ciphertext block of the
immediately-preceding message, and is therefore known to the adversary.
The one-channel nature of web proxies, anonymizers or Virtual Private
Networks (VPNs), results in all Internet traffic from one machine
traveling over the same SSL channel. We show this provides a feasible
``point of entry'' for this attack. Moreover, we show that the
location of target data among block boundaries can have a profound
impact on the number of guesses required to recover that data,
especially in the low-entropy case.
The attack in this paper is an application of the blockwise-adaptive
chosen-plaintext attack paradigm, and is the only feasible attack to
use this paradigm with a reasonable probability of success. The
attack will work for all versions of SSL, and TLS version 1.0. This
vulnerability and others are closed in TLS 1.1 (which is still in
draft status) and OpenSSL after 0.9.6d. It is hoped this paper will
encourage the deprecation of SSL and speed the adoption of OpenSSL
or TLS 1.1/1.2 when they are finally released.
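To make the guess-confirmation step concrete, here is a self-contained Python toy under strong simplifications: a keyed SHA-256 function stands in for the block cipher, a single block stands in for an SSL record, and MACs, padding and record framing are omitted. All names and values are chosen here for illustration and are not taken from the paper.

```python
import hashlib

BLOCK = 16

def toy_encrypt_block(key, block):
    # Not a real block cipher: a keyed pseudorandom function standing in for one.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

key = b"0123456789abcdef"
iv0 = b"initial_iv_16byt"
secret = b"PIN=4321........"            # low-entropy plaintext block

# Victim encrypts the secret; the ciphertext becomes the next IV (chained-IV CBC).
c_secret = toy_encrypt_block(key, xor(iv0, secret))
next_iv = c_secret

# Attacker, knowing iv0 and next_iv, submits guess XOR iv0 XOR next_iv for encryption.
guess = b"PIN=4321........"
probe = xor(xor(guess, iv0), next_iv)
c_probe = toy_encrypt_block(key, xor(next_iv, probe))

# The cipher input collapses to iv0 XOR guess, so the ciphertexts match
# exactly when the guess is right.
assert (c_probe == c_secret) == (guess == secret)
```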
The Design Principle of Hash Function with Merkle-Damgård Construction
The paper discusses the security of
hash functions with the Merkle-Damgård construction and provides
complexity bounds for finding a collision
and a preimage of the hash function based on the conditional probabilities of
the compression function $y=F(x,k)$. We conclude that, in the
Merkle-Damgård construction, the requirements of free-start
collision resistance and free-start preimage resistance on the
compression function are not necessary, and it is enough if the
compression function has the properties of fixed-start collision
resistance and fixed-start preimage resistance. However, the conditional
probabilities $P_{Y|X=x}(y)$ and $P_{Y|K=k}(y)$ of the compression
function $y=F(x,k)$ have much influence on the security of the hash
function. The best design of the compression function should have the
property that $P_{Y|X=x}(y)$ and $P_{Y|K=k}(y)$ are both
uniformly distributed for all $x$ and $k$. At the end of the paper,
we discuss block-cipher-based hash functions and point out that, among
the 20 schemes selected by PGV\cite{Re:Preneel} and
BPS\cite{Re:JBlack}, the best scheme is the block cipher itself, provided the
block cipher has perfect security and perfect key distribution.
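For reference, the following minimal Python sketch shows the plain Merkle-Damgård iteration of a compression function $y=F(x,k)$ from a fixed starting value with length-strengthening padding. The compression function here is simply built from SHA-256, and none of the probability conditions discussed above are modelled; it illustrates the construction itself, not the paper's recommendations.

```python
import hashlib

BLOCK = 64           # message block size in bytes
IV = b"\x00" * 32    # fixed initial chaining value ("fixed start")

def compress(chaining, block):
    # Toy compression function F(x, k): x = chaining value, k = message block.
    return hashlib.sha256(chaining + block).digest()

def md_hash(message):
    # Length-strengthening padding: 0x80, zero bytes, then the 8-byte message length.
    padded = message + b"\x80"
    padded += b"\x00" * ((-len(padded) - 8) % BLOCK)
    padded += len(message).to_bytes(8, "big")
    state = IV
    for i in range(0, len(padded), BLOCK):
        state = compress(state, padded[i:i + BLOCK])
    return state

print(md_hash(b"abc").hex())
```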
Identity Based Strong Designated Verifier Signature Scheme
Identity-based cryptosystems simplify the key management and revocation problem. Here we propose an Identity Based Strong Designated Verifier Signature (IBSDVS) scheme using bilinear pairings. The Designated Verifier Signature scheme described in [10] is identity-based, but it suffers from delegatability, as pointed out in [4]. We analyse the security of the scheme and show that the problem of delegatability does not exist in our scheme.
Low Complexity Bit-Parallel Square Root Computation over GF($2^m$) for all Trinomials
In this contribution we introduce a low-complexity bit-parallel algorithm for computing square roots over binary extension fields. Our proposed method can be applied to any type of irreducible polynomial. We derive explicit formulae for the space and time complexities associated with the square root operator when working with binary extension fields generated using irreducible trinomials. We show that for those finite fields, it is possible to compute the square root of an arbitrary field element with equal or better hardware efficiency than that associated with the field squaring operation. Furthermore, a practical application of the square root operator in the domain of field exponentiation computation is presented. It is shown that by using squarers, multipliers and square root blocks as building blocks, a parallel version of the classical square-and-multiply exponentiation algorithm can be obtained. A hardware implementation of that parallel version may provide a speedup of up to 50\% when compared with the traditional version.
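The algebraic identity underlying such square-root circuits is that squaring is a field automorphism, so $\sqrt{a} = a^{2^{m-1}}$ in GF($2^m$). The Python sketch below checks this in the tiny field GF($2^7$) defined by the trinomial $x^7 + x + 1$; the field size, the repeated-squaring approach and all names are illustrative choices made here, and the paper's explicit bit-parallel formulae are not reproduced.

```python
M = 7
MOD = (1 << 7) | (1 << 1) | 1    # the irreducible trinomial x^7 + x + 1

def gf_mul(a, b):
    """Multiply two elements of GF(2^M), given as bitmasks of polynomial coefficients."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):        # reduce whenever the degree reaches M
            a ^= MOD
    return r

def gf_sqrt(a):
    """Square root via sqrt(a) = a^(2^(M-1)), i.e. M-1 successive squarings."""
    for _ in range(M - 1):
        a = gf_mul(a, a)
    return a

a = 0b1011001
s = gf_sqrt(a)
assert gf_mul(s, s) == a
```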
Conditional Reactive Simulatability
Simulatability has established itself as a salient notion for defining
and proving the security of cryptographic protocols since it entails
strong security and compositionality guarantees, which are achieved by
universally quantifying over all environmental behaviors of the
analyzed protocol. As a consequence, however, protocols that are
secure except for certain environmental behaviors are not simulatable,
even if these behaviors are efficiently identifiable and thus can be
prevented by the surrounding protocol.
We propose a relaxation of simulatability by conditioning the
permitted environmental behaviors, i.e., simulation is only required
for environmental behaviors that fulfill explicitly stated
constraints. This yields a more fine-grained security definition that
is achievable i) for several protocols for which unconditional
simulatability is too strict a notion or ii) at lower cost for the
underlying cryptographic primitives. Although imposing restrictions
on the environment destroys unconditional composability in general, we
show that the composition of a large class of conditionally
simulatable protocols yields protocols that are again simulatable
under suitable conditions. This even holds for the case of cyclic
assume-guarantee conditions where protocols only guarantee suitable
behavior if they themselves are offered certain guarantees.
Furthermore, composing several commonly investigated protocol classes
with conditionally simulatable subprotocols yields protocols that are
again simulatable in the standard, unconditional sense.
Provably Secure Ubiquitous Systems: Universally Composable RFID Authentication Protocols
This paper examines two unlinkably anonymous, simple RFID identification protocols that require only the ability
to evaluate hash functions and generate random values, and that are provably secure against Byzantine adversaries.
The main contribution is a universally composable security model tuned for RFID applications. By making specific
setup, communication, and concurrency assumptions that are realistic in the RFID application setting, we arrive
at a model that guarantees strong security and availability properties,
while still permitting the design of
practical RFID protocols. We show that the two previously proposed protocols are provably secure within the new security model.
Our proofs do not employ random oracles---the protocols are shown to be
secure in the standard model under the assumption of
existence of pseudo-random function families.
Simulatable Security and Polynomially Bounded Concurrent Composition
Simulatable security is a security notion for multi-party protocols that implies strong composability features. The main definitional flavours of simulatable security are standard simulatability, universal simulatability, and black-box simulatability. All three come in "computational," "statistical" and "perfect" subflavours indicating the considered adversarial power. Universal and black-box simulatability, in all of their subflavours, are already known to guarantee that the concurrent composition even of a polynomial number of secure protocols stays secure.
We show that computational standard simulatability does not allow for secure concurrent composition of polynomially many protocols, but we also show that statistical standard simulatability does. The first result assumes the existence of an interesting cryptographic tool (namely time-lock puzzles), and its proof employs a cryptographic multi-party computation in an interesting and unconventional way.
Some Remarks on the TKIP Key Mixing Function of IEEE 802.11i
Uncategorized
Uncategorized
Temporal Key Integrity Protocol (TKIP) is a sub-protocol of IEEE 802.11i. TKIP remedies some security flaws of the Wired Equivalent Privacy (WEP) protocol. TKIP adds four new algorithms to WEP: a Message Integrity Code (MIC) called Michael, an Initialization Vector (IV) sequencing discipline, a key mixing function and a re-keying mechanism. The key mixing function, also called the temporal key hash, de-correlates the IVs from weak keys. Some cryptographic properties of the S-box used in the key mixing function are investigated in this paper, such as regularity, the avalanche effect, differential uniformity and linear structures. V. Moen, H. Raddum and K. J. Hole pointed out that there exists a temporal key recovery attack on the TKIP key mixing function. In this paper a method is proposed to defend against that attack, and the resulting effect on performance is also discussed.
On the existence of distortion maps on ordinary elliptic curves
Uncategorized
Uncategorized
Distortion maps allow one to solve the Decision Diffie-Hellman problem on subgroups of points on an elliptic curve. In the case of ordinary elliptic curves over finite fields, it is known that in most cases there are no distortion maps for the Frobenius eigenspaces. In this article we characterize the existence of distortion maps in the remaining cases.
A New Cryptanalytic Time/Memory/Data Trade-off Algorithm
Uncategorized
Uncategorized
In 1980, Hellman introduced a time/memory trade-off (TMTO) algorithm satisfying
the TMTO curve $TM^2=N^2$, where $T$ is the online time, $M$ is the memory and $N$ is the size
of the search space. Later work by Biryukov-Shamir incorporated multiple data to
obtain the curve $TM^2D^2=N^2$, where $D$ is the number of data points.
In this paper, we describe a new table structure obtained by combining Hellman's
structure with a structure proposed by Oechslin. Using the new table structure, we
design a new multiple data TMTO algorithm both with and without the DP method.
The TMTO curve for the new algorithm is obtained to be $T^3M^7D^8=N^7$. This curve is
based on a conjecture on the number of distinct points covered by the new table. Support
for the conjecture has been obtained through some empirical observations. For $D>N^{1/4}$,
we show that the trade-offs obtained by our method are better than the trade-offs
obtained by the BS method.
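For a feel of what the two curves say, the small Python calculation below evaluates them for one example parameter set (all quantities as powers of two). The numbers n, d and m are illustrative choices made here, and the calculation only plugs values into the curve equations; it ignores the validity constraints of either method.

```python
n = 80     # log2 N, size of the search space
d = 24     # log2 D, number of data points (here D > N^(1/4))
m = 40     # log2 M, memory

t_bs  = 2 * n - 2 * m - 2 * d          # from the curve T M^2 D^2 = N^2
t_new = (7 * n - 7 * m - 8 * d) / 3    # from the curve T^3 M^7 D^8 = N^7

print(f"log2 T on the Biryukov-Shamir curve: {t_bs}")      # 32
print(f"log2 T on the new curve:             {t_new:.1f}") # about 29.3
```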
ECGSC: Elliptic Curve based Generalized Signcryption Scheme
Signcryption is a new cryptographic primitive that simultaneously fulfills the functions of both signature and encryption. The definition of generalized signcryption is first proposed in this paper. Generalized signcryption has the special feature of providing confidentiality or authenticity separately, depending on the inputs, so it is more useful than ordinary signcryption. Based on ECDSA, a signcryption scheme called ECGSC is designed. It is equivalent to an AtE(OTP, MAC) encryption scheme or to ECDSA when one of the parties is absent. A third party can verify the signcryption text publicly using the ECDSA verification method. Security properties are proven in the random oracle model: confidentiality (CUF-CPA), unforgeability (UF-CMA) and non-repudiation. Compared with other schemes, ECGSC presents a 78% reduction in computational cost for typical security parameters for high-level security applications.
Fast computation of Tate pairing on general divisors of genus 3 hyperelliptic curves
Uncategorized
Uncategorized
For the Tate pairing computation over hyperelliptic
curves, there are developments by Duursma-Lee and Barreto et al.,
and those computations are focused on {\it degenerate} divisors.
As divisors are generally not of degenerate form, it is necessary to
find algorithms for {\it general} divisors for the Tate pairing
computation. In this paper, we present two efficient methods for
computing the Tate pairing over divisor class groups of the
hyperelliptic curves $y^2 = x^p - x + d, ~ d = \pm 1$ of genus 3.
First, we provide the {\it pointwise} method, which is a
generalization of the previous developments by Duursma-Lee and
Barreto et al. In the second method, we use the {\it resultant}
for the Tate pairing computation. According to our theoretical
analysis of the complexity, the {\it resultant} method is $48.5
\%$ faster than the pointwise method in the best case and $15.3
\%$ faster in the worst case, and our implementation results show
that the {\it resultant} method is much faster than the pointwise
method. These two methods are completely general in the sense that
they work for general divisors with Mumford representation, and
they provide very explicit algorithms.
Fast Elliptic Scalar Multiplication using New Double-base Chain and Point Halving
The fast implementation of elliptic curve cryptosystems relies on the efficient computation of scalar multiplication. Based on the double-base chain representation of the scalar using powers of 2 and 3, we propose a new representation with powers of ½ and 3 instead. Thus the efficient point halving operation can be incorporated in the new double-base chain to achieve fast scalar multiplication. Experimental results show that our approach leads to a lower complexity, which contributes to the efficient implementation of elliptic curve cryptosystems.
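For background, the sketch below greedily computes an ordinary double-base decomposition over the bases {2, 3}, i.e. the kind of representation the paper starts from; it does not implement the proposed {½, 3} chain with point halving. The function name and test value are illustrative choices made here.

```python
def greedy_double_base(n):
    """Greedily write n as a sum of terms 2^a * 3^b; returns the list of (a, b)."""
    terms = []
    while n > 0:
        best, best_ab = 0, (0, 0)
        b, pow3 = 0, 1
        while pow3 <= n:
            a = (n // pow3).bit_length() - 1      # largest a with 2^a * 3^b <= n
            cand = (1 << a) * pow3
            if cand > best:
                best, best_ab = cand, (a, b)
            pow3 *= 3
            b += 1
        terms.append(best_ab)
        n -= best
    return terms

terms = greedy_double_base(314159)
assert sum((1 << a) * 3 ** b for a, b in terms) == 314159
```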
Designated Confirmer Signatures Revisited
Uncategorized
Uncategorized
Previous definitions of designated confirmer signatures in the literature are incomplete, and the proposed security definitions fail to capture key security properties, such as unforgeability against malicious confirmers and non-transferability. We propose new definitions.
Previous schemes rely on the random oracle model or set-up assumptions, or are secure with respect to relaxed security definitions. We construct a practical scheme that is provably secure with respect to our security definition under the strong RSA-assumption, the decision composite residuosity assumption, and the decision Diffie-Hellman assumption.
To achieve our results we introduce several new relaxations of standard notions. We expect these techniques to be useful in the construction and analysis of other efficient cryptographic schemes.
Chosen-Ciphertext Secure Identity-Based Encryption in the Standard Model with short Ciphertexts
We describe a practical identity-based encryption scheme that is secure in the standard model against chosen-ciphertext (CCA2) attacks. Security is based on an assumption comparable to (but slightly stronger than) Bilinear Decisional Diffie-Hellman (BDDH).
A comparison shows that our construction outperforms all known identity-based encryption schemes in the standard model, and its performance is even comparable with that of the random-oracle-based Boneh/Franklin IBE scheme.
Our proposed IBE scheme has furthermore the property that it fulfills some notion of ``redundancy-freeness", i.e. the encryption algorithm is not only a probabilistic injection but also a surjection. As a consequence the ciphertext overhead is nearly optimal: to encrypt $k$ bit messages for $k$ bit identities and with $k$ bit randomness we get $3k$ bit ciphertexts to guarantee (roughly) $k$ bits of security.