Cryptanalysis of white box DES implementations
Obfuscation is a method for hiding information about some parts of a computer program. According to Kerckhoffs' principle, a cryptographic algorithm should be public while the whole security should rely on the secrecy of the key. In some contexts, source code is publicly available while the key should be kept secret; this is the challenge of code obfuscation. This paper deals with the cryptanalysis of such methods of obfuscation applied to the DES.
Such methods, called the ``naked-DES'' and ``nonstandard-DES'', were proposed by Chow et al. in 2002.
Some methods for the cryptanalysis of the ``naked-DES'' were proposed by Chow et al., Jacob et al., and Link and Neuman. In their paper, Link and Neuman proposed another method for the obfuscation of the DES.
In this paper, we propose a general method that applies to all schemes. Moreover, we provide a theoretical analysis. We implemented our method with a C code and applied it successfully to thousands of obfuscated implementations of DES (both ``naked'' and ``non-standard'' DES). In each case, we recovered enough information to be able to invert the function.
A New Type of Cipher: DICING_CSB
In this paper, we propose a new type of cipher named DICING_CSB, which derives from our previous synchronous stream cipher DICING. It applies a stream of subkeys and an encryption form of block ciphers, so it can be viewed as a combination of a stream cipher and a block cipher. Hence, the new cipher has the fast speed of a stream cipher and no need for a MAC.
From Selective-ID to Full Security: The Case of the Inversion-Based Boneh-Boyen IBE Scheme
In this note we remark that the inversion-based selective-ID secure identity-based encryption (IBE) scheme from Boneh and Boyen
can be bootstrapped to full-ID security using a technique by Waters.
An improved collision probability for CBC-MAC and PMAC
In this paper we compute the collision probability of CBC-MAC for suitably chosen messages. We show that the probability is $\Omega(\ell q^2/N)$, where $\ell$ is the number of message blocks, $N$ is the size of the domain and $q$ is the total number of queries. For a random oracle the probability is $O(q^2/N)$. This improved collision probability helps us to mount an efficient distinguishing attack and a MAC-forgery attack. We also show a collision probability for PMAC of $\Omega(q^2/N)$ (strictly more than the birthday bound). We have used a purely combinatorial approach to obtain this bound. A similar analysis can be made for other MACs such as XCBC, TMAC and OMAC. We hope this approach will help us to calculate related probabilities.
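As a toy illustration of counting MAC collisions (not the paper's chosen-message attack), the sketch below builds a CBC-MAC from a random permutation over a small domain and counts tag collisions among random two-block messages; with random messages collisions appear at roughly the birthday rate $q^2/N$. The domain size, seed and query count are all illustrative.

```python
import random

# Toy CBC-MAC over a small domain N, using a random permutation as the
# "block cipher" (an idealisation; all parameters are illustrative).
N = 2**16
random.seed(1)
perm = list(range(N))
random.shuffle(perm)

def E(x):            # the toy block cipher
    return perm[x]

def cbc_mac(msg):    # msg is a list of blocks in [0, N)
    y = 0
    for block in msg:
        y = E(y ^ block)
    return y

# Query q random two-block messages and count MAC collisions.
q = 600
tags = {}
collisions = 0
for _ in range(q):
    msg = (random.randrange(N), random.randrange(N))
    t = cbc_mac(list(msg))
    if t in tags and tags[t] != msg:
        collisions += 1
    else:
        tags[t] = msg
print("collisions among", q, "queries:", collisions)
```

With $q = 600$ and $N = 2^{16}$ one expects a handful of collisions ($\approx q^2/2N$); the paper's point is that suitably chosen (rather than random) messages push this up to $\Omega(\ell q^2/N)$.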
Improved Security Analysis of PMAC
In this paper we provide a simple, concrete and improved security
analysis of {\bf PMAC}, a Parallelizable Message Authentication
Code. We show that the advantage of any distinguisher for {\bf PMAC}
based on a random permutation is at most $\mathbf{\frac{5q\sigma -
3.5 q^2}{2^n}}$, where $\sigma$ is the total number of message
blocks in all $q$ queries made by the distinguisher. In the original
paper by Black and Rogaway in Eurocrypt-2002, the bound was
$\frac{(\sigma+1)^2}{2^{n-1}}$. Very recently, Minematsu and
Matsushima in FSE-2007, have provided a bound $\frac{10\ell
q^2}{2^n}$ where $\ell$ is the maximum block length of all messages
queried by the distinguisher. Our new bound is better than both the original and the recently proposed bounds and guarantees much stronger security for PMAC. We also provide a complete, independent and simple combinatorial proof. This proof idea may help us to find similar results for other MAC algorithms.
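To make the comparison concrete, the following sketch evaluates the three bounds quoted above for one set of illustrative parameters (the values of $n$, $q$ and $\ell$ are assumptions, not taken from the paper):

```python
# Compare the three PMAC distinguishing bounds for illustrative parameters:
# q queries, ell blocks per message, n-bit blocks.
n = 64
q = 2**20
ell = 2**10
sigma = q * ell                                      # total blocks over all queries

new_bound = (5*q*sigma - 3.5*q**2) / 2**n            # this paper's bound
black_rog = (sigma + 1)**2 / 2**(n - 1)              # Black-Rogaway, Eurocrypt 2002
minematsu = 10 * ell * q**2 / 2**n                   # Minematsu-Matsushima, FSE 2007

print(f"new bound:     {new_bound:.3e}")
print(f"Black-Rogaway: {black_rog:.3e}")
print(f"Minematsu:     {minematsu:.3e}")
assert new_bound < black_rog and new_bound < minematsu
```

At these parameters the new bound is several orders of magnitude below the original one, matching the abstract's claim.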
Formal Security Treatments for IBE-to-Signature Transformation: Relations among Security Notions
In a seminal paper on identity based encryption (IBE), Boneh and Franklin [BF01] mentioned an interesting transform from an IBE scheme to a signature scheme, which was observed by Moni Naor. In this paper, we give formal security treatments for this transform and discover several implications and separations among security notions of IBE and the transformed signature. For example, we show that for such a successful transform, one-wayness of IBE is an essential condition. Additionally, we give a sufficient and necessary condition for converting a semantically secure IBE scheme into an existentially unforgeable signature scheme. Our results help establish strategies for the design and automatic security proof of signature schemes from (possibly weak) IBE schemes. We also show some separation results which strongly support that one-wayness, rather than semantic security, of IBE captures an essential condition for achieving a secure signature.
A General Construction of Tweakable Block Ciphers and Different Modes of Operations
This work builds on earlier work by Rogaway at Asiacrypt 2004 on
tweakable block cipher (TBC) and modes of operations. Our first
contribution is to generalize Rogaway's TBC construction by working
over a ring {\ring} and by the use of a masking sequence of
functions. The ring {\ring} can be instantiated as either $GF(2^n)$
or as $\bbbz_{2^n}$. Further, over $GF(2^n)$, efficient
instantiations of the masking sequence of functions can be done
using either a binary Linear Feedback Shift Register (LFSR); a powering
construction; a cellular automata map; or by using a word oriented LFSR.
Rogaway's TBC construction
was built from the powering construction over $GF(2^n)$. Our second
contribution is to use the general TBC construction to instantiate
constructions of various modes of operation including authenticated
encryption (AE) and message authentication codes (MAC). In particular,
this gives rise to a family of efficient one-pass AE modes of operation.
Out of these, the mode of operation obtained by using a word oriented
LFSR promises to provide a masking method which is more efficient than the
one used in the well known AE protocol called OCB.
HCH: A New Tweakable Enciphering Scheme Using the Hash-Counter-Hash Approach
The notion of tweakable block ciphers was formally introduced by
Liskov-Rivest-Wagner at Crypto 2002. The extension and the first construction,
called CMC, of this notion to tweakable enciphering schemes which can handle
variable length messages was given by Halevi-Rogaway at Crypto 2003.
In this paper, we present {\hch}, which is a new construction of such a scheme.
The construction uses two universal hash computations with a counter mode
of encryption in-between. This approach was first proposed by McGrew-Viega
to build a scheme called XCB and later used by Wang-Feng-Wu, to obtain a
scheme called HCTR. Among the hash-Ctr-hash type constructions, an important
advantage of {\hch} compared to the others is that {\hch}
has a quadratic security bound; XCB does not provide any security bound
while HCTR has a cubic security bound. A unique feature of {\hch} compared
to all known tweakable enciphering schemes is that {\hch} uses a
single key, can handle
arbitrary length messages and has a quadratic security bound. An important
application of a tweakable enciphering scheme is disk encryption.
{\hch} is well suited for this application. We also describe a variant which
can utilize pre-computation and makes one less block cipher call. This variant
compares favourably to other hash-encrypt-hash type constructions: it supports
better key agility and requires less key material.
Last updated: 2007-02-02
Verifying Data Integrity with Few Queries to Untrusted Memory
We present a novel technique for verifying the integrity of data stored in an untrusted memory with a small number of memory accesses.
Memory integrity verification, which enables detection of tampering of data stored in untrusted memory, is an essential requirement of
secure processors that provide private and tamper-proof computation.
Limited on-chip storage in a secure processor makes it necessary
for it to store data (including program code) in an untrusted external memory where it is easily susceptible to adversarial tampering. Thus, to ensure validity of computation, it is extremely important to have techniques that can verify the integrity of data stored in untrusted memory. Existing memory integrity verification techniques, like Merkle trees, impose very high communication overhead, i.e., a large number of queries from the processor to memory, in order to perform data integrity
verification. Given that memory latency is very high compared to the execution speed of the processor, this imposes a significant running time penalty for applications executing on the processor. Our proposed technique, which is based on the Chinese Remainder Theorem, performs integrity verification with low communication
overhead while incurring a modest increase in on-chip storage requirement. We present the details of the proposed technique and provide corresponding proofs of security and correctness.
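The abstract does not give the construction itself, so the following is only a hedged toy of the general idea: keep a small trusted (on-chip) CRT-style fingerprint of untrusted memory, where residues modulo pairwise coprime primes determine, by the CRT, a position-weighted sum modulo their product. The moduli, memory contents and weighting are all illustrative.

```python
# Hedged toy: small trusted fingerprint of untrusted memory via residues
# modulo pairwise coprime primes (illustrative, not the paper's scheme).
PRIMES = [101, 103, 107, 109]          # illustrative pairwise coprime moduli

def fingerprint(words):
    # Position-weighted sum, so reordering words is also detected.
    s = sum((i + 1) * w for i, w in enumerate(words))
    return tuple(s % p for p in PRIMES)

memory = [12, 99, 7, 56, 31]           # untrusted external memory
onchip = fingerprint(memory)           # small trusted on-chip state

memory[2] ^= 1                         # adversarial tampering
print("tamper detected:", fingerprint(memory) != onchip)
```

Any change small relative to the product of the moduli alters at least one residue, so the tampering above is detected while the on-chip state stays a few words long.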
Cryptanalysis and Improvement of an Elliptic Curve Diffie-Hellman Key Agreement Protocol
Uncategorized
Uncategorized
In SAC'05, Strangio proposed protocol ECKE-1 as an efficient elliptic curve Diffie-Hellman two-party key agreement protocol using public key authentication. In this letter, we show that, despite the author's claims, protocol ECKE-1 is vulnerable to key-compromise impersonation attacks.
We also present an improved protocol --- ECKE-1N, which can withstand such attacks. The improved protocol's performance is comparable to the well-known MQV protocol and maintains the same remarkable list of security properties.
Private Locally Decodable Codes
We consider the problem of constructing efficient locally decodable
codes in the presence of a computationally bounded adversary.
Assuming the existence of one-way functions, we construct {\em
efficient} locally decodable codes with positive information rate
and \emph{low} (almost optimal) query complexity which can correctly
decode any given bit of the message from constant channel error rate
$\rho$. This compares favorably to the state of the art for
locally decodable codes without cryptographic assumptions. For all
our constructions, the probability that the decoding algorithm
incorrectly decodes any bit of the message is negligible in the
security parameter for any polynomial-time adversary.
Efficient Methods for Conversion and Solution of Sparse Systems of Low-Degree Multivariate Polynomials over GF(2) via SAT-Solvers
The computational hardness of solving large systems of sparse and low-degree multivariate equations is a necessary condition for the security of most modern symmetric cryptographic schemes. Notably, most cryptosystems can be implemented with inexpensive hardware and have low gate counts, resulting in a sparse system of equations, which in turn renders such attacks feasible. On one hand, numerous recent papers on the XL algorithm and more sophisticated Groebner-basis techniques [5, 7, 13, 14] demonstrate that systems of equations are efficiently solvable when they are sufficiently overdetermined or have a hidden internal algebraic structure that implies the existence of some useful algebraic relations. On the other hand, most of this work, as well as most successful algebraic attacks, involves dense, not sparse systems, at least until linearization by XL or a similar algorithm. No polynomial-system-solving algorithm we are aware of demonstrates that a significant benefit is obtained from the extreme sparsity of some systems of equations.
In this paper, we study methods for efficiently converting systems of low-degree sparse multivariate equations into a conjunctive normal form satisfiability (CNF-SAT) problem, for which excellent heuristic algorithms have been developed in recent years. A direct application of this method gives very efficient results: we show that sparse multivariate quadratic systems (especially if over-defined) can be solved much faster than by exhaustive search if beta < 1/100. In particular, our method requires no additional memory beyond that required to store the problem, and so often terminates with an answer for problems that cause Magma and Singular to crash. On the other hand, if Magma or Singular do not crash, then they tend to be faster than our method, but this case includes only the smallest sample problems.
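The conversion idea can be illustrated on a single toy equation (the encoding choices below are hypothetical; the paper's exact conversion strategy may differ). The monomial $x_1 x_2$ gets an auxiliary CNF variable $t$ via a Tseitin-style AND gate, and the GF(2) equation $x_1 x_2 \oplus x_3 = 1$ becomes an XOR constraint on $t$ and $x_3$:

```python
from itertools import product

# Toy encoding of the GF(2) equation  x1*x2 XOR x3 = 1  as CNF.
# Variables: 1=x1, 2=x2, 3=x3, 4=t; a negative literal means negation.
cnf = [
    [-4, 1], [-4, 2], [4, -1, -2],   # t <-> x1 AND x2 (Tseitin AND gate)
    [4, 3], [-4, -3],                # t XOR x3 = 1
]

def sat(assign, clause):
    return any((lit > 0) == assign[abs(lit)] for lit in clause)

# Check that CNF solutions, projected to (x1,x2,x3), match the equation.
for x1, x2, x3 in product([0, 1], repeat=3):
    eq_holds = ((x1 & x2) ^ x3) == 1
    cnf_sat = any(
        all(sat({1: x1, 2: x2, 3: x3, 4: t}, c) for c in cnf)
        for t in (0, 1)
    )
    assert eq_holds == cnf_sat
print("CNF encoding agrees with the GF(2) equation on all assignments")
```

A full converter applies the same gadget per monomial and handles long XOR chains by cutting them with further auxiliary variables, which is where sparsity keeps the clause count small.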
Efficient Hybrid Encryption from ID-Based Encryption
This paper deals with generic transformations from ID-based key encapsulation mechanisms (IBKEM) to hybrid public-key encryption (PKE). The best generic transformation known until now is by Boneh and Katz and requires roughly 704-bit overhead in the ciphertext. We present two new such generic transformations that are applicable to partitioned IBKEMs. A partitioned IBKEM is an IBKEM that provides some extra structure. Such IBKEMs are quite natural and in fact nearly all known IBKEMs have this additional property. Our first transformation yields chosen-ciphertext secure PKE schemes from selective-ID secure partitioned IBKEMs with a 256-bit overhead in ciphertext size plus one extra exponentiation in encryption/decryption. As the central tool a Chameleon Hash function is used to map the identities. The second transformation transforms adaptive-ID secure partitioned IBKEMs into chosen-ciphertext secure PKE schemes with no additional overhead.
Applying our transformations to existing IBKEMs we propose a number of novel PKE schemes with different trade-offs. In some concrete instantiations the Chameleon Hash can be made implicit which results in improved efficiency by eliminating the additional exponentiation. Since our transformations preserve the public verifiability property of the IBE schemes it is possible to extend our results to build threshold hybrid PKE schemes. We show an analogue generic transformation in the threshold setting and present a concrete scheme which results in the most efficient threshold PKE scheme in the standard model.
On Perfectly Balanced Boolean Functions
Perfectly balanced functions were introduced by Sumarokov. A well known class of such functions are those linear either in the first or in the last variable. We present a novel technique to construct perfectly balanced functions not in the above class.
Two Trivial Attacks on Trivium
Trivium is a stream cipher designed in 2005 by C. De Cannière and B. Preneel for the European project eSTREAM. It has successfully passed the first phase of the project and has been selected for a special focus in the second phase for the hardware portfolio of the project. Trivium has an internal state of 288 bits and an 80-bit key. Although the design has a simple and elegant structure, no attack on it had been found prior to this work.
In this paper we study a class of Trivium-like designs. We propose a set of techniques that one can apply in the cryptanalysis of such constructions. The first group of methods is for recovering the internal state and the secret key of the cipher, given a piece of known keystream. Our attack is more than $2^{30}$ faster than the best previously known attack. Another group of techniques allows one to gather statistics on the keystream and to build a distinguisher.
We study two designs: the original design of Trivium and a truncated version Bivium, which follows the same design principles as the original. We show that the internal state of the full Trivium can be recovered in time around $c\cdot 2^{83.5}$, and for Bivium this complexity is $c\cdot 2^{36.1}$. These are the best known results for these ciphers. Moreover, a distinguisher for Bivium with working time $2^{32}$ is presented, the correctness of which has been verified by simulations.
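For readers unfamiliar with the design, here is a compact sketch of the Trivium keystream generator following the published eSTREAM specification (288-bit state, 80-bit key and IV, $4 \times 288$ warm-up rounds). The bit-ordering conventions are assumptions, and this sketch has not been checked against the official test vectors.

```python
# Sketch of Trivium keystream generation per the eSTREAM specification.
# Not validated against official test vectors; bit ordering is assumed.
def trivium(key_bits, iv_bits, nbits):
    # s[0..287] corresponds to s_1..s_288 in the specification:
    # key in s_1..s_80, IV in s_94..s_173, ones in s_286..s_288.
    s = key_bits + [0]*13 + iv_bits + [0]*112 + [1, 1, 1]
    assert len(s) == 288
    out = []
    for i in range(4 * 288 + nbits):
        t1 = s[65] ^ s[92]
        t2 = s[161] ^ s[176]
        t3 = s[242] ^ s[287]
        if i >= 4 * 288:                 # output only after warm-up
            out.append(t1 ^ t2 ^ t3)
        t1 ^= (s[90] & s[91]) ^ s[170]
        t2 ^= (s[174] & s[175]) ^ s[263]
        t3 ^= (s[285] & s[286]) ^ s[68]
        # Rotate the three registers, feeding each with another's output.
        s = [t3] + s[:92] + [t1] + s[93:176] + [t2] + s[176:287]
    return out

key, iv = [0]*80, [0]*80                 # illustrative all-zero key and IV
ks = trivium(key, iv, 64)
print("".join(map(str, ks)))
```

The truncated variant Bivium studied above keeps only two of the three registers, which is what drops the state-recovery cost to around $2^{36.1}$.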
TinyTate: Identity-Based Encryption for Sensor Networks
In spite of several years of intense research, the area of security
and cryptography in Wireless Sensor Networks (WSNs) still has a
number of open problems. On the other hand, the advent of
Identity-Based Encryption (IBE) has enabled a wide range of new
cryptographic solutions. In this work, we argue that IBE is ideal for
WSNs and vice versa. We discuss the synergy between the systems,
describe how WSNs can take advantage of IBE, and present results for
computation of the Tate pairing over resource constrained nodes.
Fast Digital Signature Schemes as Secure as Diffie-Hellman Assumptions
This paper presents two fast digital signature schemes based on Diffie-Hellman assumptions. In the random oracle model, the first scheme S1 has a tight security reduction to the computational Diffie-Hellman (CDH) problem; and the second scheme S2 has a tight security reduction to the decisional Diffie-Hellman (DDH) problem.
Compared with existing signature schemes whose security is tightly related to the CDH problem, such as the EDL signature schemes, the signature generation of S1 is about 27% faster and the verification is about 35% faster, not counting hash function evaluations. Compared with existing signature schemes whose security is tightly related to the DDH problem, such as the KW-DDH signature scheme, the signing of S2 is about 40% faster and the verification is about 35% faster.
The high efficiency of the proposed schemes is attributed to a new
protocol EDL_mwz which implements the proof of equality of discrete logarithms. The EDL_mwz protocol outperforms its counterpart, the Chaum-Pedersen protocol, as its computation is about 38% faster and its bandwidth is |G| bits shorter. This new protocol may be of independent interest.
Strongly-Secure Identity-based Key Agreement and Anonymous Extension
We study the provable security of identity-based (ID-based) key agreement protocols. Although several published protocols have been proven secure in the random oracle model, only a weak adversarial model is considered -- the adversary is not allowed to ask Session-Key Reveal queries that would allow it to learn previously established session keys. Recent research efforts devoted to providing a stronger level of security require strong assumptions, such as assuming that the simulator has access to a non-existential computational or decisional oracle. In this work, we propose an ID-based key agreement protocol and prove its security in the widely accepted indistinguishability-based model of Canetti and Krawczyk. In our proof, the simulator does not require access to any non-existential computational or decisional oracle. We then extend our basic protocol to support ad-hoc anonymous key agreement with bilateral privacy. To the best of our knowledge, this is the first protocol of its kind, as previously published protocols are for fixed groups and provide only unilateral privacy (i.e., only one of the protocol participants enjoys anonymity).
Group Decryption
Anonymity is one of the main concerns in group-oriented cryptography. However, most efforts, for instance group signatures and ring signatures, only provide anonymity from the sender's point of view. Little work has been done to ensure anonymity, in a cryptographic sense, from the recipient's point of view in group-oriented communications. In this paper, we formalize the notion of group decryption. It can be viewed as an analogue of group signatures in the context of public key encryption. In this notion, a sender can encrypt a committed message intended for any member of a group, managed by a group manager, while the recipient of the ciphertext remains anonymous. The sender can convince a verifier of this fact without leaking the plaintext or the identity of the recipient. If required, the group manager can verifiably open the identity of the recipient. We propose an efficient group decryption scheme that is proven secure in the random oracle model. The overhead in both computation and communication is independent of the group size. A full ciphertext is about 0.2K bytes in a typical implementation, and the scheme is practical for protecting the recipient's identity in privacy-sensitive group-oriented communications.
Last updated: 2007-04-03
VEST Ciphers
VEST (Very Efficient Substitution-Transposition) is a set of families of counter-assisted substitution-transposition ciphers designed and optimised specifically for ASIC and FPGA hardware. VEST ciphers provide fast scalable keystream generation, authenticated encryption and collision-resistant hashing at a very low cost in area and power consumption. All VEST ciphers support variable-length keys and IVs and are naturally very slow in software. Cores of VEST ciphers can be viewed as light-weight T-functions or large bijective nonlinear feedback shift registers (NLFSRs) with massively parallel feedback, assisted by a nonlinear residue number system (RNS) based counter with a very long period. Four VEST cipher family trees are introduced: 80 bit secure VEST4-80, 128 bit secure VEST8-128, 160 bit secure VEST16-160 and 256 bit secure VEST32-256, returning 4 to 32 bits of output per clock cycle while occupying ~3K to ~28K ASIC gates.
Group Encryption
We present group encryption, a new cryptographic primitive which is
the encryption analogue of a group signature. It possesses similar
verifiability, security and privacy properties, but whereas a group
signature is useful whenever we need to conceal the source (signer)
within a group of legitimate users, a group encryption is useful
whenever we need to conceal a recipient (decryptor) within a group of
legitimate receivers.
We introduce and model the new primitive and present sufficient as
well as necessary conditions for its generic implementation.
We then develop an efficient novel number theoretic construction
for group encryption of discrete logarithms whose complexity is
independent of the group size. To achieve this we construct a
new public-key encryption for discrete logarithms that satisfies
CCA2-key-privacy and CCA2-security in the standard model.
Applications of group encryption include settings where a user wishes to
hide her preferred trusted third party or even
impose a hidden hierarchy of trusted parties, or
settings where verifiable well-formed ciphertexts are kept in an
untrusted storage server that must be prevented both from
learning the content of records and from analyzing the
identities of their retrievers.
Invertible Universal Hashing and the TET Encryption Mode
This work describes a mode of operation, TET, that turns a regular block cipher into a length-preserving enciphering scheme for messages of (almost) arbitrary length. When using an n-bit block cipher, the resulting scheme can handle input of any bit-length between n and 2^n and associated data of arbitrary length.
The mode TET is a concrete instantiation of the generic mode of operation that was proposed by Naor and Reingold, extended to handle tweaks and inputs of arbitrary bit length. The main technical tool is a construction of invertible ``universal hashing'' on wide blocks, which is as efficient to compute and invert as polynomial-evaluation hash.
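The polynomial-evaluation hash mentioned above can be sketched as follows, scaled down to GF($2^8$) for brevity (TET works over GF($2^{128}$); the reduction polynomial and key here are illustrative):

```python
# Toy polynomial-evaluation hash over GF(2^8). The hash of blocks
# m_1..m_l under key k is  m_1*k^l + m_2*k^(l-1) + ... + m_l*k,
# computed by Horner's rule.
REDUCE = 0x11b                         # x^8 + x^4 + x^3 + x + 1 (illustrative)

def gf_mul(a, b):                      # carry-less multiply mod REDUCE
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= REDUCE
        b >>= 1
    return r

def poly_hash(key, blocks):
    h = 0
    for m in blocks:                   # Horner step: h = (h + m) * key
        h = gf_mul(h ^ m, key)
    return h

print(hex(poly_hash(0x53, [0x12, 0x34, 0x56])))
```

Because multiplying by a nonzero key is invertible in the field, the last block can be recovered from the hash and the other blocks with one field division; this informal observation is the kind of invertibility the TET construction builds on for wide blocks.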
Optimised versions of the Ate and Twisted Ate Pairings
The Ate pairing and the twisted Ate pairing for ordinary elliptic curves
which are generalizations of the $\eta_T$ pairing for supersingular curves have previously been proposed.
It is not necessarily the case that both pairings are faster than the Tate pairing.
In this paper we propose optimized versions of the Ate and twisted Ate pairings with the loop reduction method and show that both pairings are always at least as fast as the Tate pairing.
We also provide suitable families of elliptic curves for which our optimized Ate and optimized twisted Ate pairings can be computed with half the loop length compared to the Tate pairing.
Interactive two-channel message authentication based on interactive-collision Resistant hash functions
We propose an interactive message authentication protocol (IMAP)
using two channels: an insecure broadband channel and an
authenticated narrow-band channel. We consider the problem in the
context of ad hoc networks, where it is assumed that there is
neither a secret key shared among the two parties, nor a public-key
infrastructure in place. The security of our IMAP is based on the
existence of Interactive-Collision Resistant (ICR) hash functions, a
new notion of hash function security.
Our IMAP is based on the computational assumption that ICR hash
functions exist. It performs better than message authentication
protocols that are based on computational assumptions. That is,
while achieving the same level of security, the amount of
information sent over the authenticated channel in our IMAP is
smaller than the most secure IMAP and Non-interactive Message
Authentication Protocol (NIMAP) in the literature. In other words,
if we send the same amount of information over the authenticated
channel, we can allow much stronger adversaries compared to the
existing protocols in the literature.
Moreover, our IMAP benefits from a simple structure and works under
fewer security assumptions compared to other IMAPs in the
literature. The efficient and easy-to-use structure of our IMAP
makes it very practical in real world ad hoc network scenarios.
Universally Composable Key-evolving Signature
The standard digital signature scheme is easily subject to the key exposure problem. A feasible and effective approach to overcoming this problem is the key-evolving signature scheme. In this paper, we study key-evolving signatures within the UC framework and propose an appropriate ideal functionality that captures the basic security requirements of key-evolving signatures. Then, we present a generic way to transform a key-evolving signature scheme into a real-life protocol. Finally, we show that the UC definition of security is equivalent to the previous definition of security, termed EU-CMA security.
Computing endomorphism rings of Jacobians of genus 2 curves over finite fields
We present probabilistic algorithms which, given a genus 2 curve C defined over a finite field and a quartic CM field K, determine whether the endomorphism ring of the Jacobian J of C is the full ring of integers in K. In particular, we present algorithms for computing the field of definition of, and the action of Frobenius on, the subgroups J[l^d] for prime powers l^d. We use these algorithms to create the first implementation of Eisentrager and Lauter's algorithm for computing Igusa class polynomials via the Chinese Remainder Theorem, and we demonstrate the algorithm for a few small examples. We observe that in practice the running time of the CRT algorithm is dominated not by the endomorphism ring computation but rather by the need to compute p^3 curves for many small primes p.
New Public Key Cryptosystems Using Polynomials over Non-commutative Rings
In this paper, we propose a new method for designing public key cryptosystems based on general non-commutative rings. The key idea of our proposal is that, for a given non-commutative ring, we can define polynomials and take them as the underlying work structure. Doing so makes it easy to implement a Diffie-Hellman-like key exchange protocol, and consequently ElGamal-like cryptosystems can be derived immediately. Moreover, we show how to extend our method to non-commutative groups (or semi-groups).
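The commuting-polynomials idea behind such schemes can be sketched Stickel-style with a hypothetical toy (2x2 matrices mod 251; the paper's actual protocol may differ): polynomials with central coefficients evaluated at the same ring element commute with each other even though the ring does not.

```python
import numpy as np

# Toy Stickel-style key exchange in the ring of 2x2 matrices mod P.
# All elements and secret polynomials below are illustrative.
P = 251
A = np.array([[1, 2], [3, 4]]) % P
B = np.array([[0, 1], [1, 1]]) % P     # A and B do not commute

def poly_eval(coeffs, X):
    # Evaluate c0*I + c1*X + c2*X^2 + ... in the matrix ring mod P (Horner).
    R = np.zeros_like(X)
    for c in reversed(coeffs):
        R = (R @ X + c * np.eye(len(X), dtype=X.dtype)) % P
    return R

fa, fb = poly_eval([3, 1, 4], A), poly_eval([1, 5, 9], B)   # Alice's secrets
ga, gb = poly_eval([2, 7], A), poly_eval([1, 8, 2, 8], B)   # Bob's secrets

U = (fa @ fb) % P                      # Alice -> Bob
V = (ga @ gb) % P                      # Bob -> Alice

# Shared key: polynomials in A commute, polynomials in B commute, so
# fa @ (ga @ gb) @ fb == ga @ (fa @ fb) @ gb.
k_alice = (fa @ V @ fb) % P
k_bob   = (ga @ U @ gb) % P
assert np.array_equal(k_alice, k_bob)
print(k_alice)
```

The sketch only demonstrates the commuting mechanism; assessing the security of any concrete instantiation is exactly the subject of the paper.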
Security analysis of the variant of the self-shrinking generator proposed at ICISC 2006
In this paper, we revisit the variant of the self-shrinking
generator (SSG) proposed by Chang et al. at ICISC 2006. This variant,
which we call SSG-XOR, was claimed to have better cryptographic
properties than SSG in a practical setting. We show, however, that SSG-XOR
has no advantage over SSG from the viewpoint of practical cryptanalysis.
One-Round ID-Based Blind Signature Scheme without ROS Assumption
In this paper, we propose a new ID-based blind signature scheme based
on bilinear pairings from scratch (i.e. without using existing ID-based signature schemes, and without using existing computational assumptions). First, the round complexity of our ID-based blind signature scheme is optimal. Namely, each interactive signature generation requires the requesting user and the signer to transmit only one message each. Second, the proposed scheme is provably secure against generic parallel attack without using the ROS assumption. Indeed, the security of the proposed scheme is based on a new formalized assumption called one-more bilinear Diffie-Hellman Inversion (1m-BDHI) assumption.
Efficient Dynamic k-Times Anonymous Authentication
In k-times anonymous authentication (k-TAA) schemes,
members of a group can be anonymously authenticated to access
applications for a bounded number of times determined by
application providers. Dynamic $k$-TAA allows application
providers to independently grant or revoke group members from
accessing their applications. Dynamic $k$-TAA can be applied in
several scenarios, such as $k$-show anonymous credentials, digital
rights management, anonymous trial of Internet services, e-voting,
e-coupons etc. This paper proposes the first provably secure
dynamic $k$-TAA scheme, where authentication costs do not depend
on $k$. This efficiency is achieved by using a technique
called ``efficient provable e-tag'', proposed in \cite{Nguyen06},
which could be applicable to other e-tag systems.
Privacy-Protecting Coupon System Revisited
At FC’05, Chen et al. introduced an elegant privacy protecting
coupon (PPC) system, CESSS05, in which users can purchase
multi-coupons and redeem them unlinkably while being prevented from
overspending or sharing the coupons. However, the costs for issuing and
redeeming coupons are linear in the redemption limit. The security of the
system is not proved; only some arguments about system properties are
provided. Coupons last indefinitely and cannot be terminated. In this
paper, we propose the first PPC system with constant costs for communication
and computation. Coupons are revocable and the system is
provably secure.
Cryptanalysis of Hwang-Chang's Time-Stamp Protocol for Digital Watermarking
In 2005, Hwang et al. [17] proposed a time-stamping protocol for digital watermarking and claimed that their scheme is secure against attacks. However, in this article we show that their scheme is not secure enough: when the owner of the image sends both the encrypted session key and the image to the TSS, an attacker can intercept the transmitted data and then launch an off-line attack to analyze it. We describe the attacker's actions in this article. After that, we propose an improved scheme to prevent this off-line attack.
The Energy Cost of Cryptographic Key Establishment in Wireless Sensor Networks
Wireless sensor nodes generally face serious limitations in terms of computational power, energy supply, and network bandwidth. Therefore, the implementation of effective and secure techniques for setting up a shared secret key between sensor nodes is a challenging task. In this paper we analyze and compare the energy cost of two different protocols for authenticated key establishment. The first protocol employs a ``light-weight'' variant of the Kerberos key distribution scheme with 128-bit AES encryption. The second protocol is based on ECMQV, an authenticated version of the elliptic curve Diffie-Hellman key exchange, and uses a 256-bit prime field GF($p$) as underlying algebraic structure. We evaluate the energy cost of both protocols on a Rockwell WINS node equipped with a 133 MHz StrongARM processor and a 100 kbit/s radio module. The evaluation considers both the processor's energy consumption for calculating cryptographic primitives and the energy cost of radio communication for different transmit power levels. Our simulation results show that the ECMQV key exchange consumes up to twice as much energy as the Kerberos key distribution. However, in large-scale networks, ECMQV is more energy-efficient than Kerberos.
Last updated: 2007-01-11
Cryptanalysis of An Oblivious Polynomial Evaluation Protocol Based On Polynomial Reconstruction Problem
Uncategorized
Uncategorized
In 1999, Naor and Pinkas \cite{NP99} presented a useful protocol
called oblivious polynomial evaluation (OPE). In this paper, a
cryptanalysis of the OPE protocol is presented. It is shown that the
receiver can successfully obtain the sender's secret polynomial $P$
after executing the OPE protocol only once, which means the privacy
of the sender can be violated and the security of the OPE protocol
is broken. It is also proven that the complexity of the
cryptanalysis is the same as that of the protocol being
cryptanalyzed.
Families of genus 2 curves with small embedding degree
Uncategorized
Uncategorized
Hyperelliptic curves of small genus have the advantage of
providing a group of size comparable to that of elliptic curves,
while working over a field of smaller size. Pairing-friendly
hyperelliptic curves are those whose Jacobian has order
divisible by a large prime, whose embedding degree is small enough
for computations to be feasible, and whose minimal embedding field
is large enough for the discrete logarithm problem in it to be
difficult. We give a sequence of $\F_q$-isogeny classes for a family
of Jacobians of genus two curves over $\F_{q}$, for $q=2^m$, and
their corresponding small embedding degrees. We give examples of
the parameters for such curves with embedding degree $k<(\log q)^2$,
such as $k=8,13,16,23,26,37,46,52$.
For secure and efficient implementation of pairing-based
cryptography on genus g curves over $\F_q$, it is desirable that the
ratio $\rho=\frac{g\log_2 q}{\log_2N}$ be approximately 1, where $N$
is the order of the subgroup with embedding degree $k$. We show that
for our family of curves, $\rho$ is often near 1 and never more than
2.
We also give a sequence of $\F_q$-isogeny classes for a family of
Jacobians of genus 2 curves over $\F_{q}$ whose minimal embedding
field is much smaller than the finite field indicated by the
embedding degree $k$. That is, the extension degrees in this
example differ by a factor of $m$, where $q=2^m$, demonstrating that
the embedding degree can be a far from accurate measure of security.
As a result, we use an indicator $k'=\frac{\ord_N2}{m}$ to examine
the cryptographic security of our family of curves.
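The ratio $\rho$ and the indicator $k'$ defined above are straightforward to compute; a minimal sketch, using toy parameters we made up for illustration (they are not curve parameters from the paper):

```python
# Sketch (toy parameters chosen for illustration; not from the paper):
# computing the ratio rho = g*log2(q) / log2(N) and the indicator
# k' = ord_N(2) / m for a genus-g Jacobian over F_{2^m} with a subgroup
# of prime order N.
from math import log2

def rho(genus, m, N):
    # q = 2^m, so g*log2(q) = g*m
    return genus * m / log2(N)

def k_prime(N, m):
    # multiplicative order of 2 modulo N, divided by m
    order, power = 1, 2 % N
    while power != 1:
        power = power * 2 % N
        order += 1
    return order / m

# Hypothetical toy parameters: genus 2, m = 5 (q = 32), prime N = 1021,
# close to the Jacobian size q^2 = 1024, so rho is near the ideal value 1.
print(rho(2, 5, 1021))   # ~1.0004
print(k_prime(1021, 5))
```

Here a value of $\rho$ near 1 means almost no bandwidth is wasted: the subgroup of order $N$ nearly fills the Jacobian.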
Inductive Trace Properties for Computational Security
Protocol authentication properties are generally trace-based, meaning that authentication holds for the protocol if authentication holds for individual traces (runs of the protocol and adversary). Computational secrecy conditions, on the other hand, often are not trace based: the ability to computationally distinguish a system that transmits a secret from one that does not is measured by overall success on the \textit{set} of all traces of each system. This presents a challenge for inductive or compositional methods: induction is a natural way of reasoning about traces of a system, but it does not appear applicable to non-trace properties. We therefore investigate the semantic connection between trace properties that could be established by induction and non-trace-based security requirements. Specifically, we prove that a certain trace property implies computational secrecy and authentication properties, assuming the encryption scheme provides chosen ciphertext security and ciphertext integrity. We also prove a similar theorem for computational secrecy assuming Decisional Diffie-Hellman and a chosen plaintext secure encryption scheme.
Indifferentiability of Single-Block-Length and Rate-1 Compression Functions
Uncategorized
Uncategorized
The security notion of indifferentiability was proposed by Maurer,
Renner, and Holenstein in 2004. In 2005, Coron, Dodis, Malinaud, and
Puniya discussed the indifferentiability of hash functions. They
showed that the Merkle-Damgård construction is not secure in the
sense of indifferentiability. In this paper, we analyze the security
of single-block-length and rate-1 compression functions in the sense
of indifferentiability. We formally show that all single-block-length
and rate-1 compression functions, which include the Davies-Meyer
compression function, are insecure. Furthermore, we show how to
construct a secure single-block-length and rate-1 compression function
in the sense of indifferentiability. This does not contradict our
result above.
Last updated: 2008-01-15
A New Identity Based Encryption Scheme From Pairing
Uncategorized
Uncategorized
We construct an efficient identity based encryption scheme from pairing. The
basic version of the new scheme is provably secure against chosen plaintext
attack, and the full version of the new scheme is provably secure against
adaptive chosen ciphertext attack. Our scheme is based on a new assumption
(the decision weak bilinear Diffie-Hellman assumption), which is no stronger than
the decision bilinear Diffie-Hellman assumption.
New Constructions for Provably-Secure Time-Bound Hierarchical Key Assignment Schemes
Uncategorized
Uncategorized
A time-bound hierarchical key assignment scheme is a method
to assign time-dependent encryption keys to a set of classes in a
partially ordered hierarchy, in such a way that each class in the
hierarchy can compute the keys of all classes lower down in the
hierarchy, according to temporal constraints.
In this paper we propose new constructions for time-bound
hierarchical key assignment schemes which are provably secure with
respect to key indistinguishability. Our constructions use as a
building block any provably-secure hierarchical key assignment
scheme without temporal constraints and exhibit a tradeoff among
the amount of private information held by each class, the amount
of public data, the complexity of key derivation, and the
computational assumption on which their security is based.
Moreover, the proposed schemes support updates to the access
hierarchy with local changes to the public information and without
requiring any private information to be re-distributed.
Countermeasures for the Simple Branch Prediction Analysis
Branch Prediction Analysis has been proposed as an attack method to obtain key bits from a cryptographic application.
In this report, we put forth several solutions to avoid or prevent this attack.
The reported countermeasures require only minimal hardware support that is commonly available in modern superscalar processors.
A Practical Limit of Security Proof in the Ideal Cipher Model : Possibility of Using the Constant As a Trapdoor In Several Double Block Length Hash Functions
Recently, Shoichi Hirose \cite{Hirose06} proposed several double
block length (DBL) hash functions. Each DBL hash function uses a
constant whose role is to make the DBL hash function
collision-resistant in the ideal cipher model. However, we have to
instantiate the block cipher. In this paper, we show that the constant
may be used as a trapdoor to help an attacker find a collision
easily. In the case of a 256-bit output size, we can find a collision with
complexity $2^{64}$. This exhibits a gap between the security of the
DBL hash function in the ideal cipher model and the security of the
DBL hash function based on a concrete block cipher.
Cryptanalysis of REESSE1+ Public Key Cryptosystem
Uncategorized
Uncategorized
A new public key cryptosystem, called REESSE1+, was proposed.
REESSE1+ consists of two primitive algorithms, a public key
encryption/decryption algorithm and a digital signature algorithm.
We give some analysis of REESSE1+ and show that the system is
totally insecure. We show how to derive the private key from the
public key. At the same time, we also show how to forge signatures
for any messages, given two valid signatures.
Efficient Provably-Secure Hierarchical Key Assignment Schemes
Uncategorized
Uncategorized
A hierarchical key assignment scheme is a method to assign
some private information and encryption keys to a set of classes
in a partially ordered hierarchy, in such a way that the private
information of a higher class can be used to derive the keys of
all classes lower down in the hierarchy.
In this paper we design and analyze hierarchical key assignment
schemes which are provably-secure and support dynamic updates to
the hierarchy with local changes to the public information and
without requiring any private information to be re-distributed.
We first consider the problem of constructing a hierarchical key assignment scheme by using as a building block a symmetric encryption scheme. We propose a new construction which is provably secure with respect to key indistinguishability, requires a single computational assumption, and improves on previous proposals.
Then, we show how to reduce key derivation time at the expense of an
increment of the amount of public information, by improving a
previous result.
Finally, we show how to construct a hierarchical key assignment scheme by using as a building block a public-key broadcast encryption
scheme. In particular, one of our constructions provides constant
private information and public information linear in the number of
classes in the hierarchy.
Near-Collision Attack and Collision-Attack on Double Block Length Compression Functions based on the Block Cipher IDEA
Uncategorized
Uncategorized
IDEA is a block cipher designed by Xuejia Lai and James L. Massey
and was first described in 1991. IDEA does not vary the constants in
its key schedule. In \cite{ChYu06}, Donghoon Chang and Moti Yung
showed that there may be a weakness in hash functions based on block
ciphers whose key schedules do not use varying constants. Based on
their result, we investigate the security of double block length
compression functions based on the block cipher IDEA, where the
key size of IDEA is 128 bits and its block length is 64 bits. We use
the double block length hash functions proposed by Shoichi Hirose at
the second hash workshop in 2006 \cite{Hirose06}. Then, we can
easily find a near-collision by hand. Moreover, for a constant $c$
of the DBL hash functions, we can find a collision by hand. This means
that the constant $c$ may be used as a trapdoor that lets the attacker
find collisions easily.
Dynamic Cryptographic Hash Functions
We present the dynamic cryptographic hash function, a new type of hash function which takes two parameters instead of one. The additional parameter, the security parameter, specifies the internal workings and the size of the digest produced. We provide formal definitions for a dynamic cryptographic hash function and for the traditional security properties, modified for dynamic hash functions. Two additional properties, security parameter collision resistance and digest resistance, are also defined. The additional properties are motivated by scenarios where a dynamic hash function more cleanly provides a solution to a typical cryptographic problem.
Password-Authenticated Multi-Party Key Exchange with Different Passwords
Password-authenticated key exchange (PAKE) allows two or multiple parties to share a session key
using a human-memorable password only. PAKE has been applied in various environments, especially in the "client-server"
model of remotely accessed systems. Designing a secure PAKE scheme has been a challenging task because
of the low entropy of the password space and newly recognized attacks in emerging environments. In this paper, we
study PAKE for multiple parties with different passwords, which allows group users with different passwords to agree
on a common session key with the help of a trusted server, using their passwords only. In this setting, the users do not
share a password between themselves but only with the server. The fundamental security goal of PAKE is security
against dictionary attacks. We present the first two provably secure protocols for this problem in the standard
model under the DDH assumption; our first protocol is designed to provide forward secrecy and to be secure against
known-key attacks. The second protocol is designed to additionally provide key secrecy against curious servers. The
protocols require a constant number of rounds.
New Technique for Solving Sparse Equation Systems
Most of the recent cryptanalysis of symmetric key ciphers has
focused on algebraic attacks. The cipher being attacked is
represented as a non-linear equation system, and various techniques
(Buchberger, F4/F5, XL, XSL) can be tried in order to solve the
system, thus breaking the cipher. The success of these attacks has
been limited so far. In this paper we take a different approach to
the problem of solving non-linear equation systems, and propose a
new method for solving them. Our method differs from the others in
that the equations are not represented as multivariate polynomials,
and that the core of the algorithm for finding the solution can be
seen as message-passing on a graph. Bounds on the complexities for
the main algorithms are presented and they compare favorably with
the known bounds. The methods have also been tested on
reduced-round versions of DES with good results. This paper was
posted on ECRYPT's STVL website on January 16th 2006.
Speeding up the Bilinear Pairings Computation on Curves with Automorphisms
In this paper we present an algorithm for computing the
bilinear pairings on a family of non-supersingular elliptic curves with non-trivial automorphisms. We obtain a short iteration loop in Miller's algorithm using non-trivial efficient automorphisms. The proposed algorithm is as efficient as Scott's algorithm in [12].
Identity-Based Proxy Re-encryption
In a proxy re-encryption scheme a semi-trusted proxy converts a ciphertext for Alice into a ciphertext for Bob {\em without} seeing the underlying plaintext. A number of solutions have
been proposed in the public-key setting. In this paper, we address the problem of Identity-Based proxy re-encryption, where ciphertexts are transformed from one {\it identity} to another. Our schemes are compatible with current IBE deployments and do not require any extra work from the IBE trusted-party key generator. In addition, they are non-interactive and one of them permits multiple re-encryptions. Their security is based on a standard assumption (DBDH) in the random oracle model.
A Framework for Interactive Argument Systems using Quasigroupic Homomorphic Commitment
Using a model based on \textit{probabilistic functions} (\textit{PF}), we introduce the concept of a \textit{perfect zero knowledge} (\textit{PZK}) \textit{commitment scheme} (\textit{CS}) allowing \textit{quasigroupic homomorphic commitment} (\textit{QHC}). Using \textit{QHC} of $+_m$ (modular sum in $\mathbb{Z}_m$), we consider applications to interactive argument systems (\textit{IAS}) for several languages. In four of the examples -- generalized nand ($\Lnandalpha$), string equality ($\left[=_{\left(m,\alpha\right)}\right]$), string inequality ($\left[\neq_{\left(m,\alpha\right)}\right]$) and graph three-colourations ($G3C$) -- complexity improvements are obtained in comparison with other established results. Motivation then arises to define a general framework for \textit{PZK}-\textit{IAS} for membership in a language with committed alphabet (\textit{MLCA}), such that the properties of soundness and \textit{PZK} result from high-level parametrizable aspects. A general simulator is constructed for sequential and (most interestingly) for parallel versions of execution. It therefore becomes easier to conceptualize functionalities of this kind of \textit{IAS} without considering low-level aspects of cryptographic primitives. The constructed framework is able to embrace \AcroCS\; allowing \textit{QHC} of functions that are not themselves quasigroupic. Several theoretical considerations are made, namely recognizing necessary requirements to demand of an eventual \AcroCS\; allowing \textit{QHC} of some complete function in the Boolean sense.
Multiplication and Squaring on Pairing-Friendly Fields
Uncategorized
Uncategorized
Pairing-friendly fields are finite fields that are suitable for the implementation of cryptographic bilinear pairings. In this paper we review multiplication and squaring methods for pairing-friendly fields $\fpk$ with $k \in \{2,3,4,6\}$. For composite $k$, we consider every possible towering construction. We compare the methods to determine which is most efficient based on the number of basic $\fp$ operations, as well as the best constructions for these finite extension fields. We also present experimental results for every method.
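A standard ingredient of such towering constructions is Karatsuba multiplication in a quadratic extension, which trades one base-field multiplication for a few additions. A minimal sketch, assuming $p \equiv 3 \pmod 4$ so that $\F_{p^2}$ can be built as $\F_p[i]/(i^2+1)$ (the small prime below is our own toy choice, not a parameter from the paper):

```python
# Sketch (toy example, not the paper's code): Karatsuba multiplication in
# F_{p^2} = F_p[i]/(i^2 + 1), valid when p = 3 (mod 4) so -1 is a quadratic
# non-residue. Schoolbook needs 4 F_p multiplications; Karatsuba uses 3.
p = 103  # small illustrative prime with p = 3 (mod 4)

def mul_fp2(a, b):
    # a = a0 + a1*i, b = b0 + b1*i, coefficients in F_p
    a0, a1 = a
    b0, b1 = b
    t0 = a0 * b0 % p                 # a0*b0
    t1 = a1 * b1 % p                 # a1*b1
    t2 = (a0 + a1) * (b0 + b1) % p   # (a0+a1)(b0+b1)
    c0 = (t0 - t1) % p               # real part: a0*b0 - a1*b1, since i^2 = -1
    c1 = (t2 - t0 - t1) % p          # imaginary part: a0*b1 + a1*b0
    return (c0, c1)

# (2 + 3i)(5 + 7i) = (10 - 21) + 29i = -11 + 29i, and -11 mod 103 = 92
print(mul_fp2((2, 3), (5, 7)))  # (92, 29)
```

The same three-multiplication trick is applied recursively at each level of the tower for the composite extension degrees considered in the paper.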
On the security of a group key agreement protocol
In this paper we show that the group key agreement protocol proposed by Tseng suffers from a number of serious security vulnerabilities.
An Attack on Disguised Elliptic Curves
Uncategorized
Uncategorized
We present an attack on one of the Hidden Pairing schemes proposed by Dent and Galbraith. We drastically reduce the number of variables necessary to perform a multivariate attack and in some cases we can completely recover the private key. Our attack relies only on knowledge of the public system parameters.
White Box Cryptography: Another Attempt
At CMS 2006 Bringer et al. show how to conceal the algebraic structure of a ``traceable block cipher'' by adding perturbations to its description. We here exploit and strengthen their ideas by further perturbing the representation of a cipher towards a white box implementation. Our technique is quite general, and we apply it -- as a challenging example in the domain of white box cryptography -- to a variant of the block cipher AES.
Do We Need to Vary the Constants? (Methodological Investigation of Block-Cipher Based Hash Functions)
Uncategorized
Uncategorized
The recent collision attacks on the MD hash function family do not depend
on the constants used in the function, but rather on its structure
(i.e., changing the constants will not affect the differential analysis
based attacks). Thus, it seems that the role of constants in maintaining
security and preventing these attacks is unclear at best for this case,
and in particular fixing or varying the constants will not matter
for these analyses.
In this work we present a methodological investigation into the case
of block-cipher based PGV hash functions family, and investigate the
importance of constants in securing these designs.
To this end we consider the
twelve variants of the PGV family that yield secure
hash in the generic ideal cipher case (as was shown by
Black, Rogaway and Shrimpton), but consider them under concrete
instantiation.
To investigate the role of constants in the key derivation procedure, we
simply ignore the constants. In this more uniform setting we further
consider a very regular cipher, namely AES modified to have MixColumns
also in the last round (which should still be a strong cipher).
Analyzing this modified-AES based hashing, we show that with about 16\%
probability we can find collisions with complexity $2^{49}$ (much
smaller than the birthday attack complexity $2^{64}$).
While we do not claim to break the AES based version, this
nevertheless shows that constants in a block cipher have an important
role in resisting collision attacks (in particular, there is a need to
vary the constants). It also shows that (in the symmetric modified
version) the concrete AES structure alone does not guarantee the security of
an AES-based hash function (shown secure under the ideal cipher
model). This is undesirable and
non-robust, because it means that even though a
block cipher has complicated structures in its round function and its key
scheduling algorithm, we cannot be confident about the security
of hash functions based solely on it (note that there are several
block ciphers, such as IDEA and 3-key triple DES, which do not use any
constants).
Given the above methodological findings, we suggest new AES-based hash
function constructions (essentially modified PGV) which can be
generalized to any block cipher. The functions inherit the security
under the ideal cipher model on the one hand, while, on the other
hand, concretely assure in their structure that the weakness exhibited
herein is dealt with.
Prime Order Primitive Subgroups in Torus-Based Cryptography
Uncategorized
Uncategorized
We use the Bateman-Horn conjecture to study the order of the set of $\mathbb{F}_q$-rational points of primitive subgroups that arise in torus-based cryptography. We provide computational evidence to support the heuristics and make some suggestions regarding parameter selection for torus-based cryptography.
Security and Composition of Cryptographic Protocols: A Tutorial
What does it mean for a cryptographic protocol to be "secure"?
Capturing the security requirements of cryptographic tasks in
a meaningful way is a slippery business: On the one hand, we want security criteria that prevent "all feasible attacks" against a protocol. On the other hand, we want our criteria to not be overly restrictive; that is, we want them to accept those protocols that do not succumb to "feasible attacks".
This tutorial studies a general methodology for defining security of
cryptographic protocols. The methodology, often dubbed the "trusted party paradigm", allows for defining the security requirements of practically any cryptographic task in a unified and natural way.
We first review a basic formulation that captures security in isolation from other protocol instances. Next we address the secure composition problem, namely the vulnerabilities resulting from the often unexpected interactions among different protocol instances that run alongside each other in the same system. We demonstrate the limitations of the basic formulation and review a formulation that guarantees security of protocols even in general composite systems.
Remarks on "Analysis of One Popular Group Signature Scheme'' in Asiacrypt 2006
Uncategorized
Uncategorized
In \cite{Cao}, a putative framing ``attack'' against the ACJT group signature scheme \cite{ACJT00} is presented. This note shows that the attack framework considered in \cite{Cao} is \emph{invalid}. As we clearly illustrate, there is \textbf{no security weakness} in the ACJT group signature scheme as long as all the detailed specifications in \cite{ACJT00} are being followed.
Obfuscation for Cryptographic Purposes
An obfuscation O of a function F should satisfy two requirements: firstly, using O it should be possible to evaluate F; secondly, O should not reveal anything about F that cannot be learnt from oracle access to F. Several definitions for obfuscation exist. However, most of them are either too weak for or incompatible with cryptographic applications, or have been shown impossible to achieve, or both.
We give a new definition of obfuscation and argue for its reasonability and usefulness. In particular, we show that it is strong enough for cryptographic applications, yet it has the potential for interesting positive results. We illustrate this with the following two results:
- If the encryption algorithm of a secure secret-key encryption scheme can be obfuscated according to our definition, then the result is a secure public-key encryption scheme.
- A uniformly random point function can be easily obfuscated according to our definition, by simply applying a one-way permutation. Previous obfuscators for point functions, under varying notions of security, are either probabilistic or in the random oracle model (but work for arbitrary distributions on the point function).
On the negative side, we show that
- Following Hada and Wee, any family of deterministic functions that can be obfuscated according to our definition must already be ``approximately learnable.'' Thus, many deterministic functions cannot be obfuscated. However, a probabilistic functionality such as a probabilistic secret-key encryption scheme can potentially be obfuscated. In particular, this is possible for a public-key encryption scheme when viewed as a secret-key scheme.
- There exists a secure probabilistic secret-key encryption scheme that cannot be obfuscated according to our definition. Thus, we cannot hope for a general-purpose cryptographic obfuscator for encryption schemes.
Improved Collision and Preimage Resistance Bounds on PGV Schemes
Uncategorized
Uncategorized
Preneel, Govaerts, and Vandewalle [14] (PGV) considered the 64 most basic ways to construct a hash function from a block cipher, and regarded 12 of those 64 schemes as secure. Black, Rogaway and Shrimpton [3] (BRS) provided a formal and quantitative treatment of those 64 constructions and proved that, in the black-box model, the 12 schemes (group-1) that PGV singled out as secure really are secure. By stepping outside of the Merkle-Damgård [4] approach to analysis, an additional 8 (group-2) of the 64 schemes are just as collision resistant as the first group of schemes. Tight upper and lower bounds on the collision resistance of those 20 schemes were given. In this paper, those collision resistance and preimage resistance bounds are improved, which shows that, in the black-box model, the collision bounds of those 20 schemes are the same. Among the group-1 schemes, fixed points can easily be found for 8 out of 12. Bounds on second preimages, the multicollisions of Joux [6], fixed-point multicollisions [8], and the combination of the two kinds of multicollisions are also given. From those bounds, the group-1 schemes can also be divided into two groups.
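The fixed-point observation can be seen most directly on the Davies-Meyer mode: since the compression is $h' = E_k(h) \oplus h$, the value $h^* = E_k^{-1}(0)$ is a fixed point for every key. A minimal sketch with an invented toy "cipher" (not a real block cipher, purely for illustration):

```python
# Sketch (toy keyed permutation invented for illustration; not a real cipher):
# a Davies-Meyer fixed point. With compression H' = E_K(H) XOR H, the value
# H* = D_K(0) satisfies E_K(H*) = 0, hence H' = 0 XOR H* = H* for every K.
MASK = 0xFF  # 8-bit toy block size

def encrypt(key, block):
    # invertible toy "cipher": add the key mod 256, then rotate left by 3
    v = (block + key) & MASK
    return ((v << 3) | (v >> 5)) & MASK

def decrypt(key, block):
    v = ((block >> 3) | (block << 5)) & MASK  # rotate right by 3
    return (v - key) & MASK

def davies_meyer(key, h):
    return encrypt(key, h) ^ h

for key in (0x11, 0x5A, 0xC3):
    fixed = decrypt(key, 0)                   # H* = D_K(0)
    assert davies_meyer(key, fixed) == fixed  # H* is a fixed point
print("fixed point holds for every tested key")
```

Fixed points are cheap to compute from public information alone, which is exactly what makes the fixed-point multicollision bounds of [8] applicable to these 8 schemes.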
On Post-Modern Cryptography
This essay relates to a recent article of Koblitz & Menezes
(Cryptology ePrint Report 2004/152)
that ``criticizes several typical `provable security' results''
and argues that the ``theorem-proof paradigm of theoretical
mathematics is often of limited relevance'' to cryptography.
Although it feels ridiculous to answer such a claim,
we undertake to do so in this essay.
In particular, we point out some of the fundamental philosophical flaws
that underlie the said article and some of its misconceptions regarding
theoretical research in Cryptography in the last quarter of a century.
Preimage Attacks On Provably Secure FFT Hashing proposed at Second Hash Workshop in 2006
`Provably Secure FFT Hashing' (which we call FFT-Hash in this paper) was
suggested by Lyubashevsky et al. at the Second Hash Workshop in Aug.
2006. This paper shows preimage attacks on hash functions based on
three modes of FFT-Hash. In the case of `Nano', whose output size is 513
bits, we can find a preimage with complexity $2^{385}$. In the case of
`Mini', whose output size is 1025 bits, we can find a preimage with
complexity $2^{769}$. In the case of `Mini', whose output size is 28672
bits, we can find a preimage with complexity $2^{24576}$. This means
that the structure of FFT-Hash is weak from the viewpoint of
preimage resistance. We recommend that FFT-Hash not be used
when the output size is less than 256 bits, because full security
against preimage attacks is crucial at such a short output size.
We also should not chop the hash output in order to get a short
hash output like SHA-224 and SHA-384, because, for example, we can
find a preimage with complexity $2^{128}$ (not $2^{256}$) in the case of
`Nano' with 257 bits chopped, whose hash output is 256 bits.
Recursive lower bounds on the nonlinearity profile of Boolean functions and their applications
The nonlinearity profile of a Boolean function (i.e., the sequence of its minimum Hamming distances $nl_r(f)$ to all functions of degree at most $r$, for $r\geq 1$) is a cryptographic criterion whose role against attacks on stream and block ciphers has been illustrated by many papers. It also plays a role in coding theory, since it is related to the covering radii of Reed-Muller codes. We introduce a method for lower bounding its values and we deduce bounds on the second-order nonlinearity for several classes of cryptographic Boolean functions, including the Welch and the multiplicative inverse functions (used in the S-boxes of the AES). In the case of this last infinite class of functions, we are able to bound the whole profile, and we do so in an efficient way when the number of variables is not too small. This shows the good behavior of this function with respect to this criterion as well.
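The $r=1$ entry of the profile, the ordinary nonlinearity, can be computed exactly for small functions via the Walsh-Hadamard transform; a minimal sketch (our own toy example, not code from the paper):

```python
# Sketch (toy example, not from the paper): first-order nonlinearity
# nl_1(f) of a Boolean function, computed via the Walsh-Hadamard
# transform as nl_1(f) = 2^(n-1) - max_a |W_f(a)| / 2.
import itertools

def walsh_hadamard(truth_table):
    # truth_table: list of 0/1 values of length 2^n; returns the Walsh spectrum
    w = [(-1) ** bit for bit in truth_table]
    h = 1
    while h < len(w):
        for i in range(0, len(w), h * 2):
            for j in range(i, i + h):
                x, y = w[j], w[j + h]
                w[j], w[j + h] = x + y, x - y
        h *= 2
    return w

def nonlinearity(truth_table):
    n = len(truth_table).bit_length() - 1
    return 2 ** (n - 1) - max(abs(v) for v in walsh_hadamard(truth_table)) // 2

# f(x1, x2, x3) = x1*x2 XOR x3, a quadratic function on 3 variables
tt = [(x1 & x2) ^ x3 for x1, x2, x3 in itertools.product([0, 1], repeat=3)]
print(nonlinearity(tt))  # 2
```

For $r \geq 2$ no such direct transform exists, which is why lower-bounding techniques like those of this paper are needed for the rest of the profile.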
Copyrighting Public-key Functions and Applications to Black-box Traitor Tracing
Copyrighting a function is the process of embedding hard-to-remove
marks in the function's implementation while retaining its original
functionality. Here we consider the above problem in the context of
public-key encryption and we parallel the process of copyrighting a function to the process of designing traitor tracing schemes.
We derive two copyrighted public-key encryption functions for the
$2$-key setting, solving an open question left by earlier work
with respect to copyrighting discrete-logarithm based functions. We then follow a modular design approach and show how to
elevate the $2$-key case to the multi-user setting, employing
collusion secure codes. Our methodology provides a general framework for constructing
public-key traitor tracing schemes that has the interesting property
that the transmission rate remains constant if the plaintext size can be
calibrated to reach an appropriate minimal length.
Achieving a constant rate, i.e., constant expansion in the
size of ciphertexts and keys, is an important open problem
in the area of traitor tracing schemes. Our design shows
how one can solve it for settings that accommodate the
required plaintext calibration (e.g., when a bulk of
symmetric cipher keys can be encrypted in one message).
Our constructions support ``black-box traitor
tracing'', the setting where the tracer only accesses
the decryption box in input/output queries/responses. For the first time here we provide a modeling of black-box traitor tracing that takes into account adversarially chosen plaintext distributions, a security notion we call {\em semantic black-box traceability}. In order to facilitate the design of schemes with semantic black-box traceability we introduce as part of our modular design approach a simpler notion called semantic user separability and we show that this notion implies semantic black-box traceability. In the multi-user setting our constructions also demonstrate how one can derive public-key traitor tracing by reducing the required ``marking assumption'' of collusion-secure codes to cryptographic hardness assumptions.
Linear Approximating to Integer Addition
Integer addition is often used in ciphers as a cryptographic primitive. In this paper we present some results on the linear approximation of integer addition.
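One basic instance of such a result is how well XOR approximates addition bit by bit: bit $i$ of $x+y$ equals bit $i$ of $x \oplus y$ exactly when the carry into bit $i$ is zero, so the approximation degrades as carries accumulate. A minimal sketch (our own illustration, not the paper's method):

```python
# Sketch (illustrative, not from the paper): measuring how well XOR
# approximates n-bit integer addition at each bit position. Since
# (x+y)_i = x_i XOR y_i XOR c_i, the bits agree exactly when carry c_i = 0.
from itertools import product

def xor_match_probability(bit, n_bits=4):
    # fraction of (x, y) pairs for which bit `bit` of x+y equals that of x^y
    mask = 1 << bit
    total = match = 0
    for x, y in product(range(1 << n_bits), repeat=2):
        total += 1
        if ((x + y) ^ (x ^ y)) & mask == 0:  # (x+y) XOR (x^y) is the carry vector
            match += 1
    return match / total

for i in range(4):
    print(i, xor_match_probability(i))  # 1.0, 0.75, 0.625, 0.5625
```

The exact match probabilities follow the carry recurrence $P(c_{i+1}=1) = \tfrac14 + \tfrac12 P(c_i=1)$, approaching $\tfrac12$ (zero bias) for high bits.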
Indistinguishability Amplification
A random system is the abstraction of the input-output behavior of
any kind of discrete system, in particular cryptographic systems. Many
aspects of cryptographic security analyses and proofs can be seen as
the proof that a certain random system (e.g. a block cipher) is
indistinguishable from an ideal system (e.g. a random permutation),
for different types of distinguishers.
This paper presents a new generic approach to proving upper
bounds on the distinguishing advantage of a combined system,
assuming upper bounds of various types on the component systems. For
a general type of combination operation of systems (including the
combination of functions or the cascade of permutations), we prove
two amplification theorems.
The first is a direct-product theorem, similar in spirit to the
XOR-Lemma: The distinguishing advantage (or security) of the
combination of two (possibly stateful) systems is twice the product
of the individual distinguishing advantages, which is optimal.
The second theorem states that the combination of systems is
secure against some strong class of distinguishers, assuming
only that the components are secure against some weaker
class of attacks. As a corollary we obtain tight bounds on the
adaptive security of the cascade and parallel composition of
non-adaptively (or only random-query) secure component systems.
A key technical tool of the paper is
to show a tight two-way correspondence, previously only
known to hold in one direction, between the distinguishing
advantage of two systems and the probability of provoking an
appropriately defined event on one of the systems.
On Achieving the ''Best of Both Worlds'' in Secure Multiparty Computation
Two settings are typically considered for secure multiparty computation, depending on whether or not a majority of the parties are assumed to be honest. Protocols designed under the honest-majority assumption provide full security (and, in particular, guarantee output delivery and fairness) when this assumption is correct; however, if half or more of the parties are dishonest then security is completely compromised. On the other hand, protocols tolerating arbitrarily many faults do not provide fairness or guaranteed output delivery even if only a single party is dishonest. It is natural to wonder whether it is possible to achieve the ''best of both worlds''; namely, a single protocol that simultaneously achieves the best possible security in both of the above settings. Ishai et al. (Crypto 2006) recently addressed this question, and ruled out constant-round protocols of this type.
As our main result, we completely settle the question by ruling out
protocols using any (expected) polynomial number of rounds. Given this stark negative result, we ask what can be achieved if we are willing to assume simultaneous message transmission (or, equivalently, a non-rushing adversary). In this setting, we show that impossibility still holds for logarithmic-round protocols. We also show, for any polynomial $p$, a protocol (whose round complexity depends on $p$) that can be simulated to within closeness $O(1/p)$.
How to Win the Clone Wars: \\ Efficient Periodic n-Times Anonymous Authentication
We create a credential
system that lets a user anonymously authenticate at most $n$ times in
a single time period. A user withdraws a dispenser of $n$ e-tokens.
She shows an e-token to a verifier to authenticate herself; each
e-token can be used only once, however, the dispenser automatically
refreshes every time period.
The only prior solution to this problem,
due to Damgård et al.~[DDP05], uses protocols that are a factor of $k$ slower for the user and verifier, where $k$ is the security parameter.
Damgård et al. also only support one authentication per time
period, while we support $n$. Because our construction is based on
e-cash, we can use existing techniques to identify a cheating user,
trace all of her e-tokens, and revoke her dispensers. We also offer a
new anonymity service: glitch protection for basically honest users
who (occasionally) reuse e-tokens. The verifier can always recognize
a reused e-token; however, we preserve the anonymity of users who do
not reuse e-tokens too often.
Key Replacement Attack on a Certificateless Signature Scheme
Yap, Heng and Goi propose an efficient certificateless signature
scheme based on the intractability of the computational
Diffie-Hellman problem, and prove that the scheme is secure in the
random oracle model. This paper shows that their certificateless
signature scheme is vulnerable to key replacement attacks, where
an adversary who replaces the public key of a signer can forge
valid signatures on any messages for that signer without knowing
the signer's private key.
Hybrid Protocol For Password-based Key Exchange in Three-party Setting
Modular design is a common approach for dealing with complex tasks in modern cryptology. The critical point of this approach is designing a secure hybrid protocol. In this paper, we study password-based key exchange in the three-party setting within the UC framework and design a hybrid protocol that UC-securely realizes such a task. That is, we first define an appropriate ideal functionality F3-pwKE for password-based three-party key exchange. Next we partition the task into two sub-tasks, three-party key distribution and password-based two-party key exchange, and propose two corresponding ideal functionalities, F3-KD and FpwKE. Finally, we present a (F3-KD, FpwKE)-hybrid protocol for password-based three-party key exchange that is proved to be UC-secure with respect to non-adaptive party corruption.
Combined Differential, Linear and Related-Key Attacks on Block Ciphers and MAC Algorithms
Differential and linear attacks are the most widely used
cryptanalytic tools to evaluate the security of symmetric-key
cryptography. Since the introduction of differential and linear
attacks in the early 1990's, various variants of these attacks have
been proposed such as the truncated differential attack, the
impossible differential attack, the square attack, the boomerang
attack, the rectangle attack, the differential-linear attack, the
multiple linear attack, the nonlinear attack and the bilinear
attack. One of the other widely used cryptanalytic tools is the
related-key attack. Unlike the differential and linear attacks, this
attack is based on the assumption that the cryptanalyst can obtain
plaintext and ciphertext pairs by using different, but related keys.
This thesis provides several new combined differential, linear and
related-key attacks, and shows their applications to block ciphers,
hash functions in encryption mode and message authentication code
(MAC) algorithms. The first part of this thesis introduces how to
combine the differential-style, linear-style and related-key
attacks: we combine them to devise the
differential-nonlinear attack, the square-(non)linear
attack, the related-key differential-(non)linear attack, the
related-key boomerang attack and the related-key
rectangle attack. The second part of this thesis presents some
applications of the combined attacks to existing symmetric-key
cryptography. Firstly, we present their applications to the block
ciphers SHACAL-1, SHACAL-2 and AES. In particular, we show that the
differential-nonlinear attack is applicable to 32-round SHACAL-2,
which leads to the best known attack on SHACAL-2 that uses a single
key. We also show that the related-key rectangle attack is
applicable to the full SHACAL-1, 42-round SHACAL-2 and 10-round
AES-192, which lead to the first known attack on the full SHACAL-1
and the best known attacks on SHACAL-2 and AES-192 that use related
keys. Secondly, we exploit the related-key boomerang attack to
present practical distinguishing attacks on the cryptographic hash
functions MD4, MD5 and HAVAL in encryption mode. Thirdly, we show
that the related-key rectangle attack can be used to distinguish
instantiated HMAC and NMAC from HMAC and NMAC with a random
function.
Secure Cryptographic Workflow in the Standard Model
Following the work of Al-Riyami et al. we define the notion of key encapsulation mechanism supporting cryptographic workflow (WF-KEM) and prove a KEM-DEM composition theorem which extends the notion of hybrid encryption to cryptographic workflow. We then generically construct a WF-KEM from an identity-based encryption (IBE) scheme and a secret sharing scheme. Chosen ciphertext security is achieved using one-time signatures. Adding a public-key encryption scheme we are able to modify the construction to obtain escrow-freeness. We prove all our constructions secure in the standard model.
Robust Computational Secret Sharing and a Unified Account of Classical Secret-Sharing Goals
We give a unified account of classical secret-sharing goals from a modern cryptographic vantage. Our treatment encompasses perfect, statistical, and computational secret sharing; static and dynamic adversaries; schemes with or without robustness; schemes where a participant recovers the secret and those where an external party does so. We then show that Krawczyk's 1993 protocol for robust computational secret sharing (RCSS) need not be secure, even in the random-oracle model and for threshold schemes, if the encryption primitive it uses satisfies only one-query indistinguishability (ind1), the only notion Krawczyk defines. Nonetheless, we show that the protocol is secure (in the random-oracle model, for threshold schemes) if the encryption scheme also satisfies one-query key-unrecoverability (key1). Since practical encryption schemes are ind1+key1 secure, our result effectively shows that Krawczyk's RCSS protocol is sound (in the random-oracle model, for threshold schemes). Finally, we prove the security for a variant of Krawczyk's protocol, in the standard model and for arbitrary access structures, assuming ind1 encryption and a statistically-hiding, weakly-binding commitment scheme.
Universally Composable and Forward Secure RFID Authentication and Key Exchange
Protocols proven secure in universally composable models remain secure under concurrent and modular composition,
and may be easily plugged into more complex protocols without having their security re-assessed with each new use.
Recently, a universally composable framework has been proposed for Radio-Frequency Identification (RFID) authentication protocols that simultaneously provides for availability, anonymity, and authenticity. In this paper we extend that framework to support
key-compromise and forward-security issues.
We also introduce new, provably secure, and highly practical protocols for anonymous authentication and key-exchange by RFID devices.
The new protocols are lightweight, requiring only a pseudo-random bit generator. The new protocols satisfy forward-secure anonymity,
authenticity, and availability requirements in the Universal Composability model. The proof exploits pseudo-randomness in the standard model.
Towards a Separation of Semantic and CCA Security for Public Key Encryption
We address the question of whether or not semantically secure public-key encryption primitives imply the existence of chosen ciphertext attack (CCA) secure primitives. We show a black-box separation, using the methodology introduced by Impagliazzo and Rudich, for a large non-trivial class of constructions. In particular, we show that if the proposed CCA construction's decryption algorithm does not query the semantically secure primitive's encryption algorithm, then the proposed construction cannot be CCA secure.
New Identity-Based Authenticated Key Agreement Protocols from Pairings (without Random Oracles)
We present the first provably secure ID-based key agreement protocol, inspired by the ID-based encryption scheme of Gentry, in the standard (non-random-oracle) model. We show how this key agreement can be used in either escrowed or escrowless mode. We also give a protocol which enables users of separate private key generators to agree on a shared secret key. All our proposed protocols have comparable performance to all known protocols that are proven secure in the random oracle model.
A class of quadratic APN binomials inequivalent to power functions
We exhibit an infinite class of almost perfect nonlinear quadratic binomials from $\mathbb{F}_{2^n}$ to $\mathbb{F}_{2^n}$ ($n\geq 12$, $n$ divisible by 3 but not by 9). We prove that these functions are EA-inequivalent to any power function and that they are CCZ-inequivalent to any Gold function and to any Kasami function. It means that for $n$ even they are CCZ-inequivalent to any known APN function, and in particular for $n=12,24$, they are therefore CCZ-inequivalent to any power function.
It is also proven that, except in particular cases, the Gold mappings are CCZ-inequivalent to the Kasami and Welch functions.
Lattices that Admit Logarithmic Worst-Case to Average-Case Connection Factors
We demonstrate an \emph{average-case} problem which is as hard as
finding $\gamma(n)$-approximate shortest vectors in certain
$n$-dimensional lattices in the \emph{worst case}, where $\gamma(n)
= O(\sqrt{\log n})$. The previously best known factor for any class
of lattices was $\gamma(n) = \tilde{O}(n)$.
To obtain our results, we focus on families of lattices having
special algebraic structure. Specifically, we consider lattices
that correspond to \emph{ideals} in the ring of integers of an
algebraic number field. The worst-case assumption we rely on is
that in some $\ell_p$ length, it is hard to find approximate
shortest vectors in these lattices, under an appropriate form of
preprocessing of the number field. Our results build upon prior
works by Micciancio (FOCS 2002), Peikert and Rosen (TCC 2006), and
Lyubashevsky and Micciancio (ICALP 2006).
For the connection factors $\gamma(n)$ we achieve, the corresponding
\emph{decisional} promise problems on ideal lattices are \emph{not}
known to be NP-hard; in fact, they are in P. However, the
\emph{search} approximation problems still appear to be very hard.
Indeed, ideal lattices are well-studied objects in computational
number theory, and the best known algorithms for them seem to
perform \emph{no better} than the best known algorithms for general
lattices.
To obtain the best possible connection factor, we instantiate our
constructions with infinite families of number fields having
constant \emph{root discriminant}. Such families are known to exist
and are computable, though no efficient construction is yet known.
Our work motivates the search for such constructions. Even
constructions of number fields having root discriminant up to
$O(n^{2/3-\epsilon})$ would yield connection factors better than the
current best of~$\tilde{O}(n)$.
Scalable Authenticated Tree Based Group Key Exchange for Ad-Hoc Groups
Task-specific groups are often formed in an ad-hoc manner within big structures, like companies. Take the following typical scenario: A high rank manager decides that a task force group for some project needs to be built. This order is passed down the hierarchy where it finally reaches a manager who calls some employees to form a group. The members should communicate in a secure way and for efficiency reasons symmetric systems are the common choice. To establish joint secret keys for groups, group key exchange (GKE) protocols were developed. If the users are part of e.g. a Public Key Infrastructure (PKI), which is usually the case within a company or a small network, it is possible to achieve authenticated GKE by modifying the protocol and particularly by including signatures.
In this paper we recall a GKE due to Burmester and Desmedt which needs only $O(\log n)$ communication and computation complexity per user, rather than $O(n)$ as in the more well-known Burmester-Desmedt protocol, and runs in a constant number of rounds. To achieve authenticated GKE one can apply compilers, however, the existing ones would need $O(n)$ computation and communication thereby mitigating the advantages of the faster protocol. Our contribution is to extend an existing compiler so that it preserves the computation and communication complexity of the non-authenticated protocol. This is particularly important for tree based protocols.
An attack on the certificateless signature scheme from EUC Workshops 2006
In this paper, we will show that the certificateless signature scheme recently proposed by Yap, Heng and Goi at EUC Workshops 2006 is insecure against a key replacement attack. Our attack shows that anyone who replaces a signer's public key can forge valid signatures for that signer without knowledge of the signer's private key.
General Distinguishing Attacks on NMAC and HMAC with Birthday Attack Complexity
Kim {\em et al}. \cite{KiBiPrHo06} and Contini {\em et al}.
\cite{CoYi06} studied the security of HMAC and NMAC based on
HAVAL, MD4, MD5, SHA-0 and SHA-1. In particular, they considered
distinguishing attacks. However, they did not describe generic
distinguishing attacks on NMAC and HMAC. In this paper, we describe
generic distinguishers that distinguish NMAC and HMAC with
birthday attack complexity, and we prove the security bound when the
underlying compression function is a random oracle.
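For reference, the birthday attack complexity invoked here is the standard bound (a well-known generic fact, not a result of this paper):

```latex
% Generic birthday bound: probability of a collision among q outputs
% of an n-bit function, so roughly 2^{n/2} queries suffice.
\Pr[\text{collision}] \ \approx\ 1 - e^{-q(q-1)/2^{n+1}},
\qquad q \approx 2^{n/2}.
```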
A New Type of Group Signature Scheme
Building on the BB short signature scheme, this paper derives a new signature scheme, from which we construct a new kind of group signature scheme. The security of the new group signature scheme is based on the q-Strong Diffie-Hellman assumption and the Decisional Diffie-Hellman assumption in the random oracle model. The length of the new group signature is a little longer than that of the BBS short group signature. However, in the new group signature scheme, issuing certificates and private keys to group members does not require any trusted third party, while the BBS short group signature scheme does.
A New Type of Group Blind Signature Scheme Based on Bilinear Pairings
This paper constructs a new group blind signature scheme on the basis of the CZK group signature scheme. The security of the new scheme rests on the computational Diffie-Hellman problem in the random oracle model. In our scheme, blinding the CZK group signature adds only the computation of modular addition, while the scheme in LR98 adds the computation of double discrete logarithms, roots of discrete logarithms and random permutations in order to blind the group signature of CS97. As far as computational complexity is concerned, our scheme is more efficient than LR98.
On the pseudo-random generator ISAAC
This paper presents some properties of the deterministic random bit
generator ISAAC (FSE'96), contradicting several statements of its
introducing article. In particular, it characterizes huge subsets of
internal states which induce a strongly non-uniform distribution in
the first $8\,192$ bits produced. A previous attack on ISAAC presented
at Asiacrypt'06 by Paul and Preneel is demonstrated to be not
relevant, since it relies on an erroneous algorithm. Finally, a
modification of the algorithm is proposed to fix the weaknesses
discovered.
On Zigzag Functions and Related Objects in New Metric
In \cite{BCS96}, the concept of zigzag function was introduced in
relation with oblivious transfer \cite{R84}. This subject has
later been studied in \cite{S99,DS01,CFW01}. The definition
of zigzag functions has been generalized to $s$-zigzag functions for
$2\leq s\leq n$. It turns out that zigzag functions are also interesting
combinatorial objects, thanks to their relation with self-intersecting codes and
orthogonal arrays \cite{BCS96,S99}.
The aim of this work is to formulate these objects with respect to a new metric
following the approach proposed in \cite{BNNP} and to investigate the properties
of the generalized zigzag functions and related concepts.
Statistically-Hiding Commitment from Any One-Way Function
We give a construction of statistically-hiding commitment schemes (ones where the hiding property holds information theoretically), based on the minimal cryptographic assumption that one-way functions exist. Our construction employs two-phase commitment schemes, recently constructed by Nguyen, Ong and Vadhan (FOCS `06), and universal one-way hash functions introduced and constructed by Naor and Yung (STOC `89) and Rompel (STOC `90).
Searching for Shapes in Cryptographic Protocols (extended version)
We describe a method for enumerating all essentially
different executions possible for a cryptographic protocol.
We call them the shapes of the protocol. Naturally
occurring protocols have only finitely many, indeed very few
shapes. Authentication and secrecy properties are easy to
determine from them, as are attacks and anomalies. CPSA,
our Cryptographic Protocol Shape Analyzer, implements the
method.
In searching for shapes, CPSA starts with some initial
behavior, and discovers what shapes are compatible with it.
Normally, the initial behavior is the point of view of one
participant. The analysis reveals what the other principals
must have done, given this participant's view.
The search is complete, i.e. every shape can in fact be
found in a finite number of steps. The steps in question
are applications of two authentication tests, fundamental
patterns for protocol analysis and heuristics for protocol
design. We have formulated the authentication tests in a
new, stronger form, and proved completeness for a search
algorithm based on them.
Balanced Boolean Functions with (more than) Maximum Algebraic Immunity
In this correspondence, construction of balanced Boolean functions with maximum possible algebraic (annihilator) immunity (AI) is studied with an additional property which is necessary to resist fast algebraic attack. The additional property considered here is, given an $n$-variable ($n$ even) balanced function $f$ with maximum possible AI $\frac{n}{2}$, and given two $n$-variable Boolean functions $g, h$ such that $fg = h$, if $\deg(h) = \frac{n}{2}$, then $\deg(g)$ must be greater than or equal to $\frac{n}{2}$. Our results can also be used to present theoretical construction of resilient Boolean functions having maximum possible AI.
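The notion of algebraic immunity used above can be checked by brute force for small $n$. The following is a minimal sketch of our own, assuming the standard definition of AI as the minimum degree of a nonzero annihilator of $f$ or of $f \oplus 1$; a nonzero annihilator of degree at most $d$ exists iff the evaluation matrix of the degree-$\le d$ monomials on the support has a nonzero nullspace over GF(2):

```python
from itertools import combinations, product

def monomials(n, d):
    """All monomials over x_0..x_{n-1} of degree <= d, as index tuples."""
    return [m for k in range(d + 1) for m in combinations(range(n), k)]

def gf2_rank(rows):
    """Rank over GF(2) of rows given as bitmask integers."""
    rank, basis = 0, []
    for r in rows:
        for b in basis:
            r = min(r, r ^ b)
        if r:
            basis.append(r)
            rank += 1
    return rank

def has_annihilator(support, n, d):
    """Is there a nonzero g with deg(g) <= d vanishing on all of support?

    Such g exists iff the evaluation matrix (rows = points, columns =
    monomials of degree <= d) has rank < number of monomials."""
    mons = monomials(n, d)
    rows = []
    for x in support:
        bits = 0
        for j, m in enumerate(mons):
            if all(x[i] for i in m):
                bits |= 1 << j
        rows.append(bits)
    return gf2_rank(rows) < len(mons)

def algebraic_immunity(f, n):
    """Minimum degree of a nonzero annihilator of f or of f + 1."""
    pts = list(product((0, 1), repeat=n))
    supp = [x for x in pts if f(x)]
    co_supp = [x for x in pts if not f(x)]
    for d in range(n + 1):
        if has_annihilator(supp, n, d) or has_annihilator(co_supp, n, d):
            return d

maj3 = lambda x: int(sum(x) >= 2)   # majority on 3 variables, balanced
print(algebraic_immunity(maj3, 3))  # 2, the maximum ceil(n/2) for n = 3
```

The majority function is a convenient sanity check, since it is balanced and known to attain the maximum AI $\lceil n/2 \rceil$.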
Information Theoretic Bounds on Authentication Systems in Query Model
Authentication codes provide message integrity guarantees
in an information theoretic sense
within a symmetric key setting. Information theoretic
bounds on the success probability of an adversary who has access to
previously authenticated messages have been derived by Simmons and
Rosenbaum, among others. In this paper we consider a strong attack
scenario where the adversary is adaptive and has access to
authentication and verification oracles. We derive information
theoretic bounds on the success probability of the adversary and on
the key size of the code. This brings the study of unconditionally
secure authentication systems on a par with the study of
computationally secure ones. We characterize the codes that meet
these bounds and compare our result with the earlier ones.
Universally Composable Security with Global Setup
Cryptographic protocols are often designed and analyzed under some trusted setup assumptions, namely in settings where the participants have access to global information that is trusted to have some basic security properties. However, current modeling of security in the presence of such setup falls short of providing the expected security guarantees. A quintessential example of this phenomenon is the deniability concern: there exist natural protocols that meet the strongest known composable security notions, and are still vulnerable to bad interactions with rogue protocols that use the same setup.
We extend the notion of universally composable (UC) security in a way that re-establishes its original intuitive guarantee even for protocols that use globally available setup. The new formulation prevents bad interactions even with adaptively chosen protocols that use the same setup. In particular, it guarantees deniability. While for protocols that use no setup the proposed requirements are the same as in traditional UC security, for protocols that use global setup the proposed requirements are significantly stronger. In fact, realizing Zero Knowledge or commitment becomes provably impossible, even in the Common Reference String model. Still, we propose reasonable alternative setup assumptions and protocols that allow realizing practically any cryptographic task under standard hardness assumptions even against adaptive corruptions.
Some Efficient Algorithms for the Final Exponentiation of $\eta_T$ Pairing
Recently, the Tate pairing and its variants have attracted much attention in cryptography. Their computation consists of a main iteration loop and a final exponentiation. The final exponentiation is necessary for generating a unique value of the bilinear pairing in the extension field. The speed of the main loop has improved considerably through recent work, e.g., the Duursma-Lee algorithm and the $\eta_T$ pairing. In this paper we discuss how to enhance the speed of the final exponentiation of the $\eta_T$ pairing in the extension field ${\mathbb F}_{3^{6n}}$. Indeed, we propose some efficient algorithms using the torus $T_2({\mathbb F}_{3^{3n}})$ that can efficiently compute an inversion and a powering by $3^{n}+1$. Consequently, the total processing cost of computing the $\eta_T$ pairing can be reduced by 17% for n=97.
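The algebraic trick behind torus-based speedups can be shown in miniature (a toy analogue of ours, in ${\mathbb F}_{p^2}$ rather than the paper's ${\mathbb F}_{3^{6n}}$): for a norm-1 element of a quadratic extension, the inverse is simply the conjugate, so inversion in the torus costs only a negation.

```python
# Toy analogue (ours): inversion in the norm-1 torus of a quadratic
# extension is just conjugation.  The paper exploits the same idea in
# T_2(F_{3^{3n}}) inside F_{3^{6n}}; here we use F_{p^2} = F_p(i),
# i^2 = -1, with p = 10007 (prime, p % 4 == 3 so x^2 + 1 is irreducible).
p = 10007

def mul(x, y):
    (a, b), (c, d) = x, y
    return ((a * c - b * d) % p, (a * d + b * c) % p)

def conj(x):             # Frobenius: a + bi -> a - bi
    a, b = x
    return (a, (-b) % p)

def norm(x):             # N(a + bi) = a^2 + b^2, an element of F_p
    return mul(x, conj(x))[0]

def inv(x):              # general inversion: conj(x) / N(x)
    n_inv = pow(norm(x), -1, p)
    a, b = conj(x)
    return ((a * n_inv) % p, (b * n_inv) % p)

z = (3, 4)
t = mul(z, inv(conj(z)))  # z / conj(z) always has norm 1
assert norm(t) == 1
# For a torus element the cheap inverse (conjugation) suffices:
assert mul(t, conj(t)) == (1, 0)
```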
From Weak to Strong Watermarking
The informal goal of a watermarking scheme is to ``mark'' a digital object,
such as a picture or video, in such a way that it is difficult for an
adversary to remove the mark without destroying the content of the object.
Although there has been considerable work proposing and breaking
watermarking schemes, there has been little attention given to the
formal security goals of such a scheme. In this work, we provide a
new complexity-theoretic definition of security for watermarking
schemes. We describe some shortcomings of previous attempts at
defining watermarking security, and show that security under our
definition also implies security under previous definitions. We
also propose two weaker security conditions that seem to capture the
security goals of practice-oriented work on watermarking and show
how schemes satisfying these weaker goals can be strengthened to
satisfy our definition.
On a new invariant of Boolean functions
A new invariant of the set of $n$-variable Boolean functions with respect to the action of $AGL(n,2)$ is studied. Application of this invariant to prove affine nonequivalence of two Boolean functions is outlined. The value of this invariant is computed for $PS_{ap}$ type bent functions.
Another class of quadratic APN binomials over $\F_{2^n}$: the case $n$ divisible by 4
We exhibit an infinite class of almost perfect nonlinear quadratic binomials from $\mathbb{F}_{2^{n}}$ to $\mathbb{F}_{2^{n}}$ with $n=4k$ and $k$ odd. We prove that these functions are CCZ-inequivalent to known APN power functions when $k\ne 1$. In particular it means that for $n=12,20,28$, they are CCZ-inequivalent to any power function.
Pairing-friendly elliptic curves with small security loss by Cheon's algorithm
Pairing-based cryptography is a new branch of public-key cryptography. An elliptic curve suitable for pairing-based cryptography is called a ``pairing-friendly'' elliptic curve. After Mitsunari, Sakai and Kasahara's traitor tracing scheme and Boneh and Boyen's short signature scheme, many protocols based on pairing-related problems such as the $q$-weak Diffie-Hellman problem have been proposed.
In Eurocrypt 2006, Cheon proposed a new efficient algorithm to solve pairing-related problems and recently the complexity of Cheon's algorithm has been improved by Kozaki, Kutsuma and Matsuo.
Due to these two works, the influence of Cheon's algorithm should be considered when we construct suitable curves for use in a protocol based on a pairing-related problem.
Among known methods for constructing pairing-friendly elliptic curves, ones using cyclotomic polynomials such as the Brezing-Weng method and the Freeman-Scott-Teske method are affected by Cheon's algorithm.
In this paper, we study how to reduce the security loss that Cheon's algorithm inflicts on a cyclotomic family. The proposed method constructs many pairing-friendly elliptic curves with small security loss under Cheon's algorithm, suitable for protocols based on pairing-related problems.
Last updated: 2007-01-09
The Bilinear Pairing-based Accumulator Proposed at CT-RSA'05 is not Collision Resistant
In this paper, we demonstrate that the construction proposed by Lan Nguyen at CT-RSA'05 does lead to a cryptographic accumulator which is not collision resistant.
Last updated: 2006-12-04
A protocol
Sorry for the absence, but I want to revise this part after anonymous review!
Security Analysis of Voice-over-IP Protocols
The transmission of voice communications as datagram
packets over IP networks, commonly known as Voice-over-IP
(VoIP) telephony, is rapidly gaining wide acceptance.
With private phone conversations being conducted on
insecure public networks, security of VoIP communications
is increasingly important. We present a structured
security analysis of the VoIP protocol stack, which
consists of signaling (SIP), session description (SDP),
key establishment (SDES, MIKEY, and ZRTP) and secure
media transport (SRTP) protocols. Using a combination
of manual and tool-supported formal analysis, we uncover
several design flaws and attacks, most of which are caused
by subtle inconsistencies between the assumptions that
protocols at different layers of the VoIP stack make about
each other.
The most serious attack is a replay attack on SDES,
which causes SRTP to repeat the keystream used for media
encryption, thus completely breaking transport-layer
security. We also demonstrate a man-in-the-middle
attack on ZRTP, which allows the attacker to convince the
communicating parties that they have lost their shared
secret. If they are using VoIP devices without displays
and thus cannot execute the ``human authentication''
procedure, they are forced to communicate insecurely,
or not communicate at all, i.e., this becomes a denial of
service attack. Finally, we show that the key derivation
process used in MIKEY cannot be used to prove security of
the derived key in the standard cryptographic model for
secure key exchange.
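The consequence of the SDES replay is the classic two-time pad (a generic illustration of ours, not the paper's code): XORing two ciphertexts produced under the same keystream cancels the keystream and exposes the XOR of the plaintexts, with no key needed.

```python
import os

def stream_encrypt(keystream, plaintext):
    # SRTP-style stream encryption: ciphertext = plaintext XOR keystream
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

m1 = b"transfer 100 to alice"
m2 = b"transfer 999 to mallo"
ks = os.urandom(len(m1))     # keystream repeated because the replayed
c1 = stream_encrypt(ks, m1)  # SDES message made SRTP reuse its key
c2 = stream_encrypt(ks, m2)

# The eavesdropper needs no key: c1 XOR c2 == m1 XOR m2.
leak = bytes(a ^ b for a, b in zip(c1, c2))
assert leak == bytes(a ^ b for a, b in zip(m1, m2))
```

With known or guessable plaintext in one packet, the other plaintext falls out immediately, which is why keystream reuse completely breaks transport-layer security.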
Perfect NIZK with Adaptive Soundness
This paper presents a very simple and efficient adaptively-sound perfect NIZK argument system for any NP-language. In contrast to recently proposed schemes by Groth, Ostrovsky and Sahai, our scheme does not pose any restriction on the statements to be proven. Besides, it enjoys a number of desirable properties: it allows re-use of the common reference string (CRS), it can handle
arithmetic circuits, and the CRS can be set up very efficiently
without the need for an honest party.
We then show an application of our techniques in constructing efficient NIZK schemes for proving arithmetic relations among committed secrets, whereas previous methods required expensive generic NP-reductions.
The security of the proposed schemes is based on a strong non-standard assumption, an extended version of the so-called Knowledge-of-Exponent Assumption (KEA) over bilinear groups. We give some justification for using such an assumption by showing that the commonly-used approach for proving NIZK arguments sound does not allow for adaptively-sound statistical NIZK arguments
(unless NP is in P/poly). Furthermore, we show that the assumption used in our construction holds with respect to generic adversaries that do not exploit the specific representation of the group elements. We also discuss how to avoid the non-standard
assumption in a pre-processing model.
Long-term Security and Universal Composability
Algorithmic progress and future technological advances threaten
today's cryptographic protocols. This may allow adversaries to break a
protocol retrospectively by breaking the underlying complexity
assumptions long after the execution of the protocol. Long-term secure
protocols, protocols that after the end of the execution do not reveal
any information to a then possibly unlimited adversary, could meet
this threat. On the other hand, in many applications, it is necessary
that a protocol is secure not only when executed alone, but within
arbitrary contexts. The established notion of universal composability
(UC) captures this requirement.
This is the first paper to study protocols which are simultaneously
long-term secure and universally composable. We show that the usual
set-up assumptions used for UC protocols (e.g., a common reference
string) are not sufficient to achieve long-term secure and composable
protocols for commitments or zero-knowledge protocols.
We give practical alternatives (e.g., signature cards) to these usual
setup assumptions and show that these enable the implementation of the
important primitives of commitment and zero-knowledge.