All papers (Page 222 of 24087 results)
Universally Composable Three-Party Key Distribution
In this paper, we formulate and realize a definition of security for three-party key distribution within the universally composable (UC) framework. That is, we formulate an appropriate ideal functionality that captures the basic security requirements of three-party key distribution. We show that the UC definition of security for three-party key distribution protocols is strictly more stringent than a previous definition of security termed AKE-security. Finally, we present a real-life protocol that securely realizes the formulated ideal functionality with respect to non-adaptive adversaries.
The REESSE1+ Public Key Cryptosystem v 2.21
In this paper, the authors define a coprime sequence and a lever function, and describe the five algorithms and six characteristics of a prototypal public key cryptosystem used for encryption and signature, based on three new problems and one existing problem: the multivariate permutation problem (MPP), the anomalous subset product problem (ASPP), the transcendental logarithm problem (TLP), and the polynomial root finding problem (PRFP). They prove by reduction that MPP, ASPP, and TLP are computationally at least equivalent to the discrete logarithm problem (DLP) in the same prime field, and present evidence suggesting that each of the new problems is harder than DLP, namely unsolvable in DLP subexponential time. They demonstrate the correctness of decryption and verification, deduce that the probability of a plaintext solution being non-unique is nearly zero, and analyze the exact security of the cryptosystem against recovering a plaintext from a ciphertext, extracting a private key from a public key or a signature, and forging a signature from known signatures, public keys, and messages, on the assumption that IFP, DLP, and LSSP can be solved. The analysis indicates that the running times of effective attacks are at least O(2^n) so far when n = 80, 96, 112, or 128 with lg M = 696, 864, 1030, or 1216. From a practical standpoint, further research is needed on decreasing the length of the modulus and increasing the speed of decryption.
Some New Hidden Ideal Cryptosystems
We propose public-key cryptosystems with public key a system of polynomial equations and private key an ideal.
Analysis of Privacy-Preserving Element Reduction of Multiset
Among private set operations, privacy-preserving element reduction of a multiset can be an important tool for privacy-enhancing technology, either by itself or in combination with other private set operations. Recently, a protocol for privacy-preserving element reduction of a multiset, the over-threshold set-union protocol, was proposed by Kissner and Song at Crypto 2005. In this paper, we point out a mathematical flaw in their polynomial representation of element reduction of a multiset, and the protocol error that results from this flaw. We correct their polynomial representation of a multiset and propose an over-threshold set-operation protocol based on the corrected representation. Our over-threshold set-operation protocol can be combined with a privacy-preserving set operation and outputs those elements that appear more than a predetermined threshold number of times in the resulting multiset of the set operation.
The Recent Attack of Nie et al On TTM is Faulty
Recently, a paper entitled "{\it Breaking a New Instance of TTM Cryptosystem}" by Xuyun Nie, Lei Hu, Jianyu Li, Crystal Updegrove and Jintai Ding [1] claimed a successful attack on the TTM scheme presented in [3]. In the present article, we show that their claim is a {\bf misunderstanding}.
Authenticated Interleaved Encryption
We present AIE (Authenticated Interleaved Encryption), a new scheme that allows nodes of a network to exchange messages securely (i.e. encrypted and authenticated) without sharing a common key or using public key cryptography.
Our scheme is well adapted to networks, such as ad hoc, overlay or sensor networks, where nodes have limited capabilities and can share only a small number of symmetric keys. It provides privacy and integrity: an eavesdropper listening to a communication is unable to decrypt it or to modify it without being detected. We show that our proposal can be used in wireless sensor networks to send encrypted packets to very dynamic sets of nodes without having to establish and maintain group keys. These sets of nodes can be explicitly specified by the source or can be specified by the network according to criteria such as their location, proximity to an object, or temperature range. As a result, a node can, for example, send encrypted data to all the nodes within a given geographical area without having to identify the destination nodes in advance. Finally, we show that our proposal can be used to implement a secure and scalable aggregation scheme for wireless sensor networks.
On the Minimal Embedding Field
We discuss the underlying mathematics that causes the embedding
degree of a curve of any genus to not necessarily correspond to the
minimal embedding field, and hence why it may fail to capture the
security of a pairing-based cryptosystem. Let $C$ be a curve of
genus $g$ defined over a finite field $\F_q$, where $q=p^m$ for a
prime $p$. The Jacobian of the curve is an abelian variety,
$J_C(\F_q)$, of dimension $g$ defined over $\F_q$. For some prime
$N$, coprime to $p$, the embedding degree of $J_C(\F_q)[N]$ is
defined to be the smallest positive integer $k$ such that $N$
divides $q^k-1$. Hence, $\F_{q^k}^*$ contains a subgroup of order
$N$. To determine the security level of a pairing-based
cryptosystem, it is important to know the minimal field containing
the $N$th roots of unity, since the discrete logarithm problem can
be transported from the curve to this field, where one can perform
index calculus. We show that it is possible to have a dramatic
(unbounded) difference between the size of the field given by the
embedding degree, $\F_{p^{mk}}$, and the minimal embedding field
that contains the $N$th roots of unity, $\F_{p^d}$, where $d\mid
mk$.
The embedding degree has utility as it indicates the field one must
work over to compute the pairing, while a security parameter should
indicate the minimal field containing the embedding. We discuss a
way of measuring the difference between the size of the two fields
and we advocate the use of two separate parameters. We offer a
possible security parameter, $k'=\frac{\ord_Np}{g}$, and we present
examples of elliptic curves and genus 2 curves which highlight the
difference between them. While our observation provides a proper
theoretical understanding of minimal embedding fields in
pairing-based cryptography, it is unlikely to affect curves used in
practice, as a discrepancy may only occur when $q$ is non-prime.
Nevertheless, it is an important point to keep in mind and a
motivation to recognize two separate parameters when describing a
pairing-based cryptosystem.
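The gap the abstract describes can be seen numerically: the embedding degree $k$ is the multiplicative order of $q$ modulo $N$, while the minimal embedding field exponent is $d=\ord_N p$, and $d$ can be smaller than $mk$ whenever $q=p^m$ is non-prime. The following sketch uses toy parameters of our own choosing, not values from the paper:

```python
# Toy illustration: field indicated by the embedding degree, F_{p^{mk}},
# versus the minimal field containing the N-th roots of unity, F_{p^d}.
def mult_order(a, n):
    """Multiplicative order of a modulo n (assumes gcd(a, n) == 1)."""
    x, e = a % n, 1
    while x != 1:
        x = (x * a) % n
        e += 1
    return e

p, m, N = 2, 3, 5          # q = p^m = 8; N coprime to p (made-up parameters)
q = p ** m
k = mult_order(q, N)       # embedding degree: smallest k with N | q^k - 1
d = mult_order(p, N)       # minimal embedding field exponent: N | p^d - 1
print(f"embedding degree field: F_(2^{m*k}); minimal embedding field: F_(2^{d})")
assert (q**k - 1) % N == 0 and (p**d - 1) % N == 0
assert (m * k) % d == 0    # d divides mk, as the abstract notes
```

Here $mk = 12$ while $d = 4$, so the discrete logarithm can be transported into a much smaller field than the embedding degree alone suggests.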
Zero Knowledge and Soundness are Symmetric
We give a complexity-theoretic characterization of the class of problems in NP having zero-knowledge argument systems. This characterization is symmetric in its treatment of the zero knowledge and the soundness conditions, and thus we deduce that the class of problems in NP intersect coNP having zero-knowledge arguments is closed under complement. Furthermore, we show that a problem in NP has a *statistical* zero-knowledge argument system if and only if its complement has a computational zero-knowledge *proof* system. What is novel about these results is that they are *unconditional*, i.e., do not rely on unproven complexity assumptions such as the existence of one-way functions.
Our characterization of zero-knowledge arguments also enables us to prove a variety of other unconditional results about the class of problems in NP having zero-knowledge arguments, such as equivalences between honest-verifier and malicious-verifier zero knowledge, private coins and public coins, inefficient provers and efficient provers, and non-black-box simulation and black-box simulation. Previously, such results were only known unconditionally for zero-knowledge *proof systems*, or under the assumption that one-way functions exist for zero-knowledge argument systems.
Preimage Attack on Parallel FFT-Hashing
Parallel FFT-Hashing was designed by C. P. Schnorr and S. Vaudenay
in 1993. The function is a simple and light weight hash algorithm
with 128-bit digest. Its basic component is a multi-permutation
which helps in proving its resistance to collision attacks.
In this work we show a preimage attack on Parallel FFT-Hashing with
complexity $2^{t+64}+2^{128-t}$ and memory $2^{t}$ which is less
than the generic complexity $2^{128}$. When $t=32$, we can find a
preimage with complexity $2^{97}$ and memory $2^{32}$. Our method can be described as a ``disseminative meet-in-the-middle attack'': we actually use the properties of the multi-permutation (helpful against collision attacks) to our advantage in the attack. Overall, this type
of attack (beating the generic one) demonstrates that the structure
of Parallel FFT-Hashing has some weaknesses when preimage attack is
considered. To the best of our knowledge, this is the first attack
on Parallel FFT-Hashing.
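As a quick sanity check (ours, not the authors'), the stated time cost $2^{t+64}+2^{128-t}$ is minimized when the two terms balance, which happens exactly at $t=32$ and gives the quoted $2^{97}$:

```python
# Time cost of the attack as a function of the memory parameter t.
cost = lambda t: 2**(t + 64) + 2**(128 - t)

# The two terms balance at t = 32, giving 2^96 + 2^96 = 2^97.
best = min(range(0, 65), key=cost)
assert best == 32
assert cost(32) == 2**97
```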
Preimage Attacks on CellHash, SubHash and Strengthened Versions of CellHash and SubHash
CellHash \cite{DaGoVa91} and SubHash \cite{DaGoVa92} were suggested
by J. Daemen, R. Govaerts and J. Vandewalle in 1991 and 1992,
respectively. SubHash is an improved version of CellHash. Both have a
257-bit internal state and a 256-bit hash output. In this paper, we show a
preimage attack on CellHash (resp. SubHash) with complexity
$2^{129+t}$ and memory $2^{128-t}$ for any $t$ (resp. with
complexity about $2^{242}$ and memory $2^{17}$). Even if we
strengthen them in a well-known way, we show that we can still find a
preimage on the modified CellHash (resp. the modified SubHash) with
complexity $2^{200}$ and memory $2^{59}$ (resp. with
complexity about $2^{242}$ and memory $2^{17}$).
Preimage Attack on Hashing with Polynomials proposed at ICISC'06
In this paper, we present a preimage attack on Hashing with
Polynomials \cite{Shpilrain06b}. The algorithm has an $n$-bit hash
output and an $n$-bit intermediate state (for example, $n=163$). The
algorithm is very simple and lightweight, so it can be implemented in
low-memory environments. Our attack is based on the
meet-in-the-middle attack. We show that we can find a preimage with
time complexity $2^{n-t}+2^{t}\cdot(n+1/33)$ and memory $2^{t}$,
even when the recursive formula $H$ uses any \textsf{f} in which the
degree of each term in \textsf{x} is $2^a$ for some non-negative
integer $a$. We recommend that hash functions such as Hashing with
Polynomials have an intermediate state at least twice as large as
the output.
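The meet-in-the-middle idea behind such attacks can be illustrated on a toy construction of our own (not the attacked scheme): build a forward table of $2^t$ midpoints, then walk backwards from the target through an invertible final step until a midpoint matches.

```python
# Toy meet-in-the-middle preimage search on a 16-bit "hash" built from an
# invertible compression step. Purely illustrative; not any real scheme.
MASK = 0xFFFF
MULT = 0x9E37                      # odd, hence invertible mod 2^16
INV = pow(MULT, -1, 1 << 16)

def compress(h, m):                # one forward step
    return ((h ^ m) * MULT) & MASK

def uncompress(h, m):              # exact inverse of compress for fixed m
    return ((h * INV) & MASK) ^ m

def toy_hash(m1, m2, iv=0x1234):   # two-block "hash"
    return compress(compress(iv, m1), m2)

def mitm_preimage(target, t=8, iv=0x1234):
    # forward table: 2^t candidate first blocks (this is the memory cost)
    fwd = {compress(iv, m1): m1 for m1 in range(1 << t)}
    # backward phase: invert the last step for each candidate second block
    for m2 in range(1 << 16):
        mid = uncompress(target, m2)
        if mid in fwd:
            return fwd[mid], m2    # (m1, m2) hashing to target
    return None

target = toy_hash(0xAB, 0x1F2E)
m1, m2 = mitm_preimage(target)
assert toy_hash(m1, m2) == target
```

The table size $2^t$ trades off against the backward search, mirroring the $2^{n-t}+2^t$ shape of the cost formula above.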
Galois Field Commitment Scheme
In [3] the authors give the first mathematical formalization of an unconditionally secure commitment
scheme. Their construction has some similarities to one used to build authentication
codes, so they raise the question whether there is some relation between commitment schemes and
authentication schemes. They conjecture that authentication schemes with arbitration can be used,
but they stress that the information flows are different.
In this paper, we show that there is indeed a relation between unconditionally secure commitment
schemes and unconditionally secure authentication schemes, and that an unconditionally
secure commitment scheme can be built from such an authentication scheme and an unconditionally
secure cipher system. This parallel is then used to analyse a new attack against commitment
schemes that is the counterpart of the impersonation attack in an authentication system.
To investigate the opposite direction, we start by defining an optimal commitment system
and showing that this must be a resolvable design commitment scheme as proposed in the aforementioned
paper. Then, a proof is given that the resolvable design commitment schemes are a
composition of an authentication system and a cipher system and the conclusion follows that this
is the case for all optimal commitment systems.
We prove that there is a commitment scheme based on Galois fields that uses the one-time
pad as the cipher system, which to our knowledge is new in the literature. The main technique
in the proof is the construction of an appropriate design for any $n$, yielding an authentication
system that is perfectly secure against deception attacks of levels 0 and 1. The commitment scheme
proposed here uses only very simple operations and can be implemented very efficiently in both
hardware and software.
Finally, we give a brief look at the possibility of building commitment schemes from other
primitives.
A NEW MAC: LAMA
In this paper, we propose a MAC with two versions, called LAMA. The proposed MAC has an input size of 128 bits or 256 bits in versions 1 and 2, respectively, and an output size of 128 bits. No attack better than exhaustive search has been found against the MAC, and it admits fast implementations at about 5 cycles/byte in both versions.
A Generic Construction of CCA-Secure Cryptosystems without NIZKP for a Bounded Number of Decryption Queries
In this paper, we propose a generic construction of chosen-ciphertext secure cryptosystems against adversaries with a bounded number of decryption queries, from arbitrary semantically secure encryption in a black-box manner. Our construction is not only an alternative to the previously known technique, i.e. the Naor-Yung paradigm, but also has some interesting properties. In particular, (1) it does not require a non-interactive zero-knowledge proof, and (2) its component ciphertexts can be compressed into only one if the underlying encryption has a certain homomorphic property. Consequently, when applying our construction to ElGamal encryption, the ciphertext overhead of the resulting scheme is only one group element, which is considered optimal since it is the same as for the original ElGamal. Disadvantages compared to previous schemes are that the upper bound on the number of decryption queries (e.g. 2^{30}) has to be known before the set-up phase, and that the public key is large.
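The kind of homomorphic compression mentioned in (2) can be illustrated with plain multiplicatively homomorphic ElGamal: componentwise multiplication of two ciphertexts yields a single ciphertext of the product. The sketch below uses toy parameters of our own and ignores the prime-order subgroup setup a real deployment needs:

```python
# Minimal sketch (our illustration) of ElGamal's multiplicative homomorphism,
# which lets several ciphertexts be combined into one. Not secure as written:
# toy parameters, full group rather than a prime-order subgroup.
import random

p = 2**127 - 1                 # a Mersenne prime; toy parameter choice
g = 3

sk = random.randrange(2, p - 1)
pk = pow(g, sk, p)

def enc(m):
    r = random.randrange(2, p - 1)
    return (pow(g, r, p), m * pow(pk, r, p) % p)

def dec(c):
    a, b = c
    return b * pow(a, -sk, p) % p    # negative exponent needs Python 3.8+

def combine(c1, c2):
    # componentwise product: an encryption of m1 * m2 (mod p)
    return (c1[0] * c2[0] % p, c1[1] * c2[1] % p)

assert dec(combine(enc(6), enc(7))) == 42
```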
Cryptography in the Multi-string Model
The common random string model permits the construction of cryptographic protocols that are provably impossible to realize in the standard model. In this model, a trusted party generates a random string and gives it to all parties in the protocol. However, the introduction of such a third party should set alarm bells ringing: Who is this trusted party? Why should we trust that the string is random? Even if the string is uniformly random, how do we know it does not leak private information to the trusted party? The very point of doing cryptography in the first place is to avoid trusting the wrong people with our secrets.
In this paper, we propose the more realistic multi-string model. Instead of having one trusted authority, we have several authorities that generate random strings. We do not trust any single authority; we only assume that a majority of them generate their strings honestly. We demonstrate the use of this model for two fundamental cryptographic tasks. We define non-interactive zero-knowledge in the multi-string model and construct NIZK proofs in it. We also consider multi-party computation and show that any functionality can be securely realized in the multi-string model.
Redundancy of the Wang-Yu Sufficient Conditions
Wang and Yu showed that MD5 is not collision-resistant, but their sufficient conditions for finding a collision of MD5 are known to include some mistakes. In this paper, we examine the sufficient conditions by computer simulation. We show that the Wang-Yu conditions include 16 conditions that are unnecessary for producing a collision. Sasaki et al. claimed that modifying one condition made it possible to remove eleven conditions; however, our computer simulation shows that their conditions do not yield a collision.
Universally Composable Blind Signatures in the Plain Model
In the universal composability framework, we define an ideal functionality for blind signatures,
as an alternative to a functionality recently proposed by Fischlin. Fischlin proves that
his functionality cannot be realized in the plain model, but this result does not apply to our
functionality. We show that our functionality is realized in the plain model by a blind signature
protocol if and only if the corresponding blind signature scheme is secure with respect to
blindness and non-forgeability, as defined by Juels, Luby and Ostrovsky.
Faugere's F5 Algorithm Revisited
We present and analyze the F5 algorithm for computing Groebner bases. On the practical side, we correct minor errors in Faugere's pseudo code, and report our experiences implementing the -- to our knowledge -- first working public version of F5. While not designed for efficiency, it will doubtless be useful to anybody implementing or experimenting with F5. In addition, we list some experimental results, hinting that the version of F5 presented in Faugere's original paper can be considered as more or less naive, and that Faugere's actual implementations are a lot more sophisticated. We also suggest further improvements to the F5 algorithm and point out some problems we encountered when attempting to merge F4 and F5 to an "F4.5" algorithm. On the theoretical side, we slightly refine Faugere's theorem that it suffices to consider all normalized critical pairs, and give the first full proof, completing his sketches. We strive to present a more accessible account of the termination and correctness proofs of F5. Unfortunately, we still rely on a conjecture about the correctness of certain optimizations. Finally, we suggest directions of future research on F5.
Non-Wafer-Scale Sieving Hardware for the NFS: Another Attempt to Cope with 1024-bit
Significant progress in the design of special purpose hardware for supporting the Number Field Sieve (NFS) has been made. From a practical cryptanalytic point of view, however, none of the published proposals for coping with the sieving step is satisfying. Even for the best known designs, the technological obstacles faced for the parameters expected for a 1024-bit RSA modulus are significant.
Below we present a new hardware design for implementing the sieving step. The suggested chips are of moderate size and the inter-chip communication does not seem unrealistic. According to our preliminary analysis of the 1024-bit case, we expect the new design to be about 2 to 3.5 times slower than TWIRL (a wafer-scale design). Due to the more moderate technological requirements, however, from a practical cryptanalytic point of view the new design seems to be no less attractive than TWIRL.
Algebraic Cryptanalysis of the Data Encryption Standard
In spite of the growing importance of AES, the Data Encryption Standard is by no means obsolete. DES has never been broken from a practical point of view. Triple DES is believed to be very secure, is widely used, especially in the financial sector, and should remain so for many years to come. In addition, some doubts have been raised about whether its replacement AES is secure, given the extreme level of ``algebraic vulnerability'' of the AES S-boxes (their low I/O degree and exceptionally large number of quadratic I/O equations).
Is DES secure from the point of view of algebraic cryptanalysis, a new and very fast-growing area of research? We do not really hope to break it, but rather to advance the field of cryptanalysis. At first glance, DES seems a very poor target, as there is (apparently) no strong algebraic structure of any kind in it. However, Courtois and Pieprzyk showed that ``small'' S-boxes always have a low I/O degree (cubic for DES, as we show). In addition, due to their low gate-count requirements, by introducing additional variables we can always obtain an extremely sparse system of quadratic equations.
Assessing the algebraic vulnerabilities is the easy part, and may appear unproductive. In this paper we demonstrate that in this way, several interesting attacks on a real-life ``industrial'' block cipher can be found. One of our attacks is the fastest known algebraic attack on 6 rounds of DES, yet it requires only ONE SINGLE known plaintext (instead of a very large quantity), which is quite interesting in itself.
Though (on a PC) we recover the key for only six rounds, in a much weaker sense we can also attack 12 rounds of DES. These results are very interesting because DES is known to be a very robust cipher, and our methods are very generic: they can be applied to DES with modified S-boxes and potentially to other reduced-round block ciphers.
Last updated: 2007-01-15
On the cost of cryptanalytic attacks
This note discusses the complexity evaluation of cryptanalytic attacks, with the example of exhaustive key search, illustrated with several ciphers from the eSTREAM project. A measure is proposed to evaluate the effective computational cost of cryptanalytic algorithms, based on the observation that the standard one is not precise enough.
Concurrent Statistical Zero-Knowledge Arguments for NP from One Way Functions
In this paper we show a general transformation from any honest
verifier statistical zero-knowledge argument to a concurrent
statistical zero-knowledge argument. Our transformation relies only on the existence of one-way functions. It is known that the existence of zero-knowledge systems for any non-trivial language implies one-way functions. Hence our transformation \emph{unconditionally} shows that concurrent statistical zero-knowledge arguments for a non-trivial language exist if and only if standalone secure statistical zero-knowledge arguments for that language exist.
Further, applying our transformation to the recent statistical zero-knowledge argument system of Nguyen et al (STOC'06) yields the first concurrent statistical zero-knowledge argument system for all languages in \textbf{NP} from any one way function.
Multi-Property-Preserving Hash Domain Extension and the EMD Transform
We point out that the seemingly strong pseudorandom oracle preserving (PRO-Pr) property of hash function domain-extension transforms defined and implemented by Coron et al. [12] can actually weaken our guarantees on the hash function, in particular producing a hash function that fails even to be collision-resistant (CR), even though the compression function to which the transform is applied is CR. Not only is this true in general, but we show that all the transforms presented in [12] have this weakness. We suggest that the appropriate goal of a domain-extension transform for the next generation of hash functions is to be multi-property preserving, namely that one should have a single transform that is simultaneously at least collision-resistance preserving, pseudorandom-function preserving and PRO-Pr. We present an efficient new transform that is proven to be multi-property preserving in this sense.
The Layered Games Framework for Specifications and Analysis of Security Protocols
We establish rigorous foundations to the use of modular, layered design for building complex distributed systems. Layering is key to the design of the Internet and other distributed systems, hence such solid, theoretical foundations are essential, especially when considering adversarial settings, such as for security and cryptographic protocols.
We define the basic concepts for modular, layered design: protocols, systems, configurations, executions, and models, and three relations: indistinguishability (between two systems), satisfaction (of a model by a system), and realization (by a protocol, of one model over another model). We prove several basic properties, including the {\em layering lemma} and the {\em indistinguishability lemma}. The indistinguishability lemma shows that if two systems $\Gamma_L, \Gamma_R$ are indistinguishable, and $\Gamma_L$ satisfies some model $M$, then $\Gamma_R$ also satisfies $M$. The layering lemma shows that given protocols $\{\pi_i\}_{i=1}^u$, if every protocol $\pi_i$ realizes model $M_i$ over model $M_{i-1}$, then the composite protocol $\pi_{1\|\cdots\|u}$ realizes model $M_u$ over $M_0$. This allows specification, design and analysis of each layer independently, and combining the results to ensure properties of the complete system.
Our framework is based on {\em games}, following many works in applied cryptography. This differs from existing frameworks allowing compositions of cryptographic protocols, based on {\em simulatability of ideal functionality}. Game-based models are more general and flexible than ideal functionality specifications, supporting different adversarial models and avoiding over-specification, which is essential for practical distributed systems and networks.
Revisiting the Efficiency of Malicious Two-Party Computation
In a recent paper Mohassel and Franklin study the efficiency of secure two-party computation in the presence of malicious behavior. Their aim is to make classical solutions to this problem, such as zero-knowledge compilation, more efficient. The authors provide several schemes which are the most efficient to date. We propose a modification to their main scheme using expanders. Our modification asymptotically improves at least one measure of efficiency of all known schemes. We also point out an error, and improve the analysis of one of their schemes.
Security Protocols with Isotropic Channels
We investigate the security properties of "isotropic channels",
broadcast media in which a receiver cannot reliably determine whether
a message originated from any particular sender and a sender cannot
reliably direct a message away from any particular receiver. We show
that perfect isotropism implies perfect (information-theoretic)
secrecy, and that asymptotically close to perfect secrecy can be
achieved on any channel that provides some (bounded) uncertainty as to
sender identity. We give isotropic security protocols under
both passive and active adversary models, and discuss the practicality
of realizing isotropic channels over various media.
Security-Focused Survey on Group Key Exchange Protocols
In this paper we survey a large number of currently known group key exchange
protocols, focusing on protocols designed for more than three participants
(for an overview of two- and three-party key exchange protocols we refer to
[BM03, DB05c]). For each protocol we briefly describe the current state
of its security, based on the original analysis as well as later results that appeared in the literature.
We distinguish between (i) protocols with heuristic security arguments based
on informally defined security requirements and (ii) protocols that have been proven
secure in one of the existing security models for group key exchange. Note that this paper
continues the work started in Manulis (ePrint Rep. 2006/388), which provides an analytical survey of security
requirements and currently known models for group key exchange. We emphasize that
the following survey focuses on the security aspects of the protocols and does not aim
to provide any efficiency comparison; the reader interested in such surveys is
referred to Rafaeli and Hutchison (ACM Comp. Surveys, 2003) and Amir et al. (ACM Trans. on Inf. and Syst. Sec., 2004).
Identity Based Strong Designated Verifier Proxy Signature Schemes
The paper proposes four new ID-based strong designated verifier proxy signature (SDVPS) schemes. The schemes are formed by introducing a proxy into ID-based SDVS, an identity base into SDVPS, and an ID-based proxy into SDVS. We also analyze the security of the schemes and their computational aspects.
Last updated: 2007-01-05
The Identity Escrow (Group Signature) Scheme at CT-RSA'05 Is Not Non-frameable
Following the attack against the exculpability of the ACJT group signature scheme, put forward at Asiacrypt'06, we further found that Nguyen's identity escrow (group signature) scheme does not satisfy non-frameability either.
The Tate Pairing via Elliptic Nets
We derive a new algorithm for computing the Tate pairing on an elliptic curve over a finite field. The algorithm uses a generalisation of elliptic divisibility sequences known as elliptic nets, which are maps from $\Z^n$ to a ring that satisfy a certain recurrence relation. We explain how an elliptic net is associated to an elliptic curve and reflects its group structure. Then we give a formula for the Tate pairing in terms of values of the net. Using the recurrence relation we can calculate these values in linear time.
Computing the Tate pairing is the bottleneck to efficient pairing-based cryptography. The new algorithm has time complexity comparable to Miller's algorithm, and is likely to yield to further optimisation.
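In the one-dimensional case, an elliptic net reduces to an elliptic divisibility sequence, and the recurrence $W(a+b)W(a-b) = W(a+1)W(a-1)W(b)^2 - W(b+1)W(b-1)W(a)^2$ (for $W(1)=1$) lets later terms be computed from a few initial values. The sketch below is our own illustration; the initial values are those commonly given in the EDS literature for the curve $y^2+y=x^3-x$ and the point $P=(0,0)$ (an assumption on our part, not taken from this paper):

```python
from functools import lru_cache

# Initial values of a standard integer elliptic divisibility sequence
# (attributed in the EDS literature to y^2 + y = x^3 - x with P = (0, 0)).
W_INIT = {0: 0, 1: 1, 2: 1, 3: -1, 4: 1}

@lru_cache(maxsize=None)
def W(n):
    """n-th term, via the duplication formulas derived from the recurrence
    W(a+b)W(a-b) = W(a+1)W(a-1)W(b)^2 - W(b+1)W(b-1)W(a)^2."""
    if n in W_INIT:
        return W_INIT[n]
    m = n // 2
    if n % 2:                  # n = 2m + 1
        return W(m + 2) * W(m)**3 - W(m - 1) * W(m + 1)**3
    # n = 2m; for a genuine EDS the division by W(2) is exact
    num = W(m) * (W(m + 2) * W(m - 1)**2 - W(m - 2) * W(m + 1)**2)
    assert num % W(2) == 0
    return num // W(2)

print([W(n) for n in range(1, 11)])
# divisibility property: W(m) divides W(n) whenever m divides n
assert all(W(12) % W(d) == 0 for d in (1, 2, 3, 4, 6))
```

Each term costs only a constant number of earlier terms, which is the source of the linear-time evaluation the abstract mentions.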
A Note on Bounded Chosen Ciphertext Security from Black-box Semantical Security
Designing public key encryption schemes withstanding chosen
ciphertext attacks, which is the highest security level for such
schemes, is generally perceived as a delicate and intricate task,
and for good reason. In the standard model, there are essentially
three well-known but quite involved approaches.
This state of affairs is to be contrasted with the situation for
semantically secure encryption schemes, a much weaker security
notion that only guarantees security in the absence of active
attack but that appears to be much easier to fulfill,
both conceptually and practically. Thus, the boundary
between passive attack and active attack seems to make up the
dividing line between which security levels are relatively easily
achieved and which are not. Our contributions are two-fold.
First, we show a simple, efficient black-box construction of a
public key encryption scheme withstanding chosen ciphertext attack
from any given semantically secure one. Our scheme is $q$-bounded in
the sense that security is only guaranteed if the adversary makes at
most $q$ adaptive chosen ciphertext queries. Here, $q$ is an
arbitrary polynomial that is fixed in advance in the key-generation.
Our work thus shows that whether or not the number of active,
adversarial queries is known in advance is the dividing line, and
not passive versus active attack. In recent work, Gertner, Malkin
and Myers show that such black-box reductions are impossible if
instead $q$ is a polynomial that only depends on the adversary.
Thus, in a sense, our result appears to be the best black-box result one can hope for. Second, we give a non-black-box reduction from bounded chosen-ciphertext security to semantic security in which the length of the public/secret keys and ciphertexts drops from quadratic to linear in $q$, compared to our black-box construction. This latter scheme, however, is only of theoretical interest, as it uses general NP-reductions; our black-box construction is in fact much more practical.
Last updated: 2008-01-15
Revisit of CS98
Cramer and Shoup proposed the first provably secure practical public-key
encryption scheme in the standard model (CS98). We find a new way to construct
the security reduction in which the decryption oracle is not needed. We thus
obtain a simplified version of CS98 that is more efficient than the original
scheme and is also provably secure against chosen-ciphertext attack in the
standard model.
Traceable Ring Signature
The ring signature allows a signer to leak secrets anonymously,
without the risk of identity escrow. At the same time,
the ring signature provides great flexibility: No group manager,
no special setup, and the dynamics of group choice.
Because of its anonymity, however, the ring signature is vulnerable to malicious or irresponsible signers in some applications.
In this paper, we propose a traceable ring signature scheme. A traceable ring signature is a ring signature
except that it can restrict ``excessive'' anonymity.
The traceable ring signature has a tag that consists of a list of ring members and an issue that refers to, for instance, a social affair or an election. A ring member can make any signed but anonymous opinion regarding the issue, but only once (per tag).
If the member submits another signed opinion, possibly pretending to be another person who supports the first opinion, the identity of the member is immediately revealed. If the member submits the same opinion, for instance, voting ``yes'' regarding the same issue twice, everyone can see that these two are linked.
The traceable ring signature is suited to many applications,
such as anonymous voting on a BBS, the dishonest-whistle-blower problem, and unclonable group identification.
We formalize the security definitions for this primitive
and show an efficient and simple construction.
Survey on Security Requirements and Models for Group Key Exchange
In this report we provide an analytical survey of security issues that are relevant for group key exchange (GKE) protocols. We start with a description of the security requirements that have been informally described in the literature and widely used to analyze the security of earlier GKE protocols. Most of these definitions were originally stated for two-party protocols and then adapted to a group setting. These informal definitions are foundational for the formal security models for GKE protocols that appeared later, whose development, strengths, and weaknesses are also described and analyzed.
A Note on the Security of NTRUSign
At Eurocrypt '06, Nguyen and Regev presented a new key-recovery attack on the
Goldreich-Goldwasser-Halevi (GGH) lattice-based signature scheme:
when applied to NTRUSign-251 without perturbation, the attack recovers the secret key
given only 90,000 signatures. At the rump session, Whyte speculated whether the number
of required signatures might be significantly decreased to say 1,000, due to the special
properties of NTRU lattices. This short note shows that this is indeed the case: it turns out that as few as 400 NTRUSign-251 signatures are sufficient in practice to recover the
secret key. Hence, NTRUSign without perturbation should be considered totally insecure.
The Wrestlers Protocol: A simple, practical, secure, deniable protocol for key-exchange
We describe and prove (in the random-oracle model) the security of a simple but efficient zero-knowledge identification scheme, whose security is based on the computational Diffie-Hellman problem. Unlike other recent proposals for efficient identification protocols, we don't need any additional assumptions, such as the Knowledge of Exponent assumption.
From this beginning, we build a simple key-exchange protocol, and prove that it achieves `SK-security' -- and hence security in Canetti's Universal Composability framework.
Finally, we show how to turn the simple key-exchange protocol into a slightly more complex one which provides a number of valuable `real-life' properties, without damaging its security.
On Security Models and Compilers for Group Key Exchange Protocols
Group key exchange (GKE) protocols can be used to guarantee confidentiality and group authentication in a variety of group applications. The notion of provable security subsumes the existence of an abstract formalization (security model) that considers the environment of the protocol and identifies its security goals. The first security model for GKE protocols was proposed by Bresson, Chevassut, Pointcheval, and Quisquater in 2001, and has been subsequently applied in many security proofs. Their definitions of AKE- and MA-security have meanwhile become standard.
In this paper we analyze the BCPQ model and some of its later modifications and identify several security risks resulting from a technical construction of this model: the notion of partnering. Consequently, we propose a revised model with extended definitions of AKE- and MA-security capturing, in addition, attacks by malicious protocol participants.
Further, we analyze some well-known generic solutions (compilers) for AKE- and MA-security of GKE protocols proposed based on the definitions of the BCPQ model and its variants and identify several limitations resulting from the underlying assumptions.
In order to remove these limitations and at the same time to show that our revised security model is in fact practical enough for the construction of reductionist security proofs we describe a modified compiler which provides AKE- and MA-security for any GKE protocol, under standard cryptographic assumptions.
Design and Analysis of a Hash Ring-iterative Structure
The authors propose a new type of hash iterative structure, the ring-iterative structure with feedback, which is subdivided into the single-feedback ring iteration (SFRI) and the multiple-feedback ring iteration (MFRI). We prove that SFRI is at least equivalent to the MD structure in security, and that MFRI is at least equivalent to SFRI in security (property 1 inclines one to believe that MFRI is more secure than MD). We analyze the resistance of MFRI, which results from the joint event on message modification, the endless loop on message modification, and the incompatibility of the sufficient conditions, to the multi-block differential collision attack. We argue the ineffectiveness of the D-way second-preimage attack on MFRI. Finally, we discuss the time and space expenses of MFRI, and point out the advantage of MFRI over the tree-iterative structure and the zipper-iterative structure.
Traitor tracing scheme with constant ciphertext rate against powerful pirates
Traitor tracing schemes are used to fight piracy when distributing some data securely to multiple authorized receivers: if some receivers collude and share their decryption keys to build a pirate decoder, a tracing procedure should be able to find at least one of these ``traitors'' from the pirate decoder. In this paper, we consider powerful pirate decoders, which may sometimes refuse to decrypt, or may try to detect when the tracing procedure is running. Most known traitor tracing schemes are not secure against this kind of pirate decoder: this is in particular the case for the schemes with constant ciphertext rate, which are the most efficient ones. We then build a new scheme, with constant ciphertext rate and security against powerful pirate decoders, using watermarking techniques. This scheme has the interesting feature that a receiver may decrypt the ciphertexts progressively, which was not possible in previous schemes with a constant ratio between ciphertext and plaintext.
Provisioning Protected Resource Sharing in Multi-Hop Wireless Networks
We propose a protection framework for resource sharing to promote cooperation among nodes in multi-hop wireless networks.
In the resource sharing protocol, a node claims credits when it relays others' packets.
A node also issues rewards to the node which relays its packets.
Rewards are used to validate the correctness of credits.
In order to protect credits and rewards, we devise a secure registry scheme that supports the timed test of credit validation, and then prove that the scheme is not susceptible to various security attacks.
Without any trusted authority in the operation of the framework, we make cryptographic definitions for the scheme, construct a provably secure registry scheme, and implement the timed test of credit validation with one-way chains.
Finally, simulation results observed in the J-Sim simulator corroborate that resource sharing is correctly supported and that credits and rewards are secured against selfish behaviors.
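The abstract mentions implementing the timed test of credit validation with one-way chains. As a hedged illustration only (a generic hash-chain sketch, not the paper's registry scheme; the function names and the choice of SHA-256 are assumptions), a one-way chain publishes the last element of repeated hashing as a commitment, and later elements can be validated by re-hashing down to that commitment:

```python
import hashlib

def make_chain(seed: bytes, n: int):
    # Build a one-way hash chain: chain[0] = H^n(seed) is the public
    # commitment, and chain[i] = H^(n-i)(seed) is revealed at step i.
    chain = [seed]
    for _ in range(n):
        chain.append(hashlib.sha256(chain[-1]).digest())
    chain.reverse()  # chain[0] = commitment, chain[n] = seed
    return chain

def verify(commitment: bytes, value: bytes, steps: int) -> bool:
    # A value claimed to sit `steps` links above the commitment checks
    # out iff hashing it `steps` times reproduces the commitment.
    for _ in range(steps):
        value = hashlib.sha256(value).digest()
    return value == commitment

chain = make_chain(b"seed", 5)
print(verify(chain[0], chain[3], 3))  # step-3 value validates
```

Revealing elements one per time step gives the "timed" flavor: a credit presented at step i can only be validated once the step-i chain value is released.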
Cryptanalysis on an Algorithm for Efficient Digital Signatures
The total computation for the generation and verification of a personal identification or digital signature is heavy: for many such schemes, it amounts to no fewer than hundreds of modular multiplications. Efficient schemes for personal identification and digital signatures were proposed, which require no more than 10 modular multiplications for the generation and verification of a challenge-response or digital signature. However, those schemes are weak in security. This paper shows that, by intercepting a transcript of the communications between the prover and the verifier, the private key of the prover is revealed.
On Security of Sovereign Joins
The goal of a sovereign join operation is to compute a
query across independent database relations such that nothing
beyond the join results is revealed. Each relation involved
in a sovereign join is owned by a distinct entity and
the party posing the query is distinct from the relation owners;
it is not permitted to access the original relations.
One notable recent research result proposed a secure
technique for executing sovereign joins. It entails data owners
sending their relations to an independent database service
provider which executes a sovereign join with the aid
of a tamper-resistant secure coprocessor. This achieves the
goal of preventing information leakage during query execution.
However, as we show in this paper, the proposed technique
is actually insecure as it fails to prevent an attacker
from learning the query results. We also suggest some measures
to remedy the security problems.
Security Bounds for the NIST Codebook-based Deterministic Random Bit Generator
The NIST codebook-based deterministic random bit generators are analyzed in the context of being indistinguishable from random. Upper and lower bounds based on the probability of distinguishing the output are proven. These bounds imply that the security of the designs is bounded by the codebook width, or more precisely by the property that the
codebooks act like a random permutation, as opposed to their underlying security parameter or key length. This paper
concludes that these designs fail to support security parameters larger than the codebook width.
A New Key Exchange Primitive Based on the Triple Decomposition Problem
We present a new key exchange primitive based on the decomposition
problem over non-commutative groups. Different from the key
establishment schemes that rely on the decomposition problem where
the problem is decomposing an element into three parts where the
middle piece is known, our scheme relies on decomposing an element
into three parts, all unknown. We call this problem "Triple
Decomposition Problem". This seems to be a harder problem because
it requires quadratic systems to be solved instead of linear
systems. We discuss the new primitive over two different protocols.
The underlying problems in the two protocols differ slightly. We
discuss the system and the underlying problems in one of the
protocols in detail over braid groups. We provide a
setting that resists linear-algebra attacks and length-based
attacks.
Efficient Chosen-Ciphertext Secure Identity-Based Encryption with Wildcards
We propose new instantiations of chosen-ciphertext secure identity-based encryption schemes with wildcards (WIBE). Our schemes outperform all existing alternatives in terms of efficiency as well as security. We achieve these results by extending the hybrid encryption (KEM--DEM) framework to the case of WIBE schemes. We propose and prove secure one generic construction in the random oracle model, and one direct construction in the standard model.
A New Concept of Hash Functions SNMAC Using a Special Block Cipher and NMAC/HMAC Constructions
In this paper, we present new security proofs of the well-known NMAC/HMAC hash constructions proposed by Bellare et al. in 1996. We show that block ciphers should be used in hash functions in another way than it has been so far. We introduce a new cryptographic primitive called special block cipher (SBC) which is resistant to attacks specific to block ciphers used in hash functions. We propose to use SBC in the NMAC/HMAC constructions, which gives rise to the new concept of hash functions called Special NMAC (SNMAC). From our new NMAC/HMAC security proofs it follows that SNMAC hash functions are computationally resistant to preimage and collision attacks. Moreover, at CRYPTO 2005 Coron et al. proved that SNMAC is indifferentiable from a random oracle in the limit. The SNMAC construction is general and it enables various proposals using different instances of the special block ciphers. We propose a special block cipher DN (Double Net) and define the hash function HDN (Hash Double Net) as the SNMAC construction based on DN.
Distortion maps for genus two curves
Distortion maps are a useful tool for pairing based cryptography.
Compared with elliptic curves, the case of hyperelliptic curves
of genus $g > 1$ is more complicated since the full torsion subgroup
has rank $2g$. In this paper we prove that distortion maps always exist for supersingular curves of genus $g>1$ and we give several examples in genus $2$.
Robust Final-Round Cache-Trace Attacks Against AES
This paper describes an algorithm to attack AES using side-channel information from the final-round cache lookups performed by the encryption, specifically whether each access hits or misses in the cache, building on previous work by Aciicmez and Koc. It is assumed that an attacker could gain such a trace through power consumption analysis or electromagnetic analysis. This information has already been shown to lead to an effective attack. This paper interprets the available cache-trace data as binary constraints on pairs of key bytes, and then reduces the key search to a constraint-satisfaction problem. In this way, an attacker is guaranteed to perform as little search as is possible given a set of cache traces, leading to a natural tradeoff between online collection and offline processing. This paper also differs from previous work in assuming a partially pre-loaded cache, proving that cache-trace attacks are still effective in this scenario, with the number of samples required being inversely related to the percentage of the cache which is pre-loaded.
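To make the constraint-satisfaction view concrete, here is a hedged toy sketch, not the paper's algorithm: a random 8-bit permutation stands in for the cipher's final-round lookup table (the real attack uses the AES tables), `LINE_ENTRIES = 16` is an assumed cache-line granularity, and each observed hit or miss between two lookups becomes a binary constraint that prunes candidate key-byte pairs:

```python
import itertools
import random

random.seed(2024)
# Toy stand-in for the cipher's inverse S-box: a random 8-bit
# permutation keeps the sketch self-contained (assumption, not AES).
INV_SBOX = list(range(256))
random.shuffle(INV_SBOX)
LINE_ENTRIES = 16  # assumed number of table entries per cache line

def cache_line(c_byte, k_byte):
    # Final-round lookup index for ciphertext byte c under key byte k,
    # mapped to the cache line that the table access touches.
    return INV_SBOX[c_byte ^ k_byte] // LINE_ENTRIES

def consistent_pairs(c_i, c_j, hit):
    # Binary constraint on the key-byte pair (k_i, k_j): a cache hit
    # means both lookups touched the same line; a miss means they did not.
    return {(k_i, k_j)
            for k_i in range(256) for k_j in range(256)
            if (cache_line(c_i, k_i) == cache_line(c_j, k_j)) == hit}

# Simulate traces for a secret key-byte pair and intersect the
# constraints: the candidate set shrinks while the truth survives.
secret = (0x3A, 0xC4)
cands = set(itertools.product(range(256), repeat=2))
for _ in range(40):
    c_i, c_j = random.randrange(256), random.randrange(256)
    hit = cache_line(c_i, secret[0]) == cache_line(c_j, secret[1])
    cands &= consistent_pairs(c_i, c_j, hit)
print(len(cands), secret in cands)
```

Each hit constraint keeps roughly 1/16 of the candidate pairs and each miss keeps the rest, which is the pruning behavior the constraint-satisfaction reduction exploits.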
Self-Generated-Certificate Public Key Cryptography and Certificateless Signature / Encryption Scheme in the Standard Model
Certificateless Public Key Cryptography (CL-PKC) enjoys a number of features of Identity-Based
Cryptography (IBC) while avoiding the problem of key escrow. However, it \textit{does} suffer
from an attack where the adversary, Carol, replaces Alice's public key with someone else's public key so that
Bob, who wants to send an encrypted message to Alice, uses Alice's identity and the other party's public
key as the inputs to the encryption function. As a result, Alice cannot decrypt the message
while Bob is unaware of this. We call this a \textit{Denial-of-Decryption (DoD) Attack}
as its nature is similar to the well known Denial-of-Service (DoS) Attack. Based on CL-PKC,
we propose a new paradigm called \textit{Self-Generated-Certificate Public Key Cryptography (SGC-PKC)}
that captures the DoD Attack. We also provide a generic construction of
a self-generated-certificate public key encryption scheme
in the standard model. Our generic construction uses certificateless signature
and certificateless encryption schemes as the building blocks.
In addition, we further propose a certificateless signature scheme and
a certificateless encryption scheme with concrete implementations that
are provably secure in the standard model. They are
the first in the literature apart from the generic
constructions by Yum and Lee, which may contain security weaknesses as pointed out by others.
We believe these concrete implementations are of independent interest.
A taxonomy of pairing-friendly elliptic curves
Elliptic curves with small embedding degree and large prime-order subgroup are key ingredients for implementing pairing-based cryptographic systems. Such "pairing-friendly" curves are rare and thus require specific constructions. In this paper we give a single coherent framework that encompasses all of the constructions of pairing-friendly elliptic curves currently existing in the literature. We also include new constructions of pairing-friendly curves that improve on the previously known constructions for certain embedding degrees. Finally, for all embedding degrees up to 50, we provide recommendations as to which pairing-friendly curves to choose to best satisfy a variety of performance and security requirements.
Hardware Implementation of the $\eta_T$ Pairing in Characteristic 3
Recently, there have been many proposals for secure and novel cryptographic protocols that are built on bilinear pairings. The $\eta_T$ pairing is one such pairing and is closely related to the Tate pairing. In this paper we consider the efficient hardware implementation of this pairing in characteristic 3. All characteristic 3 operations required to compute the pairing are outlined in detail. An efficient, flexible and reconfigurable processor for the $\eta_T$ pairing in characteristic 3 is presented and discussed. The processor can easily be tailored for a low area implementation, for a high throughput implementation, or for a balance between the two. Results are provided for various configurations of the processor when implemented over the field $\mathbb{F}_{3^{97}}$ on an FPGA. As far as we are aware, the processor returns the first characteristic 3 $\eta_T$ pairing in hardware that includes a final exponentiation to a unique value.
A DoS Attack Against the Integrity-Less ESP (IPSec)
This paper describes a new practical DoS attack that can be mounted
against the ``encryption-only'' configuration (i.e. without
authenticated integrity) of ESP as allowed by IPSec.
This finding can serve as a strong argument to convince those in
charge of the IPSec standardization to improve it by banning the
``encryption-only'' configuration from the standard.
RadioGatún, a belt-and-mill hash function
We present an approach to design cryptographic hash functions that builds on and improves the one underlying the Panama hash function. We discuss the properties of the resulting hash functions that need to be investigated and give a concrete design called RadioGatún that is quite competitive with SHA-1 in terms of performance. We are busy performing an analysis of RadioGatún and present in this paper some preliminary results.
Practical Hierarchical Identity Based Encryption and Signature schemes Without Random Oracles
In this paper, we propose a Hierarchical Identity Based Encryption scheme that is
proven secure under the strongest model of \cite{BonehFr01} directly, without relying on random oracles.
The size of the ciphertext is constant, while the size of the public parameters is independent
of the number of bits representing an identity. It is the first in the literature to achieve
such a high security level and space efficiency at the same time. In addition, we also propose
the first Hierarchical Identity Based Signature scheme that is proven secure under the strongest model
without relying on random oracles, using the more standard $q$-SDH assumption. The space
complexity of its signatures and public parameters is as efficient as that of the
proposed encryption scheme.
An Attack on a Certificateless Signature Scheme
This paper demonstrates that a certificateless signature scheme recently proposed by Gorantla and Saxena is insecure. It is shown that an adversary who replaces the public key of a signer can then forge valid signatures for that signer without knowledge of the signer's private key.
A Latency-Free Election Scheme
We motivate and describe the problem of finding protocols for
multiparty computations that only use a single broadcast round per
computation (latency-free computations). We show that solutions
exist for one multiparty computation problem, that of elections,
and more generally, addition in certain groups. The protocol
construction is based on an interesting pseudo-random function
family with a novel property.
Last updated: 2008-01-15
Revisit of KD04
KD04, proposed by K. Kurosawa and Y. Desmedt, is the most efficient public-key
encryption scheme provably secure against adaptive chosen-ciphertext attack in
the standard model based on the decisional Diffie-Hellman problem. We propose a
simplified version of KD04 which is more efficient than KD04 while it can still
be proved secure against adaptive chosen-ciphertext attack in the standard model
based on the decisional Diffie-Hellman problem.
Spelling-Error Tolerant, Order-Independent Pass-Phrases via the Damerau-Levenshtein String-Edit Distance Metric
It is well understood that passwords must be very long and complex to
have sufficient entropy for security purposes. Unfortunately, these
passwords tend to be hard to memorize, and so alternatives are
sought. Smart Cards, Biometrics, and Reverse Turing Tests (human-only
solvable puzzles) are options, but another option is to use
pass-phrases.
This paper explores methods for making pass-phrases suitable for use
with password-based authentication and key-exchange (PAKE) protocols,
and in particular, with schemes resilient to server-file
compromise. In particular, the $\Omega$-method of Gentry, MacKenzie,
and Ramzan is combined with the Bellovin-Merritt protocol to provide
mutual authentication (in the random oracle model
[CGH04,BBP04,MRH04]). Furthermore, since common password-related
problems are typographical errors and the CAPSLOCK key, we show how a
dictionary can be used with the Damerau-Levenshtein string-edit
distance metric to construct a case-insensitive pass-phrase system
that can tolerate zero, one, or two spelling-errors per word, with no
loss in security. Furthermore, we show that the system can be made to
accept pass-phrases that have been arbitrarily reordered, with a
security cost that can be calculated.
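For reference, the metric itself is straightforward to implement. The sketch below is an illustrative implementation of the optimal-string-alignment variant of the Damerau-Levenshtein distance, counting insertions, deletions, substitutions, and adjacent transpositions; the `correct_word` helper is a hypothetical dictionary lookup, not the paper's construction:

```python
def damerau_levenshtein(a: str, b: str) -> int:
    # Optimal string alignment: dynamic program over prefixes of a and b,
    # allowing insertion, deletion, substitution, and adjacent transposition.
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[m][n]

def correct_word(word, dictionary, max_dist=2):
    # Hypothetical helper: map a typed word (case-insensitively) to the
    # unique dictionary word within max_dist edits, or None otherwise.
    word = word.lower()
    hits = [w for w in dictionary if damerau_levenshtein(word, w) <= max_dist]
    return hits[0] if len(hits) == 1 else None

print(damerau_levenshtein("teh", "the"))              # one transposition
print(correct_word("recieve", ["receive", "reverse"]))
```

Tolerating up to two edits per word, as the paper describes, then amounts to running each pass-phrase word through such a lookup before hashing.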
While a pass-phrase space of $2^{128}$ is not achieved by this scheme,
sizes in the range of $2^{52}$ to $2^{112}$ result from various
selections of parameter sizes. An attacker who has acquired the
server-file must exhaustively search this space, while an attacker without
the server-file cannot succeed with non-negligible probability.
Last updated: 2006-11-28
A Weakness in Some Oblivious Transfer and Zero-Knowledge Protocols
We consider oblivious transfer protocols and their applications that
use an underlying semantically secure homomorphic encryption scheme
(e.g. Paillier's). We show that some oblivious transfer protocols
and their derivatives such as private matching, oblivious polynomial
evaluation and private shared scalar product could be subject to an
attack. The same attack can be applied to some non-interactive
zero-knowledge arguments which use homomorphic encryption schemes
underneath. The roots of our attack lie in the additional property
that some semantically secure encryption schemes possess, namely,
the decryption also reveals the random coin used for the encryption,
and that the (sender's or prover's) inputs may belong to a space
that is very small compared to the plaintext space. In this case it
appears that even a semi-honest chooser (verifier) can derive from
the random coin bounds for all or some of the sender's (prover's)
private inputs with non-negligible probability. We propose a fix
which precludes the attacks.
Construction of a Hybrid (Hierarchical) Identity-Based Encryption Protocol Secure Against Adaptive Attacks
The current work considers the problem of obtaining a hierarchical
identity-based encryption (HIBE) protocol which is secure against adaptive key
extraction and decryption queries. Such a protocol is obtained by modifying
an earlier protocol by Chatterjee and Sarkar (which, in turn, is based on a
protocol due to Waters) which is secure only against adaptive key
extraction queries. The setting is quite general in the sense that random
oracles are not used and security is based on the hardness of the decisional
bilinear Diffie-Hellman (DBDH) problem. In this setting, the new construction
provides the most efficient (H)IBE protocol known to date. The technique
for answering decryption queries in the proof is based on earlier work by
Boyen, Mei and Waters.
Ciphertext validity testing is done indirectly through a symmetric authentication
algorithm in a manner similar to the Kurosawa-Desmedt public key
encryption protocol. Additionally, we perform symmetric encryption and
authentication by a single authenticated encryption
algorithm.
Generic Construction of (Identity-based) Perfect Concurrent Signatures
The notion of concurrent signatures was recently introduced by Chen, Kudla and Paterson. In concurrent signature schemes, two entities can produce two signatures that are not binding, until an extra piece of information (namely the keystone) is released by one of the parties. Subsequently, it was noted that the concurrent signature scheme proposed in the seminal paper cannot provide perfect ambiguity. Then, the notion of perfect concurrent signatures was introduced. In this paper, we define the notion of identity-based (or ID-based) perfect concurrent signature schemes. We provide the first generic construction of (ID-based) perfect concurrent signature schemes from ring signature schemes. Using the proposed framework, we give two concrete ID-based perfect concurrent signature schemes based on two major paradigms of ID-based ring signature schemes. Security proofs are based on the random oracle model.
Target Collisions for MD5 and Colliding X.509 Certificates for Different Identities
We have shown how, at a cost of about $2^{52}$ calls to the MD5 compression
function, for any two target messages $m_1$ and $m_2$,
values $b_1$ and $b_2$ can be constructed such that
the concatenated values $m_1\|b_1$ and $m_2\|b_2$ collide under MD5.
Although the practical attack potential of this construction of \emph{target collisions}
is limited, it is of greater concern than random collisions for MD5.
In this note we sketch our construction. To illustrate its practicality, we
present two MD5 based X.509 certificates with identical
signatures but different public keys \emph{and} different
Distinguished Name fields, whereas our previous construction
of colliding X.509 certificates required identical name fields.
We speculate on other possibilities for abusing target collisions.
On Probabilistic versus Deterministic Provers in the Definition of Proofs Of Knowledge
This note points out a gap between two natural formulations of
the concept of a proof of knowledge, and shows that in all
natural cases (e.g., NP-statements) this gap can be closed.
The aforementioned formulations differ by whether they refer to
(all possible) probabilistic or deterministic prover strategies.
Unlike in the rest of cryptography, in the current context,
the obvious transformation of probabilistic strategies
to deterministic strategies does not seem to suffice per se.
Public Key Encryption with Keyword Search based on K-Resilient IBE
An encrypted email is sent from Bob to Alice. A gateway
wants to check whether a certain keyword exists in the email for
some reason (e.g. routing). Nevertheless, Alice does not want the email
to be decrypted by anyone except her, including the gateway itself. This
is a scenario where public key encryption with keyword search (PEKS)
is needed. In this paper we construct a new scheme, KR-PEKS, the K-Resilient
Public Key Encryption with Keyword Search. The new scheme
is secure under a chosen keyword attack without the random oracle. The
ability of constructing a Public Key Encryption with Keyword Search
from an Identity Based Encryption was used in the construction of the
KR-PEKS. The security of the new scheme was proved by showing that
the underlying IBE has a notion of key privacy. The scheme was then modified
in two different ways in order to fulfill each of the following: the first
modification was done to enable multiple-keyword search, and the other
was done to remove the need for secure channels.
Cryptanalysis of a homomorphic public-key cryptosystem over a finite group
The paper cryptanalyses a public-key cryptosystem recently proposed by Grigoriev and Ponomarenko, which encrypts an element from a fixed finite group defined in terms of generators and relations to produce a ciphertext from SL(2, Z). The paper presents a heuristic method for recovering the secret key from the public key, and so this cryptosystem should not be used in practice.
Black-Box Knowledge Extraction Revisited: Universal Approach with Precise Bounds
Rewinding techniques form the essence of many security reductions including proofs for identification and signature schemes. We propose a simple and modular approach for the construction of such proofs.
Straightforward applications of our central result include, but are not limited to, the security of identification schemes, generic signatures and ring signatures. These results are well known, however, we generalise them in such a way that our technique can be used off-the-shelf for future applications. We note that less is more: as a side-effect of our less complex analysis, all our proofs are more precise; for example, we get a new proof of the forking lemma that is $2^{15}$ times more precise than the original result by Pointcheval and Stern. Finally, we give the first precise security analysis of Blum's coin flipping protocol with $k$-bit strings, as yet another example of the strength of our results.
Concurrent Non-Malleable Zero Knowledge
We provide the first construction of a
concurrent and non-malleable zero knowledge argument for every
language in NP. We stress that our construction is in the plain
model with no common random string, trusted parties, or
super-polynomial simulation. That is, we construct a zero knowledge
protocol $\Pi$ such that for every polynomial-time adversary that
can adaptively and concurrently schedule polynomially many
executions of $\Pi$, and corrupt some of the verifiers and some of
the provers in these sessions, there is a polynomial-time simulator
that can simulate a transcript of the entire execution, along with
the witnesses for all statements proven by a corrupt prover to an
honest verifier.
Our security model is the traditional model for concurrent zero
knowledge, where the statements to be proven by the honest provers
are fixed in advance and do not depend on the previous history (but
can be correlated with each other); corrupted provers, of course,
can choose the statements adaptively. We also prove that there exists
some functionality F (a combination of zero knowledge and
oblivious transfer) such that it is impossible to obtain a
concurrent non-malleable protocol for F in this model.
Previous impossibility results for composable protocols ruled out
the existence of protocols for a wider class of functionalities
(including zero knowledge!) but only if these protocols were
required to remain secure when executed concurrently with
arbitrarily chosen different protocols (Lindell, FOCS 2003) or if
these protocols were required to remain secure when the honest
parties' inputs in each execution are chosen adaptively based on the
results of previous executions (Lindell, TCC 2004).
We obtain an $\tilde{O}(n)$-round protocol under the assumption that
one-to-one one-way functions exist. This can be improved to
$\tilde{O}(k\log n)$ rounds under the assumption that there exist
$k$-round statistically hiding commitment schemes. Our protocol is a
black-box zero knowledge protocol.
A new stream cipher: DICING
In this paper, we propose a new synchronous stream cipher named DICING, which can be regarded as a clock-controlled cipher but with a new mechanism for altering steps. Despite its simple construction, DICING has satisfactory performance, about two times faster than AES. As for security, no weaknesses against the known attacks have been found so far; the key size can be 128 bits or 256 bits.
Analysis and Improvements of Two Identity-Based Perfect Concurrent Signature Schemes
The notion of concurrent signatures was introduced by Chen, Kudla
and Paterson in their seminal paper in Eurocrypt 2004. In concurrent
signature schemes, two entities can produce two signatures that are
not binding, until an extra piece of information (namely the
keystone) is released by one of the parties. Upon release of the
keystone, both signatures become binding to their true signers
concurrently. In ICICS 2005, two identity-based perfect concurrent
signature schemes were proposed by Chow and Susilo. In this paper,
we show that these two schemes are unfair: the initial
signer can cheat the matching signer. We present a formal definition
of ID-based concurrent signatures which redresses the flaw in Chow et
al.'s definition, and then propose two simple but significant
improvements that thwart our attacks.
Foundations of Secure E-Commerce: The Order Layer
Uncategorized
We present specifications and a provably secure protocol for secure ordering and provision of digital goods and services. Notably, our protocol includes fully automated resolution of disputes between providers and customers. Disputes may involve the timely receipt of orders and goods, due to communication failures and malicious faults, as well as disputes over the fitness of goods and orders. The protocol and specifications are modular, with precise yet general-purpose interfaces. This allows usage as an underlying service to different e-commerce scenarios and applications, in particular secure online banking and brokerage. The protocol is practical, efficient, reliable and secure, under realistic failure and delay conditions. Our design and specifications are part of a layered architecture for secure e-commerce applications.
On the Power of Simple Branch Prediction Analysis
Very recently, a new software side-channel attack, called the Branch Prediction Analysis (BPA) attack, has been discovered and demonstrated to be practically feasible on popular commodity PC platforms. While this recent attack still had the flavor of a classical timing attack against RSA, where one uses many execution-time measurements under the same key to statistically amplify some small but key-dependent timing differences, we dramatically improve upon that result. We prove that a carefully written spy process running simultaneously with an RSA process is able to collect, during one \emph{single} RSA signing execution, almost all of the secret key bits. We call such an attack, which analyzes the CPU's Branch Predictor states by spying on a single quasi-parallel computation process, a \emph{Simple Branch Prediction Analysis (SBPA)} attack --- sharply differentiating it from those relying on statistical methods and requiring many
computation measurements under the same key. The successful extraction of almost all secret key bits by our SBPA attack against an openSSL RSA implementation proves that the often recommended blinding or so called randomization techniques to protect RSA
against side-channel attacks are, in the context of SBPA attacks, totally useless. Beyond that crucial security implication, which targets implementations assumed to be at least statistically secure, our successful SBPA attack also bears another, equally critical, security implication. Namely, in the context of simple side-channel attacks, it is widely believed that equally balancing the operations after branches is a secure countermeasure against such simple attacks. Unfortunately, this is not true, as even such ``balanced branch'' implementations can be completely broken by our SBPA attacks. Moreover, despite sophisticated hardware-assisted partitioning methods such as memory
protection, sandboxing or even virtualization, SBPA attacks empower an unprivileged process to successfully attack other processes running in parallel on the same processor. Thus, we conclude that SBPA attacks are much more dangerous than previously anticipated,
as they obviously do not belong to the same category as pure timing attacks.
Impossible Differential Cryptanalysis of ARIA and Camellia
This paper studies the security of the block ciphers ARIA and Camellia against impossible differential cryptanalysis. Our work improves the best impossible differential cryptanalysis of ARIA and Camellia known so far. The designers of ARIA expected that no impossible differentials exist for 4-round ARIA. However, we found some nontrivial 4-round impossible differentials, which may lead to a possible attack on 6-round ARIA. Moreover, we found some nontrivial 8-round impossible differentials for Camellia, whereas only 7-round
impossible differentials were previously known. Using the
8-round impossible differentials, we present an attack on
12-round Camellia without the $FL/FL^{-1}$ layers.
A Note On Side-Channels Resulting From Dynamic Compilation
Dynamic compilation systems are of fundamental importance to high
performance execution of interpreted languages such as Java. These systems analyse the performance of an application at run-time and aggressively re-compile and optimise code which is deemed critical to performance. However, the premise that the code executed is not the same code as written by the programmer raises a number of important security concerns. In this paper we examine the specific problem that dynamic compilation, through transformation of the code, may introduce side-channel vulnerabilities where before there were none.
Non-Trivial Black-Box Combiners for Collision-Resistant Hash-Functions don't Exist
A $(k,\ell)$-robust combiner for collision-resistant hash-functions is a
construction which from $\ell$ hash-functions constructs a
hash-function which is collision-resistant if at least $k$ of the
components are collision-resistant. One trivially gets a $(k,\ell)$-robust
combiner by concatenating the outputs of any $\ell-k+1$ of the
components; unfortunately, this is not very practical, as the output
length of the combiner is quite large. We show that this is
unavoidable as no black-box $(k,\ell)$-robust combiner whose output is
significantly shorter than what can be achieved by concatenation
exists. This answers a question of Boneh and Boyen (Crypto'06).
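The trivial concatenation combiner mentioned above can be sketched as follows. The particular component hash functions and the choice of taking the first $\ell-k+1$ of them are illustrative assumptions, not from the paper:

```python
import hashlib

def concat_combiner(components, k, msg):
    """(k,l)-robust combiner by concatenation: hash msg with l-k+1 of the
    l component hash functions. When at least k components are
    collision-resistant, at most l-k are broken, so any l-k+1 of them
    include a collision-resistant one, and a collision for the combiner
    would be a collision for that component as well."""
    l = len(components)
    return b"".join(h(msg) for h in components[: l - k + 1])

# Illustrative instantiation with l = 3 standard hash functions.
components = [
    lambda m: hashlib.sha256(m).digest(),
    lambda m: hashlib.sha512(m).digest(),
    lambda m: hashlib.sha1(m).digest(),
]
digest = concat_combiner(components, 2, b"hello")  # concatenates 2 digests
```

The combiner's output is $\ell-k+1$ full digests long, which is exactly the cost the paper proves unavoidable for black-box combiners.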
Classification of Weil Restrictions Obtained by (2,...,2) Coverings of P^1
In this paper, we give a general classification of
cryptographically used elliptic and hyperelliptic curves
which can be attacked by the Weil descent attack and
index calculus algorithms.
In particular, we classify all the Weil
restrictions of these curves obtained by $(2,...,2)$ coverings.
A density analysis of these curves is presented, and explicit
defining equations of such weak curves are also provided.
Generic Transformation to Strongly Unforgeable Signatures
Recently, several generic transformation techniques have been proposed
for converting unforgeable signature schemes (where the message in the
forgery has not been signed before) into strongly unforgeable ones
(where the message in the forgery could have been signed previously).
Most of the techniques are based on trapdoor hash functions, and all
of them require adding supplementary components to the original key
pair of the signature
scheme. In this paper, we propose a new generic transformation which
converts \emph{any} unforgeable signature scheme into a strongly
unforgeable one, and also keeps the key pair of the signature scheme
unchanged. Our technique is based on \emph{strong one-time signature
schemes}. We show that they can be constructed efficiently from
any one-time signature scheme that is based on one-way functions.
The performance of our technique also compares favorably
with that of those trapdoor-hash-function-based ones. In addition,
this new generic transformation can also be used for attaining
strongly unforgeable signature schemes in other cryptographic
settings which include certificateless signature, identity-based
signature, and several others. To the best of our knowledge, a similar
extent of versatility is not supported by any of the comparable
techniques. Finally, and of independent interest, we show that our
generic transformation technique can be modified into an
\emph{on-line/off-line} signature scheme, which possesses a very
efficient signing process.
Private and Efficient Stable Marriages (Matching)
Uncategorized
We provide algorithms guaranteeing high levels of privacy by computing
uniformly random solutions to stable marriage problems. We also
provide efficient algorithms extracting a non-uniformly random
solution and guaranteeing t-privacy for any threshold t. The most
private solution is expensive and is based on a distributed/shared CSP
model of the problem. The most efficient version is based on running
the Gale-Shapley algorithm after shuffling the men (or women) in the
shared secret description of the problem.
We introduce an efficient arithmetic circuit for the Gale-Shapley
algorithm that can employ a cryptographic primitive
we propose for vector access with an arbitrary number of participants.
Participants want to find a stable matching
as defined by their secret preferences and without leaking any of these secrets.
An additional advantage of the solvers based on secure simulations of
arithmetic circuits is that they return a solution picked randomly
from among the existing solutions. Besides increasing privacy to the
requested level of t-privacy, this also provides fairness to
participants.
A real implementation of one of the described secure solutions, usable
by participants on distinct computers across the Internet, was
developed (by students as a class assignment) and is available on our
web site.
A Subject-Delegated Decryption Scheme with ``Tightly" Limited Authority
In this paper, we present a new proxy cryptosystem named the subject-delegated decryption scheme, in which the original decryptor delegates
decryption authority to multiple proxies according to different subjects. The advantage of our scheme is that the proxy authorities are tightly limited (``tightly'' limited authority): a proxy's authority can be temporarily revoked even if the validity period of the proxy key has not expired. Consequently, our protocol is
more practical than existing protocols, because the secrecy of the original decryptor can be protected efficiently from his proxies, especially when a proxy becomes corrupted. Our scheme is efficient because its encryption method is based on a hybrid of symmetric-key and public-key cryptographic techniques. We prove security under a variant of the decisional Bilinear Diffie-Hellman (BDH) assumption.
Verifiably Encrypted Signature Scheme with Threshold Adjudication
Verifiably encrypted signatures are useful in handling fair exchange problems, especially online contract signing. In this paper, we propose a verifiably encrypted signature scheme using bilinear pairings. Our scheme allows adjudication to be performed in a threshold manner to achieve robustness. We show that the distribution of the adjudication capability is robust and unforgeable. Our scheme is secure against extraction and existential forgery in the random oracle model.
A Novel Secure Electronic Voting Protocol Based On Bilinear Pairings
In 1997, Cranor and Cytron proposed an electronic voting protocol, the Sensus
protocol, intended to be applied in a real election. However, in 2005 Fabrizio et al.
pointed out a vulnerability in their protocol: the validator can
impersonate any abstaining voter and cast a vote in their place. They proposed a scheme,
the Seas protocol, to fix this weakness. In this paper, we show that the Seas
protocol is not only inefficient but also impractical. Moreover, we propose a
sound electronic voting protocol, based on the Sensus protocol and bilinear pairings,
which satisfies the security requirements of an e-voting system.
MV3: A new word based stream cipher using rapid mixing and revolving buffers
MV3 is a new word based stream cipher for encrypting long streams of data. A direct adaptation of a byte based cipher such as RC4 into a 32- or 64-bit word version will obviously need vast amounts of memory. This scaling issue necessitates a look for new components and principles, as well as mathematical analysis to justify their use. Our approach, like RC4's, is based on rapidly mixing random walks on directed graphs (that is, walks which reach a random state quickly, from any starting point). We begin with some well understood walks, and then introduce nonlinearity in their steps in order to improve security and show long term statistical correlations are negligible. To minimize the short term correlations, as well as to deter attacks using equations involving successive outputs, we provide a method for sequencing the outputs derived from the walk using three revolving buffers. The cipher is fast --- it runs at a speed of less than 5 cycles per byte on a Pentium IV processor. A word based cipher needs to output more bits per step, which exposes more correlations for attacks. Moreover we seek simplicity of construction and transparent analysis. To meet these requirements, we use a larger state and claim security corresponding to only a fraction of it. Our design is for an adequately secure word-based cipher; our very preliminary estimate puts the security close to exhaustive search for keys of size < 256 bits.
Cryptanalyses of Some Multimedia Encryption Schemes
Uncategorized
Since the early 1990s, chaos has been widely investigated as a basis for multimedia encryption schemes, owing to its good cryptography-like characteristics, such as ergodicity, the mixing and exactness properties, and sensitivity to initial conditions.
This thesis is concerned with the cryptanalysis of some recently proposed chaos-related multimedia encryption schemes. The security of the schemes against some common attack methods, such as the brute-force attack and known/chosen-plaintext attacks, is investigated in detail with theoretical analyses and experimental results.
The main achievements are as follows:
1. Based on a normalized encryption/decryption model, from a general perspective this thesis analyzes the security of permutation-only multimedia ciphers.
It is pointed out that all permutation-only image ciphers are insecure against known/chosen-plaintext attacks, in the sense that only O(log_L(MN)) known/chosen plain-images are enough to break the ciphers, where MN is the size of the image and L is the number of possible distinct pixel values. It is also found that the attack complexity is only O(n(MN)^2), where n is the number of known/chosen plain-images used. A recently proposed permutation-only image cipher called hierarchical chaotic image encryption
(HCIE) serves as a concrete example to show how the attacks work. Experiments are presented to verify the feasibility of the known/chosen-plaintext attacks.
2. The security of a recently proposed chaos-based image encryption scheme called RCES (also called RSES) is analyzed, and we find that it can be broken with only one or two known/chosen plaintexts. In addition, the security of RCES against the brute-force attack was overestimated. Both theoretical and experimental analyses are given to show the performance of the suggested known/chosen-plaintext attacks.
3. This thesis analyzes the security of a new multistage encryption system (MES) recently proposed in ISCAS'2004. It is found that MES is insecure against a differential chosen-plaintext/ciphertext attack. Experiments are given to support the proposed attack. It is also pointed out that the security of MES against brute-force attacks is not sufficiently high.
4. This thesis analyzes the security of a new domino signal encryption algorithm(DSEA), and points out the following weaknesses: 1) its security against the brute-force attack was overestimated; 2) it is not sufficiently secure against ciphertext-only attacks, and only one ciphertext is enough to get some information
about the plaintext and to break the value of a sub-key; 3) it is insecure against known/chosen-plaintext attacks, in the sense that the secret key can be recovered from a number of continuous bytes of only one known/chosen plaintext and the corresponding ciphertext. Experimental results are given to show the performance of the proposed attacks.
5. A comprehensive analysis on the security of two-dimensional circulation encryption algorithm (TDCEA) is presented. The following security problems are found: 1) there exist some essential security defects in TDCEA; 2) two known-plaintext attacks can break TDCEA; 3) the chosen-plaintext versions of the aforementioned two known-plaintext attacks can break TDCEA even with a smaller complexity and a better performance. Some experiments are given to show the security defects of TDCEA and the feasibility of the proposed known-plaintext attacks.
6. The security of two neural-network-based encryption schemes, proposed by Yen et al. and Zhou et al. respectively, is analyzed in detail. It is found that the former can be easily broken by known/chosen-plaintext attacks and the latter can be broken by a chosen-plaintext attack. Experimental analyses are given to support the feasibility of the proposed attacks.
7. Some insecure properties of a VoIP encryption scheme named hierarchical data security protection (HDSP) are pointed out, which are then used to develop known/chosen-plaintext attacks. The following facts are found: 1) given n known plaintexts, only about (50/2n)% of secret chaotic bits cannot be uniquely determined; 2) given only one specially-chosen plaintext, all secret chaotic bits can be uniquely derived; 3) the secret key can be derived
with a practically small complexity even when only one plaintext is known (or chosen). Experiments are given to show the feasibility of the proposed attacks. In addition, it is found that the security of HDSP against the brute-force attack is not practically strong.
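The permutation-recovery idea behind item 1 can be sketched in a hypothetical toy model (not the thesis code): with cipher[i] = plain[perm[i]], each known plaintext/ciphertext pair splits the candidate positions by symbol value, so roughly log_L(MN) pairs shrink every candidate set to a singleton:

```python
def recover_permutation(pairs, size):
    """Narrow down a secret position permutation from known (plain, cipher)
    pairs, assuming cipher[i] = plain[perm[i]]. Each pair restricts the
    candidates for perm[i] to positions holding the matching symbol value."""
    candidates = [set(range(size)) for _ in range(size)]
    for plain, cipher in pairs:
        for i in range(size):
            candidates[i] = {j for j in candidates[i] if plain[j] == cipher[i]}
    return candidates
```

With L = 2 symbol values and size 4, two pairs (log_2 4) already pin down the permutation, matching the O(log_L(MN)) plaintext requirement claimed for permutation-only ciphers.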
Last updated: 2006-11-03
A New Family of Ideal Multipartite Access Structures Based on MSP
Any access structure can be viewed as a multipartite access structure: the set of players is divided into K different entities, in such a way that all players within an entity play the same role in the access structure. We suppose there is a threshold access structure within each entity, as well as a threshold access structure across the K entities, and consider their composite access structure. This yields a new family of multipartite access structures; we provide a secret sharing scheme realizing it and prove that the scheme is ideal.
Efficient and Provably Secure Multi-Recipient Signcryption from Bilinear Pairings
Signcryption is a
cryptographic primitive that performs signature and encryption
simultaneously, at lower computational cost and communication
overhead than the signature-then-encryption approach. In this
paper, we propose an efficient multi-recipient signcryption scheme
based on the bilinear pairings which broadcasts a message to
multiple users in a secure and authenticated manner. We prove its
semantic security and unforgeability under the Gap Diffie-Hellman
problem assumption in the random oracle model. The proposed scheme
is more efficient than re-signcrypting a message n times using a
signcryption scheme in terms of computational costs and
communication overheads.
An Efficient and Secure Two-flow Zero-Knowledge Identification Protocol
In this paper, we propose a new zero-knowledge identification
protocol. While the protocol consists of only two message flows, it does
not rely on any underlying signature or encryption scheme. Its zero-knowledge
property is preserved under concurrent composition and reset
settings. It is secure under the strongest attack model which
incorporates concurrent attacks, active-intruder attacks and reset
attacks. Meanwhile its performance in computation and communication
is close to that of the most efficient identification protocols not
based on signature or encryption systems, most of which are insecure in this
strong
attack model.
High Order Linearization Equation (HOLE) Attack on Multivariate Public Key Cryptosystems
In the CT-track of the 2006 RSA conference, a new multivariate
public key cryptosystem, which is called the Medium Field Equation
(MFE) multivariate public key cryptosystem, was proposed by Wang,
Yang, Hu and Lai. We use the second order linearization equation
attack method of Patarin to break MFE. Given a ciphertext, we can
derive the plaintext within $2^{23}$ $\F_{2^{16}}$-operations, after
a one-time precomputation per public key of complexity less than
$2^{52}$. We also propose a high order
linearization equation (HOLE) attack on multivariate public key
cryptosystems, which is a further generalization of the (first and
second order) linearization equation (LE). This method can be used
to attack extensions of the current MFE.
An ID-Based Deniable Authentication Protocol on Pairings
Recently, Yoon et al. and Cao et al. proposed two deniable authentication protocols,
respectively. Both claim that their protocols achieve the deniability property.
However, in this paper, we point out that each of their protocols suffers from certain
malicious attacks. We then propose a new identity-based deniable authentication
protocol on pairings which not only attains the desired deniability property but also
withstands these attacks.
Colliding Message Pair for 53-Step HAS-160
Uncategorized
We present a collision attack on the hash function HAS-160 reduced
to 53-steps. The attack has a complexity of about $2^{35}$ hash
computations. The attack is based on the work of Cho et al.
presented at ICISC 2006. In this article, we improve their attack
complexity by a factor of about $2^{20}$ using a slightly different
strategy for message modification in the first 20 steps of the hash
function.
Discrete Logarithms in Generalized Jacobians
Déchène has proposed generalized Jacobians as a source of groups for public-key cryptosystems based on the hardness of the Discrete Logarithm Problem (DLP). Her specific proposal gives rise to a group isomorphic to the semidirect product of an elliptic curve and a multiplicative group of a finite field. We explain why her proposal has no advantages over simply taking the direct product of groups. We then argue that generalized Jacobians offer poorer security and efficiency than standard Jacobians.
Improved Efficiency for Private Stable Matching
Uncategorized
At Financial Crypto 2006, Golle presented a novel framework for the privacy preserving computation of a stable matching (stable marriage). We show that the communication complexity of Golle's main protocol is substantially greater than what was claimed in that paper, in part due to surprising pathological behavior of Golle's variant of the Gale-Shapley stable matching algorithm. We also develop new protocols in Golle's basic framework with greatly reduced communication complexity.
On the Security of Generalized Jacobian Cryptosystems
Uncategorized
Generalized Jacobians are natural candidates to use in discrete logarithm (DL) based cryptography since they include the multiplicative group of finite fields, algebraic tori, elliptic curves as well as all Jacobians of curves. This thus led to the study of the simplest nontrivial generalized Jacobians of an elliptic curve, for which an efficient group law algorithm was recently obtained. With these explicit equations at hand, it is now possible to concretely study the corresponding discrete logarithm problem (DLP); this is what we undertake in this paper. In short, our results highlight the close links between the DLP in these generalized Jacobians and the ones in the underlying elliptic curve and finite field.
Extended Double-Base Number System with applications to Elliptic Curve Cryptography
We investigate the impact of larger digit sets on the length of
Double-Base Number system (DBNS) expansions.
We present a new representation system called {\em extended DBNS}
whose expansions can be extremely sparse.
When compared with double-base chains, the average length of
extended DBNS expansions of integers of size in the range 200--500 bits
is approximately reduced by $20\%$ using one precomputed point, $30\%$ using two, and $38\%$ using
four.
We also discuss a new approach to approximate an integer $n$ by $d2^a3^b$ where $d$ belongs to a given digit set.
This method, which requires some precomputations as well, leads to realistic DBNS implementations.
Finally, a left-to-right scalar multiplication relying on extended DBNS is given.
On an elliptic curve where operations are performed in Jacobian coordinates, improvements of up to $13\%$
overall can be expected with this approach when compared to window NAF methods
using the same number of precomputed points.
In this context, it is therefore the fastest method known to date to compute a scalar multiplication on a
generic elliptic curve.
Designated Verifier Signature Scheme Based on Braid Groups
Uncategorized
Artin's braid groups have recently been suggested as a new source for public-key cryptography. In this paper we first propose
a designated verifier group signature scheme based on the conjugacy search problem and the
root problem in braid groups, which are believed to be hard problems. Furthermore, our scheme can conceal
the message to be signed, so that it can be applied to e-voting and calls for tenders.
Anonymous Secure Communication in Wireless Mobile Ad-hoc Networks
The main characteristic of a mobile ad-hoc network is its infrastructure-less, highly dynamic topology, which is subject to malicious traffic analysis. Malicious intermediate nodes in wireless mobile ad-hoc networks are a threat to both the security and the anonymity of exchanged information. To protect anonymity and achieve security of nodes in mobile ad-hoc networks, an anonymous on-demand routing protocol, termed RIOMO, is proposed. For this purpose, pseudo IDs of the nodes are generated using pairing-based cryptography; nodes can generate their own pseudo IDs independently. As a result, RIOMO reduces pseudo-ID maintenance costs. Only trustworthy nodes are allowed to take part in routing to discover a route. To ensure trustworthiness, each node has to authenticate itself to its neighbors through an anonymous authentication process. Thus RIOMO enables nodes to communicate safely without disclosing their identities; it also provides desirable anonymity properties such as identity privacy, location privacy, route anonymity, and robustness against several attacks.
An Algorithm for the $\eta_T$ Pairing Calculation in Characteristic Three and its Hardware Implementation
In this paper, we propose a modified $\eta_T$ pairing algorithm in
characteristic three which does not need any cube root extraction. We
also discuss its implementation on a low cost platform which hosts an
Altera Cyclone~II FPGA device. Our pairing accelerator is ten times
faster than previously known FPGA implementations in characteristic
three.
Analyzing the HB and HB+ Protocols in the ``Large Error'' Case
HB and HB+ are two shared-key, unidirectional authentication protocols whose extremely low computational cost makes them potentially well-suited for severely resource-constrained devices. Security of these protocols is based on the conjectured hardness of learning parity with noise; that is, learning a secret $s$ given ``noisy'' dot products of $s$ that are incorrect with probability $\epsilon$.
Although the problem of learning parity with noise is meaningful for any constant $\epsilon < 1/2$, existing proofs of security for HB and HB+ only imply security when $\epsilon < 1/4$. In this note, we show how to extend these proofs to the case of arbitrary $\epsilon < 1/2$.
Invisible Designated Confirmer Signatures without Random Oracles
Uncategorized
We construct the first $O(1)$-size designated confirmer signatures (DCS) with security in the state-of-the-art model of Camenisch and Michels, Eurocrypt 2000, without random oracles. In particular, we achieve the security notion called the "invisibility of signature" therein.
The Average Transmission Overhead of Broadcast Encryption
We consider broadcast encryption schemes wherein a center needs to
broadcast a secret message to a privileged set of receivers. We
prescribe a probability distribution $P$ on the privileged set. In
this setting, the transmission overhead can be viewed as a random
variable over $P$ and we define its expected value as the average transmission overhead (or ato).
Given $P$, Shannon's entropy function $H(\cdot)$ provides a lower
bound on the average number of bits required to identify every
privileged set. This provides a natural lower bound for the ato in
terms of $H(P)$. For session key distribution, we consider the
subset cover framework and bound the ato in terms of the size of the
cover. We further specialize our bound to accommodate storage
constraints at receivers.
We consider two families of distributions for $P$ that occur
naturally in broadcast networks.
-- Each receiver independently joins the privileged set with
probability $p$.
-- The privileged set is selected uniformly from a collection
of subsets of receivers.
We evaluate the ato of some practical schemes such as the subset
difference method, the LSD scheme and the Partition-and-Power scheme
under these distributions. Our investigations lead us to conclude
that each scheme is inherently tailored to perform optimally for
specific distributions.
Computational Soundness of Formal Indistinguishability and Static Equivalence
In the research of the relationship between the formal and the computational view of cryptography, a recent approach uses static equivalence from cryptographic pi calculi as a notion of formal indistinguishability. Previous work has shown that this yields the soundness of natural interpretations of some interesting equational theories, such as certain cryptographic operations and a theory of XOR. In this paper however, we argue that static equivalence is too coarse for sound interpretations of equational theories in general. We show some explicit examples how static equivalence fails to work in interesting cases. To fix this problem, we propose a notion of formal indistinguishability that is more flexible than static equivalence. We provide a general framework along with general theorems, and then discuss how this new notion works for the explicit examples where static equivalence failed to ensure soundness. We also improve the treatment by using ordered sorts in the formal view, and by allowing arbitrary probability distributions of the interpretations.
Algebraic Immunity of S-boxes Based on Power Mappings: Analysis and Construction
The algebraic immunity of an S-box depends on the number and type
of linearly independent multivariate equations it satisfies. In
this paper techniques are developed to find the number of linearly
independent, multivariate, bi-affine and quadratic equations for
S-boxes based on power mappings. These techniques can be used to
prove the exact number of equations for any class of power
mappings. Two algorithms to calculate the number of bi-affine and
quadratic equations for any $(n,n)$ S-box based on power mapping
are also presented. The time complexity of both algorithms is only
$O(n^2)$. To design algebraically immune S-boxes four new classes
of S-boxes that guarantee zero bi-affine equations and one class
of S-boxes that guarantees zero quadratic equations are presented.
The algebraic immunity of power mappings based on Kasami, Niho,
Dobbertin, Gold, Welch and inverse exponents is discussed along
with other cryptographic properties, and several cryptographically
strong S-boxes are identified. It is conjectured that a known
Kasami like APN power mapping is maximally nonlinear and a known
Kasami like maximally nonlinear power mapping is differentially
4-uniform. Finally an open problem to find an $(n,n)$ bijective
nonlinear S-box with more than $5n$ quadratic equations is solved
and it is conjectured that the upper bound on this number is
$\frac{n(n-1)}{2}$.