All papers in 2010 (Page 4 of 660 results)
On the Security of Identity Based Threshold Unsigncryption Schemes
Signcryption is a cryptographic primitive that provides confidentiality and authenticity simultaneously at a cost significantly lower than that of the naive combination of encrypting and signing the message. Threshold signcryption is used when a message to be sent needs the authorisation of a certain number of members of an organisation: unless a given number of members (known as the threshold) join the signcryption process, the message cannot be signcrypted. Threshold unsigncryption applies the same constraint to the unsigncryption process. In this work, we cryptanalyze two threshold unsigncryption schemes. We show that neither scheme meets the stringent requirements of insider security, and we propose attacks on both confidentiality and unforgeability. We also propose an improved identity based threshold unsigncryption scheme and give a formal proof of security in a new, stronger security model.
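The threshold constraint described above is typically realized with a secret-sharing scheme. As an illustration of the threshold idea only (a toy sketch, not the paper's unsigncryption scheme), here is a minimal Shamir (t, n) sharing over a prime field:

```python
import random

# Toy Shamir (t, n) threshold sharing over a prime field. Any t shares
# reconstruct the secret; fewer reveal nothing. Illustrative only.
P = 2**127 - 1  # a Mersenne prime used as the field modulus (illustrative)

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 over GF(P)."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret
```

In a threshold unsigncryption scheme the shared value is a decryption capability rather than a plain secret, but the "at least t participants" mechanic is the same.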
Identity Based Self Delegated Signature - Self Proxy Signatures
A proxy signature scheme is a variant of a digital signature scheme in which a signer delegates his signing rights to another person, called the proxy signer, so that the proxy signer can generate signatures on behalf of the actual signer in his absence. A Self Proxy Signature (SPS) is a type of proxy signature wherein the original signer delegates the signing rights to himself (self-delegation), thereby generating temporary public and private key pairs for himself. Thus, in SPS the user can prevent the exposure of his private key due to repeated use. In this paper, we propose the first identity based self proxy signature scheme. We give a generic scheme and a concrete instantiation in the identity based setting. We define an appropriate security model for the same and prove both the generic and identity based schemes secure in that model.
The Fiat--Shamir Transform for Group and Ring Signature Schemes
The Fiat-Shamir (FS) transform is a popular tool to produce
particularly efficient digital signature schemes out of identification protocols.
It is known that the resulting signature scheme is secure (in the
random oracle model) if and only if the identification protocol is secure
against passive impersonators. A similar result holds for constructing
ID-based signature schemes out of ID-based identification protocols.
The transformation has also been applied to identification protocols with
additional privacy properties. So, via the FS transform, ad-hoc group
identification schemes yield ring signatures and identity escrow schemes
yield group signature schemes. Unfortunately, results akin to those above
are not known to hold for these latter settings: the security of the
schemes obtained this way does not clearly follow from that of the base
identification protocol and needs to be proved from scratch. Even worse,
some papers simply assume that the transformation works, without proof.
In this paper we provide the missing foundations for the use of the FS
transform in these more complex settings. We start by defining a formal
security model for identity escrow schemes (a concept proposed earlier
but never rigorously formalized). Our main result consists of necessary
and sufficient conditions for an identity escrow scheme to yield (via the
FS transform) a secure group signature scheme. We also discuss several
variants of this result that account for constructions of group
signatures fulfilling weaker notions of security. Finally, using the
similarity between group and ring signature schemes, we give analogous
results for the latter primitive.
Last updated: 2010-08-29
CCA-Secure PRE Scheme without Public Verifiability
In a proxy re-encryption (PRE) scheme, a semi-trusted proxy can
transform a ciphertext under Alice's public key into another
ciphertext that Bob can decrypt. However, the proxy cannot access
the plaintext. Due to its transformation property, PRE can be used
in many applications, such as encrypted email forwarding. All
the existing CCA-secure PRE schemes have a crucial property: the
public verifiability of the original ciphertext, i.e., everyone can
check the validity of the original ciphertext. In this paper, we
propose a novel CCA-secure PRE scheme without public
verifiability. This proposal is proven secure based on the DDH
assumption in the standard model. To the best of our knowledge, our
proposal is the first CCA-secure unidirectional PRE scheme
without pairings in the standard model, which answers an open
problem in the PRE field.
Secure Connectivity Model In Wireless Sensor Network (WSN) Using 1st Order Reed-Muller Codes
In this paper, we suggest the idea of separately treating the connectivity and communication models of a Wireless Sensor Network (WSN). We then propose a novel connectivity model for a WSN using first-order Reed-Muller codes. While the model has a hierarchical structure, we show that it works equally well for distributed WSNs. Though one can use any communication model, we prefer the communication model suggested by Ruj and Roy [1] for all computations and results in our work. One might use two suitable secure (symmetric) cryptosystems on the two different models, viz. connectivity and communication. By doing so, we show how resiliency and scalability are appreciably improved compared to Ruj and Roy [1].
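For background on the codes used here, a small sketch (not taken from the paper) that enumerates the codewords of the first-order Reed-Muller code RM(1, m), i.e. the evaluations of affine Boolean functions over F2:

```python
from itertools import product

def rm1_codewords(m):
    """All codewords of the first-order Reed-Muller code RM(1, m):
    evaluations of a0 + a1*x1 + ... + am*xm over F2 at all 2^m points."""
    points = list(product((0, 1), repeat=m))
    words = []
    for coeffs in product((0, 1), repeat=m + 1):
        a0, a = coeffs[0], coeffs[1:]
        words.append(tuple((a0 + sum(ai * xi for ai, xi in zip(a, x))) % 2
                           for x in points))
    return words

# RM(1, 3): 16 codewords of length 8, minimum distance 4.
words = rm1_codewords(3)
```

RM(1, m) has 2^(m+1) codewords of length 2^m with minimum distance 2^(m-1), which is what makes it attractive as a combinatorial building block for key-predistribution designs.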
Near-Collisions on the Reduced-Round Compression Functions of Skein and BLAKE
The SHA-3 competition organized by NIST aims
to find a new hash standard as a replacement for SHA-2. So far, 14
submissions have been selected as second-round candidates,
including Skein and BLAKE, both of which have components based on
modular addition, rotation and bitwise XOR (ARX). In this paper, we
propose improved near-collision attacks on the reduced-round
compression functions of Skein and a variant of BLAKE. The attacks
are based on linear differentials of the modular additions. The
computational complexities of the near-collision attacks on the 4-round
compression function of BLAKE-32 and the 4-round and 5-round compression
functions of BLAKE-64 are 2^{21}, 2^{16} and 2^{216}
respectively, and the attacks on the 24-round compression functions of
Skein-256, Skein-512 and Skein-1024 have a complexity of 2^{60},
2^{230} and 2^{395} respectively.
High-Speed Software Implementation of the Optimal Ate Pairing over Barreto-Naehrig Curves
This paper describes the design of a fast software library for the computation of the optimal ate pairing on a Barreto--Naehrig elliptic curve. Our library is able to compute the optimal ate pairing over a $254$-bit prime field $\mathbb{F}_{p}$ in just $2.63$ million clock cycles on a single core of an Intel Core i7 $2.8$GHz processor, which implies that the pairing computation takes $0.942$msec. We are able to achieve this performance by a careful implementation of the base field arithmetic using the customary Montgomery multiplier for prime fields. The prime field is constructed via the Barreto--Naehrig polynomial parametrization of the prime $p$ given as $p = 36t^4 +36t^3 +24t^2 +6t+1$, with $t = 2^{62} - 2^{54} + 2^{44}$. This selection of $t$ allows us to obtain important savings in both the Miller loop and the final exponentiation steps of the optimal ate pairing.
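The quoted parametrization can be checked directly. The sketch below reproduces the BN prime from the abstract; the group-order polynomial n(t) and trace tr(t) are standard Barreto-Naehrig facts added here for the check, not stated in the abstract:

```python
# Barreto-Naehrig parametrization with the t chosen in the abstract.
t = 2**62 - 2**54 + 2**44
p = 36*t**4 + 36*t**3 + 24*t**2 + 6*t + 1   # field characteristic
n = 36*t**4 + 36*t**3 + 18*t**2 + 6*t + 1   # prime group order (standard BN)
tr = 6*t**2 + 1                             # Frobenius trace (standard BN)

# Sanity checks: p is a 254-bit prime candidate, and #E(F_p) = p + 1 - tr = n.
assert p.bit_length() == 254
assert p + 1 - tr == n
```

The sparse form of t (three powers of two) is what yields the savings mentioned for the Miller loop and final exponentiation.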
Cryptographic Pairings Based on Elliptic Nets
In 2007, Stange proposed a novel method of computing the Tate pairing on an elliptic curve over a finite field.
This method is based on elliptic nets,
which are maps from $\mathbb{Z}^n$ to a ring that satisfy a certain recurrence relation.
In this paper, we explicitly give formulae, based on elliptic nets, for computing several variants of the Tate pairing: the Ate, Ate$_i$, R-Ate and optimal pairings.
We also discuss their efficiency using some experimental results.
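For intuition about the recurrence such nets satisfy, a minimal sketch of the one-dimensional case (an elliptic divisibility sequence over the integers, computed with the standard doubling recurrences; this is background, not the paper's pairing formulae):

```python
from functools import lru_cache

# A one-dimensional elliptic net over Z (an elliptic divisibility sequence),
# with the classical initial values W(1..4) = 1, 1, -1, 1.
INIT = {0: 0, 1: 1, 2: 1, 3: -1, 4: 1}

@lru_cache(maxsize=None)
def W(n):
    if n in INIT:
        return INIT[n]
    m, r = divmod(n, 2)
    if r:  # n = 2m + 1
        return W(m + 2) * W(m) ** 3 - W(m - 1) * W(m + 1) ** 3
    # n = 2m (the division by W(2) is exact for a true EDS)
    return W(m) * (W(m + 2) * W(m - 1) ** 2 - W(m - 2) * W(m + 1) ** 2) // W(2)
```

Stange's pairing algorithm runs an analogous double-and-add over blocks of net values taken modulo p, rather than Miller's algorithm over function values.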
A Digital Signature Using Multivariate Functions on Quaternion Ring
We propose a digital signature scheme on a non-commutative quaternion ring over finite fields. We generate a multivariate function F(X) of high degree and construct the digital signature scheme using F(X). Our system is immune to Gröbner basis attacks because obtaining the parameters of F(X), which serve as the secret keys, requires solving a system of multivariate algebraic equations, which is one of the NP-complete problems.
Decentralizing Attribute-Based Encryption
We propose a Multi-Authority Attribute-Based Encryption (ABE) system.
In our system, any party can become an authority and there is no
requirement for any global coordination other than the creation of an
initial set of common reference parameters. A party can simply act as
an ABE authority by creating a public key and issuing private keys to
different users that reflect their attributes. A user can encrypt
data in terms of any boolean formula over attributes issued from any
chosen set of authorities. Finally, our system does not require any
central authority.
In constructing our system, our largest technical hurdle is to make it collusion resistant. Prior Attribute-Based Encryption systems achieved collusion resistance when the ABE system authority ``tied'' together different components (representing different attributes) of a user's private key by randomizing the key. However, in our system each component will come from a potentially different authority, where we assume no coordination between such authorities. We create new techniques to tie key components together and prevent collusion attacks between users with different global identifiers.
We prove our system secure using the recent dual system encryption
methodology where the security proof works by first converting the
challenge ciphertexts and private keys to a semi-functional form and
then arguing security. We follow a recent variant of the dual system
proof technique due to Lewko and Waters and build our system using
bilinear groups of composite order. We prove security under similar
static assumptions to the LW paper in the random oracle model.
A Security Enhancement and Proof for Authentication and Key Agreement (AKA)
In this work, we consider Authentication and Key Agreement (AKA),
a popular client-server Key Exchange (KE) protocol, commonly used in wireless standards (e.g., UMTS), and widely considered for new applications. We discuss natural potential usage scenarios for AKA,
attract attention to subtle vulnerabilities, propose a simple and efficient AKA enhancement, and provide its formal proof of security.
The vulnerabilities arise due to the fact that AKA is not a secure KE in the standard cryptographic sense, since Client C does not contribute randomness to the session key. We argue that AKA remains secure in current deployments where C is an entity controlled by a single tamper-resistant User Identity Module (UIM). However, we also show that AKA is insecure if several Client's devices/UIMs share his identity and key.
We show the practical applicability and efficiency benefits of such multi-UIM scenarios. As our main contribution, we adapt AKA to this setting with only minimal changes, while adhering to AKA design goals and preserving its advantages and features. Our protocol involves one extra PRFG evaluation and no extra messages. We formally prove the security of the resulting protocol. We discuss how our security improvement allows simplification of some of AKA's security heuristics, which may make our protocol more efficient and robust than AKA even for the current deployment scenarios.
Improved Algebraic Cryptanalysis of QUAD, Bivium and Trivium via Graph Partitioning on Equation Systems
We present a novel approach for solving systems of polynomial equations via graph partitioning. The concept of a variable-sharing graph of a system of polynomial equations is defined. If such a graph is disconnected, then the corresponding system of equations can be split into smaller ones that can be solved individually. This can provide a significant speed-up in computing the solution to the system, but is unlikely to occur either randomly or in applications. However, by deleting certain vertices, the variable-sharing graph can be disconnected in a balanced fashion, and in turn the system of polynomial equations is separated into smaller ones of similar sizes. In graph-theoretic terms, this process is equivalent to finding balanced vertex partitions with minimum-weight vertex separators. Techniques for finding these vertex partitions are discussed, and experiments are performed to evaluate their practicality for general graphs and systems of polynomial equations. Applications of this approach to the algebraic cryptanalysis of symmetric ciphers are presented. For the QUAD family of stream ciphers, we show how a malicious party can manufacture conforming systems that can be easily broken. For the stream cipher Trivium and its variants, we achieve significant speedups in algebraic attacks launched against them. In each of these cases, the systems of polynomial equations involved are well suited to our graph partitioning method. These results may open a new avenue for evaluating the security of symmetric ciphers against algebraic attacks.
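The disconnected case described above is easy to picture in code. The following sketch (an illustration of the concept, not the paper's implementation) builds a variable-sharing graph and splits a system into independently solvable subsystems:

```python
from collections import defaultdict

# Vertices are variables; two variables are joined when they occur in the
# same equation. If the graph is disconnected, the system splits.
def split_system(equations):
    """`equations` maps an equation name to the set of variables it uses.
    Returns groups of equation names that can be solved independently."""
    adj = defaultdict(set)
    for vars_ in equations.values():
        for v in vars_:
            adj[v] |= vars_ - {v}
    seen, components = set(), []
    for start in adj:            # find connected components by DFS
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] - comp)
        seen |= comp
        components.append(comp)
    return [[e for e, vs in equations.items() if vs <= comp]
            for comp in components]
```

The paper's harder step, deleting a small separator so that the remaining graph falls apart in a balanced way, reduces to exactly this check on the graph left after the separator variables are guessed or eliminated.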
Lattice-theoretic Characterization of Secret Sharing Representable Connected Matroids
Necessary and sufficient conditions for a connected matroid to be secret-sharing (ss-)representable are obtained. We show that the flat lattices of ss-representable matroids are closely related to well-studied algebraic objects called linear lattices. This fact implies that new powerful methods (from lattice theory and mathematical logic) can be applied to the investigation of ss-representable matroids. We also obtain some necessary conditions for a connected matroid to be ss-representable. Namely, we construct an infinite set of sentences (similar to Haiman's "higher Arguesian identities") which hold in all ss-representable matroids.
Time-Specific Encryption
This paper introduces and explores the new concept of Time-Specific Encryption (TSE). In (Plain) TSE, a Time Server broadcasts a key at the beginning of each time unit, a Time Instant Key (TIK). The sender of a message can specify any time interval during the encryption process; the receiver can decrypt to recover the message only if it has a TIK that corresponds to a time in that interval. We extend Plain TSE to the public-key and identity-based settings, where receivers are additionally equipped with private keys and either public keys or identities, and where decryption now requires the use of the private key as well as an appropriate TIK. We introduce security models for the plain, public-key and identity-based settings. We also provide constructions for schemes in the different settings, showing how to obtain Plain TSE using identity-based techniques, how to combine Plain TSE with public-key and identity-based encryption schemes, and how to build schemes that are chosen-ciphertext secure from schemes that are chosen-plaintext secure. Finally, we suggest applications for our new primitive, and discuss its relationships with existing primitives, such as Timed Release Encryption and Broadcast Encryption.
Efficient Generalized Signcryption Schemes
Generalized signcryption is a new cryptographic primitive which works as a signcryption scheme, a signature scheme or an encryption scheme as per need. Recently, Ji et al. proposed a security model for certificateless generalized signcryption schemes, together with a scheme which they claim is secure under the proposed model. In this paper we show that the Ji et al. scheme is not existentially unforgeable against a Type-I adversary, and we propose a simplified certificateless generalized signcryption scheme. We also present an efficient identity based generalized signcryption scheme.
Robust RFID Authentication Protocol with Formal Proof and Its Feasibility
The proliferation of RFID tags enhances everyday activities, such as letting us reference the price, origin and circulation route of specific goods. On the other hand, this level of traceability gives rise to new privacy issues, and the topic of developing cryptographic protocols for RFID tags is garnering much attention. A large amount of research has been conducted in this area. In this paper, we reconsider the security model of RFID authentication with a man-in-the-middle adversary and communication faults. Defining the model and security proofs via a game-based approach makes our security models compatible with formal security analysis tools. We show that an RFID authentication protocol is robust against the above attacks, and then provide game-based (hand-written) proofs and their verification using CryptoVerif.
Generating more Kawazoe-Takahashi Genus 2 Pairing-friendly Hyperelliptic Curves.
Constructing pairing-friendly hyperelliptic curves with small $\rho$-values is one of the challenges for the practicability of pairing-friendly hyperelliptic curves. In this paper, we describe a method that extends the Kawazoe-Takahashi method of generating families of genus $2$ ordinary pairing-friendly hyperelliptic curves by parameterizing the curve parameters as polynomials. With this approach we construct genus $2$ ordinary pairing-friendly hyperelliptic curves with $2 < \rho \le 3$.
Identity Based Public Verifiable Signcryption Scheme
Signcryption as a single cryptographic primitive offers both confidentiality and authentication simultaneously. Generally in signcryption schemes, the message is hidden and thus the validity of the ciphertext can be verified only after unsigncrypting the ciphertext. Thus, a third party will not be able to verify whether the ciphertext is valid or not. Signcryption schemes that allow any user to verify the validity of the ciphertext without the knowledge of the message are called public verifiable signcryption schemes. Third Party verifiable signcryption schemes allow the receiver to convince a third party, by providing some additional information along with the signcryption other than his private key with/without exposing the message.
In this paper, we show security weaknesses in three existing schemes \cite{BaoD98}, \cite{TsoOO08} and \cite{ChowYHC03}. The schemes in \cite{BaoD98} and \cite{TsoOO08} are in the Public Key Infrastructure (PKI) setting, and the scheme in \cite{ChowYHC03} is in the identity based setting. More specifically, \cite{TsoOO08} is based on the elliptic curve digital signature algorithm (ECDSA). We also provide a new identity based signcryption scheme that provides public verifiability and third party verification. We formally prove the security of the newly proposed scheme in the random oracle model.
Fixed Argument Pairings
A common scenario in many pairing-based cryptographic protocols is that one argument in the pairing is fixed as a long term secret key or a constant parameter in the system. In these situations, the runtime of Miller’s algorithm can be significantly reduced by storing precomputed values that depend on the fixed argument, prior to the input or existence of the second argument. In light of recent developments in pairing computation, we show that the computation of the Miller loop can be sped up by up to 37% if precomputation is employed, with our method being up to 19.5% faster than the previous precomputation techniques.
A New Class of Public Key Cryptosystems Constructed Based on Error-Correcting Codes, Using K(III) Scheme
In this paper, we present a new scheme, referred to as the K(III) scheme, which is effective for improving a certain class of PKCs. Using the K(III) scheme, we propose a new method for constructing public-key cryptosystems based on error-correcting codes. The constructed PKC is referred to as K(V)SE(1)PKC. We also present a more secure version of K(V)SE(1)PKC, referred to as K*(V)SE(1)PKC, using the K(I) scheme previously proposed by the present author, as well as the K(III) scheme.
A secure Deniable Authentication Protocol based on Bilinear Diffie-Hellman Algorithm
This paper describes a new deniable authentication protocol whose security is based on the Computational Diffie-Hellman (CDH), Decisional Diffie-Hellman (DDH) and Hash Diffie-Hellman (HDDH) problems. The protocol can be implemented on low-power, small-processor mobile devices such as smart cards and PDAs. A deniable authentication protocol enables a receiver to identify the true source of a given message, but not to prove the identity of the sender to a third party. This property is very useful for secure negotiation over the Internet. Our proposed protocol achieves three security requirements: deniable authentication, confidentiality, and resistance against man-in-the-middle attacks.
A Certifying Compiler for Zero-Knowledge Proofs of Knowledge Based on $\Sigma$-Protocols
Zero-knowledge proofs of knowledge (ZK-PoK) are important building blocks for numerous cryptographic applications. Although ZK-PoK have very useful properties, their real world deployment is typically hindered by their significant complexity compared to other (non-interactive) crypto primitives. Moreover, their design and implementation is time-consuming and error-prone.
We contribute to overcoming these challenges as follows: We present a comprehensive specification language and a certifying compiler for ZK-PoK protocols based on $\Sigma$-protocols and composition techniques known in the literature.
The compiler allows the fully automatic translation of an abstract description of a proof goal into an executable implementation. Moreover, the compiler overcomes various restrictions of previous approaches, e.g., it supports the important class of exponentiation homomorphisms with hidden-order co-domain, needed for privacy-preserving applications such as idemix.
Finally, our compiler is certifying, in the sense that it automatically produces a formal proof of security (soundness) of the compiled protocol (currently covering special homomorphisms) using the Isabelle/HOL theorem prover.
Efficient SIMD arithmetic modulo a Mersenne number
This paper describes carry-less arithmetic operations modulo an integer $2^M - 1$ in the thousand-bit range, targeted at single instruction multiple data platforms and applications where overall throughput is the main performance criterion. Using an implementation on a cluster of PlayStation 3 game consoles, a new record was set for the elliptic curve method for integer factorization.
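The key to carry-less arithmetic modulo 2^M - 1 is that 2^M is congruent to 1, so the high bits of a product fold back into the low bits with a shift and an add. A minimal Python sketch of the reduction step (the idea only; the paper's SIMD code works on vectors of limbs):

```python
# Reduction modulo the Mersenne number 2^M - 1 without division: fold the
# high bits back into the low bits, using 2^M = 1 (mod 2^M - 1).
def mersenne_reduce(x, M):
    mod = (1 << M) - 1
    while x > mod:
        x = (x & mod) + (x >> M)  # low part plus folded high part
    return 0 if x == mod else x   # map the representative 2^M - 1 to 0
```

Each fold roughly halves the bit length of x, so a double-width product is fully reduced in a couple of iterations.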
Practical-Titled Attack on AES-128 Using Chosen-Text Relations
A new attack on AES-128.
Efficient Differential Fault Analysis for AES
This paper proposes improved post-analysis methods for Differential Fault Analysis (DFA) against AES. In detail, we propose three techniques to improve the attack efficiency: 1) combining previous DFA methods, 2) performing a divide-and-conquer attack that exploits the AES key-schedule structure, and 3) taking the linearity of the MixColumns operation into account. As a result, the expected analysis time of the previous work can be reduced to about one sixteenth.
Notice that these improvements are based on the detailed analysis of the previous DFA methods and the calculation time and memory cost in practical implementations. Moreover, the proposed techniques can be widely applied to DFA attacks under different assumptions.
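The MixColumns linearity exploited in item 3) is easy to check concretely. The sketch below implements a single standard AES MixColumns column (this is the textbook transformation, not the paper's attack code) and relies on the fact that MC(a XOR b) = MC(a) XOR MC(b):

```python
# Standard AES MixColumns on one 4-byte column, over GF(2^8).
def xtime(b):
    """Multiply a byte by x (i.e. by 2) in GF(2^8) mod x^8+x^4+x^3+x+1."""
    b <<= 1
    return (b ^ 0x1B) & 0xFF if b & 0x100 else b

def mix_column(col):
    """b_i = 2*a_i ^ 3*a_{i+1} ^ a_{i+2} ^ a_{i+3} (indices mod 4)."""
    a = list(col)
    return [xtime(a[i]) ^ xtime(a[(i + 1) % 4]) ^ a[(i + 1) % 4]
            ^ a[(i + 2) % 4] ^ a[(i + 3) % 4] for i in range(4)]
```

Because xtime is GF(2)-linear, a fault difference injected before MixColumns maps to a predictable XOR difference after it, which is exactly what lets a DFA post-analysis prune key candidates column by column.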
Analysis of Efficient Techniques for Fast Elliptic Curve Cryptography on x86-64 based Processors
In this work, we analyze and present experimental data evaluating the efficiency of several techniques for speeding up the computation of elliptic curve point multiplication on emerging x86-64 processor architectures. In particular, we study the efficient combination of such techniques as elimination of conditional branches and incomplete reduction to achieve fast field arithmetic over GF(p). Furthermore, we study the impact of (true) data dependencies on these processors and propose several generic techniques to reduce the number of pipeline stalls, memory reads/writes and function calls. We also extend these techniques to field arithmetic over GF(p^2), which is utilized as underlying field by the recently proposed Galbraith-Lin-Scott (GLS) method to achieve higher performance in the point multiplication. By efficiently combining all these methods with state-of-the-art elliptic curve algorithms we obtain high-speed implementations of point multiplication that are up to 31% faster than the best previous published results on similar platforms. This research is crucial for advancing high-speed cryptography on new emerging processor architectures.
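The branch-elimination idea can be illustrated in a few lines. The following is a Python sketch of a branchless modular addition, mimicking (not reproducing) the kind of x86-64 code the paper studies; it relies on Python's arithmetic right shift and assumes operands below 2^127:

```python
# Branch elimination in modular addition: the conditional "add p back if the
# subtraction went negative" becomes a masked add. In Python, t >> 127 is -1
# for negative t (arithmetic shift) and 0 otherwise, and p & -1 == p.
def add_mod_branchless(a, b, p):
    t = a + b - p
    return t + (p & (t >> 127))  # adds p only when t went negative
```

On a pipelined processor the masked version avoids a hard-to-predict branch; on x86-64 the same effect is typically obtained with a conditional move (CMOV) after the subtract.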
Security Proof of AugPAKE
In this paper, we show that the AugPAKE protocol provides
the semantic security of session keys under the strong Diffie-Hellman (SDH) assumption in the random oracle model.
Last updated: 2010-06-09
Cryptanalysis of Libert-Vergnaud Proxy Re-encryption Scheme
In 2008, Libert and Vergnaud put forth a proxy re-encryption scheme (LV08 for short). Unlike some earlier PRE schemes, the LV08 scheme specifies a validity-checking process to guarantee that the received ciphertext is well-formed. In this paper, we point out that the received message being well-formed is the premise of decryption in every encryption scheme; the underlying mechanism for keeping the communicated message well-formed is a separate topic. The authors overlooked this fact and proposed a cumbersome presentation. We simplify the LV08 scheme and show that its security level is the same as that of the scheme proposed by Ateniese et al. in 2005. Therefore, the LV08 scheme cannot ensure chosen-ciphertext security as claimed.
Relay Attacks on Passive Keyless Entry and Start Systems in Modern Cars
We demonstrate relay attacks on Passive Keyless Entry and Start (PKES) systems used
in modern cars. We build two efficient and inexpensive attack realizations, wired
and wireless physical-layer relays, that allow the attacker to enter and start a car
by relaying messages between the car and the smart key. Our relays are completely
independent of the modulation, protocol, or presence of strong authentication and
encryption. We perform an extensive evaluation on 10 car models from
8 manufacturers. Our results show that relaying the signal in one
direction only (from the car to the key) is sufficient to perform the attack while
the true distance between the key and car remains large (tested up to 50 meters, non
line-of-sight). We also show that, with our setup, the smart key can be excited from
up to 8 meters. This removes the need for the attacker to get close to the key in
order to establish the relay. We further analyze and discuss critical system
characteristics. Given the generality of the relay attack and the number of
evaluated systems, it is likely that all PKES systems based on similar designs are
also vulnerable to the same attack. Finally, we propose immediate mitigation
measures that minimize the risk of relay attacks as well as recent solutions that
may prevent relay attacks while preserving the convenience of use, for which PKES
systems were initially introduced.
A Distinguisher for High Rate McEliece Cryptosystems
The Goppa Code Distinguishing (GD) problem consists in distinguishing the matrix of a Goppa code from a random matrix.
Up to now, it was widely believed that this problem is computationally hard. The hardness of this problem was a mandatory assumption to prove the security of code-based cryptographic primitives like McEliece's cryptosystem. We present a polynomial time distinguisher for alternant and Goppa codes of high rate over any field. The key ingredient is an algebraic technique already used to assess the security of McEliece's cryptosystem. Our method indeed distinguishes public keys of the CFS signature
scheme for all parameters considered secure, as well as some realistic secure parameters of McEliece. The idea is to
consider the dimension of the solution space of a linearized system deduced from a polynomial one. It turns out that this dimension depends on the type of code considered. We provide explicit formulas for this dimension for ``generic'' random, alternant, and Goppa codes over any alphabet.
Distributed Rational Consensus
The \textit{consensus problem} is a very important problem in distributed computing: among $n$ players, the honest players try to come to an agreement even in the presence of $t$ malicious players. In a game-theoretic environment, \textit{the group choice problem} is similar to the \textit{rational consensus problem}, where every player $p_i$ prefers to come to consensus on his value $v_i$, or on a value which is as close to it as possible. All the players need to come to an agreement on one value by amalgamating individual preferences into a group or social choice. In the rational consensus problem, there are no malicious players. We consider the rational consensus problem in the presence of a few malicious players. The players are assumed to be rational rather than honest, and a few malicious players exist among them. Every rational player primarily prefers to come to consensus on his own value and secondarily prefers to come to consensus on another player's value. In other words, if $w_1$, $w_2$ and $w_3$ are the payoffs obtained when $p_i$ comes to consensus on his own value, when $p_i$ comes to consensus on another's value, and when $p_i$ does not come to consensus, respectively, then $w_1 > w_2 > w_3$. We call this the \textit{distributed rational consensus problem} (DRC). The players can hold one of two values, either 1 or 0, i.e., binary consensus.
The rational majority is defined as a set of players who want to agree on one particular value and who number more than half of the rational players; the rational minority is defined similarly. We consider the EIG protocol, characterize the rational behaviour, and show that the EIG protocol does not work in a rational environment. We prove that there exists no protocol which solves the distributed consensus problem in fixed running time when players have knowledge of other players' values during the protocol. This proof is based on Maskin's monotonicity property. The good news is that if the players do not have knowledge of other players' values, then the problem can be solved. This can be achieved by verifiable rational secret sharing, where players do not exchange their values directly, but as pieces.
On the Security of Pseudorandomized Information-Theoretically Secure Schemes
In this article, we discuss a naive method of randomness reduction for cryptographic schemes, which replaces the required perfect randomness with output distribution of a computationally secure pseudorandom generator (PRG). We propose novel ideas and techniques for evaluating the indistinguishability between the random and pseudorandom cases, even against an adversary with computationally unbounded attack algorithm. Hence the PRG-based randomness reduction can be effective even for information-theoretically secure cryptographic schemes, especially when the amount of information received by the adversary is small. In comparison to a preceding result of Dubrov and Ishai (STOC 2006), our result removes the requirement of generalized notion of ``nb-PRGs'' and is effective for more general kinds of protocols. We give some numerical examples to show the effectiveness of our result in practical situations, and we also propose a further idea for improving the effect of the PRG-based randomness reduction.
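The substitution the article studies is mechanically simple: wherever a scheme consumes perfect randomness, feed it PRG output instead. As a toy instance (using SHAKE-256 as a stand-in PRG; the paper's contribution is the analysis of what security survives this substitution, not this construction), a one-time pad whose pad is PRG-derived:

```python
import hashlib

# One-time pad with the pad replaced by PRG output. The information-
# theoretic security of the OTP is lost; what remains is governed by the
# indistinguishability of the PRG, which is the question the paper studies.
def prg_pad_encrypt(seed: bytes, message: bytes) -> bytes:
    pad = hashlib.shake_256(seed).digest(len(message))  # PRG expansion
    return bytes(m ^ k for m, k in zip(message, pad))
```

Decryption is the same operation with the same seed, since XOR with the pad is an involution.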
Signatures for Multi-source Network Coding
We consider the problem of securing inter-flow network coding with multiple sources. We present a practical homomorphic signature scheme that makes it possible to verify network-coded packets composed of data originating from different sources. The multi-source signature scheme circumvents the need for a secret key shared by all sources. Our solution is an extension of the pairing-based homomorphic signature scheme of Boneh et al. We prove the security of the extended scheme by a reduction to the single-source case. We evaluated the performance of the required computations, and our results imply that the solution is applicable in practice.
Efficiency-Improved Fully Simulatable Adaptive OT under the DDH Assumption
At Asiacrypt 2009, Kurosawa and Nojima showed a fully simulatable adaptive oblivious transfer (OT) protocol under the DDH assumption in the standard model. However, Green and Hohenberger pointed out
that the communication cost of each transfer phase is O(n), where n is the number of the sender's messages. In this paper, we show that the cost can be reduced to O(1) by utilizing a verifiable shuffle protocol.
Privacy-Preserving Multi-Objective Evolutionary Algorithms
Existing privacy-preserving evolutionary algorithms are limited to specific problems securing only cost function evaluation. This lack of functionality and security prevents their use for many security sensitive business optimization problems, such as our use case in collaborative supply chain management. We present a technique to construct privacy-preserving algorithms that address multi-objective problems and secure the entire algorithm including survivor selection. We improve performance over Yao's protocol for privacy-preserving algorithms and achieve solution quality only slightly inferior to the multi-objective evolutionary algorithm NSGA-II.
Effect of the Dependent Paths in Linear Hull
A linear hull is the phenomenon that a block cipher admits many linear paths with the same data mask but different key masks. In 1994, K. Nyberg presented the effect of the linear hull on key-recovery attacks such as Algorithm 2, in which the required number of known plaintexts can be decreased compared with an attack using an individual linear path. In 2009, S. Murphy proved that K. Nyberg's results can only be used to give a lower bound on the data complexity and are of no use in real linear cryptanalysis. In fact, the linear hull produces this positive effect in linear cryptanalysis only for some keys instead of the whole key space, so the linear hull can be used to improve classic linear cryptanalysis only for some weak keys. In the same year, K. Ohkuma gave a linear hull analysis of the reduced-round PRESENT block cipher, and showed that $32\%$ of the keys of PRESENT are weak keys, for which the bias of a given linear hull with multiple paths exceeds a lower bound. However, K. Ohkuma did not consider the dependency of the multiple paths; his results are based on the assumption that the linear paths are independent. Actually, most of the linear paths in a linear hull are dependent. In this paper, we analyze the dependency of the linear paths in a linear hull and the real effect of a linear hull with dependent linear paths. Firstly, we give the relation between the bias of a linear hull and that of its linear paths in linear cryptanalysis. Secondly, we present a formula to compute the proportion of weak keys corresponding to the expected bias of the dependent paths. Based on the formula, we show that the dependency of the linear paths reduces the number of weak keys corresponding to higher biases of the linear hull compared with the independent case; that is, the dependency of the linear paths reduces the effect of the linear hull. Finally, we verify our conclusion by analyzing reduced-round PRESENT.
Applications of SAT Solvers to AES Key Recovery from Decayed Key Schedule Images
A cold boot attack is a side-channel attack which exploits the data remanence property of random access memory (RAM) to retrieve its contents, which remain readable shortly after power has been removed. Given the nature of the cold boot attack, only a corrupted image of the memory contents will be available to the attacker. In this paper, we investigate the use of an off-the-shelf SAT solver, CryptoMiniSat, to improve the recovery of AES-128 keys from decayed memory images of their key schedules. By exploiting the asymmetric decay of the memory images and the redundancy of key material inherent in the AES key schedule, rectifying the faults in the corrupted memory image of the AES-128 key schedule is formulated as a Boolean satisfiability problem which can be solved efficiently even for relatively large decay factors. Our experimental results show that this approach improves upon previously known results.
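As a toy illustration of this constraint-based formulation (using a hypothetical one-byte redundancy function, not the real AES key schedule, and plain brute force in place of CryptoMiniSat), one can model the asymmetric 1-to-0 decay and filter key candidates against both decayed images:

```python
from random import Random

def toy_schedule(b):
    """Hypothetical one-byte redundancy function (NOT the real AES key schedule)."""
    return (b * 3 + 7) % 256

def decay(byte, rate, rng):
    """Asymmetric decay: in a region decaying toward 0, each 1-bit independently
    flips to 0 with probability `rate`; 0-bits never become 1."""
    out = 0
    for i in range(8):
        bit = (byte >> i) & 1
        if bit and rng.random() < rate:
            bit = 0
        out |= bit << i
    return out

def consistent(cand, obs):
    """Under 1->0 decay, every 1-bit still observed must be 1 in the true value."""
    return obs & ~cand == 0

def candidates(obs_key, obs_sched):
    """Brute-force the per-byte constraint problem; the paper instead encodes the
    whole decayed schedule as a SAT instance for CryptoMiniSat."""
    return [b for b in range(256)
            if consistent(b, obs_key) and consistent(toy_schedule(b), obs_sched)]

rng = Random(1)
true_key = [0xA7, 0x3C, 0xF1, 0x58]
obs_key = [decay(b, 0.5, rng) for b in true_key]
obs_sched = [decay(toy_schedule(b), 0.5, rng) for b in true_key]
for t, ok, os in zip(true_key, obs_key, obs_sched):
    assert t in candidates(ok, os)   # the true key always satisfies the constraints
```

The redundant schedule byte prunes candidates that are consistent with the decayed key image alone, which is the same leverage the SAT encoding exploits at scale.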
Security Analysis of SIMD
In this paper we study the security of the SHA-3 candidate SIMD. We first show a new free-start distinguisher based on symmetry relations. It allows one to distinguish the compression function of SIMD from a random function with a single evaluation. However, we also show that this property is very hard to exploit to mount any attack on the hash function, because of the mode of operation of the compression function. Essentially, if one can build a pair of symmetric states, the symmetry property can only be triggered once.
In the second part, we show that a class of free-start distinguishers is not a threat to the wide-pipe hash functions. In particular, this means that our distinguisher has a minimal impact on the security of the hash function, and we still have a security proof for the SIMD hash function. Intuitively, the reason why this distinguisher does not weaken the function is that getting into a symmetric state is about as hard as finding a preimage.
Finally, in the third part we study differential paths in SIMD, and give an upper bound on the probability of related-key differential paths. Our bound is of the order of $2^{n/2}$ using very weak assumptions. Resistance to related-key attacks is often overlooked, but it is very important for hash function designs.
Improved Single-Key Attacks on 8-round AES
AES is the most widely used block cipher today,
and its security is one of the most important issues in cryptanalysis.
After 13 years of analysis, related-key attacks were recently found against two
of its flavors (AES-192 and AES-256). However, such a strong type of
attack is not universally accepted as a valid attack model,
and in the more standard single-key attack model
at most 8 rounds of these two versions can currently be attacked.
In the case of 8-round AES-192, the only known attack
(found 10 years ago) is extremely marginal, requiring the evaluation
of essentially all the 2^{128} possible plaintext/ciphertext pairs in order
to speed up exhaustive key search by a factor of 16. In this paper we introduce
three new cryptanalytic techniques,
and use them to get the first non-marginal attack on 8-round AES-192
(making its time complexity about a million times faster than exhaustive search,
and reducing its data complexity to about 1/32,000 of the full codebook).
In addition, our new techniques can reduce the best known time
complexities for all the other combinations of 7-round and 8-round AES-192
and AES-256.
Subspace Distinguisher for 5/8 Rounds of the ECHO-256 Hash Function
In this work we present the first results for the hash function of ECHO. We provide a subspace distinguisher for 5 rounds, near-collisions on 4.5 rounds and collisions for 4 out of 8 rounds of the ECHO-256 hash function. The complexities are $2^{96}$ compression function calls for the distinguisher and near-collision attack, and $2^{64}$ for the collision attack. The memory requirements are $2^{64}$ for all attacks. Furthermore, we provide improved compression function attacks on ECHO-256 to get distinguishers on 7 rounds and near-collisions for 6 and 6.5 rounds. The compression function attacks also apply to ECHO-512. To get these results, we consider new and sparse truncated differential paths through ECHO. We are able to construct these paths by analyzing the combined MixColumns and BigMixColumns transformation. Since in these sparse truncated differential paths at most one fourth of all bytes of each ECHO state are active, missing degrees of freedom are not a problem. Therefore, we are able to mount a rebound attack with multiple inbound phases to efficiently find conforming message pairs for ECHO.
Last updated: 2010-06-08
On isotopisms of commutative presemifields and CCZ-equivalence of functions
A function $F$ from \textbf{F}$_{p^n}$ to itself is planar if for any $a\in$\textbf{F}$_{p^n}^*$ the function $F(x+a)-F(x)$ is a permutation. CCZ-equivalence is the most general known equivalence relation of functions preserving the planar property. This paper considers two possible extensions of CCZ-equivalence for functions over fields of odd characteristic, one proposed by Coulter and Henderson and the other by Budaghyan and Carlet, and we show that they in fact coincide with CCZ-equivalence. We prove that two finite commutative presemifields of odd order are isotopic if and only if they are strongly isotopic. This result implies that two isotopic commutative presemifields always define CCZ-equivalent planar functions (this was unknown for the general case). Further we prove that, for any odd prime $p$ and any positive integers $n$ and $m$, the indicators of the graphs of functions $F$ and $F'$ from \textbf{F}$_{p^n}$ to \textbf{F}$_{p^m}$ are CCZ-equivalent if and only if $F$ and $F'$ are CCZ-equivalent.
We also prove that, for any odd prime $p$, CCZ-equivalence of functions from \textbf{F}$_{p^n}$ to \textbf{F}$_{p^m}$ is strictly more general than EA-equivalence when $n\ge3$ and $m$ is greater than or equal to the smallest positive divisor of $n$ different from 1.
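The planar property defined above is easy to check exhaustively for small fields. A minimal sketch over a prime field, using the classical fact that $x \mapsto x^2$ is planar in odd characteristic (its derivatives $2ax + a^2$ are linear bijections) while $x \mapsto x^3$ over \textbf{F}$_7$ is not:

```python
def is_planar(F, p):
    """F is planar over F_p if x -> F(x+a) - F(x) is a permutation for every
    a != 0, i.e. the set of difference values has full size p."""
    return all(
        len({(F((x + a) % p) - F(x)) % p for x in range(p)}) == p
        for a in range(1, p)
    )

p = 7                           # any odd prime
square = lambda x: x * x % p    # the classical planar function x -> x^2
cube = lambda x: x ** 3 % p     # not planar over F_7
print(is_planar(square, p), is_planar(cube, p))  # -> True False
```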
On the Security of a Bidirectional Proxy Re-Encryption Scheme from PKC 2010
In PKC 2010, Matsuda, Nishimaki and Tanaka proposed a bidirectional proxy re-encryption (PRE) scheme without bilinear maps, and claimed that their scheme is chosen-ciphertext secure in the standard model. However, by giving a concrete attack, in this paper we show that their PRE scheme fails to achieve chosen-ciphertext security. The purpose of this paper is to clarify that it remains an open problem to construct a chosen-ciphertext secure PRE scheme without bilinear maps in the standard model.
Multiparty Computation for Dishonest Majority: from Passive to Active Security at Low Cost
Multiparty computation protocols have been known for more than twenty years now, but due to their lack of efficiency their use is still limited in real-world applications: the goal of this paper is the design of efficient two-party and multiparty computation protocols aimed at filling the gap between theory and practice. We propose a new protocol to securely evaluate reactive arithmetic circuits that offers security against an active adversary in the universally composable security framework. Instead of the ``do-and-compile'' approach (where the parties use zero-knowledge proofs to show that they are following the protocol), our key ingredient is an efficient version of the ``cut-and-choose'' technique, which allows us to achieve active security for just a (small) constant amount of work more than for passive security.
A Note On Gottesman-Chuang Quantum Signature Scheme
In 2001, Gottesman and Chuang proposed a quantum signature scheme. Unlike in classical signature schemes, the public keys in the scheme can only be used once. The authors claim that the scheme is somewhat cumbersome, but that it serves as a good model and suggests novel research directions for the field of quantum cryptography. In this note, we remark that the Gottesman-Chuang quantum signature scheme is so commonplace and cumbersome that it cannot suggest the potential of quantum public key cryptography. The authors ignore a fundamental fact, namely, that the cost to guarantee the authenticity of a user's public key is expensive in a Public Key Infrastructure scenario. This entails that a user's public key should be reusable throughout its lifetime.
A New Human Identification Protocol and Coppersmith's Baby-Step Giant-Step Algorithm
We propose a new protocol providing cryptographically secure authentication to unaided humans against passive adversaries. We also propose a new generic passive attack on human identification protocols. The attack is an application of Coppersmith's baby-step giant-step algorithm to human identification protocols. Under this attack, the achievable security of some of the best candidates for human identification protocols in the literature is further reduced. We show that our protocol preserves similar usability while achieving better security than these protocols. A comprehensive security analysis is provided which suggests parameters guaranteeing desired levels of security.
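For context, the classical baby-step giant-step algorithm that the attack adapts can be sketched as follows (plain discrete-logarithm version over a prime field; the paper applies a Coppersmith variant to human identification protocols, which is not shown here):

```python
from math import isqrt

def bsgs(g, h, p):
    """Solve g^x = h (mod p): O(sqrt(p)) time and memory instead of O(p)."""
    m = isqrt(p) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps: table of g^j
    step = pow(g, -m, p)                         # giant-step factor g^(-m)
    gamma = h
    for i in range(m):
        if gamma in baby:                        # h * g^(-im) = g^j  =>  x = im + j
            return i * m + baby[gamma]
        gamma = gamma * step % p
    return None                                  # no solution in the group

p, g, x = 101, 2, 57
assert bsgs(g, pow(g, x, p), p) == x
```

The square-root trade-off between the two tables is exactly what lowers the achievable security of the protocols analyzed in the paper.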
Efficient Techniques for High-Speed Elliptic Curve Cryptography
In this paper, a thorough bottom-up optimization process (field, point and scalar arithmetic) is used to speed up the computation of elliptic curve point multiplication and to set new speed records on modern x86-64 based processors. Our implementations include elliptic curves using Jacobian coordinates, extended Twisted Edwards coordinates and the recently proposed Galbraith-Lin-Scott (GLS) method. Compared to state-of-the-art implementations on identical platforms, the proposed techniques provide up to 30% speed improvements. Additionally, compared to the best previously published results on similar platforms, improvements of up to 31% are observed. This research is crucial for advancing high-speed cryptography on new emerging processor architectures.
Weaknesses of a dynamic ID-based remote user authentication scheme
The security of a password authentication scheme using smart cards proposed by Khan et al. is analyzed. Four kinds of attacks are presented in different scenarios. The analyses show that the scheme is insecure for practical application.
Fast Exhaustive Search for Polynomial Systems in $F_2$
We analyze how fast we can solve general systems of multivariate
equations of various low degrees over \GF{2}; this is
a well known hard problem which is important both in itself and
as part of many types of algebraic cryptanalysis. Compared to the standard
exhaustive-search technique, our improved approach is more
efficient both asymptotically and practically.
We implemented several optimized versions of our techniques on CPUs and GPUs. Modern graphics cards allow our technique to run more than 10 times faster than on the most powerful CPU available. Today, we can solve 48+ quadratic equations in 48 binary variables on an NVIDIA GTX 295 video card (USD 500) in 21 minutes.
With this level of performance, solving systems of equations supposed to ensure a security level of 64 bits turns out to be feasible in practice with a modest budget. This is a clear demonstration of the power of GPUs in solving many types of combinatorial and cryptanalytic problems.
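A naive version of such an exhaustive search can be sketched in a few lines (straightforward enumeration over all assignments; the paper's Gray-code enumeration and GPU bit-slicing, which make it fast in practice, are omitted):

```python
from itertools import product

def eval_quad(poly, x):
    """poly = (quad, lin, c): quadratic monomials as index pairs, linear terms
    as indices, constant c; evaluation over F_2 uses XOR and AND."""
    quad, lin, c = poly
    v = c
    for i, j in quad:
        v ^= x[i] & x[j]
    for i in lin:
        v ^= x[i]
    return v

def solve(polys, n):
    """Try all 2^n assignments and keep the common zeros of the system."""
    return [x for x in product((0, 1), repeat=n)
            if all(eval_quad(p, x) == 0 for p in polys)]

# x0*x1 + x2 = 0,   x0 + x1 + 1 = 0,   x1*x2 + x0 = 0   over F_2
polys = [([(0, 1)], [2], 0), ([], [0, 1], 1), ([(1, 2)], [0], 0)]
print(solve(polys, 3))  # -> [(0, 1, 0)]
```

The paper's improvement evaluates successive points in Gray-code order so that each step updates only one variable, costing O(1) per polynomial instead of re-evaluating from scratch.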
Security weakness of two authenticated key exchange protocols from pairings
Recently, Liu proposed two authenticated multiple key exchange protocols using pairings, and claimed that the two protocols feature many security attributes. In this paper, we show that Liu's protocols are insecure: neither of them provides perfect forward secrecy.
Combining leak-resistant arithmetic for elliptic curves defined over $\F_p$ and RNS representation
In this paper we combine the residue number system (RNS) representation and leak-resistant arithmetic on elliptic curves. These two techniques are relevant for implementations of elliptic curve cryptography on embedded devices. It is well known that RNS multiplication is very efficient whereas the reduction step is costly. Hence, we optimize the formulae for the basic operations arising in leak-resistant arithmetic on elliptic curves (unified addition, Montgomery ladder) in order to minimize the number of modular reductions. We also improve the complexity of the RNS modular reduction step. As a result, we show how to obtain a competitive secured implementation. Finally, we show that, contrary to other approaches, ours takes optimal advantage of a dedicated parallel architecture.
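The basic RNS idea can be sketched as follows (toy moduli and textbook CRT reconstruction; the paper's optimized modular reduction is not shown). Multiplication is carry-free and channel-parallel, while conversion back is the costly step the paper minimizes:

```python
from math import prod

MODULI = (13, 17, 19, 23)   # pairwise-coprime channel moduli (toy sizes)
M = prod(MODULI)

def to_rns(x):
    """Represent x by its residues in each channel."""
    return tuple(x % m for m in MODULI)

def rns_mul(a, b):
    """Channel-wise product: no carries propagate between channels,
    so the channels can run in parallel."""
    return tuple(ai * bi % m for ai, bi, m in zip(a, b, MODULI))

def from_rns(r):
    """Textbook CRT reconstruction: the expensive direction that motivates
    minimizing the number of modular reductions."""
    x = 0
    for ri, m in zip(r, MODULI):
        Mi = M // m
        x += ri * Mi * pow(Mi, -1, m)
    return x % M

a, b = 1234, 4321
assert from_rns(rns_mul(to_rns(a), to_rns(b))) == a * b % M
```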
Last updated: 2013-11-22
The analytical property for $\zeta(s)$
In this article the analytic property of $\zeta(s)$ is discussed, and the commonly held opinion is disputed.
Co-Z Addition Formulae and Binary Ladders on Elliptic Curves
Meloni recently introduced a new type of arithmetic on elliptic curves when adding projective points sharing the same Z-coordinate. This paper presents further co-Z addition formulae (and register allocations) for various point additions on Weierstrass elliptic curves. It explains how the use of conjugate point addition and other implementation tricks allow one to develop efficient scalar multiplication algorithms making use of co-Z arithmetic. Specifically, this paper describes efficient co-Z based versions of Montgomery ladder and Joye’s double-add algorithm. Further, the resulting implementations are protected against a large variety of implementation attacks.
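The core co-Z addition (often called ZADD) can be sketched as follows, cross-checked against ordinary affine addition on a toy curve with assumed parameters; the register allocations and side-channel protections discussed in the paper are omitted:

```python
p, a_coef, b_coef = 97, 2, 3   # toy curve y^2 = x^3 + 2x + 3 over F_97

def zadd(P, Q):
    """Co-Z addition a la Meloni: P and Q given by their (X, Y) parts and
    sharing an implicit common Z. Returns (X3, Y3, t) with Z3 = Z * t."""
    X1, Y1 = P
    X2, Y2 = Q
    C = (X2 - X1) ** 2 % p
    W1, W2 = X1 * C % p, X2 * C % p
    D = (Y2 - Y1) ** 2 % p
    X3 = (D - W1 - W2) % p
    Y3 = ((Y2 - Y1) * (W1 - X3) - Y1 * (W2 - W1)) % p
    return X3, Y3, (X2 - X1) % p

def affine_add(P, Q):
    """Ordinary chord addition, used only as a cross-check."""
    lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x3 = (lam * lam - P[0] - Q[0]) % p
    return x3, (lam * (P[0] - x3) - P[1]) % p

# pick two curve points with distinct x (affine points share Z = 1)
pts = [(x, y) for x in range(p) for y in range(p)
       if (y * y - x ** 3 - a_coef * x - b_coef) % p == 0]
P = pts[0]
Q = next(pt for pt in pts if pt[0] != P[0])

X3, Y3, Z3 = zadd(P, Q)
aff = (X3 * pow(Z3, -2, p) % p, Y3 * pow(Z3, -3, p) % p)  # Jacobian -> affine
assert aff == affine_add(P, Q)
```

Because both inputs share a Z-coordinate, the addition needs no Z arithmetic beyond the final factor, which is what makes the co-Z ladders of the paper efficient.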
Attacking M&M Collective Signature Scheme
A collective signature scheme aims to solve the problem of signing a message by multiple
signers. Recently, Moldovyan and Moldovyan [1] proposed a scheme for
collective signatures based on Schnorr signatures. We show some security weaknesses of the scheme.
Impossible Differential Cryptanalysis of SPN Ciphers
Impossible differential cryptanalysis is a very popular tool for analyzing the security of modern block ciphers, and the core of such attacks is the existence of impossible differentials. Currently, most methods for finding impossible differentials are based on the miss-in-the-middle technique and are very ad hoc. In this paper, we concentrate on SPN ciphers whose diffusion layer is defined by a linear transformation $P$. Based on the theory of linear algebra, we propose several criteria on $P$ and its inverse $P^{-1}$ that characterize the existence of $3/4$-round impossible differentials. We further discuss the possibility of extending these methods to analyze $5/6$-round impossible differentials. Using these criteria, impossible differentials for reduced-round Rijndael are found that are consistent with the ones found before. New $4$-round impossible differentials are discovered for the block cipher ARIA, and many $4$-round impossible differentials are detected for the first time for a kind of SPN cipher that employs the $32\times32$ binary matrix proposed at ICISC 2006 as its diffusion layer. It is concluded that the linear transformation should be carefully designed in order to protect the cipher against impossible differential cryptanalysis.
On security of a remote user authentication scheme without using smart cards
The security of a password authentication scheme using smart cards proposed by Rhee et al. is analyzed. A kind of impersonation attack is presented. The analyses show that the scheme is insecure for practical application. In order to eliminate the security vulnerability, an efficient countermeasure is proposed.
On the Impossibility of Cryptography Alone for Privacy-Preserving Cloud Computing
Cloud computing denotes an architectural shift toward thin clients
and conveniently centralized provision of computing resources.
Clients’ lack of direct resource control in the cloud prompts concern
about the potential for data privacy violations, particularly
abuse or leakage of sensitive information by service providers. Cryptography is an oft-touted remedy. Among its most powerful primitives is fully homomorphic encryption (FHE), dubbed by some the
field’s “Holy Grail,” and recently realized as a fully functional construct with seeming promise for cloud privacy.
We argue that cryptography alone can’t enforce the privacy demanded
by common cloud computing services, even with such powerful
tools as FHE. We formally define a hierarchy of natural classes
of private cloud applications, and show that no cryptographic protocol
can implement those classes where data is shared among clients.
We posit that users of cloud services will also need to rely on other
forms of privacy enforcement, such as tamperproof hardware, distributed computing, and complex trust ecosystems.
Cryptanalysis of the Compression Function of SIMD
SIMD is one of the second round candidates of the SHA-3 competition
hosted by NIST. In this paper, we present some results on the
compression function of SIMD 1.1 (the tweaked version) using the
modular difference method. For SIMD-256, we give a free-start near-collision attack on the compression function reduced to 20 steps with complexity $2^{107}$. For SIMD-512, we give a free-start near-collision attack on the 24-step compression function with complexity $2^{208}$. Furthermore, we give a distinguishing attack on the full compression function of SIMD-512 with complexity $2^{398}$.
Our attacks are also applicable for the final compression function
of SIMD.
Universally Composable Symbolic Analysis of Diffie-Hellman based Key Exchange
Canetti and Herzog (TCC'06) show how to efficiently perform fully
automated, computationally sound security analysis of key exchange
protocols with an unbounded number of sessions. A key tool in their
analysis is {\em composability}, which allows deducing security of
the multi-session case from the security of a single session.
However, their framework only captures protocols that use public key
encryption as the only cryptographic primitive, and only handles
static corruptions.
We extend the [CH'06] modeling in two ways. First, we also handle protocols that use digital signatures and Diffie-Hellman exchange. Second, we also handle forward secrecy under fully adaptive party corruptions. This allows us to automatically analyze systems that
use an unbounded number of sessions of realistic key exchange
protocols such as the ISO 9798-3 or TLS protocol.
A central tool in our treatment is a new abstract modeling of plain
Diffie-Hellman key exchange. Specifically, we show that plain
Diffie-Hellman securely realizes an idealized version of Key
Encapsulation.
Using the Inhomogeneous Simultaneous Approximation Problem for Cryptographic Design
Since the introduction of the concept of provable security, there has been a steady search for suitable problems that can be used as a foundation for cryptographic schemes. Indeed, identifying such problems is a challenging task. First, a problem should have been open and investigated for a long time, so that its hardness assumption is plausible. Second, it should be easy to construct hard problem instances. Third, it should allow one to build cryptographic applications on top of it. Not surprisingly, only a few problems are known today that satisfy all conditions, e.g., factorization, the discrete logarithm, and lattice problems.
In this work, we introduce another candidate: the Inhomogeneous Simultaneous Approximation Problem (ISAP), an old problem from the field of analytic number theory that dates back to the 19th century. Although the Simultaneous Approximation Problem (SAP) is already known in cryptography, it has mainly been considered in its homogeneous instantiation for attacking schemes. We take a look at the hardness and applicability of ISAP, i.e., the inhomogeneous variant, for designing schemes.
More precisely, we define a decisional problem related to ISAP, called DISAP, and show that it is NP-complete. With respect to its hardness, we review existing approaches for computing a solution and give suggestions for the efficient generation of hard instances. Regarding the applicability, we describe as a proof of concept a bit commitment scheme where the hiding property is directly reducible to DISAP. An implementation confirms its usability in principle (e.g., size of one commitment is slightly more than 6 KB and execution time is in the milliseconds).
On generalized Feistel networks
We prove beyond-birthday-bound security for the well-known types of
generalized Feistel networks, including: (1) unbalanced Feistel networks, where the $n$-bit to $m$-bit round functions may have $n\ne m$; (2) alternating Feistel networks, where the round functions alternate between contracting and expanding; (3) type-1, type-2, and type-3 Feistel networks, where $n$-bit to $n$-bit round functions are used to encipher $kn$-bit strings for some $k\ge2$; and (4) numeric variants of any of the above, where one enciphers numbers in some given range rather than strings of some given size. Using a unified analytic framework we show that, in any of these settings, for
any $\varepsilon>0$, with enough rounds, the subject scheme can tolerate CCA attacks of up to $q\sim N^{1-\varepsilon}$ adversarial queries, where $N$ is the size of the round functions' domain (the size of the larger domain for alternating Feistel). This is asymptotically optimal. Prior analyses for generalized Feistel networks established security to only $q\sim N^{0.5}$ adversarial queries.
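As a small illustration of setting (3), a type-1 generalized Feistel network on $k=3$ words, with a toy (non-cryptographic) round function standing in for the $n$-bit-to-$n$-bit round functions:

```python
MASK = (1 << 16) - 1   # word size n = 16 bits, so the block is kn = 48 bits

def F(k, x):
    """Toy round function (an assumed stand-in, not from the paper)."""
    return ((x * 0x9E37 + k) & MASK) ^ (x >> 7)

def encrypt(keys, block):
    """Type-1 generalized Feistel round: (x0, x1, x2) -> (x1 ^ F(k, x0), x2, x0).
    Only one word is modified per round; the rest rotate."""
    x0, x1, x2 = block
    for k in keys:
        x0, x1, x2 = x1 ^ F(k, x0), x2, x0
    return x0, x1, x2

def decrypt(keys, block):
    """Invert each round: the untouched copy of the old x0 sits in position 2,
    so F can be recomputed and XORed off even though F itself is not invertible."""
    x0, x1, x2 = block
    for k in reversed(keys):
        x0, x1, x2 = x2, x0 ^ F(k, x2), x1
    return x0, x1, x2

keys = list(range(1, 13))              # 12 toy round keys
pt = (0x0123, 0x4567, 0x89AB)
assert decrypt(keys, encrypt(keys, pt)) == pt
```

The invertibility-without-invertible-F property shown in `decrypt` is what all the Feistel variants in the paper share, and the number of rounds needed for security is exactly what the paper's bounds quantify.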
Optimal Average Joint Hamming Weight and Minimal Weight Conversion of d Integers
In this paper, we propose the minimal joint Hamming weight conversion for any binary expansions of $d$ integers. With redundant representations, we may represent a number by many expansions, and the minimal joint Hamming weight conversion is the algorithm that selects the expansion with the least joint Hamming weight. As the computation time of the cryptosystem strongly depends on the joint Hamming weight, the conversion can make the cryptosystem faster. Most existing conversions are limited to specific representations and are difficult to apply to other representations. Our conversion, on the other hand, is applicable to any binary expansion. The proposed conversion can discover minimal average weights in classes of representations for which they were not previously known. One of the most interesting results is that, for the expansion of integer pairs with digit set $\{0, \pm 1, \pm 3\}$, the minimal average joint Hamming weight is $0.3575$. This improves on the upper bound of $0.3616$ proposed by Dahmen, Okeya, and Takagi.
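The joint Hamming weight itself is easy to compute for plain binary expansions (the paper's redundant digit sets such as $\{0, \pm 1, \pm 3\}$, which lower the average weight, are not handled in this sketch):

```python
def digits(n):
    """Least-significant-first binary digits of a nonnegative integer."""
    d = []
    while n:
        d.append(n & 1)
        n >>= 1
    return d

def joint_hamming_weight(a, b):
    """Count the digit positions where at least one of the two expansions is
    nonzero; each such position costs a point addition in a shared
    double-and-add (Shamir's trick) multi-scalar multiplication."""
    da, db = digits(a), digits(b)
    L = max(len(da), len(db))
    da += [0] * (L - len(da))
    db += [0] * (L - len(db))
    return sum(1 for x, y in zip(da, db) if x or y)

print(joint_hamming_weight(5, 3))  # 5 = (101), 3 = (011): columns 11, 01, 10 -> weight 3
```

Choosing among redundant expansions to minimize exactly this count is what the proposed conversion does, for arbitrary digit sets.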
Faster Fully Homomorphic Encryption
We describe two improvements to Gentry's fully homomorphic scheme
based on ideal lattices and its analysis: we provide a more aggressive
analysis of one of the hardness assumptions (the one related to the
Sparse Subset Sum Problem) and we introduce a probabilistic decryption
algorithm that can be implemented with an algebraic circuit of low
multiplicative degree. Combined together, these improvements lead to a
faster fully homomorphic scheme, with a~$\softO(\lambda^{3.5})$ bit
complexity per elementary binary add/mult gate, where~$\lambda$ is the
security parameter. These improvements also apply to the fully
homomorphic schemes of Smart and Vercauteren [PKC'2010] and van Dijk
et al.\ [Eurocrypt'2010].
On the Indifferentiability of the Grøstl Hash Function
The notion of indifferentiability, introduced by Maurer et al., is an important criterion for the security of hash functions. Concretely, it ensures that a hash function has no structural design flaws and thus guarantees security against generic attacks up to the exhibited bounds. In this work we prove the indifferentiability of Grøstl, a second round SHA-3 hash function candidate. Grøstl combines characteristics of the wide-pipe and chop-Merkle-Damgård iterations and uses two distinct permutations P and Q internally. Under the assumption that P and Q are random l-bit permutations, where l is the iterated state size of Grøstl, we prove that the advantage of a distinguisher to differentiate Grøstl from a random oracle is upper bounded by O((Kq)^4/2^l), where the distinguisher makes at most q queries of length at most K blocks. For the specific Grøstl parameters, this result implies that Grøstl behaves like a random oracle up to q=O(2^{n/2}) queries, where n is the output size.
Furthermore, we show that the output transformation of Grøstl, as well as `Grøstail' (the composition of the final compression function and the output transformation), is clearly differentiable from a random oracle. This rules out indifferentiability proofs which rely on the idealness of the final state transformation.
Correlation-Enhanced Power Analysis Collision Attack
Side-channel based collision attacks are a mostly disregarded alternative to DPA for analyzing unprotected implementations. The advent of strong countermeasures, such as masking, has made further research in collision attacks seem to be in vain. In this work, we show that the principles of collision attacks can be adapted to efficiently break some masked hardware implementations of the AES which still have first-order leakage. The proposed attack breaks an AES implementation based on the corrected version of the masked S-box of Canright and Batina presented at ACNS 2008, which is supposed to be resistant against first-order attacks. It requires only six times the number of traces necessary for breaking a comparable unprotected implementation. At the same time, the presented attack has minimal requirements on the abilities and knowledge of an adversary. The attack requires no detailed knowledge about the design, nor does it require a training phase.
Hash-based Multivariate Public Key Cryptosystems
Many efficient attacks have appeared in recent years, dealing a serious blow to traditional multivariate public key cryptosystems. For example, the signature scheme SFLASH was broken by Dubois et al. at CRYPTO'07, and the Square signature (or encryption) scheme by Billet et al. at ASIACRYPT'09. Most multivariate schemes known so far are insecure, except perhaps the signature schemes UOV and HFEv-. Following these developments, the general design principle of multivariate schemes has been seriously questioned, and there is a rather pressing need to find new trapdoor constructions or mathematical tools and ideas. In this paper, we introduce hash authentication techniques and combine them with traditional MQ-trapdoors to propose a novel hash-based multivariate public key cryptosystem. The resulting scheme, called EMC (Extended Multivariate Cryptosystem), can also be seen as a novel hash-based cryptosystem akin to the Merkle tree signature, and it offers double security protection for signing or encrypting. By our analysis, we can construct secure and efficient signature as well as encryption schemes by combining the EMC scheme with some of the modification methods summarized by Wolf. We thus present two new schemes: the EMC signature scheme (with the Minus method ``-'') and the EMC encryption scheme (with the Plus method ``+''). In addition, we propose a reduced version of the EMC signature scheme (a light-weight signature scheme). Precise complexity estimates for these schemes are provided, but their security proofs in the random oracle model are still an open problem.
Ideal Key Derivation and Encryption in Simulation-based Security
Many real-world protocols, such as SSL/TLS, SSH, IPsec, IEEE 802.11i, DNSSEC, and Kerberos, derive new keys from other keys. To be able to analyze such protocols in a composable way, in this paper we extend an ideal functionality for symmetric and public-key encryption
proposed in previous work by a mechanism for key derivation. We also equip this functionality with message authentication codes (MACs) and ideal nonce generation. We show that the resulting ideal functionality can be realized based on standard cryptographic assumptions and constructions, hence, providing a solid foundation for
faithful, composable cryptographic analysis of real-world security protocols.
Based on this new functionality, we identify sufficient criteria for protocols to provide universally composable key exchange and secure channels. Since these criteria are based on the new ideal functionality, checking the criteria requires merely information-theoretic or even only syntactical arguments, rather than involved reduction arguments.
As a case study, we use our method to analyze two central protocols of the IEEE 802.11i standard, namely the 4-Way Handshake Protocol and the CCM Protocol, proving composable security properties. To the best of our knowledge, this constitutes the first rigorous cryptographic analysis of these protocols.
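The key-derivation step that the ideal functionality abstracts can be illustrated with a small HKDF-Expand-style sketch; the labels and parameters below are illustrative assumptions, not taken from the paper or from any of the listed protocols.

```python
import hmac
import hashlib

def derive_key(master_key: bytes, label: bytes, length: int = 32) -> bytes:
    # HKDF-Expand-style derivation: successive HMAC-SHA256 blocks keyed by
    # the master key, each bound to a context label and a counter.
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(master_key, block + label + bytes([counter]),
                         hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

master = b"\x00" * 32                      # hypothetical master key
enc_key = derive_key(master, b"encryption")
mac_key = derive_key(master, b"mac")
assert enc_key != mac_key                  # distinct labels, independent keys
assert derive_key(master, b"encryption") == enc_key  # deterministic
```

Real-world protocols like TLS and IEEE 802.11i derive whole key hierarchies this way, which is exactly why a composable treatment of key derivation is useful.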
Computing genus 2 curves from invariants on the Hilbert moduli space
We give a new method for generating genus 2 curves over a finite field with a given number of points on the Jacobian of the curve. We define two new invariants for genus 2 curves as values of modular functions on the Hilbert moduli space and show how to compute them. We relate them to the usual three Igusa invariants on the Siegel moduli space and give an algorithm to construct curves using these new invariants. Our approach simplifies the complex analytic method for computing genus 2 curves for cryptography and reduces the amount of computation required.
Security of balanced and unbalanced Feistel Schemes with Linear Non Equalities
\begin{abstract}
In this paper we will study two security results ``above the birthday bound'' related to secret key cryptographic problems.\\
1. The classical problem of the security of 4, 5, 6 rounds balanced Random Feistel Schemes.\\
2. The problem of the security of unbalanced Feistel Schemes with contracting functions from $2n$ bits to $n$ bits. This problem was studied by Naor and Reingold~\cite{NR99} and by~\cite{YPL} with a proof of security up to the birthday bound.\\
These two problems are included here in the same paper since their analyses are closely related, as we will see. In problem 1 we will obtain security results very near the information bound (in $O(\frac {2^n}{n})$), with improved proofs and stronger explicit security bounds than previously known. In problem 2 we will cross the birthday bound of Naor and Reingold. For some of our proofs we will use~\cite{A2}, submitted to Crypto 2010.
\end{abstract}
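The object of study in problem 1, a balanced Feistel scheme, can be sketched as follows; the round function here is a toy stand-in for the random round functions of the analysis, and the parameters are illustrative.

```python
import hashlib

def F(key: int, r: int, half: int, n: int) -> int:
    # Toy round function: hash of (key, round, input), truncated to n bits.
    h = hashlib.sha256(f"{key}:{r}:{half}".encode()).digest()
    return int.from_bytes(h, "big") % (1 << n)

def feistel_encrypt(key: int, x: int, rounds: int, n: int) -> int:
    # Split the 2n-bit input into halves (L, R); each round maps
    # (L, R) -> (R, L xor F(R)).
    L, R = x >> n, x & ((1 << n) - 1)
    for r in range(rounds):
        L, R = R, L ^ F(key, r, R, n)
    return (L << n) | R

def feistel_decrypt(key: int, y: int, rounds: int, n: int) -> int:
    # Run the rounds in reverse; each step inverts (L, R) -> (R, L xor F(R)).
    L, R = y >> n, y & ((1 << n) - 1)
    for r in reversed(range(rounds)):
        L, R = R ^ F(key, r, L, n), L
    return (L << n) | R

c = feistel_encrypt(42, 0xABCD, rounds=5, n=8)
assert feistel_decrypt(42, c, rounds=5, n=8) == 0xABCD
```

The security results in the abstract concern how many rounds of this structure, with truly random round functions, are needed to resist distinguishing attacks beyond the birthday bound.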
A Low-Area yet Performant FPGA Implementation of Shabal
In this paper, we present an efficient FPGA implementation of the SHA-3 hash function candidate Shabal. Targeted at the recent Xilinx Virtex-5 FPGA family, our design achieves a relatively high throughput of 2 Gbit/s at a cost of only 153 slices, yielding a throughput-vs.-area ratio of 13.4 Mbit/s per slice. Our work can also be ported to Xilinx Spartan-3 FPGAs, on which it supports a throughput of 800 Mbit/s for only 499 slices, or equivalently 1.6 Mbit/s per slice.
According to the SHA-3 Zoo website, this work is among the smallest reported FPGA implementations of SHA-3 candidates, and ranks first in terms of throughput per area.
Cryptanalysis of an Exquisite Mutual Authentication Scheme with Key Agreement Using Smart Card
The weakness of an exquisite authentication scheme based on smart cards and passwords proposed by Liao et al. [C. H. Liao, H. C. Chen, and C. T. Wang, An Exquisite Mutual Authentication Scheme with Key Agreement Using Smart Card, Informatica, Vol. 33, No. 2, 2009, 125-132.] is analyzed. Five kinds of weakness are presented in different scenarios. The analyses show that Liao et al.’s scheme is insecure for practical application.
Intractable Problems in Cryptography
We examine several variants of the Diffie-Hellman and Discrete Log problems
that are connected to the security of cryptographic protocols. We discuss
the reductions that are known between them and the challenges in trying
to assess the true level of difficulty of these problems, particularly
if they are interactive or have complicated input.
A Two-Party Protocol with Trusted Initializer for Computing the Inner Product
We propose the first protocol for securely computing the inner product modulo an integer $m$ between two distrustful parties based on a trusted initializer, i.e. a trusted party that interacts with the players solely during a setup phase. We obtain a very simple protocol with universally composable security. As an application of our protocol, we obtain a solution for securely computing linear equations.
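To illustrate the trusted-initializer model, here is a sketch in the spirit of commodity-based inner-product protocols such as Du and Atallah's; it is not necessarily the protocol of this paper, and all names below are assumptions for illustration.

```python
import random

def inner(u, v, m):
    return sum(a * b for a, b in zip(u, v)) % m

def trusted_initializer(n, m):
    # Setup phase only: correlated randomness, independent of the inputs.
    Ra = [random.randrange(m) for _ in range(n)]
    Rb = [random.randrange(m) for _ in range(n)]
    ra = random.randrange(m)
    rb = (inner(Ra, Rb, m) - ra) % m       # ra + rb = <Ra, Rb> (mod m)
    return (Ra, ra), (Rb, rb)

def protocol(x, y, m):
    (Ra, ra), (Rb, rb) = trusted_initializer(len(x), m)
    # Each masked message is uniformly distributed on its own.
    x_masked = [(xi + ai) % m for xi, ai in zip(x, Ra)]   # Alice -> Bob
    y_masked = [(yi + bi) % m for yi, bi in zip(y, Rb)]   # Bob -> Alice
    u = (inner(x, y_masked, m) + ra) % m                  # Alice -> Bob
    # Bob unmasks: u - <x_masked, Rb> + rb = <x, y> (mod m).
    return (u - inner(x_masked, Rb, m) + rb) % m

m = 1000
x, y = [3, 1, 4, 1], [2, 7, 1, 8]
assert protocol(x, y, m) == inner(x, y, m)
```

The trusted party acts only before the inputs exist, which is what makes the model much weaker (and more realistic) than a fully trusted third party.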
Lattice-based Identity-Based Broadcast Encryption Scheme
Motivated by the lattice basis delegation technique due to [8], we
propose an adaptively secure identity-based broadcast encryption
(IBBE) scheme based on hard worst-case lattice problems. Our
construction can easily be generalized to obtain a hierarchical IBBE
(HIBBE) scheme. To the best of the authors' knowledge, our
construction and its variants constitute the first adaptively secure
IBBE schemes from lattices, which are believed to be secure in the
post-quantum setting.
Introduction to Mirror Theory: Analysis of Systems of Linear Equalities and Linear Non Equalities for Cryptography
\begin{abstract}
In this paper we will first study two closely related problems:\\
1. The problem of distinguishing $f(x\Vert 0)\oplus f(x \Vert 1)$ where $f$ is a random permutation on $n$ bits. This problem was first studied by Bellare and Impagliazzo in~\cite{BI}.\\
2. The so-called ``Theorem $P_i \oplus P_j$'' of Patarin (cf~\cite{P05}).
Then, we will see many variants and generalizations of this ``Theorem $P_i \oplus P_j$'' that are useful in cryptography. In fact, all these results can be seen as part of a theory that analyzes the number of solutions of systems of linear equalities and linear non-equalities in finite groups. We have nicknamed this analysis ``Mirror Theory'' due to the multiple induction properties it exhibits.
\end{abstract}
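The distinguishing problem in item 1 rests on a simple structural fact: when $f$ is a permutation, $f(x\Vert 0)\oplus f(x\Vert 1)$ can never be zero, whereas a random function hits zero with probability $2^{-n}$ per input. A toy check of that fact:

```python
import random

def xor_of_pair(f, x):
    # g(x) = f(x||0) xor f(x||1), where f is given as a lookup table.
    return f[x << 1] ^ f[(x << 1) | 1]

n = 8
domain = 1 << n
perm = list(range(domain))
random.shuffle(perm)   # random permutation on n bits

# A permutation maps x||0 and x||1 to distinct values, so the xor is
# never 0 -- the statistical handle the distinguishing analysis exploits.
assert all(xor_of_pair(perm, x) != 0 for x in range(domain // 2))

# By contrast, a random function would output 0 with probability 2^-n
# for each x, so collecting enough samples separates the two worlds.
func = [random.randrange(domain) for _ in range(domain)]
```

Quantifying how far beyond the birthday bound this handle remains useful is exactly what the cited analyses address.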
On second-order nonlinearities of some $\mathcal{D}_0$ type bent functions
In this paper we study the lower bounds of second-order nonlinearities
of bent functions constructed by modifying certain cubic Maiorana-McFarland type bent functions.
A SAT-based preimage analysis of reduced KECCAK hash functions
In this paper, we present a preimage attack on reduced versions of the Keccak hash functions. We use our recently developed toolkit CryptLogVer to generate a CNF (conjunctive normal form) formula, which is passed to the SAT solver PrecoSAT. We found preimages for some reduced versions of the function and showed that the full Keccak function is secure against the presented attack.
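The attack pipeline (encode the preimage condition as CNF, hand it to a SAT solver) can be illustrated on a toy two-bit function; the brute-force loop below stands in for a real solver such as PrecoSAT, and the toy function itself is an assumption for illustration only.

```python
from itertools import product

# Toy "hash": h(x1, x2) = (x1 AND x2, x1 XOR x2). Finding a preimage of
# (0, 1) means satisfying the CNF below (variable 1 = x1, 2 = x2; a
# negative literal denotes negation, DIMACS-style).
#   NOT(x1 AND x2)  ->  clause (-1 -2)
#   x1 XOR x2       ->  clauses (1 2) and (-1 -2)
cnf = [[-1, -2], [1, 2]]

def satisfiable(cnf, num_vars):
    # Brute-force stand-in for a SAT solver.
    for bits in product([False, True], repeat=num_vars):
        assign = {i + 1: bits[i] for i in range(num_vars)}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in cnf):
            return assign
    return None

model = satisfiable(cnf, 2)
assert model is not None
x1, x2 = model[1], model[2]
assert (x1 and x2) is False and (x1 != x2) is True  # a valid preimage of (0, 1)
```

For real hash functions the CNF has many thousands of variables, which is why toolkits like CryptLogVer automate the encoding step.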
Secure Two-Party Computation via Cut-and-Choose Oblivious Transfer
Protocols for secure two-party computation enable a pair of parties to compute a function of their inputs while preserving security properties such as privacy, correctness and independence of inputs. Recently, a number of protocols have been proposed for the efficient construction of two-party computation secure in the presence of malicious adversaries (where security is proven under the standard simulation-based ideal/real model paradigm for defining security). In this paper, we present a protocol for this task that follows the methodology of using cut-and-choose to boost Yao's protocol to be secure in the presence of malicious adversaries. Relying on specific assumptions (DDH), we construct a protocol that is significantly more efficient and far simpler than the protocol of Lindell and Pinkas (Eurocrypt 2007) that follows the same methodology. We provide an exact, concrete analysis of the efficiency of our scheme and demonstrate that (at least for not very small circuits) our protocol is more efficient than any other known today.
Recursive Information Hiding in Visual Cryptography
Visual Cryptography is a secret sharing scheme that uses the human visual system to perform computations. This paper presents a recursive hiding scheme for 3 out of 5 secret sharing. The idea used is to hide smaller secrets in the shares of a larger secret without an expansion in the size of the latter.
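As background, the classic Naor-Shamir 2-out-of-2 scheme (the base case on which recursive and threshold schemes build; the paper itself works with 3-out-of-5 sharing) can be sketched:

```python
import random

def make_shares(secret_bits):
    # Naor-Shamir 2-out-of-2: each secret pixel becomes a subpixel pair.
    # White pixel (0) -> identical pairs; black pixel (1) -> complementary.
    share1, share2 = [], []
    for bit in secret_bits:
        pattern = random.choice([(0, 1), (1, 0)])
        share1.append(pattern)
        share2.append(pattern if bit == 0 else (1 - pattern[0], 1 - pattern[1]))
    return share1, share2

def stack(share1, share2):
    # Overlaying transparencies = OR of subpixels (ink over ink stays ink).
    return [(a[0] | b[0], a[1] | b[1]) for a, b in zip(share1, share2)]

secret = [0, 1, 1, 0, 1]
s1, s2 = make_shares(secret)
# Black pixels become fully black pairs; white pixels stay half-black --
# the contrast the human visual system picks up when stacking the shares.
assert [int(p == (1, 1)) for p in stack(s1, s2)] == secret
```

Each share on its own is a uniformly random pattern, so a single share reveals nothing about the secret; "decryption" requires no computation beyond stacking.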
Pseudo-Linear Approximations for ARX Ciphers: With Application to Threefish
The operations addition modulo 2^n and exclusive-or have recently been combined to obtain an efficient mechanism for nonlinearity in block cipher design. In this paper, we show that ciphers using this approach may be approximated by pseudo-linear expressions relating groups of contiguous bits of the round key, round input, and round output. The bias of an approximation can be large enough for known plaintext attacks. We demonstrate an application of this concept to a reduced-round version of the Threefish block cipher, a component of the Skein entry in the secure hash function competition.
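The bias being exploited can be observed empirically: modular addition agrees with XOR on a window of low-order bits exactly when no carry enters the window. A quick estimate (illustrative only, not the paper's attack on Threefish):

```python
import random

def agreement_rate(n, window, trials=20000):
    # Fraction of random pairs where (a + b) mod 2^n agrees with a xor b
    # on the lowest `window` bits.
    mask = (1 << window) - 1
    hits = 0
    for _ in range(trials):
        a = random.getrandbits(n)
        b = random.getrandbits(n)
        if ((a + b) & mask) == ((a ^ b) & mask):
            hits += 1
    return hits / trials

# Addition and xor coincide on the window when no carry propagates into
# it, so the agreement rate (exactly (3/4)^(window-1) here) sits well
# above the 2^-window of a random match -- the bias a pseudo-linear
# approximation can accumulate across rounds.
rate = agreement_rate(n=32, window=3)
assert rate > 0.4
```

On a single low bit the two operations always agree, and the agreement decays only geometrically with window size, which is what makes contiguous-bit approximations of ARX ciphers viable.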
Protocols for Reliable and Secure Message Transmission
Consider the following problem: a sender S and a receiver R are part of an unreliable, connected, distributed network. The distrust in the network is modelled by an entity called the adversary, who has unbounded
computing power and who can corrupt some of the nodes of the network (excluding S and R) in a variety of ways. S wishes to send to R a message m that consists of \ell elements, where \ell \geq 1, selected uniformly from a finite field F. The challenge is to design a protocol such that, after interacting with S as per the protocol, R outputs m without any error (perfect reliability). Moreover, this should hold irrespective of the disruptive actions of the adversary. This problem is called reliable message transmission, or RMT for short. The problem of secure message transmission, or SMT for short, imposes the additional constraint that the adversary should not get any information whatsoever about the message in the information-theoretic sense (perfect secrecy). Security against an adversary with infinite computing power is also known as non-cryptographic, information-theoretic, or Shannon security, and it is the strongest notion of security. Notice that since the adversary has unbounded computing power, we cannot solve the RMT and SMT problems by using classical cryptographic primitives such as public key cryptography, digital signatures, or authentication schemes, as the security of all these primitives holds only against an adversary with polynomially bounded computing power.
The RMT and SMT problems can be studied in various network models and adversarial settings. We may use the following parameters to describe
different settings/models for studying RMT/SMT:
\begin{enumerate}
\item Type of Underlying Network --- Undirected Graph, Directed Graph, Hypergraph.
\item Type of Communication --- Synchronous, Asynchronous.
\item Adversary capacity --- Threshold Static, Threshold Mobile, Non-threshold Static, Non-threshold Mobile.
\item Type of Faults --- Fail-stop, Passive, Byzantine, Mixed.
\end{enumerate}
Irrespective of the settings in which RMT/SMT is studied, the following issues are common:
\begin{enumerate}
\item Possibility: What are the necessary and sufficient structural conditions to be satisfied by the underlying network for the existence of any RMT/SMT protocol, tolerating a given type of adversary?
\item Feasibility: Once the existence of a RMT/SMT protocol in a network is ascertained, the next natural question is, does there exist an efficient protocol on the given network?
\item Optimality: Given a message of specific length, what is the minimum communication complexity (lower bound) needed by any RMT/SMT protocol to transmit the message and how to design a polynomial time RMT/SMT protocol whose total communication complexity matches the lower bound on the communication complexity (optimal protocol)?
\end{enumerate}
In this dissertation, we look into the above issues in several network models and adversarial settings. This thesis reports several new, improved, efficient and optimal solutions, gives affirmative or negative answers to several significant open problems and, last but not least, provides the first solutions to several newly formulated problems.
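As a toy illustration of the possibility question for RMT: a sender connected to the receiver by $2t+1$ vertex-disjoint paths can defeat $t$ Byzantine paths by simple repetition and majority voting. This folklore construction is for intuition only and is not a specific protocol from the thesis.

```python
from collections import Counter

def transmit(message, num_paths, corrupt):
    # The sender pushes the message over every vertex-disjoint path; the
    # adversary replaces the copies on the paths it controls.
    copies = [message] * num_paths
    for path, fake in corrupt.items():
        copies[path] = fake
    return copies

def receive(copies):
    # With at most t corrupted paths out of 2t+1, the honest copies form a
    # strict majority, so majority vote recovers the message perfectly.
    return Counter(copies).most_common(1)[0][0]

m = "attack at dawn"
# t = 1 Byzantine path out of 2t+1 = 3 disjoint paths:
copies = transmit(m, num_paths=3, corrupt={1: "retreat"})
assert receive(copies) == m
```

Note this naive protocol is reliable but not secret (every path sees m), and its communication is far from optimal; closing those gaps is precisely the feasibility and optimality study described above.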
Studies on Verifiable Secret Sharing, Byzantine Agreement and Multiparty Computation
This dissertation deals with three most important as well as fundamental problems in secure distributed computing, namely Verifiable Secret Sharing (VSS), Byzantine Agreement (BA) and Multiparty Computation (MPC).
VSS is a two phase protocol (Sharing and Reconstruction) carried out among $n$ parties in the presence of a centralized adversary who can
corrupt up to $t$ parties. Informally, the goal of the VSS protocol is
to share a secret $s$ among the $n$ parties during the sharing phase in a way that would later allow for a unique reconstruction of this secret in the reconstruction phase, while preserving the secrecy of $s$ until the reconstruction phase. VSS is used as a key tool in MPC, BA and many other secure distributed computing problems. It can take many different forms, depending on the underlying network (synchronous or asynchronous), the nature (passive or active) and computing power (bounded or unbounded) of the adversary, the type of security (cryptographic or information-theoretic), etc. We study VSS in the information-theoretic setting over both synchronous and asynchronous networks, considering an active adversary with unbounded computing power.
Our main contributions for VSS are:
\begin{itemize}
\item In synchronous network, we carry out in-depth investigation on the round complexity of VSS by allowing a probability of error in computation and show that existing lower bounds for the round complexity of error-free VSS can be circumvented by introducing a negligible probability of error.
\item We study the communication and round efficiency of VSS in synchronous networks and present a robust VSS protocol that is simultaneously communication-efficient and round-efficient; it is the most communication- and round-efficient protocol known in the literature.
\item In asynchronous network, we study the communication complexity of VSS and propose a number of VSS protocols. Our protocols are highly communication efficient and show significant improvement over the existing protocols in terms of communication complexity.
\end{itemize}
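For intuition, VSS augments plain secret sharing with verifiability; the underlying $(t, n)$ Shamir sharing can be sketched as follows (illustrative background, not one of the thesis protocols):

```python
import random

P = 2**61 - 1  # a Mersenne prime; all arithmetic is in the field GF(P)

def share(secret, t, n):
    # Random degree-t polynomial with constant term = secret; party i
    # receives the evaluation at point i.
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=2, n=5)
assert reconstruct(shares[:3]) == 123456789   # any t+1 = 3 shares suffice
assert reconstruct(shares[1:4]) == 123456789
```

Any $t$ shares reveal nothing about the secret, but a malicious dealer could hand out inconsistent shares; the "verifiable" in VSS is exactly the machinery that forces consistency, and it is where the round and communication costs studied above arise.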
The next problem that we deal with is Byzantine Agreement (BA). BA is considered one of the most fundamental primitives for fault-tolerant distributed computing and cryptographic protocols. BA among a set of $n$ parties, each having a private input value, allows them to reach agreement on a common value even if some of the malicious parties (at most $t$) try to prevent agreement among the parties.
Similar to the case of VSS, several models for BA have been proposed during the last three decades, considering various aspects like the underlying network, the nature and computing power of the adversary, and the type of security. One of these models is BA over an asynchronous network, which is considered a more realistic model than the synchronous one in many situations. Though important, BA in asynchronous networks has received much less attention than BA protocols in synchronous networks. Even the existing protocols for asynchronous BA involve high communication complexity and are, in general, very inefficient in comparison to their synchronous counterparts. We focus on BA in the information-theoretic setting over asynchronous networks, tolerating an active adversary with unbounded computing power, and work mainly towards the communication efficiency of the problem. Our contributions for BA are as follows:
\begin{itemize}
\item We propose communication efficient asynchronous BA protocols that show huge improvement over the existing protocols in the same setting. Our protocols for asynchronous BA use our VSS protocols
in asynchronous network as their vital building blocks.
\item We also construct a communication-optimal asynchronous BA protocol for sufficiently long messages. Precisely, our asynchronous BA communicates $O(\ell n)$ bits for an $\ell$-bit message, for sufficiently large $\ell$.
\end{itemize}
The studies on VSS and BA naturally lead one towards MPC problems. MPC can model almost any known cryptographic application and uses VSS as well as BA as building blocks. MPC enables a set of $n$ mutually distrusting parties to compute some function of their private inputs, such that the privacy of the inputs of the honest parties is guaranteed (except for what can be derived from the function output) even in the presence of an adversary corrupting up to $t$ of the parties and making them misbehave arbitrarily. Much like VSS and BA, MPC can also be studied in various models. Here, we attempt to solve MPC in the information-theoretic setting over synchronous as well as asynchronous networks, tolerating an active adversary with unbounded computing power. As for MPC, our main contributions are:
\begin{itemize}
\item Using one of our synchronous VSS protocols, we design a synchronous MPC protocol that minimizes the communication and round complexity simultaneously, whereas existing MPC protocols minimize one complexity measure at a time (i.e., they minimize either communication complexity or round complexity).
\item We study the communication complexity of asynchronous MPC protocols and design a number of protocols for the same that show significant gain in communication complexity in comparison to the existing asynchronous MPC protocols.
\item We also study a specific instance of the MPC problem called Multiparty Set Intersection (MPSI) and provide protocols for it.
\end{itemize}
In brief, the work in this thesis significantly advances the state of the art in research on VSS, BA and MPC by presenting several inherent lower bounds and efficient/optimal solutions for these problems in terms of their key parameters, such as communication complexity and time/round complexity. It thereby makes a substantial contribution to the field of secure distributed computing through foundational research on three of its most important problems.
On the Round Complexity of Covert Computation
In STOC'05, von Ahn, Hopper and Langford introduced the notion of covert computation. In covert computation, a party runs a secure computation protocol over a covert (or steganographic) channel without knowing if the other parties are participating as well or not. At the end of the protocol, if all parties participated in the protocol and if the function output is "favorable" to all parties, then the output is revealed (along with the fact that everyone participated). All covert computation protocols known so far require a large polynomial number of rounds. In this work, we first study the question of the round complexity of covert computation and obtain the following results:
1) There does not exist a constant round covert computation protocol with respect to black box simulation even for the case of two parties. (In comparison, such protocols are known even for the multi-party case if there is no covertness requirement.)
2) By relying on the two slot non-black-box simulation technique of Pass (STOC'04) and techniques from cryptography in NC^0 (Applebaum et al, FOCS'04), we obtain a construction of a constant round covert multi-party computation protocol.
Put together, the above adds one more example to the growing list of tasks for which non-black-box simulation techniques (introduced in the work of Barak in FOCS'01) are necessary.
Finally, we study the problem of covert multi-party computation in the setting where the parties only have point to point (covert) communication channels. We observe that our covert computation protocol for the broadcast channel inherits, from the protocol of Pass, the property of secure composition in the bounded concurrent setting. Then, as an application of this protocol, somewhat surprisingly we show the existence of covert multi-party computation with point to point channels (assuming that the number of parties is a constant).
Overcoming the Hole In The Bucket: Public-Key Cryptography Resilient to Continual Memory Leakage
In recent years, there has been a major effort to design cryptographic schemes
that remain secure even if part of the secret key is leaked. This is due to a
recent proliferation of side channel attacks which, through various physical
means, can recover part of the secret key. We explore the possibility of
achieving security even with continual leakage, i.e., even if some information
is leaked each time the key is used.
We show how to securely update a secret key while information is leaked: We
construct schemes that remain secure even if an attacker, {\em at each time
period}, can probe the entire memory (containing a secret key) and ``leak'' up
to a $(1-o(1))$ fraction of the secret key. The attacker may also probe the
memory during the updates, and leak $O(\log k)$ bits, where $k$ is the security
parameter (relying on subexponential hardness allows $k^\epsilon$ bits of
leakage during each update process). All of the above is achieved without
restricting the model as is done in previous works (e.g. by assuming that
``only computation leaks information'' [Micali-Reyzin, TCC04]).
Specifically, under the decisional linear assumption on bilinear groups (which
allows for a leakage rate of $(1/2-o(1))$) or the symmetric external
Diffie-Hellman assumption (which allows for a leakage rate of $(1-o(1))$), we
achieve the above for public key encryption, identity-based encryption, and
signature schemes. Prior to this work, it was not known how to construct
public-key encryption schemes even in the more restricted model of [MR].
The main contributions of this work are (1) showing how to securely update a
secret key while information is leaked (in the more general model) and (2)
giving a public key encryption (and IBE) schemes that are resilient to
continual leakage.
Last updated: 2010-05-18
Adaptively Secure Broadcast Encryption with Short Ciphertexts
We propose an adaptively secure broadcast encryption scheme
with short ciphertexts. That is, the size of the broadcast encryption message is fixed, regardless of the size of the broadcast group. In our proposed scheme, members can join and leave the group without requiring
any change to the public parameters of the system or the private keys of existing members. Our construction offers a twofold improvement over the best previously known broadcast encryption schemes. First, we propose a scheme that immediately yields adaptive security in the CCA model without any (sub-linear) increase in the size of ciphertexts or use of a random oracle. Secondly, the security model of our system allows decryption queries for any member, even including the members in the challenge set. This is a stronger model, as it is closer to a real-world adversary.
Garbled Circuits for Leakage-Resilience: Hardware Implementation and Evaluation of One-Time Programs
The power of side-channel leakage attacks on cryptographic implementations is evident.
Today's practical defenses are typically attack-specific countermeasures against certain classes of side-channel attacks.
The demand for a more general solution has given rise to the recent theoretical research that aims to build provably leakage-resilient cryptography.
This direction is, however, very new and still largely lacks practitioners' evaluation with regard to both efficiency and practical security.
A recent approach, One-Time Programs (OTPs), proposes using Yao's Garbled Circuit (GC) and very simple tamper-proof hardware to securely implement oblivious transfer, to guarantee leakage resilience.
Our main contributions are (i) a generic architecture for using GC/OTP modularly, and (ii) hardware implementation and efficiency analysis of GC/OTP evaluation.
We implemented two FPGA-based prototypes:
a system-on-a-programmable-chip with access to hardware crypto accelerator (suitable for smartcards and future smartphones), and a stand-alone hardware implementation (suitable for ASIC design).
We chose AES as a representative complex function for implementation and measurements.
As a result of this work, we are able to understand, evaluate and improve the practicality of employing GC/OTP as a leakage-resistance approach.
Last, but not least, we believe that our work contributes to bringing together the results of both theoretical and practical communities.
Position-Based Quantum Cryptography: Impossibility and Constructions
In this work, we study position-based cryptography in the quantum setting. The aim is to use the geographical position of a party as its only credential. On the negative side, we show that if adversaries are allowed to share an arbitrarily large entangled quantum state, no secure position-verification is possible at all. We show a distributed protocol for computing any unitary operation on a state shared between the different users, using local operations and one round of classical communication. Using this surprising result, we break any position-verification scheme of a very general form.
On the positive side, we show that if adversaries do not share any entangled quantum state but can compute arbitrary quantum operations, secure position-verification is achievable. Jointly, these results suggest the interesting question whether secure position-verification is possible in case of a bounded amount of entanglement. Our positive result can be interpreted as resolving this question in the simplest case, where the bound is set to zero. In models where secure positioning is achievable, it has a number of interesting applications. For example, it enables secure communication over an insecure channel without having any pre-shared key, with the guarantee that only a party at a specific location can learn the content of the conversation. More generally, we show that in settings where secure position-verification is achievable, other position-based cryptographic schemes are possible as well, such as secure position-based authentication and position-based key agreement.
Online/Offline Identity-Based Signcryption Revisited
In this paper, we redefine a cryptographic notion called
Online/Offline Identity-Based Signcryption. It is an
``online/offline'' version of identity-based signcryption, where most of the computations are carried out offline, while the online part does not require any heavy computation such as pairings or elliptic-curve multiplications. It is particularly suitable for power-constrained devices such as smart cards.
We give a concrete implementation of online/offline identity-based
signcryption, which is very efficient and flexible. Unlike all
the previous schemes in the literature, our scheme does
not require the knowledge of receiver's information (either
public key or identity) in the offline stage. The receiver's identity and the message to be signcrypted are only needed in the online stage. This feature provides great flexibility to our scheme and makes it practical to use in real-world applications. To our knowledge, our scheme is the first in the literature to provide this feature. We prove that the proposed scheme meets strong
security requirements in the random oracle model, assuming the Strong Diffie-Hellman (SDH) and Bilinear Diffie-Hellman Inversion (BDHI) problems are computationally hard.
Symmetric States and their Structure: Improved Analysis of CubeHash
This paper provides three improvements over previous work on analyzing CubeHash, based
on its classes of symmetric states: (1) We present a detailed analysis of the hierarchy of symmetry classes. (2) We point out some flaws in previously claimed attacks which tried to exploit the symmetry classes. (3) We present and analyze new multicollision and preimage attacks. For the default parameter setting of CubeHash, namely for a message block size of b = 32, the new attacks are slightly faster than 2^384
operations. If one increases the size of a message block by a single byte to b = 33, our multicollision and preimage attacks become much faster – they only require about 2^256 operations. This demonstrates how sensitive the security of CubeHash is, depending on minor changes of the tunable security parameter b.
Virtual Secure Circuit: Porting Dual-Rail Pre-charge Technique into Software on Multicore
This paper discusses a novel direction for multicore cryptographic
software, namely the use of multicore to protect a design against
side-channel attacks. We present a technique which is based on the
principle of dual-rail pre-charge, but which can be completely
implemented in software. The resulting protected software is called
a Virtual Secure Circuit (VSC). Similar to the dual-rail pre-charge
technique, a VSC executes as two complementary programs on two
identical processor cores. Our key contributions include (1) the
analysis of the security properties of a VSC, (2) the construction
of a VSC AES prototype on a dual-PowerPC architecture, (3) the
demonstration of VSC's protection effectiveness with real
side-channel attack experiments. The attack results showed that the
VSC-protected AES needs 80 times more measurements than the
unprotected AES to find the first correct key byte. Even one million
measurements were not sufficient to fully break the VSC-protected
AES, while the unprotected AES was broken using only 40,000
measurements. We conclude that VSC can provide side-channel
resistance similar to that of WDDL, the dedicated hardware equivalent
of dual-rail pre-charge. However, in contrast to WDDL, VSC is a
software technique and is therefore flexible.
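The dual-rail idea behind a VSC can be sketched at the bit level: each logical bit travels with its complement, so every encoded value has constant Hamming weight; in a VSC the two rails correspond to the two complementary programs running on the two cores. This toy model is for intuition only, not the paper's implementation.

```python
def encode(b):
    # Dual-rail encoding: logical bit b is carried as the pair (b, not b).
    return (b, b ^ 1)

def dual_rail_and(x_pair, y_pair):
    # True rail computes AND; the complement rail computes its negation
    # via De Morgan: NOT(x AND y) = (NOT x) OR (NOT y). In a VSC, one
    # core would run the first computation and the other core the second.
    x, xn = x_pair
    y, yn = y_pair
    return (x & y, xn | yn)

for a in (0, 1):
    for b in (0, 1):
        r, rn = dual_rail_and(encode(a), encode(b))
        assert r == (a & b) and rn == r ^ 1
        # Every encoded pair has Hamming weight exactly 1, so the combined
        # activity is (ideally) independent of the data being processed.
        assert sum(encode(a)) == sum(encode(b)) == r + rn == 1
```

The security argument is that power consumption correlates with data-dependent Hamming weight; balancing the weight across the two rails removes that correlation to first order.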
Selecting Parameters for Secure McEliece-based Cryptosystems
In 1994, P. Shor showed that quantum computers will be able to break cryptosystems based on integer factorization and on the discrete logarithm, e.g. RSA or ECC. Code-based cryptosystems are promising alternatives to public key schemes based on these problems, and they are believed to be secure against quantum computer attacks. In this paper, we solve the problem of selecting optimal parameters for the McEliece cryptosystem that provide security until a given year and give detailed recommendations. Our analysis is based on the lower bound complexity estimates by Sendrier and Finiasz, and the security
requirements model proposed by Lenstra and Verheul.
Factorization of RSA-180
We present a brief report on the factorization of RSA-180, currently the smallest unfactored RSA number. We show that numbers of similar size can be factored in a reasonable time at home using open source factoring software running on a few Intel Core i7 PCs.
LAB Form for Iterated Hash Functions
In this paper, we propose an efficient and laconic mode for iterated
hash functions that tries to completely fix the flaws of the
Merkle-Damgård construction and to prevent the various generic
attacks on it, such as the multicollision attack, the second-preimage
attack and the herding attack. The structure of this new mode is
different from HAIFA and any other proposal: it contains a new
method, ``Locking Abutting Blocks'' (LAB), with a checksum. It
provides a larger implicit chaining value without requiring intricate
computation or larger memory, and it allows online computation in one
pass with a fixed amount of memory. It also easily avoids the generic
attacks (presented by Praveen Gauravaram and John Kelsey) that apply
to hash functions with a linear-XOR/additive checksum.
Key-Controlled Order-Preserving Encryption
In this paper we study order-preserving encryption (OPE), a primitive in the database community that allows efficient range queries on encrypted data. OPE was suggested by Agrawal et al. [1] and thoroughly studied by Boldyreva et al. [2]. In this paper we present a practical OPE scheme, which is a key-controlled algorithm based on simple computation. A preliminary analysis shows that our algorithm is secure enough.
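For intuition about the order-preserving property itself (this is an illustrative toy, not the scheme of the paper), a keyed strictly increasing mapping can be built from pseudorandom gaps:

```python
import random

def make_ope(key, domain_size):
    # Keyed strictly increasing mapping: cumulative sums of pseudorandom
    # positive gaps. Illustrates only the order-preserving property; real
    # OPE schemes are constructed and analyzed far more carefully.
    rng = random.Random(key)
    table, acc = [], 0
    for _ in range(domain_size):
        acc += rng.randrange(1, 100)
        table.append(acc)
    return table

key = 1337  # hypothetical secret key
enc = make_ope(key, 256)
assert all(enc[i] < enc[i + 1] for i in range(255))      # order preserved
# A range query on ciphertexts matches the range query on plaintexts:
lo, hi = enc[50], enc[100]
assert [p for p in range(256) if lo <= enc[p] <= hi] == list(range(50, 101))
```

This is exactly what makes OPE attractive for databases: the server can run comparisons and range scans on ciphertexts, and exactly what makes its security analysis delicate, since ciphertext order necessarily leaks plaintext order.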
Two improved authenticated multiple key exchange protocols
Many authenticated multiple key exchange protocols have been published in recent years. In 2008, Lee et al. presented an authenticated multiple key exchange protocol based on bilinear pairings. However, Vo et al. demonstrated an impersonation attack on the protocol, showing that it failed to provide the authenticity and perfect forward secrecy it claimed. Later, Vo et al. proposed an enhanced protocol that conforms to all the desirable security properties. However, Vo et al.'s protocol requires every party to hold every other party's public key, which demands a large amount of storage. In this paper, we propose two new authenticated multiple key exchange protocols based on Lee et al.'s protocol and make them immune to Vo et al.'s attacks.
Multiparty Computation for Modulo Reduction without Bit-Decomposition and A Generalization to Bit-Decomposition
Uncategorized
Uncategorized
Bit-decomposition, proposed by Damgård \emph{et al.}, is a powerful tool for multi-party computation (MPC). Given a sharing of a secret $x$, it allows the parties to compute sharings of the bits of $x$ in constant rounds. With the help of bit-decomposition, constant-round protocols for various MPC problems can be constructed. However, bit-decomposition is relatively expensive, so constructing protocols for MPC problems without relying on it is worthwhile. In multi-party computation, it remains an open problem whether the \emph{modulo reduction problem} can be solved in constant rounds without bit-decomposition.
In this paper, we propose a protocol for (public) modulo reduction that does not rely on bit-decomposition. The protocol achieves constant round complexity and linear communication complexity. Moreover, we show a generalized bit-decomposition protocol which can, in constant rounds, convert a sharing of a secret $x$ into sharings of the digits of $x$, along with sharings of the bits of every digit. The digits can be base-$m$ for any $m \geq 2$; when $m$ is a power of 2, this generalized protocol reduces to the original bit-decomposition protocol.
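In plaintext, the digit conversion that the generalized protocol computes on sharings is just repeated division by the base. The following sketch (a hypothetical helper, not part of the paper's protocol) shows the target representation: base-$m$ digits of $x$, least significant first, each with its own bit-decomposition:

```python
def to_base_m_digits(x, m, num_digits):
    """Return [(digit, bits), ...] for x in base m, least significant
    digit first; `bits` is the digit's own bit-decomposition (LSB first).
    This is the plaintext analogue of the generalized protocol's output;
    the protocol itself computes the same values on secret sharings."""
    bits_per_digit = max(1, (m - 1).bit_length())
    digits = []
    for _ in range(num_digits):
        x, d = divmod(x, m)
        digits.append((d, [(d >> i) & 1 for i in range(bits_per_digit)]))
    return digits

# For m = 2 this degenerates to ordinary bit-decomposition.
```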
CCA-Secure Unidirectional Proxy Re-Encryption in the Adaptive Corruption Model without Random Oracles
Proxy re-encryption (PRE), introduced by Blaze, Bleumer and Strauss in Eurocrypt'98, allows a semi-trusted proxy to convert a ciphertext originally intended for Alice into an encryption of the same message intended for Bob. PRE has recently drawn great interest, and many interesting PRE schemes have been proposed. However, it has remained an open question to construct a chosen-ciphertext secure unidirectional PRE in the adaptive corruption model. To address this problem, we propose a new unidirectional PRE scheme and prove its chosen-ciphertext security in the adaptive corruption model without random oracles. Compared with the best known unidirectional PRE scheme, proposed by Libert and Vergnaud in PKC'08, our scheme enjoys the advantages of both higher efficiency and stronger security.
Cryptographic Extraction and Key Derivation: The HKDF Scheme
In spite of the central role of key derivation functions (KDF) in applied cryptography, there has been little formal work addressing the design and analysis of general multi-purpose KDFs. In practice, most KDFs (including those widely standardized) follow ad-hoc approaches that treat cryptographic hash functions as perfectly random functions. In this paper we close some gaps between theory and practice by contributing to the study and engineering of KDFs in several ways. We provide detailed rationale for the design of KDFs based on the extract-then-expand approach; we present the first general and rigorous definition of KDFs and their security which we base on the notion of computational extractors; we specify a concrete fully practical KDF based on the HMAC construction; and we provide an analysis of this construction based on the extraction and pseudorandom properties of HMAC. The resultant KDF design can support a large variety of KDF applications under suitable assumptions on the underlying hash function; particular attention and effort is devoted to minimizing these assumptions as much as possible for each usage scenario.
Beyond the theoretical interest in modeling KDFs, this work is intended to address two important and timely needs of cryptographic applications: (i) providing a single hash-based KDF design that can be standardized for use in multiple and diverse applications, and (ii) providing a conservative, yet efficient, design that exercises much care in the way it utilizes a cryptographic hash function.
(The HMAC-based scheme presented here, named HKDF, is being standardized by the IETF.)
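The extract-then-expand structure can be sketched in a few lines of Python with the standard `hmac` module. This is a minimal illustration of the two phases (HKDF itself is specified in RFC 5869), not a replacement for a vetted library implementation:

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # Extract phase: concentrate the (possibly non-uniform) input
    # keying material into a fixed-length pseudorandom key (PRK).
    if not salt:
        salt = b"\x00" * hashlib.sha256().digest_size
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    # Expand phase: stretch the PRK into `length` bytes of output
    # keying material, binding in the context string `info`.
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]
```

Distinct `info` strings derive independent keys from a single PRK, which is what lets one extraction serve several usage scenarios.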
Last updated: 2010-05-13
Lattice Reduction and Polynomial Solving
In this paper, we suggest a generalization of Coppersmith's method for finding integer roots of a multivariate polynomial. Our generalization allows finding integer solutions of a system of $k$ multivariate polynomial equations. We then apply our method to the so-called implicit factoring problem, which constitutes the main contribution of this paper. The problem is as follows: let $N_1 = p_1 q_1$ and $N_2 = p_2 q_2$ be two RSA moduli of the same bit-size, where $q_1, q_2$ are $\alpha$-bit primes. We are given the \emph{implicit} information that $p_1$ and $p_2$ share $t$ most significant bits. We present a novel and rigorous lattice-based method that leads to the factorization of $N_1$ and $N_2$ in polynomial time as soon as $t \ge 2 \alpha + 3$. Subsequently, we heuristically generalize the method to $k$ RSA moduli $N_i = p_i q_i$ where the $p_i$'s all share $t$ most significant bits (MSBs) and obtain an improved bound on $t$ that converges to $t \ge \alpha + 3.55\ldots$ as $k$ tends to infinity. This paper extends the work of May and Ritzenhofen in \cite{DBLP:conf/pkc/MayR09}, where similar results were obtained when the $p_i$'s share least significant bits (LSBs). In \cite{sarkar2009further}, Sarkar and Maitra describe an alternative but heuristic method for only two RSA moduli, when the $p_i$'s share LSBs and/or MSBs, or bits in the middle. In the case of shared MSBs or bits in the middle and two RSA moduli, they get better experimental results in some cases, but we use much lower (at least 23 times lower) lattice dimensions. Our results rely on the following surprisingly simple algebraic relation in which the shared MSBs of $p_1$ and $p_2$ cancel out: $q_1 N_2 - q_2 N_1 = q_1 q_2 (p_2 - p_1)$. This relation allows us to build a lattice whose shortest vector yields the factorization of the $N_i$'s.
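The key relation is purely algebraic and easy to check numerically. The following toy example (tiny handpicked primes, far below cryptographic size) verifies it and shows why shared MSBs make the left-hand side small:

```python
# Toy check of the identity q1*N2 - q2*N1 = q1*q2*(p2 - p1);
# parameters are illustrative only, far below cryptographic size.
p1, p2 = 1009, 1013      # primes sharing their most significant bits
q1, q2 = 11, 13          # small "alpha-bit" primes
N1, N2 = p1 * q1, p2 * q2

lhs = q1 * N2 - q2 * N1
assert lhs == q1 * q2 * (p2 - p1)
# Since p1 and p2 share MSBs, p2 - p1 is tiny, so this integer
# combination of N1 and N2 is unusually small -- the property the
# lattice construction exploits to recover q1 and q2.
```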
Cube Test Analysis of the Statistical Behavior of CubeHash and Skein
This work analyzes the statistical properties of the SHA-3 candidate cryptographic hash algorithms CubeHash and Skein to try to find nonrandom behavior. Cube tests were used to probe each algorithm's internal polynomial structure for a large number of choices of the polynomial input variables. The cube test data were calculated on a 40-core hybrid SMP cluster parallel computer. The cube test data were subjected to three statistical tests: balance, independence, and off-by-one. Although isolated statistical test failures were observed, the balance and off-by-one tests did not find nonrandom behavior overall in either CubeHash or Skein. However, the independence test did find nonrandom behavior overall in both CubeHash and Skein.
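The core operation of a cube test is XOR-summing the output bit over all assignments of a chosen set of "cube" variables, which isolates the superpoly of that cube. A toy illustration over GF(2), using a hypothetical example polynomial rather than either hash function:

```python
from itertools import product

def superpoly_eval(f, cube_idx, base):
    """XOR f over all 2^d assignments of the cube variables,
    with the remaining variables fixed as in `base`; the result
    is the superpoly of the cube evaluated at `base`."""
    acc = 0
    for bits in product((0, 1), repeat=len(cube_idx)):
        x = list(base)
        for i, b in zip(cube_idx, bits):
            x[i] = b
        acc ^= f(x)
    return acc

# Toy polynomial f = x0*x1*x2 + x0*x1 + x3 over GF(2);
# the cube {x0, x1} has superpoly x2 + 1.
f = lambda x: (x[0] & x[1] & x[2]) ^ (x[0] & x[1]) ^ x[3]
```

Statistical tests such as balance then check whether, over many choices of the fixed variables, the superpoly values look like those of a random function.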
Links Between Theoretical and Effective Differential Probabilities: Experiments on PRESENT
Recent iterated ciphers have been designed to be resistant to differential cryptanalysis. This implies that cryptanalysts have to deal with differentials whose probabilities are so small that, for a fixed key, the whole codebook may not suffice to detect them. The question, then, is whether these theoretically computed small probabilities are meaningful. We present here an in-depth study of differential and differential-trail probabilities, supported by experimental results obtained on a reduced version of PRESENT.
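The theoretical probabilities in such a study are derived from the S-box's difference distribution table (DDT). A short sketch computing it for the 4-bit PRESENT S-box (S-box values taken from the PRESENT specification):

```python
# Difference distribution table (DDT) of the PRESENT S-box.
# ddt[a][b] counts inputs x with S(x ^ a) ^ S(x) == b; dividing an
# entry by 16 gives the theoretical differential probability used
# when estimating a trail's probability under independence assumptions.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

ddt = [[0] * 16 for _ in range(16)]
for a in range(16):
    for x in range(16):
        ddt[a][SBOX[x ^ a] ^ SBOX[x]] += 1

# Best entry for a nonzero input difference: 4/16 = 2^-2 per active
# S-box; a trail's theoretical probability is the product of such
# per-S-box probabilities, which is what the experiments test.
best = max(ddt[a][b] for a in range(1, 16) for b in range(16))
```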