All papers in 2009 (Page 4 of 638 results)
Security weaknesses in two multi-server password based authentication protocols
In 2004 and 2005, Tsaur et al. proposed smart card based password authentication schemes for multi-server environments. They claimed that their protocols are safe and can withstand various kinds of attacks. However, our analysis shows that each of their schemes has security loopholes. In this article, we show the security flaws in these two protocols.
A New Lattice-Based Cryptosystem Mixed with a Knapsack
In this paper, we present a new lattice-based public-key
cryptosystem mixed with a knapsack, which has reasonable key size
and quick encryption and decryption. The module strategy in our
cryptosystem can also be used to construct a framework for some
GGH-type cryptosystems to improve their security.
Partial Signatures and their Applications
We introduce Partial Signatures, where a signer, given a message, can
compute a ``stub'' which preserves her anonymity, yet later she, but
nobody else, can complete the stub to a full and verifiable signature
under her public key. We provide a formal definition requiring three
properties, namely anonymity, unambiguity and unforgeability. We
provide schemes meeting our definition both with and without random
oracles. Our schemes are surprisingly cheap in both bandwidth and
computation. We describe applications including anonymous bidding and
betting.
Related-Key Rectangle Attack of the Full 80-Round HAS-160 Encryption Mode
In this paper we investigate the security of the encryption mode of the HAS-160 hash function. HAS-160 is a Korean hash standard which is widely used in Korea's industry. The structure of HAS-160 is similar to SHA-1 but includes some improvements. The encryption mode of HAS-160 is defined similarly to the encryption mode of SHA-1, which is called SHACAL-1. In 2006, Dunkelman et al. successfully broke the full 80-round SHACAL-1. In this paper, we present the first cryptographic attack that breaks the encryption mode of the full 80-round HAS-160. SHACAL-1 and the encryption mode of HAS-160 are both block ciphers with a key size of 512 bits and a plain-/ciphertext size of 160 bits.
We will apply a key recovery attack that needs about 2^{155} chosen plaintexts and 2^{375.98} 80-round HAS-160 encryptions. The attack does not aim for a collision, preimage or 2nd-preimage attack, but it shows that HAS-160 used as a block cipher can be differentiated from an ideal cipher faster than exhaustive search.
Attacking Reduced Rounds of the ARIA Block Cipher
ARIA is a block cipher proposed at ICISC'03. Its design is very similar to the Advanced Encryption Standard (AES). The authors claim that on 32-bit processors the encryption speed is at least 70% of that of the AES, and that the cipher offers a higher security level than AES. In this paper we present two attacks on reduced-round ARIA which show some weaknesses of the cipher. Moreover, our attacks have the lowest memory requirements of the existing attacks on ARIA, at the cost of an increase in time complexity.
Hard Fault Analysis of Trivium
Fault analysis is a powerful attack on stream ciphers. Up to now,
the main idea of fault analysis has been to simplify the cipher system by
injecting soft faults; we call this soft fault analysis. As a
hardware-oriented stream cipher, Trivium is weak under soft fault
analysis.
In this paper we consider another type of fault analysis of stream
ciphers, which simplifies the cipher system by injecting
hard faults. We call this hard fault analysis. We present the
following results on such attacks against Trivium. In Case 1, with
probability not smaller than 0.2396, the attacker can obtain 69 bits
of the 80-bit key. In Case 2, with probability not smaller than
0.2291, the attacker can obtain all 80 key bits. In Case 3, with
probability not smaller than 0.2291, the attacker can partially
solve for the key. In Case 4, with non-negligible probability, the
attacker obtains a simplified cipher, with a smaller number of
state bits and a slower non-linearization procedure. In Case 5, with
non-negligible probability, the attacker obtains another
simplified cipher. Moreover, which of these 5 cases applies can be
determined by observing the key stream.
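For readers unfamiliar with the cipher being analyzed, the following is a minimal bit-level Python model of Trivium's 288-bit state and keystream generation, written from the published eSTREAM specification (it has not been checked against official test vectors here). The AND terms in the state update are the non-linear part that fault injection targets.

```python
def trivium_keystream(key_bits, iv_bits, n):
    """Generate n keystream bits from 80-bit key and IV bit lists."""
    assert len(key_bits) == 80 and len(iv_bits) == 80
    # Load the 288-bit state: (key || 0^13), (iv || 0^4), (0^108 || 111)
    s = list(key_bits) + [0] * 13 + list(iv_bits) + [0] * 4 + [0] * 108 + [1, 1, 1]
    out = []
    for i in range(4 * 288 + n):          # 1152 blank warm-up rounds first
        t1 = s[65] ^ s[92]                # s66 + s93   (1-indexed in the spec)
        t2 = s[161] ^ s[176]              # s162 + s177
        t3 = s[242] ^ s[287]              # s243 + s288
        if i >= 4 * 288:
            out.append(t1 ^ t2 ^ t3)      # keystream bit
        t1 ^= (s[90] & s[91]) ^ s[170]    # AND terms: the non-linear part
        t2 ^= (s[174] & s[175]) ^ s[263]
        t3 ^= (s[285] & s[286]) ^ s[68]
        # Shift the three registers, feeding each with the previous one's t
        s = [t3] + s[0:92] + [t1] + s[93:176] + [t2] + s[177:287]
    return out
```

A hard fault that pins state bits to fixed values would shrink the effective state and weaken the non-linear update above, which is the effect the five cases in the abstract quantify.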
Untraceable RFID protocols are not trivially composable: Attacks on the revision of EC-RAC
It is well-known that protocols that satisfy a security property when
executed in isolation do not necessarily satisfy the same security
property when they are executed in an environment containing other
protocols.
We demonstrate this fact on a family of recently proposed RFID
protocols by Lee, Batina, and Verbauwhede. We invalidate the authentication and untraceability claims made for several of the family's protocols.
We also present man-in-the-middle attacks on untraceability in all of the protocols in the family. Similar attacks can be carried out on some other protocols in the literature, as well.
We briefly indicate how to repair the protocols.
Security Notions and Generic Constructions for Client Puzzles
Computational puzzles are mildly difficult computational problems that require
resources (processor cycles, memory, or both) to solve. Puzzles have
found a variety of uses in security.
In this paper we are concerned with {\em client puzzles}: a type of puzzle
used as a defense against Denial of Service (DoS) attacks.
Before engaging in a resource consuming protocol with a client, a server
demands that the client solves a freshly generated client puzzle.
Despite their widespread use, the lack of formal models for security of client
puzzles prevents a full analysis of proposed puzzles and, more
importantly, prevents rigorous proofs for the effectiveness of puzzles as
a DoS defense.
The main contribution of this paper is a formal model for the security of
client puzzles as a stepping stone towards solving the above problems.
We clarify the interface that client puzzles should offer and give two
security notions for puzzles.
Both functionality and security are inspired by, and tailored to, the use of
puzzles as a defense against DoS attacks.
The first notion -- puzzle unforgeability -- requires that an adversary is
unable to produce valid looking puzzles on its own. The second notion --
puzzle-difficulty -- requires that an adversary spends at least an
appropriate amount of resources solving puzzles.
Our definitions fill an important gap: breaking either of the two
properties immediately leads to successful DoS attacks.
We illustrate this point with an attack against a previously proposed
puzzle construction.
We show that a subtle flaw renders the construction forgeable and we
explain how to exploit this flaw to mount a DoS attack on certain
protocols that use this puzzle.
We also provide a generic construction of a client puzzle.
Our construction uses a pseudorandom function family to provide unforgeability
and a one way function for the difficulty.
We prove our generic construction meets our definitions of unforgeability and
difficulty for client puzzles.
Finally, we discuss and analyze (in the random oracle model) a practical
instantiation of our construction based on hash functions.
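The generic recipe described above, a PRF for unforgeability plus a one-way function for difficulty, can be illustrated with a toy instance. This is an illustrative sketch, not the paper's exact construction: HMAC-SHA256 plays the PRF, and difficulty comes from brute-forcing a short hash preimage.

```python
import hashlib
import hmac
import os

SERVER_KEY = os.urandom(32)   # PRF key; known only to the server
DIFFICULTY_BYTES = 2          # client must brute-force a 16-bit secret

def make_puzzle(client_id: bytes):
    """Server side: a fresh puzzle bound to the client via HMAC (the PRF)."""
    nonce = os.urandom(16)
    secret = hmac.new(SERVER_KEY, nonce + client_id,
                      hashlib.sha256).digest()[:DIFFICULTY_BYTES]
    target = hashlib.sha256(nonce + client_id + secret).digest()
    return nonce, target      # the puzzle; `secret` itself is never sent

def solve_puzzle(client_id: bytes, nonce: bytes, target: bytes) -> bytes:
    """Client side: invert the one-way function by exhaustive search."""
    for i in range(256 ** DIFFICULTY_BYTES):
        cand = i.to_bytes(DIFFICULTY_BYTES, "big")
        if hashlib.sha256(nonce + client_id + cand).digest() == target:
            return cand
    raise ValueError("no solution found")

def verify_solution(client_id: bytes, nonce: bytes, solution: bytes) -> bool:
    """Server side: recompute the PRF output; no per-puzzle state is stored."""
    expected = hmac.new(SERVER_KEY, nonce + client_id,
                        hashlib.sha256).digest()[:DIFFICULTY_BYTES]
    return hmac.compare_digest(expected, solution)
```

Without the PRF key, an adversary cannot mint valid-looking puzzles (unforgeability); solving each puzzle costs about 2^15 hash evaluations on average (difficulty).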
Last updated: 2009-08-04
NTRU, quaternion algebra, public key cryptography
We propose QTRU, a probabilistic and multi-dimensional public key cryptosystem based on the NTRU public key cryptosystem, using quaternion algebra. QTRU encrypts four data vectors in each encryption session, and the only other major difference between NTRU and QTRU is that the underlying algebraic structure has been changed to a non-commutative one. As a result, QTRU inherits the strength of NTRU and its positive points. In addition, the non-commutativity of the underlying structure of QTRU makes it much more resistant to some lattice-based attacks. After a brief description of NTRU, we begin by describing the algebraic structure
used in QTRU. Further, we present the details of the key generation, encryption and decryption algorithms of QTRU and discuss the issues regarding key security, message security, and the probability of successful decryption. Last but not least, QTRU's resistance against lattice-based attacks is investigated.
Last updated: 2009-10-05
Efficient Approximation of Higher Order Boolean function in a Low Order Function
A few non-linear approximation methods for Boolean functions
have been developed, but they are not of practical use. However,
if a low-order Boolean function can be found that closely approximates
the higher-order Boolean function of an encryption scheme,
then the low-order function can be used to attack the cipher.
Such a technique can therefore become a strong cryptanalytic tool.
In this article, an efficient method is devised to find a
non-linear, low-degree approximation of a Boolean function. The algorithm
is based on a non-linear filter generator followed by solving equations
over the Galois field GF(2). The execution time of the proposed
algorithm for finding the best approximations is far lower than that of
brute-force search, making the method efficient and of practical use.
Flowchart description of security primitives for Controlled Physical Unclonable Functions
Physical Unclonable Functions (PUFs) are
physical objects that are unique, practically unclonable and that
behave like a random function when subjected to a challenge.
Their use has been proposed for
authentication tokens and anti-counterfeiting.
A Controlled PUF (CPUF) consists of a PUF and a control layer
that restricts a user's access to the PUF input and output.
CPUFs can be used for secure key storage, authentication, certified
execution of programs, and certified measurements.
In this paper we modify a number of protocols
involving CPUFs in order to improve their security.
Our modifications mainly consist of encrypting a
larger portion of the message traffic and of placing
additional restrictions on access to the CPUF.
We simplify the description of CPUF protocols by using
flowchart notation.
Furthermore we explicitly show how the helper data for the PUFs
is handled.
Simple Adaptive Oblivious Transfer Without Random Oracle
Adaptive oblivious transfer (adaptive OT) schemes have wide applications such as oblivious database searches, secure multiparty computation, etc. It is a two-party protocol which simulates an ideal world such that the sender sends $M_1, \cdots, M_n$ to the trusted third party (TTP) first, and then the receiver receives $M_{\sigma_i}$ from the TTP adaptively for $i=1,2,\cdots,k$. In the standard model, however, the fully simulatable schemes known so far had to rely on dynamic assumptions such as the $q$-strong DH assumption, the $q$-PDDH assumption and the $q$-hidden LRSW assumption.
This paper shows two fully simulatable adaptive OT schemes which do not rely on dynamic assumptions in the standard model. Our first scheme holds under the DDH assumption and our second scheme holds under Paillier's decisional $N$th residuosity assumption.
The Application of Polynomials over the Field of Two Elements to a Problem in Intellectual Property
It is a routine task to convert a digital circuit to a system
of polynomial equations over GF(2). A very well-studied set
of tools called ``SAT-solvers'', which solve Conjunctive Normal
Form logical satisfiability problems, can be used to determine
if two circuits are equivalent, as is commonly done in
computer engineering. An interesting problem in intellectual
property is to determine if two circuits are identical but
subject to an unknown permutation of the inputs and outputs.
This could be of interest if a manufacturer suspects his
encryption circuit has been stolen. While this is easily shown
to be a sub-problem of the famously hard “isomorphism of
polynomials” problem, we show that in fact it can be easily
solved via conversion to a polynomial system of equations over
GF(2), and is only slightly harder than if the permutations
were in fact known.
Characterizing Padding Rules of MD Hash Functions Preserving Collision Security
This paper characterizes collision preserving padding rules and provides variants of Merkle-Damg\aa rd (MD) which have little or no overhead cost due to length padding. We first show that the suffix-free property of a padding rule is necessary as well as sufficient to preserve the collision security of the MD hash function for an arbitrary domain $\{0,1\}^*$. Knowing this, we propose a simple suffix-free padding rule padding only $\log |M|$ bits for a message $M$, which is less than that of Damg\aa rd's and Sarkar's padding rules. We also prove that length-padding is not absolutely necessary. We show that a simple variant of MD with $10^d$-padding (or any injective padding) is collision resistant provided that the underlying compression function is collision resistant after chopping the last bit. Finally, we design another variant of the MD hash function preserving all three basic security notions of hash functions, namely collision resistance and (second) preimage resistance. This is an improvement over a recently designed (SAC-08) three-property-preserving hash function in terms of both salt size and efficiency.
Group-Oriented Fair Exchange of Signatures
In an Optimistic Fair Exchange (OFE) for digital signatures, two parties exchange their signatures fairly without requiring any online trusted third party. The third party is only involved when a dispute occurs. In all the previous work, OFE has been considered only in a setting where both of the communicating parties are individuals. There is little work discussing fair exchange between two \emph{groups} of users, though this is actually a common scenario in practical OFE applications. In this paper, we introduce a new variant of OFE, called
\emph{Group-Oriented Optimistic Fair Exchange} (GOFE). A GOFE allows two users from two different groups to exchange signatures on behalf of their groups in a fair and anonymous manner. Although GOFE may be considered as a fair exchange for group signatures, it might be inefficient if it is
constructed generically from a group signature scheme. Instead, we show that GOFE is \emph{backward compatible} with the Ambiguous OFE (AOFE). Also, we propose an efficient and concrete construction of GOFE, and prove its security under the security models we propose in this paper. The security of the scheme relies on the decision linear assumption and the strong Diffie-Hellman assumption in the random oracle model.
Factoring Unbalanced Moduli with Known Bits
Let $n = pq > q^3$ be an RSA modulus. This note describes an LLL-based method allowing one to factor $n$ given $2\log_2 q$ contiguous bits of $p$, irrespective of their position. A second method is presented, which needs fewer bits but whose length depends on the position of the known bit pattern. Finally, we introduce a somewhat surprising ad hoc method where two different known bit chunks, totalling $\frac{3}{2}\log_2 q$ bits, suffice to factor $n$.
Certifying Assembly with Formal Cryptographic Proofs: the Case of BBS
With today's dissemination of embedded systems manipulating sensitive data, it has become important to equip low-level programs with strong security guarantees. Unfortunately, security proofs as done by cryptographers are about algorithms, not about concrete implementations running on hardware. In this paper, we show how to extend security proofs to guarantee the security of assembly implementations of cryptographic primitives. Our approach is based on a framework in the Coq proof-assistant that integrates correctness proofs of assembly programs with game-playing proofs of provable security. We demonstrate the usability of our approach using the Blum-Blum-Shub (BBS) pseudorandom number generator, for which a MIPS implementation for smartcards is shown cryptographically secure.
Tweakable Enciphering Schemes From Stream Ciphers With IV
We present the first construction of a tweakable enciphering scheme from a stream cipher
supporting an initialization vector. This construction can take advantage of the recent
advances in hardware efficient stream ciphers to yield disk encryption systems with a very
small hardware footprint. Such systems will be attractive for resource constrained devices.
Automorphic Signatures in Bilinear Groups and an Application to Round-Optimal Blind Signatures
We introduce the notion of automorphic signatures, which satisfy the following properties: the verification keys lie in the message space, messages and signatures consist of elements of a bilinear group, and verification is done by evaluating a set of pairing-product equations.
These signatures make a perfect counterpart to the powerful proof system by Groth and Sahai (Eurocrypt 2008). We provide practical instantiations of automorphic signatures under appropriate assumptions and use them to construct the first efficient round-optimal blind signatures. By combining them with Groth-Sahai proofs, we moreover give practical instantiations of various other cryptographic primitives, such as fully-secure group signatures, non-interactive anonymous credentials and anonymous proxy signatures. To do so, we show how to transform signature schemes whose message space is a group to a scheme that signs arbitrarily many messages at once.
Comments and Improvements on Chameleon Hashing Without Key Exposure Based on Factoring
In this paper, we present some security flaws of the key-exposure
free chameleon hash scheme based on factoring \cite{GWX07}.
Besides, we propose an improved chameleon hash scheme without key
exposure based on factoring which enjoys all the desired security
notions of chameleon hashing.
The Fermat factorization method revisited
We consider the well known Fermat factorization method ({\it FFM}) when it is applied on a balanced RSA modulus $N=p\, q>0$, with primes $p$ and $q$ supposed of equal length. We call the {\it Fermat factorization equation} the equation (and all the possible variants) solved by the FFM like ${\cal P}(x,y)=(x+2R)^2-y^2-4N=0$ (where $R=\lceil N^{1/2} \rceil$).
These equations are bivariate integer polynomial equations and we propose to solve them directly using Coppersmith's methods for bivariate integer polynomials. As we use them as a black box, our proofs will be brief.
We show first that, using Coppersmith's methods, we can factor $N$ in polynomial time if $|p-q|<N^{3/14}$, with
$3/14 \approx 0.214$, and,
using the fact that the Newton polygon of ${\cal P}(x,y)$ is a lower triangle,
we show a better result: we can indeed factor $N$ in polynomial time if $|p-q|<N^{1/4}$.
Unfortunately this shows that using Coppersmith's methods for bivariate integer polynomials is no better than the FFM, because in that case the FFM is immediate. This is confirmed by numerical experiments.
We then propose another method: solving the {\it modular} Fermat factorization equation
$ (x+2R)^2-y^2=0 \mod 4N $.
Since Coppersmith's methods for {\it modular} multivariate integer polynomial equations are {\it empirical}, relying on the famous {\it ``resultant heuristic''}, we get
only an empirical method that can factor $N$ in polynomial time if $|p-q|<N^{1/3}$.
We conclude with proposals for future works.
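For reference, the basic FFM that the bounds above are compared against fits in a few lines. It searches for an $x \ge \lceil\sqrt{N}\rceil$ such that $x^2 - N$ is a perfect square; the search takes roughly $(q-p)^2/(8\sqrt{N})$ steps, which is why the method is immediate when $|p-q|$ is small.

```python
from math import isqrt

def fermat_factor(n):
    """Factor odd n = p*q by finding x >= ceil(sqrt(n)) with x^2 - n a square."""
    x = isqrt(n)
    if x * x < n:
        x += 1                    # start at ceil(sqrt(n))
    while True:
        y2 = x * x - n
        y = isqrt(y2)
        if y * y == y2:           # x^2 - n = y^2, so n = (x - y)(x + y)
            return x - y, x + y
        x += 1
```

For a balanced modulus, $x$ converges to $(p+q)/2$ and $y$ to $(q-p)/2$, exactly the quantities the Fermat factorization equation ${\cal P}(x,y)=0$ encodes.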
Related-key Cryptanalysis of the Full AES-192 and AES-256
In this paper we present two related-key attacks on the full AES. For AES-256 we show the first key recovery attack that works for all the keys and has $2^{99.5}$ time and data complexity, while the recent attack by Biryukov-Khovratovich-Nikolic works for a weak key class and has much higher complexity. The second attack is the first cryptanalysis of the full AES-192. Both our attacks are boomerang attacks, which are based on the recent idea of finding local collisions in block ciphers and enhanced with the boomerang switching techniques to gain free rounds in the middle.
An Efficient Password Security of Key Exchange Protocol based on ECDLP
In this paper we propose an efficient password-based three-party and multiparty key exchange protocol based on the Elliptic Curve Discrete Logarithm Problem. Three-party key exchange protocols allow two clients, who have registered their passwords with a trusted server, to communicate over a public network and establish a common secret key called a session key. Similarly, multiparty key exchange protocols allow a group of parties
communicating over a public network to establish the session key. Due to their significance in building secure communication channels, a number of key exchange protocols have been suggested over the years for a variety of settings. Here we use two one-way hash functions to raise the level of security.
Breaking RSA-based PIN Encryption with thirty ciphertext validity queries
We show that one can recover the PIN from a standardised RSA-based PIN encryption algorithm from a small number of queries to a ciphertext validity checking oracle. The validity checking oracle required is rather special and we discuss whether such oracles could be obtained in
the real world. Our method works using a minor extension to the ideas
of Bleichenbacher and Manger, in particular we obtain information from negative, as well as positive, responses from the validity checking oracle.
Secure Two-Party Computation is Practical
Secure multi-party computation has been considered by the cryptographic community for a number of years. Until recently it has been a purely theoretical area, with few implementations with which to test various ideas. This has led to a number of optimisations being proposed which are quite restricted in their application. In this paper we describe an implementation of the two-party case, using Yao’s garbled circuits, and present various algorithmic protocol improvements. These optimisations are analysed both theoretically and empirically, using experiments of various adversarial situations. Our experimental data is provided for reasonably large circuits, including one which performs an AES encryption, a problem which we discuss in the context of various possible applications.
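As a toy illustration of the garbled-circuit technique mentioned above, the following garbles a single AND gate. It omits the point-and-permute optimisation and the oblivious transfer step, so it is a pedagogical sketch rather than a secure or complete protocol.

```python
import hashlib
import os
import random

def H(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()            # 32-byte one-time pad

def garble_and():
    """Garbler: pick two random labels per wire, encrypt the truth table."""
    labels = {w: (os.urandom(16), os.urandom(16)) for w in "abc"}
    table = []
    for i in (0, 1):
        for j in (0, 1):
            out = labels["c"][i & j] + b"\x00" * 16  # zero tag marks a valid row
            pad = H(labels["a"][i], labels["b"][j])
            table.append(bytes(x ^ y for x, y in zip(pad, out)))
    random.shuffle(table)                            # hide which row is which
    return labels, table

def evaluate(table, la: bytes, lb: bytes) -> bytes:
    """Evaluator: holds one label per input wire, learns one output label."""
    pad = H(la, lb)
    for ct in table:
        pt = bytes(x ^ y for x, y in zip(pad, ct))
        if pt[16:] == b"\x00" * 16:                  # only one row decrypts
            return pt[:16]
    raise ValueError("no row decrypted")
```

The evaluator learns the output label for the gate's result bit but nothing about the other wire values; chaining such gates yields Yao's construction for whole circuits.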
Identity Based Group Signatures from Hierarchical Identity-Based Encryption
A number of previous papers explored the notion of identity-based group signatures. We present a generic construction of identity-based group signatures. Our construction is based on the Naor transformation of an identity-based signature out of an identity-based encryption scheme, adjusted to hierarchical identity-based encryption. We identify sufficient conditions on the underlying HIBE so that the scheme that results
from our transformation meets our security definitions. Finally, we suggest a couple of extensions enabled by our construction, one of which is to hierarchical identity-based group signatures.
Jacobi Quartic Curves Revisited
This paper provides new results about efficient arithmetic on Jacobi quartic form elliptic curves, $y^2 = d x^4 + 2 a x^2 + 1$. With recent proposals, the arithmetic on Jacobi quartic curves became substantially faster than that of Weierstrass curves. These proposals use up to 7 coordinates to represent a single point. However, fast scalar multiplication algorithms based on windowing techniques precompute and store several points, which requires more space than it takes with 3 coordinates. Also note that some of these proposals require $d = 1$ for full speed. Unfortunately, elliptic curves having 2-times-a-prime number of points cannot be written in Jacobi quartic form if $d = 1$. Even worse, the contemporary formulae may fail to output correct coordinates for some inputs. This paper provides improved speeds using fewer coordinates without causing the above mentioned problems. For instance, our proposed point doubling algorithm takes only 2 multiplications, 5 squarings, and no multiplication with curve constants when $d$ is arbitrary and $a = \pm1/2$.
Multi Party Distributed Private Matching, Set Disjointness and Cardinality Set Intersection with Information Theoretic Security
In this paper, we focus on the specific problems of Private Matching, Set Disjointness and Cardinality Set Intersection in information theoretic settings. Specifically, we give perfectly secure protocols
for the above problems in n-party settings, tolerating a computationally unbounded semi-honest adversary who can passively corrupt at most t < n/2 parties. To the best of our knowledge, these are the first such
information theoretically secure protocols in a multi-party setting for all three problems. Previous solutions for Distributed Private Matching and Cardinality Set Intersection were cryptographically secure, and the
previous Set Disjointness solution, though information theoretically secure, is in a two-party setting. We also propose a new model for Distributed Private Matching which is relevant in a multi-party setting.
RFID distance bounding protocol with mixed challenges to prevent relay attacks
RFID systems suffer from different location-based attacks such as distance fraud, mafia fraud and terrorist fraud attacks. Among them, the mafia fraud attack is the most serious, since it can be mounted without the notice of either the reader or the tag. An adversary performs a kind of man-in-the-middle attack between the reader and the tag. It is very difficult to prevent this attack since the adversary does not change any data between the reader and the tag. Recently, distance bounding protocols measuring the round-trip time between the reader and the tag have been researched to prevent this attack.
All the existing distance bounding protocols based on binary challenges, without final signature, provide an adversary success probability equal to (3/4)^n where n is the number of rounds in the protocol. In this paper, we introduce a new protocol based on binary mixed challenges that converges toward the expected and optimal (1/2)^n bound. We prove its security in case of both noisy and non-noisy channels.
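A quick back-of-the-envelope computation shows what the gap between the (3/4)^n and (1/2)^n bounds means in practice. The round counts below follow directly from the stated bounds, not from any further detail of the protocols.

```python
import math

def adversary_success(base: float, n: int) -> float:
    """Adversary's success probability bound after n fast-phase rounds."""
    return base ** n

def rounds_needed(base: float, security_bits: int) -> int:
    """Rounds required to push the success probability below 2^-security_bits."""
    return math.ceil(security_bits / -math.log2(base))
```

Under the (3/4)^n bound, 78 rounds are needed to reach a 2^-32 cheating probability; a protocol achieving the optimal (1/2)^n bound needs only 32 rounds, a substantial saving for resource-constrained tags.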
Fault Attacks on RSA Signatures with Partially Unknown Messages
Fault attacks exploit hardware malfunctions to recover secrets from embedded electronic devices.
In the late 90's, Boneh, DeMillo and Lipton introduced fault-based attacks on {\sc crt-rsa}.
These attacks factor the signer's modulus when the message padding
function is deterministic. However, the attack does not apply when the
message is partially unknown, for example when messages contain some
randomness which is recovered only when
verifying a {\sl correct} signature.
In this paper we successfully extend RSA fault attacks to a large class
of partially known message configurations.
The new attacks rely on Coppersmith's algorithm for finding small roots of multivariate polynomial equations.
We illustrate the approach by successfully attacking several randomized versions of the ISO 9796-2 encoding standard.
Practical experiments show that a $2048$-bit modulus can be factored
in less than a minute given one faulty signature containing $160$
random bits and an unknown $160$-bit message digest.
A note on the Certificateless Multi-receiver Signcryption Scheme
Certificateless cryptography aims at combining the advantages of identity based and public key cryptography, so as to avoid the
key escrow problem inherent in identity based systems and the cumbersome certificate management of public key infrastructure. Signcryption
achieves confidentiality and authentication simultaneously in an e
Anonymous Signatures Revisited
We revisit the notion of the anonymous signature, first formalized by
Yang, Wong, Deng and Wang, and then further developed by
Fischlin, and Zhang and Imai.
We present a new formalism of anonymous signature, where instead of the message,
a part of the signature is withheld to maintain anonymity.
We introduce the notion of unpretendability
to guarantee that it is infeasible for someone other than the correct signer to
pretend authorship of the message and signature. Our definition retains
applicability for all previous applications of the anonymous signature,
provides stronger security, and is conceptually simpler.
We give a generic construction from any ordinary signature scheme, and also
show that the short signature scheme by Boneh and Boyen can be naturally
regarded as such a secure anonymous signature scheme according to
our formalism.
Authentic Time-Stamps for Archival Storage
We study the problem of authenticating the content and creation time of documents generated by an organization and retained in archival storage. Recent regulations (e.g., the Sarbanes-Oxley act and the Securities and Exchange Commission rule) mandate secure retention of important business records for several years. We provide a mechanism to authenticate bulk repositories of archived documents. In our approach, a space efficient local data structure encapsulates a full document repository in a short (e.g., 32-byte) digest. Periodically registered with a trusted party, these commitments enable compact proofs of both document creation time and content integrity. The data structure, an append-only persistent authenticated dictionary, allows for efficient proofs of existence and non-existence, improving on state-of-the-art techniques. We give a rigorous security analysis of our solution and confirm through an experimental evaluation with the Enron email corpus its feasibility in practice.
Improved generic algorithms for 3-collisions
An $r$-collision for a function is a set of $r$ distinct inputs with identical outputs. Actually finding $r$-collisions for a random map over a finite set of cardinality $N$ requires at least about $N^{(r-1)/r}$ units of time on a sequential machine. For $r=2$, memoryless and well-parallelisable algorithms are known. The current paper describes memory-efficient and parallelisable algorithms for $r \ge 3$. The main results are: (1)~A sequential algorithm for 3-collisions, roughly using memory $N^\alpha$ and time $N^{1-\alpha}$ for $\alpha\le1/3$. I.e., given $N^{1/3}$ units of storage, one can find 3-collisions in time $N^{2/3}$. Note that there is a time-memory tradeoff which allows one to reduce the memory consumption. (2)~A parallelisation of this algorithm using $N^{1/3}$ processors running in time $N^{1/3}$. Each single processor only needs a constant amount of memory. (3)~A generalisation of this second approach to $r$-collisions for $r \ge 3$: given $N^s$ parallel processors, one can generate $r$-collisions roughly in time $N^{((r-1)/r)-s}$, using memory $N^{((r-2)/r)-s}$ on every processor.
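For contrast with the memory-efficient algorithms described above, the naive table-based 3-collision search (the baseline such work improves upon) can be sketched on a truncated hash. It uses roughly $N^{2/3}$ evaluations on average, but it also stores every value it sees, which is exactly the cost the paper's algorithms avoid.

```python
import hashlib

BITS = 16                                   # toy output size: N = 2**16

def h(msg: bytes) -> int:
    """SHA-256 truncated to BITS bits, standing in for a random map."""
    return int.from_bytes(hashlib.sha256(msg).digest()[:BITS // 8], "big")

def find_r_collision(r: int = 3):
    """Hash counter values until some output has been hit r times."""
    buckets = {}                            # output -> list of inputs seen
    i = 0
    while True:
        m = i.to_bytes(8, "big")
        hits = buckets.setdefault(h(m), [])
        hits.append(m)
        if len(hits) == r:
            return hits
        i += 1
```

For $N = 2^{16}$ this finds a 3-collision after a few thousand evaluations, but the dictionary grows with every query; the paper's sequential algorithm caps memory at $N^{1/3}$ while keeping time at $N^{2/3}$.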
Factor-4 and 6 Compression of Cyclotomic Subgroups
Bilinear pairings derived from supersingular elliptic curves of embedding degrees 4 and 6 over finite fields of characteristic two and three, respectively, have been used to implement pairing-based cryptographic protocols. The pairing values lie in certain prime-order subgroups of certain cyclotomic subgroups. It was previously known how to compress the pairing values over characteristic two fields by a factor of 2, and the pairing values over characteristic three fields by a factor of 6. In this paper, we show how the pairing values over characteristic two fields can be compressed by a factor of 4. Moreover, we present and compare several algorithms for performing exponentiation in the prime-order subgroups using the compressed representations. In particular, in the case where the base is fixed, we expect to gain at least a 54% speed up over the fastest previously known exponentiation algorithm that uses factor-6 compressed representations.
Key extraction from general non-discrete signals
We address the problem of designing optimal schemes for the generation of secure cryptographic keys from continuous noisy data. We argue that, contrary to the discrete case, a universal fuzzy extractor does not exist.
This implies that in the continuous case, key extraction schemes have to be designed for particular probability distributions.
We extend the known definitions of the correctness and security properties of fuzzy extractors. Our definitions apply to continuous as well as discrete variables.
We propose a generic construction for fuzzy extractors from noisy continuous sources, using independent partitions.
The extra freedom in the choice of discretisation, which does not exist in the discrete case, is advantageously used to give the extracted key a uniform distribution.
We analyze the privacy properties of the scheme and the error probabilities in a one-dimensional toy model with simplified noise.
Finally, we study the security implications of incomplete knowledge of the source's probability distribution P.
We derive a bound on the min-entropy of the extracted key under the worst-case assumption that the attacker knows P exactly.
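The uniformity-by-discretisation idea can be illustrated with a toy one-dimensional Gaussian source: placing the cell boundaries at quantiles of the source distribution makes every cell equally likely, so the extracted symbol is uniform by construction. A minimal sketch with hypothetical helper names, not the paper's construction; error correction for the noise is omitted.

```python
from bisect import bisect_right
from statistics import NormalDist

def equiprobable_boundaries(num_cells, dist=NormalDist(0.0, 1.0)):
    """Cell boundaries at the quantiles of the source distribution,
    so each of the num_cells cells has probability 1/num_cells."""
    return [dist.inv_cdf(k / num_cells) for k in range(1, num_cells)]

def quantize(sample, boundaries):
    """Map a continuous sample to its cell index (the key symbol)."""
    return bisect_right(boundaries, sample)

bounds = equiprobable_boundaries(4)  # 2 uniform key bits per sample
# bounds[1] is the median (0 for a standard normal source), so the
# four cells each carry probability 1/4 under the source.
```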
Cryptanalysis of ESSENCE
ESSENCE is a hash function submitted to the NIST Hash Competition that stands out as a hardware-friendly and highly parallelizable design. Previous analysis showed some non-randomness in the compression function which could not be extended to an attack on the hash function and ESSENCE remained unbroken. Preliminary analysis in its documentation argues that it resists standard differential cryptanalysis. This paper disproves this claim, showing that advanced techniques can be used to significantly reduce the cost of such attacks: using a manually found differential characteristic and an advanced search algorithm, we obtain collision attacks on the full ESSENCE-256 and ESSENCE-512, with respective complexities 2^67.4 and 2^134.7. In addition, we show how to use these attacks to forge valid (message, MAC) pairs for HMAC-ESSENCE-256 and HMAC-ESSENCE-512, essentially at the same cost as a collision.
A Probabilistic Secret Sharing Scheme for a Compartmented Access Structure
In a compartmented access structure, there are disjoint sets of participants C1, ..., Cm. The access structure consists of subsets of participants containing at least ti members from Ci for i = 1, ..., m, and at least t0 participants in total. Tassa [2] asked whether there exists an efficient ideal secret sharing scheme for such an access structure. Tassa and Dyn [5] presented a solution using bivariate interpolation and the concept of a dual program [9, 10]. For practical applications, it is advantageous to have a simple scheme solving the problem. In this paper, a simple scheme is given for this problem, using an idea similar to that of [5].
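The access structure itself is easy to state in code; the sketch below (illustrative names, not from the paper) simply checks the two kinds of thresholds a qualified subset must meet.

```python
def authorized(subset, compartments, t, t0):
    """Check a compartmented access structure: at least t[i] members
    from compartment i, and at least t0 participants overall."""
    if len(subset) < t0:
        return False
    return all(len(subset & comp) >= ti
               for comp, ti in zip(compartments, t))

# Two compartments with thresholds (1, 2) and a global threshold of 4.
C = [{"a1", "a2", "a3"}, {"b1", "b2", "b3"}]
ok = authorized({"a1", "a2", "b1", "b2"}, C, [1, 2], 4)   # qualified
bad = authorized({"a1", "b1", "b2"}, C, [1, 2], 4)        # only 3 < t0
```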
Universally Composable Contributory Group Key Exchange
We treat the security of group key exchange (GKE) in the universal composability (UC) framework. Analyzing GKE protocols in the UC framework naturally addresses attacks by malicious insiders. We define an ideal functionality for GKE that captures contributiveness in addition to other desired security goals. We show that an efficient two-round protocol securely realizes the proposed functionality in the random oracle model. As a result, we obtain the most efficient UC-secure contributory GKE protocol known.
On the security of oscillator-based random number generators
Physical random number generators (a.k.a. TRNGs) appear to be
critical components of many cryptographic systems. Yet, such
building blocks are still too seldom provided with a formal
assessment of security, in comparison to what is achieved for
conventional cryptography. In this work, we present a comprehensive
statistical study of TRNGs based on the sampling of an oscillator
subject to phase noise (a.k.a. phase jitters). This classical
layout, typically instantiated with a ring oscillator, provides a
simple and attractive way to implement a TRNG on a chip. Our
mathematical study allows one to evaluate and control the main
security parameters of such a random source, including its entropy
rate and the biases of certain bit patterns, provided that a small
number of physical parameters of the oscillator are known. In order
to evaluate these parameters in a secure way, we also provide an
experimental method for filtering out the global perturbations
affecting a chip and possibly visible to an attacker. Finally, from
our mathematical model, we deduce specific statistical tests
applicable to the bit stream of a TRNG. In particular, in the case
of an insecure configuration, we show how to recover the parameters
of the underlying oscillator.
Cryptanalysis of Certificateless Signcryption Schemes and an Efficient Construction Without Pairing
Certificateless cryptography, introduced by Al-Riyami and Paterson, eliminates the key escrow problem inherent in identity-based cryptosystems. Even though building practical identity-based signcryption schemes without bilinear pairings is considered to be almost impossible, it is interesting to explore the possibility of constructing such systems in other settings, such as certificateless cryptography. For practical systems, bilinear pairings are often considered to induce computational overhead. Signcryption is a powerful primitive that offers both confidentiality and authenticity for noteworthy messages. Though some prior attempts were made at designing certificateless signcryption schemes, almost all of the known ones have security weaknesses. Specifically, in this paper we demonstrate the security weaknesses of the schemes in \cite{BF08}, \cite{DRJR08} and \cite{CZ08}. We also present the first provably secure certificateless signcryption scheme without bilinear pairings and prove it secure in the random oracle model.
Last updated: 2009-08-04
A New Improved Distinguisher for HC-128
In this paper, we present a new distinguisher for HC-128 which is the best known so far. The distinguisher requires approximately $2^{138}$ keystream words with success probability 0.9772.
Perfectly Balanced Functions in Symbolic Dynamics
In the present paper we study properties of perfectly balanced Boolean functions. Based on the concept of Boolean function barrier, we propose a novel approach to construct large classes of perfectly balanced Boolean functions.
Defending Against Key Abuse Attacks in KP-ABE Enabled Broadcast Systems
Key-Policy Attribute-Based Encryption (KP-ABE) is a promising
cryptographic primitive which enables fine-grained access control
over sensitive data. However, key abuse attacks in KP-ABE may impede
its wide application especially in copyright-sensitive systems. To
defend against such attacks, this paper proposes a novel KP-ABE scheme which can disclose any illegal key distributor's ID when key abuse is detected. In our scheme, each bit of a user's ID is defined as an attribute, and the user's secret key is associated with his unique ID. The tracing algorithm fulfills its task by tricking the pirate device into decrypting ciphertexts associated with the corresponding bits of his ID. Our proposed scheme has the salient property of black-box tracing, i.e., it traces back to the illegal key distributor's ID only by observing the pirate device's outputs on certain inputs. In addition, it does not require the pirate device's secret keys to be well-formed, in contrast to some previous work. Our proposed scheme is provably secure under the Decisional Bilinear Diffie-Hellman (DBDH) assumption and the Decisional Linear (DL) assumption.
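The bit-per-attribute encoding of user IDs can be sketched as follows (a toy illustration with invented attribute names; the actual scheme embeds these attributes in ABE secret keys and traces by probing decryptions, not set membership):

```python
def id_to_attributes(user_id, bits=8):
    """Encode each bit of a user ID as one attribute (invented names)."""
    return {"bit%d:%d" % (i, (user_id >> i) & 1) for i in range(bits)}

def key_embeds(key_attrs, probed_bits):
    """Tracing probe: does the key carry exactly these bit-attributes?"""
    return probed_bits <= key_attrs

alice_key = id_to_attributes(0b10110101)
# Probing bit 0 = 1 and bit 3 = 0 matches this ID; probing bit 1 = 1
# does not, so each probe reveals one bit of the embedded ID.
```

Probing a suspect key against ciphertexts tied to chosen bit positions narrows the embedded ID down one bit at a time, which is the essence of the black-box tracing property.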
Low Latency High Bandwidth Anonymous Overlay Network with Anonymous Routing
Most existing anonymous networks focus on providing strong anonymity at the price of lower bandwidth, higher latency, and degraded usability compared with conventional use of the Internet. They also often anonymize only a few specific applications.
In this paper, we propose a new approach to constructing an anonymous network. The network consists of an overlay network, which provides anonymity to all applications running on top of it, and a routing protocol, which can be considered an anonymized version of path vector routing. The protocol preserves the high-performance characteristics of path vector routing and has the added advantage of hiding the overlay network topology. Our simulation results show that the expected latency of our approach is 50% better than that of existing systems.
Besides the new anonymous routing protocol, this paper aims to provide a general overview of this new anonymous overlay network, which may serve as input for further research.
Enhancing Attribute-based Encryption with Attribute Hierarchy
Attribute-based encryption (ABE) has been envisioned as a promising cryptographic primitive for realizing secure and
flexible access control. However, ABE is criticized for its high overhead, as extensive pairing operations are usually required. In this paper, we focus on improving the efficiency of ABE by leveraging a previously overlooked fact, namely the often-found hierarchical relationships among the attributes that are inherent to many access control scenarios. As the first research effort in this direction, we coin the notion of hierarchical ABE (\textsf{HABE}), which can be viewed as a generalization of traditional ABE in the sense that the two definitions coincide when all attributes are independent. We further give a concrete \textsf{HABE} construction considering a tree hierarchy among the attributes, which is provably secure. More importantly, our construction exhibits significant improvements over traditional ABE when attribute hierarchies exist.
Implementing Wagner's generalized birthday attack against the SHA-3 round-1 candidate FSB
This paper applies generalized birthday attacks to the FSB
compression function, and shows how to adapt the attacks so that they
run in far less memory. In particular, this paper presents details of a
parallel implementation attacking FSB48, a scaled-down version of FSB
proposed by the FSB submitters. The implementation runs on a cluster of
8 PCs, each with only 8GB of RAM and 700GB of disk. This situation is
very interesting for estimating the security of systems against distributed
attacks using contributed off-the-shelf PCs.
Modeling Key Compromise Impersonation Attacks on Group Key Exchange Protocols
A key exchange protocol allows a set of parties to agree upon a secret session key over a public network. Two-party key exchange (2PKE) protocols have been rigorously analyzed under various models considering different adversarial actions. However, the analysis of group key exchange (GKE) protocols has not been as extensive as that of 2PKE protocols. Particularly, the security attribute of key compromise impersonation (KCI) resilience has so far been ignored for the case of GKE protocols. We first model the security of GKE protocols addressing KCI attacks by both outsider and insider adversaries. We then show that a few existing protocols are not secure even against outsider KCI attacks. The attacks on these protocols demonstrate the necessity of considering KCI resilience for GKE protocols. Finally, we give a new proof of security for an existing GKE protocol under the revised model assuming random oracles.
Security Analysis of Aggregate Signature and Batch Verification Signature Schemes
An identity-based signature scheme allows any pair of users to communicate securely and to verify each other's signatures without exchanging public key certificates. An aggregate signature scheme is a digital signature scheme which supports aggregation of signatures. Batch verification is a method to verify multiple signatures at once. Aggregate signatures are useful in reducing both communication and computation cost. In this paper, we describe attacks on some aggregate signature schemes and a batch verification scheme.
Analysis of the End-by-Hop Protocol for Secure Aggregation in Sensor Networks
In order to save bandwidth and thus battery power, sensor network measurements are sometimes aggregated en-route while being reported back to the querying server. Authentication of the measurements then becomes a challenge if message integrity is important for the application.
At ESAS 2007, the End-by-Hop protocol for securing in-network aggregation for sensor nodes was presented. The solution was claimed to be secure and efficient and to provide the possibility of trading off bandwidth against computation time on the server.
In this paper, we disprove these claims. We describe several attacks against the proposed solution and point out shortcomings in the original complexity analysis. In particular, we show that the proposed solution is inferior to a naive solution without in-network aggregation both in security and in efficiency.
Efficient Key Exchange with Tight Security Reduction
In this paper, we propose two authenticated key exchange (AKE) protocols, SMEN and SMEN−, which have efficient online computation and a tight security proof in the extended Canetti-Krawczyk (eCK) model. SMEN takes 1.25 exponentiations in online computation, close to the 1.17 exponentiations of the most efficient AKE, MQV, and its variants HMQV and CMQV. SMEN has a security reduction as tight as that of NAXOS, which is the first AKE with a tight security reduction in the eCK model. By comparison, MQV does not have a security proof; both HMQV and CMQV have a highly non-tight security reduction, and HMQV needs a non-standard assumption; NAXOS takes 2.17 exponentiations in online computation; NETS, a NAXOS variant, takes two exponentiations in online computation. SMEN simultaneously
achieves online efficiency and a tight security proof at the cost of 0.17 more exponentiations in offline computation and the restriction that a party is not allowed to establish a key with itself. SMEN− takes 1.29 exponentiations in online computation, but does not use the static private key to compute the ephemeral public key (as is done in SMEN, NAXOS, CMQV, and NETS), and hence reduces the risk of leaking the static private key.
Generic Attacks on Alternating Unbalanced Feistel Schemes
\begin{abstract}
Generic attacks against classical (balanced) Feistel schemes, unbalanced Feistel schemes with contracting functions and unbalanced Feistel schemes with expanding functions have been studied in \cite{P01}, \cite{Jut}, \cite{PNB06}, \cite{PNB07}. In this paper we study schemes in which contracting and expanding random functions are used alternately. We name these schemes ``Alternating Unbalanced Feistel Schemes''. They allow constructing pseudo-random permutations from $kn$ bits to $kn$ bits, where $k \geq 3$. At each round, we use either a random function from $n$ bits to $(k-1)n$ bits or a random function from $(k-1)n$ bits to $n$ bits. We describe the best generic attacks we have found. We present ``known plaintext attacks'' (KPA) and ``non-adaptive chosen plaintext attacks'' (CPA-1). Let $d$ be the number of rounds. We show that if $d \leq k$, there are CPA-1 with 2 messages and KPA with $m \simeq 2^{\frac{(d-1)n}{4}}$ messages. For $d \geq k+1$ we have to distinguish between $k$ even and $k$ odd. For $k$ even, we have $m=2$ in CPA-1 and $m \simeq 2^{\frac{kn}{4}}$ in KPA. When $k$ is odd, we show that there exist CPA-1 for $d \leq 2k-1$ and KPA for $d \leq 2k+3$ with fewer than $2^{kn}$ messages and computations. Beyond these values, we give KPA against generators of permutations.
\end{abstract}
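For concreteness, here is one way to instantiate an alternating unbalanced Feistel permutation on $k$ branches of $n$ bits in Python, with a hash standing in for the random round functions. The branch rotation and XOR placement are illustrative conventions, not necessarily the exact scheme analysed above.

```python
import hashlib

def prf(key, data, out_bytes):
    """Toy PRF standing in for a random round function (sketch only)."""
    out = hashlib.sha256(key + data).digest()
    while len(out) < out_bytes:
        out += hashlib.sha256(out).digest()
    return out[:out_bytes]

def alternating_feistel(block, k, n_bytes, round_keys):
    """Alternating unbalanced Feistel permutation on k branches of
    n_bytes bytes: even rounds use a contracting function
    ((k-1)n -> n bits), odd rounds an expanding one (n -> (k-1)n bits).
    """
    x = [block[i * n_bytes:(i + 1) * n_bytes] for i in range(k)]
    for r, key in enumerate(round_keys):
        if r % 2 == 0:  # contracting round: fold k-1 branches into one
            f = prf(key, b"".join(x[1:]), n_bytes)
            x = x[1:] + [bytes(a ^ b for a, b in zip(x[0], f))]
        else:           # expanding round: spread one branch over k-1
            g = prf(key, x[0], (k - 1) * n_bytes)
            rest = [bytes(a ^ b for a, b in
                          zip(x[i + 1], g[i * n_bytes:(i + 1) * n_bytes]))
                    for i in range(k - 1)]
            x = rest + [x[0]]
    return b"".join(x)
```

Both round types are invertible (the untouched branches let the round function be recomputed), so the composition is a permutation of the $kn$-bit block.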
On Privacy Losses in the Trusted Agent Model (Abstract)
Tamper-proof devices are quite powerful. They typically make security applications simpler (provided that the tamper-proof assumption is not violated). For applications requiring privacy, we observe that some properties may become harder (if achievable at all) when devices are maliciously used. We take the examples of deniability, receipt-freeness, and anonymity.
We formalize the trusted agent model, which assumes tamper-proof hardware in a way that captures the notion of programmable secure hardware. This model defines a functionality relative to which deniability requires provers to use tamper-proof hardware. Otherwise, any asymmetric situation in which malicious verifiers have more powerful tamper-proof devices than honest ones makes deniability impossible.
We conclude by observing that the ability to put boundaries in computing devices prevents full control over how private information spreads: the concept of sealing a device is in some sense incompatible with certain privacy notions.
Efficient Public Key Encryption Based on Ideal Lattices
The potential high efficiency of public-key encryption based on
structured lattices was first indicated by the NTRU cryptosystem,
which was proposed about 10 years ago. Unfortunately, the security of
NTRU is only heuristic. Thus, it remained an important research challenge to construct an efficient encryption scheme based on structured lattices which admits a proof of security relative to a well established cryptographic assumption. We make progress in addressing the above challenge. We show how to construct a CPA-secure public-key encryption scheme with security provably based on the worst case hardness of the approximate Shortest Vector Problem in structured ideal lattices. Under the assumption that the latter is exponentially hard to solve even with a quantum computer, our scheme resists any subexponential attack and offers (quasi-)optimal asymptotic performance: if $n$ is the security parameter, both keys are of bit-length $\softO(n)$ and the amortized costs of both encryption and decryption are $\softO(1)$ per message
bit. Our construction adapts the trapdoor one-way function of Gentry,
Peikert and Vaikuntanathan (STOC 2008), based on the Learning With
Errors problem, to structured lattices. Our main technical tools are
an adaptation of Ajtai's trapdoor key generation algorithm
(ICALP 1999) to structured ideal lattices, and a re-interpretation of
Regev's quantum reduction between the Closest Vector Problem and
sampling short lattice vectors. We think these techniques are very
likely to find further applications in the future.
Privacy-aware Attribute-based Encryption with User Accountability
As a new public key primitive, attribute-based encryption (ABE) is envisioned to be a promising tool for implementing fine-grained access control. To further address the concern of user access privacy, privacy-aware ABE schemes have recently been developed to achieve hidden access policies. For the purpose of secure access control, there is, however, still one critical functionality missing in the existing ABE schemes: user accountability. Currently, no ABE scheme can completely prevent the problem of illegal key sharing among users. In this paper, we tackle this problem by first proposing the notion of accountable, anonymous, and ciphertext-policy ABE (CP-A$^3$BE, in short) and then giving a concrete construction. We start by improving the state of the art of anonymous CP-ABE to obtain shorter public parameters and ciphertext length. In the proposed CP-A$^3$BE construction, user accountability can be achieved in the black-box model by embedding additional user-specific information into the attribute private key issued to that user, while still maintaining hidden access policies. The proposed constructions are provably secure.
Short and Stateless Signatures from the RSA Assumption
We present the first signature scheme which is ''short'', stateless and secure under the RSA assumption in the standard model. Prior short, standard-model signatures in the RSA setting required either a strong complexity assumption such as Strong RSA or (recently) that the signer maintain state. A signature in our scheme consists of one element in ZN* and one integer. The public key is also short, requiring only the modulus N, one element of ZN*, one integer, one PRF seed and some short chameleon hash parameters.
To design our signature, we employ the known generic construction of fully-secure signatures from weakly-secure signatures and a chameleon hash. We then introduce a new proof technique for reasoning about weakly-secure signatures. This technique enables the simulator to predict a prefix of the message on which the adversary will forge and to use knowledge of this prefix to embed the challenge. This technique has wider applications beyond RSA.
We also use it to provide an entirely new analysis of the security of the Waters signatures: the only short, stateless signatures known to be secure under the Computational Diffie-Hellman assumption in the standard model.
Leakage-Resilient Signatures
The strongest standard security notion for digital signature schemes is unforgeability under chosen message attacks. In practice, however, this notion can be insufficient due to ``side-channel attacks'' which exploit leakage of information about the secret internal state of the scheme's hardware implementation. In this work we put forward the notion of ``leakage-resilient signatures,'' which strengthens the standard security notion by giving the adversary the additional power to learn a bounded amount of arbitrary information about the secret state that was accessed during every signature generation. This notion naturally implies security against all possible side-channel attacks as long as the amount of information leaked on each invocation is bounded and ``only computation leaks information.''
The main result of this paper is a construction which gives a (tree based, stateful) leakage-resilient signature scheme based on any 3-time signature scheme. The amount of information that our scheme can safely leak per signature generation is $1/3$ of the information the underlying 3-time signature scheme can leak in total. Based on recent works by Alwen, Dodis, Wichs and by Katz we give several efficient instantiations of 3-time signature schemes with the required security properties, hence yielding the first constructions of provably secure leakage-resilient signature schemes.
Enabling Public Verifiability and Data Dynamics for Storage Security
Cloud Computing has been envisioned as the next-generation architecture of IT Enterprise. It moves the application software and databases to centralized large data centers, where the management of the data and services may not be fully trustworthy. This unique paradigm brings about many new security challenges, which have not been well understood. This work studies the problem of ensuring the integrity of data storage in Cloud Computing. In particular, we consider the task of allowing a third party auditor (TPA), on behalf of the cloud client, to verify the integrity of the dynamic data stored in the cloud. The introduction of a TPA eliminates the involvement of the client in auditing whether his data stored in the cloud is indeed intact, which can be important in achieving economies of scale for Cloud Computing. The support for data dynamics via the most general forms of data operation, such as block modification, insertion and deletion, is also a significant step toward practicality, since services in Cloud Computing are not limited to archive or backup data only. While prior works on ensuring remote data integrity often lack support for either public verifiability or dynamic data operations, this paper achieves both. We first identify the difficulties and potential security problems of direct extensions of prior works with fully dynamic data updates and then show how to construct an elegant verification scheme for the seamless integration of these two salient features in our protocol design. In particular, to achieve efficient data dynamics, we improve the Proof of Retrievability model \cite{Shacham:ASIACRYPT:2008} by manipulating the classic Merkle Hash Tree (MHT) construction for block tag authentication. Extensive security and performance analysis shows that the proposed scheme is highly efficient and provably secure.
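The block-tag authentication via a Merkle Hash Tree that the protocol builds on can be sketched as follows (a generic MHT over file blocks, assuming a power-of-two block count for brevity; the paper's scheme authenticates signed block tags at the leaves rather than raw blocks):

```python
import hashlib

def H(data):
    """Leaf/node hash; SHA-256 chosen only for the sketch."""
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Root of a Merkle Hash Tree; the auditor keeps only this value."""
    layer = [H(b) for b in blocks]
    while len(layer) > 1:
        layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def merkle_proof(blocks, idx):
    """Sibling path authenticating blocks[idx] against the root."""
    layer = [H(b) for b in blocks]
    path = []
    while len(layer) > 1:
        sib = idx ^ 1
        path.append((layer[sib], sib < idx))  # (sibling hash, is-left?)
        layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        idx //= 2
    return path

def verify(root, block, path):
    """Recompute the root from a block and its sibling path."""
    node = H(block)
    for sib, sib_is_left in path:
        node = H(sib + node) if sib_is_left else H(node + sib)
    return node == root

blocks = [b"block%d" % i for i in range(8)]
root = merkle_root(blocks)
proof = merkle_proof(blocks, 5)
# verify(root, b"block5", proof) succeeds; any other block fails.
```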
Universally Anonymous IBE based on the Quadratic Residuosity Assumption
We introduce the first universally anonymous, thus key-private, IBE whose security is based on the standard quadratic residuosity assumption. Our scheme is a variant of Cocks IBE (which is not anonymous) and is efficient and highly parallelizable.
Algebraic Side-Channel Attacks
In 2002, algebraic attacks using overdefined systems of equations were proposed as a potentially very powerful cryptanalysis technique against block ciphers. However, although a number of convincing experiments have been performed against certain reduced algorithms, it is not clear whether these attacks can be successfully applied in general and to a large class of ciphers. In this paper, we show that algebraic techniques can be combined with side-channel attacks in a very effective and natural fashion. As an illustration, we apply them to the block cipher PRESENT, which is a stimulating first target due to its simple algebraic structure. The proposed attacks have a number of interesting features: (1) they exploit the information leakages of all the cipher rounds, (2) in common implementation contexts (e.g. assuming a Hamming weight leakage model), they recover the block cipher keys after the observation of a single encryption, (3) these attacks can succeed in an unknown-plaintext/ciphertext adversarial scenario and (4) they directly defeat countermeasures such as Boolean masking. Finally, we argue that algebraic side-channel attacks can take advantage of any kind of physical leakage, leading to a new tradeoff between the robustness and informativeness of the side-channel information extraction.
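To see how a Hamming-weight leakage model helps here, consider PRESENT's 4-bit S-box: observing the weights of the input and output nibbles typically leaves only a handful of candidate values, and such small constraints are what an algebraic solver consumes. A toy sketch:

```python
# PRESENT's 4-bit S-box.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def hamming_weight(x):
    return bin(x).count("1")

def candidates_from_leak(hw_in, hw_out):
    """All 4-bit S-box inputs consistent with the observed Hamming
    weights of the input and output nibbles."""
    return [x for x in range(16)
            if hamming_weight(x) == hw_in
            and hamming_weight(SBOX[x]) == hw_out]

# A leak of (0, 2) pins the input nibble down to the single value 0;
# a leak of (1, 2) leaves the four candidates {1, 2, 4, 8}.
```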
Towards Electrical, Integrated Implementations of SIMPL Systems
This paper discusses the practical implementation of a novel security tool termed SIMPL system,
which was introduced in [1]. SIMPL systems can be regarded as a public key version of physical
unclonable functions (PUFs). Like the latter, a SIMPL system S is physically unique and nonreproducible,
and implements an individual function FS. In contrast to a PUF, however, a SIMPL
system S possesses a publicly known numerical description, which allows its digital simulation and
prediction. At the same time, any such simulation must work at a detectably lower speed than the
real-time behavior of S. As argued in [1], SIMPL systems have certain practicality and security
advantages in comparison to PUFs, certificates of authenticity, physically obfuscated keys, and also
to standard mathematical cryptotechniques.
In [1], definitions, protocols, and optical implementations of SIMPL systems were presented.
This manuscript focuses on concrete electrical, integrated realizations of SIMPL systems, and
proposes two potential candidates: SIMPL systems derived from special SRAM architectures (so-called
“skew designs” of SRAM cells), and implementations based on Cellular Non-Linear Networks
(CNNs).
On the Foundations of Physical Unclonable Functions
We investigate the foundations of Physical Unclonable Functions from several perspectives. Firstly, we discuss formal and conceptual issues in the various current definitions of PUFs. As we argue, they have the effect that many PUF candidates formally meet no existing definition. Next, we present alternative definitions and a new formalism. It avoids asymptotic concepts like polynomial time, but is based on concrete time bounds and on the concept of a security experiment. The formalism splits the notion of a PUF into two new notions, Strong $t$-PUFs and Obfuscating $t$-PUFs.
Then, we provide a comparative analysis between the existing definitions and our new notions, by classifying existing PUF implementations with respect to them. In this process, we use several new and unpublished machine learning results. The outcome of this comparative classification is that our definitions seem to match the current PUF landscape well, perhaps better than previous definitions.
Finally, we analyze the security and practicality features of Strong and Obfuscating $t$-PUFs in concrete applications, obtaining further justification for the split into two notions.
Multi-core Implementation of the Tate Pairing over Supersingular Elliptic Curves
This paper describes the design of a fast multi-core library for the cryptographic Tate pairing over supersingular elliptic curves. For the computation of the reduced modified Tate pairing over $\mathbb{F}_{3^{509}}$, we report calculation times of just $2.94$ ms and $1.87$ ms on the Intel Core2 and Intel Core i7 architectures, respectively. We also try to answer one important design question that arises: how many cores should be utilized for a given application?
Algebraic Attacks specialized to \(\mathbb{F}_2\) (Diplomarbeit)
This thesis studies the ring of Boolean functions. A simple algorithm that performs similarly to the Euclidean algorithm is derived, and its performance for solving multivariate Boolean equation systems is evaluated. Furthermore, the Raddum-Semaev equation-system-solving algorithm is put into a geometric context, and the pseudo-Euclidean algorithm is generalized to multivariate polynomial rings over arbitrary finite fields.
A Collision-resistant Hash Function DIHA2
Abstract. The new hash function DIHA2 (Dynamic Input Hash Algorithm) has a Merkle-Damgard structure and is based on 64-bit computing. It operates on 1024-bit blocks and outputs a 256-bit hash value. For each 64-bit sub-block X[j] (0 ≤ j ≤ 15) of a step, DIHA2 obtains a dynamic mapping value by table lookup (the table being only 256 bytes) and adds it to the operations on the variables a, b, c, d, so as to eliminate differential effects. At the same time, DIHA2 uses three auxiliary register variables r1, r2, r3 to store the mapping value and reload it three steps later, producing an interleaved structure. DIHA2 thereby obtains a stronger avalanche effect than comparable designs and can resist sharp and serious differential attacks.
Universally Composable and Statistically Secure Verifiable Secret Sharing Scheme Based on Pre-Distributed Data
This paper presents a non-interactive verifiable secret sharing scheme (VSS) tolerating a dishonest majority based on data pre-distributed by a trusted authority. As an application of this VSS scheme we present very efficient unconditionally secure multiparty protocols based on pre-distributed data which generalize two-party computations based on linear pre-distributed bit commitments. The main results of this paper are a non-interactive VSS where the amount of data which needs to be pre-distributed to each player depends only on the number of tolerable cheaters, a simplified multiplication protocol for shared values based on pre-distributed random products, and non-interactive zero-knowledge proofs for arbitrary polynomial relations. The security of the schemes is proved using the UC framework.
A Conjecture on Binary String and Its Applications on Constructing Boolean Functions of Optimal Algebraic Immunity
In this paper, we propose a combinatorial conjecture on binary strings. On the premise that our conjecture is correct, we obtain two classes of functions with optimal algebraic immunity. The functions of the first class are also bent; from this fact we conclude that the algebraic immunity of bent functions can take all possible values except one. The second class consists of balanced functions, which have optimal algebraic degree and the best nonlinearity known to date.
Reducing the Ciphertext Size of Dolev-Dwork-Naor like Public Key Cryptosystems
We show a method for compressing the ciphertext and reducing the computational cost of the Dolev-Dwork-Naor cryptosystem and related schemes without changing their other parameters or reducing the original security levels.
Information-Theoretically Secure Oblivious Polynomial Evaluation in the Commodity-Based Model
Oblivious polynomial evaluation (OPE) consists of a two-party
protocol where a sender inputs a polynomial $P$, and a receiver
inputs a single value $i$. At the end of the protocol, the
sender learns nothing and the receiver learns $P(i)$. This
paper deals with the problem of oblivious polynomial evaluation
from an information-theoretic perspective, based on recent
definitions of unconditional security developed by Crépeau et al.
(EUROCRYPT 2006). We propose an information-theoretic model for oblivious polynomial
evaluation relying on pre-distributed data, and prove very general
lower bounds on the size of the pre-distributed data, as well as
the size of the communications in any protocol. It is
demonstrated that these bounds are tight by obtaining a
round-optimal OPE protocol, which meets the lower bounds
simultaneously. Some applications of the proposed model are
provided, such as solutions for the ``Millionaire Problem'' and the
``Oblivious Equality Testing Problem''. We also present a natural generalization to OPE called oblivious linear functional evaluation.
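The commodity-based model behind this abstract can be illustrated with the folklore one-round OPE protocol from pre-distributed data: a trusted initializer hands the sender a random polynomial $S$ and the receiver a random point $d$ together with $S(d)$; the receiver then masks its input $i$ as $t = i - d$ and the sender replies with $f(x) = P(x+t) + S(x)$. This Python sketch only illustrates the model, not the paper's round-optimal protocol, and the field size is an arbitrary choice:

```python
import random

P_MOD = 2**61 - 1  # a prime modulus (illustrative choice)

def poly_eval(coeffs, x, p=P_MOD):
    """Evaluate a polynomial (low-to-high coefficients) via Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def poly_shift(coeffs, t, p=P_MOD):
    """Return the coefficients of P(x + t), by Horner over (x + t)."""
    res = [0]
    for c in reversed(coeffs):
        shifted = [0] + res                      # res(x) * x
        scaled = [(t * a) % p for a in res]      # res(x) * t
        res = [(a + b) % p for a, b in zip(shifted, scaled + [0])]
        res[0] = (res[0] + c) % p
    return res[:len(coeffs)]                     # drop the zero top coefficient

def setup(deg, p=P_MOD):
    """Trusted initializer: pre-distributes correlated randomness."""
    S = [random.randrange(p) for _ in range(deg + 1)]  # random polynomial -> sender
    d = random.randrange(p)                            # random point -> receiver
    return S, (d, poly_eval(S, d, p))

def ope(P_coeffs, i, p=P_MOD):
    """One round of communication; the receiver ends up with P(i)."""
    S, (d, g) = setup(len(P_coeffs) - 1, p)
    t = (i - d) % p                                    # receiver -> sender (hides i)
    f = [(a + b) % p for a, b in
         zip(poly_shift(P_coeffs, t, p), S)]           # sender -> receiver (P masked by S)
    return (poly_eval(f, d, p) - g) % p                # f(d) - S(d) = P(d + t) = P(i)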
Side-channel attacks based on linear approximations
Power analysis attacks against embedded secret key cryptosystems have been widely studied since the seminal 1998 paper of Paul C. Kocher, Joshua Jaffe and Benjamin Jun, which introduced the powerful Differential Power Analysis (DPA). The strength of DPA is such that it became necessary to develop sound and efficient countermeasures. Nowadays embedded cryptographic primitives usually integrate one or several of these countermeasures (e.g. masking techniques, asynchronous designs, balanced dynamic dual-rail gate designs, noise adding, power consumption smoothing, etc.). This document presents new power analysis attacks based on linear approximations of the target cipher. This new type of attack has several advantages compared to classical DPA-like attacks: first, it can use multiple intermediate values per query (i.e. power trace), reducing data complexity to a minimum; secondly, it can be applied to parts of the symmetric cipher that are practically unreachable by DPA-like attacks; and finally, it can be mounted on an unknown cipher implementation.
Last updated: 2012-04-23
Dealer-Free Dynamic Secret Sharing Schemes with Unconditional Security
We propose a dealer-free dynamic secret sharing scheme where the threshold and the secret can be changed multiple times to arbitrary values after the scheme's initialization.
Our motivation is to tackle the following problems. In a threshold scheme, the sensitivity of the secret as well as the number of players may fluctuate due to various reasons, e.g., the structure of the players' organization might be changed. A possible solution to this problem is to modify the threshold and/or change the secret. Moreover, a common problem with almost all secret sharing schemes is that they are ``one-time'', meaning that the secret and shares are known to everyone after secret recovery. This problem could be resolved if the dealer shares various secrets at the beginning, but a better solution is to dynamically generate new secrets in the absence of the dealer.
As our contribution, the well-known re-sharing techniques are analyzed in both the passive and active adversary models. Subsequently, our solution for a dealer-free dynamic secret sharing scheme is provided. Finally, a new secret sharing protocol, as an application of our dynamic scheme, is proposed.
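The re-sharing technique this abstract builds on can be illustrated over Shamir's scheme: each of $t$ old share-holders re-shares their own share under the new threshold, and the new shares are combined with Lagrange coefficients, all without a dealer. A minimal sketch (field size and API are illustrative, not from the paper):

```python
import random

P = 2**61 - 1  # prime field (illustrative)

def share(secret, t, n, p=P):
    """Shamir (t, n) sharing: shares are points on a degree t-1 polynomial."""
    coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]
    return {x: sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p
            for x in range(1, n + 1)}

def lagrange_at_zero(xs, p=P):
    """Lagrange coefficients for interpolating the polynomial at x = 0."""
    coeffs = {}
    for j in xs:
        num = den = 1
        for m in xs:
            if m != j:
                num = num * (-m) % p
                den = den * (j - m) % p
        coeffs[j] = num * pow(den, p - 2, p) % p
    return coeffs

def reconstruct(shares, p=P):
    lam = lagrange_at_zero(list(shares), p)
    return sum(lam[x] * s for x, s in shares.items()) % p

def reshare(shares, new_t, n, p=P):
    """Dealer-free threshold change: t old holders re-share their shares.
    New shares lie on h(x) = sum_j lam_j * g_j(x), with h(0) = secret."""
    lam = lagrange_at_zero(list(shares), p)
    new_shares = {x: 0 for x in range(1, n + 1)}
    for j, s in shares.items():
        sub = share(s, new_t, n, p)          # player j re-shares s_j
        for x in range(1, n + 1):
            new_shares[x] = (new_shares[x] + lam[j] * sub[x]) % p
    return new_shares
```

This passive-adversary version conveys the idea only; the active model analyzed in the paper additionally needs verifiability of the re-shared values.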
Simulation based security in the applied pi calculus
We present a symbolic framework for refinement and composition of security protocols. The framework uses the notion of ideal functionalities. These are abstract systems which are secure by construction and which can be combined into larger systems. They can be separately refined in order to obtain concrete protocols implementing them. Our work builds on ideas from computational models such as the universally composable security and reactive simulatability frameworks. The underlying language we use is the applied pi calculus which is a general language for specifying security protocols. In our framework we can express the different standard flavours of simulation-based security which happen to all coincide. We illustrate our framework on an authentication functionality which can be realized using the Needham-Schroeder-Lowe protocol. For this we need to define an ideal functionality for asymmetric encryption and its realization. We also show a joint state result for this functionality which allows composition (even though the same key material is reused) using a tagging mechanism.
Pseudorandomness Analysis of the Lai-Massey Scheme
At Asiacrypt'99, Vaudenay generalized the structure of the IDEA cipher to a new scheme, which he called the Lai-Massey scheme. It is proved that the 3-round Lai-Massey scheme is sufficient for pseudorandomness and the 4-round Lai-Massey scheme is sufficient for strong pseudorandomness. But the author did not point out whether three and four rounds are also necessary for pseudorandomness and strong pseudorandomness of the Lai-Massey scheme. In this paper we find a two-round pseudorandomness distinguisher and a three-round strong pseudorandomness distinguisher, thus proving that three rounds are necessary for pseudorandomness and four rounds are necessary for strong pseudorandomness.
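For orientation, the round structure under discussion can be sketched in a few lines. This toy Python model (block size, round function F, and key schedule are all illustrative, not from the paper) uses XOR as the group operation and the standard orthomorphism (a, b) -> (b, a XOR b) on the left half, with the orthomorphism omitted in the final round:

```python
K = 16                      # quarter-block size in bits (illustrative)
MASK = (1 << K) - 1
HMASK = (1 << 2 * K) - 1    # each half-block is 2K bits

def sigma(x):
    """Orthomorphism on a 2K-bit half: (a, b) -> (b, a ^ b)."""
    a, b = x >> K, x & MASK
    return (b << K) | (a ^ b)

def inv_sigma(x):
    b, ab = x >> K, x & MASK
    return ((ab ^ b) << K) | b

def F(x, k):
    """Toy keyed round function (illustrative only, not secure)."""
    return (((x ^ k) * 0x9E3779B9) ^ (x >> 5)) & HMASK

def encrypt(l, r, keys):
    for k in keys[:-1]:
        t = F(l ^ r, k)                 # both halves absorb the same F output
        l, r = sigma(l ^ t), r ^ t
    t = F(l ^ r, keys[-1])              # final round omits the orthomorphism
    return l ^ t, r ^ t

def decrypt(l, r, keys):
    # l ^ r is invariant under the XOR of t into both halves, so t is
    # recomputable from the post-round state.
    t = F(l ^ r, keys[-1])
    l, r = l ^ t, r ^ t
    for k in reversed(keys[:-1]):
        u = inv_sigma(l)
        t = F(u ^ r, k)
        l, r = u ^ t, r ^ t
    return l, r
```

The invariance of l XOR r across the core step is precisely why the orthomorphism is needed; without sigma, the scheme is trivially distinguishable.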
Revisiting the Indifferentiability of PGV Hash Functions
In this paper, we first point out some flaws in the existing indifferentiability simulations of the pf-MD and NMAC constructions, and provide new differentiability attacks on hash functions based on these schemes. After that, we reconsider the indifferentiability of the 20 collision-resistant PGV hash functions padded under the pf-MD, NMAC/HMAC and chop-MD constructions. Moreover, we show that 4 of the 16 PGV schemes proven indifferentiable by Chang et al. are in fact differentiable from a random oracle under the pf-MD construction. Finally, new indifferentiability simulations are provided for the 20 collision-resistant PGV schemes. The simulations show that the 20 collision-resistant PGV hash functions, when implemented with the NMAC/HMAC and chop-MD constructions, are indifferentiable from a random oracle. Our result implies that the same compression functions under different MD variants may have the same security bound with respect to collision resistance, yet quite different indifferentiability guarantees.
Proposal of PPS Multivariate Public Key Cryptosystems
In this paper we propose a new MPKC, called PPS, based on (i) the 2-layer nonlinear piece in hand method, (ii) PMI, and (iii) STS. The PPS is a specific MPKC obtained by applying the 2-layer nonlinear piece in hand method to STS, in the manner that the rank and randomness of the lower rank steps in the original secret polynomial vector of STS are enhanced by adding a perturbation polynomial vector, and moreover PMI is used in the auxiliary part. PPS overcomes the drawbacks of the three schemes by combining their advantages. Thus, PPS is expected to be immune simultaneously from algebraic attacks, such as Groebner basis attacks, from rank attacks, and from differential attacks.
General Error Decodable Secret Sharing Scheme and Its Application
Consider a model of secret sharing schemes with cheaters. We say that
a secret sharing scheme is error decodable if we can still recover the secret $s$ correctly from a noisy share vector $({\tt share}_1', \cdots, {\tt share}_n')$. In this paper, we first prove that there exists an error decodable secret sharing scheme if and only if the adversary structure $\Gamma$ satisfies a certain condition called $Q^3$. Next, for any $\Gamma$ which satisfies $Q^3$, we show an error decodable secret sharing scheme such that the decoding algorithm runs in time polynomial in $|S|$ and the size of a linear secret sharing scheme (monotone span program) which realizes $\Gamma$. We finally show an application to $1$-round Perfectly Secure Message Transmission schemes (PSMT).
Computationally Secure Two-Round Authenticated Message Exchange
We study two-round authenticated message exchange protocols consisting of a single request and a single response, with the realistic assumption that the responder is long-lived and has bounded memory. We first argue that such protocols necessarily need elements such as timestamps to be secure. We then present such a protocol and prove that it is correct and computationally secure.
In our model, the adversary provides the initiator and the responder with the payload of their messages, which means our protocol can be used to implement securely any service based on authenticated message exchange. We even allow the adversary to read and reset the memory of the principals and to use, with very few restrictions, the private keys of the principals for signing the payloads or parts thereof. The latter corresponds to situations in which the keys are not only used by our protocol. We use timestamps to secure our protocol, but only assume that each principal has access to a local clock.
Security of Cyclic Double Block Length Hash Functions including Abreast-DM
We provide the first proof of security for Abreast-DM, one of the oldest and most well-known constructions for turning a block cipher with $n$-bit block length and $2n$-bit key length into a $2n$-bit cryptographic hash function. In particular, we prove that when Abreast-DM is instantiated with AES-256, i.e. a block cipher with 128-bit block length and 256-bit key length, any adversary that asks fewer than $2^{124.42}$ queries cannot find a collision with success probability greater than 1/2. Surprisingly, this roughly 15-year-old construction is one of the few with the desirable feature of a near-optimal collision resistance guarantee.
We generalize our techniques used in the proof of Abreast-DM to a huge class of double block length (DBL) hash functions that we call Cyclic-DM. Using this generalized theorem we are able to derive several DBL constructions that lead to compression functions with an even higher security guarantee and better efficiency than Abreast-DM. Furthermore, we give DBL constructions that have the highest security guarantee of all DBL compression functions currently known in the literature. We also provide an analysis of preimage resistance for Cyclic-DM compression functions. Note that this work has already been presented at Dagstuhl '09.
A Study on RAM Requirements of Various SHA-3 Candidates on Low-cost 8-bit CPUs
In this paper, we compare the implementation costs of various SHA-3 candidates on low-cost 8-bit CPUs by estimating their RAM/ROM requirements. As a first step toward this kind of research, our comparison relies on reasonable estimates of the RAM/ROM requirements, which can be made without implementation.
Last updated: 2009-08-10
Differential Path for SHA-1 with complexity $O(2^{52})$
Although SHA-1 has been theoretically broken for some time now, the task
of finding a practical collision is yet to be completed. Using some new approaches to differential analysis, we were able to find a new differential path which can be used in a collision attack with complexity of $O(2^{52})$. This is currently the lowest complexity attack on SHA-1.
FACTORIZATION WITH GENUS 2 CURVES
The elliptic curve method (ECM) is one of the best factorization methods available. It is possible to use hyperelliptic curves instead of elliptic curves but it is in theory slower. We use special hyperelliptic curves and Kummer surfaces to reduce the complexity of the algorithm. Our implementation GMP-HECM is faster than GMP-ECM for factoring big numbers.
FORMAT CONTROLLING ENCRYPTION USING DATATYPE PRESERVING ENCRYPTION
Datatype-Preserving Encryption (DTP) enables encryption of values within a certain character set into ciphertext restricted to the same set, while still keeping data length. This is in contrast to conventional block cipher modes which produce binary data, i.e. each encrypted character may have an arbitrary value, possibly outside the original character set, often accompanied by a length expansion caused by padding.
Format-Controlling Encryption (FCE) is an extension to DTP, for which data length is still kept, but the output character range is allowed to be larger, though not covering the range of all possible values (i.e. binary data). With FCE it is possible to handle certain DTP limitations, like limited key rotation and integrity support.
Multiple Linear Cryptanalysis of Reduced-Round SMS4 Block Cipher
SMS4 is a 32-round unbalanced Feistel block cipher with its block
size and key size being 128 bits. As a fundamental block cipher used
in the WAPI standard, the Chinese national standard for WLAN, it has
been widely implemented in Chinese WLAN industry. In this paper, we
present a modified branch-and-bound algorithm which can be used for
searching multiple linear characteristics for SMS4-like unbalanced
Feistel block ciphers. Furthermore, we find a series of 5-round
iterative linear characteristics of SMS4 when applying the modified
algorithm in SMS4. Then based on each 5-round iterative linear
characteristic mentioned above, an 18-round linear characteristic of
SMS4 can be constructed, thus leading to a list of 18-round linear
characteristics of SMS4. According to the framework of Biryukov et
al. from Crypto 2004, a key recovery attack can be mounted on
22-round SMS4 by utilizing the above multiple linear
characteristics. As a matter of fact, our result has much lower data
complexity than the previously best known cryptanalytic result on
22-round SMS4, which is also the previously best known result on
SMS4.
SIMPL Systems: On a Public Key Variant of Physical Unclonable Functions
This paper theoretically discusses a novel security tool termed {\it SIMPL system}, which can be regarded as a public key version of physical unclonable functions (PUFs). Like the latter, a SIMPL system $S$ is physically unique and non-reproducible, and implements an individual function $F_S$. In contrast to a PUF, however, a SIMPL system $S$ possesses a publicly known numerical description $D(S)$, which allows its digital simulation and prediction. At the same time, it is required that any digital simulation of a SIMPL system $S$ must work at a detectably lower speed than its real-time behavior.
In other words, the holder of a SIMPL system $S$ can evaluate a publicly known, publicly computable function $F_S$ faster than anyone else. This feature, so we argue in this paper, allows a number of improved practicality and security features. Once implemented successfully, SIMPL systems would have specific advantages over PUFs, certificates of authenticity, physically obfuscated keys, and also over standard mathematical cryptotechniques.
Improvement of One Quantum Encryption Scheme
Zhou et al. proposed a quantum encryption scheme based on quantum computation in 2006. Each qubit of the ciphertext is constrained to two pairs of conjugate states, so its implementation is feasible with existing technology. But it is inefficient, since it entails six key bits to encrypt one message bit, and the resulting ciphertext for one message bit consists of three qubits.
In addition, its security cannot be directly reduced to the well-known BB84 protocol. In this paper, we revisit the scheme using the technique developed in the BB84 protocol. The new scheme entails only two key bits to encrypt one message bit, and the resulting ciphertext is composed of just two qubits. It saves about half the cost without loss of security. Moreover, the encryption scheme is probabilistic rather than deterministic.
Formally and Practically Relating the CK, CK-HMQV, and eCK Security Models for Authenticated Key Exchange
Many recent key exchange (KE) protocols have been proven
secure in the CK, CK-HMQV, or eCK security models. The exact relation
between these security models, and hence between the security guarantees
provided by the protocols, is unclear. First, we show that the CK,
CK-HMQV, and eCK security models are formally incomparable. Second, we
show that these models are also practically incomparable, by providing
for each model attacks that are not considered by the other models. Our
analysis enables us to find several previously unreported flaws in
existing protocol security proofs. We identify the causes of
these flaws and show how they can be avoided.
Sparse Boolean equations and circuit lattices
A system of Boolean equations is called sparse if each equation depends on a small number of variables. Efficiently finding solutions to such a system is an underlying hard problem in the cryptanalysis of modern ciphers. In this paper we study new properties of the Agreeing Algorithm, which was earlier designed to solve such equations. We then show that the mathematical description of the Algorithm translates directly into the language of electric wires and switches. Applications to DES and Triple DES are discussed. The new approach, at least theoretically, allows faster key rejection in brute-force search than Copacobana.
Format-Preserving Encryption
Format-preserving encryption (FPE) encrypts a plaintext of some specified format into a ciphertext of identical format, for example, encrypting a valid credit-card number into a valid credit-card number. The problem has been known for some time, but it has lacked a fully general and rigorous treatment. We provide one, starting off by formally defining FPE and security goals for it. We investigate the natural approach for achieving FPE on complex domains, the “rank-then-encipher” approach, and explore what it can and cannot do. We describe two flavors of unbalanced Feistel networks that can be used for achieving FPE, and we prove new security results for each. We revisit the cycle-walking approach for enciphering on a non-sparse subset of an encipherable domain, showing that the timing information that may be divulged by cycle walking is not a damaging thing to leak.
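The cycle-walking idea revisited above is easy to state: apply a permutation on a superset of the target format repeatedly until the output falls back into the format. A minimal sketch, in which a toy affine permutation stands in for a real block cipher (it is not secure, only a placeholder):

```python
def cycle_walk(perm, in_domain, x, max_steps=100_000):
    """Apply the permutation until the image lands in the target subset.
    Terminates for x in the subset, since x lies on its own cycle."""
    y = perm(x)
    while not in_domain(y):
        y = perm(y)
        max_steps -= 1
        if max_steps == 0:
            raise RuntimeError("gave up: subset too sparse")
    return y

def make_affine_perm(k, a=0x9E37, b=0x1234):
    """Toy permutation pair (enc, dec) on {0, ..., 2^k - 1}; a must be odd."""
    m = 1 << k
    a_inv = pow(a, -1, m)
    return (lambda x: (a * x + b) % m,
            lambda y: (y - b) * a_inv % m)
```

Enciphering 4-digit decimal values inside a 14-bit superset, walking forward with `enc` and backward with `dec` are mutual inverses on the subset, because cycle-walking restricts the permutation's cycles to the subset. The timing side channel discussed in the abstract is visible here: the number of loop iterations depends on the data.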
Last updated: 2009-10-05
Modifications in the Design of Trivium to Increase its Security Level
The inner state of a stream cipher should be as large as necessary but at the same time as small as possible. Trivium, a hardware-oriented stream cipher, has been selected for the final portfolio of the eSTREAM project. It offers a security level of 80 bits while it has 288 internal state bits. Owing to its simple algebraic structure, it has been shown experimentally that Trivium can provide only a marginal security level of 80 bits. This article presents some modified versions of Trivium to increase its security level beyond 80 bits. Our objective is to give a better security level with the same number of internal state bits, without changing much the elegant and simple design philosophy of Trivium. The focus is to make its algebraic structure intricate enough to resist the algebraic attack with a guess-and-determine approach, which can recover its secret internal state bits. We propose two modifications that increase its security level without any increase in the number of AND gates. Maximov and Biryukov have proposed a tweaked version of Trivium (Trivium/128) (11), with additional AND gates, to increase the security level to 128 bits. In this article, two further modifications with additional product terms, proven to have a better security margin than Trivium/128, are also presented.
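For orientation, the keystream generation that such modifications target follows the well-known Trivium specification: a 288-bit state split into three shift registers, three AND gates, and 4 x 288 warm-up rounds. A direct bit-level Python sketch follows; bit-ordering conventions for loading the key and IV vary between implementations, so this is illustrative and not checked against official test vectors:

```python
def trivium_keystream(key, iv, nbits):
    """Trivium keystream sketch; key and iv are lists of 80 bits each."""
    # State s1..s288 held as a Python list (index 0 = s1).
    s = [0] * 288
    s[0:80] = key                 # key into s1..s80
    s[93:173] = iv                # IV into s94..s173
    s[285:288] = [1, 1, 1]        # s286, s287, s288 set to 1
    out = []
    for step in range(4 * 288 + nbits):
        t1 = s[65] ^ s[92]        # s66 + s93
        t2 = s[161] ^ s[176]      # s162 + s177
        t3 = s[242] ^ s[287]      # s243 + s288
        z = t1 ^ t2 ^ t3          # keystream bit
        t1 ^= (s[90] & s[91]) ^ s[170]   # the three AND gates the
        t2 ^= (s[174] & s[175]) ^ s[263] # proposed tweaks add product
        t3 ^= (s[285] & s[286]) ^ s[68]  # terms to
        s[0:93] = [t3] + s[0:92]         # rotate the three registers
        s[93:177] = [t1] + s[93:176]
        s[177:288] = [t2] + s[177:287]
        if step >= 4 * 288:              # discard warm-up output
            out.append(z)
    return out
```

The sparsity visible here, one AND gate per register feeding a long linear pipeline, is exactly what the guess-and-determine algebraic attacks mentioned above exploit.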
Symbolic Encryption with Pseudorandom Keys
We give an efficient decision procedure that, on input two (acyclic)
cryptographic expressions making arbitrary use of an encryption scheme
and a (length doubling) pseudorandom generator, determines (in polynomial time) if the two expressions produce computationally indistinguishable distributions for any pseudorandom generator and encryption scheme satisfying the standard security notions of pseudorandomness and indistinguishability under chosen plaintext attack.
The procedure works by mapping each expression to a symbolic pattern that captures, in a fully abstract way, the information revealed by the expression to a computationally bounded observer. We then prove that if any two (possibly cyclic) expressions are mapped to the same
pattern, then the associated distributions are indistinguishable.
At the same time, if the expressions are mapped to different symbolic
patterns and do not contain encryption cycles, there are secure
pseudorandom generators and encryption schemes for which the two
distributions can be distinguished with overwhelming advantage.
Cryptanalysis of the MST_3 Public Key Cryptosystem
In this paper we describe a cryptanalysis of MST_3, a public key
cryptosystem based on non-commutative groups recently proposed by
Lempken, Magliveras, van Trung and Wei.
On the Necessary and Sufficient Assumptions for UC Computation
We study the necessary and sufficient assumptions for universally composable (UC) computation, both in terms of setup and computational assumptions. We look at the common reference string model, the common random string model and the key-registration authority (KRA) model, and provide new results for all of them. Perhaps most interestingly we show that:
- For even the minimal meaningful KRA, where we only assume that the secret key is a value which is hard to compute from the public key, one can UC securely compute any poly-time functionality if there exists a passively secure oblivious-transfer protocol for the stand-alone model. Since a KRA where the secret keys can be computed from the public keys is useless, and some setup assumption is needed for UC secure computation, this establishes the best we could hope for in the KRA model: any non-trivial KRA is sufficient for UC computation.
- We show that in the KRA model one-way functions are sufficient for UC commitment and UC zero-knowledge. These are the first examples of UC secure protocols for non-trivial tasks which do not assume the existence of public-key primitives. In particular, the protocols show that non-trivial UC computation is possible in Minicrypt.
On-Chip Electric Waves: An Analog Circuit Approach to Physical Uncloneable Functions
This paper proposes the use of Cellular Non-Linear Networks (CNNs) as physical uncloneable functions (PUFs). We argue that analog circuits offer higher security than existing digital PUFs and that the CNN paradigm allows us to build large, unclonable, and scalable analog PUFs, which still show a stable and repeatable input--output behavior.
CNNs are dynamical arrays of locally interconnected cells, with a cell dynamics that depends upon the interconnection strengths to their neighbors. They can be designed to evolve in time according to partial differential equations.
If this equation describes a physical phenomenon, then the CNN can simulate a complex physical system on-chip. This can be exploited to create electrical PUFs with high relevant structural information content.
To illustrate our paradigm at work, we design a circuit that directly emulates nonlinear wave propagation phenomena in a random medium. It effectively translates the complexity of optical PUFs into electrical circuits.
Cryptanalysis of the Birational Permutation Signature Scheme over a Non-commutative Ring
In 2008, Hashimoto and Sakurai proposed a new efficient
signature scheme, which is a non-commutative ring version of Shamir’s
birational permutation signature scheme. Shamir’s scheme is a generalization of the OSS (Ong-Schnorr-Shamir) signature scheme and was broken by Coppersmith et al. using its linearity and commutativity.
The HS (Hashimoto-Sakurai) scheme is expected to be secure against the attack of Coppersmith et al., since it is based on a non-commutative structure. In this paper, we propose an attack against the HS scheme. Our attack is practical under the condition that its step size and the number of steps are small. More precisely, we first show that the HS scheme is essentially a commutative scheme, that is, the HS scheme can be reduced to a commutative birational permutation signature scheme. Then we apply a Patarin-like attack against the commutative birational permutation signature scheme. We discuss the efficiency of our attack using some experimental results. Furthermore, the commutative scheme obtained from the HS scheme is a Rainbow-type signature scheme. We also discuss the security of the Rainbow-type signature scheme, and propose an efficient attack against some class of Rainbow-type signature schemes.
Tardos Fingerprinting Codes in the Combined Digit Model
We introduce a new attack model for collusion secure codes, and analyze the collusion resistance of two versions of the Tardos code in this model, both for binary and non-binary alphabets. The model allows us to consider signal processing and averaging attacks via a set of symbol detection error rates. The false positive rate is represented as a single number; the false negative rate is a function of the false positive rate and of the number of symbols mixed by the colluders.
We study two versions of the q-ary Tardos code in which the
accusation method has been modified so as to allow for the detection
of multiple symbols in the same content segment. The collusion
resilience of both variants turns out to be comparable. For realistic
attacker strengths the increase in code length is modest, demonstrating
that the modified Tardos code is effective in the new model.
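As background for the modified accusation method, the classical binary Tardos construction can be sketched: a per-position bias p_i drawn from an arcsine density cut off at [t, 1-t], user codewords with bits Bernoulli(p_i), and a score that rewards agreement on detected '1' symbols. This is only the textbook binary scheme, not the q-ary variants or the multi-symbol-detection scores studied in the paper:

```python
import math
import random

def tardos_code(n_users, m, t=0.01, rng=random):
    """Binary Tardos code: biases p_i from the arcsine density on [t, 1-t],
    user codeword bits drawn Bernoulli(p_i) independently."""
    ps = []
    t_prime = math.asin(math.sqrt(t))
    for _ in range(m):
        # inverse-transform sampling of the cut-off arcsine density
        r = rng.uniform(t_prime, math.pi / 2 - t_prime)
        ps.append(math.sin(r) ** 2)
    code = [[1 if rng.random() < p else 0 for p in ps]
            for _ in range(n_users)]
    return code, ps

def accusation_score(user_bits, forgery, ps):
    """Original (asymmetric) Tardos score: only positions where the
    forgery shows a 1 contribute; matches add, mismatches subtract."""
    score = 0.0
    for x, y, p in zip(user_bits, forgery, ps):
        if y == 1:
            if x == 1:
                score += math.sqrt((1 - p) / p)
            else:
                score -= math.sqrt(p / (1 - p))
    return score
```

A user is accused when the score exceeds a threshold of order sqrt(m); the paper's combined digit model generalizes the y-side of this score to sets of detected symbols with per-symbol error rates.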
Faster Pairings on Special Weierstrass Curves
This paper presents efficient formulas for computing cryptographic pairings on the curve y^2 = c*x^3 + 1 over fields of large characteristic. We provide examples of pairing-friendly elliptic curves of this form which are of interest for efficient pairing implementations.
Examples of differential multicollisions for 13 and 14 rounds of AES-256
Here we present practical differential $q$-multicollisions for AES-256, which can be tested on any implementation of AES-256. In our paper "Distinguisher and Related-Key Attack on the Full AES-256" $q$-multicollisions are found with complexity $q\cdot 2^{67}$. We relax conditions on the plaintext difference $\Delta_P$, allowing some bytes to vary, and find multicollisions for 13- and 14-round AES with complexity $q\cdot 2^{37}$.
Even with the relaxation there is still a large complexity gap between our algorithm and the lower bound that we have proved in Lemma 1. Moreover we believe that in practice finding even
two fixed-difference collisions for a good cipher would be very challenging.
Distinguisher and Related-Key Attack on the Full AES-256 (Extended Version)
In this paper we construct a chosen-key distinguisher and a
related-key attack on the full 256-bit key AES. We define a
notion of {\em differential $q$-multicollision} and show that for
AES-256 $q$-multicollisions can be constructed in time $q\cdot
2^{67}$ and with negligible memory, while we prove that the same
task for an ideal cipher of the same block size would require at
least $O(q\cdot 2^{\frac{q-1}{q+1}128})$ time. Using a similar
approach and with the same complexity we can also construct
$q$-pseudo-collisions for AES-256 in Davies-Meyer hashing mode, a
scheme which is provably secure in the ideal-cipher model. We have
also computed partial $q$-multicollisions in time $q\cdot 2^{37}$
on a PC to verify our results. These results show that AES-256
cannot model an ideal cipher in theoretical constructions.
Finally, we extend our results
to find the first publicly known attack on the full 14-round
AES-256: a related-key distinguisher which works for one out of
every $2^{35}$ keys with $2^{120}$ data and time complexity and
negligible memory. This distinguisher is translated into a
key-recovery
attack with total complexity of $2^{131}$ time and $2^{65}$ memory.
Group Testing and Batch Verification
We observe that finding invalid signatures in batches of signatures that fail batch verification is an instance of the classical group testing problem. We present and compare new sequential and parallel algorithms for finding invalid signatures based on group testing algorithms. Of the five new algorithms, three show improved performance for many parameter choices, and the performance gains are especially notable when multiple processors are available.
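The reduction described above, treating invalid signatures as "defectives" in a classical group test, can be illustrated with the simplest sequential strategy, halving binary splitting. The authors' five algorithms are more refined; this is only the textbook baseline, with a stand-in predicate in place of a real batch verifier:

```python
def find_invalid(items, batch_valid):
    """Binary-splitting group test. batch_valid(subset) returns True iff
    every item in the subset verifies; returns the list of invalid items."""
    if batch_valid(items):
        return []                    # whole batch verifies: no defectives
    if len(items) == 1:
        return list(items)           # a failing singleton is invalid
    mid = len(items) // 2
    return (find_invalid(items[:mid], batch_valid) +
            find_invalid(items[mid:], batch_valid))
```

When few signatures are invalid, this uses far fewer batch verifications than testing each signature individually, which is the source of the performance gains the abstract reports; the parallel variants split the recursion across processors.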
Protecting the NOEKEON Cipher Against SCARE Attacks in FPGAs by using Dynamic Implementations
Protecting an implementation against Side Channel Analysis for Reverse Engineering (SCARE) attacks is a great challenge, and we address it by presenting a first proof of concept. White-box cryptography has been developed to protect programs against an adversary who has full access to their software implementation. It has also been suggested as a countermeasure against side channel attacks, and we examine these techniques here in the wider perspective of SCARE. We consider that the adversary has access to the cryptographic device only through its side channels, and his goal is to recover the specifications of the algorithm. In this work, we focus on FPGA (Field-Programmable Gate Array) technologies and examine how to thwart SCARE attacks by implementing a block cipher following white-box techniques. The proposed principle is based on dynamically changing the implementations. It is illustrated by an example on the Noekeon cipher, and feasibility in different FPGAs is studied.