Redundant $\tau$-adic Expansions II: Non-Optimality and Chaotic Behaviour
When computing scalar multiples on Koblitz curves, the Frobenius endomorphism can be used to replace the usual doublings on the curve. This involves digital expansions of the scalar to the complex base $\tau=(\pm 1\pm \sqrt{-7})/2$ instead of binary expansions. As in the binary case, this method can be sped up by enlarging the set of valid digits at the cost of precomputing some points on the curve. In the binary case, it is known that a simple syntactical condition (the so-called $w$-NAF-condition) on the expansion ensures that the number of curve additions is minimised. The purpose of this paper is to show that this is no longer true for the base $\tau$ and $w\in\{4,5,6\}$. Even worse, it is shown that there is no longer an online algorithm to compute an optimal expansion from the digits of some standard expansion from the least to the most significant digit, which can be interpreted as chaotic behaviour. The proofs heavily depend on symbolic computations involving transducer automata.
Computational soundness of symbolic zero-knowledge proofs
The abstraction of cryptographic operations by term algebras, called
Dolev-Yao models, is essential in almost all tool-supported methods
for proving security protocols. Recently significant progress was made
in proving that Dolev-Yao models offering the core cryptographic
operations such as encryption and digital signatures can be sound with
respect to actual cryptographic realizations and security
definitions. Recent work, however, has started to extend Dolev-Yao
models to more sophisticated operations with unique security
features. Zero-knowledge proofs arguably constitute the most amazing
such extension.
In this paper, we first identify which additional properties a
cryptographic (non-interactive) zero-knowledge proof needs to fulfill
in order to serve as a computationally sound implementation of
symbolic (Dolev-Yao style) zero-knowledge proofs; this leads to the
novel definition of a symbolically-sound zero-knowledge proof
system. We prove that even in the presence of arbitrary active
adversaries, such proof systems constitute computationally sound
implementations of symbolic zero-knowledge proofs. This yields the
first computational soundness result for symbolic zero-knowledge
proofs and the first such result against fully active adversaries of
Dolev-Yao models that go beyond the core cryptographic operations.
Last updated: 2008-09-23
Impossible Differential Cryptanalysis of CLEFIA
This paper discusses impossible differential cryptanalysis of CLEFIA, which was proposed at FSE 2007. New 9-round impossible differentials, different from the previous ones, are discovered. These differentials are then applied to attacks on reduced-round CLEFIA. For the 128-bit case, an impossible differential attack applies to 12-round CLEFIA, requiring 2^110.93 chosen plaintexts with time complexity 2^111. For the 192/256-bit cases, an impossible differential attack applies to 13-round CLEFIA with 2^111.72 chosen plaintexts and time complexity 2^158. For the 256-bit case, attacking 14-round CLEFIA needs 2^112.3 chosen plaintexts and no more than 2^199 encryptions, and attacking 15-round CLEFIA needs 2^113 chosen plaintexts with time complexity less than 2^248 encryptions.
Robust Combiners for Software Hardening
All practical software hardening schemes, like practical encryption schemes such as AES, have not been proven secure. One technique to enhance security is {\em robust combiners}.
An algorithm $C$ is a robust combiner for specification $S$, e.g., privacy, if for any two implementations $X$ and $Y$ of a cryptographic scheme, the combined scheme $C(X,Y)$ satisfies $S$ provided {\em either} $X$ {\em or} $Y$ satisfies $S$.
We present the first robust combiners for software hardening, specifically for obfuscation \cite{barak:obfuscation} and for White-Box Remote Program Execution (WBRPE) \cite{herzberg2009towards}. WBRPE and obfuscators are software hardening techniques employed to protect the execution of programs in remote, hostile environments. WBRPE provides a software-only platform allowing secure execution of programs on untrusted, remote hosts, ensuring privacy of the program and of its inputs, as well as privacy and integrity of the result of the computation. Obfuscators protect the code (and secret data) of the program that is sent to the remote host for execution.
Robust combiners are particularly important for software hardening, where there is no standard whose security is established. In addition, robust combiners for software hardening are interesting from a software engineering perspective, since they introduce new techniques of reductions and code manipulation.
Toy Factoring by Newton's Method
A theoretical approach for integer factoring using complex analysis is
to apply Newton's method to an analytic function whose zeros, within
a certain strip, correspond to factors of a given integer. A few
successful toy experiments of factoring small numbers with this
approach, implemented in a naive manner that amounts to complexified
trial division, show that, at least for small numbers, Newton's method
finds the requisite zeros, at least some of the time (factors of nearby
numbers were also sometimes found). Nevertheless, some heuristic
arguments predict that this approach will be infeasible for factoring
numbers of the size currently used in cryptography.
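A toy version of this idea (our own sketch, not the authors' implementation) applies Newton's method to $f(x)=\sin(\pi N/x)$, whose positive real zeros $x=N/k$ for integer $k$ include the divisors of $N$:

```python
import math

def newton_divisor(N, x0, iters=50):
    """Toy illustration: run Newton's method on f(x) = sin(pi*N/x),
    whose zeros x = N/k (k a positive integer) include the divisors of N.
    Returns a nontrivial divisor if the iteration lands on one."""
    x = x0
    for _ in range(iters):
        f = math.sin(math.pi * N / x)
        df = -math.cos(math.pi * N / x) * math.pi * N / (x * x)
        if df == 0:
            break
        x -= f / df              # Newton step
    d = round(x)
    return d if d > 1 and N % d == 0 else None
```

Starting near a divisor, the iteration converges to it; starting elsewhere it typically lands on a non-integer zero $N/k$, which is one way the heuristic infeasibility arguments manifest.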
Redundant $\tau$-adic Expansions I: Non-Adjacent Digit Sets and their Applications to Scalar Multiplication
This paper investigates some properties of $\tau$-adic expansions of
scalars. Such expansions are widely used in the design of scalar
multiplication algorithms on Koblitz Curves, but at the same
time they are much less understood than their binary counterparts.
Solinas introduced the width-$w$ $\tau$-adic non-adjacent form for
use with Koblitz curves. This is an expansion of integers
$z=\sum_{i=0}^\ell z_i\tau^i$, where $\tau$ is
a quadratic integer depending on the curve, such that $z_i\ne 0$
implies $z_{w+i-1}=\ldots=z_{i+1}=0$,
like the sliding window binary recodings of integers.
It uses a redundant digit set, i.e., an expansion of an integer using
this digit set need not be uniquely determined if the
syntactical constraints are not enforced.
We show that the digit sets described by Solinas, formed by elements
of minimal norm in their residue classes, are uniquely determined.
Apart from this digit set of minimal norm representatives, other digit sets
can be chosen such that all integers can be represented by a width-$w$
non-adjacent form using those digits. We describe an algorithm
recognizing admissible digit sets.
Results by Solinas and by Blake, Murty, and Xu are generalized.
In particular, we introduce two new useful families of digit sets.
The first set is syntactically defined.
As a consequence of its adoption we can
also present improved and streamlined algorithms to perform the
precomputations in $\tau$-adic scalar multiplication methods.
The latter use an improvement of the computation of sums and differences of
points on elliptic curves with mixed affine and López-Dahab
coordinates.
The second set is suitable for low-memory applications, generalizing an approach
started by Avanzi, Ciet, and Sica. It permits devising a scalar multiplication
algorithm that dispenses with the initial precomputation stage
and its associated memory space. A suitable choice of the parameters of the
method leads to a scalar multiplication algorithm on Koblitz Curves
that achieves sublinear complexity in the number of expensive curve operations.
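For comparison, the binary width-$w$ NAF mentioned above can be computed by a standard right-to-left recoding; the following Python sketch (our own illustration, not the $\tau$-adic algorithm of the paper) makes the non-adjacency condition concrete:

```python
def width_w_naf(n, w=3):
    """Binary width-w NAF: every nonzero digit is odd with |d| < 2^(w-1),
    and any nonzero digit is followed by at least w-1 zero digits."""
    digits = []
    while n > 0:
        if n % 2:
            d = n % (1 << w)            # n mod 2^w
            if d >= (1 << (w - 1)):     # pick the signed residue
                d -= 1 << w
            n -= d                       # now n is divisible by 2^w
        else:
            d = 0
        digits.append(d)
        n >>= 1
    return digits                        # least-significant digit first
```

Because $n - d$ is divisible by $2^w$ after each nonzero digit, the next $w-1$ digits are forced to zero, which is exactly the syntactical condition on the expansions $z=\sum_i z_i\tau^i$ above.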
A Real-World Attack Breaking A5/1 within Hours
In this paper we present a real-world hardware-assisted attack on the well-known A5/1 stream cipher, which is (still) used to secure GSM communication in most countries all over the world. During the last ten years A5/1 has been intensively analyzed. However, most of the proposed attacks are of theoretical interest only, since they lack practicality (due to strong preconditions, high computational demands and/or huge storage requirements) and have never been fully implemented.
In contrast to these attacks, our attack, which is based on the work by Keller and Seitz [KS01], runs on an existing special-purpose hardware device called COPACOBANA. With the knowledge of only 64 bits of keystream, the machine is able to reveal the corresponding internal 64-bit state of the cipher in about 7 hours on average. Besides providing a detailed description of our attack architecture as well as implementation results, we propose and analyze an optimization that leads to a further improvement of about 16% in computation time.
Dynamic SHA-2
In this paper I describe the construction of the Dynamic SHA-2 family of cryptographic hash functions. They are built with design components from the SHA-2 family, but I use the bits of the message as parameters of the functions G and R and of the ROTR operation in the new hash function. This enables a novel design principle: when the message is changed, the calculation is different, which makes the system resistant against all extant attacks.
Fast Multiple Point Multiplication on Elliptic Curves over Prime and Binary Fields using the Double-Base Number System
Multiple-point multiplication on elliptic curves is
the most computationally complex operation in elliptic curve
cryptography based digital signature schemes. We describe three
algorithms for multiple-point multiplication on elliptic curves
over prime and binary fields, based on representations
of the two scalars as sums of mixed powers of 2 and 3. Our
approaches include a sliding window mechanism and some precomputed
points on the curve. We prove formulae for
the number of double-base elements, doublings and
triplings below 2^n. Affine coordinates and Jacobian
coordinates are considered in both prime fields and binary fields.
Our new algorithms achieve up to 24% improvement
in multiple-point multiplication.
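The double-base representation underlying these algorithms can be sketched with a simple unsigned greedy recoding (our own illustration in Python; the paper's algorithms add signed terms, sliding windows and precomputed points on top of this idea):

```python
def greedy_double_base(n):
    """Greedy double-base expansion: repeatedly subtract the largest
    term of the form 2^a * 3^b that does not exceed the remainder."""
    terms = []
    while n > 0:
        best = 1
        b3 = 1
        while b3 <= n:              # enumerate powers 3^b <= n
            t = b3
            while t * 2 <= n:       # scale by powers of 2 while it fits
                t *= 2
            best = max(best, t)
            b3 *= 3
        terms.append(best)
        n -= best
    return terms
```

The resulting expansions are short because each term removes most of the remainder, which is what makes doublings and triplings cheap to amortize in scalar multiplication.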
Last updated: 2018-07-18
A Note on Differential Privacy: Defining Resistance to Arbitrary Side Information
In this note we give a precise formulation of "resistance to arbitrary side information" and show that several relaxations of differential privacy imply it. The formulation follows the ideas originally due to Dwork and McSherry, stated implicitly in [Dwork06]. This is, to our knowledge, the first place such a formulation appears explicitly. The proof that relaxed definitions satisfy the Bayesian formulation is new.
Certificateless Signcryption
Certificateless cryptography achieves the best of the two worlds: it inherits from identity-based techniques a solution to the certificate management problem in public-key encryption, whilst removing the secret key escrow functionality inherent to the identity-based setting. Signcryption schemes achieve confidentiality and authentication simultaneously by combining public-key encryption and digital signatures, offering better overall performance and security. In this paper, we introduce the notion of certificateless signcryption and present an efficient construction which guarantees security under insider attacks, and therefore provides forward secrecy and non-repudiation. The scheme is shown to be secure using random oracles under a variant of the bilinear Diffie-Hellman assumption.
Attacking Reduced Round SHA-256
The SHA-256 hash function has recently started to receive attention from the cryptanalysis community,
due to the various weaknesses found in its predecessors such as MD4, MD5, SHA-0 and SHA-1. We make
two contributions in this work. First we describe message modification techniques and use them to obtain an
algorithm to generate message pairs which collide for the actual SHA-256 reduced to 18 steps. Our second
contribution is to present differential paths for 19, 20, 21, 22 and 23 steps of SHA-256. We construct parity
check equations in a novel way to find these characteristics. Further, the 19-step differential path presented here
is constructed by using only 15 local collisions, as against the previously known 19-step near collision differential
path which consists of interleaving of 23 local collisions. Our 19-step differential path can also be seen as a single
local collision at the message word level. We use a linearized local collision in this work. These results do not
cause any threat to the security of the SHA-256 hash function.
Unconditionally Reliable and Secure Message Transmission in Undirected Synchronous Networks: Possibility, Feasibility and Optimality
We study the interplay of network connectivity and the issues related to the ‘possibility’, ‘feasibility’ and ‘optimality’ for unconditionally reliable message transmission (URMT) and unconditionally secure message transmission (USMT) in an undirected
synchronous network, under the influence of an adaptive mixed adversary having unbounded computing power, who can corrupt some of the nodes in the network in Byzantine, omission, fail-stop or passive fashion. We consider two types of adversary, namely threshold and non-threshold. One of the important conclusions we arrive at from our study is that allowing a negligible error probability significantly helps in the ‘possibility’, ‘feasibility’ and ‘optimality’ of both reliable and secure message transmission protocols. To design our protocols, we propose several new techniques which are of independent interest.
Reducing Complexity Assumptions for Oblivious Transfer
Reducing the minimum assumptions needed to construct various cryptographic primitives is an important and
interesting task in theoretical cryptography. Oblivious Transfer, one of the most basic cryptographic building
blocks, is also studied under this scenario. Reducing the minimum assumptions for Oblivious Transfer does not seem
an easy task, as there are a few impossibility results under black-box reductions.
Until recently, it was widely believed that Oblivious Transfer can be constructed from trapdoor permutations but
not from trapdoor functions in general. In this paper, we strengthen previous results and present an
Oblivious Transfer protocol based on a collection of trapdoor functions with some extra properties.
We also provide reasons for adding the extra properties and argue that the assumptions in the protocol
are nearly minimum.
Chosen-Ciphertext Secure Fuzzy Identity-Based Key Encapsulation without ROM
We use hybrid encryption with Fuzzy Identity-Based Encryption (Fuzzy-IBE) schemes and present the first efficient fuzzy identity-based key encapsulation mechanism (Fuzzy-IB-KEM) schemes which are chosen-ciphertext secure (CCA) without random oracles in the selective-ID model. To achieve these goals, we consider Fuzzy-IBE schemes as consisting of separate key and data encapsulation mechanisms (KEM-DEM), and then give the definition of Fuzzy-IB-KEM. Our main idea is to enhance Sahai and Waters' "large universe" construction (Sahai and Waters, 2005), a chosen-plaintext secure (CPA) Fuzzy-IBE, by adding some redundant information to the ciphertext to make it CCA-secure.
Oblivious Transfer Based on the McEliece Assumptions
We implement one-out-of-two bit oblivious transfer (OT) based on the
assumptions used in the McEliece cryptosystem:
the hardness of decoding random binary linear codes, and the difficulty of distinguishing
a permuted generating matrix of Goppa codes from a random matrix. To our knowledge
this is the first OT reduction to these problems only.
More Discriminants with the Brezing-Weng Method
The Brezing-Weng method is a general framework to generate families of pairing-friendly elliptic curves. Here, we introduce an improvement which can be used to generate more curves with larger discriminants. Apart from the number of curves this yields, it provides an easy way to avoid endomorphism rings with small class number.
Constant-Size Dynamic $k$-TAA
$k$-times anonymous authentication ($k$-TAA) schemes allow members
of a group to be authenticated anonymously by application providers
for a bounded number of times. Dynamic $k$-TAA allows application
providers to independently grant or revoke users from their own
access group so as to provide better control over their clients. In
terms of time and space complexity, existing dynamic $k$-TAA schemes
have complexity $O(k)$, where $k$ is the allowed number of
authentications. In this paper, we construct a dynamic $k$-TAA scheme
with space and time complexity $O(\log k)$. We also outline how
to construct dynamic $k$-TAA scheme with a constant proving effort.
Public key size of this variant, however, is $O(k)$.
We then describe a trade-off between efficiency and setup freeness
of AP, in which AP does not need to hold any secret while
maintaining control over their clients.
To build our system, we modify the short group signature scheme into
a signature scheme and provide efficient protocols that allow one to
prove in zero-knowledge the knowledge of a signature and to obtain a
signature on a committed block of messages. We prove that the
signature scheme is secure in the standard model under the $q$-SDH
assumption.
Finally, we show that our dynamic $k$-TAA scheme, constructed from
bilinear pairing, is secure in the random oracle model.
Unbalanced Digit Sets and the Closest Choice Strategy for Minimal Weight Integer Representations
An online algorithm is presented that produces an optimal radix-2 representation of an input integer $n$ using digits from the set $D_{\ell,u}=\{a\in\Z:\ell\le a\le u\}$, where $\ell \leq 0$ and $u \geq 1$. The algorithm works by scanning the digits of the binary representation of $n$ from left-to-right (i.e., from most-significant to least-significant). The output representation is optimal in the sense that, of all radix-2 representations of $n$ with digits from $D_{\ell,u}$, it has as few nonzero digits as possible (i.e., it has \emph{minimal weight}). Such representations are useful in the efficient implementation of elliptic curve cryptography. The strategy the algorithm utilizes is to choose an integer of the form $d 2^i$, where $d \in D_{\ell,u}$, that is closest to $n$ with respect to a particular distance function. It is possible to choose values of $\ell$ and $u$ so that the set $D_{\ell,u}$ is unbalanced in the sense that it contains more negative digits than positive digits, or more positive digits than negative digits. Our distance function takes the possible unbalanced nature of $D_{\ell,u}$ into account.
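The balanced special case $\ell=-1$, $u=1$ of $D_{\ell,u}$ is the classical NAF, which already illustrates the minimal-weight property; the sketch below is our own illustration in Python and uses a right-to-left recoding, unlike the left-to-right algorithm of the paper:

```python
def naf(n):
    """Classical non-adjacent form: the balanced case l = -1, u = 1 of
    D_{l,u}. Digits lie in {-1, 0, 1}, no two adjacent digits are both
    nonzero, and the weight (number of nonzero digits) is minimal."""
    digits = []
    while n > 0:
        if n % 2:
            d = 2 - (n % 4)   # 1 if n = 1 (mod 4), else -1
            n -= d            # n becomes divisible by 4
        else:
            d = 0
        digits.append(d)
        n >>= 1
    return digits             # least-significant digit first
```

With unbalanced digit sets the recoding choice is no longer this simple, which is where the closest-choice distance function of the paper comes in.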
Efficient Lossy Trapdoor Functions based on the Composite Residuosity Assumption
THIS REPORT IS REPLACED BY REPORT 2009/590.
The arithmetic of characteristic 2 Kummer surfaces
The purpose of this paper is a description of a model of Kummer surfaces in characteristic 2, together with the associated formulas for the pseudo-group law. Since the classical model has bad reduction, a renormalization of the parameters is required, that can be justified using the theory of algebraic theta functions. The formulas that are obtained are very efficient and may be useful in cryptographic applications. We also show that applying the same strategy to elliptic curves gives Montgomery-like formulas in odd characteristic that are of some interest, and we recover already known formulas by Stam in characteristic 2.
A Framework for the Sound Specification of Cryptographic Tasks
Nowadays it is widely accepted to formulate the security of a protocol
carrying out a given task via the ``trusted-party paradigm,'' where
the protocol execution is compared with an ideal process where the
outputs are computed by a trusted party that sees all the inputs. A
protocol is said to securely carry out a given task if running the
protocol with a realistic adversary amounts to ``emulating'' the ideal
process with the appropriate trusted party. In the Universal
Composability (UC) framework the program run by the trusted party is
called an {\em ideal functionality}. While this simulation-based
security formulation provides strong security guarantees, its
usefulness is contingent on the properties and correct specification
of the ideal functionality, which, as demonstrated in recent years by
the coexistence of complex, multiple functionalities for the same task
as well as by their ``unstable'' nature, does not seem to be an easy
task.
In this paper we address this problem, by introducing a general methodology for the sound specification of ideal functionalities.
First, we introduce the class of {\em canonical} ideal functionalities
for a cryptographic task, which unifies the syntactic specification of a large class of cryptographic tasks under the same basic template functionality.
Furthermore, this representation enables the isolation of the
individual properties of a cryptographic task as separate members of
the corresponding class. By endowing the class of canonical
functionalities with an algebraic structure, we are able to combine
basic functionalities into a single final canonical functionality for a
given task. Effectively, this puts forth a bottom-up
approach for the specification of ideal functionalities: first one
defines a set of basic constituent functionalities for the task at
hand, and then combines them into a single
ideal functionality taking advantage of the algebraic structure.
In our framework, the constituent functionalities of a task can be
derived either directly or, following a translation strategy we
introduce, from existing game-based definitions; such definitions have
in many cases captured desired individual properties of cryptographic
tasks, albeit in less adversarial settings.
Our translation methodology entails a sequence of steps
that systematically derive a corresponding canonical functionality given a game-based
definition, effectively ``lifting'' the game-based definition to its composition-safe
version.
We showcase our methodology by applying it to a variety of basic cryptographic tasks, including commitments,
digital signatures, zero-knowledge proofs, and oblivious transfer.
While in some cases our derived canonical functionalities are
equivalent to existing formulations, thus attesting to the validity
of our approach, in others they differ, enabling us to ``debug''
previous definitions and pinpoint their shortcomings.
Collisions and other Non-Random Properties for Step-Reduced SHA-256
We study the security of step-reduced but otherwise unmodified SHA-256. We show the first collision attacks on SHA-256 reduced to 23 and 24 steps with complexities $2^{18}$ and $2^{28.5}$, respectively. We give example colliding message pairs for 23-step and 24-step SHA-256. The best previous, recently obtained result was a collision attack for up to 22 steps. We extend our attacks to 23 and 24-step reduced SHA-512 with respective complexities of $2^{44.9}$ and $2^{53.0}$. Additionally, we show non-random behaviour of the SHA-256 compression function in the form of free-start near-collisions for up to 31 steps, which is 6 more steps than the recently obtained non-random behaviour in the form of a free-start near-collision. Even though this represents a step forwards in terms of cryptanalytic techniques, the results do not threaten the security of applications using SHA-256.
Analysis of Step-Reduced SHA-256
This is the first article analyzing the security of SHA-256 against fast collision search which considers the recent attacks by Wang et al. We show the limits of applying techniques known so far to SHA-256. Next we introduce a new type of perturbation vector which circumvents the identified limits. This new technique is then applied to the unmodified SHA-256. Exploiting the combination of Boolean functions and modular
addition together with the newly developed technique allows us to derive collision-producing characteristics for step-reduced SHA-256, which was not possible before. Although our results do not threaten the security of SHA-256, we show that the low probability of a single local collision may give rise to a false sense of security.
Controlling access to personal data through Accredited Symmetrically Private Information Retrieval
With the digitization of society and the continuous migration of services to the electronic world, individuals have lost significant control over their data. In this paper, we consider the problem of protecting personal information according to privacy policies defined by the data subjects. More specifically, we propose a new primitive allowing a data subject to decide when, how, and by whom his data can be accessed, without the database manager learning anything about his identity, at the time the data is retrieved. The proposed solution, which we call Accredited SPIR, combines symmetrically private information retrieval and privacy-preserving digital credentials. We present three constructions based on the discrete logarithm and RSA problems. Despite the added privacy safeguards, the extra cost incurred by our constructions is negligible compared to that of the underlying building blocks.
A Chosen IV Attack Using Phase Shifting Equivalent Keys against DECIM v2
DECIM v2 is a stream cipher submitted to the ECRYPT stream cipher project (eSTREAM) and ISO/IEC 18033-4. No attack against DECIM v2 has been proposed yet. In this paper, we propose a chosen IV attack against DECIM v2 using a new equivalent key class. Our attack can recover an $80$-bit key with a time complexity of $2^{79.90}$ when all bits of the IV are zero. This result is the best one on DECIM v2.
A Pipelined Karatsuba-Ofman Multiplier over GF($3^{97}$) Amenable for Pairing Computation
We present a subquadratic ternary field multiplier based on the combination of several variants of the Karatsuba-Ofman scheme
recently published. Since one of the most relevant applications for this kind of multipliers is pairing computation,
where several field multiplications need to be computed at once, we decided to design a $k$-stage pipeline
structure for $k=1,\ldots,4$, where each stage is composed of a 49-trit polynomial multiplier unit. That
architecture can compute an average of $k$ field multiplications every three clock cycles, which implies that our
four-stage pipeline design can perform more than one field multiplication per clock cycle. When implemented in
a Xilinx Virtex V XC5VLX330 FPGA device, this multiplier can compute one field multiplication over GF($3^{97}$)
in just $11.47$ns.
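The paper's multiplier operates on polynomials over GF(3); the underlying Karatsuba-Ofman recursion, shown here on integers as a minimal Python sketch (our own illustration, with a hypothetical cutoff parameter), replaces four half-size products with three:

```python
def karatsuba(x, y, cutoff=1 << 16):
    """Karatsuba-Ofman multiplication: split each operand in half and
    compute three half-size products instead of four, giving the
    subquadratic complexity the multiplier above relies on."""
    if x < cutoff or y < cutoff:
        return x * y                    # base case: schoolbook product
    m = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> m, x & ((1 << m) - 1)
    yh, yl = y >> m, y & ((1 << m) - 1)
    a = karatsuba(xh, yh)               # high * high
    b = karatsuba(xl, yl)               # low * low
    c = karatsuba(xh + xl, yh + yl) - a - b   # cross term
    return (a << (2 * m)) + (c << m) + b
```

The same three-products-for-four trade applies verbatim to the 49-trit polynomial multiplier units, with polynomial addition replacing integer addition.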
Machine Learning Attacks Against the ASIRRA CAPTCHA
The ASIRRA CAPTCHA [EDHS2007], recently proposed at ACM CCS 2007, relies on the problem of distinguishing images of cats and dogs (a task that humans are very good at). The security of ASIRRA is based on the presumed difficulty of classifying these images automatically. In this paper, we describe a classifier which is 82.7% accurate in telling apart the images of cats and dogs used in ASIRRA. This classifier is a combination of support-vector machine classifiers trained on color and texture features extracted from images. Our classifier allows us to solve a 12-image ASIRRA challenge automatically with probability 10.3%. This probability of success is significantly higher than the estimate given in [EDHS2007] for machine vision attacks. The weakness we expose in the current implementation of ASIRRA does not mean that ASIRRA cannot be deployed securely. With appropriate safeguards, we believe that ASIRRA offers an appealing balance between usability and security. One contribution of this work is to inform the choice of safeguard parameters in ASIRRA deployments.
Pairing Lattices
We provide a convenient mathematical framework that essentially
encompasses all known pairing functions based on the Tate
pairing. We prove non-degeneracy and bounds on the lowest possible
degree of these pairing functions and show how efficient
endomorphisms can be used to achieve a further degree reduction.
A Simple Derivation for the Frobenius Pseudoprime Test
Probabilistic compositeness tests are of great practical importance in cryptography. Besides prominent tests (like the well-known Miller-Rabin test) there are tests that use Lucas sequences for testing compositeness. One example is the so-called Frobenius test, which has a very low error probability. Using a slight modification of the above-mentioned Lucas sequences, we present a simple derivation of the Frobenius pseudoprime test in the version proposed by Crandall and Pomerance.
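For reference, the Miller-Rabin test named above can be sketched in a few lines of Python (our own illustration; the Frobenius test refines the same probabilistic-compositeness idea using Lucas sequences):

```python
import random

def miller_rabin(n, rounds=20):
    """Miller-Rabin probabilistic compositeness test. Returns False only
    for numbers proven composite; True means 'probably prime' with error
    probability at most 4^(-rounds)."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    d, s = n - 1, 0
    while d % 2 == 0:                 # write n - 1 = d * 2^s with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False              # witness found: n is composite
    return True                       # probably prime
```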
Secure Adiabatic Logic: a Low-Energy DPA-Resistant Logic Style
Charge recovery logic families were designed several years ago not to eliminate side-channel leakage but to reduce power consumption. In this article, however, we present a new charge recovery logic style designed not only for high energy efficiency but also for resistance against side-channel attacks (SCA), especially differential power analysis (DPA) attacks. Simulation results show a significant improvement in DPA resistance as well as in power consumption compared with the DPA-resistant logic styles proposed so far.
TinyECCK: Efficient Elliptic Curve Cryptography Implementation over $GF(2^m)$ on 8-bit MICAz Mote
In this paper, we revisit a generally accepted opinion:
implementing Elliptic Curve Cryptosystem (ECC) over $GF(2^m)$ on
sensor motes using small word size is not appropriate because XOR
multiplication over $GF(2^m)$ is not efficiently supported by
current low-powered microprocessors. Although there are some
implementations over $GF(2^m)$ on sensor motes, their performances
are not satisfactory enough to be used for wireless sensor networks (WSNs).
We have found that a field multiplication over $GF(2^m)$ involves
a number of redundant memory accesses, and that this
is the origin of its inefficiency. Moreover, the field reduction process
also requires many redundant memory accesses.
Therefore, we propose some techniques for reducing unnecessary
memory accesses. With the proposed strategies, the
running time of field multiplication and reduction over
$GF(2^{163})$ can be decreased by 21.1\% and 24.7\%, respectively.
These savings noticeably decrease execution times spent in
Elliptic Curve Digital Signature Algorithm (ECDSA) operations
(signing and verification) by around $15\% \sim 19\%$. We present
TinyECCK (Tiny Elliptic Curve Cryptosystem with Koblitz curve -- a
kind of TinyOS package supporting elliptic curve operations) which
is the fastest ECC implementation over $GF(2^m)$ on 8-bit sensor motes
using ATmega128L as far as we know. Through comparisons with existing software
implementations of ECC built in C or hybrid of C and inline
assembly on sensor motes, we show that TinyECCK outperforms
them in terms of running time, code size and supporting services.
Furthermore, we show that a field multiplication over $GF(2^m)$
can be faster than that over $GF(p)$ on 8-bit ATmega128L processor
by comparing TinyECCK with TinyECC, a well-known ECC implementation
over $GF(p)$. TinyECCK with sect163k1 can compute a scalar multiplication
within 1.14 secs on a MICAz mote at the expense of 5,592-byte of ROM and
618-byte of RAM. Furthermore, it can also generate a signature and verify
it in 1.37 and 2.32 secs with 13,748-byte of ROM and 1,004-byte of RAM.
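The XOR ("carry-less") field multiplication discussed above can be sketched bit-serially; the following is a minimal Python illustration using the sect163k1 reduction polynomial, not TinyECCK's optimized word-level code:

```python
# sect163k1 reduction polynomial: x^163 + x^7 + x^6 + x^3 + 1
M = 163
R = (1 << 7) | (1 << 6) | (1 << 3) | 1

def gf2m_mul(a, b):
    """Bit-serial shift-and-XOR multiplication in GF(2^163), with the
    reduction interleaved into the doubling of a. Addition in GF(2^m)
    is XOR, which is why word-level memory-access patterns dominate
    the cost on small microprocessors."""
    r = 0
    while b:
        if b & 1:
            r ^= a             # add (XOR) the current shift of a
        b >>= 1
        a <<= 1
        if a >> M:             # degree reached 163: reduce modulo f(x)
            a ^= (1 << M) | R
    return r
```

TinyECCK's contribution is to restructure the word-level version of this loop so that operands stay in registers instead of being repeatedly reloaded from memory.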
New proofs for old modes
We study the standard block cipher modes of operation: CBC, CFB, and OFB
and analyse their security. We don't look at ECB other than briefly to
note its insecurity, and we have no new results on counter mode. Our
results improve over those previously published in that (a) our bounds are
better, (b) our proofs are shorter and easier, (c) the proofs correct
errors we discovered in previous work, or some combination of these. We
provide a new security notion for symmetric encryption which turns out to
be rather useful when analysing block cipher modes. Finally, we pay
attention to different methods for selecting initialization vectors for the
block cipher modes, and prove security for a number of different selection
policies. In particular, we introduce the concept of a `generalized
counter', and prove that generalized counters suffice for security in
(full-width) CFB and OFB modes and that generalized counters encrypted
using the block cipher (with the same key) suffice for all three modes.
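For reference, the way these modes consume their IVs can be sketched in a few lines of Python. The block "cipher" below is an illustrative keyed permutation of our own invention, not a real cipher, and OFB decryption is identical to encryption:

```python
# Minimal OFB-mode sketch over 64-bit blocks. toy_block_cipher is a
# stand-in keyed permutation (odd-constant multiply, XOR, rotate are
# all bijective mod 2^64) -- NOT a secure cipher.
MASK = (1 << 64) - 1

def toy_block_cipher(key, block):
    x = block
    for r in range(8):
        x = ((x * 0x9E3779B97F4A7C15) ^ key ^ r) & MASK
        x = ((x << 13) | (x >> 51)) & MASK
    return x

def ofb_encrypt(key, iv, blocks):
    """OFB: s_1 = E_K(IV), s_{i+1} = E_K(s_i); c_i = m_i XOR s_i."""
    out, s = [], iv
    for m in blocks:
        s = toy_block_cipher(key, s)
        out.append(m ^ s)
    return out
```

Because the keystream depends only on the key and the IV, applying `ofb_encrypt` twice with the same key and IV recovers the message; this is also why IV selection policies, such as the generalized counters studied in the paper, matter for security.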
Public key encryption and encryption emulation attacks
The main purpose of this paper is to suggest that public key encryption can be secure against the "encryption emulation" attack (on the sender's encryption) by a computationally unbounded adversary, with one reservation: a legitimate receiver decrypts correctly with probability that can be made arbitrarily close to 1, but not equal to 1.
Linear Bandwidth Naccache-Stern Encryption
The Naccache-Stern (NS) knapsack cryptosystem is an original yet
little-known public-key encryption scheme. In this scheme, the
ciphertext is obtained by multiplying public-keys indexed by the
message bits modulo a prime $p$. The cleartext is recovered by
factoring the ciphertext raised to a secret power modulo $p$.
NS encryption requires, on average, one multiplication per two
plaintext bits. Decryption is roughly as costly as an RSA decryption.
However, NS features a bandwidth sublinear in $\log p$, namely $\log
p/\log \log p$. As an example, for a $2048$-bit prime $p$, NS
encryption features a 233-bit bandwidth for a 59-kilobyte public key
size.
This paper presents new NS variants achieving bandwidths {\sl
linear} in $\log p$. As linear bandwidth requires a public key of size
$\log^3 p/\log \log p$, we recommend combining our scheme with the
other bandwidth optimization techniques presented here.
For a $2048$-bit prime $p$, we obtain figures such as a 169-bit
plaintext for a 10-kilobyte public key, a 255-bit plaintext for a
20-kilobyte public key, or a 781-bit plaintext for a 512-kilobyte
public key. Encryption and decryption remain unaffected by our
optimizations: As an example, the 781-bit variant requires 152
multiplications per encryption.
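The NS mechanics described above fit in a few lines. Below is a toy instance with deliberately tiny, insecure parameters (our own choices, for illustration only): each public key $v_i = p_i^{s^{-1}} \bmod p$ hides a small prime, the ciphertext multiplies together the $v_i$ selected by the message bits, and decryption raises to the secret $s$ and factors over the small primes.

```python
# Toy Naccache-Stern instance. Parameters are illustrative and
# completely insecure: p must exceed the product of the small primes
# so that c^s mod p equals prod p_i^{m_i} over the integers.
p = 421                       # prime, 2*3*5*7 = 210 < p
small_primes = [2, 3, 5, 7]   # one small prime per message bit
s = 11                        # secret exponent, gcd(s, p-1) = 1
s_inv = pow(s, -1, p - 1)
pub = [pow(q, s_inv, p) for q in small_primes]  # public keys v_i

def encrypt(bits):
    """Multiply the public keys selected by the message bits mod p."""
    c = 1
    for b, v in zip(bits, pub):
        if b:
            c = c * v % p
    return c

def decrypt(c):
    """Raise to the secret power, then read off bits by divisibility."""
    x = pow(c, s, p)          # = prod p_i^{m_i}, exactly, since < p
    return [1 if x % q == 0 else 0 for q in small_primes]
```

The bandwidth limitation the paper attacks is visible even here: only 4 message bits per ciphertext, because the product of the selected primes must stay below $p$.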
Setting Speed Records with the (Fractional) Multibase Non-Adjacent Form Method for Efficient Elliptic Curve Scalar Multiplication
In this paper, we introduce the Fractional Window-w Multibase Non-Adjacent Form (Frac-wmbNAF) method to perform scalar multiplication. This method generalizes the recently developed Window-w mbNAF (wmbNAF) method by allowing an unrestricted number of precomputed points. We then make a comprehensive analysis of the most recent and relevant methods in the literature for ECC scalar multiplication, including the presented generalization and its original non-window version, known as Multibase Non-Adjacent Form (mbNAF). Moreover, we present new improvements in the point operation formulae. Specifically, we further reduce the cost of composite operations such as doubling-addition, tripling, quintupling and septupling of a point, which are relevant for speeding up methods using multiple bases. We also analyze the precomputation stage of scalar multiplication and present efficient schemes for the different scenarios studied. Our analysis includes the standard elliptic curves using Jacobian coordinates, as well as Edwards curves, which are attracting growing attention due to their high performance. We demonstrate with extensive tests that mbNAF is currently the most efficient method without precomputations, not only for the standard curves but also for the faster Edwards form. Similarly, Frac-wmbNAF is shown to attain the highest performance among window-based methods for all the studied curve forms.
Exponentiation in pairing-friendly groups using homomorphisms
We present efficiently computable homomorphisms of the groups $G_2$ and $G_T$ for pairings $G_1 \times G_2 \rightarrow G_T$. This allows
exponentiation in $G_2$ and $G_T$ to be accelerated using the Gallant-Lambert-Vanstone method.
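The core of the Gallant-Lambert-Vanstone speedup is that once an exponent $k$ is decomposed as $k = k_1 + k_2\lambda$ with half-length $k_1, k_2$, the exponentiation $g^k = g^{k_1} h^{k_2}$ (where $h = g^\lambda$ comes cheaply from the homomorphism) can be done jointly via Shamir's trick. A sketch in a generic multiplicative group mod $p$ (illustrative only; the paper's groups are $G_2$ and $G_T$):

```python
# Joint square-and-multiply ("Shamir's trick"): one shared squaring
# chain for both half-length exponents, at most one extra multiply
# per bit. Precomputing g*h saves the double-multiply case.
def joint_exp(g, h, k1, k2, p):
    gh = g * h % p
    acc = 1
    for i in range(max(k1.bit_length(), k2.bit_length()) - 1, -1, -1):
        acc = acc * acc % p
        b1, b2 = (k1 >> i) & 1, (k2 >> i) & 1
        if b1 and b2:
            acc = acc * gh % p
        elif b1:
            acc = acc * g % p
        elif b2:
            acc = acc * h % p
    return acc
```

Halving the length of the squaring chain is where the GLV saving comes from; the homomorphisms constructed in the paper are what make $h = g^\lambda$ essentially free in $G_2$ and $G_T$.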
Chosen-Ciphertext Security via Correlated Products
We initiate the study of one-wayness under {\em correlated products}. We are interested in identifying necessary and sufficient conditions for a function $f$ and a distribution on inputs $(x_1, \ldots, x_k)$, so that the function $(f(x_1), \ldots, f(x_k))$ is one-way. The main motivation of this study is the construction of public-key encryption schemes that are secure against chosen-ciphertext attacks (CCA). We show that any collection of injective trapdoor functions that is secure under a very natural correlated product can be used to construct a CCA-secure public-key encryption scheme. The construction is simple, black-box, and admits a direct proof of security. It can be viewed as a simplification of the seminal work of Dolev, Dwork and Naor (SICOMP '00), while relying on a seemingly incomparable assumption.
We provide evidence that security under correlated products is achievable by demonstrating that lossy trapdoor functions (Peikert and Waters, STOC '08) yield injective trapdoor functions that are secure under the above mentioned correlated product. Although we currently base security under correlated products on existing constructions of lossy trapdoor functions, we argue that the former notion is potentially weaker as a general assumption. Specifically, there is no fully-black-box construction of lossy trapdoor functions from trapdoor functions that are secure under correlated products.
A Comparison Between Hardware Accelerators for the Modified Tate Pairing over $\mathbb{F}_{2^m}$ and $\mathbb{F}_{3^m}$
In this article we propose a study of the modified Tate pairing in characteristics two and three. Starting from the $\eta_T$ pairing introduced by Barreto {\em et al.} (Des Codes Crypt, 2007), we detail various algorithmic improvements in the case of characteristic two. As far as characteristic three is concerned, we refer to the survey by Beuchat {\em et al.} (ePrint 2007-417). We then show how to get back to the modified Tate pairing at almost no extra cost. Finally, we explore the trade-offs involved in the hardware implementation of this pairing for both characteristics two and three. From our experiments, characteristic three appears to have a slight advantage over characteristic two.
Scalable and Efficient Provable Data Possession
Storage outsourcing is a rising trend which prompts a
number of interesting security issues, many of which
have been extensively investigated in the past. However,
Provable Data Possession (PDP) is a topic that has only
recently appeared in the research literature. The main
issue is how to frequently, efficiently and securely verify
that a storage server is faithfully storing its client’s
(potentially very large) outsourced data. The storage
server is assumed to be untrusted in terms of both security
and reliability. (In other words, it might maliciously
or accidentally erase hosted data; it might also relegate
it to slow or off-line storage.) The problem is exacerbated
by the client being a small computing device with
limited resources. Prior work has addressed this problem
either by using public key cryptography or by requiring
the client to outsource its data in encrypted form.
In this paper, we construct a highly efficient and
provably secure PDP technique based entirely on symmetric
key cryptography, while not requiring any bulk
encryption. Also, in contrast with its predecessors, our
PDP technique allows outsourcing of dynamic data, i.e.,
it efficiently supports operations such as block modification,
deletion and append.
Open Source Is Not Enough. Attacking the EC-package of Bouncycastle version 1.x_132
BouncyCastle is an open source Crypto provider written in Java which supplies
classes for Elliptic Curve Cryptography (ECC). We have found a flaw in the class ECPoint
resulting from an unhappy interaction of elementary algorithms. We show how to exploit this flaw in a real-world attack, e.g., on the encryption scheme ECIES. BouncyCastle has since fixed this flaw (version 1.x_133 and higher), but all older versions remain highly vulnerable to an active attacker, and the attack reveals a certain vulnerability in the involved validation algorithms.
Democratic Group Signatures with Threshold Traceability
Recently, democratic group signatures (DGSs) have attracted
particular attention
due to their great flexibility, \emph{i.e.}, \emph{no
group manager}, \emph{anonymity}, and \emph{individual
traceability}. In existing DGS schemes, individual traceability means
that any member in the group can reveal the actual signer's
identity from a given signature. In this paper, we formally describe
the definition of DGS, revisit its security notions by strengthening
the requirement for the property of traceability, and present a
concrete DGS construction with $(t, n)$-\emph{threshold
traceability} which combines the concepts of group signatures and of
threshold cryptography. The idea behind the $(t, n)$-threshold
traceability is to distribute between $n$ group members the
capability of tracing the actual signer such that any subset of not
less than $t$ members can jointly reconstruct a secret and reveal
the identity of the signer while preserving security even in the
presence of an active adversary which can corrupt up to $t-1$ group
members.
THE DESIGN OF BOOLEAN FUNCTIONS BY MODIFIED HILL CLIMBING METHOD
The design of Boolean functions is a broad area of cryptographic research, as Boolean functions play an important role in the construction of symmetric cryptosystems. In this paper the modified hill climbing method is considered. The method uses hill climbing techniques to modify bent functions in order to design balanced,
highly nonlinear Boolean functions with high algebraic degree and low autocorrelation. Experimental results on constructing cryptographically strong Boolean functions are presented.
Last updated: 2012-03-16
On the Design of Secure and Fast Double Block Length Hash Functions
In this work the security of double block length hash functions with
rate 1, which are based on a block cipher with a block length of $n$
bits and a key length of $2n$ bits, is reconsidered.
Counter-examples and new attacks are presented on this general class
of fast double block length hash functions, which reveal unnoticed
flaws in the necessary conditions given by Satoh \textit{et al.} and
Hirose. Preimage and second preimage attacks are presented on
Hirose's two examples which were left as an open problem. Our
synthetic analysis shows that all rate-1 hash functions in FDBL-II
fail to be optimally (second) preimage resistant. The
necessary conditions are refined for ensuring a subclass of hash
functions in FDBL-II to be optimally secure against collision
attacks. In particular, one of Hirose's two examples, which
satisfies our refined conditions, is proven to be indifferentiable
from a random oracle in the ideal cipher model. The security results
are extended to a new class of double block length hash functions
with rate 1, where the key length of one block cipher used in the
compression function is equal to the block length, whereas the other
is doubled.
Collisions for Round-Reduced LAKE
LAKE is a family of cryptographic hash functions presented at FSE 2008. It is an iterated hash function and defines two main instances with a 256 bit and 512 bit hash value. In this paper, we present the first security analysis of LAKE. We show how collision attacks, exploiting the non-bijectiveness of the internal compression function of LAKE, can be mounted on reduced variants of LAKE. We show an efficient attack on the 256 bit hash function LAKE-256 reduced to 3 rounds and present an actual colliding message pair. Furthermore, we present a theoretical attack on LAKE-256 reduced to 4 rounds with a complexity of $2^{109}$. By using more sophisticated message modification techniques we expect that the attack can be extended to 5 rounds. However, for the moment our approach does not appear to be applicable to the full LAKE-256 hash function (with all 8 rounds).
New Differential-Algebraic Attacks and Reparametrization of Rainbow
A recently proposed class of multivariate quadratic schemes, the
Rainbow-Like signature Schemes, in which successive sets of central
variables are obtained from previous ones by solving linear
equations, seem to lead to efficient schemes (TTS, TRMS, and
Rainbow) that perform well on systems with low computational
resources. Recently SFLASH ($C^{\ast-}$) was broken by Dubois,
Fouque, Shamir, and Stern via a differential attack. In this paper,
we exhibit similar attacks based on differentials, that will reduce
published Rainbow-like schemes below their security levels. We will
present a new type of construction of Rainbow-Like schemes and
design signature schemes with new parameters for practical
applications.
Private Branching Programs: On Communication-Efficient Cryptocomputing
We polish a recent cryptocomputing method of Ishai and Paskin from TCC 2007. More precisely, we show that every function can be cryptocomputed with communication linear in the product of the client's input length and the length of the branching program, and with computation linear in the size of the branching program that computes it. The method is based on the existence of a communication-efficient $(2,1)$-CPIR protocol. We give several nontrivial applications, including: (a) an improvement on the communication of Lipmaa's CPIR protocol; (b) a CPIR protocol with log-squared communication and sublinear server computation, obtained by giving a secure function evaluation protocol for Boolean functions with similar performance; (c) a protocol for PIR-writing with low amortized complexity; (d) a selective private function evaluation (SPFE) protocol. We detail one application of SPFE that makes it possible to compute how similar the client's input is to an element in the server's database, without revealing any information to the server. For SPFE, we design a $4$-message extension of the basic protocol that is efficient for a large class of functionalities.
Knapsack cryptosystems built on NP-hard instances
We construct three public key knapsack cryptosystems.
Standard knapsack cryptosystems hide easy instances
of the knapsack problem and have been broken. The systems considered in
the article face this problem: They
hide a random (possibly hard) instance of the knapsack
problem. We provide both complexity results (size of
the key, time needed to encipher/decipher...)
and experimental results.
Security results are given for the second cryptosystem
(the fastest one and the one with the shortest key).
Probabilistic polynomial reductions show that
finding the private key is as difficult
as factoring a product of two primes. We also consider heuristic
attacks. First, the density of the cryptosystem can be chosen
arbitrarily close to one, discarding low density attacks.
Finally, we consider explicit heuristic attacks based on the LLL algorithm and we
prove that with respect to these attacks, the public key
is as secure as a random key.
Cryptanalysis of White-Box Implementations
A white-box implementation of a block cipher is a software implementation from which it is difficult for an attacker to extract the cryptographic key. Chow et al. published white-box implementations for AES and DES that both have been cryptanalyzed. However, these white-box implementations are based on ideas that can easily be used to derive white-box implementations for other block ciphers as well. As the cryptanalyses published use typical properties of AES and DES, it remains an open question whether the white-box techniques proposed by Chow et al. can result in a secure white-box implementation for other ciphers than AES and DES. In this paper we identify a generic class of block ciphers for which the white-box techniques of Chow et al. do not result in a secure white-box implementation. The result can serve as a basis to design block ciphers and to develop white-box techniques that do result in secure white-box implementations.
Simplified Security Notions of Direct Anonymous Attestation and a Concrete Scheme from Pairings
Direct Anonymous Attestation (DAA) is a cryptographic mechanism that
enables remote authentication of a user while preserving privacy under
the user's control. The DAA scheme developed by Brickell, Camenisch,
and Chen has been adopted by the Trusted Computing Group (TCG) for remote anonymous attestation of the Trusted Platform Module (TPM), a small
hardware device with limited storage space and communication
capability. In this paper, we provide two contributions to DAA. We
first introduce simplified security notions of DAA including the formal definitions of user controlled anonymity and traceability. We then propose a new DAA scheme from elliptic curve cryptography and bilinear maps. The lengths of private keys and signatures in our scheme are much shorter than the lengths in the original DAA scheme, with a similar level of security and computational complexity. Our scheme builds upon the Camenisch-Lysyanskaya signature scheme and is efficient and provably secure in the random oracle model under the LRSW (stands for Lysyanskaya, Rivest, Sahai and Wolf) assumption and the decisional Bilinear Diffie-Hellman assumption.
Last updated: 2008-12-16
Identity-Based Proxy Re-encryption Schemes with Multiuse, Unidirection, and CCA Security
A proxy re-encryption (PRE) scheme allows a proxy to transform a
ciphertext under Alice's public key into a ciphertext under Bob's
public key on the same message. In 2006, Green and Ateniese extended
the above notion to identity-based proxy re-encryption (IB-PRE), and
proposed two open problems \cite{GA06}: building 1. IB-PRE schemes
which are CCA-secure in the standard model; 2. multi-use CCA-secure
IB-PRE schemes. Chu and Tzeng proposed two identity-based proxy
re-encryption schemes afterwards in \cite{CT07}, aiming at solving
the two open problems. However, in this paper, we reveal a
security flaw in their solution, showing that these two open
problems remain unsolved. Furthermore, we propose an improvement
which does solve the open problems.
Degradation and Amplification of Computational Hardness
What happens when you use a partially defective bit-commitment protocol to commit to the same bit many times? For example, suppose that the protocol allows the receiver to guess the committed bit with advantage $\eps$, and that you used that protocol to commit to the same bit more than $1/\eps$ times. Or suppose that you encrypted some message many times (to many people), only to discover later that the encryption scheme that you were using is partially defective, and an eavesdropper has some noticeable advantage in guessing the encrypted message from the ciphertext. Can we at least show that even after many such encryptions, the eavesdropper could not have learned the message with certainty?
In this work we take another look at amplification and degradation of computational hardness. We describe a rather generic setting where one can argue about amplification or degradation of computational hardness via sequential repetition of interactive protocols, and prove that in all the cases that we consider, it behaves as one would expect from the corresponding information-theoretic bounds. In particular, for the example above we can prove that after committing to the same bit $n$ times, the receiver's advantage in guessing the committed bit is negligibly close to $1-(1-\eps)^n$.
Our results for hardness amplification follow just by observing that some of the known proofs for Yao's lemmas can be easily extended also to handle interactive protocols. On the other hand, the question of hardness degradation was never considered before as far as we know, and we prove these results from scratch.
Last updated: 2008-06-03
Probabilistic Verifiable Secret Sharing Tolerating Adaptive Adversary
In this work we focus on two basic secure distributed computation tasks - Probabilistic Weak Secret Sharing (PWSS) and Probabilistic Verifiable Secret Sharing (PVSS). PVSS allows a dealer to share a secret among several players in a way that would later allow a unique reconstruction of the secret with negligible error probability. PWSS is a slightly weaker version of PVSS where the dealer can choose not to disclose his secret later. Both of them are well-studied problems. While PVSS is used as a building block in every general probabilistic secure multiparty computation, PWSS can be used as a building block for PVSS protocols. Both these problems can be parameterized by the number of players ($n$) and the fault-tolerance threshold ($t$), which bounds the total number of malicious (Byzantine) players having {\it unbounded computing power}. We focus on the standard {\it secure channel model}, where all players have access to secure point-to-point channels and a common broadcast medium. We show the following for PVSS: (a) 1-round PVSS is possible iff $t=1$ and $n>3$; (b) 2-round PVSS is possible if $n>3t$; (c) 4-round PVSS is possible if $n>2t$. For PWSS we show the following: (a) 1-round PWSS is possible iff $n>3t$ and (b) 3-round PWSS is possible if $n>2t$. All our protocols are {\it efficient}. Comparing our results with the existing trade-off
results for perfect (zero error probability) VSS and WSS, we find that probabilistically relaxing the conditions of VSS/WSS helps to increase fault tolerance significantly.
Accelerating the Scalar Multiplication on Elliptic Curve Cryptosystems over Prime Fields
Elliptic curve cryptography (ECC), independently introduced by Koblitz and Miller in the 80's, has attracted increasing attention in recent years due to its shorter key length requirement in comparison with other public-key cryptosystems such as RSA. Shorter key length means reduced power consumption and computing effort, and less storage requirement, factors that are fundamental in ubiquitous portable devices such as PDAs, cellphones, smartcards, and many others. To that end, a lot of research has been carried out to speed-up and improve ECC implementations, mainly focusing on the most important and time-consuming ECC operation: scalar multiplication.
In this thesis, we focus on optimizing such ECC operation at the point and scalar arithmetic levels, specifically targeting standard curves over prime fields. At the point arithmetic level, we introduce two innovative methodologies to accelerate ECC formulae: the use of new composite operations, which are built on top of basic point doubling and addition operations; and the substitution of field multiplications by squarings and other cheaper operations. These techniques are efficiently exploited, individually or jointly, in several contexts: to accelerate the computation of scalar multiplications and of precomputed points for window-based scalar multiplications (up to 30% improvement in comparison with the previous best method); to speed up computations of simple side-channel attack (SSCA)-protected implementations using innovative atomic structures (up to 22% improvement in comparison with scalar multiplication using the original atomic structures); and to develop parallel formulae for SIMD-based applications, which are able to execute three and four operations simultaneously (up to 72% improvement in comparison with a sequential scalar multiplication).
At the scalar arithmetic level, we develop new sublinear (in terms of Hamming weight) multibase scalar multiplications based on NAF-like conversion algorithms that are shown to be faster than any previous scalar multiplication method. For instance, the proposed multibase scalar multiplications reduce computing times by 10.9% and 25.3% in comparison with traditional NAF for unprotected and SSCA-protected scenarios, respectively. Moreover, our conversion algorithms overcome the problem of converting any integer to multibase representation, solving an open problem that had been regarded as hard. Thus, our algorithms make the use of multiple bases practical for applications such as ECC scalar multiplication for the first time.
The Elliptic Curve Discrete Logarithm Problem and Equivalent Hard Problems for Elliptic Divisibility Sequences
We define three hard problems in the theory of elliptic divisibility sequences (EDS Association, EDS Residue and EDS Discrete Log), each of which is solvable in sub-exponential time if and only if the elliptic curve discrete logarithm problem is solvable in sub-exponential time. We also relate the problem of EDS Association to the Tate pairing and the MOV, Frey-Rück and Shipsey EDS attacks on the elliptic curve discrete logarithm problem in the cases where these apply.
On Security Notions for Verifiable Encrypted Signature
First we revisit three verifiably encrypted signature schemes -
BGLS, MBGLS and GZZ [2,3,6]. We find that none of them is
strongly unforgeable. We remark that the notion of existential
unforgeability is not sufficient for fair exchange protocols in most
circumstances. So we propose three new verifiably encrypted
signature schemes - NBGLS, MBGLS and NGZZ - which are strongly
unforgeable. We also reconsider two other verifiably encrypted
signature schemes - ZSS and CA [4,8] - and find that neither can
resist a public-key replacement attack. So we strongly suggest that
strong unforgeability may be a better notion for verifiably
encrypted signatures than existential unforgeability, and that
checking that the adjudicator knows its private key is a necessary
step for a secure verifiably encrypted signature scheme.
Fairness with an Honest Minority and a Rational Majority
We provide a simple protocol for secret reconstruction in any threshold secret sharing scheme, and prove that it is fair when executed with many rational parties together with a small minority of honest parties. That is, all parties will learn the secret with high probability when the honest parties follow the protocol and the rational parties act in their own self-interest (as captured by the notion of a Bayesian subgame perfect equilibrium). The protocol only
requires a standard (synchronous) broadcast channel, and tolerates fail-stop deviations (i.e. early stopping, but not incorrectly computed messages).
Previous protocols for this problem in the cryptographic or economic models have either required an honest majority, used strong communication channels that enable simultaneous exchange of information, or settled for approximate notions of security/equilibria.
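For context, the reconstruction step whose fairness is at stake is, in Shamir's $(t, n)$ scheme, Lagrange interpolation of the shared polynomial at zero. A minimal sketch of generic Shamir reconstruction (our own illustration, not the paper's protocol; the prime and helper name are assumptions):

```python
# Shamir (t, n) reconstruction: given t points (x_i, y_i) on a secret
# degree-(t-1) polynomial over GF(P), recover f(0) by Lagrange
# interpolation. The fair protocol governs *when* shares are revealed;
# the arithmetic below is the standard reconstruction itself.
P = 2**127 - 1  # a Mersenne prime, so arithmetic is over a field

def reconstruct(shares):
    """shares: list of t distinct (x_i, y_i) pairs; returns f(0)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P        # numerator:  prod (0 - x_j)
                den = den * (xi - xj) % P    # denominator: prod (x_i - x_j)
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

Any $t$ honest shares suffice; the rational-party analysis in the paper concerns the incentives to actually broadcast those shares.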
Optimal Pairings
In this paper we introduce the concept of an \emph{optimal pairing}, which by definition can be computed using only $\log_2 r/ \varphi(k)$ basic Miller iterations, with $r$ the order of the groups involved and $k$ the embedding degree. We describe an algorithm to construct optimal ate pairings on all parametrized families of pairing friendly elliptic curves. Finally, we conjecture that any non-degenerate pairing on an elliptic curve without efficiently computable endomorphisms different from powers of Frobenius requires at least $\log_2 r/ \varphi(k)$ basic Miller iterations.
Strongly Unforgeable ID-based Signatures Without Random Oracles
In this paper, we construct a strongly unforgeable ID-based signature scheme without random oracles. The signature size of our scheme is smaller than that of other schemes based on varieties of the Diffie-Hellman problem or the discrete logarithm problem. The security of the scheme relies on the difficulty of solving three problems related to the Diffie-Hellman problem and a one-way isomorphism.
Universally Composable Undeniable Signature
How to define the security of undeniable signature schemes is a challenging task. This paper presents two security definitions of undeniable signature schemes which are more useful or natural than the existing definition. It then proves their equivalence.
We first define the UC-security, where UC means universal composability. We next show that there exists a UC-secure undeniable signature scheme which does not satisfy the standard definition of security that has been believed to be adequate so far. More precisely, it does not satisfy the invisibility defined by \cite{DP96}. We then show a more adequate definition of invisibility which captures a wider class of (naturally secure) undeniable signature schemes.
We finally prove that the UC-security against non-adaptive adversaries is equivalent to this definition of invisibility and the strong unforgeability in $\cF_{ZK}$-hybrid model, where $\cF_{ZK}$ is the ideal ZK functionality. Our result of equivalence implies that all the known proven secure undeniable signature schemes (including Chaum's scheme) are UC-secure if the confirmation/disavowal protocols are both UC zero-knowledge.
New ID-based Fair Blind Signatures
A blind signature is a cryptographic primitive in which a user can obtain a signature from the signer without revealing any information about the message-signature pair. Blind signatures are used in electronic payment systems, electronic voting machines, etc. This anonymity can be misused by criminals for money laundering or for transferring dubious funds. To prevent such crimes, the idea of a fair blind signature scheme was introduced by Stadler et al. In a fair blind signature scheme, there is a trusted third party, the judge, who can provide a linking protocol that allows the signer to link his view to the message-signature pair. In this paper we propose some identity-based fair blind signatures.
An Efficient SPRP-secure Construction based on Pseudo Random Involution
Here we present a new security notion called pseudo random
involution (PRI), which is associated with tweakable involution
enciphering schemes (TIES), i.e., schemes in which encryption and
decryption are the same algorithm. This new security notion is
important for two reasons. Firstly, it is the natural security
notion for TIES, which are of practical importance. Secondly, we
show that there is a generic method to obtain an sprp-secure
tweakable enciphering scheme (TES) from a pri-secure construction.
The generic method costs an extra xor with an extra key. In this
paper, we also propose an efficient pri-secure construction,
Hash-Counter Involution (HCI), and based on it we obtain an
sprp-secure construction which is a real improvement over XCB. We
call the new construction MXCB, or Modified-XCB. HCH, XCB and HCTR
are some of the popular counter-based enciphering schemes; HCTR is
the most efficient among them, while HCH and XCB guarantee more
security than HCTR. The new proposal MXCB has efficiency similar to
HCTR and security guarantees similar to those of HCH and XCB. We
consider this new construction to be important in light of the
current activities of the IEEE working group on storage security,
which is working towards a standard for a wide-block TES.
A Generic Method to Extend Message Space of a Strong Pseudorandom Permutation
In this paper we present an efficient and secure generic method
which can encrypt messages of size at least $n$. This generic
encryption algorithm needs a secure encryption algorithm for
messages whose length is a multiple of $n$. The first generic
construction, XLS, was proposed by Ristenpart and Rogaway at
FSE-07. It needs two extra invocations of an independently chosen
strong pseudorandom permutation or SPRP defined over $\s^n$ for
the encryption of an incomplete message block, whereas our
construction needs only one invocation of a weak pseudorandom
function and two multiplications over a finite field
(equivalently, two invocations of a universal hash function). We
prove here that the proposed method preserves (tweakable) SPRP.
This new construction is meaningful for two reasons. Firstly, it
is based on weak pseudorandom function which is a weaker security
notion than SPRP. Thus we are able to achieve stronger security
from a weaker one. Secondly, in practice, finite field
multiplication is more efficient than an invocation of SPRP. Hence
our method can be more efficient than XLS.
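The cost accounting above (two field multiplications in place of an SPRP call) can be illustrated with a toy almost-universal hash; this is a generic sketch under our own assumptions, not the paper's construction, and the Mersenne-prime field and Horner form are illustrative choices.

```python
# Toy almost-universal hash costing exactly two field multiplications
# per two-block input, via Horner evaluation: H_k(a, b) = (a*k + b)*k mod P.
# Illustrative only; the field and the encoding are not from the paper.
P = (1 << 127) - 1  # a Mersenne prime, chosen for cheap modular reduction

def uhash(k: int, a: int, b: int) -> int:
    # H_k(a,b) - H_k(a',b') = k*((a-a')*k + (b-b')), a polynomial in k of
    # degree <= 2 that vanishes for at most 2 of the P possible keys, so
    # the collision probability over a random key is <= 2/P.
    return ((a * k + b) % P) * k % P
```

Each evaluation performs two multiplications modulo $P$, matching the per-block cost the abstract compares against an extra SPRP invocation.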
Improving upon HCTR and matching attacks for Hash-Counter-Hash approach
McGrew and Fluhrer first proposed the hash-counter-hash approach to
encrypt arbitrary-length messages. By its nature, the counter mode
can handle incomplete message blocks in the same manner as complete
ones. HCTR is to date the most efficient strong pseudorandom
permutation (SPRP) among all known counter-based SPRPs. But as of
now, only a cubic security bound for HCTR is known. Moreover, not
all invocations of the underlying block cipher can be made in
parallel. Our new proposal (we call it HMC, or Hash Modified
Counter) provides a quadratic security bound, and all block cipher
invocations are parallel in nature even if we have an incomplete
message block. We also present a prp-distinguishing attack on a
generic counter-based encryption scheme, which makes $q$
non-adaptive encryption queries consisting of $(\ell+1)$ $n$-bit
blocks and has success probability roughly $\ell^2q^2/2^n$. Loosely
speaking, this success probability matches the upper bound on the
distinguishing probability. As a result, we prove that the known
quadratic bounds for XCB, HCH and HMC are tight.
An improved preimage attack on MD2
This paper describes an improved preimage attack on the cryptographic hash function MD2. The attack has complexity equivalent to about $2^{73}$ evaluations of the MD2 compression function. This is to be compared with the previous best known preimage attack, which has complexity about $2^{97}$.
A Public Key Encryption In Standard Model Using Cramer-Shoup Paradigm
We present a public-key encryption scheme which is provably secure against adaptive chosen ciphertext attack. The scheme is constructed using the Cramer-Shoup paradigm. The security of the scheme is based on the Decisional Bilinear Diffie-Hellman problem.
Towards a Theory of White-Box Security
Program hardening for secure execution in a remote untrusted environment is an important yet elusive goal of security, with numerous attempts and efforts by the research community to produce secure solutions.
Obfuscation is the prevailing practical technique employed to tackle this issue.
Unfortunately, no provably secure obfuscation techniques currently exist. Moreover, Barak et al. showed that not all programs can be obfuscated.
Theoretical research exhibits provably secure albeit inefficient constructions, e.g., using tools from the encrypted domain. We present a rigorous approach to software execution in a remote environment based on a new white-box primitive, White Box Remote Program Execution (WBRPE), whose security specifications include confidentiality and integrity of both the local and the remote hosts. WBRPE can be used for many applications, e.g., grid computing, digital rights management, and mobile agents.
We then present a construction of a specific program such that if there exists a secure WBRPE for that program, then there is a secure WBRPE for any program, thereby reducing the security of any program to that of the underlying WBRPE primitive. The security of the WBRPE construction is established by a reduction between two white-box primitives, and it introduces new techniques of program manipulation.
Efficient Perfectly Reliable and Secure Communication Tolerating Mobile Adversary
We study the problem of Perfectly Reliable Message Transmission (PRMT) and Perfectly Secure Message Transmission (PSMT) between two nodes S and R in an undirected synchronous network, part of which is under the influence of an all-powerful mobile Byzantine adversary. In ACISP 2007, Srinathan et al. proved that the connectivity requirement for PSMT protocols is the same for static and mobile adversaries, thus showing that mobility of the adversary has no effect on the possibility of PSMT (and PRMT) protocols. Similarly, in CRYPTO 2004, Srinathan et al. showed that the lower bound on the communication complexity of any multiphase PSMT protocol is the same for static and mobile adversaries. The authors also designed an $O(t)$-phase protocol (a phase is a send from S to R, from R to S, or both) satisfying this bound, where $t$ is the maximum number of nodes corrupted by the Byzantine adversary. In this work, we design a three-phase bit-optimal PSMT protocol using a novel technique, whose communication complexity matches the lower bound proved in CRYPTO 2004, thus significantly reducing the number of phases from $O(t)$ to three.
Further, using our novel technique, we design a three-phase bit-optimal PRMT protocol which achieves reliability with constant factor overhead against a mobile adversary. These are the first constant-phase optimal PRMT and PSMT protocols against a mobile Byzantine adversary.
We also characterize PSMT protocols in directed networks tolerating mobile adversary.
All existing PRMT and PSMT protocols abstract the paths between S and R as wires, neglecting the intermediate nodes in the paths. However, this causes significant overestimation in both the communication complexity and the round complexity of protocols (a round, as distinct from a phase, is a send from one node to its immediate neighbor in the network). Here, we consider the underlying paths
as a whole instead of abstracting them as wires and derive a tight bound on the number of rounds required to achieve reliable communication from S to R tolerating a mobile adversary with arbitrary roaming speed (by roaming speed we mean the speed with which the adversary changes the set of corrupted nodes). We show how our constant-phase PRMT and PSMT protocols can easily be adapted to design round-optimal and bit-optimal PRMT and PSMT protocols, provided the network is given as a collection of vertex-disjoint paths.
All Pairings Are in a Group
In this paper, we suggest, from an abstract angle, that all pairings lie in a group. It is possible that our observation can be applied to other aspects of pairing-based cryptosystems.
ID based generalized signcryption
Generalized signcryption is a new cryptographic primitive in which a signcryption scheme can work both as an encryption scheme and as a signature scheme. This paper presents an identity-based generalized signcryption scheme based on bilinear pairings and discusses its security with respect to message confidentiality, non-repudiation, and ciphertext authentication.
On the Security of Chien's Ultralightweight RFID Authentication Protocol
Recently, Chien proposed an ultralightweight RFID authentication protocol intended to prevent all possible attacks. However, we find two de-synchronization attacks that break the protocol.
Improving the Farnel, Threeballot, and Randell-Ryan Voting Schemes
A number of recent voting schemes provide the property of voter verifiability: voters can confirm that their votes are accurately counted in the tally. The Farnel-type voting schemes are based on the observation that to achieve voter verifiability it is not necessary for the voter to carry away a receipt corresponding to their own vote. The Farnel approach then is to provide voters, when they cast their vote, with copies of receipts of one or more randomly selected, previously cast votes. This idea has a number of attractive features: ballot secrecy is achieved up front and does not have to be provided by anonymising mixes, etc., during tabulation. In fact, plaintext receipts can be used, in contrast to the encrypted receipts of
many other voter-verifiable schemes. Furthermore, any fears that voters might have that their vote is not truly concealed in an encrypted receipt are mitigated. The Farnel mechanism also mitigates randomization-style attacks. In this paper we explore some enhancements to the original Farnel scheme and ways that the Farnel concept can be combined with some existing voter-verifiable schemes, namely Prêt-à-Voter, ThreeBallot, and Randell-Ryan.
Template Attacks on ECDSA
Template attacks have been considered exclusively in the context of implementations of symmetric cryptographic algorithms on 8-bit
devices. Within these scenarios, they have proven to be the most
powerful attacks. This is not surprising because they assume the
most powerful adversaries. In this article we investigate how
template attacks can be applied to implementations of an asymmetric
cryptographic algorithm on a 32-bit platform. The asymmetric
cryptosystem under scrutiny is the elliptic curve digital signature
algorithm (ECDSA). ECDSA is particularly suitable for 32-bit
platforms. In this article we show that even SPA resistant
implementations of ECDSA on a typical 32-bit platform succumb to
template-based SPA attacks. The only way to secure such
implementations against template-based SPA attacks is to make them
resistant against DPA attacks.
Pairing-Based Onion Routing with Improved Forward Secrecy
This paper presents new protocols for onion routing anonymity networks. We define a provably secure privacy-preserving key agreement scheme in an identity-based infrastructure setting, and use it to build new onion routing circuit constructions. These constructions, based on a user's selection, offer immediate or eventual forward secrecy at each node in a circuit and require significantly less computation and communication than the telescoping mechanism used by Tor. Further, the use of the identity-based infrastructure also leads to a reduction in the required amount of authenticated directory information. Therefore, our constructions provide practical ways to allow onion routing anonymity networks to scale gracefully.
Homomorphic Encryption with CCA Security
We address the problem of constructing public-key encryption schemes that meaningfully combine useful {\em computability features} with {\em non-malleability}. In particular, we investigate schemes in which anyone can change an encryption of an unknown message $m$ into an encryption of $T(m)$ (as a {\em feature}), for a specific set of allowed functions $T$, but the scheme is ``non-malleable'' with respect to all other operations. We formulate precise definitions that capture these intuitive requirements and also show relationships among our new definitions and other more standard ones (IND-CCA, gCCA, and RCCA). We further justify our definitions by showing their equivalence to a natural formulation of security in the Universally Composable framework. We also consider extending the definitions to features which combine {\em multiple} ciphertexts, and show that a natural definition is unattainable for a useful class of features. Finally, we describe a new family of encryption schemes that satisfy our definitions for a wide variety of allowed transformations $T$, and which are secure under the standard Decisional Diffie-Hellman (DDH) assumption.
A Short Proof of the PRP/PRF Switching Lemma
In Eurocrypt 2006, Bellare and Rogaway \cite{BeRo06} gave a proof of the PRP/PRF switching lemma using their game-based proof technique. In the appendix of the same paper, they also gave a proof without games. In this paper, we give another proof of the switching lemma which is simple, mathematically clear, and easy to understand. Our proof is based on \textit{the strong interpolation theorem}.
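The switching lemma bounds the PRP/PRF distinguishing advantage by $q(q-1)/2^{n+1}$, where the separating event is an output collision (a random function can repeat, a permutation cannot). A small Monte-Carlo check, with toy parameters of our own choosing, compares an empirical collision rate against the bound:

```python
import random

def switching_bound(q: int, n: int) -> float:
    # PRP/PRF switching lemma: advantage <= q(q-1) / 2^(n+1)
    return q * (q - 1) / 2 ** (n + 1)

def collision_rate(q: int, n: int, trials: int = 2000, seed: int = 1) -> float:
    # Fraction of trials in which q uniform n-bit outputs (modeling a
    # random function) contain a repeat -- the event a distinguisher
    # against a permutation can exploit.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        seen = set()
        for _ in range(q):
            y = rng.randrange(1 << n)
            if y in seen:
                hits += 1
                break
            seen.add(y)
    return hits / trials
```

For example, with $q = 20$ queries on $n = 8$ bits the bound is $380/512 \approx 0.74$, comfortably above the true collision probability of roughly 0.53.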
Nonlinear Piece In Hand Matrix Method for Enhancing Security of Multivariate Public Key Cryptosystems
It is widely believed to take exponential time to find a solution of a system of random multivariate polynomials because of the NP-completeness of such a task. On the other hand, in most multivariate public key cryptosystems proposed so far, the computational complexity of cryptanalysis is apt to be polynomial time due to the trapdoor structure. In this paper, we develop the concept of the piece in hand matrix (PH matrix, for short), which aims to bring the computational complexity of cryptanalysis of multivariate public key cryptosystems close to exponential time by adding random polynomial terms to the original cryptosystems. This is a general concept applicable to any reasonable type of multivariate public key cryptosystem for the purpose of enhancing its security. There are two types of PH matrix: a linear matrix, whose elements are constants, and a nonlinear matrix, whose elements are polynomial functions of the plaintext or random numbers. In the present paper, we focus on the nonlinear PH matrix method and develop its framework. The nonlinear PH matrix method is obtained by generalizing the linear PH matrix method, and the nonlinearity may bring additional randomization to the original linear method. Thus, the nonlinear PH matrix method may enhance the security of the original multivariate public key cryptosystem more than the linear PH matrix method does. We show, in an experimental manner, that this actually holds for the enhancement of the security of the Matsumoto-Imai cryptosystem and RSE(2)PKC against the Gröbner basis attack.
Results from a Search for the Best Linear Approximation of a Block Cipher
In this paper, we investigate the application of an algorithm to find the best linear approximation of a basic Substitution-Permutation Network block cipher. The results imply that, while it is well known that the S-box used for the Advanced Encryption Standard has good nonlinear properties, it is straightforward to randomly select other S-boxes which are able to provide a similar level of security, as indicated by the exact bias of the best linear approximation found by the algorithm, rather than a simple upper bound on the maximum bias.
On the Strength of the Concatenated Hash Combiner when All the Hash Functions are Weak
At Crypto 2004, Joux showed a novel attack against the concatenated hash combiner instantiated with Merkle-Damgård iterated hash functions. His method of producing multicollisions in the Merkle-Damgård design was the first in a recent line of generic attacks against the Merkle-Damgård construction. In the same paper, Joux raised an open question concerning the strength of the concatenated hash combiner and asked whether his attack can be improved when the attacker can efficiently find collisions in both underlying compression functions. We solve this open problem by showing that even in the powerful adversarial scenario first introduced by Liskov (SAC 2006), in which the underlying compression functions can be fully inverted (which implies that collisions can be easily generated), collisions in the concatenated hash cannot be created using fewer than $2^{n/2}$ queries. We then expand this result to include the double pipe hash construction of Lucks from Asiacrypt 2005. One of the intermediate results is of interest on its own and provides the first streamable construction provably indifferentiable from a random oracle in this model.
On the Chikazawa-Inoue ID based key system
In this paper, we show that the Chikazawa-Inoue ID-based key system is insecure under collusion, where by the Chikazawa-Inoue ID-based key system we mean the key parameters established during the initiation phase. We describe an algorithm that factorizes a public key of the Trust Center. Since our attack is based only on the key system and is independent of any
specific key sharing protocol, it applies to all variant protocols of the Chikazawa-Inoue ID-based key sharing protocol. From this analysis, we conclude that the Chikazawa-Inoue ID-based key system can no longer be used for any application protocols.
Compact Proofs of Retrievability
In a proof-of-retrievability system, a data storage center must
prove to a verifier that he is actually storing all of a client's
data. The central challenge is to build systems that are both
efficient and provably secure -- that is, it should be possible
to extract the client's data from any prover that passes a
verification check. All previous provably secure solutions
require that a prover send O(l) authenticator values (i.e., MACs
or signatures) to verify a file, for a total of O(l^2) bits of
communication, where l is the security parameter. The extra cost
over the ideal O(l) communication can be prohibitive in systems
where a verifier needs to check many files.
We create the first compact and provably secure proof of
retrievability systems. Our solutions allow for compact proofs
with just one authenticator value -- in practice this can lead to
proofs with as little as 40 bytes of communication. We present
two solutions with similar structure. The first one is privately
verifiable and builds elegantly on pseudorandom functions (PRFs);
the second allows for publicly verifiable proofs and is built
from the signature scheme of Boneh, Lynn, and Shacham in bilinear
groups. Both solutions rely on homomorphic properties to
aggregate a proof into one small authenticator value.
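The privately verifiable approach described above (PRF-based authenticators that aggregate homomorphically into one value) can be sketched as follows. This is a toy illustration under our own assumptions: the prime field, HMAC as the PRF, and all names are illustrative, not the paper's exact scheme.

```python
import hashlib
import hmac

P = (1 << 61) - 1  # toy prime field for block values and tags (illustrative)

def prf(key: bytes, i: int) -> int:
    # PRF on block indices, instantiated here with HMAC-SHA256
    digest = hmac.new(key, i.to_bytes(8, "big"), hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % P

def tag_blocks(blocks, key, alpha):
    # per-block authenticator: sigma_i = PRF_key(i) + alpha * m_i  (mod P)
    return [(prf(key, i) + alpha * m) % P for i, m in enumerate(blocks)]

def prove(blocks, sigmas, challenge):
    # challenge: list of (index, coefficient) pairs; the prover
    # aggregates the challenged blocks and tags into two field elements
    mu = sum(c * blocks[i] for i, c in challenge) % P
    sigma = sum(c * sigmas[i] for i, c in challenge) % P
    return mu, sigma

def verify(challenge, mu, sigma, key, alpha):
    # by linearity, sigma must equal sum(c_i * PRF(i)) + alpha * mu
    expect = (sum(c * prf(key, i) for i, c in challenge) + alpha * mu) % P
    return sigma == expect
```

The proof is two field elements no matter how many blocks are challenged, which is the "compact" property the abstract emphasizes.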
The SIP Security Enhanced by Using Pairing-assisted Massey-Omura Signcryption
Voice over IP (VoIP) has been adopted progressively not only by a great number of companies but also by a significant number of individual users, in Brazil and in other countries. However, this growing adoption of VoIP brings with it security risks and threats, mainly to the privacy and integrity of communication. These risks and threats already exist in the signaling process leading to call establishment. This signaling is performed by specific protocols, such as H.323 and SIP (Session Initiation Protocol). Among those risks and threats, we emphasize the man-in-the-middle attack because of its high degree of danger. After a bibliographical review of the current SIP security mechanisms and an analysis of proposals to improve them, we verified that SIP's vulnerability to the man-in-the-middle attack has not been completely solved. We therefore propose a new security mechanism for SIP in this paper, aiming both to provide an alternative security mechanism and to close the man-in-the-middle vulnerability. In our proposal we use a protocol for secure information exchange, the Massey-Omura protocol, which, when combined with Pairing-Based Cryptography (PBC), provides a better security level for SIP in all its aspects.
Blockcipher Based Hashing Revisited
We revisit the rate-1 blockcipher based hash
functions as first studied by Preneel, Govaerts
and Vandewalle (Crypto'93) and later extensively analysed by Black,
Rogaway and Shrimpton (Crypto'02). We analyze a further generalization where any pre- and postprocessing is considered. By introducing a new
tweak to earlier proof methods, we obtain a simpler proof
that is both more general and tighter than existing
results. As an added benefit, this also leads to a clearer understanding
of the current classification of rate-1 blockcipher-based schemes as introduced by Preneel et al. and refined by Black et al.
Generators of Jacobians of Genus Two Curves
We prove that in most cases relevant to cryptography, the Frobenius endomorphism on the Jacobian of a genus two curve is represented by a diagonal matrix with respect to an appropriate basis of the subgroup of l-torsion points. From this fact we get an explicit description of the Weil-pairing on the subgroup of l-torsion points. Finally, the
explicit description of the Weil-pairing provides us with an efficient, probabilistic algorithm to find generators of the subgroup of l-torsion points on the Jacobian of a genus two curve.
HENKOS Cryptanalysis-Related keys attack
This paper describes a series of vulnerabilities found in the HENKOS algorithm (http://eprint.iacr.org/080), described below, with regard to related-key attacks. We mount this type of attack for a particular relation between keys and show that it is a practical attack, with results obtainable in real time.
Multiparty Computation Goes Live
In this note, we report on the first large-scale and practical application of multiparty computation, which took place in January 2008. We also report on the novel cryptographic protocols that were used.
The Twin Diffie-Hellman Problem and Applications
We propose a new computational problem called the \emph{twin Diffie-Hellman problem}. This problem is closely related to the usual (computational) Diffie-Hellman problem and can be used in many of the same cryptographic constructions that are based on the Diffie-Hellman problem. Moreover, the twin Diffie-Hellman problem is at least as hard as the ordinary Diffie-Hellman problem. However, we are able to show that the twin Diffie-Hellman problem remains hard, even in the presence of a decision oracle that recognizes solutions to the problem --- this is a feature not enjoyed by the Diffie-Hellman problem in general. Specifically, we show how to build a certain ``trapdoor test'' that allows us to effectively answer decision oracle queries for the twin Diffie-Hellman problem without knowing any of the corresponding discrete logarithms. Our new techniques have many applications. As one such application, we present a new variant of ElGamal encryption with very short ciphertexts, and with a very simple and tight security proof, in the random oracle model, under the assumption that the ordinary Diffie-Hellman problem is hard. We present several other applications as well, including: a new variant of Diffie and Hellman's non-interactive key exchange protocol; a new variant of Cramer-Shoup encryption, with a very simple proof in the standard model; a new variant of Boneh-Franklin identity-based encryption, with very short ciphertexts; a more robust version of a password-authenticated key exchange protocol of Abdalla and Pointcheval.
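The "trapdoor test" mentioned above can be sketched concretely: choose the second key as $X_2 = g^s X_1^{-r}$ for secret random $r, s$; then $Z_1^r Z_2 = Y^s$ holds exactly when $Z_1, Z_2$ are the two Diffie-Hellman values for $Y$, except with probability $1/q$. The tiny group below and all variable names are our own illustrative choices, not the paper's parameters.

```python
import secrets

# Toy group: the order-11 subgroup of Z_23^* generated by g = 2
# (far too small for security; chosen only to make the algebra visible).
p, q, g = 23, 11, 2

# Setup: X1 is an ordinary public key; X2 is crafted from X1 so that the
# tester can answer twin-DH decision queries without knowing either dlog.
x1 = secrets.randbelow(q - 1) + 1
X1 = pow(g, x1, p)
r = secrets.randbelow(q - 1) + 1   # secret test exponents
s = secrets.randbelow(q)
X2 = pow(g, s, p) * pow(X1, q - r, p) % p   # X2 = g^s * X1^(-r)

def trapdoor_test(Y: int, Z1: int, Z2: int) -> bool:
    # Accepts iff Z1 = dh(X1, Y) and Z2 = dh(X2, Y): writing x2 for
    # dlog(X2) = s - r*x1, a correct pair gives
    # Z1^r * Z2 = Y^(r*x1 + x2) = Y^s; a wrong pair passes only with
    # probability 1/q over the hidden choice of r.
    return pow(Z1, r, p) * Z2 % p == pow(Y, s, p)
```

A decision oracle built this way is what lets the twin Diffie-Hellman problem stay hard even when such queries are answered.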
High Performance Architecture for Elliptic Curve Scalar Multiplication over GF(2^m)
We propose a new architecture for performing Elliptic Curve Scalar
Multiplication (ECSM) on elliptic curves over GF(2^m). This architecture maximizes the parallelism that the projective version of the Montgomery ECSM algorithm can achieve. It completes one ECSM operation in about $2(m-1)( \lceil m/D \rceil +4)+m$ cycles, and is at least three times the speed of the best known result currently available. When implemented on a Virtex-4 FPGA, it completes one ECSM operation over GF(2^163) in 12.5 µs at the maximum achievable frequency of 222 MHz. Two other implementation variants for lower resource consumption are also proposed. Our first variant reduces the resource consumption by almost 50% while still maintaining the utilization efficiency, which is measured by a performance-to-resource-consumption ratio. Our second variant achieves the best utilization efficiency, and in our actual implementation on an elliptic curve group over GF(2^163) it gives a more than 30% reduction in resource consumption while maintaining almost the same speed of computation as our original design. To achieve this high performance, we also propose a modified finite field inversion algorithm which takes only m cycles to invert an element over GF(2^m), rather than 2m cycles as the traditional Extended Euclidean algorithm does, and this new design yields much better utilization of the cycle time.
Infringing and Improving Password Security of a Three-Party Key Exchange Protocol
Key exchange protocols allow two or more parties communicating over
a public network to establish a common secret key called a
session key. Due to their significance in building a secure
communication channel, a number of key exchange protocols have been
suggested over the years for a variety of settings. Among these is
the so-called S-3PAKE protocol proposed by Lu and Cao for
password-authenticated key exchange in the three-party setting. In
the current work, we are concerned with the password security of the
S-3PAKE protocol. We first show that S-3PAKE is vulnerable to an
off-line dictionary attack in which an attacker exhaustively
enumerates all possible passwords in an attempt to determine the
correct one. We then figure out how to eliminate the security
vulnerability of S-3PAKE.
Remarks on the NFS complexity
In this contribution we investigate practical issues with
implementing the NFS algorithm to solve the DLP arising in
XTR-based cryptosystems. We can transform the original XTR-DLP to
a DLP instance in $\mathbb{F}_{p^6},$ where $p$ is a medium-sized
prime. Unfortunately, for practical ranges of $p,$ the optimal
degree of the NFS polynomial is less than the required degree 6.
This leads to the problem of finding enough smooth equations
during the sieving stage of the NFS algorithm. We discuss several
techniques that can increase the NFS output, i.e. the number of
equations produced during the sieve, without increasing the
smoothness bound.
Efficient Sequential Aggregate Signed Data
We generalize the concept of sequential aggregate signatures (SAS), proposed by Lysyanskaya, Micali, Reyzin, and Shacham at Eurocrypt 2004, to a new primitive called sequential aggregate signed data (SASD) that tries to minimize the total amount of transmitted data, rather than just signature length. We present SAS and SASD schemes that offer numerous advantages over the scheme of Lysyanskaya et al. Most importantly, our schemes can be instantiated with uncertified claw-free permutations, thereby allowing implementations based on low-exponent RSA and factoring, and drastically reducing signing and verification costs. Our schemes support aggregation of signatures under keys of different lengths, and the SASD scheme even has as little as 160 bits of bandwidth overhead. Finally, we present a multi-signed data scheme that, when compared to the state-of-the-art multi-signature schemes, is the first scheme with non-interactive signature generation not based on pairings. All of our constructions are proved secure in the random oracle model based on families of claw-free permutations.
Computing Hilbert Class Polynomials
We present and analyze two algorithms for computing the Hilbert class polynomial H_D(X). The first is a p-adic lifting algorithm for inert primes p in the order of discriminant D < 0. The second is an improved Chinese remainder algorithm which uses the class group action on CM-curves over finite fields. Our run time analysis gives tighter bounds for the complexity of all known algorithms for computing H_D(X), and we show that all methods have comparable run times.
Abelian varieties with prescribed embedding degree
We present an algorithm that, on input of a CM-field $K$, an integer $k \ge 1$, and a prime $r \equiv 1 \bmod k$, constructs a $q$-Weil number $\pi \in \mathcal{O}_K$ corresponding to
an ordinary, simple abelian variety $A$ over the field $\mathbb{F}_q$ of $q$ elements that has an $\mathbb{F}_q$-rational point of order $r$ and embedding degree $k$ with respect to $r$. We then discuss how CM-methods over $K$ can be used to explicitly construct $A$.
Fast Algorithms for Arithmetic on Elliptic Curves Over Prime Fields
We present here a thorough discussion of the problem of fast arithmetic on elliptic curves over prime order finite fields. Since elliptic curves were independently proposed as a setting for cryptography by Koblitz [53] and Miller [67], the group of points on an elliptic curve has been widely used for discrete logarithm based cryptosystems. In this thesis, we survey, analyse and compare the fastest known serial and parallel algorithms for elliptic curve scalar multiplication, the primary operation in discrete logarithm based cryptosystems. We also introduce some new algorithms for the basic group operation and several new parallel scalar multiplication algorithms. We present a mathematical basis for comparing the various algorithms and make recommendations for the fastest algorithms to use in different circumstances.
Buying random votes is as hard as buying no-votes
In voting systems where a mark in a fixed position may mean a vote for Alice on one ballot and a vote for Bob on another, an attacker may coerce voters to put their mark at a certain position, effectively enforcing a random vote. This attack is meaningful if the voting system allows voters to take receipts with them and/or posts them to a bulletin board. The coercer may also ask for a blank receipt. We analyze this kind of attack and prove that it requires the same effort as a comparable attack against any voting system, even one without receipts.
Physical Cryptanalysis of KeeLoq Code Hopping Applications
KeeLoq remote keyless entry systems are widely used for access control purposes such as garage door openers and car anti-theft systems. We present the first successful differential power analysis attacks on numerous commercially available products employing KeeLoq code hopping. Our new techniques combine side-channel cryptanalysis with specific properties of the KeeLoq algorithm. They allow for efficiently revealing both the secret key of a remote transmitter and the manufacturer key stored in a receiver. As a result, a remote control can be cloned from only ten power traces, allowing for a practical key recovery in a few minutes. Once the manufacturer key is known, we demonstrate how to disclose the secret key of a remote control and replicate it from a distance, just by eavesdropping on at most two messages. This key cloning without physical access to the device has serious real-world security implications. Finally, we mount a denial-of-service attack on a KeeLoq access control system. All the proposed attacks have been verified on several commercial KeeLoq products.
Software Implementation of Genus-2 Hyperelliptic Curve Cryptosystems Over Prime Fields
This paper describes the system parameters and software implementation of a HECDSA cryptosystem based on genus-2 hyperelliptic curves over prime fields. We show how to reduce the computational complexity for special cases and compare the given cryptosystem with the well-known ECDSA cryptosystem based on elliptic curves.
Fast explicit formulae for genus 2 hyperelliptic curves using projective coordinates (Updated)
This contribution proposes a modification of the divisor group operation in the Jacobian of a hyperelliptic curve over fields of even and odd characteristic, in projective coordinates. The hyperelliptic curve cryptosystem (HECC) enhances cryptographic security and efficiency in, e.g., information and telecommunications systems.
Last updated: 2008-02-07
Cryptanalysis and Improvement of a Recently Proposed Remote User Authentication Scheme Using Smart Cards
Recently, Debasis et al. [1] proposed an improvement to prevent offline attacks in Fang et al.'s [2] scheme, where [2] was itself an improvement of Das et al.'s [3] scheme. However, the improved scheme is insecure against side-channel attacks. In this paper we propose an enhancement of [1]. The enhanced scheme is secure against substitution, impersonation, spoofing, replay, side-channel, and password guessing attacks.
Variants of the Distinguished Point Method for Cryptanalytic Time Memory Trade-offs (Full version)
The time memory trade-off (TMTO) algorithm, first introduced by Hellman, is a method for quickly inverting a one-way function, using pre-computed tables. The distinguished point method (DP) is a technique that reduces the number of table lookups performed by Hellman's algorithm.
In this paper we propose a new variant of the DP technique, named variable DP (VDP), whose properties are very different from those of DP. It affects the amount of memory required to store the pre-computed tables. We also show how to combine variable chain-length techniques like DP and VDP with a more recent trade-off algorithm called the rainbow table method.
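The DP idea referenced above (terminate each precomputation chain at a "distinguished point" so the online phase only performs table lookups when it reaches a DP) can be sketched as follows. The toy one-way function, the 20-bit search space, and the DP criterion are our own illustrative parameters, not those of any scheme in the paper.

```python
import hashlib

N_BITS = 20                 # toy search space of 2^20 points
DP_MASK = (1 << 6) - 1      # "distinguished" = low 6 bits are zero

def f(x: int) -> int:
    # toy one-way step: hash and truncate back into the search space
    h = hashlib.sha256(x.to_bytes(4, "big")).digest()
    return int.from_bytes(h[:4], "big") % (1 << N_BITS)

def is_dp(x: int) -> bool:
    return x & DP_MASK == 0

def build_table(starts, max_len=1 << 11):
    # offline phase: iterate each chain until a DP; store only (DP -> start),
    # so chain lengths vary and no per-step storage is needed
    table = {}
    for start in starts:
        x = start
        for _ in range(max_len):
            x = f(x)
            if is_dp(x):
                table[x] = start
                break
    return table

def invert(y, table, max_len=1 << 11):
    # online phase: walk from y to the next DP; only then consult the
    # table, and on a hit replay the stored chain to find a preimage
    x = y
    for _ in range(max_len):
        if is_dp(x):
            if x not in table:
                return None          # this chain is not covered
            z = table[x]
            for _ in range(max_len):
                if f(z) == y:
                    return z         # a preimage of y under f
                z = f(z)
            return None
        x = f(x)
    return None
```

Because lookups happen only at DPs, the number of table accesses drops by roughly the DP density (here a factor of 64) compared with looking up after every step, which is the saving the DP method is designed for.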