All papers in 2014 (Page 8 of 1029 results)
Explicit Optimal Binary Pebbling for One-Way Hash Chain Reversal
We present explicit optimal binary pebbling algorithms for reversing one-way hash chains. For a hash chain of length $2^k$, the number of hashes performed in each output round does not exceed $\lceil k/2 \rceil$, whereas the number of hash values stored (pebbles) throughout is at most $k$. This is optimal for binary pebbling algorithms characterized by the property that the midpoint of the hash chain is computed just once and stored until it is output, and that this property applies recursively to both halves of the hash chain.
We introduce a framework for rigorous comparison of explicit binary pebbling algorithms, including simple speed-1 binary pebbling, Jakobsson's speed-2 binary pebbling, and our optimal binary pebbling algorithm. Explicit schedules describe for each pebble exactly how many hashes need to be performed in each round. The optimal schedule turns out to be essentially unique and exhibits a nice recursive structure, which allows for fully optimized implementations that can readily be deployed. In particular, we develop the first in-place implementations with minimal storage overhead (essentially, storing only hash values), and fast implementations with minimal computational overhead. Moreover, we show that our approach is not limited to hash chains of length $n=2^k$, but accommodates hash chains of arbitrary length $n\geq1$, without incurring any overhead. Finally, we show how to run a cascade of pebbling algorithms along with a bootstrapping technique, facilitating sequential reversal of an unlimited number of hash chains growing in length up to a given bound.
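To make the recursion concrete, here is a minimal Python sketch of hash chain reversal via the recursive midpoint idea described above. It demonstrates only the logarithmic-storage principle; it is not the paper's optimal schedule, and names such as `reverse_with_midpoints` are our own.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, n: int) -> list:
    """Return [v_0, ..., v_n] with v_i = h^i(seed); v_n is published first."""
    chain = [seed]
    for _ in range(n):
        chain.append(h(chain[-1]))
    return chain

def reverse_with_midpoints(seed: bytes, n: int) -> list:
    """Emit v_n, v_{n-1}, ..., v_0 while keeping only O(log n) hash values
    (one pebble per active interval on the recursion stack)."""
    out = []
    def emit(value: bytes, lo: int, hi: int):
        # invariant: 'value' equals v_lo; emit positions hi down to lo
        if lo == hi:
            out.append(value)
            return
        mid = (lo + hi + 1) // 2
        v = value
        for _ in range(mid - lo):   # hash forward to the midpoint pebble
            v = h(v)
        emit(v, mid, hi)            # reverse the upper half first
        emit(value, lo, mid - 1)    # then the lower half
    emit(seed, 0, n)
    return out

assert reverse_with_midpoints(b"seed", 16) == list(reversed(make_chain(b"seed", 16)))
```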
Affine-evasive Sets Modulo a Prime
In this work, we describe a simple and efficient construction of a large subset S of F_p, where p is a prime, such that the set A(S) for any non-identity affine map A over F_p has small intersection with S.
Such sets, called affine-evasive sets, were defined and constructed in~\cite{ADL14} as the central step in the construction of non-malleable codes against affine tampering over F_p, for a prime p. This was then used to obtain efficient non-malleable codes against split-state tampering.
Our result resolves one of the two main open questions in~\cite{ADL14}. It improves the rate of non-malleable codes against affine tampering over F_p from log log p to a constant, and consequently the rate for non-malleable codes against split-state tampering for n-bit messages is improved from n^6 log^7 n to n^6.
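As a toy companion to the definition (not the construction of this paper or of~\cite{ADL14}), the worst-case intersection $|A(S) \cap S|$ can be measured by brute force for a small prime. The candidate set below, the quadratic residues, is our own choice, and the harness reveals why it is a poor one: it is fixed by every purely multiplicative map $x \mapsto ax$ with $a$ a residue.

```python
p = 101
S = {pow(x, 2, p) for x in range(1, p)}    # candidate set: quadratic residues mod p

worst = 0
for a in range(1, p):                      # invertible affine map A(x) = a*x + b
    for b in range(p):
        if (a, b) == (1, 0):               # skip the identity map
            continue
        inter = sum(1 for s in S if (a * s + b) % p in S)
        worst = max(worst, inter)

print(f"|S| = {len(S)}, worst-case |A(S) ∩ S| = {worst}")   # worst = |S| here
```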
An Optimal Strong Password Authentication Protocol with USB Sticks
Authentication is the process of determining whether a user is authorized. The identity consists mainly of a username and password used to verify the two entities. The authentication information is stored in encrypted form on a device that is properly registered with the server. While the authentication process is performed between user and server, an intruder can eavesdrop on the communication channel and log into the system as an authorized user. To overcome this, the optimal strong password authentication (OSPA) protocol applies multiple hash operations at authentication time. The server chooses the hash function only when the user requests to log in, so the intruder cannot learn the information transferred during the authentication process.
OSPA improves the authentication process by providing mutual authentication between user and server, and the authentication information remains unknown to the intruder: the multiple hash functions keep the authentication information secure. OSPA protects the information of both user and server and guards against guessing attacks; guessing-attack prevention is performed by the server using the multiple hash functions and a USB stick.
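The core idea above, the server varying which iterated hash it expects at each login, can be sketched in a few lines. This is a hedged, Lamport-style illustration of that idea only, not the actual OSPA message flow; `MAX_ITER` and the challenge policy are our own simplifications.

```python
import hashlib, os, secrets

def h_iter(data: bytes, i: int) -> bytes:
    """i-fold iterated SHA-256."""
    for _ in range(i):
        data = hashlib.sha256(data).digest()
    return data

# Registration: the server stores only a high iterate of the salted password.
MAX_ITER = 1000
salt = os.urandom(16)
password = b"correct horse battery staple"
server_record = h_iter(salt + password, MAX_ITER)

# Login: the server challenges with an iteration count i; the client responds
# with the i-th iterate, which the server hashes the remaining MAX_ITER - i
# times and compares. A passive eavesdropper cannot invert the response.
# (A real scheme must only ever *decrease* i across logins, since a response
# for count i lets anyone answer any challenge j > i.)
i = secrets.randbelow(MAX_ITER - 1) + 1
response = h_iter(salt + password, i)            # computed by the client
assert h_iter(response, MAX_ITER - i) == server_record
```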
Last updated: 2019-04-09
FeW: A Lightweight Block Cipher
In this paper, we propose a new lightweight block cipher called FeW, which encrypts a 64-bit plaintext using an 80- or 128-bit key and produces a 64-bit ciphertext. FeW is a software-oriented design aimed at high efficiency in software-based environments. We use a mix of Feistel and generalised Feistel structures (referred to hereinafter as the Feistel-M structure) to strengthen the design against basic cryptanalytic attacks like differential, linear, impossible differential and zero-correlation attacks. Security analysis of this scheme demonstrates its strength against present-day cryptanalytic attacks.
A practical forgery and state recovery attack on the authenticated cipher PANDA-s
PANDA is a family of authenticated ciphers submitted to CAESAR, consisting of two ciphers: PANDA-s and PANDA-b. In this work we present a state recovery attack against PANDA-s with time complexity about $2^{41}$ in the known-plaintext-attack model, requiring 137 pairs of known plaintext/ciphertext and about 2 GB of memory. Our attack is practical on a small workstation. Building on this attack, we further derive a forgery attack against PANDA-s, which can forge a legal ciphertext $(C,T)$ for an arbitrary plaintext $P$. The results show that PANDA-s is insecure.
From Single-Bit to Multi-Bit Public-Key Encryption via Non-Malleable Codes
One approach towards basing public-key encryption (PKE) schemes on weak and credible assumptions is to build ``stronger'' or more general schemes generically from ``weaker'' or more restricted ones. One particular line of work in this context was initiated by Myers and shelat (FOCS '09) and continued by Hohenberger, Lewko, and Waters (Eurocrypt '12), who provide constructions of multi-bit CCA-secure PKE from single-bit CCA-secure PKE.
It is well-known that encrypting each bit of a plaintext string independently is not CCA-secure---the resulting scheme is *malleable*. We therefore investigate whether this malleability can be dealt with using the conceptually simple approach of applying a suitable non-malleable code (Dziembowski et al., ICS '10) to the plaintext and subsequently encrypting the resulting codeword bit-by-bit. We find that an attacker's ability to ask multiple decryption queries requires that the underlying code be *continuously* non-malleable (Faust et al., TCC '14). Since, as we show, this flavor of non-malleability can only be achieved if the code is allowed to ``self-destruct,'' the resulting scheme inherits this property and therefore only achieves a weaker variant of CCA security.
We formalize this new notion of so-called *self-destruct CCA security (SD-CCA)* as CCA security with the restriction that the decryption oracle stops working once the attacker submits an invalid ciphertext.
We first show that the above approach based on non-malleable codes yields a solution to the problem of domain extension for SD-CCA-secure PKE, provided that the underlying code is continuously non-malleable against a *reduced* form of bit-wise tampering. Then, we prove that the code of Dziembowski et al.\ is actually already continuously non-malleable against (even *full*) bit-wise tampering; this constitutes the first *information-theoretically* secure continuously non-malleable code, a technical contribution that we believe is of independent interest. Compared to the previous approaches to PKE domain extension, our scheme is more efficient and intuitive, at the cost of not achieving full CCA security. Our result is also one of the first applications of non-malleable codes in a context other than memory tampering.
Some Remarks on Honeyword Based Password-Cracking Detection
Recently, Juels and Rivest proposed honeywords (decoy passwords) to detect attacks against hashed password databases. For each user account, the legitimate password is stored alongside several honeywords in order to sense impersonation. If honeywords are selected properly, an adversary who steals a file of hashed passwords cannot tell, for any account, whether a value is the real password or a honeyword. Moreover, logging in with a honeyword triggers an alarm notifying the administrator of a password-file breach. At the expense of increasing the storage requirement by 20 times, the authors introduce a simple and effective solution for detecting password-file disclosure events. In this study, we scrutinize the honeyword system and present some remarks to highlight possible weak points. We also suggest an alternative approach that selects honeywords from existing user passwords in the system in order to provide realistic honeywords (a perfectly flat honeyword generation method) and to reduce the storage cost of the honeyword scheme.
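The storage and alarm mechanics can be sketched in a few lines of Python. This is a generic toy in the spirit of Juels and Rivest's proposal (decoy generation and all names are hypothetical), not the flat generation method suggested above; a real system would also salt and slow-hash every sweetword.

```python
import hashlib, secrets

def H(pw: str) -> str:
    return hashlib.sha256(pw.encode()).hexdigest()

k = 20                                    # 20x storage, as noted in the abstract
real_password = "hunter2"
decoys = [f"decoy-{secrets.token_hex(4)}" for _ in range(k - 1)]
sweetwords = decoys + [real_password]
secrets.SystemRandom().shuffle(sweetwords)

password_file = [H(w) for w in sweetwords]             # what a thief steals
honeychecker_index = sweetwords.index(real_password)   # stored on a separate host

def login(attempt: str) -> str:
    hashed = H(attempt)
    if hashed not in password_file:
        return "reject"
    if password_file.index(hashed) == honeychecker_index:
        return "accept"
    return "ALARM: honeyword entered, password file likely breached"

assert login("hunter2") == "accept"
assert login(decoys[0]).startswith("ALARM")
```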
Coding Theoretic Construction of Quantum Ramp Secret Sharing
We show a construction of a quantum ramp secret sharing scheme from a nested pair of linear codes. Necessary and sufficient conditions for qualified sets and forbidden sets are given in terms of combinatorial properties of nested linear codes. An algebraic geometric construction for quantum secret sharing is also given.
Efficient Quantum-Immune Keyless Signatures with Identity
We show how to extend hash-tree based data signatures to server-assisted personal digital signature schemes. The new signature scheme does not use trapdoor functions; it is based solely on cryptographic hash functions and is thereby, considering the current state of knowledge, resistant to quantum computational attacks. In the new scheme, we combine hash-tree data signature (time-stamping) solutions with hash sequence authentication mechanisms, and we show how to implement such a scheme in practice.
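The hash-tree ingredient can be illustrated with a standard Merkle tree and authentication path; this is the textbook mechanism such schemes build on, not the paper's full server-assisted construction.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves: list) -> bytes:
    level = [h(x) for x in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def auth_path(leaves: list, idx: int) -> list:
    """Sibling hashes needed to recompute the root from leaf idx."""
    level = [h(x) for x in leaves]
    path = []
    while len(level) > 1:
        path.append(level[idx ^ 1])        # sibling at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return path

def verify(leaf: bytes, idx: int, path: list, root: bytes) -> bool:
    node = h(leaf)
    for sib in path:
        node = h(node + sib) if idx % 2 == 0 else h(sib + node)
        idx //= 2
    return node == root

docs = [f"document {i}".encode() for i in range(8)]    # power-of-two leaf count
root = merkle_root(docs)
assert verify(docs[3], 3, auth_path(docs, 3), root)
```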
Improved Differential Cryptanalysis of Round-Reduced Speck
Simon and Speck are families of lightweight block ciphers designed by the U.S. National Security Agency and published in 2013. Each of the families contains 10 variants, supporting a wide range of block and key sizes. Since the publication of Simon and Speck, several research papers analyzed their security using various cryptanalytic techniques. The best previously published attacks on all the 20 round-reduced ciphers are differential attacks, and are described in two papers (presented at FSE 2014) by Abed et al. and Biryukov et al.
In this paper, we focus on the software-optimized block cipher family Speck, and describe significantly improved attacks on all of its 10 variants. In particular, we increase the number of rounds which can be attacked by 1, 2, or 3, for 9 out of 10 round-reduced members of the family, while significantly improving the complexity of the previous best attack on the remaining round-reduced member. Our attacks use an untraditional key recovery technique for differential attacks, whose main ideas were published by Albrecht and Cid at FSE 2009 in the cryptanalysis of the block cipher PRESENT.
Despite these improvements, our attacks do not seem to threaten the security of any member of Speck.
Preimage attacks on Reduced-round Stribog
In August 2012, the Stribog hash function was selected as the new Russian cryptographic hash standard (GOST R 34.11-2012). Stribog employs twelve rounds of an AES-based compression function operating in Miyaguchi-Preneel mode. In this paper, we investigate the preimage resistance of the Stribog hash function. Specifically, we apply a meet-in-the-middle preimage attack on the compression function which allows us to obtain a 5-round pseudo preimage for a given compression function output with time complexity of $2^{448}$ and memory complexity of $2^{64}$. Additionally, we adopt a guess-and-determine approach to obtain a 6-round chunk separation that balances the available degrees of freedom and the guess size. The proposed chunk separation allows us to attack 6 out of 12 rounds with time and memory complexities of $2^{496}$ and $2^{112}$, respectively. Finally, employing a $2^t$ multicollision, we show that preimages of the 5- and 6-round reduced hash function can be generated with time complexities of $2^{481}$ and $2^{505}$, respectively. The two preimage attacks have equal memory complexity of $2^{256}$.
Index calculus in the trace zero variety
We discuss how to apply Gaudry’s index calculus algorithm for abelian varieties to solve the discrete logarithm problem in the trace zero variety of an elliptic curve. We treat in particular the practically relevant cases of field extensions of degree 3 or 5. Our theoretical analysis is compared to other algorithms present in the literature, and is complemented by results from a prototype implementation.
Analysis of NORX: Investigating Differential and Rotational Properties
This paper presents a thorough analysis of the AEAD scheme NORX, focussing on differential and rotational properties. We first introduce mathematical models that describe differential propagation with respect to the non-linear operation of NORX. Afterwards, we adapt a framework previously proposed for ARX designs allowing us to automatise the search for differentials and characteristics. We give upper bounds on the differential probability for a small number of steps of the NORX core permutation. For example, in a scenario where an attacker can only modify the nonce during initialisation, we show that characteristics have probabilities of less than $2^{-60}$ ($32$-bit) and $2^{-53}$ ($64$-bit) after only one round. Furthermore, we describe how we found the best characteristics for four rounds, which have probabilities of $2^{-584}$ ($32$-bit) and $2^{-836}$ ($64$-bit), respectively. Finally, we discuss some rotational properties of the core permutation which yield some first, rough bounds and can be used as a basis for future studies.
Explicit Non-Malleable Codes Resistant to Permutations
The notion of non-malleable codes was introduced as a relaxation of standard error-correction and error-detection. Informally, a code is non-malleable if the message contained in a modified codeword is either the original message, or a completely unrelated value.
In the information theoretic setting, although existence of such codes for various rich classes of tampering functions is known, explicit constructions exist only for highly structured families of tampering functions. Prior explicit constructions of non-malleable codes rely on the ``compartmentalized'' structure of the tampering function, i.e. the codeword is partitioned into {\em a priori fixed} blocks and each block can {\em only} be tampered with independently. The prominent examples of this model are the family of bit-wise independent tampering functions and the split-state model.
We consider an infinitely large natural class of non-compartmentalized tampering functions. In our model, the tampering function can permute the bits of the encoding and (optionally) perturb them. In the information theoretic setting, we provide an {\em explicit} and {\em efficient}, {\em rate-1} non-malleable code for {\em multi-bit messages}.
Lack of explicit constructions of non-malleable codes for non-compartmentalized tampering functions severely inhibits their utility in cryptographic protocols. As a motivation for our construction, we show an application of non-malleable codes to cryptographic protocols. In an idealized setting, we show how string commitments can be based on one-bit commitments, if non-malleable codes exist. Further, as an example of a non-trivial use of non-malleable codes in standard cryptographic protocols (not in an idealized model), we show that if explicit non-malleable codes are obtained for a slightly larger class of tampering functions than we currently handle, one can obtain a very simple non-malleable commitment scheme, under somewhat strong assumptions.
Statistical weaknesses in 20 RC4-like algorithms and (probably) the simplest algorithm free from these weaknesses - VMPC-R
We find statistical weaknesses in 20 RC4-like algorithms, including the original RC4, RC4A, PC-RC4 and others. This is achieved using a simple statistical test. We found only one algorithm able to pass the test: VMPC-R. This algorithm, being approximately three times more complex than RC4, is probably the simplest RC4-like cipher capable of producing pseudo-random output.
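For flavour, a simple statistical test of this kind (not necessarily the one used above) is easy to run against RC4 itself: the well-known Mantin-Shamir bias makes the second keystream byte equal to 0 about twice as often as uniform.

```python
import os

def rc4_keystream(key: bytes, n: int) -> list:
    S = list(range(256))
    j = 0
    for i in range(256):                       # key-scheduling algorithm (KSA)
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = []
    for _ in range(n):                         # output generation (PRGA)
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

# Mantin-Shamir bias: Pr[2nd byte = 0] is about 2/256 instead of 1/256.
trials = 100_000
count0 = sum(rc4_keystream(os.urandom(16), 2)[1] == 0 for _ in range(trials))
print(f"Pr[byte 2 = 0] ~ {count0 / trials:.5f} (uniform would be {1/256:.5f})")
```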
Improved Leakage Model Based on Genetic Algorithm
The classical leakage model usually exploits the power consumption of a single S-box, following a divide-and-conquer strategy. Taking the DES algorithm as an example, the attack on each S-box must search a key space of 2^6 by brute force, and recovering the 48-bit round key is limited by the correctness of the result for each single S-box. In this paper, we put forward a new leakage model based on the power consumption of multiple S-boxes, implemented by means of a genetic algorithm. For DES, we establish a leakage model based on the Hamming distance summed over all 8 S-boxes. The genetic algorithm can search the key space of 2^48 intelligently, attacking all 8 S-boxes at the same time. We also experimentally validate that the 8-S-box leakage model needs about 60% fewer traces than the classical single-S-box model in the time domain, and about 33% fewer traces in the frequency domain. The IC card used in the experiments is training card 8 provided by Riscure.
On the Complexity of Finding Low-Level Solutions
In this article the complexity of finding low-level solutions is investigated.
Structure-Preserving Signatures from Type II Pairings
We investigate structure-preserving signatures in asymmetric bilinear groups with an efficiently computable homomorphism from one source group to the other, i.e., the Type II setting. It has been shown that in the Type I and Type III settings (with maximal symmetry and maximal asymmetry respectively), structure-preserving signatures need at least 2 verification equations and 3 group elements. It is therefore natural to conjecture that this would also be required in the intermediate Type II setting, but surprisingly this turns out not to be the case. We construct structure-preserving signatures in the Type II setting that only require a single verification equation and consist of only 2 group elements. This shows that the Type II setting with partial asymmetry is different from the other two settings in a way that permits the construction of cryptographic schemes with unique properties.
We also investigate lower bounds on the size of the public verification key in the Type II setting. Previous work in structure-preserving signatures has explored lower bounds on the number of verification equations and the number of group elements in a signature but the size of the verification key has not been investigated before. We show that in the Type II setting it is necessary to have at least 2 group elements in the public verification key in a signature scheme with a single verification equation.
Our constructions match the lower bounds so they are optimal with respect to verification complexity, signature sizes and verification key sizes. In fact, in terms of verification complexity, they are the most efficient structure preserving signature schemes to date. Depending on the context in which a scheme is deployed it is sometimes desirable to have strong existential unforgeability, and in other cases full randomizability. We give two structure-preserving signature schemes with a single verification equation where both the signatures and the public verification keys consist of two group elements each. One signature scheme is strongly existentially unforgeable, the other is fully randomizable. Having such simple and elegant structure-preserving signatures may make the Type II setting the easiest to use when designing new structure-preserving cryptographic schemes, and lead to schemes with the greatest conceptual simplicity.
Exponent-inversion Signatures and IBE under Static Assumptions
Boneh-Boyen signatures are widely used in many advanced cryptosystems. They have an ``inversion in the exponent'' structure, and their unforgeability against $q$ chosen-message attacks is proven under the non-static $q$-Strong Diffie-Hellman assumption. It has been an open problem whether the exponent-inversion signature, and its various applications, can be proven secure under a weaker, static assumption.
We propose a dual-form Boneh-Boyen signature and demonstrate how to prove the security of the exponent-inversion signature structure in the standard model under static assumptions. We apply our proof technique to a number of related cryptosystems employing a similar structure, including anonymous credentials, identity-based encryption (IBE) and accountable-authority IBE. Our results give the first exponent-inversion IBE in the standard model under a static assumption. Our anonymous credentials and accountable-authority IBE are also better than existing schemes in terms of both security and efficiency.
Sakai-Ohgishi-Kasahara Identity-Based Non-Interactive Key Exchange Revisited and More
Identity-based non-interactive key exchange (IB-NIKE) is a powerful but somewhat overlooked primitive in identity-based cryptography. While identity-based encryption and signatures have been extensively investigated over the past three decades, IB-NIKE has remained largely unstudied, and only a few IB-NIKE schemes appear in the literature. Among them, the Sakai-Ohgishi-Kasahara (SOK) scheme is the first efficient and secure two-party IB-NIKE scheme, and it has greatly influenced follow-up works. However, the SOK scheme requires its identity-mapping function to be modeled as a random oracle in order to prove security. Moreover, its existing security proof relies heavily on the ability to program the random oracle. It is unknown whether such reliance is inherent.
In this work, we thoroughly revisit the SOK IB-NIKE scheme and present a series of possibility and impossibility results in the random oracle model and the standard model. In the random oracle model, we first improve the previous security analysis of the SOK IB-NIKE scheme by giving a tighter reduction. We then use a meta-reduction technique to show that the SOK scheme is unlikely to be proven secure under the computational bilinear Diffie-Hellman (CBDH) assumption without programming the random oracle. In the standard model, we show how to instantiate the random oracle in the SOK scheme with a concrete hash function built from admissible hash functions (AHFs) and indistinguishability obfuscation. The resulting scheme is adaptively secure under the decisional bilinear Diffie-Hellman inversion (DBDHI) assumption. To the best of our knowledge, this is the first adaptively secure IB-NIKE scheme in the standard model that does not explicitly require multilinear maps. Previous schemes in the standard model either achieve merely selective security or require programmable hash functions in the multilinear setting. At the technical heart of our scheme, we generalize the definition of AHFs and propose a generic construction that enables AHFs with previously unachieved parameters, which might be of independent interest.
In addition, we present some new results about IB-NIKE. First, we give a generic construction of multiparty IB-NIKE from extractable witness PRFs and existentially unforgeable signatures. Second, we investigate the relation between semi-adaptive security and adaptive security for IB-NIKE. Somewhat surprisingly, we show that these two notions are polynomially equivalent.
Indistinguishability Obfuscation from the Multilinear Subgroup Elimination Assumption
We revisit the question of constructing secure general-purpose indistinguishability obfuscation (iO), with a security reduction based on explicit computational assumptions over multilinear maps. Prior to our work, such reductions were only known to exist based on meta-assumptions and/or ad-hoc assumptions: in the original constructive work of Garg et al. (FOCS 2013), the underlying explicit computational assumption encapsulated an exponential family of assumptions for each pair of circuits to be obfuscated. In the more recent work of Pass et al. (Crypto 2014), the underlying assumption is a meta-assumption that also encapsulates an exponential family of assumptions, and this meta-assumption is invoked in a manner that captures the specific pair of circuits to be obfuscated. The assumptions underlying both these works substantially capture (either explicitly or implicitly) the actual structure of the obfuscation mechanism itself.
In our work, we provide the first construction of general-purpose indistinguishability obfuscation proven secure via a reduction to a natural computational assumption over multilinear maps, namely, the Multilinear Subgroup Elimination Assumption. This assumption does not depend on the circuits to be obfuscated (except for their size), and does not correspond to the underlying structure of our obfuscator. The technical heart of our paper is our reduction, which gives a new way to argue about the security of indistinguishability obfuscation.
The Locality of Searchable Symmetric Encryption
This paper proves a lower bound on the trade-off between server storage size and the locality of memory accesses in searchable symmetric encryption (SSE). Namely, when encrypting an index of $N$ identifier/keyword pairs, the encrypted index must have size $\omega(N)$ \emph{or} the scheme must perform searching with $\omega(1)$ non-contiguous reads to memory \emph{or} the scheme must read many more bits than is necessary to compute the results. Recent implementations have shown that non-locality of server memory accesses creates a throughput bottleneck on very large databases. Our lower bound shows that this is due to the security notion and not a defect of the constructions. An upper bound is also given in the form of a new SSE construction with an $O(N\log N)$ size encrypted index that performs $O(\log N)$ reads during a search.
Simulation-Time Security Margin Assessment against Power-Based Side Channel Attacks
A sound design-time evaluation of the security of a digital device is a goal which has attracted a great amount of research effort lately. Common security metrics for the attack consider either the theoretical leakage of the device, or assume as a security metric the number of measurements needed in order to be able to always recover the secret key. In this work we provide a combined security metric taking into account the computational effort needed to lead the attack, in combination with the quantity of measurements to be performed, and provide a practical lower bound for the security margin which can be employed by a secure hardware designer. This paper represents a first exploration of a design-time security metric incorporating the computational effort required to lead a power-based side channel attack in the security level assessment of the device. We take into account in our metric the possible presence of masking and hiding schemes, and we assume the best measurement conditions for the attacker, thus leading to a conservative estimate of the security of the device. We provide a practical validation of our security metric through an analysis of transistor-level accurate power simulations of a 128-bit AES core implemented on a 65 nm library.
Publicly Evaluable Pseudorandom Functions and Their Applications
We put forth the notion of \emph{publicly evaluable} pseudorandom functions (PEPRFs), which can be viewed as a counterpart of standard pseudorandom functions (PRFs) in the public-key setting. Briefly, PEPRFs are defined over a domain $X$ containing a language $L$ associated with a hard relation $\mathsf{R}_L$, and each secret key $sk$ is associated with a public key $pk$. For any $x \in L$, in addition to evaluating $\mathsf{F}_{sk}(x)$ using $sk$ as in standard PRFs, one is also able to evaluate $\mathsf{F}_{sk}(x)$ given $pk$, $x$ and a witness $w$ for $x \in L$. We consider two security notions for PEPRFs. The basic one is weak pseudorandomness, which stipulates that a PEPRF cannot be distinguished from a truly random function on uniformly random chosen inputs. The strengthened one is adaptive weak pseudorandomness, which requires that a PEPRF remain weakly pseudorandom even when an adversary is given adaptive access to an evaluation oracle. We conduct a formal study of PEPRFs, focusing on applications, constructions, and extensions.
We show how to construct chosen-plaintext secure (CPA) and chosen-ciphertext secure (CCA) public-key encryption (PKE) schemes from (adaptive) PEPRFs. The construction is simple, black-box, and admits a direct proof of security. We provide evidence that (adaptive) PEPRFs exist by showing constructions from injective trapdoor functions, hash proof systems, extractable hash proof systems, as well as a construction from puncturable PRFs with program obfuscation.
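A minimal sketch of that PEPRF-to-PKE template, instantiated DDH-style so that $\mathsf{F}_{sk}(x) = x^{sk}$ is publicly evaluable on $x = g^w$ given the witness $w$. The toy group, modulus and helper names are our own (this is essentially hashed ElGamal recast in the PEPRF language), not the paper's notation.

```python
import hashlib, secrets

p = 2**255 - 19            # toy prime modulus (not a vetted prime-order group)
g = 2

sk = secrets.randbelow(p - 1)
pk = pow(g, sk, p)         # public key associated with sk

def kdf(e: int) -> int:
    """Hash a group element down to a one-time pad mask."""
    return int.from_bytes(hashlib.sha256(str(e).encode()).digest(), "big")

def encrypt(m: int):
    w = secrets.randbelow(p - 1)        # witness for x = g^w in the language L
    x = pow(g, w, p)
    return x, kdf(pow(pk, w, p)) ^ m    # public evaluation of F_sk(x) via pk, w

def decrypt(x: int, c: int) -> int:
    return kdf(pow(x, sk, p)) ^ c       # secret evaluation of F_sk(x) via sk

m = 123456789
x, c = encrypt(m)
assert decrypt(x, c) == m
```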
We introduce the notion of publicly sampleable PRFs (PSPRFs), which is a relaxation of PEPRFs, but nonetheless imply PKE. We show (adaptive) PSPRFs are implied by (adaptive) trapdoor relations. This helps us to unify and clarify many PKE schemes from seemingly unrelated general assumptions and paradigms under the notion of PSPRFs.
We explore similar extension on recently emerging constrained PRFs, and introduce the notion of publicly evaluable constrained PRFs, which, as an immediate application, implies attribute-based encryption.
We propose a twist on PEPRFs, which we call publicly evaluable and verifiable functions (PEVFs). Compared to PEPRFs, PEVFs have an additional promising property named public verifiability while the best possible security degrades to unpredictability. We justify the applicability of PEVFs by presenting a simple construction of ``hash-and-sign'' signatures, both in the random oracle model and the standard model.
Collision Attack on 5 Rounds of Grøstl
In this article, we describe a novel collision attack for up to 5 rounds of the Grøstl hash function. This significantly improves upon the best previously published results on 3 rounds. By using a new type of differential trail spanning over more than one message block we are able to construct collisions for Grøstl on 4 and 5 rounds with complexity of $2^{67}$ and $2^{120}$, respectively. Both attacks need $2^{64}$ memory. Due to the generic nature of our attack we can even construct meaningful collisions in the chosen-prefix setting with the same attack complexity.
Actively Private and Correct MPC Scheme in $t < n/2$ from Passively Secure Schemes with Small Overhead
Recently, several efforts to implement and use an unconditionally secure multi-party computation (MPC) scheme have been put into practice. These implementations are {\em passively} secure MPC schemes, in which the adversary must follow the protocol. Although passively secure MPC schemes are efficient, passive security places a strong restriction on the behavior of the adversary. We investigate how much security we can achieve while maintaining efficiency comparable to the passive case, and propose a construction of an {\em actively} secure MPC scheme from passively secure ones. Our construction is secure in the $t < n/2$ setting, the same as the passively secure one. Our construction supports not only the theoretically minimal operation set for computing arbitrary circuits, that is, addition and multiplication, but also high-level operations such as shuffling and sorting. We do not use the broadcast channel in the construction. Therefore, privacy and correctness are achieved but {\em robustness} is absent: if the adversary cheats, the protocol may not finish, but anyone can detect the cheating (and may stop the protocol) without leaking secret information. In exchange, our construction requires $O((c_B n + n^2)\kappa)$ communication, which is comparable to that of one of the best known passively secure MPC schemes, $O((c_M n + n^2)\log n)$, where $\kappa$ denotes the security parameter, $c_B$ denotes the sum of multiplication gates and high-level operations, and $c_M$ denotes the number of multiplication gates. Furthermore, we implemented our construction and confirmed that its efficiency is comparable to the current fastest passively secure implementation.
Last updated: 2014-05-26
On the security of Xu et al.'s authentication and key agreement scheme for telecare medicine information systems
In 2014, Xu et al. proposed a two-factor mutual authentication and key agreement scheme for telecare medicine information systems (TMIS) based on elliptic curve cryptography (ECC). However, it has been shown that Xu et al.'s scheme is not suitable for practical use as it has many problems. As a remedy, an improved scheme is proposed with better security and functionality attributes.
Branching Heuristics in Differential Collision Search with Applications to SHA-512
In this work, we present practical semi-free-start collisions for SHA-512 on up to 38 (out of 80) steps with complexity $2^{40.5}$. The best previously published result was on 24 steps. The attack is based on extending local collisions as proposed by Mendel et al. in their Eurocrypt 2013 attack on SHA-256. However, for SHA-512, the search space is too large for direct application of these techniques. We achieve our result by improving the branching heuristic of the guess-and-determine approach to find differential characteristics and conforming message pairs. Experiments show that for smaller problems like 27 steps of SHA-512, the heuristic can also speed up the collision search by a factor of $2^{20}$.
How to Avoid Obfuscation Using Witness PRFs
We propose a new cryptographic primitive called \emph{witness pseudorandom functions} (witness PRFs). Witness PRFs are related to witness encryption, but appear strictly stronger: we show that witness PRFs can be used for applications such as multi-party key exchange without trusted setup, polynomially-many hardcore bits for any one-way function, and several others that were previously only possible using obfuscation. Current candidate obfuscators are far from practical and typically rely on unnatural hardness assumptions about multilinear maps. We give a construction of witness PRFs from multilinear maps that is simpler and much more efficient than current obfuscation candidates, thus bringing several applications of obfuscation closer to practice. Our construction relies on new but very natural hardness assumptions about the underlying maps that appear to be resistant to a recent line of attacks.
On the Powers of 2
In 2013 the function field sieve algorithm for computing discrete logarithms in finite fields of small characteristic underwent a series of dramatic improvements, culminating in the first heuristic quasi-polynomial time algorithm, due to Barbulescu, Gaudry, Joux and Thomé. In this article we present an alternative descent method which is built entirely from the on-the-fly degree two elimination method of Göloğlu, Granger, McGuire and Zumbrägel. This also results in a heuristic quasi-polynomial time algorithm, for which the descent does not require any relation gathering or linear algebra eliminations and, interestingly, does not require any smoothness assumptions about non-uniformly distributed polynomials. These properties make the new descent method readily applicable at currently viable bitlengths and better suited to theoretical analysis.
Optimality of Non-Adaptive Strategies: The Case of Parallel Games
Most cryptographic security proofs require showing that two systems are indistinguishable. A central tool in such proofs is that of a game, where winning the game means provoking a certain condition, and it is shown that the two systems considered cannot be distinguished unless this condition is provoked. Upper bounding the probability of winning such a game, i.e., provoking this condition, for an arbitrary strategy is usually hard, except in the special case where the best strategy for winning such a game is known to be non-adaptive.
A sufficient criterion for ensuring the optimality of non-adaptive strategies is that of conditional equivalence to a system, a notion introduced in [Mau02]. In this paper, we show that this criterion is not necessary to ensure the optimality of non-adaptive strategies by giving two results of independent interest: 1) the optimality of non-adaptive strategies is not preserved under parallel composition; 2) in contrast, conditional equivalence is preserved under parallel composition.
Torsion Limits and Riemann-Roch Systems for Function Fields and Applications
The Ihara limit (or constant) $A(q)$ has been a central object of study in the asymptotic theory of global function fields (or equivalently, algebraic curves over finite fields). It addresses global function fields with many rational points and, so far, most applications of this theory do not require additional properties. Motivated by recent applications, we require global function fields with the additional property that their degree-zero divisor class groups contain at most a small number of $d$-torsion points. We capture this with the notion of the torsion limit, a new asymptotic quantity for global function fields. It seems even harder to determine values of this new quantity than the Ihara constant. Nevertheless, some non-trivial upper bounds are derived.
Apart from this new asymptotic quantity and bounds on it, we also introduce Riemann-Roch systems of equations. It turns out that this type of equation system plays an important role in the study of several other problems: arithmetic secret sharing, symmetric bilinear complexity of multiplication in finite fields, frameproof codes and the theory of error-correcting codes. Finally, we show how our new asymptotic quantity, our bounds on it and Riemann-Roch systems can be used to improve results in these areas.
Pipelineable On-Line Encryption
Correct authenticated decryption requires the receiver to buffer the decrypted message until the authenticity check has been performed. In high-speed networks, which must handle large message frames at low latency, this behavior becomes practically infeasible. This paper proposes CCA-secure on-line ciphers as a practical alternative to AE schemes since the former provide some defense against malicious message modifications. Unfortunately, all published on-line ciphers so far are either inherently sequential, or lack a CCA-security proof.
This paper introduces POE, a family of on-line ciphers that combines provable security against chosen-ciphertext attacks with pipelineability to support efficient implementations. POE combines a block cipher and an $\epsilon$-AXU family of hash functions. Different instantiations of POE are given, based on different universal hash functions and suitable for different platforms. Moreover, this paper introduces POET, a provably secure on-line AE scheme, which inherits pipelineability and chosen-ciphertext security from POE and provides additional resistance against nonce-misuse attacks.
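The universal-hash ingredient can be illustrated with a generic polynomial-evaluation $\epsilon$-AXU family over GF($2^{128}$). The GHASH-style bit-reflected representation below is our stand-in, not necessarily one of the paper's concrete instantiations.

```python
R = 0xE1000000000000000000000000000000   # GHASH-style reduction constant

def gf128_mul(x: int, y: int) -> int:
    """Multiply in GF(2^128), bit-reflected (GHASH) representation."""
    z = 0
    for i in range(127, -1, -1):         # process bits of y, MSB first
        if (y >> i) & 1:
            z ^= x
        x = (x >> 1) ^ (R if x & 1 else 0)   # divide x by the generator, reduce
    return z

def poly_axu_hash(key: int, blocks: list) -> int:
    """Polynomial-evaluation universal hash via Horner's rule:
    H_key(M) = sum_i M_i * key^(m-i+1). For m blocks this family is
    epsilon-AXU with epsilon about m / 2^128."""
    acc = 0
    for b in blocks:
        acc = gf128_mul(acc ^ b, key)
    return acc

tag = poly_axu_hash(0x123456789ABCDEF, [1, 2, 3])
```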
Quantum Attacks on Classical Proof Systems - The Hardness of Quantum Rewinding
Quantum zero-knowledge proofs and quantum proofs of knowledge are inherently difficult to analyze because their security analysis uses rewinding. Certain cases of quantum rewinding are handled by the results by Watrous (SIAM J Comput, 2009) and Unruh (Eurocrypt 2012), yet in general the problem remains elusive. We show that this is not only due to a lack of proof techniques: relative to an oracle, we show that classically secure proofs and proofs of knowledge are insecure in the quantum setting.
More specifically, sigma-protocols, the Fiat-Shamir construction, and Fischlin's proof system are quantum insecure under assumptions that are sufficient for classical security. Additionally, we show that for similar reasons, computationally binding commitments provide almost no security guarantees in a quantum setting.
To show these results, we develop the "pick-one trick", a general technique that allows an adversary to find one value satisfying a given predicate, but not two.
ZAPs and Non-Interactive Witness Indistinguishability from Indistinguishability Obfuscation
We present new constructions of two-message and one-message witness-indistinguishable proofs (ZAPs and NIWIs). This includes:
\begin{itemize}
\item
ZAP (or, equivalently, non-interactive zero-knowledge in the common random string model) from indistinguishability obfuscation and one-way functions.
\item
NIWIs from indistinguishability obfuscation and one-way permutations.
\end{itemize}
The previous construction of ZAPs [Dwork and Naor, FOCS 00] was based on trapdoor permutations. The two previous NIWI constructions were based either on ZAPs and a derandomization-type complexity assumption [Barak, Ong, and Vadhan CRYPTO 03], or on a specific number theoretic assumption in bilinear groups [Groth, Sahai, and Ostrovsky, CRYPTO 06].
The M3lcrypt Password Based Key Derivation Function
M3lcrypt (canonical M3lcryptH) is a password-based key derivation function built around a Merkle-Damgård hash function H. It supports large [pseudo]random salt values (>= 128 bits) and password lengths.
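The general shape of such a construction, a large random salt plus an adjustable iteration count over an iterated hash, can be sketched with the standard library's PBKDF2; this is emphatically not the M3lcrypt specification, only the family resemblance.

```python
import hashlib, os

def toy_pbkdf(password: bytes, salt: bytes, iters: int, dklen: int) -> bytes:
    """Generic salted, iterated PBKDF sketch in the PBKDF2 style. The salt
    defeats precomputation; the iteration count makes each guess expensive."""
    return hashlib.pbkdf2_hmac("sha256", password, salt, iters, dklen)

salt = os.urandom(16)                     # >= 128-bit salt, as the abstract suggests
key = toy_pbkdf(b"password", salt, 100_000, 32)
```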
Last updated: 2014-06-22
An Efficient Abuse-Free Fair Contract-Signing Protocol Based on RSA Signature and Σ-protocol
A fair contract-signing protocol is an important mechanism which allows two participants to sign a digital contract over public computer networks in a fair way. Based on the RSA signature scheme and the Σ-protocol, we propose a new contract-signing protocol in this paper. The proposed protocol is not only fair and optimistic, but also efficient and abuse-free. Moreover, security and efficiency analyses are provided.
Improved Meet-in-the-Middle Attacks on Reduced-Round Camellia-192/256
Camellia is one of the widely used block ciphers, and it has been selected as an international standard by ISO/IEC. In this paper, we focus on key-recovery attacks on reduced-round Camellia-192/256 using meet-in-the-middle methods. We utilize the multiset and differential enumeration methods, which have recently been popular in the analysis of AES, to attack Camellia-192/256. We propose a 7-round property for Camellia-192 and achieve a 12-round attack with $2^{180}$ encryptions, $2^{113}$ chosen plaintexts and $2^{130}$ 128-bit blocks of memory. Furthermore, we present an 8-round property for Camellia-256 and apply it to break 13-round Camellia-256 with $2^{232.7}$ encryptions, $2^{113}$ chosen ciphertexts and $2^{227}$ 128-bit blocks of memory.
Trial multiplication is not optimal but... On the symmetry of finite cyclic groups (Z/pZ)∗
The Discrete Logarithm Problem underlies the famous Diffie-Hellman key agreement algorithm and many others. The key idea behind Diffie-Hellman is the use of the discrete logarithm function in (Z/pZ)∗ as a trapdoor function. The output of the discrete logarithm function in (Z/pZ)∗ seems to defy any attempt at finding some sort of pattern. Nevertheless, some new characterizations will be introduced, together with a novel and more efficient trial multiplication algorithm.
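For reference, trial multiplication, the baseline the title calls not optimal, is a few lines of Python and Θ(p) in the worst case; parameters below are toy-sized, purely for illustration.

```python
def dlog_trial_multiplication(g: int, y: int, p: int) -> int:
    """Solve g^x = y (mod p) by trial multiplication: walk through
    g^0, g^1, ... until y is hit."""
    acc = 1
    for x in range(p - 1):
        if acc == y:
            return x
        acc = (acc * g) % p
    raise ValueError("no solution: y is not in the subgroup generated by g")

p, g = 10007, 5            # toy parameters; 5 generates (Z/10007Z)*
x = 1234
assert dlog_trial_multiplication(g, pow(g, x, p), p) == x
```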
Reliable Broadcast with Respect to Topology Knowledge
We study the Reliable Broadcast problem in incomplete networks against a Byzantine adversary. We examine the problem under the locally bounded adversary model of Koo (2004) and the general adversary model of Hirt and Maurer (1997), and explore the tradeoff between the level of topology knowledge and the solvability of the problem. We refine the local pair-cut technique of Pelc and Peleg (2005) in order to obtain impossibility results for every level of topology knowledge and any type of corruption distribution. On the positive side, we devise protocols that match the obtained bounds and thus exactly characterize the classes of graphs in which Reliable Broadcast is possible.
Among others, we show that Koo's Certified Propagation Algorithm (CPA) is unique against locally bounded adversaries in ad hoc networks, that is, it can tolerate as many local corruptions as any other non-faulty algorithm; this settles an open question posed by Pelc and Peleg. We also provide an adaptation of CPA against general adversaries and show its uniqueness in this case as well. To the best of our knowledge this is the first optimal algorithm for Reliable Broadcast in generic-topology ad hoc networks against general adversaries.
An Empirical Study and some Improvements of the MiniMac Protocol for Secure Computation
Recent developments in Multi-party Computation (MPC) have resulted in very efficient protocols for dishonest majority in the preprocessing model. In particular, two very promising protocols for Boolean circuits have been proposed by Nielsen et al. (nicknamed TinyOT) and by Damgård and Zakarias (nicknamed MiniMac). While TinyOT has already been implemented, we present in this paper the first implementation of MiniMac, using the same platform as the existing TinyOT implementation. We also suggest several improvements of MiniMac, both on the protocol design and implementation level. In particular, we suggest a modification of MiniMac that achieves increased parallelism at no extra communication cost. This gives an asymptotic improvement of the original protocol as well as an 8-fold speed-up of our implementation. We compare the resulting protocol to TinyOT for the case of secure computation in parallel of a large number of AES encryptions and find that it performs better than results reported so far on TinyOT, on the same hardware.
Resilient Aggregation in Simple Linear Sensor Networks
A sensor network is a network comprised of many small, wireless, resource-limited nodes that sense data about their environment and report readings to a base station. One technique to conserve power in a sensor network is to aggregate sensor readings hop-by-hop as they travel towards a base station, thereby reducing the total number of messages required to collect each sensor reading. In an adversarial setting, the ability of a malicious node to alter this aggregate total must be limited. We present three aggregation protocols inspired by three natural key pre-distribution schemes for linear networks. Assuming no more than $k$ consecutive nodes are malicious, each of these protocols limits the capability of a malicious node to altering the aggregate total by at most a single valid sensor reading. Additionally, our protocols are able to detect malicious behavior as it occurs, allowing the protocol to be aborted early, thereby conserving energy in the remaining nodes. A rigorous proof of security is also given for each protocol.
Active and Passive Side-Channel Attacks on Delay Based PUF Designs
Physical Unclonable Functions (PUFs) have emerged as a lightweight alternative to traditional cryptography. The fact that no secret key needs to be stored in non-volatile memory makes PUFs especially well suited for embedded systems in which securely generating and storing secret keys is difficult and expensive. Compared to traditional cryptography, PUFs are often believed to be more resistant to implementation attacks.
In this paper we will take a closer look at this assumption. Using a controlled Arbiter PUF as an example, we show that, just like traditional cryptography, strong PUFs are susceptible to implementation attacks. By combining machine learning with side-channel analysis, we are able to attack designs based on Arbiter PUFs that are resistant to normal machine learning attacks. We use two different side channels for our attacks: a passive power side-channel and an active fault attack based on altering the supply voltage of the controlled PUF. Even in the presence of considerable noise, both attacks can accurately model the controlled Arbiter PUF. Hence, the assumption that PUFs are generally more resistant against side-channel attacks is not necessarily true, and side-channel resistance needs to be considered when PUF designs are evaluated.
Weaknesses of Password Authentication Scheme Based on Geometric Hashing
We show that a recently proposed password authentication scheme based on geometric hashing has several security weaknesses, and that the use of this scheme should be avoided in practice.
Privacy-Enhancing Proxy Signatures from Non-Interactive Anonymous Credentials
Proxy signatures enable an originator to delegate the signing rights for a restricted set of messages to a proxy. The proxy is then able to produce valid signatures only for messages from this delegated set on behalf of the originator. Recently, two variants of privacy-enhancing proxy signatures, namely blank signatures and warrant-hiding proxy signatures, have been introduced. In this context, privacy-enhancing means that a verifier of a proxy signature does not learn anything about the delegated message set beyond the message being presented for verification.
We observe that this principle bears similarities with functionality provided by anonymous credentials. Inspired by this observation, we examine black-box constructions of the two aforementioned proxy signatures from non-interactive anonymous credentials, i.e., anonymous credentials with a non-interactive showing protocol, and show that the so obtained proxy signatures are secure if the anonymous credential system is secure. Moreover, we present two concrete instantiations using well-known representatives of anonymous credentials, namely Camenisch-Lysyanskaya (CL) and Brands' credentials.
While constructions of anonymous credentials from signature schemes with particular properties, such as CL signatures or structure-preserving signatures, as well as from special variants of signature schemes, such as group signatures, sanitizable and indexed aggregate signatures, are known, this is the first paper that provides constructions of special variants of signature schemes, i.e., privacy-enhancing proxy signatures, from anonymous credentials.
Resettably Sound Zero-Knowledge Arguments from OWFs - the (semi) Black-Box way
We construct a constant-round resettably-sound zero-knowledge argument of knowledge based on black-box use of any one-way function.
Resettable-soundness was introduced by Barak, Goldreich, Goldwasser and Lindell [FOCS 01] and is a strengthening of the soundness requirement in interactive proofs, demanding that soundness hold even if the malicious prover is allowed to “reset” and “restart” the verifier. In their work they show that resettably-sound ZK arguments require non-black-box simulation techniques, and they also provide the first construction based on the breakthrough simulation technique of Barak [FOCS 01]. All known implementations of Barak’s non-black-box technique required non-black-box use of a collision-resistant hash function (CRHF).
Very recently, Goyal, Ostrovsky, Scafuro and Visconti [STOC 14] showed an implementation of Barak’s technique that needs only black-box access to a collision-resistant hash function while still having a non-black-box simulator. (Such a construction is referred to as semi black-box.) Plugging this implementation into the BGGL construction yields the first resettably-sound ZK arguments based on black-box use of CRHFs.
However, from the work of Chung, Pass and Seth [STOC 13] and Bitansky and Paneth [STOC 13], we know that resettably-sound ZK arguments can be constructed from non-black-box use of any one-way function (OWF), which is the minimal assumption for ZK arguments.
Hence, a natural question is whether it is possible to construct resettably-sound zero-knowledge arguments from black-box use of any OWF only. In this work we provide a positive answer to this question thus closing the gap between black-box and non-black-box constructions for resettably-sound ZK arguments.
Structural Lattice Reduction: Generalized Worst-Case to Average-Case Reductions and Homomorphic Cryptosystems
In lattice cryptography, worst-case to average-case reductions rely on two problems: Ajtai's SIS and Regev's LWE, which both refer to a very small class of random lattices related to the group $G=\mathbb{Z}_q^n$. We generalize worst-case to average-case reductions to all integer lattices of sufficiently large determinant, by allowing $G$ to be any (sufficiently large) finite abelian group. In particular, we obtain a partition of the set of full-rank integer lattices of large volume such that finding short vectors in a lattice chosen uniformly at random from any of the partition cells is as hard as finding short vectors in any integer lattice.
Our main tool is a novel generalization of lattice reduction, which we call structural lattice reduction: given a finite abelian group $G$ and a lattice $L$, it finds a short basis of some lattice $\bar{L}$ such that $L \subseteq \bar{L}$ and $\bar{L}/L \simeq G$. Our group generalizations of SIS and LWE allow us to abstract lattice cryptography, yet preserve worst-case assumptions: as an example, we provide a somewhat conceptually simpler generalization of the Alperin-Sheriff-Peikert variant of the Gentry-Sahai-Waters homomorphic scheme. We introduce homomorphic mux gates, which allow us to homomorphically evaluate any boolean function with a noise overhead proportional to the square root of its number of variables, and to bootstrap the full scheme using only a linear noise overhead.
On The Orthogonal Vector Problem and The Feasibility of Unconditionally Secure Leakage Resilient Computation
We consider unconditionally secure leakage resilient two-party computation, where security means that the leakage obtained by an adversary can be simulated using a similar amount of leakage from the private inputs or outputs. A related problem is known as circuit compilation, where there is only one device doing a computation on public input and output. Here the goal is to ensure that the adversary learns only the input/output behaviour of the computation, even given leakage from the internal state of the device. We study these problems in an enhanced version of the ``only computation leaks'' model, where the adversary is additionally allowed a bounded amount of {\em global} leakage from the state of the entity under attack.
In this model, we show the first unconditionally secure leakage resilient two-party computation protocol. The protocol assumes access to correlated randomness in the form of a functionality $\mathcal{F}_{\mathrm{ORT}}$ that outputs pairs of orthogonal vectors $(\vec{u}, \vec{v})$ over some finite field, where the adversary can leak independently from $\vec{u}$ and from $\vec{v}$. We also construct a general circuit compiler secure in the same leakage model. Our constructions work even if the adversary is allowed to corrupt a constant fraction of the calls to $\mathcal{F}_{\mathrm{ORT}}$ and decide which vectors should be output.
On the negative side, we show that unconditionally secure two-party computation and circuit compilation are in general impossible in the plain version of our model. For circuit compilation we need a computational assumption to exhibit a function that cannot be securely computed; on the other hand, impossibility holds even if global leakage is not allowed. It follows that even a somewhat unreliable version of $\mathcal{F}_{\mathrm{ORT}}$ cannot be implemented with unconditional security in the plain leakage model using classical communication. However, we show that an implementation using quantum communication does exist. In particular, we propose a simple ``prepare-and-measure'' type protocol which we show secure using a new result on sampling from a quantum population. Although the protocol may produce a small number of incorrect pairs, this is sufficient for leakage resilient computation by our other results.
WCFB: a tweakable wide block cipher
We define a model for applications that process large data sets in a way that enables additional optimizations of encryption operations. We design a new strong pseudo-random tweakable permutation, WCFB, to take advantage of the identified characteristics. WCFB is built with only $2m+1$ block cipher invocations for $m$ cipherblocks and approximately $5m$ XOR operations.
WCFB can benefit from commonly occurring plaintext, such as the encryption of a $0^{nm}$ sector, and from repeated operations on the same wide block.
We prove the birthday-bound security of the mode, expressed in terms of the security of the underlying block cipher.
A case analysis of disk block access requests by Windows 8.1 is provided.
MSEA: Modified Symmetric Encryption Algorithm
In this article, a new symmetric block cipher named MSEA is proposed. MSEA is based on the ARX cryptographic design technique and is simple in nature due to its use of combinations of elementary operations like modular addition, bit-wise rotation and bit-wise XOR. In MSEA, the plaintext block, secret key, and number of encryption rounds are variable in size, while the size of the ciphertext is double the size of the plaintext. Data-dependent rotation is the most vital feature of MSEA, through which the unpredictability of the encrypted text is increased. The key formation and encryption/decryption schemes of MSEA are significantly fast.
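Since the abstract names the ARX operation set and highlights data-dependent rotation, here is a generic 32-bit sketch of those primitives: a Speck-flavoured mixing step and an isolated data-dependent rotate. Neither is the actual MSEA round function; all names and constants are illustrative.

```python
MASK = 0xFFFFFFFF                      # work on 32-bit words

def rotl32(x: int, r: int) -> int:
    r %= 32
    return ((x << r) | (x >> (32 - r))) & MASK

def generic_arx_round(a: int, b: int, k: int) -> tuple:
    """One generic ARX mixing step: modular addition, fixed rotation, XOR
    with a round key. Illustrates the operation set only."""
    a = (rotl32(a, 8) + b) & MASK
    a ^= k
    b = rotl32(b, 3) ^ a
    return a, b

def data_dependent_rotate(x: int, y: int) -> int:
    """The highlighted feature in isolation: the rotation amount is
    derived from the data word y itself (here simply y mod 32)."""
    return rotl32(x, y % 32)

a, b = generic_arx_round(0xDEADBEEF, 0x01234567, 0xCAFEBABE)
```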
Improved Impossible Differential Attacks against Round-Reduced LBlock
Impossible differential attacks are among the most powerful forms of cryptanalysis against block ciphers. We present in this paper an in-depth complexity analysis of these attacks. We show a unified way to mount such attacks and provide generic formulas for estimating their time, data and memory complexities. LBlock is a well studied lightweight block cipher with respect to impossible differential attacks. While previous single-key cryptanalysis reached up to 22 rounds, by applying our method we are able to break 23 rounds with time complexity $2^{75.36}$ and data complexity $2^{59}$. Other time/data trade-offs are equally possible. This is to our knowledge the best (non-exhaustive search like) cryptanalysis of this function in the single-key model.
Stronger Security Notions for Decentralized Traceable Attribute-Based Signatures and More Efficient Constructions
In this work, we revisit the notion of Decentralized Traceable Attribute-Based Signatures (DTABS) introduced by El Kaafarani et al. (CT-RSA 2014) and improve the state-of-the-art in three dimensions:
Firstly, we provide a new stronger security model which circumvents some shortcomings in existing models. Our model minimizes the trust placed in attribute authorities and hence provides, among other things, a stronger definition for non-frameability. In addition, unlike previous models, our model
captures the notion of tracing soundness which is important for many applications of the primitive, and which ensures that even if all parties in the system are fully corrupt, no one but the actual signer can claim authorship of the signature.
Secondly, we provide a generic construction that is secure w.r.t.\ our strong security model and show two example instantiations in the standard model which are more efficient than existing constructions (secure under weaker security definitions).
Finally, unlike existing constructions, we dispense with the need for the expensive zero-knowledge proofs required for proving tracing correctness by the tracing authority. As a result, tracing a signature in our constructions is significantly more efficient than in existing constructions, both in terms of the size of the tracing proof and the computational cost required to generate and verify it.
For instance, verifying tracing correctness in our constructions requires only 4 pairings compared to 34 pairings in the most efficient existing construction.
New Treatment of the BSW Sampling and Its Applications to Stream Ciphers
By combining the time-memory-data tradeoff (TMDTO) attack independently proposed by Babbage and Golić (BG) with the BSW sampling technique, this paper proposes a new TMDTO attack on stream ciphers. The new attack gives a wider variety of trade-offs compared with the original BG-TMDTO attack. It is efficient when multiple data from the same key with different IVs are available to the attacker, even when the internal state size is twice the key size. We apply the new attack to the MICKEY and Grain stream ciphers, improving the existing TMDTO attacks on them. Our attacks on the Grain v1 and Grain-128 stream ciphers are rather attractive in that the online time, offline time and memory complexities are all better than exhaustive key search, while the amount of keystream needed stays within valid limits. Finally, we generalize the new attack to a guess-and-determine TMDTO attack on stream ciphers, and mount such an attack on the SOSEMANUK stream cipher with online time and offline time complexities both equal to $2^{128}$, which achieves the best time complexity level compared with all existing attacks on SOSEMANUK so far.
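For orientation, the sketch below works out the classical BG tradeoff curve that the new attack starts from, in log2 units. The BG baseline is textbook material ($TM = N$ with data $D \approx T$ and preprocessing $P \approx M$); the BSW-sampling improvements described above are not reproduced here.

```python
# Classical Babbage-Golić (BG) tradeoff arithmetic in log2 units.
# Baseline only: T * M = N with data D ~ T and preprocessing P ~ M,
# where N = 2^n is the internal state space of the stream cipher.
def bg_tradeoff(state_bits, log2_memory):
    log2_time = state_bits - log2_memory   # T = N / M (table lookups)
    return {
        "log2_time": log2_time,
        "log2_memory": log2_memory,
        "log2_data": log2_time,            # need ~T keystream windows
        "log2_preprocessing": log2_memory, # fill the table once
    }

# Grain v1 has a 160-bit internal state (twice its 80-bit key): with 2^80
# memory, plain BG needs 2^80 time/data -- no better than exhaustive
# search, which is exactly what BSW sampling helps to beat.
print(bg_tradeoff(160, 80))
```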
Design of identity-based digital signature schemes using extended chaotic maps
Inspired by the identity-based cryptosystems proposed by Adi Shamir and by Boneh and Franklin, this paper designs a new identity-based digital signature (ECM-IDS) scheme using extended chaotic maps. The ECM-IDS scheme is secure based on the difficulty of the integer factorization problem.
Identity-based encryption and digital signature schemes using extended chaotic maps
This paper designs a new extended chaotic map-based identity-based encryption (ECM-IBE) scheme and identity-based digital signature (ECM-IDS) scheme. The security of the ECM-IBE scheme is based on the hardness assumption of the chaotic maps-based decisional Diffie–Hellman (CDDH) problem, whereas the ECM-IDS scheme is secure based on the difficulty of the chaotic maps-based discrete logarithm (CDL) problem.
A note on the construction of pairing-friendly elliptic curves for composite order protocols
In pairing-based cryptography, the security of protocols using composite
order groups relies on the difficulty of factoring a composite number
$N$. Boneh~\etal~proposed the Cocks-Pinch method to construct ordinary pairing-friendly elliptic curves having a subgroup of composite order $N$. Displaying such a curve as a public parameter implies revealing a square root $s$ of the complex multiplication discriminant $-D$ modulo $N$. We exploit this information leak and the structure of the endomorphism ring of the curve to factor the RSA modulus, under certain conditions. Our conclusion is that the values of $s$ modulo each prime in the factorization of $N$ should be chosen as high entropy input parameters when running the Cocks-Pinch algorithm.
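The closing recommendation can be illustrated with a small sketch of the generic danger (a simplified illustration, not the paper's endomorphism-ring attack): if the public square root $s$ has a residue modulo $p$ drawn from a small set, guessing that residue and taking a gcd factors $N$.

```python
# Why a low-entropy value of s mod p is dangerous when s is public:
# if s mod p lies in a small guessable set, gcd(s - guess, N) reveals p.
# Toy illustration only; the paper's actual attack exploits the structure
# of the curve's endomorphism ring.
from math import gcd

p, q = 1000003, 1000033            # toy primes standing in for an RSA modulus
N = p * q
s_p, s_q = 42, 123456              # s mod p is low-entropy here (assumed)
# CRT-combine the residues into the public value s mod N
s = (s_p * q * pow(q, -1, p) + s_q * p * pow(p, -1, q)) % N

for guess in range(100):           # search the small low-entropy set
    g = gcd(s - guess, N)
    if 1 < g < N:
        print("recovered factor:", g)   # prints p = 1000003
        break
```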
Witness Encryption from Instance Independent Assumptions
Witness encryption was proposed by Garg, Gentry, Sahai, and Waters as
a means to encrypt to an instance, x, of an NP language and produce
a ciphertext. In such a system, any decryptor that knows a witness w that
x is in the language can decrypt the ciphertext and learn the
message. In addition to proposing the concept, their work provided a candidate for a witness encryption scheme built using multilinear encodings. However, one
significant limitation of the work is that the candidate had no proof
of security (other than essentially assuming the scheme secure).
In this work we provide a proof framework for proving witness
encryption schemes secure under instance independent assumptions. At the
highest level we introduce the abstraction of positional witness
encryption which allows a proof reduction of a witness encryption
scheme via a sequence of 2^n hybrid experiments where n is the
witness length of the NP-statement. Each hybrid step proceeds by
looking at a single witness candidate and using the fact that it does not
satisfy the NP-relation to move the proof forward.
We show that this isolation strategy enables one to create a
witness encryption system that is provably secure from assumptions that
are (maximally) independent of any particular encryption instance.
We demonstrate the viability of our approach by implementing this strategy using
level n-linear encodings where n is the witness length. Our
complexity assumption has approximately n group elements,
but does not otherwise depend on the NP-instance x.
Impossible differential cryptanalysis of LBlock with concrete investigation of key scheduling algorithm
Impossible differential cryptanalysis has been proved to be one of the most powerful techniques to attack block ciphers. Based on the impossible differential paths, one can usually add several rounds before or after to launch a key recovery attack. Impossible differential cryptanalysis is powerful not only because the number of rounds it can break is very competitive compared to other attacks, but also because, unlike differential attacks, which are statistical in essence, impossible differential analysis does not require many statistical assumptions. In this paper, we investigate the key recovery attack part of impossible differential cryptanalysis. We point out that when taking the (non-linear) key scheduling algorithm into consideration, we can further derive the redundancy among the subkeys, and thus can filter out wrong keys at a rather early stage. This helps us control the time complexity and increase the number of rounds we can attack. As an application, we analyze the recently proposed lightweight block cipher LBlock and, as a result, we can break 23 rounds with complexity $2^{77.4}$ encryptions without using the whole codebook, which is by far the best attack against this cipher.
STRIBOB: Authenticated Encryption from GOST R 34.11-2012 LPS Permutation
Authenticated encryption algorithms protect both the
confidentiality and integrity of messages in a single processing pass.
In this note we show how to utilize the $L \circ P \circ S$
transform of the Russian GOST R 34.11-2012 standard hash
``Streebog'' to build an efficient, lightweight algorithm for
Authenticated Encryption with Associated Data (AEAD) via the Sponge
construction and BLNK padding. The proposed algorithm ``StriBob'' has
attractive security properties, is faster than the Streebog hash alone,
twice as fast as the GOST 28147-89 encryption algorithm, and requires
only a modest amount of running-time memory. StriBob is a
Round 1 candidate in the CAESAR competition.
Faster Maliciously Secure Two-Party Computation Using the GPU
We present a new protocol for maliciously secure two-party computation based on cut-and-choose of garbled circuits using the recent idea of “forge-and-lose”, which eliminates around a factor 3 of garbled circuits that need to be constructed and evaluated. Our protocol introduces a new way to realize the “forge-and-lose” approach, which avoids an auxiliary secure two-party computation protocol, does not rely on any number-theoretic assumptions and parallelizes well in a single instruction, multiple data (SIMD) framework.
With this approach we prove our protocol universally composable secure against a malicious adversary assuming access to oblivious transfer, commitment and coin-tossing functionalities in the random oracle model.
Finally, we construct, and benchmark, a SIMD implementation of this protocol using a GPU as a massive SIMD device. The findings compare favorably with all previous implementations of maliciously secure, two-party computation.
Chosen Ciphertext Security via Point Obfuscation
In this paper, we show two new constructions of chosen ciphertext secure (CCA secure) public key encryption (PKE) from general assumptions. The key ingredient in our constructions is an obfuscator for point functions with multi-bit output (MBPF obfuscators, for short), that satisfies some (average-case) indistinguishability-based security, which we call AIND security, in the presence of hard-to-invert auxiliary input. Specifically, our first construction is based on a chosen plaintext secure PKE scheme and an MBPF obfuscator satisfying the AIND security in the presence of computationally hard-to-invert auxiliary input. Our second construction is based on a lossy encryption scheme and an MBPF obfuscator satisfying the AIND security in the presence of statistically hard-to-invert auxiliary input. To clarify the relative strength of AIND security, we show the relations among security notions for MBPF obfuscators, and show that AIND security with computationally (resp. statistically) hard-to-invert auxiliary input is implied by the average-case virtual black-box (resp. virtual grey-box) property with the same type of auxiliary input. Finally, we show that a lossy encryption scheme can be constructed from an obfuscator for point functions (point obfuscator) that satisfies re-randomizability and a weak form of composability in the worst-case virtual grey-box sense. This result, combined with our second generic construction and several previous results on point obfuscators and MBPF obfuscators, yields a CCA secure PKE scheme that is constructed \emph{solely} from a re-randomizable and composable point obfuscator. We believe that our results make an interesting bridge that connects CCA secure PKE and program obfuscators, two seemingly isolated but important cryptographic primitives in the area of cryptography.
New bit-parallel Montgomery multiplier for trinomials using squaring operation
In this paper, a new bit-parallel Montgomery multiplier for $GF(2^m)$ is presented, where the field is generated with an irreducible trinomial. We first present a slightly generalized version of a newly proposed divide and conquer approach. Then, by combining this approach and a carefully chosen Montgomery factor, the Montgomery multiplication can be transformed into a composition of small polynomial multiplications and Montgomery squarings, which are simpler and more efficient. Explicit complexity formulae in terms of gate counts and time delay of our architecture are investigated. As a result, the proposed multiplier has generally 25\% lower space complexity than the fastest multipliers, with time complexity as good as or better than previous Karatsuba-based multipliers for the same class of fields. Among the five irreducible polynomials recommended by NIST for the ECDSA (Elliptic Curve Digital Signature Algorithm), there are two trinomials which are available for our architecture. We show that our proposal outperforms the previous best known results if the space and time complexity are both considered.
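As background for the field arithmetic above, here is a minimal bit-level sketch of multiplication in $GF(2^m)$ defined by an irreducible trinomial $x^m + x^k + 1$. It shows only schoolbook carry-less multiplication plus trinomial reduction; the paper's contribution (the Montgomery factor and the squaring-based decomposition) is a hardware optimization of this arithmetic and is not reproduced here.

```python
# Multiplication in GF(2^m) with field polynomial x^m + x^k + 1.
# Plain schoolbook version for illustration; the paper's Montgomery /
# squaring decomposition is a hardware optimization on top of this.
def gf2m_mul(a, b, m, k):
    mod = (1 << m) | (1 << k) | 1      # the trinomial x^m + x^k + 1
    # carry-less (polynomial) multiplication over GF(2)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    # reduce the degree-(2m-2) product modulo the trinomial
    for i in range(2 * m - 2, m - 1, -1):
        if (r >> i) & 1:
            r ^= mod << (i - m)
    return r

# One of the two NIST trinomials mentioned above: x^233 + x^74 + 1
# (the field of the NIST curves B-233/K-233)
m, k = 233, 74
a, b = (1 << 200) | 5, (1 << 150) | 3
print(hex(gf2m_mul(a, b, m, k)))
```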
Differential Fault Analysis on the families of SIMON and SPECK ciphers
In 2013, the US National Security Agency proposed two new families of lightweight block ciphers: SIMON and SPECK. Currently, linear and differential cryptanalytic results for SIMON are available in the literature but no fault attacks have been reported so far on these two cipher families. In this paper, we show that these families of ciphers are vulnerable to differential fault attacks. Specifically, we demonstrate two fault attacks on SIMON and one fault attack on SPECK. The first attack on SIMON assumes a bit-flip fault model and recovers the n-bit last round key of SIMON using n/2 bit faults. The second attack on SIMON uses a more practical, random byte fault model and requires n/8 faults on average to retrieve the last round key. The attack presented on SPECK also assumes a bit-flip fault model and recovers the n-bit last round key of SPECK using n/3 bit faults on average.
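To see what a bit-flip fault buys the attacker, here is a sketch of the public SIMON round function together with a single-bit fault on its input; the key-recovery bookkeeping of the attacks above is omitted. The word size (16 bits, as in SIMON32) and all values are illustrative.

```python
# SIMON round function (public specification) and a single bit-flip fault.
# Word size 16 as in SIMON32/64; values are arbitrary for illustration.
W = 16
MASK = (1 << W) - 1

def rotl(x, r):
    return ((x << r) | (x >> (W - r))) & MASK

def f(x):
    # SIMON's non-linear function: (x<<<1 & x<<<8) ^ (x<<<2)
    return (rotl(x, 1) & rotl(x, 8)) ^ rotl(x, 2)

def simon_round(x, y, k):
    return y ^ f(x) ^ k, x

x, y, k = 0x1234, 0x5678, 0x9ABC
correct = simon_round(x, y, k)
faulty = simon_round(x ^ (1 << 3), y, k)   # flip bit 3 of the round input
# The difference through the AND depends on the *values* of neighbouring
# input bits (leaking them), while the linear x<<<2 term contributes a
# known bit -- this value-dependence is what the DFA exploits.
print(f"output difference: {correct[0] ^ faulty[0]:016b}")
```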
ICEPOLE: High-speed, Hardware-oriented Authenticated Encryption
This paper introduces our dedicated authenticated encryption scheme ICEPOLE. ICEPOLE is a high-speed hardware-oriented scheme, suitable for high-throughput network nodes or generally any environment where specialized hardware (such as FPGAs or ASICs) can be used to provide high data processing rates. ICEPOLE-128 (the primary ICEPOLE variant) is very fast. On the modern FPGA device Virtex 6, a basic iterative architecture of ICEPOLE reaches 41 Gbits/s, which is over 10 times faster than the equivalent implementation of AES-128-GCM. The throughput-to-area ratio is also substantially better when compared to AES-128-GCM. We have carefully examined the security of the algorithm through a range of cryptanalytic techniques and our findings indicate that ICEPOLE offers a high security level.
Dual System Groups and its Applications --- Compact HIBE and More
We introduce the notion of *dual system groups*.
- We show how to derive compact HIBE by instantiating the dual system framework in Waters (Crypto '09) and Lewko and Waters (TCC '10) with dual system groups. Our construction provides a unified treatment of the prior compact HIBE schemes from static assumptions.
- We show how to instantiate dual system groups under the decisional subgroup assumption in composite-order groups and the decisional linear assumption ($d$-LIN) in prime-order groups. Along the way, we provide new tools for simulating properties of composite-order bilinear groups in prime-order groups. In particular, we present new randomization and parameter-hiding techniques in prime-order groups.
Combining the two, we obtain a number of new encryption schemes, notably
- a new construction of IBE in prime-order groups with shorter parameters;
- a new construction of compact HIBE in prime-order
groups whose structure closely mirrors the selectively secure HIBE
scheme of Boneh, Boyen and Goh (Eurocrypt '05);
- a new construction of compact spatial encryption in prime-order groups.
Continuous After-the-fact Leakage-Resilient Key Exchange (full version)
Security models for two-party authenticated key exchange (AKE) protocols have developed over time to provide security even when the adversary learns certain secret keys. In this work, we advance the modelling of AKE protocols by considering more granular, continuous leakage of long-term secrets of protocol participants: the adversary can adaptively request arbitrary leakage of long-term secrets even after the test session is activated, with limits on the amount of leakage per query but no bounds on the total leakage. We present a security model supporting continuous leakage even when the adversary learns certain ephemeral secrets or session keys, and give a generic construction of a two-pass leakage-resilient key exchange protocol that is secure in the model; our protocol achieves continuous, after-the-fact leakage resilience with not much more cost than a previous protocol with only bounded, non-after-the-fact leakage.
A Generic Scan Attack on Hardware based eStream Winners
Scan chains, a design-for-testability (DFT) feature, are included in most modern-day ICs. However, they open a side channel for attacking cryptographic chips. We propose a methodology by which we can recover internal states of any stream cipher using scan chains, without knowledge of its design. We consider conventional scan-chain design, which is normally not scrambled or protected in any other way. In this scenario the challenge for the adversary is to obtain the correspondence between the output of the scan chain and the internal state registers of the stream cipher. We present a mathematical model of the attack, and the correspondence between the scan-chain outputs and the internal state bits is proved under this model. We propose an algorithm that, through off-line and on-line simulation, forms a bijection between the above-mentioned sets and thus finds the required correspondence. We also give an estimate of the number of off-line simulations necessary for finding the correspondence.
The proposed strategy is successfully applied to the eStream hardware-based finalists MICKEY-128 2.0, Trivium and Grain-128. To the best of our knowledge, this is the first scan-based attack against full-round Grain-128 and only the fourth reported cryptanalysis of that cipher. Our attack on Trivium improves upon the previously published scan attack on Trivium. This is also the first reported scan-based cryptanalysis against MICKEY-128 2.0.
Differential Fault Analysis of MICKEY Family of Stream Ciphers
This paper presents differential fault analysis of the MICKEY family of stream ciphers, one of the winners of the eStream project. The attacks presented here are the best-performing among all attacks against the MICKEY ciphers reported to date. The number of faults required is about 1.5 times the state size. We obtain linear equations to determine state bits. The fault model required is reasonable, and we further relax it by allowing multiple-bit faults and not requiring faults to be reproduced. In this scenario, more faults are required when reproduction is not allowed, whereas we show that the number of faults remains the same for multiple-bit faults.
Fault Analysis of Grain Family of Stream Ciphers
In this paper, we present a fault attack on the Grain family of stream ciphers, an eStream finalist. Earlier fault attacks on Grain target the LFSR, whereas our target for fault induction is the NFSR. Our attack requires a small number of faults to be injected: only 150 for Grain v1, and only 312 and 384 for Grain-128 and Grain-128a, respectively. These numbers are much smaller than in earlier reported fault attacks: 1587 for Grain-128 and 1831 for Grain-128a.
Locally Decodable Codes for edit distance
Locally decodable codes (LDCs)~\cite{BFLS91,KT00} are error correcting codes that allow decoding of (any) individual symbol of the message by reading only a few symbols of the codeword. Consider an application such as storage solutions for large data, where errors may occur in the disks (or some disks may simply crash). In such an application, it is often desirable to recover only small portions of the data (to have random access). Thus, in such applications, using LDCs provides enormous efficiency gains over standard error correcting codes (ECCs), which need to read the entire encoded message to learn even a single bit of information.
Typically, LDCs, as well as standard ECCs, decode the encoded message if up to some bounded fraction of the symbols has been modified. This corresponds to decoding strings of bounded Hamming distance from a valid codeword. An often more realistic metric is the edit distance, measuring the shortest sequence of insertions and deletions (indels) of symbols leading from one word to another. For example, (few) indel modifications are a more realistic model for mutations occurring in a genome. Even more commonly, communication over the web may sustain deletions (lost packets) and insertions (noise).\footnote{Edit distance is indeed "more expressive" than Hamming distance in the sense that $dist_E(x,y)\leq 2\,dist_H(x,y)$ always holds, while edit distance 2 may translate to Hamming distance $n$. For instance, consider $x=1010\ldots 10$, $y=0101\ldots 1$.}
Standard ECCs for edit distance have been previously considered~\cite{SZ97}. Furthermore,~\cite{SZ97} devised codes with rate and distance (error tolerance) optimal up to constants. LDCs, originally considered in the setting of PCPs~\cite{BFLS91}, have found many additional applications, and generated a lot of fascinating work (see~\cite{Yek11} and references within). However, combining these two useful settings -- local decodability and robustness against indel errors -- has never been considered.
In this work, we study the question of constructing LDCs for edit distance. We demonstrate a strong positive result: LDCs for edit distance can be achieved, with parameters similar to LDCs for Hamming distance. More precisely, we devise a generic transformation from LDCs for Hamming distance to LDCs for edit distance with related parameters.
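The footnote's gap between the two metrics is easy to verify mechanically; the following sketch checks the alternating-string example using the indel-only edit distance (computed via the longest common subsequence), which is exactly the metric in the footnote.

```python
# Check of the footnote's example: alternating strings of length n are at
# Hamming distance n but at insertion/deletion (indel) distance only 2.
def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def indel_distance(x, y):
    # indel-only edit distance = |x| + |y| - 2*LCS(x, y)
    dp = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i, a in enumerate(x, 1):
        for j, b in enumerate(y, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if a == b else max(dp[i-1][j], dp[i][j-1])
    return len(x) + len(y) - 2 * dp[-1][-1]

n = 10
x = "10" * (n // 2)   # 1010...10
y = "01" * (n // 2)   # 0101...01 (ends in 1, as in the footnote)
print(hamming(x, y), indel_distance(x, y))   # -> 10 2
```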
Practical Complexity Cube Attacks on Round-Reduced Keccak Sponge Function
In this paper we mount the cube attack on the Keccak sponge function. The cube attack, formally introduced in 2008, is an algebraic technique applicable to cryptographic primitives whose output can be described as a low-degree polynomial in the input. Our results show that 5- and 6-round Keccak sponge function is vulnerable to this technique. All the presented attacks have practical complexities and were verified on a desktop PC.
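The principle behind the attack is compact enough to show on a toy polynomial: summing the output over all assignments of a chosen "cube" of public input variables cancels everything except the cube's superpoly in the remaining (key) variables. The polynomial below is invented for the illustration; real targets like round-reduced Keccak are attacked the same way, just over enormous polynomials.

```python
# Toy cube attack: XOR-summing a low-degree polynomial over a cube of
# public variables {v0, v1} isolates the superpoly in the key variables.
from itertools import product

def f(v0, v1, v2, k0, k1):
    # Made-up degree-3 polynomial over GF(2), standing in for a cipher output bit
    return (v0 & v1 & k0) ^ (v0 & v1 & v2) ^ (v1 & k1) ^ v2 ^ k0

def cube_sum(k0, k1):
    # XOR f over all assignments of the cube variables, with v2 fixed to 0
    s = 0
    for v0, v1 in product((0, 1), repeat=2):
        s ^= f(v0, v1, 0, k0, k1)
    return s

# The cube sum equals the superpoly of {v0, v1}, here simply k0:
for k0, k1 in product((0, 1), repeat=2):
    assert cube_sum(k0, k1) == k0
print("cube {v0,v1} has superpoly k0 -- one linear equation in the key")
```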
A realtime key recovery attack on the authenticated cipher FASER128
FASER is a family of authenticated ciphers submitted to the CAESAR competition, which contains two parent ciphers: FASER128 and FASER256. In this work we focus only on FASER128 and present a key recovery attack on it, which needs at most 64 key words and runs in real time on a PC. The result shows that FASER128 is very insecure. What is more, our attack can be easily applied to FASER256 to break it entirely.
Handycipher: a Low-tech, Randomized, Symmetric-key Cryptosystem
Handycipher is a low-tech, randomized, symmetric-key stream cipher, simple enough to permit pen-and-paper encrypting and decrypting of messages, while providing a high level of security. It combines a simple 31-character substitution cipher with a 3,045-token nondeterministic homophonic substitution cipher, and employs the insertion of randomly chosen decoy characters at random locations in the ciphertext. A deniable encryption scheme based on the cipher is described, as well as a way of extending the cipher by using randomly generated session keys.
Private and Dynamic Time-Series Data Aggregation with Trust Relaxation
With the advent of networking applications collecting user data on a massive scale, the privacy of individual users appears to be a major concern. The main challenge is the design of a solution that allows the data analyzer to compute global statistics over the set of individual inputs that are protected by some confidentiality mechanism. Joye et al. [7] recently suggested a solution that allows a centralized party to compute the sum of encrypted inputs collected through a smart metering network. The main shortcomings of this solution are its reliance on a trusted dealer for key distribution and the need for frequent key updates. In this paper we introduce a secure protocol for aggregation of time-series data that is based on the Joye et al. [7] scheme and in which its main shortcomings, namely the requirement for key updates and for a trusted dealer, are eliminated. Moreover, our scheme supports dynamic group management: as opposed to Joye et al. [7], leave and join operations do not trigger a key update at the users.
Certification and Efficient Proofs of Committed Topology Graphs
Digital signature schemes are a foundational cryptographic building block in certification and the projection of trust. Based on a signature scheme on committed graphs, we propose a toolkit of certification and proof methods to sign committed topology graphs
and to prove properties of their certificates in zero-knowledge.
This toolkit allows an issuer, such as an auditor, to sign the topology representation of an infrastructure. The prover, such as an infrastructure provider, can then convince a verifier of topology properties, such as partitions, connectivity or isolation, without disclosing the structure of the topology itself. By that, we can achieve the certification of the structure of critical systems, such as infrastructure clouds or outsourced systems, while still maintaining confidentiality. We offer zero-knowledge proofs of knowledge for a general specification language of security goals for virtualized infrastructures, such that high-level security goals can be proven over the topology certificate. Our method builds upon the Camenisch-Lysyanskaya signature scheme and is based on honest-verifier proofs and the strong RSA assumption.
Enhanced Lattice-Based Signatures on Reconfigurable Hardware
The recent Bimodal Lattice Signature Scheme (BLISS) showed that lattice-based constructions have evolved to practical alternatives to RSA or ECC. It offers small signatures of 5600 bits for a 128-bit level of security, and proved to be very fast in software. However, due to the complex sampling of Gaussian noise with high precision, it is not clear whether this scheme can be mapped efficiently to embedded devices. Even though the authors of BLISS also proposed a new sampling algorithm using Bernoulli variables, this approach is more complex than previous methods using large precomputed tables. The clear disadvantage of using large tables for high performance is that they cannot be used on constrained computing environments, such as FPGAs, with limited memory. In this work we thus present techniques for an efficient Cumulative Distribution Table (CDT) based Gaussian sampler on reconfigurable hardware involving Peikert's convolution lemma and the Kullback-Leibler divergence. Based on our enhanced sampler design, we provide a scalable implementation of BLISS signing and verification on a Xilinx Spartan-6 FPGA supporting either 128-bit, 160-bit, or 192-bit security. For high speed we integrate fast FFT/NTT-based polynomial multiplication, parallel sparse multiplication, Huffman compression of signatures, and Keccak as hash function. Additionally, we compare the CDT with the Bernoulli approach and show that for the particular BLISS-I parameter set the improved CDT approach is faster with lower area consumption. Our BLISS-I core uses 2,291 slices, 5.5 BRAMs, and 5 DSPs and performs a signing operation in 114.1 µs on average. Verification is even faster, with a latency of 61.2 µs and 17,101 supported verification operations per second.
Practical and Secure Query Processing for Large-scale Encrypted Cloud Storage Systems
With the increasing popularity of cloud-based data services, data owners are highly motivated to store their huge amounts of (potentially sensitive) personal data files on remote servers in encrypted form. Clients can later query over the encrypted database to retrieve files of interest while preventing database servers from learning private information about the contents of files and queries.
In this paper, we investigate novel searchable symmetric encryption (SSE) designs which meet all practical properties, including one-round multi-keyword query, comprehensive and practical privacy protection, sublinear search time, and efficient dynamic data operation support. Moreover, our solutions support parallel search well and scale to very large cloud databases. Compared to existing SSE solutions, our solution is highly compact, efficient and flexible. Its performance and security are carefully characterized by rigorous analysis. Experimental evaluations conducted over large representative real-world data sets demonstrate that, compared with the state of the art, our solution indeed achieves the desirable properties for large-scale encrypted database systems.
Making RSA-PSS Provably Secure Against Non-Random Faults
RSA–CRT is the most widely used implementation for RSA signatures. However, deterministic and many probabilistic RSA signatures based on CRT are vulnerable to fault attacks. Nevertheless, Coron and Mandal (Asiacrypt 2009) show that the randomized PSS padding protects RSA signatures against random faults. In contrast, Fouque et al. (CHES 2012) show that PSS padding does not protect against certain non-random faults that can be injected in widely used implementations based on the Montgomery modular multiplication. In this article, we prove the security of an infective countermeasure against a large class of non-random faults; the proof extends Coron and Mandal’s result to a strong model where the adversary can force the faulty signatures to be a multiple of one of the prime factors of the RSA modulus. Such non-random faults induce more complex probability distributions than in the original proof, which we analyze using careful estimates of exponential sums attached to suitable rational functions. The security proof is formally verified using appropriate extensions of EasyCrypt, and provides the first application of formal verification to provable (i.e. reductionist) security in the context of fault attacks.
Forgery on Stateless CMCC
We present attacks against CMCC that invalidate the claimed security of integrity protection and misuse resistance. We exploit the fact that zero-padding is used on both the message and the authenticated data, and demonstrate how one may generate a forgery with a single call to the encryption oracle. From this we calculate the ciphertext of the chosen message, yielding a forgery and so breaking INT-CTXT. In the nonce-reuse setting, the existence of a forgery leads directly to a 2-query distinguisher.
Cryptanalysis of the MORE symmetric key fully homomorphic encryption scheme
The fully homomorphic symmetric encryption scheme \emph{MORE} encrypts keys by conjugation with a random invertible matrix over an RSA modulus. We provide a cryptanalysis from two known ciphertexts that recovers a linear dependence between the two encrypted keys.
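For intuition (a sketch under the assumption that a MORE-style ciphertext has the form $C = P \, \mathrm{diag}(k, r) \, P^{-1}$ for a secret invertible $P$ over $Z_N$): similarity invariants survive conjugation, so every ciphertext publicly reveals $\mathrm{tr}(C) \equiv k + r \pmod{N}$ and $\det(C) \equiv k \cdot r \pmod{N}$, the elementary symmetric functions of its hidden diagonal. Relations of this kind across two known ciphertexts are what make recovering a linear dependence between the encrypted keys possible; the paper's concrete attack is not reproduced here.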
Linear Extension Cube Attack on Stream Ciphers
Building on the original cube attack, this paper proposes an improved cube attack on stream ciphers which improves the pre-processing phase of the original attack. The new method can derive maxterms of higher order from those of lower order via a time/space trade-off, thus recovering more key bits and reducing the search complexity in higher dimensions. In this paper, the improved attack is applied to the Lili-128 algorithm and to reduced variants of the Trivium algorithm. We can recover 88 key bits of Lili-128 within a time complexity of 2^14, and 48 key bits of Trivium can be recovered using cubes of dimension no larger than 8 when the number of initialization rounds is 576; these results are much better than those of the original attacks.
Fine grain Cross-VM Attacks on Xen and VMware are possible!
This work exposes further vulnerabilities in virtualized cloud servers by mounting cross-VM cache attacks in Xen and VMware VMs, targeting AES running in the victim VM. Even though there exists a rich literature on cache attacks on AES, so far only a single work, demonstrating a working attack on an ARM platform running an L4Re virtualization layer, has been published. Here we show that AES in a number of popular cryptographic libraries, including OpenSSL, PolarSSL and Libgcrypt, is vulnerable to Bernstein's correlation attack when run in Xen and VMware (bare metal version) VMs, the most popular VMs used by cloud service providers (CSPs) such as Amazon and Rackspace. We also show that the vulnerability persists even if the VMs are placed on different cores in the same machine. The results of this study show that there is a great security risk to AES (and data encrypted under AES) on popular cloud services.
Introducing Fault Tolerance into Threshold Password-Authenticated Key Exchange
A threshold password-authenticated key exchange (T-PAKE) protocol allows a set of n servers to collectively authenticate a client with a human-memorizable password such that any subset of size greater than a threshold t can authenticate the client, while smaller subsets of servers learn no information about the password. With its protection against offline dictionary attacks, T-PAKE provides a practical solution for an important real-life problem with password authentication. However, the proposed T-PAKE constructions cannot tolerate any misbehavior---not even a crash---by a participating server during a protocol execution; the protocol has to be re-executed until all participating servers behave correctly. This not only presents a fault management challenge for the servers, but more importantly also can leave the clients frustrated for being denied access even after entering a correct password.
In this work, we present a novel T-PAKE protocol which solves the above fault management problem by employing a batched and offline phase of distributed key generation (DKG). Our protocol is secure against any malicious behavior from up to any t < n servers under the decisional Diffie-Hellman assumption in the random oracle model, and it ensures protocol completion for t < n/2. Moreover, it is efficient (16n + 7 exponentiations per client, 20n + 14 per server), performs explicit authentication in three communication rounds, and requires significantly fewer broadcast rounds compared to previous secure T-PAKE constructions. We have implemented our protocol, and have verified its efficiency using micro-benchmark experiments. Our experimental results show that the protocol only introduces a computation overhead of a few milliseconds at both the client and the server ends, and it is practical for use in real-life authentication scenarios.
Security Analysis of an Identity-Based Strongly Unforgeable Signature Scheme
Identity-based signature (IBS) is a specific type of public-key signature (PKS) where any identity string $ID$ can be used as the public key of a user. Although an IBS scheme can be constructed from any PKS scheme by using the certificate paradigm, it is still important to construct an efficient IBS scheme with short signatures under a standard assumption without relying on random oracles. Recently, Kwon proposed an IBS scheme and claimed its strong unforgeability under the computational Diffie-Hellman (CDH) assumption. In this paper, we show that Kwon's security proof is seriously flawed. To exhibit the flaws, we first show that there exists a distinguisher that can distinguish the distribution of simulated signatures from that of real signatures. Next, we also show that the simulator in Kwon's security argument cannot extract the solution to the CDH problem even if there exists an adversary that forges a signature. Therefore, the security of Kwon's IBS scheme is not related to the hardness of the CDH assumption.
A practical state recovery attack on the stream cipher Sablier v1
Sablier is an authenticated encryption cipher submitted to the CAESAR competition, composed of the encryption part Sablier v1 and the authentication part \textup{Au}. In this work we present a state recovery attack against the encryption part Sablier v1 with time complexity of about $2^{44}$ operations and data complexity of about 24 16-bit keywords. Our attack is practical on a workstation. We note that the update of the internal state of Sablier v1 is invertible, so our attack further yields a key recovery attack and a forgery attack against the authenticated encryption scheme Sablier. The result shows that Sablier v1 is far from the goal of its security design (80-bit level).
bitcoin.BitMint: Reconciling Bitcoin with Central Banks
The sweeping success of the original (2008) bitcoin protocol proves that digital currency has arrived. The mounting opposition from the financial establishment indicates an overshoot. We propose to tame bitcoin into bitcoin.BitMint: keeping the bitcoin excitement -- fitted into real world security, stability and fraud concerns.
The basic idea is to excise the bitcoin money-generation formula, and otherwise apply bitcoin essentially “as is” over digital coins which are redeemable by the mint that minted them. This will preserve bitcoin's assured anonymity. The new bitcoin.BitMint solution will benefit from bitcoin’s double-spending prevention, and will otherwise enjoy all the benefits associated with money in a digital form.
bitcoin.BitMint will allow traders to invest in US$, gold, or any other commodity while practicing their trade in cyberspace, anonymously, securely, and non-speculatively.
This “mint-in-the-middle” protocol will allow law enforcement authorities to execute a proper court order to enforce the disclosure of a suspected fraudster, but the community of honest traders will trade with robust privacy as offered by the original bitcoin protocol.
We envision interlinked bitcoin.BitMint trading environments, integrated via an InterMint protocol: a framework for the evolution of a cascaded super currency – global and highly stable.
Reusable Fuzzy Extractors for Low-Entropy Distributions
Fuzzy extractors (Dodis et al., Eurocrypt 2004) convert repeated noisy readings of a secret into the same uniformly distributed key. To eliminate noise, they require an initial enrollment phase that takes the first noisy reading of the secret and produces a nonsecret helper string to be used in subsequent readings. Reusable fuzzy extractors (Boyen, CCS 2004) remain secure even when this initial enrollment phase is repeated multiple times with noisy versions of the same secret, producing multiple helper strings (for example, when a single person's biometric is enrolled with multiple unrelated organizations).
We construct the first reusable fuzzy extractor that makes no assumptions about how multiple readings of the source are correlated (the only prior construction assumed a very specific, unrealistic class of correlations). The extractor works for binary strings with Hamming noise; it achieves computational security under assumptions on the security of hash functions or in the random oracle model. It is simple and efficient and tolerates near-linear error rates.
Our reusable extractor is secure for source distributions of linear min-entropy rate. The construction is also secure for sources with much lower entropy rates--lower than those supported by prior (nonreusable) constructions--assuming that the distribution has some additional structure, namely, that random subsequences of the source have sufficient min-entropy. We show that such structural assumptions are necessary to support low entropy rates.
We then explore further how different structural properties of a noisy source can be used to construct fuzzy extractors when the error rates are high, providing a computationally secure and an information-theoretically secure construction for large-alphabet sources.
Zero-Knowledge Password Policy Checks and Verifier-Based PAKE
Zero-Knowledge Password Policy Checks (ZKPPC), introduced in this work, enable blind registration of client passwords at remote servers, i.e., client passwords are never transmitted to the servers. This eliminates the need for trusting servers to securely process and store client passwords. A ZKPPC protocol, executed as part of the registration procedure, allows clients to further prove compliance of chosen passwords with respect to password policies defined by the servers.
The main benefit of ZKPPC-based password registration is that it guarantees that registered passwords never appear in clear on the server side. At the end of the registration phase the server only receives and stores some verification information that can later be used for authentication in a suitable Verifier-based Password Authenticated Key Exchange (VPAKE) protocol.
We give general and concrete constructions of ZKPPC protocols and suitable VPAKE protocols for ASCII-based passwords and policies that are commonly used on the web. To this end we introduce a reversible mapping of ASCII characters to integers that can be used to preserve the structure of the password string and a new randomized password hashing scheme for ASCII-based passwords.
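A minimal sketch of what a reversible, structure-preserving ASCII-to-integer mapping can look like; the concrete base-95 packing below is our own illustration, not the paper's mapping.

```python
# Reversible mapping of printable ASCII strings to integers (base-95 packing).
# Illustrative stand-in for the structure-preserving mapping described above.
BASE = 95                      # printable ASCII spans ' ' (32) .. '~' (126)

def encode(pw):
    n = 0
    for c in pw:               # most significant character first
        v = ord(c) - 32
        assert 0 <= v < BASE, "printable ASCII only"
        n = n * BASE + v
    return n

def decode(n, length):
    out = []
    for _ in range(length):
        out.append(chr(n % BASE + 32))
        n //= BASE
    return "".join(reversed(out))

pw = "Tr0ub4dor&3"
assert decode(encode(pw), len(pw)) == pw   # round-trips, position-preserving
print(encode(pw))
```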
A New Way to Prevent UKS Attacks Using Trusted Computing
UKS (unknown key-share) attacks are common attacks on authenticated key exchange (AKE) protocols. We summarize two popular countermeasures against UKS attacks on implicitly authenticated key exchange protocols. The first forces the CA to check possession of private keys during registration, which is impractical for the CA. The second adds identities in the derivation of the session key, which requires modifying a protocol that might already be deployed in practice. By leveraging the key protection capability of hardware security chips such as the TPM or TCM, we propose a new way to prevent UKS attacks that requires neither a check of possession of private keys nor the addition of identities during the derivation of the session key. We modify the CK model to accommodate protocols using hardware security chips. We then implement the KEA protocol, once used by the NSA and subject to UKS attacks, using TCM chips. Our implementation, called tKEA, is secure under our security model. To show generality, we demonstrate that our approach can also prevent UKS attacks on the MQV protocol.
Automatic Proofs of Privacy of Secure Multi-Party Computation Protocols Against Active Adversaries
We describe an automatic analysis to check secure multiparty computation protocols against privacy leaks. The analysis is sound --- a protocol that is deemed private does not leak anything about its private inputs, even if active attacks are performed against it. Privacy against active adversaries is an essential ingredient in constructions aiming to provide security (privacy + correctness) in adversarial models of intermediate (between passive and active) strength. Using our analysis we are able to show that the protocols used by the Sharemind secure multiparty computation platform are actively private.
Logical Reasoning to Detect Weaknesses About SHA-1 and MD4/5
In recent years, studies of the SATisfiability problem (SAT for short) have become more and more numerous because of its conceptual simplicity and its ability to express a large set of various problems. Within a practical framework, work highlighting the implications of SAT in real-world problems has grown significantly. In this way, a new field called logical cryptanalysis appeared in the 2000s; it consists of algebraic cryptanalysis in a binary context by means of SAT solving. This paper deals with this concept applied to cryptographic hash functions. We first present the principle of logical cryptanalysis and provide details about our encoding approach. In a second part, we stress the contribution of SAT to the analysis of the generated problem, thanks to the discovery of logical inferences and thus simplifications, in order to reduce the computational complexity of the SAT solving. This is mainly realized by using learning and pruning techniques from the community as a preprocessor. Third, thanks to probabilistic reasoning applied to the formulas, we present a weakness based on the use of round constants to detect probabilistic relations such as implications or equivalences between certain variables. Finally, we present a practical framework to exploit these weaknesses through inversions of reduced-step versions of MD4, MD5, SHA-0 and SHA-1, and we open some prospects.
High Parallel Complexity Graphs and Memory-Hard Functions
We develop new theoretical tools for proving lower-bounds on the (amortized) complexity of functions in a parallel setting. We demonstrate their use by constructing the first provably secure Memory-hard functions (MHF); a class of functions recently gaining acceptance in practice as an effective means to counter brute-force attacks on security relevant functions.
Pebbling games over graphs have proven to be a powerful abstraction for a wide variety of computational models. A dominant application of such games is proving complexity lower-bounds using ``pebbling reductions''. These bound the complexity of a given function $f_G$ in some computational model $\M$ of interest in terms of the pebbling complexity of a related graph $G$, where the pebbling complexity is defined via a pebbling game $\G$. In particular, finding a function with high complexity in $\M$ is reduced to (the hopefully simpler task of) finding graphs with high pebbling complexity in $\G$. We introduce a very simple and natural pebbling game $\G_p$ for abstracting parallel computation models. As an important conceptual contribution we also define a natural measure of pebbling complexity called \emph{cumulative complexity} (CC) and show how it overcomes a crucial shortcoming in the parallel setting exhibited by the more traditional complexity measures used in past reductions. Next (analogous to the results of Tarjan et al.~\cite{PTC77,LT82} for sequential pebbling games) we demonstrate, via a novel and non-trivial proof, an explicit construction of a constant in-degree family of graphs whose CC in $\G_p$ is within a polylogarithmic factor of maximal for graphs of equal size.
To demonstrate the use of these theoretical tools we give an example in the field of cryptography by constructing the first provably secure Memory-hard function (MHF). Introduced by Percival~\cite{Per09}, an MHF can be computed efficiently on a sequential machine but further requires the same amount of memory (or much greater time) amortized per evaluation on a parallel machine. Thus, while, say, an ASIC or FPGA may be faster than a CPU, the increased cost of working memory negates this advantage. In particular, MHFs crucially underlie the ASIC/FPGA-resistant proofs-of-work of the crypto-currency Litecoin~\cite{Litecoin} and the brute-force-resistant key-derivation function {\tt scrypt}~\cite{Per09}, and can be used to increase the cost to an adversary of recovering passwords from a compromised login server. To place these applications on sounder theoretical footing, we first provide a new formal definition (in the Random Oracle Model) and an accompanying notion of amortized memory hardness to capture the intuitive property of an MHF (thereby remedying an important issue with the original definition of~\cite{Per09}). Next we define a mapping from a graph $G$ to a related function $f_G$. Finally we prove a pebbling reduction bounding the amortized memory hardness per evaluation of $f_G$ in terms of the CC of $G$ in the game $\G_p$. Together with the construction above, this gives rise to the first provably secure example of an MHF.
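The cumulative complexity measure mentioned above has a very simple shape, which the toy computation below illustrates: CC is the sum, over all steps of a pebbling, of the number of pebbles currently on the graph. The path graph and strategy are minimal examples of ours, not the paper's high-CC construction.

```python
# Toy cumulative complexity (CC) of a pebbling: the sum over all steps of
# the number of pebbles on the graph. Example graph/strategy are ours;
# the paper constructs constant in-degree graphs with near-maximal CC.
def cumulative_complexity(pebbling):
    # pebbling: list of sets, the pebbled vertices after each step
    return sum(len(state) for state in pebbling)

# Pebble the path 1 -> 2 -> 3 -> 4, discarding pebbles behind the frontier
# (a pebble may be placed on v once all predecessors of v carry pebbles).
path_pebbling = [{1}, {1, 2}, {2, 3}, {3, 4}]
print(cumulative_complexity(path_pebbling))   # 1 + 2 + 2 + 2 = 7
```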
SIMON Says, Break the Area Records for Symmetric Key Block Ciphers on FPGAs
While AES is extensively used in a number of applications, its area cost limits its deployment in resource-constrained platforms. In this paper, we have implemented SIMON, a recent promising low-cost alternative to AES, on reconfigurable platforms. The Feistel network, the construction of the round function and the key generation of SIMON enable bit-serial hardware architectures which can significantly reduce cost. Moreover, encryption and decryption can be done using the same hardware. The results show that, at an equivalent security level, SIMON is 86\% smaller than AES and 70\% smaller than PRESENT (a standardized low-cost AES alternative), and its smallest hardware architecture costs only 36 slices (72 LUTs, 30 registers). To the best of our knowledge, this work sets new area records, as we propose the hardware architecture of the smallest block cipher ever published on FPGAs at a 128-bit level of security. Therefore, SIMON is a strong alternative to AES for low-cost FPGA-based applications.
Linear Sequential Circuit Approximation of Achterbahn Stream Cipher
The Achterbahn stream cipher was proposed as a candidate for the ECRYPT eSTREAM project and uses an 80-bit key. We apply a linear distinguishing attack, which aims at distinguishing the keystream from a purely random sequence, to the Achterbahn stream cipher. The attack is based on the linear sequential circuit approximation technique, which detects statistical bias in the keystream. To build the distinguisher, linear approximations of both the non-linear feedback shift registers (NLFSRs) and the non-linear Boolean combining function $R: F_2^8 \to F_2$ are used. The keystream generated by this algorithm admits a distinguisher with probability bias $\varepsilon = 2^{-1809}$. Thus, to distinguish Achterbahn, we only need $1/\varepsilon^2 = (2^{1809})^2 = 2^{3618}$ keystream bits, and the time complexity is about $10/\varepsilon^2 = 2^{3621.3}$, which is much higher than exhaustive key search $O(2^{80})$.
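The sample-complexity arithmetic above is a one-liner to sanity-check in log2 units (standard distinguisher heuristic: about $1/\varepsilon^2$ samples for bias $\varepsilon$):

```python
# Sanity check of the distinguisher arithmetic, in log2 units:
# bias eps = 2^-1809  =>  ~1/eps^2 samples and ~10/eps^2 time.
import math

log2_eps = -1809
log2_samples = -2 * log2_eps                 # 1/eps^2 -> 2^3618 keystream bits
log2_time = math.log2(10) + log2_samples     # 10/eps^2 -> ~2^3621.3
print(log2_samples, round(log2_time, 1))     # 3618 3621.3
```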
Efficient Fuzzy Search on Encrypted Data
We study the problem of efficient (sub-linear) fuzzy search on encrypted outsourced data, in the symmetric-key setting. In particular, a user who stores encrypted data on a remote untrusted server forms queries that enable the server to efficiently locate the records containing the requested keywords, even though the user may misspell keywords or provide noisy data in the query. We define an appropriate primitive for a general \emph{closeness} function on the message space that we call \emph{efficiently fuzzy-searchable encryption} (\emph{EFSE}).
Next we identify an optimal security notion for EFSE. We demonstrate that existing schemes do not meet our security definition and propose a new scheme that we prove secure under basic assumptions. Unfortunately, the scheme requires large ciphertext length, but we show that, in a sense, this space-inefficiency is unavoidable for a general, optimally-secure scheme.
Seeking the right balance between efficiency and security, we then show how to construct schemes that are more efficient and satisfy a weaker security notion that we propose. To illustrate, we present and analyze a more space-efficient scheme for supporting fuzzy search on biometric data that achieves the weaker notion.
Enhancing Oblivious RAM Performance Using Dynamic Prefetching
Oblivious RAM (ORAM) is an established technique to hide the access pattern to an untrusted storage system.
With ORAM, a curious adversary cannot tell what data address the user is accessing when observing the bits moving between the user and the storage system.
All existing ORAM schemes achieve obliviousness by adding redundancy to the storage system, i.e., each access is turned into multiple random accesses.
Such redundancy incurs a large performance overhead.
Though traditional data prefetching techniques successfully hide memory latency in DRAM based systems, it turns out that they do not work well for ORAM.
In this paper, we exploit ORAM locality by taking advantage of the ORAM internal structures.
Though it might seem apparent that obliviousness and locality are two contradictory concepts, we challenge this intuition by exploiting data locality in ORAM without sacrificing provable security.
In particular, we propose an ORAM prefetching technique called dynamic super block scheme and comprehensively explore its design space.
The dynamic super block scheme detects data locality in the program's working set at runtime, and exploits the locality in a data-independent way.
Our simulation results show that the dynamic super block scheme significantly improves ORAM performance relative to ORAM without super blocks. After adding timing protection to ORAM, the average performance gain is 25.5\% (up to 49.4\%) over the baseline ORAM and 16.6\% (up to 30.1\%) over the best previously proposed ORAM prefetching technique.
Toward Practical Homomorphic Evaluation of Block Ciphers Using Prince
We present the homomorphic evaluation of the Prince block cipher. Our leveled implementation is based on a generalization of NTRU. We are motivated by the drastic bandwidth savings that may be achieved by scheme conversion. To unlock this advantage we turn to lightweight ciphers such as Prince. These ciphers were designed from scratch to yield fast and compact implementations on resource-constrained embedded platforms. We show that some of these ciphers have the potential to enable near-practical homomorphic evaluation of block ciphers. Indeed, our analysis shows that Prince can be implemented using a circuit only 24 levels deep. Using an NTRU-based implementation we achieve an evaluation time of 3.3 seconds per Prince block – one and two orders of magnitude faster than homomorphic AES implementations based on NTRU and on BGV-style homomorphic encryption libraries, respectively.
Bandwidth Efficient PIR from NTRU
We present a private information retrieval (PIR) scheme based on a somewhat homomorphic encryption (SWHE) scheme. In particular, we customize an NTRU-based SWHE scheme in order to evaluate a specific class of fixed-depth circuits relevant for PIR implementation, thus achieving a more practical implementation. In practice, a SWHE scheme that can evaluate a depth-5 circuit is sufficient to construct a PIR scheme capable of retrieving data from a database containing 4 billion rows. We leverage this property in order to produce a more practical PIR scheme. Compared to previous results, our implementation achieves a significantly lower bandwidth cost (more than 1000 times smaller). The computational cost of our implementation is higher than previous proposals for databases containing a small number of bits in each row. However, this cost is amortized as database rows become wider.
Self-Updatable Encryption with Short Public Parameters and Its Extensions
Cloud storage is very popular since it has many advantages, but there is a new threat to cloud storage that was not considered before. {\it Self-updatable encryption} (SUE), which updates a past ciphertext to a future ciphertext by using a public key, is a new cryptographic primitive introduced by Lee, Choi, Lee, Park, and Yung (Asiacrypt 2013) to defeat this threat, in which an adversary who obtained a past-time private key can still decrypt a (previously unread) past-time ciphertext stored in cloud storage. Additionally, an SUE scheme can be combined with an attribute-based encryption (ABE) scheme to construct a powerful revocable-storage ABE (RS-ABE) scheme, introduced by Sahai, Seyalioglu, and Waters (Crypto 2012), that provides key revocation and ciphertext updating functionality for cloud storage.
In this paper, we propose an efficient SUE scheme and its extended schemes. First, we propose an SUE scheme with short public parameters in prime-order bilinear groups and prove its security under a $q$-type assumption. Next, we extend our SUE scheme to a time-interval SUE (TI-SUE) scheme that supports a time interval in ciphertexts. Our TI-SUE scheme has short public parameters and is also secure under the $q$-type assumption. Finally, we propose the first large universe RS-ABE scheme with short public parameters in prime-order bilinear groups and prove its security in the selective revocation list model under a $q$-type assumption.
Isogeny graphs with maximal real multiplication
An isogeny graph is a graph whose vertices are principally polarizable abelian varieties and whose edges are isogenies between these varieties. In his thesis, Kohel describes
the structure of isogeny graphs for elliptic curves and shows that one may compute the endomorphism ring of an elliptic curve defined over a finite field by using a depth-first search (DFS) algorithm in the graph. In dimension 2, the structure of isogeny graphs is less understood and existing algorithms for computing endomorphism rings are very expensive. In this article, we show that, under certain circumstances, the problem of determining the endomorphism ring can also be solved in genus 2 with a DFS-based algorithm. We consider
the case of genus-2 Jacobians with complex multiplication, with the assumptions that the real multiplication subring is maximal and has class number one. We describe the isogeny graphs in that case, locally at prime numbers which split in the real multiplication subfield. The resulting algorithm is implemented over finite fields, and examples are provided. To the best of our knowledge, this is the first DFS-based algorithm in genus 2.