All papers in 2012 (Page 4 of 733 results)
Impossibility Results for Static Input Secure Computation
Consider a setting of two mutually distrustful parties Alice and Bob who want to securely evaluate some function on pre-specified inputs. The well-studied notion of two-party secure computation allows them to do so in the standalone setting. Consider a deterministic function (e.g., 1-out-of-2 bit OT) that Alice and Bob cannot evaluate trivially and in which only Bob receives the output. We show that Alice and Bob cannot securely compute any such function in the concurrent setting, even when their inputs are pre-specified. Our impossibility result also extends to all deterministic functions in which both Alice and Bob get the same output. Our results have implications in the bounded-concurrent setting as well.
TorScan: Tracing Long-lived Connections and Differential Scanning Attacks
Tor is a widely used anonymity network providing low-latency communication capabilities. Around 400,000 users per day use Tor to route TCP traffic through a sequence of relays; three hops are selected from a pool of currently almost 3000 volunteer-operated Tor relays to comprise a route through the network for a limited time. In comparison to single-hop proxies, forwarding TCP streams through multiple relays increases the anonymity of the users significantly: each hop along the route only knows its successor and predecessor.
The anonymity provided by Tor heavily relies on the hardness of linking a user's entry and exit nodes. If an attacker gains access to the topological information about the Tor network instead of having to consider the network as a fully connected graph, this anonymity may be reduced. In fact, we have found ways to probe the connectivity of a Tor relay. We demonstrate how the resulting leakage of the Tor network topology can be used and present attacks to trace back a user from an exit relay to a small set of potential entry nodes.
On the Security of Dynamic Group Signatures: Preventing Signature Hijacking
We identify a potential weakness in the standard security model for dynamic group signatures which appears to have been overlooked previously. More specifically, we highlight that even if a scheme provably meets the security requirements of the model, a malicious group member can potentially claim ownership of a group signature produced by an honest group member by forging a proof of ownership. This property leads to a number of vulnerabilities in scenarios in which dynamic group signatures are likely to be used. We furthermore show that the dynamic group signature scheme by Groth (ASIACRYPT 2007) does not provide protection against this type of malicious behavior.
To address this, we introduce the notion of \emph{opening soundness} for group signatures which essentially requires that it is infeasible to produce a proof of ownership of a valid group signature for any user except the original signer. We then show a relatively simple modification of the scheme by Groth which allows us to prove opening soundness for the modified scheme without introducing any additional assumptions.
We believe that opening soundness is an important and natural security requirement for group signatures, and hope that future schemes will adopt this type of security.
A formal study of two physical countermeasures against side channel attacks
Secure electronic circuits must implement countermeasures against a wide range of attacks. Often, the protection against side channel attacks must be tightly integrated with the functionality being protected. It is now part of the designer's job to implement these countermeasures. But this task is known to be error-prone, and with current development processes, countermeasures are often evaluated very late (at circuit fabrication). In order to improve the designer's confidence in the effectiveness of the countermeasure, we propose in this article to resort to formal methods early in the design flow, for two reasons. First, we intend to check that the transformation of the design from the vulnerable description to the protected one does not alter the functionality. Second, we wish to prove that the security properties (which can be derived from a formal security functional specification) are indeed met after the transformation. Our first contribution is to show how such a framework can be set up (in Coq) for netlist-level protections. The second contribution is to illustrate that this framework indeed allows vulnerabilities to be detected in dual-rail logics, with the examples of wave differential dynamic logic (WDDL) and balanced cell-based differential logic (BCDL).
Simple construction of epsilon-biased distribution
Epsilon-biased distributions have many applications in practice, including universal hashing computation. In this paper we improve an existing epsilon-biased distribution construction due to Alon et al. that requires uniformly and efficiently sampling irreducible polynomials of a large degree, e.g., between 80 and 160. To remove the need for such sampling, which can be computationally expensive, we replace the irreducible polynomials by random monic polynomials of higher degree, i.e., every degree-r monic polynomial, whether irreducible or reducible, is selected with the same probability 2^{-r}. To analyse the security of the scheme, we need to find the maximum number of degree-r polynomials that divide a degree-n polynomial where n > r.
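To make the proposed change concrete, here is a minimal Python sketch (representation and function names are ours, not the paper's) contrasting the two sampling steps: Alon et al.'s construction needs a uniformly random irreducible polynomial of degree r over GF(2), whereas the modified construction only needs a uniformly random monic polynomial, which costs r coin flips.

```python
import random

def polymod(a, b):
    """Remainder of a modulo b in GF(2)[x]; polynomials are ints whose
    bits are coefficients (bit i is the coefficient of x^i)."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

def random_monic(r):
    """Uniform monic polynomial of degree r: leading bit fixed to 1, the
    remaining r coefficients are fair coins, so each of the 2^r candidates
    has probability 2^{-r} -- the cheap replacement step."""
    return (1 << r) | random.getrandbits(r)

def random_irreducible(r):
    """Rejection-sample an irreducible monic polynomial of degree r by
    naive trial division (only viable for small r; this is the expensive
    step the paper removes)."""
    while True:
        f = random_monic(r)
        if all(polymod(f, d) != 0
               for k in range(1, r // 2 + 1)
               for d in range(1 << k, 1 << (k + 1))):
            return f
```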
Rational authentication protocols and their use in financial transactions
We use ideas from game theory to improve two families of authentication protocols, namely password-based and manual authentication schemes. The protocols will be transformed so that even if an intruder attacks different protocol runs between honest nodes, its expected payoff will still be lower than when it does not attack. A rational intruder, who always tries to maximise its payoff, therefore has no incentive to attack any protocol run among trustworthy parties. To illustrate the use of our method, we present a case study relating to the password-based authentication stage of on-line banking, where passwords are chosen either uniformly at random or, as by humans, with bias. For the latter we use the publicly available 32 million passwords of the social gaming network website RockYou as the source of human-selected passwords.
Constructing Pairing-Friendly Genus 2 Curves with Split Jacobian
Genus 2 curves with simple but not absolutely simple jacobians can be used to construct pairing-based cryptosystems more efficient than for a generic genus 2 curve. We show that there is a full analogy between methods for constructing ordinary pairing-friendly elliptic curves and simple abelian varieties which are isogenous over some extension to a product of elliptic curves. We extend the notions of complete, complete with variable discriminant, and sparse families introduced by Freeman, Scott and Teske for elliptic curves, and we generalize the Cocks-Pinch method and the Brezing-Weng method to construct families of each type. To realize abelian surfaces as jacobians we use genus 2 curves of the form $y^2=x^5+ax^3+bx$ or $y^2=x^6+ax^3+b$ and apply the method of Freeman and Satoh. As applications we find some families of abelian surfaces with record $\rho$-value $\rho=2$ for embedding degrees $k=3,4,6,12$, or $\rho=2.1$ for $k=27,54$. We also give variable-discriminant families with the best $\rho$-values.
A Generalised Formula for Calculating the Resilience of Random Key Predistribution Schemes
A commonly used metric for comparing the resilience of key predistribution schemes is $\mathsf{fail}_s$, which measures the proportion of network connections which are `broken' by an adversary that has compromised $s$ nodes. In `Random key predistribution schemes for sensor networks', Chan, Perrig and Song present a formula for measuring the resilience in a class of random key predistribution schemes called $q$-composite schemes. We present a correction to this formula for schemes where more than one key may be used to secure a link between a pair of nodes. Our corrected formula features an additional parameter which makes it applicable to a wider variety of random key predistribution schemes, including the original Eschenauer-Gligor scheme. We also present a simplification of the formula for calculating connectivity.
We refer to the recent paper by Yum and Lee, which also claims to correct the original formula for the $q$-composite scheme. However, the resulting formula is complicated, computationally demanding, and hard to understand. The formula which we propose and prove is easily computable and can be applied to a wider range of schemes.
The Stream Cipher Core of the 3GPP Encryption Standard 128-EEA3: Timing Attacks and Countermeasures
The core of the 3rd Generation Partnership Project (3GPP) encryption standard 128-EEA3 is a stream cipher called ZUC. It was designed by the Chinese Academy of Sciences and proposed for inclusion in the cellular wireless standards called “Long Term Evolution” or “4G”. The LFSR-based cipher uses a 128-bit key. In this paper, we first show timing attacks on ZUC that can recover, with about 71.43% success rate, (i) one bit of the secret key immediately, and (ii) information involving 6 other key bits. The time, memory and data requirements of the attacks are negligible. While we see potential improvements to the attacks, we
also suggest countermeasures.
Scalable Group Signatures with Revocation
Group signatures are a central cryptographic primitive, simultaneously supporting accountability and anonymity. They allow users to anonymously sign messages on behalf of a group they are members of. Recent years have seen the appearance of several constructions with security proofs in the standard model ({\it i.e.}, without appealing to the random oracle heuristic). For a group signature scheme to be adopted, an efficient revocation scheme (as in regular PKI) is absolutely necessary. Despite over a decade of extensive research, membership revocation remains a non-trivial problem in group signatures: none of the existing solutions is truly scalable, due either to high overhead (e.g., large group public key size) or to limiting operational requirements (the need for all users to follow the system's entire history). In the standard model, the situation is even worse, as many existing solutions are not readily adaptable. To fill this gap and tackle this challenge, we describe a new revocation approach based, perhaps somewhat unexpectedly, on the Naor-Naor-Lotspiech framework, which was introduced for a different problem (namely, that of broadcast encryption). Our mechanism yields efficient and scalable revocable group signatures in the standard model. In particular, the size of signatures and the verification cost are independent of the number of revocations and the maximal cardinality $N$ of the group, while other complexities are at most polylogarithmic in $N$. Moreover, the schemes are history-independent: unrevoked group members do not have to update their keys when a revocation occurs.
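For intuition about the Naor-Naor-Lotspiech ingredient, the Python sketch below (our naming; the paper's instantiation inside group signatures is more involved) computes a cover in the simplest NNL instantiation, the Complete Subtree method: the unrevoked leaves of a binary tree are partitioned into the maximal full subtrees hanging off the paths from revoked leaves to the root.

```python
def complete_subtree_cover(depth, revoked):
    """Complete Subtree method of the NNL subset-cover framework.

    Users are the 2**depth leaves of a binary tree with heap-style ids:
    root = 1, children of v are 2v and 2v+1; leaves have ids
    2**depth .. 2**(depth+1) - 1.  Given the set of revoked leaf ids,
    return the roots of the maximal subtrees containing no revoked
    leaf: together their leaves are exactly the unrevoked users.
    """
    if not revoked:
        return [1]                      # nobody revoked: the whole tree
    steiner = set()                     # union of paths revoked leaf -> root
    for leaf in revoked:
        v = leaf
        while v >= 1:
            steiner.add(v)
            v //= 2
    cover = []
    for v in steiner:
        if v < 2 ** depth:              # internal node: has children
            for child in (2 * v, 2 * v + 1):
                if child not in steiner:
                    cover.append(child)
    return sorted(cover)

# 8 users (leaves 8..15); revoking users 9 and 12 yields cover roots
# [5, 7, 8, 13], whose leaves are {10,11}, {14,15}, {8}, {13}.
print(complete_subtree_cover(3, {9, 12}))
```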
Programmable encryption and key-dependent messages
We present the notion of PROG-KDM security for public-key encryption
schemes. This security notion captures both KDM security and
revealing of secret keys (key corruptions) in a single
definition. This is achieved by requiring the existence of a simulator that can program ciphertexts when a secret key is revealed, i.e., the simulator can delay the decision of which plaintext is contained in which ciphertext until the moment the ciphertext is opened. The definition is formulated in the random oracle model.
We show that PROG-KDM security can be achieved by showing that a
natural and practical construction in the ideal cipher model is
PROG-KDM secure (hybrid encryption using authenticated CBC
encryption).
Biclique Cryptanalysis of TWINE
TWINE is a lightweight block cipher proposed at the ECRYPT Workshop on Lightweight Cryptography 2011 in Belgium. The cipher consists of 36 rounds and comes in two versions, TWINE-80 and TWINE-128, supporting key lengths of 80 and 128 bits, respectively. The block length of both versions is 64 bits. In this paper, we present the first single-key attacks on both versions of the cipher. In these attacks, we use the recently developed biclique technique. The complexities of the attacks on TWINE-80 and TWINE-128 are $2^{79.10}$ and $2^{126.82}$, respectively, and the data requirement for both attacks is $2^{60}$.
Security margin evaluation of SHA-3 contest finalists through SAT-based attacks
In 2007, the U.S. National Institute of Standards and Technology (NIST) announced a public contest aiming at the selection of a new standard for a cryptographic hash function. In this paper, the security margin of five SHA-3 finalists is evaluated under the assumption that attacks launched on the finalists should be practically verified. The attack method applied is called logical cryptanalysis, where the original task is expressed as a SATisfiability problem instance. A new toolkit is used to simplify the most arduous stages of this type of cryptanalysis and helps to mount the attacks in a uniform way. In the context of SAT-based attacks, it is shown that all the finalists have a substantially bigger security margin than the current standards SHA-256 and SHA-1. Two other metrics, software performance and hardware efficiency, are combined with the security results to provide a more comprehensive picture of the SHA-3 finalists.
A Publicly-Verifiable Mix-net with Everlasting Privacy Towards Observers
In this paper we present a novel, publicly verifiable mixing scheme which has everlasting privacy towards observers: all the information published on the bulletin board by the mixes (audit information, etc.) reveals no information about the identity of any of the messages published. The correctness of the mixing process is statistical: even if all authorities conspire, they cannot change the contents of any message without being detected with overwhelming probability. We accomplish this by encoding the submitted messages using so-called Pedersen commitments. Decoding (opening) these is possible because we create a parallel mix-net, run by the same mixes, to which the public has no access. This private mix-net uses the same permutations as the public one, but uses homomorphic encryption, which is used to send auxiliary information (messages, decommitment values) through the mix-net to allow decoding.
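As a reminder of the building block, a Pedersen commitment is perfectly hiding, which is what makes the privacy everlasting: even a computationally unbounded observer learns nothing from the published values. A toy Python sketch (parameters are illustrative only; a real mix uses large groups and derives the second generator verifiably):

```python
import secrets

q = 1019                  # prime subgroup order
p = 2 * q + 1             # 2039, a safe prime
g, h = 4, 9               # two order-q generators with unknown dlog relation

def commit(m, r):
    """Pedersen commitment C = g^m * h^r mod p: perfectly hiding (C is
    uniform over the subgroup for random r), computationally binding."""
    return pow(g, m % q, p) * pow(h, r % q, p) % p

m, r = 42, secrets.randbelow(q)
C = commit(m, r)
assert C == commit(m, r)                      # opening check
# Homomorphism useful when processing commitments through a mix:
m2, r2 = 7, secrets.randbelow(q)
assert commit(m, r) * commit(m2, r2) % p == commit(m + m2, r + r2)
```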
DAC-MACS: Effective Data Access Control for Multi-Authority Cloud Storage Systems
Data access control is an effective way to ensure data security in the cloud. However, due to data outsourcing and untrusted cloud servers, data access control becomes a challenging issue in cloud storage systems. Existing access control schemes are no longer applicable to cloud storage systems, because they either produce multiple encrypted copies of the same data or require a fully trusted cloud server.
Ciphertext-Policy Attribute-based Encryption (CP-ABE) is a promising technique for access control of encrypted data. It requires a trusted authority to manage all the attributes and distribute keys in the system. In cloud storage systems, multiple authorities co-exist, and each authority is able to issue attributes independently.
However, existing CP-ABE schemes cannot be directly applied to the access control for multi-authority cloud storage systems, due to the inefficiency of decryption and revocation. In this paper, we propose DAC-MACS (Data Access Control for Multi-Authority Cloud Storage), an effective and secure data access control scheme with efficient decryption and revocation. Specifically, we construct a new multi-authority CP-ABE scheme with efficient decryption and also design an efficient attribute revocation method that can achieve both forward security and backward security. The analysis and the simulation results show that our DAC-MACS is highly efficient and provably secure under the security model.
Weaknesses of an Improvement Authentication Scheme using Smart Cards
Recently, Sood-Sarje-Singh proposed an improvement to Liou et al.'s dynamic ID-based remote user authentication scheme using smart cards to prevent impersonation attack, malicious user attack, off-line password guessing attack, and man-in-the-middle attack. However, we demonstrate that Sood et al.'s scheme is still vulnerable to malicious user attack, impersonation attack, and an attack in which information is stolen from the server's database.
Efficient Padding Oracle Attacks on Cryptographic Hardware
We show how to exploit the encrypted key import functions of a
variety of different cryptographic devices to reveal the imported
key. The attacks are padding oracle attacks, where error messages
resulting from incorrectly padded plaintexts are used as a side
channel. In the asymmetric encryption case, we modify and improve
Bleichenbacher's attack on RSA PKCS#1v1.5 padding, giving new
cryptanalysis that allows us to carry out the `million message
attack' in a mean of 49 000 and median of 14 500 oracle calls in the
case of cracking an unknown valid ciphertext under a 1024 bit key
(the original algorithm takes a mean of 215 000 and a median of 163
000 in the same case). We show how implementation details of certain
devices admit an attack that requires only 9 400 operations on
average (3 800 median). For the symmetric case, we adapt Vaudenay's
CBC attack, which is already highly efficient. We demonstrate the
vulnerabilities on a number of commercially available cryptographic
devices, including security tokens, smartcards
and the Estonian electronic ID card. The attacks are
efficient enough to be practical: we give timing details for all
the devices found to be vulnerable, showing how our optimisations make a qualitative difference to the practicality of the attack.
We give mathematical analysis of the effectiveness of the attacks,
extensive empirical results, and a discussion of countermeasures and manufacturer reaction.
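For readers unfamiliar with the side channel, the Python sketch below (our formulation) shows the PKCS#1 v1.5 conformance check whose outcome the attacked devices leak. Bleichenbacher's attack submits adaptively chosen ciphertexts and narrows an interval around the plaintext from each conforming/non-conforming answer; implementation-specific variations in what exactly is checked are one source of the differences in query counts reported above.

```python
def pkcs1_v15_conforming(pt: bytes, k: int) -> bool:
    """True iff pt is a valid k-byte PKCS#1 v1.5 type-2 encryption block:
        0x00 || 0x02 || at least 8 nonzero padding bytes || 0x00 || message
    A device whose error messages reveal this bit on decryption gives the
    attacker a padding oracle."""
    if len(pt) != k or pt[0] != 0x00 or pt[1] != 0x02:
        return False
    sep = pt.find(b'\x00', 2)        # first zero byte after the header
    return sep >= 10                 # >= 8 nonzero padding bytes before it
```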
Beyond eCK: Perfect Forward Secrecy under Actor Compromise and Ephemeral-Key Reveal
We show that it is possible to achieve perfect forward secrecy in two-message or one-round key exchange (KE) protocols that satisfy even stronger security properties than provided by the extended Canetti-Krawczyk (eCK) security model. In particular, we consider perfect forward secrecy in the presence of adversaries that can reveal ephemeral secret keys and the long-term secret keys of the actor of a session (similar to Key Compromise Impersonation).
We propose two new game-based security models for KE protocols. First, we formalize a slightly stronger variant of the eCK security model that we call eCKw. Second, we integrate perfect forward secrecy into eCKw, which gives rise to the even stronger eCK-PFS model. We propose a security-strengthening transformation (i.e., a compiler) between our new models. Given a two-message Diffie-Hellman type protocol secure in eCKw, our transformation yields a two-message protocol that is secure in eCK-PFS. As an example, we show how our transformation can be applied to the NAXOS protocol.
Revisiting Key Schedule's Diffusion In Relation With Round Function's Diffusion
We study the weakness of key schedules from an observation: many existing attacks exploit the fact that key schedules poorly distribute key bits in the diffusion path of the round function. This points to the importance of the relation between the diffusion of the key schedule and that of the round function. We present new cryptanalysis results obtained by exploring this diffusion relation and propose a new criterion for necessary key schedule diffusion. We discuss potential attacks and summarize the causes of key schedules that fail to satisfy this criterion. One major cause is that overlap between the diffusion of the key schedule and that of the round function leads to information leakage of key bits. Finally, a measure to estimate our criterion for recursive key schedules is presented. Key schedule design today still lacks practical and necessary principles. For a practical key schedule with limited diffusion, our work adds more insight into its requirements and helps to maximize the security level.
Low complexity bit-parallel $GF(2^m)$ multiplier for all-one polynomials
This paper presents a new bit-parallel multiplier for the finite
field $GF(2^m)$ generated with an irreducible all-one polynomial.
Redundant representation is used to reduce the time delay of the
proposed multiplier, while a three-term Karatsuba-like formula is
combined with this representation to decrease the space complexity.
As a result, the proposed multiplier requires about 10 percent fewer
AND/XOR gates than the most efficient bit-parallel multipliers using
an all-one polynomial, while it has almost the same time delay as
the previously proposed ones.
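To see why all-one polynomials (AOPs) help, note that if $p(x)=x^m+\cdots+x+1$ is irreducible then $(x+1)p(x)=x^{m+1}+1$, so field elements can be kept in a redundant length-$(m+1)$ representation in which multiplication is just a cyclic convolution. A small Python sketch of that idea (the paper's actual contribution, splitting this convolution with a three-term Karatsuba-like formula, is not reproduced here):

```python
def aop_redundant_mult(a, b, m):
    """Multiply two GF(2^m) elements given in redundant representation
    modulo x^(m+1) + 1 (valid when the degree-m all-one polynomial is
    irreducible).  a, b are ints whose bits 0..m hold the coefficients;
    the product is the cyclic convolution of the two bit vectors."""
    n = m + 1
    mask = (1 << n) - 1
    c = 0
    for i in range(n):
        if (a >> i) & 1:
            c ^= ((b << i) | (b >> (n - i))) & mask  # b rotated left by i
    return c

# Example in GF(2^4) with p(x) = x^4 + x^3 + x^2 + x + 1 (irreducible):
# x * x^4 = x^5 = 1 in the redundant ring.
assert aop_redundant_mult(0b00010, 0b10000, 4) == 0b00001
```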
Highly Secure Strong PUF based on Nonlinearity of MOSFET Subthreshold Operation
Silicon physical unclonable functions (PUFs) are security primitives relying on the intrinsic randomness of IC manufacturing. Strong PUFs have a very large input-output space, which is essential for secure authentication. Several proposed strong PUFs use timing races to produce a rich set of responses. However, these PUFs are vulnerable to machine-learning attacks due to the linear separability of the output function, which results from the additive nature of delay along timing paths. We introduce a novel strong silicon PUF based on the exponential current-voltage behavior in the subthreshold region of FET operation, which injects strong nonlinearity into the response of the PUF. The PUF, which we term a subthreshold current array (SCA) PUF, is implemented as a two-dimensional $n \times k$ transistor array with all devices subject to stochastic variability operating in the subthreshold region. Our PUF is fundamentally different from earlier attempts to inject nonlinearity via digital control techniques, which could also be used with SCA-PUF. Voltages produced by nominally identical arrays are compared to produce a random binary response.
SCA-PUF shows excellent security properties. The average inter-class Hamming distance, a measure of uniqueness, is 50.3%. The average intra-class Hamming distance, a measure of response stability, is 0.6%. Crucially, we demonstrate that the introduced PUF is much less vulnerable to modeling attacks. Using the machine-learning technique of support-vector machines with a radial basis function kernel for best nonlinear learnability, we observe that the "information leakage" (the rate of error reduction with learning) is much lower than for delay-based PUFs. Over a wide range of the number of observed challenge-response pairs, the error rate is 3-35X higher than for earlier designs.
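The mechanism can be illustrated with a toy behavioral model (Python; all device parameters below are illustrative assumptions, not the paper's circuit): each selected transistor contributes a current exponential in its random threshold voltage, and the response bit compares the totals of two nominally identical arrays, so the challenge-response map is far from the linear sums that make delay-based PUFs learnable.

```python
import math, random

def make_array(rows, cols, rng):
    """Per-device threshold voltages: nominal 0.45 V with Gaussian
    manufacturing variation (illustrative numbers)."""
    return [[rng.gauss(0.45, 0.03) for _ in range(cols)] for _ in range(rows)]

def subthreshold_current(vth, vgs=0.30, i0=1e-9, n=1.5, vt=0.026):
    """I = I0 * exp((Vgs - Vth)/(n*VT)): exponential, hence strongly
    nonlinear in the random variation of Vth."""
    return i0 * math.exp((vgs - vth) / (n * vt))

def response_bit(arr_a, arr_b, challenge):
    """Sum the currents of the challenge-selected rows of each array and
    compare; the comparison outcome is the PUF's one-bit response."""
    def total(arr):
        return sum(subthreshold_current(v)
                   for row, on in zip(arr, challenge) if on for v in row)
    return int(total(arr_a) > total(arr_b))

rng = random.Random(2012)
A, B = make_array(8, 8, rng), make_array(8, 8, rng)
challenge = [rng.getrandbits(1) for _ in range(8)]
print(response_bit(A, B, challenge))
```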
Probabilistic Infinite Secret Sharing
The study of probabilistic secret sharing schemes using arbitrary
probability spaces and a possibly infinite number of participants lets us
investigate abstract properties of such schemes. It highlights important
properties, explains why certain definitions work better than others,
connects this topic to other branches of mathematics, and might yield new
design paradigms.
A {\em probabilistic secret sharing scheme} is a joint probability
distribution of the shares and the secret together with a collection of {\em
secret recovery functions} for qualified subsets. The scheme is measurable
if the recovery functions are measurable. Depending on how much information
an unqualified subset might have, we define four scheme types: {\em
perfect}, {\em almost perfect}, {\em ramp}, and {\em almost ramp}. Our main
results characterize the access structures which can be realized by schemes
of these types.
We show that every access structure can be realized by a non-measurable
perfect probabilistic scheme. The construction is based on a paradoxical
pair of independent random variables which determine each other.
For measurable schemes we have the following complete characterization. An
access structure can be realized by a (measurable) perfect, or almost
perfect scheme if and only if the access structure, as a subset of the
Sierpiński space $\{0,1\}^P$, is open, if and only if it can be realized
by a span program. The access structure can be realized by a (measurable)
ramp or almost ramp scheme if and only if the access structure is a
$G_\delta$ set (intersection of countably many open sets) in the
Sierpiński topology, if and only if it can be realized by a Hilbert-space
program.
Infinite Secret Sharing -- Examples
The motivation for extending secret sharing schemes to cases when either the
set of players is infinite or the domain from which the secret and/or the
shares are drawn is infinite or both, is similar to the case when switching
to abstract probability spaces from classical combinatorial probability. It
might shed new light on old problems, could connect seemingly unrelated
problems, and unify diverse phenomena.
Definitions that are equivalent in the finitary case can be very much different when switching to infinity, signifying genuine differences between them. The standard
requirement that qualified subsets should be able to determine the secret
has different interpretations in spite of the fact that, by assumption, all
participants have infinite computing power. The requirement that unqualified
subsets should have no, or limited information on the secret suggests that
we also need some probability distribution. In the infinite case events with
zero probability are not necessarily impossible, and we should decide
whether bad events with zero probability are allowed or not.
In this paper, rather than giving precise definitions, we present an abundance of hopefully interesting infinite secret sharing schemes. These schemes touch quite diverse areas of mathematics such as projective geometry, stochastic processes and Hilbert spaces. Nevertheless our main tools are from probability theory. The examples discussed here serve as foundation and illustration for the more theory-oriented companion paper ``Probabilistic Infinite Secret Sharing.''
Cryptanalysis of an Identity-Based Multiple Key Agreement Scheme
Multiple key agreement (MKA) protocols allow two parties to generate two or more session keys in one session, to be used for future secure communications over public networks. In recent years, many MKA protocols have been proposed. However, most of them do not consider ephemeral key compromise resilience, and some of them still have security flaws. In this paper, we analyze the scheme proposed by Dehkordi and Alimoradi in 2011, which was announced with stronger security. We present an ephemeral key compromise attack and an impersonation attack against Dehkordi and Alimoradi's protocol. To overcome these security flaws, we also propose an improvement of Dehkordi and Alimoradi's protocol.
MDPC-McEliece: New McEliece Variants from Moderate Density Parity-Check Codes
In this work, we propose two McEliece cryptosystem variants: one from Moderate Density Parity-Check (MDPC) codes and another from quasi-cyclic MDPC codes. MDPC codes are LDPC codes of higher density than what is usually adopted for telecommunication applications. In general, this leads to a worse error-correction capability. However, in code-based cryptography we are not necessarily interested in correcting many errors, but only a number which ensures an adequate security level, a condition satisfied by MDPC codes. The benefits of their employment are many. Under a reasonable assumption, MDPC codes reduce the key-distinguishing McEliece problem to the problem of decoding linear codes. Since message attacks against the McEliece scheme also reduce to this problem, the security of our scheme has the benefit of relying on a single, well-studied coding-theory problem. Furthermore, by adding a quasi-cyclic structure, our proposal provides extremely compact keys: for $80$ bits of security, the public key has only $4801$ bits.
Efficient Implementation of Bilinear Pairings on ARM Processors
As hardware capabilities increase, low-power devices such
as smartphones represent a natural environment for the efficient
implementation of cryptographic pairings. Few works in the literature
have considered such platforms despite their growing importance in a
post-PC world. In this paper, we investigate the efficient computation
of the Optimal-Ate pairing over Barreto-Naehrig curves in software at
different security levels on ARM processors. We exploit
state-of-the-art techniques and propose new optimizations to speed up
the computation in the tower field and curve arithmetic. In
particular, we extend the concept of lazy reduction to inversion in
extension fields, analyze an efficient alternative for the sparse multiplication used inside Miller's algorithm, and further reduce the cost of point/line evaluation formulas in affine and projective homogeneous coordinates. In addition, we study the efficiency of using
M-type sextic twists in the pairing computation and carry out a
detailed comparison between affine and projective coordinate
systems. Our implementations on various mass-market smartphones and
tablets significantly improve the state-of-the-art of pairing
computation on ARM-powered devices, outperforming by at least a factor
of 3.7 the best previous results in the literature.
Cross-Unlinkable Hierarchical Group Signatures
We introduce the notion of Cross-Unlinkability for group signature schemes. Considering groups organized in a tree structure, where belonging to the parent group is required to join a new group, Cross-Unlinkability enables a cascade revocation process that takes into account the underlying tree structure, while ensuring anonymity for non-revoked users, in particular, towards the managers of the other groups. We show how to achieve Cross-Unlinkability using the Verifier-Local Revocation group signature scheme of Bringer and Patey at Secrypt 2012, by exploiting its property of Backward Unlinkability.
Comments on four multi-server authentication protocols using smart card
Recently, researchers have proposed several nice multi-server authentication protocols and claim that their protocols are secure and can withstand various attacks. However, after reviewing their schemes, we found that, although they appear sound, they are flawed. Based on this observation, in this paper we list the weaknesses found in these recent papers.
Secure Computation on Floating Point Numbers
Secure computation has undeniably received a lot of attention in recent years, with the shift toward cloud computing offering a new incentive for secure computation and outsourcing. Surprisingly little attention, however, has been paid to computation with non-integer data types. To narrow this gap, in this work we develop efficient solutions for computation with real numbers in floating point representation, as well as for more complex operations such as square root, logarithm, and exponentiation. Our techniques are information-theoretically secure, do not use expensive cryptographic techniques, and can be applied to a variety of settings. Our experimental results also show that the techniques exhibit rather fast performance and in some cases outperform operations on integers.
Secret Sharing Schemes for Very Dense Graphs
A secret-sharing scheme realizes a graph if every two vertices connected by an edge can reconstruct the secret while every independent set in the graph does not get any information on the secret. Similar to secret-sharing schemes for general access structures, there are gaps between the known lower bounds and upper bounds on the share size for graphs. Motivated by the question of what makes a graph ``hard'' for secret-sharing schemes (that is, require large shares), we study very dense graphs, that is, graphs whose complement contains few edges. We show that if a graph with $n$ vertices contains $\binom{n}{2}-n^{1+\beta}$ edges for some constant $0\leq\beta <1$, then there is a scheme realizing the graph with total share size of $\tilde{O}(n^{5/4+3\beta/4})$. This should be compared to $O(n^2/\log n)$ -- the best upper bound known for the share size in general graphs. Thus, if a graph is ``hard'', then the graph and its complement should have many edges. We generalize these results to nearly complete $k$-homogeneous access structures for a constant $k$. To complement our results, we prove lower bounds for secret-sharing schemes realizing very dense graphs, e.g., for linear secret-sharing schemes we prove a lower bound of $\Omega(n^{1+\beta/2})$ for a graph with $\binom{n}{2}-n^{1+\beta}$ edges.
Fully Private Revocable Predicate Encryption
We introduce the concept of \emph{Revocable Predicate Encryption (RPE)}, which extends the previous PE setting with revocation support: private keys can be used to decrypt an RPE ciphertext only if they match the decryption policy (defined via attributes encoded into the ciphertext and predicates associated with private keys) and were not revoked by the time the ciphertext was created.
The first challenge in RPE schemes is to preserve privacy for RPE ciphertexts, namely to ensure the \emph{attribute-hiding} property, which is inherent to traditional PE constructions, and which implies the more basic property of payload hiding, used in the context of Attribute-Based Encryption (ABE). We formalize the notion of attribute hiding in the presence of revocation and propose our first RPE construction, called AH-RPE, which is attribute-hiding under the Decision Linear assumption in the standard model. In the AH-RPE scheme we deploy the revocation system of Lewko, Sahai, and Waters (IEEE S\&P 2010), introduced for a simpler setting of broadcast encryption, which we modify for integration with the payload-hiding ABE scheme of Okamoto and Takashima (CRYPTO 2010), after making the latter attribute-hiding by borrowing additional techniques from Lewko, Okamoto, Sahai, Takashima, and Waters (Eurocrypt 2010).
As a second major step we show that RPE schemes may admit more stringent privacy requirements in comparison to PE schemes, especially when it comes to the revocation of private keys. In addition to attribute-hiding, RPE ciphertexts should ideally not leak any information about the revoked keys and by this about the revoked users. We formalize this stronger privacy notion, termed \emph{full hiding}, and propose another RPE scheme, called FH-RPE, which achieves this notion in the setting of ``sender-local revocation'' of Attrapadung and Imai (Cryptography and Coding 2009), under the same assumptions as our AH-RPE construction. Our FH-RPE scheme is also based on the attribute-hiding variant of Okamoto and Takashima's ABE scheme, yet with a different revocation method, in which we integrate the Subset-Cover Framework of Naor, Naor, and Lotspiech (CRYPTO 2001) for better efficiency.
Forward-Secure Hierarchical Predicate Encryption
Secrecy of decryption keys is an important pre-requisite for security of any encryption scheme and compromised private keys must be immediately replaced. \emph{Forward Security (FS)}, introduced to Public Key Encryption (PKE) by Canetti, Halevi, and Katz (Eurocrypt 2003), reduces damage from compromised keys by guaranteeing confidentiality of messages that were encrypted prior to the compromise event. The FS property was also shown to be achievable in (Hierarchical) Identity-Based Encryption (HIBE) by Yao, Fazio, Dodis, and Lysyanskaya (ACM CCS 2004). Yet, for emerging encryption techniques, offering flexible access control to encrypted data, by means of functional relationships between ciphertexts and decryption keys, FS protection was not known to exist.\smallskip
In this paper we introduce FS to the powerful setting of \emph{Hierarchical Predicate Encryption (HPE)}, proposed by Okamoto and Takashima (Asiacrypt 2009). Anticipated applications of FS-HPE schemes can be found in searchable encryption and in fully private communication. Considering the dependencies amongst the concepts, our FS-HPE scheme implies forward-secure flavors of Predicate Encryption and (Hierarchical) Attribute-Based Encryption.\smallskip
Our FS-HPE scheme guarantees forward security for plaintexts and for attributes that are hidden in HPE ciphertexts. It further allows delegation of decrypting abilities at any point in time, independent of FS time evolution. It realizes zero-inner-product predicates and is proven adaptively secure under standard assumptions. As the ``cross-product" approach taken in FS-HIBE is not directly applicable to the HPE setting, our construction resorts to techniques that are specific to existing HPE schemes and extends them with what can be seen as a reminiscent of binary tree encryption from FS-PKE.
An All-In-One Approach to Differential Cryptanalysis for Small Block Ciphers
We present a framework that unifies several standard differential techniques. This unified view allows us to consider many, potentially all, output differences for a given input difference and to combine the information derived from them in an optimal way. We then propose a new attack that implicitly mounts several standard, truncated, impossible, improbable and possible future variants of differential attacks in parallel, and hence allows us to significantly improve upon known differential attacks using the same input difference. To demonstrate the viability of our techniques, we apply them to KATAN-32. In particular, our attack allows us to break 115 rounds of KATAN-32, which is 37 rounds more than previous work. For this, our attack exploits the non-uniformity of the difference distribution after 91 rounds, which is 20 rounds more than the previously best known differential characteristic.
Since our results still cover less than 1/2 of the cipher, they further strengthen our confidence in KATAN-32's resistance against differential attacks.
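The starting object of the attack can be pictured concretely: for a fixed input difference, classical differential cryptanalysis keeps one high-probability output difference, while the all-in-one view keeps the whole conditional distribution and scores key guesses against it. A Python sketch on a single 4-bit S-box (we use PRESENT's S-box as a stand-in; KATAN-32 itself is built differently):

```python
from collections import Counter

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # PRESENT S-box

def output_difference_distribution(din):
    """Pr[output difference = dout | input difference = din] over all
    inputs: one row of the difference distribution table."""
    counts = Counter(SBOX[x] ^ SBOX[x ^ din] for x in range(16))
    return {dout: c / 16 for dout, c in sorted(counts.items())}

row = output_difference_distribution(0x9)
best = max(row, key=row.get)
# A standard differential attack keeps only (0x9 -> best); the all-in-one
# attack scores each key guess against the likelihood of *all* observed
# output differences under `row`, combining the information optimally.
print(best, row)
```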
An Algebraic Fault Attack on the LED Block Cipher
In this paper we propose an attack on block ciphers that combines techniques derived from algebraic and fault-based cryptanalysis. The recently introduced block cipher LED serves as the target of our attack. We show how to construct an algebraic representation of the encryption map and how to cast the side channel information gained from a fault injection into polynomial form. The resulting polynomial system is converted into a logical formula in conjunctive normal form and handed over to a SAT solver for reconstruction of the secret key. Following this approach, we were able to mount a new, successful attack on the version of LED that uses a 64-bit secret key, requiring only a single fault injection.
Differential Fault Analysis on Block Cipher Piccolo
Piccolo is a 64-bit block cipher suitable for constrained environments such as wireless sensor networks. In this paper, we propose differential fault analysis on Piccolo. Based on a random byte fault model, our attack can recover the secret key of Piccolo-80 by using an exhaustive search of 2^{24} and six random byte fault injections on average. It can be simulated on a general PC within a few seconds. In the case of Piccolo-128, we require an exhaustive search of 2^{40} and eight random byte fault injections on average. This attack can be simulated on a general PC within one day. These results are the first known side-channel attack results on Piccolo.
PIRMAP: Efficient Private Information Retrieval for MapReduce
Private Information Retrieval (PIR) allows for the retrieval of bits from a database in a way that hides a user's access pattern from the server. However, its practicality in a cloud computing setting has recently been questioned. In such a setting, PIR's enormous computation and communication overhead is expected to outweigh any cost-saving advantages of cloud computing. This paper presents PIRMAP, a practical, highly efficient protocol for PIR in MapReduce, a widely supported cloud computing API. PIRMAP focuses especially on the retrieval of large files from the cloud, where it achieves close to optimal communication complexity with query times significantly faster than previous schemes. To achieve this, PIRMAP uses a special case of Lipmaa's PIR protocol that allows for optimal parallel computation during the "Map" phase of MapReduce, and homomorphic aggregation in the "Reduce" phase. To improve computational cost, we also use newer, faster homomorphic encryption, which makes our scheme practical for databases of useful size while keeping the communication costs low. PIRMAP has been implemented and tested in Amazon's public cloud with total database sizes of up to 1 TByte. Our performance evaluations show that PIRMAP is more than one order of magnitude cheaper and faster than "trivial PIR" on Amazon and adds only 20% overhead over a theoretically optimal PIR.
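The homomorphic-aggregation idea at PIRMAP's core can be seen in miniature in the classic single-server PIR below (Python; toy Paillier parameters, our simplification: PIRMAP itself organizes this computation as MapReduce jobs over large files and uses a recursive, Lipmaa-style query to shrink communication).

```python
import secrets
from math import gcd

# Toy Paillier keypair (illustrative primes; real use needs >= 2048-bit n).
p_, q_ = 1000003, 1000033
n = p_ * q_
n2 = n * n
lam = (p_ - 1) * (q_ - 1) // gcd(p_ - 1, q_ - 1)   # lcm(p-1, q-1)
g = n + 1

def enc(m):
    r = secrets.randbelow(n - 1) + 1
    return pow(g, m, n2) * pow(r, n, n2) % n2

def dec(c):
    def L(u):                                 # (u - 1)/n for u = 1 mod n
        return (u - 1) // n
    mu = pow(L(pow(g, lam, n2)), -1, n)
    return L(pow(c, lam, n2)) * mu % n

db = [7, 13, 42, 99]                          # server's records
j = 2                                         # client's secret index
query = [enc(int(i == j)) for i in range(len(db))]   # encrypted unit vector
reply = 1
for c, x in zip(query, db):                   # server: homomorphic dot product
    reply = reply * pow(c, x, n2) % n2        # Enc(sum_i e_i * x_i)
assert dec(reply) == db[j]                    # client recovers db[j] only
```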
Cross-Domain Password-Based Authenticated Key Exchange Revisited
We revisit the problem of cross-domain secure communication between two users belonging to different security domains within an open and distributed environment. Existing approaches presuppose that either the users are in possession of public key certificates issued by a trusted certificate authority (CA), or the associated domain authentication servers share a long-term secret key. In this paper, we propose a variety of four-party password-based authenticated key exchange (4PAKE) protocols that take a different approach from previous work. The users are not required to have public key certificates, but simply reuse the login passwords they share with their respective domain authentication servers. On the other hand, the authentication servers, assumed to be part of a standard PKI, act as ephemeral CAs that ``certify'' some key materials that the users can subsequently use to exchange and agree on a session key.
Moreover, we adopt a compositional approach. That is, by treating any secure two-party password-based key exchange (2PAKE) protocol and two-party asymmetric-key/symmetric-key based key exchange (2AAKE/2SAKE) protocol as black boxes, we combine them to obtain generic and provably secure 4PAKE protocols. We also show that one can derive a 4PAKE protocol from just a 2PAKE protocol.
On second-order nonlinearity and maximum algebraic immunity of some bent functions in $\mathcal{P}S^+$
In this paper, by modifying a subclass of bent functions in
$\mathcal P S_{ap}$, we construct another subclass of bent functions
in $\mathcal P S^+$ with maximum algebraic degree. We demonstrate
that the algebraic immunity of the constructed functions is maximum.
The result is proved by using the well-known conjecture proposed by Tu and Deng (Des. Codes Cryptogr. 60(1), pp. 1-14, 2011), which has recently been proved by Cohen and Flori (http://eprint.iacr.org/2011/400.pdf). Finally, we identify a class of $\mathcal{D}_0$ type bent functions, constructed by modifying Dillon functions, whose lower bound on second-order nonlinearity is very high.
A New Efficient Authenticated ID-Based Group Key Agreement Protocol
Group key agreement (GKA) protocols play a central role in constructing secure multicast channels. These protocols are algorithms that describe how a group of parties communicating over a public network can derive a common secret key. ID-based authenticated group key agreement (AGKA) cryptosystems based on bilinear pairings are an actively researched subject because of the simplicity of their public key management and their efficiency. A key agreement protocol is a good way to establish a common session key for communication, but in group communication we not only need to establish a common session key, we also need to handle changes in group membership. In this paper we propose a protocol based on the Weil pairing, ID-based authentication, and a complete ternary tree architecture. We show that our protocol satisfies all known security requirements and is therefore more secure and efficient than the group key exchange protocols we compare against in this article.
An ID-Based Key Agreement Protocol Based on ECC Among Users of Separate Networks
In this article we propose an identity-based key agreement protocol based on elliptic curve cryptography (ECC) between users of different networks with independent private key generators (PKGs). Our protocol builds on Cao et al.'s protocol, proposed in 2010, in which elliptic curves are used instead of bilinear pairings to construct an ID-based key agreement protocol. Our protocol extends Cao et al.'s protocol to situations where two users of independent organizations or networks with separate servers (named PKGs in this article, because their main duty is generating private keys for the users) want to share a secret key via an insecure link. We also prove the security of the protocol in the random oracle model.
A Certificateless Multiple-key Agreement Protocol Based on Bilinear Pairings
Certificateless cryptosystems were proposed by Al-Riyami and Paterson in 2003 [1] to solve problems of public key cryptosystems based on PKI and of those based on identity. Up to now, various types of certificateless cryptographic primitives have been designed, such as encryption functions, signature schemes, and key agreement protocols. But to the best of our knowledge, multiple-key agreement protocols have not yet been proposed based on certificateless cryptosystems. So in this paper we propose a certificateless authenticated multiple-key agreement protocol with bilinear pairings.
ID Based Signcryption Scheme in Standard Model
Designing an ID based signcryption scheme in the standard model is among the most interesting and important problems in cryptography. However, all the existing systems in the ID based setting, in the standard model, lack either the unforgeability property or the indistinguishability property or both of them. In this paper, we present the first provably secure ID based signcryption scheme in the standard model with both these properties. The unforgeability property of this scheme is based on the hardness of the Computational Diffie-Hellman problem and the indistinguishability property of this scheme is based on the hardness of the Decisional Bilinear Diffie-Hellman problem. Our scheme is strongly unforgeable in the strong attack mode called insider security. Moreover, our scheme possesses an interesting property called public verifiability of the ciphertext. Our scheme cleverly integrates a modified version of Waters' IBE and a suitably modified version of the ID based signature scheme in the standard model proposed by Paterson et al. However, our security reductions are more efficient. Specifically, while the security reduction for indistinguishability is similar to the bound of Waters' scheme, the unforgeability reduction is far better than the bound for Paterson et al.'s scheme.
Analysis and Construction of Efficient RFID Authentication Protocol with Backward Privacy
Privacy of RFID systems is receiving increasing attention in the RFID community and is an important security requirement for RFID systems. Backward privacy means the adversary cannot trace the tag later even if he has revealed the internal state of the tag at some earlier time. In this paper, we analyze two recently proposed RFID authentication schemes: the Randomized GPS and Randomized Hashed GPS schemes. We show that neither provides backward privacy in the Juels and Weis privacy model, which allows the adversary to know whether the reader authenticates the tag successfully or not. In addition, we present a new protocol, called Challenge-Hiding GPS, based on the Schnorr identification scheme. The challenge is hidden from eavesdroppers using the technique of the Diffie-Hellman key agreement protocol. The new protocol satisfies backward privacy, and it has lower communication overhead and almost the same computation cost compared with the two schemes analyzed.
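Since the proposal builds on Schnorr identification, a bare-bones Python sketch of that base protocol may help (toy group sizes; the challenge is shown in the clear here, which is exactly the part Challenge-Hiding GPS replaces with a Diffie-Hellman-style hidden challenge):

```python
import secrets

q = 1019                   # prime order of the subgroup
p = 2 * q + 1              # 2039, also prime
g = 4                      # generator of the order-q subgroup

x = secrets.randbelow(q)   # tag's long-term secret key
y = pow(g, x, p)           # corresponding public key

k = secrets.randbelow(q)   # prover's ephemeral nonce
t = pow(g, k, p)           # 1) commitment: prover -> verifier
c = secrets.randbelow(q)   # 2) challenge:  verifier -> prover
s = (k + c * x) % q        # 3) response:   prover -> verifier

# Verification: g^s = g^k * (g^x)^c = t * y^c (mod p)
assert pow(g, s, p) == t * pow(y, c, p) % p
```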
Regular Ternary Algorithm for Scalar Multiplication on Elliptic Curves over Finite Fields of Characteristic Three
In this paper we propose an efficient and regular ternary algorithm for scalar multiplication on elliptic curves over finite fields of characteristic three.
This method is based on full signed ternary expansion of a scalar to be multiplied. The cost per bit of this algorithm is lower than that of all previous ones.
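The expansion the algorithm is built on can be sketched quickly (Python; an illustrative, non-hardened version: the paper's point is a regular schedule of triplings and additions, with the tripling being cheap on characteristic-three curves):

```python
def signed_ternary(k):
    """Digits d_i in {-1, 0, 1}, least significant first, with
    k = sum_i d_i * 3^i (balanced ternary recoding)."""
    digits = []
    while k:
        r = k % 3
        if r == 2:
            r, k = -1, k + 1
        digits.append(r)
        k //= 3
    return digits

def triple_and_add(k, P, triple, add, neg, identity):
    """Left-to-right scalar multiplication over the signed ternary digits;
    the group operations are passed in so the sketch stays curve-agnostic."""
    Q = identity
    for d in reversed(signed_ternary(k)):
        Q = triple(Q)
        if d:
            Q = add(Q, P if d == 1 else neg(P))
    return Q

# Sanity check with integers standing in for curve points:
assert triple_and_add(1234, 1, lambda x: 3 * x, lambda a, b: a + b,
                      lambda x: -x, 0) == 1234
```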
Wide Strong Private RFID Identification based on Zero-Knowledge
We present the first wide-forward-insider and wide-strong RFID identification protocols that are based on zero-knowledge. Until now these notions have only been achieved by schemes based on IND-CCA2 encryption. We discuss why wide-forward-insider privacy is sufficient for most practical applications. Rigorous proofs in the standard model are provided for the security and privacy properties of our protocols. Furthermore, our protocols are the most efficient solution presented in the literature. Using only Elliptic Curve Cryptography (ECC), the required circuit area can be minimized such that our protocol even fits on small RFID tags. Concerning computation on the tag, we only require two scalar-EC point multiplications for the wide-forward-insider protocol.
The Arithmetic Codex
We introduce the notion of {\em arithmetic codex}, or {\em codex} for short.
It encompasses several well-established notions from cryptography (arithmetic secret sharing schemes, i.e., enjoying additive as well as multiplicative properties) and algebraic complexity theory (bilinear complexity of multiplication) in a natural mathematical framework.
Arithmetic secret sharing schemes have important applications to secure multiparty computation and even to {\em two}-party cryptography. Interestingly, several recent applications to two-party cryptography rely crucially on the existing results on ``{\em asymptotically good} families'' of suitable such schemes. Moreover, the construction of these schemes requires asymptotically good towers of function fields over finite fields: no elementary (probabilistic) constructions are known in these cases. Besides introducing the notion, we discuss some of the constructions, as well as some limitations.
New cryptographic constructions using generalized learning with errors problem
We present a generalized learning with errors (LWE) problem, which is essentially a simple and direct extension of the original LWE problem to the case of matrices.
Then we use this new version of the LWE problem, which we call the matrix LWE problem, to build new cryptographic schemes, including a new scalable key distribution scheme, a new key exchange scheme, and a new simple identity-based encryption scheme.
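A sample generator for the matrix variant, as we read the description (Python with NumPy; parameters are illustrative, not secure): instead of one noisy inner product per sample, a whole matrix product is perturbed by small errors.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q, sigma = 16, 24, 4093, 3.0   # illustrative parameters only

def matrix_lwe_sample(S):
    """Return (A, B = A*S + E mod q): the matrix analogue of an LWE
    sample, with secret matrix S and small rounded-Gaussian error E.
    The decision problem is to distinguish (A, B) from uniform."""
    A = rng.integers(0, q, size=(m, n))
    E = np.rint(rng.normal(0.0, sigma, size=(m, S.shape[1]))).astype(np.int64)
    return A, (A @ S + E) % q

S = rng.integers(0, q, size=(n, n))  # the secret
A, B = matrix_lwe_sample(S)
```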
Cryptanalysis of Sood et al.’s Authentication Scheme using Smart Cards
In 2010, Sood-Sarje-Singh proposed a dynamic ID-based remote user authentication scheme and claimed that their scheme is more secure than Das et al.’s scheme and Liao et al.’s scheme. However, we show that Sood et al.’s scheme is still vulnerable to malicious user attack, man-in-the-middle attack, stolen smart card attack, off-line ID guessing attack, impersonation attack, and server spoofing attack, making the scheme unfeasible for practical implementation.
CCBKE – Session Key Negotiation for Fast and Secure Scheduling of Scientific Applications in Cloud Computing
Instead of purchasing and maintaining their own computing infrastructure, scientists can now run data-intensive scientific applications in a hybrid environment such as cloud computing by exploiting its vast storage and computation capabilities. During the scheduling of such scientific applications for execution, various computation data flows will happen between the controller and computing server instances. Amongst various quality-of-service (QoS) metrics, data security is always one of the greatest concerns to scientists because their data may be intercepted or stolen by malicious parties during those data flows, especially for less secure hybrid cloud systems. An existing typical method for addressing this issue is to apply the Internet Key Exchange (IKE) scheme to generate and exchange session keys, and then to apply these keys to perform symmetric-key encryption of those data flows. However, the IKE scheme suffers from low efficiency due to the poor performance of its asymmetric-key cryptographic operations over large amounts of data and its high-density operations, which are exactly the characteristics of scientific applications. In this paper, we propose Cloud Computing Background Key Exchange (CCBKE), a novel authenticated key exchange scheme that aims at efficient security-aware scheduling of scientific applications. Our scheme is designed based on a randomness-reuse strategy and the Internet Key Exchange (IKE) scheme. Theoretical analyses and experimental results demonstrate that, compared with the IKE scheme, our CCBKE scheme can significantly improve efficiency by dramatically reducing time consumption and computation load without sacrificing the level of security.
Functional Encryption for Regular Languages
We provide a functional encryption system that supports functionality
for regular languages. In our system a secret key is associated with a Deterministic Finite Automaton (DFA) M. A ciphertext CT encrypts a message m and is associated with an arbitrary-length string w. A user is able to decrypt the ciphertext CT if and only if the DFA M associated with his private key accepts the string w.
Compared with other known functional encryption systems, this is
the first system where the functionality is capable of recognizing an
unbounded language. For example, in (Key-Policy) Attribute-Based
Encryption (ABE) a private key SK is associated with a single
boolean formula which operates over a fixed number of
boolean variables from the ciphertext. In contrast, in our system a
DFA M will meaningfully operate over an arbitrary length input w.
We propose a system that utilizes bilinear groups. Our solution is a
"public index" system, where the message m is hidden, but the
string w is not. We prove security in the selective model under a
variant of the decision l-Bilinear Diffie-Hellman Exponent
(BDHE) assumption that we call the decision l-Expanded BDHE problem.
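The functionality itself (as opposed to its cryptographic realization) is easy to state in code; the tiny Python sketch below is just the decryption predicate: a key for machine M opens exactly the ciphertexts whose public string w is accepted by M, for w of any length.

```python
def dfa_accepts(start, accept, delta, w):
    """Run the DFA given by transition map `delta` on string w; a secret
    key for this DFA decrypts a ciphertext labeled with w iff this
    function returns True."""
    state = start
    for symbol in w:
        state = delta[(state, symbol)]
    return state in accept

# DFA for "even number of 1s" -- a single key handles all lengths of w:
delta = {(0, '0'): 0, (0, '1'): 1, (1, '0'): 1, (1, '1'): 0}
assert dfa_accepts(0, {0}, delta, '1001')       # two 1s: decryptable
assert not dfa_accepts(0, {0}, delta, '1011')   # three 1s: not decryptable
```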
Formalization of Information-Theoretic Security for Encryption and Key Agreement, Revisited
In this paper, we revisit formalizations of information-theoretic security for symmetric-key encryption and key agreement protocols which are very fundamental primitives in cryptography. In general, we can formalize information-theoretic security in various ways: some of them can be formalized as stand-alone security by extending (or relaxing) Shannon's perfect secrecy; some of them can be done based on composable security. Then, a natural question about this is: what is the gap between the formalizations? To answer the question, we investigate relationships between several formalizations of information-theoretic security for symmetric-key encryption and key agreement protocols. Specifically, for symmetric-key encryption protocols which may have decryption-errors, we deal with the following formalizations of security: formalizations extended (or relaxed) from Shannon's perfect secrecy by using mutual information and statistical distance; information-theoretic analogue of indistinguishability by Goldwasser and Micali;
and the ones of composable security by Maurer et al. and Canetti.
Then, we explicitly show that those formalizations are essentially equivalent under both one-time and multiple-use models. Under both models, we also derive lower bounds on the adversary's (or distinguisher's) advantage and the secret-key size required under all of the above formalizations. Although some of them may already be known, we can explicitly derive them all at once through our relationships between the formalizations. In addition, we briefly observe impossibility results which easily follow from the lower bounds. Similar results are also shown for key agreement protocols, which may have agreement errors.
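For orientation, two of the stand-alone flavors being compared can be written in standard form (our phrasing; the paper's definitions additionally handle decryption errors and multiple-use settings). Shannon's perfect secrecy for a message $M$ and ciphertext $C$ demands zero mutual information, $I(M;C)=0$, i.e., $\Pr[M=m \mid C=c]=\Pr[M=m]$ for all $m,c$; the statistical-distance relaxation instead demands $\frac{1}{2}\sum_{m,c}\left|\Pr[M=m,C=c]-\Pr[M=m]\Pr[C=c]\right|\le\epsilon$; and the indistinguishability-style notion bounds every distinguisher's advantage in telling apart the ciphertexts of any two messages by $\epsilon$.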
On the Joint Security of Signature and Encryption Schemes under Randomness Reuse: Efficiency and Security Amplification
We extend the work of Bellare, Boldyreva and Staddon on the systematic analysis of randomness reuse in multi-recipient encryption schemes to the case where randomness is reused across different cryptographic primitives. We find that, through the additional binding introduced by randomness reuse, one can actually obtain a security amplification with respect to the standard black-box compositions and achieve a stronger level of security. We introduce stronger notions of security for encryption and signatures,
where challenge messages can depend in a restricted way on the random coins used in encryption, and show that two variants of the KEM/DEM paradigm give rise to encryption schemes that meet this enhanced notion of security. We obtain a very efficient signcryption scheme that is
secure against insider attackers without random oracles.
Last updated: 2013-07-22
A Strongly Secure Authenticated Key Exchange Protocol from Bilinear Groups without Random Oracles
Since the introduction of the extended Canetti-Krawczyk (eCK) security model for two-party key exchange, many protocols have been proposed to provide eCK security. However, most of those protocols are provably secure only in the random oracle model, or rely on a special design technique known as the NAXOS trick. In contrast to previous schemes, we present an eCK-secure protocol in the standard model, without the NAXOS trick and without the knowledge-of-secret-key (KOSK) assumption for public key registration. The security proof of our scheme is based on standard assumptions: collision-resistant hash functions, the Bilinear Decision Diffie-Hellman (BDDH) and Decision Linear Diffie-Hellman (DLIN) assumptions, and pseudo-random functions with a pairwise-independent random source. Although our proposed protocol is based on bilinear groups, it does not require any pairing operations during protocol execution.
Several Weak Bit-Commitments Using Seal-Once Tamper-Evident Devices
Following both theoretical and practical arguments, we construct UC-secure bit-commitment protocols
that place their strength on the sender’s side and are built using tamper-evident devices, e.g., a type of distinguishable, sealed envelopes. We show that by using a second formalisation of tamper-evident distinguishable
envelopes we can attain better security guarantees, i.e., EUC-security. We show the relations between several flavours of weak bit-commitments, bit-commitments and distinguishable tamper-evident envelopes. We
focus, at all points, on the lightweight nature of the underlying mechanisms and on the end-to-end human
verifiability.
All-But-Many Encryption: A New Framework for Fully-Equipped UC Commitments
We present a general framework for constructing
non-interactive universally composable (UC) commitment schemes that are secure against adaptive adversaries
in the non-erasure model under a re-usable common reference string.
Previously, such ``fully-equipped'' UC commitment schemes have been known only from [CF01,CLOS02], with a strict expansion factor of O(k): to commit to L bits, communication strictly requires O(Lk) bits, where k denotes the security parameter. The efficient construction of a fully-equipped UC commitment scheme has been a long-standing open problem.
We introduce a new abstraction, called all-but-many encryption (ABME), and prove that it captures fully-equipped UC commitment schemes. We propose the first fully-equipped UC commitment scheme with optimal expansion factor O(1), obtained from an ABME scheme based on the DCR assumption. We also provide an all-but-many lossy trapdoor function (ABM-LTF) [Hof12] from our DCR-based ABME scheme, with a better lossy rate than [Hof12].
Multiparty Proximity Testing with Dishonest Majority from Equality Testing
Motivated by the recent widespread emergence of location-based services (LBS) over mobile devices, we explore efficient protocols for proximity-testing. Such protocols allow a group of friends to discover if they are all close to each other in some physical location, without revealing their individual locations to each other. We focus on hand-held devices and aim at protocols with very small communication complexity and a small number of rounds.
The proximity-testing problem can be reduced to the private equality testing (PET) problem, in which parties find out whether or not they hold the same input (drawn from a low-entropy distribution) without revealing any other information about their inputs to each other. While previous works analyze the 2-party PET special case (and its LBS application), in this work we consider highly-efficient schemes for the multiparty case with no honest majority. We provide schemes for both a direct-communication setting and a setting with an honest-but-curious mediating server that does not learn the users' inputs. Our most efficient scheme takes 2 rounds, where in each round each user sends only a couple of ElGamal ciphertexts.
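The classical PET blueprint behind such protocols can be sketched with exponential ElGamal: both inputs are encrypted, their difference is computed homomorphically and blinded by a random exponent, and the decryption is the group identity exactly when the inputs match. The sketch below is a stripped-down, two-party, honest-but-curious illustration with a single decryption key and toy parameter choices; the paper's multiparty, dishonest-majority protocols are substantially more involved.

```python
import random

p = 2 ** 127 - 1          # a Mersenne prime; subgroup details elided
g = 3

def keygen():
    x = random.randrange(2, p - 1)
    return x, pow(g, x, p)

def encrypt(pk, m):
    """Exponential ElGamal: encrypt g^m so ciphertexts add in the exponent."""
    k = random.randrange(2, p - 1)
    return pow(g, k, p), (pow(g, m, p) * pow(pk, k, p)) % p

def decrypt_is_zero(sk, ct):
    c1, c2 = ct
    return (c2 * pow(c1, p - 1 - sk, p)) % p == 1   # g^0 = 1 iff m == 0

def pet(pk, sk, a, b):
    c1a, c2a = encrypt(pk, a)
    c1b, c2b = encrypt(pk, b)
    r = random.randrange(1, p - 1)
    # E(a) / E(b) encrypts a - b; raising to r blinds any non-zero difference.
    diff = (pow(c1a * pow(c1b, p - 2, p) % p, r, p),
            pow(c2a * pow(c2b, p - 2, p) % p, r, p))
    return decrypt_is_zero(sk, diff)

sk, pk = keygen()
assert pet(pk, sk, 42, 42) is True
assert pet(pk, sk, 42, 43) is False
```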
Distributed Key Generation in the Wild
Distributed key generation (DKG) has been studied extensively in the cryptographic literature. However, it has never been examined outside of the synchronous setting, and the known DKG protocols cannot guarantee safety or liveness over the Internet.
In this work, we present the first realistic DKG protocol for use over the Internet. We propose a practical system model for the Internet and define an efficient verifiable secret sharing (VSS) scheme in it. We observe the necessity of Byzantine agreement for asynchronous DKG and analyze the difficulty of using a randomized protocol for it. Using our VSS scheme and a leader-based agreement protocol, we then design a provably secure DKG protocol. We also consider and achieve cryptographic properties such as uniform randomness of the shared secret and compare static versus adaptive adversary models. Finally, we implement our DKG protocol, and establish its efficiency and reliability by extensively testing it on the PlanetLab platform. Counter to a general non-scalability perception about asynchronous systems, our experiments demonstrate that our asynchronous DKG protocol scales well with the system size and it is suitable for realizing multiparty computation and threshold cryptography over the Internet.
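For background, the classical (synchronous) Feldman VSS conveys the flavour of verifiable secret sharing, although the paper necessarily defines its own VSS for the asynchronous Internet model; the sketch below is that textbook scheme with toy parameters, not the paper's construction.

```python
import random

q = 1019                      # small prime: order of the exponent group
p = 2 * q + 1                 # p = 2039 is also prime, so Z_p^* has a
g = 4                         # subgroup of order q; 4 = 2^2 generates it

def deal(secret, t, n):
    """Share `secret` with threshold t among n parties."""
    coeffs = [secret] + [random.randrange(q) for _ in range(t)]
    shares = {i: sum(c * pow(i, j, q) for j, c in enumerate(coeffs)) % q
              for i in range(1, n + 1)}
    commitments = [pow(g, c, p) for c in coeffs]       # published by dealer
    return shares, commitments

def verify(i, share, commitments):
    """Check g^share == prod_j C_j^(i^j) (mod p)."""
    lhs = pow(g, share, p)
    rhs = 1
    for j, C in enumerate(commitments):
        rhs = rhs * pow(C, pow(i, j, q), p) % p
    return lhs == rhs

shares, comms = deal(secret=123, t=2, n=5)
assert all(verify(i, s, comms) for i, s in shares.items())
```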
Combinatorial Solutions Providing Improved Security for the Generalized Russian Cards Problem
We present the first formal mathematical presentation of the generalized Russian cards problem, and provide rigorous security definitions that capture both basic and extended versions of weak and perfect security notions. In the generalized Russian cards problem, three players, Alice, Bob, and Cathy, are dealt a deck of $n$ cards, each given $a$, $b$, and $c$ cards, respectively. The goal is for Alice and Bob to learn each other's hands via public communication, without Cathy learning the fate of any particular card. The basic idea is that Alice announces a set of possible hands she might hold, and Bob, using knowledge of his own hand, should be able to learn Alice's cards from this announcement, but Cathy should not. Using a combinatorial approach, we are able to give a nice characterization of informative strategies (i.e., strategies allowing Bob to learn Alice's hand), having optimal communication complexity, namely the set of possible hands Alice announces must be equivalent to a large set of $t-(n, a, 1)$-designs, where $t=a-c$. We also provide some interesting necessary conditions for certain types of deals to be simultaneously informative and secure. That is, for deals satisfying $c = a-d$ for some $d \geq 2$, where $b \geq d-1$ and the strategy is assumed to satisfy a strong version of security (namely perfect $(d-1)$-security), we show that $a = d+1$ and hence $c=1$. We also give a precise characterization of informative and perfectly $(d-1)$-secure deals of the form $(d+1, b, 1)$ satisfying $b \geq d-1$ involving $d-(n, d+1, 1)$-designs.
How to Store some Secrets
This paper introduces a special type of symmetric cryptosystem called multi-encryption scheme. It allows users to encrypt multiple plaintexts into a single ciphertext. Each plaintext is protected with its own secret key, meaning that they can be decrypted individually by applying the decryption function with the corresponding key to the ciphertext. Compared to encrypting the ciphertexts one-by-one using a standard symmetric cryptosystem, the main advantage of using a multi-encryption scheme is the no-search property, which guarantees that knowing the key is sufficient for decrypting a single plaintext. We show how to construct a multi-encryption scheme based on polynomials over finite fields. A possible application area is coercion-resistant electronic voting. To ensure a strong form of privacy, voters are equipped with multiple fake credentials, which are indistinguishable from the proper one. While theoretically sound, this requires a voter to perfectly recall multiple lengthy random numbers, and to know which of them is the proper one. To ensure 100\% recall, users need to manage these numbers and keep them secret. A multi-encryption scheme is an elegant solution for this problem.
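One way to picture the polynomial idea (a hedged illustration under our own assumptions, not necessarily the paper's exact construction): let the ciphertext be the coefficient vector of the unique polynomial through the points (H(key_i), m_i) over a prime field. Each key then decrypts exactly its own plaintext by evaluating the polynomial, and knowing a key is by itself sufficient to decrypt, with no searching.

```python
import hashlib

P = 2 ** 61 - 1   # prime field; toy parameter

def H(key):
    return int.from_bytes(hashlib.sha256(key).digest(), "big") % P

def interpolate(points):
    """Lagrange interpolation over GF(P): coefficients, low degree first."""
    coeffs = [0] * len(points)
    for i, (xi, yi) in enumerate(points):
        # basis polynomial l_i(x) = prod_{j != i} (x - xj) / (xi - xj)
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            basis = [(a - xj * b) % P for a, b in
                     zip([0] + basis, basis + [0])]
            denom = denom * (xi - xj) % P
        scale = yi * pow(denom, P - 2, P) % P
        coeffs = [(c + scale * b) % P for c, b in zip(coeffs, basis)]
    return coeffs

def multi_encrypt(pairs):                     # pairs: [(key, message_int)]
    return interpolate([(H(k), m) for k, m in pairs])

def decrypt(ciphertext, key):
    x, y = H(key), 0
    for c in reversed(ciphertext):            # Horner evaluation at H(key)
        y = (y * x + c) % P
    return y

ct = multi_encrypt([(b"alice", 7), (b"bob", 11), (b"carol", 13)])
assert decrypt(ct, b"bob") == 11
```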
Infiltrate the Vault: Security Analysis and Decryption of Lion Full Disk Encryption
With the launch of Mac OS X 10.7 (Lion), Apple introduced a volume encryption mechanism known as FileVault 2. Apple only disclosed marketing aspects of the closed-source software, e.g., its use of the AES-XTS tweakable encryption, but a publicly available security evaluation and detailed description were unavailable until now.
We have performed an extensive analysis of FileVault 2 and we have been able to find all the algorithms and parameters needed to successfully read an encrypted volume. This allows us to perform forensic investigations on encrypted volumes using our own tools.
In this paper we present the architecture of FileVault 2, giving details of the key derivation, encryption process and metadata structures needed to perform the volume decryption. Besides the analysis of the system, we have also built a library that can mount a volume encrypted with FileVault 2. As a contribution to the research and forensic communities we have made this library open source.
Additionally, we present an informal security evaluation of the system and comment on some of the design and implementation features. Among others we analyze the random number generator used to create the recovery password. We have also analyzed the entropy of each 512-byte block in the encrypted volume and discovered that part of the user data was left unencrypted.
Optimal Lower Bound for Differentially Private Multi-Party Aggregation
We consider distributed private data analysis,
where $n$ parties each holding some sensitive data wish
to compute some aggregate statistics over all parties' data.
We prove a tight lower bound for the private distributed summation
problem. Our lower bound is strictly stronger than
the prior lower-bound result by Beimel, Nissim, and Omri
published in CRYPTO 2008.
In particular, we show that any $n$-party protocol
computing the sum with sparse communication graph
must incur an additive error
of $\Omega(\sqrt{n})$
with constant probability, in order
to defend against potential coalitions of compromised users.
Furthermore, we show that in the client-server communication model,
where all users communicate solely with an untrusted server,
the additive error must be $\Omega(\sqrt{n})$, regardless of
the number of messages or rounds.
Both of our lower-bounds, for the general setting and the
client-to-server
communication model, are strictly stronger than those of
Beimel, Nissim and Omri, since we remove the assumption
on the number of rounds (and
also the number of messages in the client-to-server
communication model).
Our lower bounds generalize to the
$(\epsilon, \delta)$ differential
privacy notion, for reasonably small values of $\delta$.
Last updated: 2012-09-03
New Preimage Attacks on Hash Modes of AES-256
We study the slow diffusion of the AES key schedule for 256-bit keys and find a weakness that can be used in a preimage attack on its Davies-Meyer mode. Our preimage attack works for 8 rounds of AES-256 with a computational complexity of $2^{124.9}$, while the best previous attack works only for 7 rounds of AES-256. It also extends to a preimage attack on some well-known double-block-length hash modes when the underlying block cipher is 8-round AES-256, with a computational complexity of $2^{252.9}$.
Simultaneous hashing of multiple messages
We describe a method for efficiently hashing multiple messages of different lengths. Such computations occur in various scenarios, one of which is when an operating system checks the integrity of its components during boot time. These tasks can gain performance by parallelizing the computations and using SIMD architectures. For such scenarios, we compare the performance of a new 4-buffers SHA-256 S-HASH implementation to that of standard serial hashing. Our results are measured on the 2nd Generation Intel® Core™ Processor, and demonstrate SHA-256 processing at effectively ~5.2 Cycles per Byte when hashing from any of the three cache levels or from system memory. This represents a speedup of 3.42x over OpenSSL (1.0.1), and of 2.25x over the recent and faster n-SMS method. For hashing from a disk, we show an effective rate of ~6.73 Cycles/Byte, which is almost 3 times faster than OpenSSL (1.0.1) under the same conditions. These results indicate that for some usage models, SHA-256 is significantly faster than commonly perceived.
Improved Broadcast Encryption Scheme with Constant-Size Ciphertext
The Boneh-Gentry-Waters (BGW) scheme is one of the most efficient broadcast encryption schemes with respect to overhead size. This performance relies on the use of a pairing, so the protocol can also benefit from improvements in pairing-based cryptography. The ciphertext is of constant size, whatever the proportion of revoked users. The main remaining constraint is the computation time at the receiver's end, as it depends on the number of revoked users. In this paper we describe two modifications to improve the bandwidth and time complexity of BGW. First, we rewrite the protocol and its security proof with an asymmetric pairing over Barreto-Naehrig (BN) curves instead of a symmetric one over supersingular curves. This modification leads to a practical gain of 60% in speed and 84% in bandwidth. The second tweak reduces the computation time from $O(n-r)$ to $\min(O(r),O(n-r))$ in the worst case (and better on average). We give performance measurements of our implementation of the modified protocol at a 128-bit security level on a smartphone.
Factorisation of RSA-704 with CADO-NFS
We give details of the factorization of RSA-704 with CADO-NFS. This is a record computation using publicly available software tools. The aim of this experiment was to stress CADO-NFS, which was originally designed for 512-bit factorizations, on larger inputs, and to identify possible room for improvement.
Comprehensive Evaluation of High-Speed and Medium-Speed Implementations of Five SHA-3 Finalists Using Xilinx and Altera FPGAs
In this paper we present a comprehensive comparison of all Round 3 SHA-3 candidates and the current standard SHA-2 from the point of view of hardware performance in modern FPGAs. Each algorithm is implemented using multiple architectures based on the concepts of iteration, folding, unrolling, pipelining, and circuit replication. Trade-offs between speed and area are investigated, and the best architecture from the point of view of the throughput to area ratio is identified. Finally, all algorithms are ranked based on their overall performance in FPGAs. The characteristic features of each algorithm important from the point of view of its implementation in hardware are identified.
On Continual Leakage of Discrete Log Representations
Let $\G$ be a group of prime order $q$, and let $g_1,\ldots,g_n$ be
random elements of $\G$. We say that a vector $\vecx =
(x_1,\ldots,x_n)\in \Z_q^n$ is a {\em discrete log representation} of
some element $y\in\G$ (with respect to $g_1,\ldots,g_n$) if
$g_1^{x_1}\cdots g_n^{x_n} = y$. Any element $y$ has many discrete log
representations, forming an affine subspace of $\Z_q^n$. We show
that these representations have a nice {\em continuous leakage-resilience} property as follows. Assume some attacker
$\A(g_1,\ldots,g_n,y)$ can repeatedly learn $L$ bits of information on arbitrarily many random representations of $y$.
That is, $\A$ adaptively chooses polynomially many leakage functions
$f_i:\Z_q^n\rightarrow \{0,1\}^L$, and learns the value
$f_i(\vecx_i)$, where $\vecx_i$ is a {\em fresh and random}
discrete log representation of $y$. $\A$ wins the game if it eventually outputs a
valid discrete log representation $\vecx^*$ of $y$. We show that if
the discrete log assumption holds in $\G$, then no polynomially
bounded $\A$ can win this game with non-negligible probability, as
long as the leakage on each representation is bounded by $L\approx (n-2)\log q = (1-\frac{2}{n})\cdot |\vecx|$.
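A toy instantiation of these definitions (schoolbook parameters, purely to make ``representation'' and re-randomization concrete): a dealer who sampled $g_i = g^{a_i}$ can hand out fresh random representations of $y$ by adding random vectors from the kernel $\{d : \sum_i a_i d_i \equiv 0 \bmod q\}$. The dealer's knowledge of the $a_i$ is a convenience of this illustration, not part of the security game.

```python
import random

q = 1019                       # prime order of the group (toy size)
p = 2 * q + 1                  # p = 2039 prime; 4 generates the order-q subgroup
g = 4

n = 4
a = [random.randrange(1, q) for _ in range(n)]       # discrete logs of the g_i
gs = [pow(g, ai, p) for ai in a]

x = [random.randrange(q) for _ in range(n)]          # one representation of y
y = 1
for gi, xi in zip(gs, x):
    y = y * pow(gi, xi, p) % p

def fresh_representation(x):
    """Return x + d with <a, d> = 0 (mod q): same y, new representation."""
    d = [random.randrange(q) for _ in range(n - 1)]
    # choose the last coordinate so the inner product with a cancels
    last = -sum(ai * di for ai, di in zip(a, d)) * pow(a[-1], q - 2, q) % q
    d.append(last)
    return [(xi + di) % q for xi, di in zip(x, d)]

x2 = fresh_representation(x)
y2 = 1
for gi, xi in zip(gs, x2):
    y2 = y2 * pow(gi, xi, p) % p
assert y2 == y and x2 != x     # a genuinely different representation of y
```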
As direct extensions of this property, we design very simple continuous leakage-resilient (CLR) one-way function (OWF) and public-key encryption (PKE) schemes in the so called ``invisible key update'' model introduced by Alwen et al. at CRYPTO'09. Our CLR-OWF is based on the standard Discrete Log assumption and our CLR-PKE is based on the standard Decisional Diffie-Hellman assumption. Prior to our work, such schemes could only be constructed in groups with a bilinear pairing.
As another surprising application, we show how to design the first leakage-resilient {\em traitor tracing} scheme, where no attacker, getting the secret keys of a small subset of decoders (called ``traitors'') {\em and} bounded leakage on the secret keys of all other decoders, can create a valid decryption key which will not be traced back to at least one of the traitors.
Securing Circuits Against Constant-Rate Tampering
We present a compiler that converts any circuit into one that remains secure even if a constant fraction of its wires are tampered with. Following the seminal work of Ishai et al. (Eurocrypt 2006), we consider adversaries who may choose an arbitrary set of wires to corrupt, and may set each such wire to 0 or to 1, or may toggle it. We prove that such adversaries, who continuously tamper with the circuit, can learn at most logarithmically many bits of secret information (in addition to black-box access to the circuit). Our results are information-theoretic.
Public Auditing for Ensuring Cloud Data Storage Security With Zero Knowledge Privacy
In a cloud storage service, clients upload their data together with authentication information to a cloud storage server. To ensure the availability and integrity of clients' stored data, the cloud server (CS) must prove to a verifier that it is actually storing all of the client's data unchanged. Moreover, enabling public auditability for cloud storage is of critical importance to users with constrained computing resources, who can resort to a third-party auditor (TPA) to check the integrity of outsourced data. However, most existing proofs-of-retrievability and proof-of-data-possession schemes do not consider data privacy: zero-knowledge privacy requires that neither the TPA nor an adversary can deduce any information about the file data from the auditing system. In this paper, after giving a new construction of a recently proposed cryptographic primitive called aggregatable signature-based broadcast (ASBB) encryption, we present an efficient public auditing scheme with zero-knowledge privacy. The new scheme is as efficient as the scheme of Shacham and Waters, which does not consider privacy, and is secure in the random oracle model.
Zero-Knowledge Proofs with Low Amortized Communication from Lattice Assumptions
We construct zero-knowledge proofs of plaintext knowledge (PoPK) and correct multiplication (PoPC) for the Regev encryption scheme with low amortized communication complexity. Previous constructions of both PoPK and PoPC had communication cost linear in the size of the public key (roughly quadratic in the lattice dimension, ignoring logarithmic factors). Furthermore, previous constructions of PoPK suffered from one of the following weaknesses: either the message and randomness space were restricted, or there was a super-polynomial gap between the size of the message and randomness that an honest prover chose and the size of which an accepting verifier would be convinced. The latter weakness was also present in the existent PoPC protocols.
In contrast, the total communication cost of O(n) proofs (for lattice dimension n) in our PoPK and PoPC protocols is linear in the size of the public key. Thus, we improve the amortized communication cost of each proof by a factor linear in the security parameter. Furthermore, we allow the message space to be \Z_p and the randomness distribution to be the discrete Gaussian, both of which are natural choices for the Regev encryption scheme. Finally, in our schemes there is no gap between the size of the message and randomness that an honest prover chooses and the size of which an accepting verifier is convinced.
Our constructions use the ``MPC-in-the-head'' technique of Ishai et al. (STOC 2007). At the heart of our constructions is a protocol for proving that a value is bounded by some publicly known bound. This uses Lagrange's Theorem that states that any positive integer can be expressed as the sum of four squares (an idea previously used by Boudot (EUROCRYPT 2000)), as well as techniques from Cramer and Damgård (CRYPTO 2009).
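The four-square ingredient is easy to make concrete: to show $m \ge 0$ one proves knowledge of $w_1,\ldots,w_4$ with $m = w_1^2+w_2^2+w_3^2+w_4^2$, which exist by Lagrange's theorem. A brute-force decomposition (ours, fine for small numbers) illustrates the existence claim:

```python
from math import isqrt

def four_squares(m):
    """Find (a, b, c, d) with a^2 + b^2 + c^2 + d^2 == m, m >= 0."""
    for a in range(isqrt(m), -1, -1):
        for b in range(isqrt(m - a * a), -1, -1):
            r = m - a * a - b * b
            for c in range(isqrt(r), -1, -1):
                d2 = r - c * c
                d = isqrt(d2)
                if d * d == d2:
                    return a, b, c, d
    raise AssertionError("unreachable by Lagrange's theorem")

assert sum(v * v for v in four_squares(2012)) == 2012
```

Proving that a committed $x$ lies in a range $[lo, hi]$ then reduces to proving $x - lo \ge 0$ and $hi - x \ge 0$.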
A Unified Indifferentiability Proof for Permutation- or Block Cipher-Based Hash Functions
In recent years, several hash constructions have been introduced that aim at achieving enhanced security margins by strengthening the Merkle-Damgård mode. However, their security analyses have been conducted independently and using a variety of proof methodologies. This paper unifies these results by proposing a unique indifferentiability proof that considers a broadened form of the general compression function introduced by Stam at FSE 2009. This general definition enables us to capture, in a realistic model, most of the features of the mode of operation ({\em e.g.}, message encoding, blank rounds, message insertion, ...) within the pre-processing and post-processing functions. Furthermore, it relies on an inner primitive which can be instantiated either by an ideal block cipher or by an ideal permutation. Most existing hash functions can then be seen as the Chop-MD construction applied to some compression function which fits the broadened Stam model. Our result thus gives the tightest known indifferentiability bounds for several general modes of operation, including Chop-MD, HAIFA and sponges. Moreover, we show that it applies in a quite automatic way, by providing security bounds for 7 out of the 14 second-round SHA-3 candidates, which in some cases improve on previously known ones.
Achieving Constant Round Leakage-Resilient Zero-Knowledge
Recently there has been a huge emphasis on constructing cryptographic protocols that maintain their security guarantees even in the presence of side channel attacks. Such attacks exploit the physical characteristics of a cryptographic device to learn useful information about the internal state of the device. Designing protocols that deliver meaningful security even in the presence of such leakage attacks is a challenging task.
The recent work of Garg, Jain, and Sahai formulates a meaningful notion of zero-knowledge in the presence of leakage, and provides a construction which satisfies a weaker variant of this notion called (1+e)-leakage-resilient zero-knowledge, for every constant e>0. In this weaker variant, roughly speaking, if the verifier learns L bits of leakage during the interaction, then the simulator is allowed to access (1+e)·L bits of leakage. The round complexity of their protocol is n/e.
In this work, we present the first construction of leakage-resilient zero-knowledge satisfying the ideal requirement of e=0. While our focus is on a feasibility result for e=0, our construction also enjoys a constant number of rounds. At the heart of our construction is a new ``public-coin preamble'' which allows the simulator to recover arbitrary information from a (cheating) verifier in a ``straight line.'' We use non-black-box simulation techniques to accomplish this goal.
Quantum Key Distribution in the Classical Authenticated Key Exchange Framework
Key establishment is a crucial primitive for building secure channels: in a multi-party setting, it allows two parties using only public authenticated communication to establish a secret session key which can be used to encrypt messages. But if the session key is compromised, the confidentiality of encrypted messages is typically compromised as well. Without quantum mechanics, key establishment can only be done under the assumption that some computational problem is hard. Since digital communication can be easily eavesdropped and recorded, it is important to consider the secrecy of information anticipating future algorithmic and computational discoveries which could break the secrecy of past keys, violating the secrecy of the confidential channel.
Quantum key distribution (QKD) can be used to generate secret keys that are secure against any future algorithmic or computational improvements. QKD protocols still require authentication of classical communication, however, which is most easily achieved using computationally secure digital signature schemes. It is generally considered folklore that QKD, when used with computationally secure authentication, is still secure against an unbounded adversary, provided the adversary did not break the authentication during the run of the protocol.
We describe a security model for quantum key distribution based on traditional classical authenticated key exchange (AKE) security models. Using our model, we characterize the long-term security of the BB84 QKD protocol with computationally secure authentication against an eventually unbounded adversary. By basing our model on traditional AKE models, we can more readily compare the relative merits of various forms of QKD and existing classical AKE protocols. This comparison illustrates in which types of adversarial environments different quantum and classical key agreement protocols can be secure.
Multiple Differential Cryptanalysis using LLR and $\chi^2$ Statistics
Recent block ciphers have been designed to be resistant against differential cryptanalysis. Nevertheless, it has been shown that such resistance claims may not be as tight as hoped, due to recent advances in this field. One of the main improvements to differential cryptanalysis is the use of many differentials to reduce the data complexity. In this paper we propose a general model for understanding multiple differential cryptanalysis and propose new attacks based on tools used in multidimensional linear cryptanalysis (namely the LLR and $\chi^2$ statistical tests). Practical cases are considered on a reduced version of the cipher PRESENT to evaluate different approaches for selecting and combining the differentials considered. We also consider the tightness of the theoretical estimates corresponding to these attacks.
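For readers unfamiliar with the two statistics, the snippet below computes them from observed counts; this is generic statistics with made-up numbers, not the paper's attack. LLR weighs the observations against the distribution predicted by the differentials versus the uniform (wrong-key) distribution, while $\chi^2$ only measures the distance from uniform.

```python
from math import log

def llr(counts, p, theta):
    """Log-likelihood ratio of the counts under p versus theta."""
    return sum(n * log(p[i] / theta[i]) for i, n in enumerate(counts))

def chi2(counts, theta):
    """Pearson chi-squared statistic of the counts against theta."""
    total = sum(counts)
    return sum((n - total * t) ** 2 / (total * t)
               for n, t in zip(counts, theta))

p      = [0.30, 0.30, 0.20, 0.20]   # predicted by the differentials
theta  = [0.25, 0.25, 0.25, 0.25]   # uniform: what a wrong key looks like
counts = [290, 310, 195, 205]       # hypothetical observations, N = 1000

print(llr(counts, p, theta))        # > 0: favours the right-key hypothesis
print(chi2(counts, theta))
```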
Another look at non-uniformity
We argue that it is unnatural and undesirable to use the non-uniform model of complexity for practice-oriented security reductions in cryptography.
PICARO - A Block Cipher Allowing Efficient Higher-Order Side-Channel Resistance -- Extended Version --
Many papers deal with the problem of constructing an efficient masking scheme for existing block ciphers. We take the reverse approach: given a proven masking scheme (Rivain and Prouff, CHES 2010), we design a block cipher that fits the masking constraints well. The difficulty of implementing efficient masking for a block cipher comes mainly from the S-boxes, so the choice of an adequate S-box is the first and most critical step of our work. The S-box we selected is non-bijective; we discuss the resulting design and security problems. A complete design of the cipher is given, as well as some implementation results.
Publicly Verifiable Ciphertexts
In many applications, where encrypted traffic flows from an open (public) domain to a protected (private) domain, there exists a gateway that bridges the two domains and faithfully forwards the incoming traffic to the receiver. We observe that indistinguishability against (adaptive) chosen-ciphertext attacks (IND-CCA), which is a mandatory goal in face of active attacks in a public domain, can be essentially relaxed to indistinguishability against chosen-plaintext attacks (IND-CPA) for ciphertexts once they pass the gateway that acts as an IND-CCA/CPA filter by first checking the validity of an incoming IND-CCA ciphertext, then transforming it (if valid) into an IND-CPA ciphertext, and forwarding the latter to the recipient in the private domain. ``Non-trivial filtering'' can result in reduced decryption costs on the receivers' side.
We identify a class of encryption schemes with \emph{publicly verifiable ciphertexts} that admit generic constructions of (non-trivial) IND-CCA/CPA filters. These schemes are characterized by existence of public algorithms that can distinguish between valid and invalid ciphertexts. To this end, we formally define (non-trivial) public verifiability of ciphertexts for general encryption schemes, key encapsulation mechanisms, and hybrid encryption schemes, encompassing public-key, identity-based, and tag-based encryption flavours. We further analyze the security impact of public verifiability and discuss generic transformations and concrete constructions that enjoy this property.
Fully Anonymous Attribute Tokens from Lattices
Anonymous authentication schemes such as group signatures and anonymous credentials are important privacy-protecting tools in electronic communications. The only currently known scheme based on assumptions that resist quantum attacks is the group signature scheme by Gordon et al. (ASIACRYPT 2010). We present a generalization of group signatures called *anonymous attribute tokens* where users are issued attribute-containing credentials that they can use to anonymously sign messages and generate tokens revealing only a subset of their attributes. We present two lattice-based constructions of this new primitive, one with and one without opening capabilities for the group manager. The latter construction directly yields as a special case the first lattice-based group signature scheme offering full anonymity (in the random-oracle model), as opposed to the practically less relevant notion of chosen-plaintext anonymity offered by the scheme of Gordon et al. We also extend our scheme to protect users from
framing attacks by the group manager, where the latter creates tokens or signatures in the name of honest users. Our constructions involve new lattice-based tools for aggregating signatures and verifiable CCA2-secure encryption.
Never trust a bunny
``Lapin'' is a new RFID authentication protocol proposed at FSE 2012.
``Ring-LPN'' (Ring-Learning-Parity-with-Noise) is a new computational problem proposed in the same paper; there is a proof relating the security of Lapin to the difficulty of Ring-LPN. This paper presents an attack against Ring-LPN-512 and Lapin-512. The attack is not practical but nevertheless violates specific security claims in the FSE 2012 paper.
Hash Combiners for Second Pre-Image Resistance, Target Collision Resistance and Pre-Image Resistance have Long Output
A $(k,l)$ hash-function combiner for property $P$ is a construction that, given access to $l$ hash functions, yields a single cryptographic hash function which has property $P$ as long as at least $k$ out of the $l$ hash functions have that property. Hash function combiners are used to hedge against the failure of one or more of the individual components. One example of the application of hash function combiners are the previous versions of the TLS and SSL protocols \cite{RFC:6101,RFC:5246}.
The concatenation combiner which simply concatenates the outputs of all hash functions is an example of a robust combiner for collision resistance. However, its output length is, naturally, significantly longer than each individual hash-function output, while the security bounds are not necessarily stronger than that of the strongest input hash-function. In 2006 Boneh and Boyen asked whether a robust black-box combiner for collision resistance can exist that has an output length which is significantly less than that of the concatenation combiner \cite{C:BonBoy06}. Regrettably, this question has since been answered in the negative for fully black-box constructions (where hash function and adversary access is being treated as black-box), that is, combiners (in this setting) for collision resistance roughly need at least the length of the concatenation combiner to be robust \cite{C:BonBoy06,C:CRSTVW07,EC:Pietrzak07,C:Pietrzak08}.
In this paper we examine weaker notions of collision resistance, namely: \emph{second pre-image resistance} and \emph{target collision resistance} \cite{FSE:RogShr04} and \emph{pre-image resistance}. As a generic brute-force attack against any of these would take roughly $2^n$ queries to an $n$-bit hash function, in contrast to only $2^{n/2}$ queries it would take to break collision resistance (due to the birthday bound), this might indicate that combiners for weaker notions of collision resistance can exist which have a significantly shorter output than the concatenation combiner (which is, naturally, also robust for these properties). Regrettably, this is not the case.
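For reference, the concatenation combiner discussed throughout is one line of code; a collision for it is simultaneously a collision for both underlying functions, which is what makes it robust (at the cost of the doubled output length). A sketch for $l = 2$ with two standard hash functions:

```python
import hashlib

def concat_combiner(data: bytes) -> bytes:
    """Concatenation combiner: collision resistant if either function is."""
    return hashlib.sha256(data).digest() + hashlib.sha3_256(data).digest()

print(concat_combiner(b"example").hex())   # 64-byte output
```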
On Reconfigurable Fabrics and Generic Side-Channel Countermeasures
The use of field programmable devices in security-critical applications is growing in popularity; in part, this can be attributed to their potential for balancing metrics such as efficiency and algorithm agility. However, in common with non-programmable alternatives, physical attack techniques such as fault and power analysis are a threat. We investigate a family of next-generation field programmable devices, specifically those based on the concept of time sharing, within this context: our results support the premise that the extra, inherent flexibility in such devices can offer a range of possibilities for low-overhead, generic countermeasures against physical attack.
On Hashing Graphs
Collision-resistant one-way hashing schemes are the basic building blocks of almost all crypto-systems. The use of graph-structured data models is on the rise -- in graph databases, in the representation of biological and healthcare data, and in modeling systems for representing system topologies. Therefore, the problem of hashing graphs with respect to crypto-systems needs to be studied and addressed. The traditional Merkle hash technique cannot be applied as-is, because graphs are more complex data structures than trees. In this paper, we make the following contributions: (1) we define the formal security model of hashing schemes for graphs, (2) we define the formal security model of leakage-free hashing schemes for graphs, (3) we describe a hashing scheme for directed and undirected graphs that uses the Merkle hash technique, (4) we describe a hashing scheme that uses structural information instead of the Merkle hash technique, and (5) we define leakage-free hashing schemes for graphs. Our constructions use graph traversal techniques and are highly efficient with respect to updates to graphs: they require as few as two (O(1)) hash values to be updated to refresh the hash of the graph, while the Merkle Hash Technique and Search DAG schemes for trees and DAGs require as many as O(|V|) and O(|V|+|E|), respectively.
SipHash: a fast short-input PRF
SipHash is a family of pseudorandom functions optimized for short inputs. Target applications include network traffic authentication and hash-table lookups protected against hash-flooding denial-of-service attacks. SipHash is simpler than MACs based on universal hashing, and faster on short inputs. Compared to dedicated designs for hash-table lookup, SipHash has well-defined security goals and competitive performance. For example, SipHash processes a 16-byte input with a fresh key in 140 cycles on an AMD FX-8150 processor, which is much faster than state-of-the-art MACs. We propose that hash tables switch to SipHash as a hash function.
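A compact, unoptimized Python transcription of SipHash-2-4 (c = 2 compression rounds, d = 4 finalization rounds, 64-bit key halves k0 and k1) is given below. It follows the published design, but treat it as a sketch and validate against the reference test vectors before relying on it.

```python
MASK = 0xFFFFFFFFFFFFFFFF

def rotl(x, b):
    return ((x << b) | (x >> (64 - b))) & MASK

def siphash24(k0, k1, data: bytes) -> int:
    # initialization constants from the SipHash specification
    v0 = k0 ^ 0x736F6D6570736575
    v1 = k1 ^ 0x646F72616E646F6D
    v2 = k0 ^ 0x6C7967656E657261
    v3 = k1 ^ 0x7465646279746573

    def sipround():
        nonlocal v0, v1, v2, v3
        v0 = (v0 + v1) & MASK; v1 = rotl(v1, 13); v1 ^= v0; v0 = rotl(v0, 32)
        v2 = (v2 + v3) & MASK; v3 = rotl(v3, 16); v3 ^= v2
        v0 = (v0 + v3) & MASK; v3 = rotl(v3, 21); v3 ^= v0
        v2 = (v2 + v1) & MASK; v1 = rotl(v1, 17); v1 ^= v2; v2 = rotl(v2, 32)

    # pad to a multiple of 8 bytes; the final byte is the length mod 256
    padded = data + b"\x00" * (7 - len(data) % 8) + bytes([len(data) % 256])
    for off in range(0, len(padded), 8):
        m = int.from_bytes(padded[off:off + 8], "little")
        v3 ^= m
        sipround(); sipround()          # c = 2 compression rounds
        v0 ^= m
    v2 ^= 0xFF
    for _ in range(4):                  # d = 4 finalization rounds
        sipround()
    return v0 ^ v1 ^ v2 ^ v3

print(hex(siphash24(0x0706050403020100, 0x0F0E0D0C0B0A0908, b"hello")))
```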
A Note for the Ideal Order-Preserving Encryption Object and Generalized Order-Preserving Encryption
Order-preserving encryption (OPE) preserves the order of data in their ciphertexts and, hence, allows range search on encrypted data without needing to decrypt it. Security analysis of OPE schemes is very important because OPE is not a perfect encryption algorithm (the ciphertexts leak the ordering information of the plaintexts). Most existing security analyses of OPE schemes are informal: they are based either on author-defined attacks or on experiments. The authors of \cite{Bol09} initiated the cryptographic study of OPE schemes. They define the security notion POPF-CCA to qualify the security of OPE. In POPF-CCA, the ``ideal'' OPE object is defined as one whose encryption function is uniformly randomly selected from all order-preserving functions (generally the ``ideal'' OPE object is not computationally feasible), and a (constructed) ``real'' OPE scheme is secure under POPF-CCA if it is computationally indistinguishable from the ideal object. In other words, although the ``ideal'' OPE object is not computationally feasible, it is used as the security goal, and a (constructed) ``real'' OPE scheme is secure if it is as secure as the ``ideal'' OPE object. This approach implicitly assumes (without clearly stating or proving) that the ``ideal'' OPE object is the most secure OPE, and the correctness of that assumption is an easily overlooked problem.
In this paper, we investigate the security of OPE in more depth. We first give an example to show that the ``ideal'' OPE object may not always be the most secure OPE, which indicates that the ``ideal'' encryption object must be used more cautiously in the security analysis of OPE. Additionally, we extend the concept of OPE to generalized OPE (GOPE). Unlike OPE, the ciphertexts of GOPE may not be numbers, but GOPE still enables comparisons on the encrypted data without needing to decrypt them. We present two GOPEs, in polynomial-sized and superpolynomial-sized domains respectively, that satisfy stronger notions of security than that of the ideal OPE object.
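The ``ideal'' OPE object is easy to make concrete at toy sizes: a uniformly random order-preserving injection from a domain of size $M$ into a range of size $N$ is exactly a uniformly random $M$-element subset of the range, sorted. (That this is only feasible for toy sizes is precisely the point discussed above.)

```python
import random

def ideal_ope(domain_size, range_size, seed=None):
    """Sample a uniformly random order-preserving injection."""
    rng = random.Random(seed)
    image = sorted(rng.sample(range(range_size), domain_size))
    return dict(zip(range(domain_size), image))   # plaintext -> ciphertext

enc = ideal_ope(domain_size=8, range_size=100, seed=1)
pts = sorted(enc)
assert all(enc[a] < enc[b] for a, b in zip(pts, pts[1:]))   # order preserved
print(enc)
```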
A Differential Fault Attack on Grain-128a using MACs
The $32$-bit MAC of Grain-128a is a linear combination of the first 64 and then every alternate keystream bit. In this paper we describe a successful differential fault attack on Grain-128a, in which we recover the secret key by observing the correct and faulty MACs of certain chosen messages. The attack works due to certain properties of the Boolean functions and the corresponding choices of taps from the LFSR. We present methods to identify the fault locations and then construct a set of linear equations to obtain the contents of the LFSR and the NFSR. Our attack requires fewer than $2^{11}$ fault injections and fewer than $2^{12}$ invocations of the MAC generation routine.
Oblivious Transfer with Hidden Access Control from Attribute-Based Encryption
The notion of oblivious transfer with hidden access control policies (HACOT) was recently proposed by Camenisch et al.~(Public-Key Cryptography~2011).
This primitive allows a user to anonymously query a database where each record is protected by a hidden attribute-based access control policy.
At each query, the user either learns the value of a single record if the attributes in his key satisfy the policy, or the mere fact that his
attributes do not satisfy the policy.
The database, even when colluding with the key issuer, learns nothing about the identity of the user, the index or the access policy of the record, or whether access was granted or denied.
At the same time, the database can keep an eye on the overall access frequency to prevent the data from being ``crawled''.
In this paper, we present a new HACOT scheme which is more efficient and offers more expressive policies
than the scheme presented by Camenisch et al.
We construct our HACOT protocol based on a hidden ciphertext-policy attribute-based encryption (HP-ABE) scheme by Nishide et al.:
users are issued HACOT decryption keys based on HP-ABE attributes and HACOT records are encrypted under HP-ABE policies.
However, as we will see, this simple approach does not work and we need to extend the Nishide et al.\
scheme as follows.
First, we add protocols that allow users to verify that the public key of the issuer and the ciphertexts are correctly formed.
Second, we reserve one attribute and give the corresponding decryption key only to the database.
Thereby users can no longer decrypt records by themselves but require the help of the database.
Third, we provide a joint decryption protocol between the user and the database, so that the
database does not learn which ciphertext is decrypted.
The latter will also allow one to optionally add revocation of the users' access.
We prove our construction secure by a reduction to the security of Nishide et al.'s scheme, the Symmetric External Diffie-Hellman (SXDH) and Simultaneous Flexible Pairing (SFP) assumptions.
Algebraic Differential Fault Attacks on LED using a Single Fault Injection
This paper proposes a new fault attack technique on the LED block cipher using a single fault injection, combining algebraic side-channel attack (ASCA) and differential fault attack (DFA). We name it algebraic differential fault attack (ADFA). First, a Boolean equation set is constructed for LED using algebraic techniques. Then, the fault differences of the S-Box inputs in the last round of LED are deduced by DFA and represented as algebraic equations using the multiple deductions-based ASCA (MDASCA) technique proposed at COSADE 2012. Finally, the key is recovered by solving the equation set with the CryptoMiniSat solver. We show that, for ADFA on LED under a single nibble-based fault model, the 64-bit key can be recovered within one minute on a common PC with a success rate of 79\%, which is more efficient than previous work. We modify the CryptoMiniSat solver to count and output multiple solutions for the key, and conduct ADFA to calculate the reduced key search space for DFA. The key search space of LED is reduced to $2^6 \sim 2^{17}$, which differs from previous work. We also successfully extend ADFA on LED to other fault models using a single fault injection, such as a byte-based fault model and a nibble-based diagonal fault model, where traditional DFAs are difficult to mount. The results show that ADFA is an efficient and generic fault analysis technique which significantly improves on DFA.
Edwards model of elliptic curves defined over any fields
In this paper, we present an Edwards model for elliptic curves which is defined over any perfect field, and in particular over finite fields. This Edwards model is birationally equivalent to the well-known Edwards model over non-binary fields and is ordinary over binary fields. For this, we use theta functions of level four to obtain an intermediate model that we call a level $4$ theta model. This model enables us to obtain the new Edwards model with a complete and unified addition law. Over binary fields, we present efficient arithmetic on these curves. We also provide competitive differential addition formulas over any perfect field.
Breaking pairing-based cryptosystems using $\eta_T$ pairing over $GF(3^{97})$
There are many useful cryptographic schemes, such as ID-based encryption, short signatures, keyword-searchable encryption, attribute-based encryption, and functional encryption, that use a bilinear pairing. It is therefore important to estimate the security of such pairing-based cryptosystems. The most essential number-theoretic problem in pairing-based cryptosystems is the discrete logarithm problem (DLP), because pairing-based cryptosystems are no longer secure once the underlying DLP is broken.
One efficient bilinear pairing is the $\eta_T$ pairing defined over a supersingular
elliptic curve $E$ on the finite field $GF(3^n)$ for a positive integer $n$.
The embedding degree of the $\eta_T$ pairing is $6$;
thus, we can reduce the DLP over $E$ on $GF(3^n)$ to that over the finite field $GF(3^{6n})$.
In this paper, for breaking the $\eta_T$ pairing over $GF(3^n)$, we discuss
solving the DLP over $GF(3^{6n})$ by using the function field sieve (FFS),
which is the asymptotically fastest algorithm for solving a DLP
over finite fields of small characteristics.
We chose the extension degree $n=97$ because it has been intensively used in benchmarking
tests for the implementation of the $\eta_T$ pairing,
and the order (923-bit) of $GF(3^{6\cdot 97})$ is substantially larger than
the previous world record (676-bit) of solving the DLP by using the FFS.
We implemented the FFS for the medium prime case (JL06-FFS) and propose several improvements to it, for example, a lattice sieve for JL06-FFS and filtering adjusted to the Galois action.
Finally, we succeeded in solving the DLP over $GF(3^{6\cdot 97})$; the entire computation with our improved FFS took about 148.2 days using 252 CPU cores.
Our computational results contribute to the secure use of pairing-based cryptosystems with the $\eta_T$ pairing.
Construction of New Classes of Knapsack Type Public Key Cryptosystem Using Uniform Secret Sequence, K(II)$\Sigma\Pi$PKC, Constructed Based on Maximum Length Code
In this paper, we present a new class of knapsack type PKC referred to as K(II)$\Sigma\Pi$PKC.
In K(II)$\Sigma\Pi$PKC, Bob randomly constructs a very small subset of Alice's set of public key whose order is very large,
under the condition that the coding rate $\rho$ satisfies $0.01 < \rho < 0.5$.
In K(II)$\Sigma\Pi$PKC, no secret sequence such as a super-increasing sequence or a shifted-odd sequence is used; instead, each component of the sequence is a product of the same number of primes of similar size.
We show that K(II)$\Sigma\Pi$PKC is secure against attacks such as the LLL algorithm and Shamir's attack, because a subset of Alice's public keys is chosen entirely in a probabilistic manner at the sending end.
We also show that K(II)$\Sigma\Pi$PKC can be used as a common-key cryptosystem, because the list of the subset randomly chosen by Bob can serve as a common key between Bob and Alice without notifying Alice of his secret key through a particular secret channel, provided that the conditions given in this paper are strictly observed.
High-Throughput Hardware Architecture for the SWIFFT / SWIFFTX Hash Functions
Introduced in 1996 and greatly developed over the last few years, lattice-based cryptography offers a whole set of primitives with nice features, including provable security and asymptotic efficiency. Going from ``asymptotic'' to ``real-world'' efficiency seems important as the set of available primitives increases in size and functionality. In this paper, we explore the improvements that can be obtained through the use of an FPGA architecture for implementing an ideal-lattice-based cryptographic primitive. We chose to target two of the simplest, yet powerful and useful, lattice-based primitives, namely SWIFFT and SWIFFTX. Apart from being simple, these are also of central use for future primitives such as Lyubashevsky's lattice-based signatures.
We present a high-throughput FPGA architecture for the SWIFFT and SWIFFTX primitives. One of the main features of this implementation is an efficient implementation of a variant of the Fast Fourier Transform of order 64 over Z_257. On a Virtex-5 LX110T FPGA, we are able to hash at 0.6 GB/s, a roughly 16x speedup compared to SIMD implementations in the literature. We feel that this demonstrates the relevance of FPGAs as a target architecture for the implementation of ideal-lattice-based primitives.
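The arithmetic core named above can be pinned down in naive form: 257 is prime with 3 a generator of $Z_{257}^*$, so $\omega = 3^{256/64} = 81$ is a primitive 64th root of unity and a transform of order 64 exists over $Z_{257}$. The $O(n^2)$ Python sketch below only fixes the math; the FPGA implementation uses an actual FFT butterfly network.

```python
import random

Q, N = 257, 64
OMEGA = pow(3, (Q - 1) // N, Q)          # 81, a primitive 64th root of unity
assert pow(OMEGA, N // 2, Q) == Q - 1    # omega^(N/2) = -1: primitivity check

def ntt(a):
    """Naive forward transform of order N over Z_Q."""
    return [sum(a[j] * pow(OMEGA, i * j, Q) for j in range(N)) % Q
            for i in range(N)]

def intt(b):
    """Naive inverse transform: scale by N^-1, use omega^-1."""
    inv_n = pow(N, Q - 2, Q)
    inv_w = pow(OMEGA, Q - 2, Q)
    return [inv_n * sum(b[j] * pow(inv_w, i * j, Q) for j in range(N)) % Q
            for i in range(N)]

a = [random.randrange(Q) for _ in range(N)]
assert intt(ntt(a)) == a                 # round trip recovers the input
```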
Enhancing Location Privacy for Electric Vehicles (at the right time)
An electric vehicle is a promising and futuristic automobile propelled by electric motor(s), using electrical energy stored in batteries or another energy storage device. Due to the need for battery recharging, such cars will be required to visit recharging infrastructure very frequently, which may disclose users' private information, such as their location, and thus compromise their privacy. In this paper, we provide mechanisms to enhance the location privacy of electric vehicles at the right time, by proposing an anonymous payment system with privacy protection support. Our technique further allows traceability in the case where a car is stolen.
From Selective to Full Security: Semi-Generic Transformations in the Standard Model
In this paper, we propose an efficient, standard-model, semi-generic transformation of selective-secure (Hierarchical) Identity-Based Encryption schemes into fully secure ones. The main step is a procedure that uses admissible hash functions (whose existence is implied by collision-resistant hash functions) to convert any selective-secure \emph{wildcarded} identity-based encryption (WIBE) scheme into a fully secure (H)IBE scheme. Since building a selective-secure WIBE, especially with a selective-secure HIBE already in hand, is usually much less involved than directly building a fully secure HIBE, this transform already significantly simplifies the latter task. The black-box transformation easily extends to schemes secure in the Continual Memory Leakage (CML) model of Brakerski et al. (FOCS 2010), which allows us to obtain a new fully secure IBE in that model.
We furthermore show that if a selective-secure HIBE scheme satisfies a particular security notion, then it can be generically transformed into a selective-secure WIBE. We demonstrate that several current schemes already fit this new definition, while some others that do not obviously satisfy it can still be easily modified into a selective-secure WIBE.
Deciding Epistemic and Strategic Properties of Cryptographic Protocols
We propose a new, widely applicable model for analyzing knowledge-based (epistemic) and strategic properties of cryptographic protocols. We prove that the corresponding model checking problem with respect to an expressive
epistemic strategic logic is decidable. As corollaries, we obtain decidability of complex security properties including coercion-resistance of voting protocols, accountability of protocols using a trusted third party, and abuse-freeness of contract signing protocols.
Practical Polynomial Time Known Plaintext Attacks on a Stream Cipher Proposed by John Nash
In this paper we present two known-plaintext attacks on a stream cipher which was developed by John Nash in the early 1950's but whose design was declassified by the NSA only in 2012. The main attack reduces the claimed security of the scheme from $\left(\left(n-1\right)!\cdot2^{n}\right)^{2}$ to $O\left(n^{2}\log^{3}n\right)$, where $n$ is a security parameter. This attack succeeds with high probability for randomly chosen keys even when the only thing we know about the plaintext is that a small fraction of isolated plaintext bits are slightly biased, but it always fails for a certain well-defined class of keys which is exponentially large yet a negligibly small fraction of all possible keys. To show that it would not suffice to simply restrict the choice of keys to this class, we develop a different attack which works best for the subset of keys which are hardest to find by the first attack. This attack reduces the security of the scheme from $2^{O\left(n\right)}$ to $O\left(n^{2}\right)$. Both attacks were verified with actual simulations, finding cryptographic keys which are thousands of bits long in just a few minutes on a single PC.
Characterizations on Algebraic Immunity for Multi-Output Boolean Functions
The general principle of algebraic attacks on multi-output stream ciphers was proposed by Courtois [6]. Furthermore, Armknecht and Krause gave a definition of algebraic immunity for multi-output Boolean functions in [2], and investigated some methods for constructing multi-output Boolean functions with maximal algebraic immunity. In this note, several new characterizations of algebraic immunity for multi-output Boolean functions are given, and some related invariants and their relations are also investigated. Examples are given to illustrate the usefulness of these results.
Analysis and Trade-Offs for the (Complete Tree) Layered Subset Difference Broadcast Encryption Scheme
Broadcast Encryption (BE) is an important technique for digital rights management (DRM). Two key parameters of a BE scheme are the average size of the transmission overhead and the size of the secret information to be stored by the users. The most important BE scheme to date is the subset difference (SD) scheme proposed by Naor, Naor and Lotspiech in 2001, which achieves a good balance between these two parameters. A year later, Halevy and Shamir proposed a variant of the SD scheme called the layered SD (LSD) scheme, which reduces the size of user storage at the cost of increased transmission overhead. Since then, there has been no further study of other possible trade-offs between transmission overhead and user storage that can be obtained from the SD scheme.
In this work, we introduce several simple ideas to obtain new layering strategies with different trade-offs between user storage and transmission overhead. At one end, we introduce the notion of storage-minimal layering, show that the Halevy-Shamir layering does not achieve minimal storage, and describe a dynamic programming algorithm to compute layering strategies for which user storage is guaranteed to be the minimum possible. This results in user storage 18\% to 24\% lower than that required by the Halevy-Shamir layering. At the other end, we consider the constrained minimization problem and show how to obtain BE schemes whose transmission overhead is not much more than that of the SD scheme but whose user storage is still significantly lower.
The original SD and LSD algorithms are defined only when the number of users is a power of two. In an earlier work, we showed how to handle an arbitrary number of users in the SD scheme; here this is extended to the LSD scheme. Finally, we obtain an $O(r\log^2 n)$-time algorithm to compute the average transmission overhead in any layering-based SD scheme with $n$ users out of which $r$ are revoked.
RSA modulus generation in the two-party case
In this paper, secure two-party protocols are provided in order to securely generate a random $k$-bit RSA modulus $n$ while keeping its factorization secret. We first show that most existing two-party protocols based on Boneh's test are not correct: in the malicious case, the output may fail to be a proper RSA modulus.
Recently, Hazay et al. proposed the first protocol proven secure against arbitrary polynomial-time active adversaries. However, their protocol is very costly: several hours are required to output a 1024-bit RSA modulus on a standard platform. In this paper, we propose another approach, consisting of post-processing existing efficient protocols based on Boneh's test. The running time of this post-processing is negligible with respect to the running time of the whole protocol.
Constructing Vectorial Boolean Functions with High Algebraic Immunity Based on Group Decomposition
In this paper, we construct a class of vectorial Boolean functions over $\mathbb{F}_{2^{n}}$ with high algebraic immunity based on the decomposition of the multiplicative group of $\mathbb{F}_{2^n}$. By viewing $\mathbb{F}_{2^{n}}$ as $G_1G_2\bigcup \{0\}$ (where $G_1$ and $G_2$ are subgroups of $\mathbb{F}_{2^{n}}^{*}$, $(\#G_1,\#G_2)=1$ and $\#G_1\times \#G_2=2^{2k}-1$), we give a generalized description for constructing vectorial Boolean functions with high algebraic immunity. Moreover, when $n$ is even, we provide two special classes of vectorial Boolean functions with high (sometimes optimal) algebraic immunity: one is hyper-bent, and the other is balanced with optimal algebraic degree.
On the Traceability of Tags in SUAP RFID Authentication Protocols
Widespread adoption of RFID technology in all aspects of our lives depends mainly on fixing the privacy concerns of this technology's customers: using a tagged object should not make it possible to trace that object. This concern is a challenging issue that has motivated researchers to propose several authentication protocols that aim to fix the traceability problem in RFID systems while also providing other security requirements.
In this paper, we analyze the security of three authentication protocols recently proposed by Morshed et al. Our security analysis clearly highlights important security pitfalls in these protocols which render them vulnerable to traceability attacks. The complexity of the proposed attacks is only several runs of the protocols, while the adversary's advantage in tracing the tagged object is maximal.