All papers in 2013 (Page 7 of 881 results)
Adapting Lyubashevsky’s Signature Schemes to the Ring Signature Setting
Basing signature schemes on strong lattice problems has been a long-standing open issue. Today, two families of lattice-based signature schemes are known: the ones based on the hash-and-sign construction of Gentry et al., and Lyubashevsky’s schemes, which are based on the Fiat-Shamir framework.
In this paper we show for the first time how to adapt the schemes of Lyubashevsky to the ring signature setting. In particular, we transform the scheme of ASIACRYPT 2009 into a ring signature scheme that provides strong security properties in the random oracle model. Anonymity is ensured in the sense that signatures of different users are within negligible statistical distance even under full key exposure. In fact, the scheme satisfies a notion stronger than the classical full key exposure setting: even if the keypair of the signing user is adversarially chosen, the statistical distance between signatures of different users remains negligible.
Considering unforgeability, the best lattice-based ring signature schemes provide either unforgeability against arbitrary chosen-subring attacks or against insider corruption in log-sized rings. In this paper we present two variants of our scheme. In the basic one, unforgeability is ensured in those two settings. Increasing signature and key sizes by a factor k (typically 80-100), we provide a variant in which unforgeability is ensured against insider corruption attacks for arbitrary rings. The technique used is quite general and can be adapted to other existing schemes.
Path ORAM: An Extremely Simple Oblivious RAM Protocol
We present Path ORAM, an extremely simple Oblivious RAM protocol with a small amount of client storage. Partly due to its simplicity, Path ORAM is the most practical ORAM scheme known to date with small client storage. We formally prove that Path ORAM has an $O(\log N)$ bandwidth cost for blocks of size $B = \Omega(\log^2 N)$ bits. For such block sizes, Path ORAM is asymptotically better than the best known ORAM schemes with small client storage. Due to its practicality, Path ORAM has been adopted in the design of secure processors since its proposal.
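To make the simplicity claim concrete, here is a minimal Python sketch of the tree-based access procedure (our illustration only: toy parameters, no encryption, and no client/server separation, all of which the real protocol requires; the helper names are ours):

import random

L = 4                        # tree height: 2**L leaves (toy parameter)
Z = 4                        # bucket capacity, in blocks
N = 2 ** L                   # number of logical blocks

tree = {node: [] for node in range(2 ** (L + 1) - 1)}       # node id -> bucket
position = {a: random.randrange(2 ** L) for a in range(N)}  # block -> leaf
stash = {}                                                  # client-side stash

def path(leaf):
    # node ids on the root-to-leaf path, using heap numbering
    node = leaf + 2 ** L - 1
    nodes = [node]
    while node:
        node = (node - 1) // 2
        nodes.append(node)
    return nodes[::-1]

def access(op, addr, new_data=None):
    leaf = position[addr]
    position[addr] = random.randrange(2 ** L)  # remap to a fresh random leaf
    for node in path(leaf):                    # read the whole path into the stash
        for a, d in tree[node]:
            stash[a] = d
        tree[node] = []
    data = stash.get(addr)
    if op == "write":
        stash[addr] = new_data
    for node in reversed(path(leaf)):          # write back greedily, leaf first
        bucket = []
        for a in list(stash):
            if len(bucket) < Z and node in path(position[a]):
                bucket.append((a, stash.pop(a)))
        tree[node] = bucket
    return data

access("write", 3, "hello")
assert access("read", 3) == "hello"

The invariant is that every block sits either in the stash or somewhere on the path to its currently assigned leaf, which is why reading one path per access suffices.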
Pinocchio: Nearly Practical Verifiable Computation
To instill greater confidence in computations outsourced to the cloud, clients should be able to verify the correctness of the results returned. To this end, we introduce Pinocchio, a built system for efficiently verifying general computations while relying only on cryptographic assumptions. With Pinocchio, the client creates a public evaluation key to describe her computation; this setup is proportional to evaluating the computation once. The worker then evaluates the computation on a particular input and uses the evaluation key to produce a proof of correctness. The proof is only 288 bytes, regardless of the computation performed or the size of the inputs and outputs. Anyone can use a public verification key to check the proof. Pinocchio achieves strong asymptotic efficiency by refining the Quadratic Arithmetic Programs of Gennaro, Gentry, Parno, and Raykova (EuroCrypt 2013).
Crucially, our evaluation on seven applications demonstrates that Pinocchio is efficient in practice too. Pinocchio's verification time is typically 10ms: 5-7 orders of magnitude less than previous work; indeed Pinocchio is the first general-purpose system to demonstrate per-instance verification cheaper than native execution (for some apps). Pinocchio also reduces the worker's proof effort by an additional 19-60x. As an additional feature, Pinocchio generalizes to zero-knowledge proofs at a negligible cost over the base protocol. Finally, to aid development, Pinocchio provides an end-to-end toolchain that compiles a subset of C into programs that implement the verifiable computation protocol.
A Frequency Leakage Model and its application to CPA and DPA
This paper introduces a leakage model in the frequency domain to enhance the efficiency of Side Channel Attacks on CMOS circuits. While usual techniques focus on noise removal around clock harmonics, we show that the actual leakage is not necessarily located in those expected bandwidths, as experimentally observed by E. Mateos and C.H. Gebotys in 2010. We start by building a theoretical model of power consumption and electromagnetic emanations before deriving from it a criterion to guide standard attacks. This criterion is then validated in real experiments, both on FPGA and ASIC, which show an impressive increase in the yield of SCA.
ESPOON: Enforcing Encrypted Security Policies in Outsourced Environments
The enforcement of security policies in outsourced environments is still an open challenge for policy-based systems. On the one hand, taking the appropriate security decision requires access to the policies. However, if such access is allowed in an untrusted environment then confidential information might be leaked by the policies. Current solutions are based on cryptographic operations that embed security policies with the security mechanism. Therefore, the enforcement of such policies is performed by allowing the authorised parties to access the appropriate keys. We believe that such solutions are far too rigid because they strictly intertwine authorisation policies with the enforcing mechanism.
In this paper, we address the issue of enforcing security policies in an untrusted environment while protecting the policy confidentiality. Our solution, ESPOON, aims at providing a clear separation between security policies and the enforcement mechanism. However, the enforcement mechanism should learn as little as possible about both the policies and the requester attributes.
Towards a Practical Cryptographic Voting Scheme Based on Malleable Proofs
Mixnets are one of the main approaches to deploy secret and verifiable electronic elections.
General-purpose verifiable mixnets however suffer from the drawback that the amount of data to be verified by observers increases linearly with the number of involved mix nodes, the number of decryptors, and the number of voters. Chase et al. proposed a verifiable mixnet at Eurocrypt 2012 based on so-called \emph{malleable proofs} - proofs that do not increase with the number of mix nodes. In work published at PKC 2013, the same authors adapted malleable proofs to verifiable distributed decryption, resulting in a cryptographic voting scheme. As a result, the amount of data to be verified only increases linearly with the number of voters.
However, their scheme leaves several questions open which we address in this paper:
As a first contribution, we adapt a multi-party computation protocol to build a distributed key generation protocol for the encryption scheme underlying their voting scheme. As a second contribution, we decompress their abstract scheme description, identify elementary operations, and count the number of such operations required for mixing and verification. Based on timings for elementary operations, we extrapolate the running times of the mixing and verification processes, allowing us to assess the feasibility of their scheme. For the German case, we conclude that the replacement of postal voting by cryptographic voting based on malleable proofs is feasible on an electoral district level.
The Potential of an Individualized Set of trusted CAs: Defending against CA Failures in the Web PKI (Extended Version)
The security of most Internet applications relies on underlying public key infrastructures (PKIs) and thus on an ecosystem of certification authorities (CAs). The pool of PKIs responsible for the issuance and the maintenance of SSL certificates, called the Web PKI, has grown extremely large and complex. Within this ecosystem, each CA is a single point of failure, leading to an attack surface whose size is hardly assessable.
This paper examines whether and how the attack surface can be reduced in order to minimize the risk of relying on a malicious certificate. In particular, we consider the individualization of the set of trusted CAs. We present a tool called Rootopia, which allows users to individually assess the respective part of the Web PKI relevant to them.
Our analysis of the browser histories of 22 Internet users reveals that the major part of the PKI is completely irrelevant to a single user. On a per-user level, the attack surface can be reduced by more than 90%, which shows the potential of the individualization of the set of trusted CAs. Furthermore, all the relevant CAs reside within a small set of countries. Our findings confirm that we unnecessarily trust in a huge number of CAs, thus exposing ourselves to unnecessary risks. Subsequently, we present an overview of our approach to realize the possible security gains.
A time series approach for profiling attack
The goal of a profiling attack is to challenge the security of a cryptographic device in the worst-case scenario. Though template attacks are reputed to be the strongest power analysis attacks, their effectiveness is strongly dependent on the validity of the Gaussian assumption. This has recently led to the appearance of nonparametric approaches, often based on machine learning strategies. Though these approaches outperform template attacks, they tend to neglect the time-series nature of the power traces. In this paper, we propose an original multi-class profiling attack that takes into account the temporal dependence of power traces. The experimental study shows that the time-series analysis approach is competitive with, and often better than, static classification alternatives.
Computing the Rank of Incidence Matrix and the Algebraic Immunity of Boolean Functions
The incidence matrix between a set of monomials and a set of vectors over $\F_2$ is of great importance in the study of coding theory, cryptography, linear algebra and combinatorics. The rank of these matrices is very useful when computing the algebraic immunity ($\ai$) of Boolean functions in the cryptographic literature~\cite{MPC04,DGM04}.
Moreover, these matrices are very sparse and well structured. Thus, for aesthetic reasons, finding the rank of these matrices is also very interesting in mathematics. In this paper, we review the existing algorithms with added techniques to speed them up, and we propose some new efficient algorithms for computing the rank of the incidence matrix and for solving systems of equations whose coefficient matrix is an incidence matrix. By permuting the rows and columns of the incidence matrix with respect to an ordering, the incidence matrix can be converted into a lower block triangular matrix, which brings the computation down to quadratic time complexity and linear space complexity. The same technique is used to check for and compute low-degree annihilators of an $n$-variable Boolean function faster than the usual algorithms. Moreover, the same technique is also applied to the Dalai-Maitra algorithm in~\cite{DM06} for faster computation. On the basis of experiments, we conjecture that the $\ai$ of the $n$-variable inverse S-box is $\lfloor\sqrt{n}\rfloor + \lceil\frac{n}{\lfloor\sqrt{n}\rfloor}\rceil-2$.
We also raise doubts about the existing fastest algorithm in~\cite{ACGKMR06} for finding the $\ai$ and lowest-degree annihilators of a Boolean function.
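For concreteness, the conjectured formula can be evaluated directly; the helper below is our own illustration, not code from the paper:

import math

def conjectured_ai(n):
    # floor(sqrt(n)) + ceil(n / floor(sqrt(n))) - 2, as conjectured above
    r = math.isqrt(n)
    return r + math.ceil(n / r) - 2

print([(n, conjectured_ai(n)) for n in (4, 8, 16, 32)])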
Cryptography Challenges for Computational Privacy in Public Clouds
Computational privacy is a property of a cryptographic system that ensures the privacy of data (and/or operations) while it is being processed at an untrusted server. Cryptography has been an indispensable tool for computer security, but its readiness for this new generational shift of computing platform, i.e., Cloud Computing, is still questionable.
Theoretical constructions like Fully Homomorphic Encryption, Functional Encryption, Server-aided Multiparty Computation, Verifiable Computation and Instance Hiding are a few directions being pursued. These cryptographic techniques solve Cloud privacy problems at different levels, but most of them do not fit well into the overall scheme of things.
We state the privacy requirements for Cloud offerings in various delivery methods. We discuss the challenges with the current cryptographic techniques being pursued by researchers and show that they do not provide blanket coverage of these privacy requirements. We urge the need to find generalizations and connections among these isolated techniques, as this might give more insight into the underpinnings of Computational Privacy and lead to better solutions.
The Legal Classification of Identity-Based Signatures
Identity-based cryptography has attracted attention in the cryptographic research community in recent years. Despite the importance of cryptographic schemes for applications in business and law, the legal implications of identity-based cryptography have not yet been discussed. We investigate how identity-based signatures fit into the legal framework. We focus on the European Signature Directive, but also take the UNCITRAL Model Law on Electronic Signatures into account. In contrast to previous assumptions, identity-based signature schemes can, in principle, be used even for qualified electronic signatures, which can replace handwritten signatures in the member states of the European Union. We derive requirements to be taken into account in the development of future identity-based signature schemes.
Pseudorandom Generators from Regular One-way Functions: New Constructions with Improved Parameters
We revisit the problem of basing pseudorandom generators on regular one-way functions, and present the following constructions:
(1) For any known-regular one-way function (on $n$-bit inputs) that is known to be $\eps$-hard to invert, we give a neat (and tighter) proof for the folklore construction of a pseudorandom generator with seed length $\Theta(n)$ that makes a single call to the underlying one-way function.
(2) For any unknown-regular one-way function with known $\eps$-hardness, we give a new construction with seed length $\Theta(n)$ and $O(n/\log{(1/\eps)})$ calls. Here the number of calls is also optimal by matching the lower bounds of Holenstein and Sinha [FOCS 2012].
Both constructions require knowledge of $\eps$, but the dependency can be removed while keeping nearly the same parameters. In the latter case, we get a construction of a pseudorandom generator from any unknown-regular one-way function using seed length $\tilde{O}(n)$ and $\tilde{O}(n/\log{n})$ calls, where $\tilde{O}$ omits a factor that can be made arbitrarily close to constant (e.g. $\log\log\log{n}$ or even less). This improves the \emph{randomized iterate} approach by Haitner, Harnik and Reingold [CRYPTO 2006], which requires seed length $O(n\log n)$ and $O(n/\log{n})$ calls.
CMCC: Misuse Resistant Authenticated Encryption with Minimal Ciphertext Expansion
In some wireless environments, minimizing the size of messages is paramount due to the resulting significant energy savings. We present CMCC, an authenticated encryption scheme with associated data (AEAD) that is also nonce misuse resistant. The main focus for this work is minimizing ciphertext expansion, especially for short messages including plaintext lengths less than the underlying block cipher length (e.g., 16 bytes). For many existing AEAD schemes, a successful forgery leads directly to a loss of confidentiality. For CMCC, changes to the ciphertext randomize the resulting plaintext, thus forgeries do not necessarily result in a loss of confidentiality which allows us to reduce the length of the authentication tag. For protocols that send short messages, our scheme is similar to Counter with CBC-MAC (CCM) for computational overhead but has much smaller expansion. We prove both a misuse resistant authenticated encryption (MRAE) security bound and an authenticated encryption (AE) security bound for CMCC. We also present a variation of CMCC, CWM, which provides a further strengthening of the security bounds. Our contributions include both stateless and stateful versions which enable minimal sized message numbers using different network related trade-offs.
Dynamic Cube Attack on Grain-v1
This article presents a dynamic cube attack on Grain-v1. A dynamic cube attack finds the secret key by using distinguishers obtained from structural weaknesses. The main idea of the attack lies in simplifying the output function; after making it simpler, the dynamic cube attack can exploit a distinguishing attack to recover the secret key. In this paper, we investigate Grain-v1, to which no key recovery attack had previously been applied because its feedback function is so sophisticated. We apply the dynamic cube attack to it by utilizing both intelligent choices of Initial Value variables and appropriate simplifications. Our attack runs in feasible time complexity and recovers all bits of the key when the number of initialization rounds in Grain-v1 is decreased to 100. This attack is faster than exhaustive search by a factor of $2^{32}$.
Multi-Party Computation of Polynomials and Branching Programs without Simultaneous Interaction
Halevi, Lindell, and Pinkas (CRYPTO 2011) recently proposed a model for secure computation that captures communication patterns that arise
in many practical settings, such as secure computation on the web. In their model, each party interacts only once, with a single centralized server. Parties do not interact with each other; in fact, the parties need not even be online simultaneously.
In this work we present a suite of new, simple and efficient protocols for secure computation in this "one-pass" model. We give protocols that obtain optimal privacy for the following general tasks:
-- Evaluating any multivariate polynomial $F(x_1, \ldots ,x_n)$ (modulo a large RSA modulus N), where the parties each hold an input $x_i$.
-- Evaluating any read-once branching program over the parties' inputs.
As a special case, these function classes include all previous functions for which an optimally private, one-pass computation was known, as well as many new functions, including variance and other statistical functions, string matching, second-price auctions, classification algorithms and some classes of finite automata
and decision trees.
L-P States of RC4 Stream Cipher
The stream cipher RC4 was designed by R. Rivest in 1987, and it is a widely deployed cipher. Many predictive states of RC4 for some special indices $i$ were presented in the last 20 years. In this paper, we present several long-term predictive states. These states increase the probability of guessing part of the internal state in a known-plaintext attack and expose a cryptanalytic weakness of RC4. This paper also analyzes possible long-term biases in the keystream and further proposes a search method for long-term predictive states.
Attribute-Based Encryption with Fast Decryption
Attribute-based encryption (ABE) is a vision of public key encryption that allows users to encrypt and decrypt messages based on user attributes. This functionality comes at a cost. In a typical implementation, the size of the ciphertext is proportional to the number of attributes associated with it and the decryption time is proportional to the number of attributes used during decryption. Specifically, many practical ABE implementations require one pairing operation per attribute used during decryption.
This work focuses on designing ABE schemes with fast decryption algorithms. We restrict our attention to expressive systems without system-wide bounds or limitations, such as placing a limit on the number of attributes used in a ciphertext or a private key. In this setting, we present the first key-policy ABE system where ciphertexts can be decrypted with a constant number of pairings. We show that GPSW ciphertexts can be decrypted with only 2 pairings by increasing the private key size by a factor of |X|, where X is the set of distinct attributes that appear in the private key. We then present a generalized construction that allows each system user to independently tune various efficiency tradeoffs to their liking on a spectrum where the extremes are GPSW on one end and our very fast scheme on the other. This tuning requires no changes to the public parameters or the encryption algorithm. Strategies for choosing an individualized user optimization plan are discussed. Finally, we discuss how these ideas can be translated into the ciphertext-policy ABE setting at a higher cost.
Encrypted Secret Sharing and Analysis by Plaintext Randomization
In this paper we consider the problem of secret sharing where shares are encrypted using a public-key encryption (PKE) scheme and ciphertexts are publicly available. While intuition tells us that the secret should be protected if the PKE is secure against chosen-ciphertext attacks (i.e., CCA-secure), formally proving this reveals some subtle and non-trivial challenges. We isolate the problems that this raises, and devise a new analysis technique called ``plaintext randomization'' that can successfully overcome these challenges, resulting in the desired proof. The encryption of different shares can use one key or multiple keys, with natural applications in both scenarios.
Speeding up QUAD
QUAD is a provably secure stream cipher based on multivariate polynomials, proposed in 2006 by Berbain, Gilbert and Patarin \cite{BG06}. In this paper we show how to speed up QUAD over GF(256) by a factor of up to 5.8. We achieve this by using structured systems of polynomials, in particular partially circulant polynomials and polynomials generated by a linear recurring sequence (LRS), instead of random ones. By using this strategy, we can also reduce the system parameter of QUAD by about 99\%. We furthermore present experiments which indicate that using structured polynomials of this special form does not influence the security of QUAD.
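As background for the speed-up, here is a toy version of the generic QUAD keystream generator (our sketch, with illustrative parameters far below secure sizes; the paper replaces the random polynomials below with structured ones):

import random

n = 8                                   # toy state size; real QUAD uses much larger n
rng = random.Random(42)

def rand_quadratic():
    # a random quadratic polynomial over GF(2) in n variables
    pairs = {(i, j): rng.randrange(2) for i in range(n) for j in range(i, n)}
    lin = [rng.randrange(2) for _ in range(n)]
    return pairs, lin, rng.randrange(2)

def evaluate(poly, x):
    pairs, lin, const = poly
    v = const
    for (i, j), c in pairs.items():
        v ^= c & x[i] & x[j]
    for i, c in enumerate(lin):
        v ^= c & x[i]
    return v

out_polys = [rand_quadratic() for _ in range(n)]   # keystream half of the system
upd_polys = [rand_quadratic() for _ in range(n)]   # state-update half

x = [rng.randrange(2) for _ in range(n)]           # secret initial state
keystream = []
for _ in range(4):
    keystream += [evaluate(p, x) for p in out_polys]
    x = [evaluate(p, x) for p in upd_polys]
print(keystream)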
An efficient FHE based on the hardness of solving systems of non-linear multivariate equations
We propose a general framework to develop fully homomorphic encryption schemes (FHE) without using Gentry's technique. The security relies on the difficulty of solving systems of non-linear equations (an $\mathcal{NP}$-complete problem). While the security of our scheme has not been reduced to a provably hard instance of this problem, its security is globally investigated.
Secure information transmission based on physical principles
We employ physical properties of the real world to design a protocol for secure information transmission where one of the parties is able to transmit secret information to another party over an insecure channel, without any prior secret arrangements between the parties. The distinctive feature of this protocol, compared to all known public-key cryptographic protocols, is that neither party uses a one-way function. In particular, our protocol is secure against a (passive) computationally unbounded adversary.
From Weak to Strong Zero-Knowledge and Applications
The notion of \emph{zero-knowledge} \cite{GMR85} is formalized by requiring that for every malicious efficient verifier $V^*$, there exists an efficient simulator $S$ that can reconstruct the view of $V^*$ in a true interaction with the prover, in a way that is indistinguishable to \emph{every} polynomial-time distinguisher. \emph{Weak zero-knowledge} weakens this notion by switching the order of the quantifiers and only requires that for every distinguisher $D$, there exists a (potentially different) simulator $S_D$.
In this paper we consider various notions of zero-knowledge, and investigate whether their weak variants are equivalent to their strong variants. Although we show (under complexity assumption) that for the standard notion of zero-knowledge, its weak and strong counterparts are not equivalent, for meaningful variants of the standard notion, the weak and strong counterparts are indeed equivalent. Towards showing these equivalences, we introduce new non-black-box simulation techniques permitting us, for instance, to demonstrate that the classical 2-round graph non-isomorphism protocol of Goldreich-Micali-Wigderson \cite{GMW91} satisfies a ``distributional'' variant of zero-knowledge.
Our equivalence theorem has other applications beyond the notion of zero-knowledge. For instance, it directly implies the \emph{dense model theorem} of Reingold et al. (STOC '08) and the leakage lemma of Gentry-Wichs (STOC '11), and provides a modular and arguably simpler proof of these results (while at the same time recasting these results in the language of zero-knowledge).
Private Interactive Communication Across an Adversarial Channel
Consider two parties Alice and Bob, who hold private inputs x and y, and wish to compute a function f(x,y) privately in the information theoretic sense; that is, each party should learn nothing beyond f(x,y). However, the communication channel available to them is noisy. This means that the channel can introduce errors in the transmission between the two parties. Moreover, the channel is adversarial in the sense that it knows the protocol that Alice and Bob are running, and maliciously introduces errors to disrupt the communication, subject to some bound on the total number of errors. A fundamental question in this setting is to design a protocol that remains private in the presence of a large number of errors.
If Alice and Bob are only interested in computing f(x,y) correctly, and not privately, then quite robust protocols are known that can tolerate a constant fraction of errors. However, none of these solutions is applicable in the setting of privacy, as they inherently leak information about the parties' inputs. This leads to the question of whether we can simultaneously achieve privacy and error-resilience against a constant fraction of errors.
We show that privacy and error-resilience are contradictory goals. In particular, we show that for every constant c > 0, there exists a function f which is privately computable in the error-less setting, but for which no private and correct protocol is resilient against a c-fraction of errors.
Witness Encryption and its Applications
We put forth the concept of \emph{witness encryption}. A witness encryption scheme is defined for an NP language $L$ (with corresponding witness relation $R$). In such a scheme, a user can encrypt a message $M$ to a particular problem instance $x$ to produce a ciphertext. A recipient of a ciphertext is able to decrypt the message if $x$ is in the language and the recipient knows a witness $w$ where $R(x,w)$ holds. However, if $x$ is not in the language, then no polynomial-time attacker can distinguish between encryptions of any two equal length messages. We emphasize that the encrypter himself may have no idea whether $x$ is actually in the language.
Our contributions in this paper are threefold. First, we introduce and formally define witness encryption. Second, we show how to build several cryptographic primitives from witness encryption. Finally, we give a candidate construction based on the NP-complete \textsc{Exact Cover} problem and Garg, Gentry, and Halevi's recent construction of ``approximate" multilinear maps.
Our method for witness encryption also yields the first candidate construction for an open problem posed by Rudich in 1989: constructing computational secret sharing schemes for an NP-complete access structure.
Secure two-party computation: a visual way
In this paper we propose a novel method for performing secure two-party computation. By merging together, in a suitable way, two beautiful ideas of the 80's and the 90's, Yao's garbled circuit construction and Naor and Shamir's visual cryptography, respectively, we enable Alice and Bob to securely evaluate a function $f(\cdot,\cdot)$ of their inputs, $x$ and $y$, through a {\em purely physical} process. Indeed, once Alice has prepared a set of properly constructed transparencies, Bob computes the function value $f(x,y)$ by applying a sequence of simple steps which require the use of a pair of scissors, superposing transparencies, and the human visual system. A crypto-device for the function evaluation process is no longer needed.
On the Lossiness of the Rabin Trapdoor Function
Lossy trapdoor functions, introduced by Peikert and Waters (STOC~'08), are functions that can be generated in two indistinguishable ways: either the function is injective, and there is a trapdoor to invert it, or the function is lossy, meaning that the size of its range is strictly smaller than the size of its domain. Kakvi and Kiltz (EUROCRYPT 2012) proved that the Full Domain Hash signature scheme based on a lossy trapdoor function has a \emph{tight} security reduction from the lossiness of the trapdoor function. Since Kiltz, O'Neill, and Smith (CRYPTO 2010) showed that the RSA trapdoor function is lossy under the $\Phi$-Hiding assumption of Cachin, Micali, and Stadler (EUROCRYPT~'99), this implies that the RSA Full Domain Hash signature scheme has a \emph{tight} security reduction from the $\Phi$-Hiding assumption (for public exponents $e<N^{1/4}$). In this work, we consider the Rabin trapdoor function, \emph{i.e.} modular squaring over $\mathbb{Z}_{N}^*$. We show that when adequately restricting its domain (either to the set $\mathbb{QR}_{N}$ of quadratic residues, or to $(\mathbb{J}_{N})^+$, the set of positive integers $1\le x\le(N-1)/2$ with Jacobi symbol +1) the Rabin trapdoor function is lossy, the injective mode corresponding to Blum integers $N=pq$ with $p,q\equiv 3\bmod 4$, and the lossy mode corresponding to what we call pseudo-Blum integers $N=pq$ with $p,q\equiv 1 \bmod 4$. This lossiness result holds under a natural extension of the $\Phi$-Hiding assumption to the case $e=2$ that we call the 2-$\Phi/4$-Hiding assumption. We then use this result to prove that deterministic variants of Rabin-Williams Full Domain Hash signatures have a tight reduction from the 2-$\Phi$/4-Hiding assumption. We also show that these schemes are unlikely to have a tight reduction from the factorization problem by extending a previous ``meta-reduction'' result by Coron (EUROCRYPT 2002), later corrected by Kakvi and Kiltz (EUROCRYPT 2012). These two results therefore answer one of the main questions left open by Bernstein (EUROCRYPT 2008) in his work on Rabin-Williams signatures.
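The claimed lossiness can be observed numerically on toy moduli (our illustration; the small N below are chosen by us): squaring permutes $\mathbb{QR}_{N}$ for a Blum integer but is 4-to-1 on it for a pseudo-Blum integer:

from math import gcd

def qr(N):
    # quadratic residues modulo N (squares of units)
    return {x * x % N for x in range(1, N) if gcd(x, N) == 1}

for N, kind in [(3 * 7, "Blum (p, q = 3 mod 4)"),
                (5 * 13, "pseudo-Blum (p, q = 1 mod 4)")]:
    Q = qr(N)
    image = {x * x % N for x in Q}
    print(kind, "|QR_N| =", len(Q), "-> |image of squaring| =", len(image))
# Blum: 3 -> 3 (injective); pseudo-Blum: 12 -> 3 (4-to-1, i.e. lossy)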
How to Construct an Ideal Cipher from a Small Set of Public Permutations
We show how to construct an ideal cipher with $n$-bit blocks and $n$-bit keys (\emph{i.e.} a set of $2^n$ public $n$-bit permutations) from a small constant number of $n$-bit random public permutations. The construction that we consider is the \emph{single-key iterated Even-Mansour cipher}, which encrypts a plaintext $x\in\{0,1\}^n$ under a key $k\in\{0,1\}^n$ by alternatively xoring the key $k$ and applying independent random public $n$-bit permutations $P_1,\ldots, P_r$ (this construction is also named a \emph{key-alternating cipher}). We analyze this construction in the plain indifferentiability framework of Maurer, Renner, and Holenstein (TCC 2004), and show that twelve rounds are sufficient to achieve indifferentiability from an ideal cipher. We also show that four rounds are necessary by exhibiting attacks for three rounds or less.
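A toy Python sketch of the analyzed construction follows (our illustration: 16-bit blocks and table-based random permutations stand in for the paper's generic $n$-bit random public permutations):

import random

n = 16                       # toy block size in bits
R = 12                       # twelve rounds suffice for indifferentiability
rng = random.Random(2013)

# fixed, public random permutations P_1..P_R and their inverses
perms = []
for _ in range(R):
    p = list(range(2 ** n))
    rng.shuffle(p)
    perms.append(p)
invs = []
for p in perms:
    q = [0] * (2 ** n)
    for x, y in enumerate(p):
        q[y] = x
    invs.append(q)

def encrypt(k, x):
    # key-alternating: xor k, apply P_1, xor k, apply P_2, ..., xor k
    for p in perms:
        x = p[x ^ k]
    return x ^ k

def decrypt(k, y):
    x = y ^ k
    for q in reversed(invs):
        x = q[x] ^ k
    return x

k = rng.randrange(2 ** n)
assert decrypt(k, encrypt(k, 0x1234)) == 0x1234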
Towards Adoption of DNSSEC: Availability and Security Challenges
DNSSEC deployment is long overdue; however, it seems to be finally taking off. Recent cache poisoning attacks motivate protecting DNS with strong cryptography, rather than with challenge-response ‘defenses’. Our goal is to motivate and help correct DNSSEC deployment. We discuss the state of DNSSEC deployment, obstacles to adoption and potential ways to increase adoption. We then present a comprehensive overview of challenges and potential pitfalls of DNSSEC, well known and less known, including:
- Vulnerable configurations: we present several DNSSEC configurations which are natural and, based on the limited deployment so far, expected to be popular, yet are vulnerable to attack. This includes NSEC3 opt-out records and interdomain referrals (in NS, MX and CNAME records).
- Incremental Deployment: we discuss the potential for increased vulnerability due to popular practices of incremental deployment, and recommend secure practice.
- Super-sized Response Challenges: DNSSEC responses include cryptographic keys and hence are relatively long; we explain how these extra-long responses cause interoperability challenges and can be abused for DoS and even DNS poisoning. We discuss potential solutions.
CacheAudit: A Tool for the Static Analysis of Cache Side Channels
We present CacheAudit, a versatile framework for the automatic, static analysis of cache side channels. CacheAudit takes as input a program binary and a cache configuration, and it derives formal, quantitative security guarantees for a comprehensive set of side-channel adversaries, namely those based on observing cache states, traces of hits and misses, and execution times.
Our technical contributions include novel abstractions to efficiently compute precise over-approximations of the possible side-channel observations for each of these adversaries. These approximations then yield upper bounds on the information that is revealed. In case studies we apply CacheAudit to binary executables of algorithms for symmetric encryption and sorting, obtaining the first formal proofs of security for implementations with countermeasures such as preloading and data-independent memory access patterns.
On the Primitivity of some Trinomials over Finite Fields
In this paper, we give conditions under which the trinomials of the form $x^{n}+ax+b$ over the finite field ${\mathbb{F}}_{p^{m}}$ are not primitive, and conditions under which there are no primitive trinomials of the form $x^{n}+ax+b$ over the finite field ${\mathbb{F}}_{p^{m}}$. For the finite field ${\mathbb{F}}_{4}$, we show that there are no primitive trinomials of the form $x^{n}+x+\alpha$ if $n\equiv1 \bmod 3$, $n\equiv0 \bmod 3$ or $n\equiv4 \bmod 5$.
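The congruence conditions are easy to tabulate; the snippet below (ours, for illustration) lists the degrees $n \le 50$ that they do not exclude:

# n = 0 or 1 (mod 3) and n = 4 (mod 5) are excluded by the conditions above
candidates = [n for n in range(2, 51) if n % 3 == 2 and n % 5 != 4]
print(candidates)   # degrees for which x^n + x + alpha may still be primitive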
Permutation Polynomials and Their Differential Properties over Residue Class Rings
This paper mainly focuses on permutation polynomials over the residue class ring $\mathbb{Z}_{N}$, where $N>3$ is composite. We have proved that for the polynomial $f(x)=a_{1}x^{1}+\cdots +a_{k}x^{k}$ with integral coefficients, $f(x)\bmod N$ permutes $\mathbb{Z}_{N}$ if and only if $f(x)\bmod N$ permutes $S_{\mu}$ for all $\mu \mid N$, where $S_{\mu}=\{0< t <N: \gcd(N,t)=\mu\}$ and $S_{N}=S_{0}=\{0\}$. Based on this, we give a lower bound on the differential uniformities for such permutation polynomials, that is, $\delta (f)\geq \frac{N}{\#S_{a}}$, where $a$ is the biggest nontrivial divisor of $N$. In particular, $f(x)$ cannot be an APN permutation over the residue class ring $\mathbb{Z}_{N}$. It is also proved that $f(x)\bmod N$ and $(f(x)+x)\bmod N$ cannot both permute $\mathbb{Z}_{N}$ when $N$ is even.
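The stated criterion can be verified exhaustively on a small modulus; the check below is our illustration with an arbitrarily chosen N = 12:

from itertools import product
from math import gcd

N = 12  # small composite modulus, chosen for illustration

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def S(mu):
    # S_mu = {0 < t < N : gcd(N, t) = mu}, with S_N = S_0 = {0}
    return {0} if mu == N else {t for t in range(1, N) if gcd(N, t) == mu}

def criterion_holds(coeffs):
    f = lambda x: sum(a * x ** (i + 1) for i, a in enumerate(coeffs)) % N
    permutes_ZN = sorted(f(x) for x in range(N)) == list(range(N))
    permutes_S = all({f(t) for t in S(mu)} == S(mu) for mu in divisors(N))
    return permutes_ZN == permutes_S    # the paper asserts these coincide

# exhaustive check over all polynomials a1*x + a2*x^2 with coefficients mod N
assert all(criterion_holds(c) for c in product(range(N), repeat=2))
print("criterion verified for all quadratics mod", N)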
Fully Homomorphic Encryption for Mathematicians
We give an introduction to Fully Homomorphic Encryption for mathematicians. Fully Homomorphic Encryption allows untrusted parties to take encrypted data Enc(m_1),...,Enc(m_t) and any efficiently computable function f, and compute an encryption of f(m_1,...,m_t), without knowing or learning the decryption key or the raw data m_1,...,m_t. The problem of how to do this was recently solved by Craig Gentry, using ideas from algebraic number theory and the geometry of numbers. In this paper we discuss some of the history and background, give examples of Fully Homomorphic Encryption schemes, and discuss the hard mathematical problems on which the cryptographic security is based.
How to Factor N_1 and N_2 When p_1=p_2 mod 2^t
Let $N_1=p_1q_1$ and $N_2=p_2q_2$ be two different RSA moduli. Suppose that $p_1=p_2 \bmod 2^t$ for some $t$, and $q_1$ and $q_2$ are $\alpha$ bit primes. Then May and Ritzenhofen showed that $N_1$ and $N_2$ can be factored in quadratic time if
\[ t \geq 2\alpha+3. \]
In this paper, we improve this lower bound on $t$. Namely we prove that $N_1$ and $N_2$ can be factored in quadratic time if
\[ t \geq 2\alpha+1. \]
Further our simulation result shows that our bound is tight.
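For intuition, here is a sketch of the underlying two-dimensional lattice approach of May and Ritzenhofen (our reconstruction with toy numbers, not the authors' code):

def gauss_reduce(u, v):
    # Lagrange-Gauss reduction of a 2-dimensional integer lattice basis
    while True:
        if u[0] ** 2 + u[1] ** 2 > v[0] ** 2 + v[1] ** 2:
            u, v = v, u
        dot = u[0] * v[0] + u[1] * v[1]
        nrm = u[0] ** 2 + u[1] ** 2
        m = (2 * dot + nrm) // (2 * nrm)    # nearest integer to dot/nrm
        if m == 0:
            return u, v
        v = (v[0] - m * u[0], v[1] - m * u[1])

def implicit_factor(N1, N2, t):
    # p1 = p2 (mod 2^t) implies q1*N2 - q2*N1 = 0 (mod 2^t), so (q1, q2) is a
    # short vector of the lattice spanned by (1, u) and (0, 2^t),
    # where u = N2 * N1^{-1} mod 2^t
    T = 1 << t
    u = N2 * pow(N1, -1, T) % T
    (x, y), _ = gauss_reduce((1, u), (0, T))
    q1, q2 = abs(x), abs(y)
    if q1 and q2 and N1 % q1 == 0 and N2 % q2 == 0:
        return (N1 // q1, q1), (N2 // q2, q2)
    return None                             # bound not met or unlucky instance

# toy demo: p1 = 4099 and p2 = 20483 agree modulo 2^12; q1, q2 are 5-bit primes
print(implicit_factor(4099 * 17, 20483 * 23, 12))   # ((4099, 17), (20483, 23))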
Another Look at Security Theorems for 1-Key Nested MACs
We prove a security theorem without collision-resistance for a class of 1-key hash-function-based MAC schemes that includes HMAC and Envelope MAC. The proof has some advantages over earlier proofs: it is in the uniform model, it uses a weaker related-key assumption, and it covers a broad class of MACs in a single theorem. However, we also explain why our theorem is of doubtful value in assessing the real-world security of these MAC schemes. In addition, we prove a theorem assuming collision-resistance. From these two theorems we conclude that from a provable security standpoint there is little reason to prefer HMAC to Envelope MAC or similar schemes.
Leakage-resilient Attribute-based Encryptions with Fast Decryption: Model, Analysis and Construction
Traditionally, in attribute-based encryption (ABE), an access structure is constructed from a linear secret sharing scheme (LSSS), a boolean formula or an access tree.
In this work, we encode the access structure as its set of minimal sets, which is equivalent to the existence of a smallest monotonic span program for the characteristic function of the same access structure.
We present two leakage-resilient attribute-based encryption schemes, a ciphertext-policy ABE (LR-CP-ABE) and a key-policy ABE (LR-KP-ABE), that can tolerate partial leakage of the private key and the master key.
By using our encoding mechanism, we obtain short ciphertexts in LR-CP-ABE and short keys in LR-KP-ABE. Our schemes also have higher decryption efficiency in that the decryption cost is independent of the depth of the access structure. Meanwhile, our proposed schemes tolerate both master key leakage and continual leakage in the sense that there are many master keys for the universal set $\Sigma$ and many private keys per attribute set $S$. We explicitly employ a refresh algorithm to update a (master) key before the leaked information exceeds the allowable leakage bound. The schemes are proven adaptively leakage-resilient secure in the standard model under static assumptions in composite order bilinear groups.
A New Lever Function with Adequate Indeterminacy
The key transform of the REESSE1+ asymmetric cryptosystem is Ci = (Ai * W ^ l(i)) ^ d (% M) with l(i) in Omega = {5, 7, ..., 2n + 3} for i = 1, ..., n, where l(i) is called a lever function. In this paper, we give a simplified key transform Ci = Ai * W ^ l(i) (% M) with a new lever function l(i) from {1, ..., n} to Omega = {+/-5, +/-6, ..., +/-(n + 4)}. We discuss the necessity of the new l(i), namely that a simplified private key is insecure if the new l(i) is a constant rather than a one-to-one function. Further, we expound the sufficiency of the new l(i) from four aspects: (1) the indeterminacy of the new l(i); (2) insufficient conditions for neutralizing the powers of W and W ^ -1 even if Omega = {5, 6, ..., n + 4}; (3) verification by examples; and (4) the running times of the continued fraction attack and the W-parameter intersection attack, which are the two most efficient probabilistic polynomial-time attack algorithms so far. Last, we detail the relation between a lever function and a random oracle.
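A toy rendering of the simplified key transform (our illustrative, insecure parameters; Python 3.8+ is needed for negative exponents in pow):

n = 8
M = 2 ** 89 - 1                         # a prime modulus, so gcd(W, M) = 1
W = 123456789
A = [3, 5, 7, 11, 13, 17, 19, 23]       # private sequence A_1..A_n (toy values)
ell = {i: (-1) ** i * (i + 4) for i in range(1, n + 1)}  # one-to-one into +/-{5..n+4}
C = {i: A[i - 1] * pow(W, ell[i], M) % M for i in range(1, n + 1)}
print(C)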
The Fiat-Shamir Transformation in a Quantum World
The Fiat-Shamir transformation is a famous technique to turn identification schemes into signature schemes. The derived scheme is provably secure in the random-oracle model against classical adversaries. Still, the technique has also been suggested to be used in connection with quantum-immune identification schemes, in order to get quantum-immune signature schemes. However, a recent paper by Boneh et al. (Asiacrypt 2011) has raised the issue that results in the random-oracle model may not be immediately applicable to quantum adversaries, because such adversaries should be allowed to query the random oracle in superposition. It has been unclear if the Fiat-Shamir technique is still secure in this quantum oracle model (QROM).
Here, we discuss that giving proofs for the Fiat-Shamir transformation in the QROM is presumably hard. We show that there cannot be black-box extractors, as long as the underlying quantum-immune identification scheme is secure against active adversaries and the first message of the prover is independent of its witness. Most schemes are of this type. We then discuss that for some schemes one may be able to resurrect the Fiat-Shamir result in the QROM by modifying the underlying protocol first. We discuss in particular a version of the Lyubashevsky scheme which is provably secure in the QROM.
Cryptographic schemes, key exchange, public key.
General cryptographic schemes are presented where keys can be one-time or ephemeral. Processes for key exchange are derived. Public key cryptographic schemes based on the new systems are established. Authentication and signature schemes are easy to implement.
The schemes may be integrated with error-correcting coding schemes
so that encryption/coding and decryption/decoding may
be done simultaneously.
A Simple ORAM
In this short note, we demonstrate a simple and practical ORAM that enjoys an extremely simple proof of security. Our construction is based on a recent ORAM due to Shi, Chan, Stefanov and Li (Asiacrypt'11), but with some crucial modifications which significantly simplify the analysis.
AE5 Security Notions: Definitions Implicit in the CAESAR Call
A draft call for the CAESAR authenticated-encryption competition adopts an interface that is not aligned with existing definitions in the literature. It is the purpose of this brief note to formalize what we believe to be the intended definitions.
The Perils of Repeating Patterns: Observation of Some Weak Keys in RC4
We describe some observed trivially weak keys for the stream cipher RC4.
Keys with repeating patterns are found to be key-length invariant: repeating the pattern leaves the resulting internal state unchanged. The cause of the problem is the simplistic key-dependent state permutation in the RC4 initialization.
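The key-length invariance is easy to reproduce from the RC4 key schedule, which consumes the key as key[i mod keylength] (a short demonstration of ours):

def rc4_ksa(key):
    # RC4 key scheduling: the key is consumed cyclically via key[i % len(key)]
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    return S

pattern = [0x01, 0x02, 0x03, 0x04]
# repeating the pattern yields exactly the same internal state after the KSA
assert rc4_ksa(pattern) == rc4_ksa(pattern * 2) == rc4_ksa(pattern * 4)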
Algebraic analysis of Trivium-like ciphers
Trivium is a bit-based stream cipher in the final portfolio of the eSTREAM project. In this paper, we apply the approach of Berbain et al. to Trivium-like ciphers and perform new algebraic analyses on them, namely Trivium and its reduced versions: Trivium-N, Bivium-A and Bivium-B. In doing so, we answer an open question in the literature. We demonstrate a new algebraic attack on Bivium-A. This attack requires less time and memory than previous
techniques which use the F4 algorithm to recover Bivium-A's initial state. Though our attacks on Bivium-B, Trivium and Trivium-N are worse than exhaustive keysearch, the systems of equations which are constructed are smaller and less complex compared to previous algebraic analysis. Factors which can affect the complexity of our attack on Trivium-like ciphers are discussed in detail.
Optimizing ORAM and Using it Efficiently for Secure Computation
Oblivious RAM (ORAM) allows a client to access her data on a remote server while hiding the access pattern (which locations she is accessing) from the server. Beyond its immediate utility in allowing private computation over a client's outsourced data, ORAM also allows mutually distrustful parties to run secure-computations over their joint data with sublinear on-line complexity. In this work we revisit the tree-based ORAM of Shi et al. [SCSL11] and show how to optimize its performance as a stand-alone scheme, as well as its performance within higher level constructions. More specifically, we make several contributions:
- We describe two optimizations to the tree-based ORAM protocol of Shi et al., one reducing the storage overhead of that protocol by an $O(k)$ multiplicative factor, and another reducing its time complexity by an $O(\log k)$ multiplicative factor, where $k$ is the security parameter. Our scheme also enjoys a much simpler and tighter analysis than the original protocol.
- We describe a protocol for binary search over this ORAM construction, where the entire binary search operation is done in the same complexity as a single ORAM access (as opposed to $\log n$ accesses for the naive protocol). We then describe simple uses of this binary-search protocol for things like range queries and keyword search.
- We show how the ORAM protocol itself and our binary-search protocol can be implemented efficiently as secure computation, using somewhat-homomorphic encryption.
Since memory accesses by address (ORAM access) or by value (binary search) are basic and prevalent operations, we believe that these optimizations can be used to significantly speed-up many higher-level protocols for secure computation.
Anonymity-preserving Public-Key Encryption: A Constructive Approach
A receiver-anonymous channel allows a sender to send a message to a receiver without an adversary learning for whom the message is intended. Wireless broadcast channels naturally provide receiver anonymity, as does multi-casting one message to a receiver population containing the intended receiver. While anonymity and confidentiality appear to be orthogonal properties, making anonymous communication confidential is more involved than one might expect, since the ciphertext might reveal which public key has been used to encrypt. To address this problem, public-key cryptosystems with enhanced security properties have been proposed.
This paper investigates constructions as well as limitations for preserving receiver anonymity when using public-key encryption (PKE). We use the constructive cryptography approach by Maurer and Renner and interpret cryptographic schemes as constructions of a certain ideal resource (e.g. a confidential anonymous channel) from given real resources (e.g. a broadcast channel). We define appropriate anonymous communication resources and show that a very natural resource can be constructed by using a PKE scheme which fulfills three properties that appear in the cryptographic literature (IND-CCA, key-privacy, weak robustness). We also show that a desirable stronger variant, preventing the adversary from selective “trial-deliveries” of messages, is unfortunately unachievable by any PKE scheme, no matter how strong. The constructive approach makes the guarantees achieved by applying a cryptographic scheme explicit in the constructed (ideal) resource; this specifies the exact requirements for the applicability of a cryptographic scheme in a given context. It also allows one to decide which of the existing security properties of such a cryptographic scheme are adequate for the considered scenario, and which are too weak or too strong. Here, we show that weak robustness is necessary but that so-called strong robustness is unnecessarily strong in that it does not construct a (natural) stronger resource.
Type-Based Analysis of Generic Key Management APIs (Long Version)
In the past few years, cryptographic key management APIs have been shown to be subject to tricky attacks based on the improper use of cryptographic keys.
In fact, real APIs provide mechanisms to declare the intended use of keys, but these mechanisms are not strong enough to provide key security.
In this paper, we propose a simple imperative programming language for specifying strongly-typed APIs for the management of symmetric,
asymmetric and signing keys. The language requires that type information is stored together with the key but it is independent of the actual
low-level implementation. We develop a type-based analysis to prove the preservation of integrity and confidentiality of sensitive keys and
we show that our abstraction is expressive enough to code realistic key management APIs.
A Ciphertext-Policy Attribute-Based Proxy Re-Encryption with Chosen-Ciphertext Security
Ciphertext-Policy Attribute-Based Proxy Re-Encryption (CP-ABPRE) extends the traditional Proxy Re-Encryption (PRE) by allowing a semi-trusted proxy to transform a ciphertext under an access policy to one with the same plaintext under another access policy (i.e., attribute-based re-encryption). The proxy, however, learns nothing about the underlying plaintext. CP-ABPRE has many real world applications, such as fine-grained access control in cloud storage systems and medical records sharing among different hospitals. Previous CP-ABPRE schemes leave how to be secure against chosen-ciphertext attacks (CCA) as an open problem. This paper, for the first time, proposes a new CP-ABPRE to tackle the problem. The new scheme supports attribute-based re-encryption with any monotonic access structures. Although our scheme is constructed in the random oracle model, it can be proved CCA secure under the decisional $q$-parallel bilinear Diffie-Hellman exponent assumption.
Ballot secrecy and ballot independence: definitions and relations
We study ballot independence for election schemes. First, we formally define ballot independence as a cryptographic game and prove that ballot secrecy implies ballot independence. Secondly, we introduce a notion of controlled malleability and prove that it is sufficient for ballot independence. We also prove that non-malleable ballots are sufficient for ballot independence. Thirdly, we prove that ballot independence is sufficient for ballot secrecy in a special case. Our results show that ballot independence is necessary in election schemes satisfying ballot secrecy. Furthermore, our sufficient conditions enable simpler proofs of ballot secrecy.
A Cryptographic Analysis of OPACITY
We take a closer look at the Open Protocol for Access Control, Identification, and Ticketing with privacY (OPACITY). This Diffie-Hellman-based protocol is supposed to provide a secure and privacy-friendly key establishment for contactless environments. It is promoted by the US Department of Defense and meanwhile available in several standards such as ISO/IEC 24727-6 and ANSI 504-1. To the best of our knowledge, no detailed cryptographic analysis has been publicly available so far. Thus, we investigate to what extent the common security properties for authenticated key exchange and impersonation resistance, as well as privacy-related properties like untraceability and deniability, are met.
OPACITY is not a single protocol but, in fact, a suite consisting of two protocols, one called Zero-Key Management (ZKM) and the other named Full Secrecy (FS). Our results indicate that the ZKM version does not achieve even very basic security guarantees. The FS protocol, on the other hand, provides a decent level of security for key establishment. Yet, our results show that the persistent-binding steps, for re-establishing previous connections, conflict with fundamental privacy properties.
Attacks on JH, Grøstl and SMASH Hash Functions
JH and Grøstl hash functions are two of the five finalists in the NIST SHA-3 competition. JH-$s$ and Grøstl-$s$ are based on a $2n$-bit compression function, and the final output is truncated to $s$ bits, where $n$ is $512$ and $s$ can be $224$, $256$, $384$ or $512$. Previous security proofs show that JH-$s$ and Grøstl-$s$ offer optimal collision resistance without length padding in the last block.
In this paper we present collision and preimage attacks on JH-$s$ and Grøstl-$s$ without length padding in the last block. For the collision attack on JH-$s$, after a $\frac{1}{e}2^{s/4+n}$ precomputation, the adversary needs $2^{s/4}$ queries to the underlying compression function to find a new collision. For the preimage attack on JH-$s$, after a $\frac{1}{e}2^{s/2+n}$ precomputation, the adversary needs $2^{s/2}$ queries to the underlying compression function to find a new preimage. If $s=224$, the attacker only needs $2^{57}$ and $2^{113}$ compression function queries to mount a new collision attack and preimage attack respectively. For Grøstl, the query complexities of our collision and preimage attacks are one half of those of the birthday collision attack and the exhaustive preimage attack respectively.
We also discuss how our attacks work when the length is padded in the last message block. Our attacks exploit structural flaws in the design of JH and Grøstl. They are easily applied to MJH, SMASH and other generalizations, since these have a similar structure (we call it the Even-Mansour structure). At the same time, the provable security of chopMD in the literature is challenged. Through our attacks, it is easy to see that the chopMD mode used in JH or Grøstl does not improve its security.
Quantum algorithms to check Resiliency, Symmetry and Linearity of a Boolean function
Uncategorized
Uncategorized
In this paper, we present related quantum algorithms to check the order of resiliency, symmetry and linearity of a Boolean function that is available as a black box (oracle). First we consider resiliency and show that the Deutsch-Jozsa algorithm can be immediately used for this purpose. We also point out how a quadratic improvement in query complexity over the Deutsch-Jozsa algorithm can be obtained for this purpose using Grover's technique. While the worst-case quantum query complexity to check the resiliency order is exponential in the number of input variables of the Boolean function, we require only polynomially many measurements. We also describe a subset of $n$-variable Boolean functions for which the algorithm works in polynomially many steps, i.e., we can achieve an exponential speed-up over the best known classical algorithms. A similar kind of approach can be exploited to check whether a Boolean function is symmetric (respectively linear) or not. Given a
Boolean function as an oracle, it is important to devise certain algorithms to test whether it has a specific property or it is $\epsilon$-far from having that property. The efficiency of the algorithm is judged by the number of calls to the oracle so that one can decide, with high probability, between these two alternatives. We show that this can be achieved in $O(\epsilon^{-\frac{1}{2}})$ query complexity. This is obtained by showing that the problem of checking symmetry or linearity can be efficiently reduced to testing whether a Boolean function is constant.
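Classically, the Deutsch-Jozsa statistics coincide with the normalized squared Walsh spectrum, and $m$-resiliency means the spectrum vanishes on all Hamming weights at most $m$; the simulation below is our toy illustration of that connection (the example function is ours):

n = 4
f = lambda x: (x ^ (x >> 1) ^ (x >> 2)) & 1    # f = x1 + x2 + x3 over GF(2)

def walsh(f, n):
    # W_f(w) = sum over x of (-1)^(f(x) + <x, w>)
    return [sum(-1 if f(x) ^ (bin(x & w).count("1") & 1) else 1
                for x in range(2 ** n))
            for w in range(2 ** n)]

W = walsh(f, n)
# the Deutsch-Jozsa circuit outputs w with probability (W_f(w) / 2^n)^2
print({w: (W[w] / 2 ** n) ** 2 for w in range(2 ** n) if W[w]})

m = 2   # f is m-resilient iff W_f vanishes on all w of Hamming weight <= m
print("2-resilient:", all(W[w] == 0 for w in range(2 ** n)
                          if bin(w).count("1") <= m))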
Sakura: a flexible coding for tree hashing
We propose a flexible, fairly general, coding for tree hash modes. The coding does not define a tree hash mode, but instead specifies a way to format the message blocks and chaining values into inputs to the underlying function for any topology, including sequential hashing. The main benefit is to avoid input clashes between different tree growing strategies, even before the hashing modes are defined, and to make the SHA-3 standard tree-hashing ready.
Relations among Privacy Notions for Signcryption and Key Invisible "Sign-then-Encrypt''
Signcryption simultaneously offers authentication through unforgeability and confidentiality through indistinguishability against chosen ciphertext attacks by combining the functionality of digital signatures and public-key encryption into a single operation. Libert and Quisquater (PKC 2004) extended this set of basic requirements with the notions of ciphertext anonymity (or key privacy) and key invisibility to protect the identities of signcryption users and were able to prove that key invisibility implies ciphertext anonymity by imposing certain conditions on the underlying signcryption scheme.
This paper revisits the relationship amongst privacy notions for signcryption. We prove that key invisibility implies ciphertext anonymity without any additional restrictions. More surprisingly, we prove that key invisibility also implies indistinguishability against chosen ciphertext attacks. This places key invisibility on the top of privacy hierarchy for public-key signcryption schemes.
On the constructive side, we show that the general ``sign-then-encrypt'' approach offers key invisibility if the underlying encryption scheme satisfies two existing security notions: indistinguishability against adaptive chosen ciphertext attacks and indistinguishability of keys against adaptive chosen ciphertext attacks. By this method we obtain the first key invisible signcryption construction in the standard model.
How to Run Turing Machines on Encrypted Data
Algorithms for computing on encrypted data promise to be a fundamental building block of cryptography. The way one models such algorithms has a crucial effect on the efficiency and usefulness of the resulting cryptographic schemes. As of today, almost all known schemes for fully homomorphic encryption, functional encryption, and garbling schemes work by modeling algorithms as circuits rather than as Turing machines.
As a consequence of this modeling, evaluating an algorithm over encrypted data is as slow as the worst-case running time of that algorithm, a dire fact for many tasks. In addition, in settings where an evaluator needs a description of the algorithm itself in some "encoded" form, the cost of computing and communicating such encoding is as large as the worst-case running time of this algorithm.
In this work, we construct cryptographic schemes for computing Turing machines on encrypted data that avoid the worst-case problem. Specifically, we show:
– An attribute-based encryption scheme for any polynomial-time Turing machine and Random Access Machine (RAM).
– A (single-key and succinct) functional encryption scheme for any polynomial-time Turing machine.
– A reusable garbling scheme for any polynomial-time Turing machine.
These three schemes have the property that the size of a key or of a garbling for a Turing machine is very short: it depends only on the description of the Turing machine and not on its running time.
Previously, the only existing constructions of such schemes were for depth-d circuits, where all the parameters grow with d. Our constructions remove this depth d restriction, have short keys, and moreover, avoid the worst-case running time.
– A variant of fully homomorphic encryption scheme for Turing machines, where one can evaluate a Turing machine M on an encrypted input x in time that is dependent on the running time of M on input x as opposed to the worst-case runtime of M. Previously, such a result was known only for a restricted class of Turing machines and it required an expensive preprocessing phase (with worst-case runtime); our constructions remove both restrictions.
Our results are obtained via a reduction from SNARKs (Bitansky et al.) and an "extractable" variant of witness encryption, a scheme introduced by Garg et al. We prove that the new assumption is secure in the generic group model. We also point out the connection between (the variant of) witness encryption and the obfuscation of point filter functions as defined by Goldwasser and Kalai in 2005.
Public-Key Revocation and Tracing Schemes with Subset Difference Methods Revisited
Trace and revoke is broadcast encryption with the traitor tracing functionality. It is a very powerful primitive since, given a pirate decoder, it can find the users whose private keys were compromised by running a tracing algorithm, and then revoke them. Public-key trace and revoke (PKTR) is a special type of trace and revoke in which anyone can run the tracing algorithm and anyone can create an encrypted message by using a public key. Although public-key trace and revoke schemes are attractive tools, the currently known PKTR schemes are somewhat inefficient in terms of the private key size and the public key size compared with public-key broadcast encryption schemes.
In this paper, we propose a new technique to construct an efficient PKTR scheme by using the subset cover framework. First, we introduce a new concept of public-key encryption named single revocation encryption (SRE) and propose an efficient SRE scheme in the random oracle model. The universe of SRE consists of many groups, and each group consists of many members. A user in SRE is represented by a group that he belongs to and a member within that group. In SRE, a sender can create a ciphertext for a specified group where one member in the group is revoked, and a receiver can decrypt the ciphertext if he belongs to the group in the ciphertext and he is not revoked in that group.
Second, we show that the subset difference (SD) scheme (or the layered subset difference (LSD) scheme) and an SRE scheme can be combined to construct a PKTR scheme. Our PKTR scheme using the LSD scheme and our SRE scheme has a ciphertext size of $O(r)$, a private key size of $O(\log^{1.5} N)$, and a public key size of $O(1)$, where $N$ is the total number of users in the system and $r$ is the size of a revoked set. Our PKTR scheme is the first one that achieves a private key size of $O(\log^{1.5} N)$ and a public key size of $O(1)$.
Analysis of authentication and key establishment in inter-generational mobile telephony
Second (GSM), third (UMTS), and fourth-generation (LTE) mobile telephony protocols are all in active use, giving rise to a number of interoperation situations. Although the standards address roaming by specifying switching and mapping of established security context, there is no comprehensive specification of the possible interoperation cases, nor of the procedures to establish security context (authentication and short-term keys) in the various interoperation scenarios. This paper systematically enumerates the cases, classifying them as allowed, disallowed, or uncertain, with rationale based on detailed analysis of the specifications. We identify the authentication and key agreement procedure for each of the possible cases. We formally model these scenarios and analyze their security, in the symbolic model, using the tool ProVerif. We find two scenarios that inherit a known false base station attack, and an attack on the CMC message of another scenario.
Public key exchange using semidirect product of (semi)groups
In this paper, we describe a brand new key exchange protocol based on a semidirect product of (semi)groups (more specifically, on an extension of a (semi)group by automorphisms), and then focus on practical instances of this general idea. Our protocol can be based on any group, in particular on any non-commutative group. One of its special cases is the standard Diffie-Hellman protocol, which is based on a cyclic group. However, when our protocol is used with a non-commutative (semi)group, it acquires several useful features that make it compare favorably to the Diffie-Hellman protocol. Here we also suggest a particular non-commutative semigroup (of matrices) as the platform and show that the security of the relevant protocol is based on a quite different assumption compared to that of the standard Diffie-Hellman protocol.
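The following is a hedged toy instantiation of the semidirect-product key exchange idea just described, using 2x2 matrices modulo a prime with the inner automorphism phi(x) = h x h^{-1}. The modulus, g, h, and exponents below are made up for illustration and are not claimed to be secure; the paper proposes a specific matrix semigroup platform instead.

```python
# Toy semidirect-product key exchange: each party publishes the first
# component of (g, phi)^k, where (a, i)(b, j) = (phi^j(a) * b, i + j).
P = 1_000_003  # illustrative prime modulus

def mmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) % P
             for j in range(2)] for i in range(2)]

def phi(x, h, h_inv):
    return mmul(mmul(h, x), h_inv)  # conjugation automorphism

def public_value(g, h, h_inv, k):
    # first component of (g, phi)^k: a_1 = g, a_{i+1} = phi(a_i) * g
    a = g
    for _ in range(k - 1):
        a = mmul(phi(a, h, h_inv), g)
    return a

def shared_key(other_pub, own_exp, own_pub, h, h_inv):
    # K = phi^{own_exp}(other_pub) * own_pub, identical for both parties
    x = other_pub
    for _ in range(own_exp):
        x = phi(x, h, h_inv)
    return mmul(x, own_pub)

g = [[1, 2], [3, 5]]
h = [[2, 1], [1, 1]]              # invertible mod P (determinant 1)
h_inv = [[1, P - 1], [P - 1, 2]]  # inverse of h mod P
m, n = 57, 83                     # the parties' secret exponents
a = public_value(g, h, h_inv, m)  # Alice -> Bob
b = public_value(g, h, h_inv, n)  # Bob -> Alice
assert shared_key(b, m, a, h, h_inv) == shared_key(a, n, b, h, h_inv)
```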
Transparent, Distributed, and Replicated Dynamic Provable Data Possession
With the growing trend toward using outsourced storage, the problem of efficiently checking and proving data integrity needs more consideration. Starting with PDP and POR schemes in 2007, many cryptography and security researchers have addressed the problem. After the first solutions for static data, dynamic versions were developed (e.g., DPDP). Researchers also considered distributed versions of such schemes. Alas, in all such distributed schemes, the client needs to be aware of the structure of the cloud, and possibly pre-process the file accordingly, even though the security guarantees in the real world are not improved.
We propose a distributed and replicated DPDP scheme which is transparent from the client's viewpoint. It allows for real scenarios where the cloud storage provider (CSP) may hide its internal structure from the client and flexibly manage its resources, while still providing provable service to the client. The CSP decides how many and which servers will store the data. Since the load is distributed over multiple servers, we observe a one-to-two orders of magnitude performance improvement in our tests, while availability and reliability are also improved via replication. In addition, we use persistent rank-based authenticated skip lists to create centralized and distributed variants of a dynamic version control system with optimal complexity.
On the Need of Physical Security for Small Embedded Devices: a Case Study with COMP128-1 Implementations in SIM Cards
Ensuring the physical security of small embedded devices is challenging. Such devices have to be produced under strong cost constraints, and generally operate with limited power and energy budgets. However, they may also be deployed in applications where physical access is indeed possible for adversaries. In this paper, we consider the case of SIM cards to discuss these issues, and report on successful side-channel attacks against several (old but still deployed) implementations of the COMP128-1 algorithm. Such attacks are able to recover cryptographic keys with limited time and data, by measuring the power consumption of the devices manipulating them, hence allowing card cloning and communication eavesdropping. This study allows us to put forward the long-term issues raised by the deployment of cryptographic implementations. It provides a motivation for improving the physical security of small embedded devices early in their development. We also use it to argue that public standards for cryptographic algorithms and transparent physical security evaluation methodologies are important tools for this purpose.
The PACE|AA Protocol for Machine Readable Travel Documents, and its Security
We discuss an efficient combination of the cryptographic protocols adopted by the International Civil Aviation Organization (ICAO) for securing the communication of machine readable travel documents and readers. Roughly, in the original protocol the parties first run the Password-Authenticated Connection Establishment (PACE) protocol to establish a shared key, and then the reader (optionally) invokes the Active Authentication (AA) protocol to verify the passport's validity. Here, we show that by carefully re-using some of the secret data of the PACE protocol for the AA protocol, one can save one exponentiation on the passport's side. We call this the PACE|AA protocol. We then formally prove that this more efficient combination not only preserves the desirable security properties of the two individual protocols but also increases privacy by preventing misuse of the challenge in the Active Authentication protocol. We finally discuss a solution which allows deniable authentication in the sense that the interaction cannot be used as a proof towards third parties.
Tight security bounds for key-alternating ciphers
A $t$-round \emph{key-alternating cipher} (also called \emph{iterated Even-Mansour cipher}) can be viewed as an abstraction of AES. It defines a cipher $E$ from $t$ fixed public permutations $P_1, \ldots, P_t : \{0,1\}^n \rightarrow \{0,1\}^n$ and a key $k = k_0 \Vert \cdots \Vert k_t \in \{0,1\}^{n(t+1)}$ by setting $E_{k}(x) = k_t \oplus P_t(k_{t-1} \oplus P_{t-1}(\cdots k_1 \oplus P_1(k_0 \oplus x) \cdots))$. The indistinguishability of $E_k$ from a truly random permutation by an adversary who also has oracle access to the (public) random permutations $P_1, \ldots, P_t$ was investigated in 1997 by Even and Mansour for $t = 1$, and for higher values of $t$ in a series of recent papers. For $t = 1$, Even and Mansour proved indistinguishability security up to $2^{n/2}$ queries, which is tight. Much later, Bogdanov et al. (2011) conjectured that security should be $2^{\frac{t}{t+1}n}$ queries for general $t$, which matches an easy distinguishing attack (so security cannot be higher). A number of partial results have been obtained supporting this conjecture, besides Even and Mansour's original result for $t = 1$: Bogdanov et al. proved security of $2^{\frac{2}{3}n}$ for $t \geq 2$, Steinberger (2012) proved security of $2^{\frac{3}{4}n}$ for $t \geq 3$, and Lampe, Patarin and Seurin (2012) proved security of $2^{\frac{t}{t+2}n}$ for all even values of $t$, thus "barely" falling short of the desired $2^{\frac{t}{t+1}n}$.
Our contribution in this work is to prove the long-sought-for security bound of $2^{\frac{t}{t+1}n}$, up to a constant multiplicative factor depending on $t$. Our method is essentially an application of Patarin's H-coefficient technique. The proof contains some coupling-like and inclusion-exclusion ideas, but the main trick that pushes the computations through is to stick with the combinatorics and to refrain from rounding any quantities too early. For the reader's interest, we include a self-contained tutorial on the H-coefficient technique.
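As a concrete illustration of the definition above, the following minimal sketch implements a key-alternating cipher with the public permutations instantiated as fixed random permutations of $\{0,1\}^n$; the block size, round count, and seed are toy values of our choosing, for illustration only.

```python
# E_k(x) = k_t XOR P_t(... k_1 XOR P_1(k_0 XOR x) ...)
import random

N = 8                      # toy block size in bits
T = 3                      # number of rounds
rng = random.Random(2013)

# T fixed public "random" permutations of {0, ..., 2^N - 1}
perms = []
for _ in range(T):
    p = list(range(2 ** N))
    rng.shuffle(p)
    perms.append(p)

def encrypt(keys, x):
    # keys = (k_0, ..., k_T): XOR a round key, apply the next permutation
    y = x ^ keys[0]
    for i in range(T):
        y = perms[i][y] ^ keys[i + 1]
    return y

keys = [rng.randrange(2 ** N) for _ in range(T + 1)]
# sanity check: E_k is a permutation of the domain
assert len({encrypt(keys, x) for x in range(2 ** N)}) == 2 ** N
```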
Identity-based Aggregate Signatures with Verifiable Single Ones
In an aggregate signature scheme, different signatures from different signers on different messages can be aggregated to reduce the cost of computation and communication. Using an identity-based signature method, anyone can verify signatures from the identity of the signer, without transmitting certificates. Currently, in most identity-based aggregate signature schemes, aggregate signature verification might require complex pairing operations, or some interaction among the signers might be required. In addition, the individual signatures in those aggregate signatures are often insecure or restricted to special scenarios, which does not satisfy the requirement that an individual signature can be used independently and can also be aggregated on demand. This paper addresses this issue by proposing an identity-based aggregate signature scheme in which an individual signature can be securely and conveniently used. Our scheme is efficient, requiring only a constant number of pairing operations, and different signers can concurrently sign different messages. The security of our scheme is proved in the random oracle model.
Towards Efficient Private Distributed Computation on Unbounded Input Streams
In the problem of private ``swarm'' computing, $n$ agents wish to securely and distributively perform a computation on common inputs, in such a way that even if the entire memory contents of some of them are exposed, no information is revealed about the state of the computation. Recently, Dolev, Garay, Gilboa and Kolesnikov [ICS 2011] considered this problem in the setting of information-theoretic security, showing how to perform such computations on input streams of {\em unbounded length}. The cost of their solution, however, is exponential in the size of the Finite State Automaton (FSA) computing the function.
In this work we are interested in efficient (i.e., polynomial-time) computation in the above model, at the expense of {\em minimal} additional assumptions. Relying on the existence of one-way functions, we show how to process unbounded inputs (of length polynomial in the security parameter, of course) at a cost {\em linear} in $m$, the number of FSA states. In particular, our algorithms achieve the following:
– In the case of $(n,n)$-reconstruction (i.e., in which all $n$ agents participate in the reconstruction of the distributed computation) and at most $n-1$ agents are corrupted, the agent storage, the time required to process each input symbol, and the time complexity for reconstruction are all $O(mn)$.
– In the case of $(n-t,n)$-reconstruction (where only $n-t$ agents take part in the reconstruction) and at most $t$ agents are corrupted, the agents' storage and the time required to process each input symbol are $O(m{n-1 \choose n-t})$. The complexity of reconstruction is $O(mt)$.
We achieve the above through a carefully orchestrated use of pseudo-random generators and secret-sharing, and in particular a novel share re-randomization technique which might be of independent interest.
Designing a Hybrid Attribute-Based Encryption Scheme Supporting Dynamic Attributes
This article presents the design of a novel hybrid attribute-based encryption scheme. The scheme is attribute-based, as it allows encrypting under logical combinations of attributes, i.e., properties that users satisfy. It is hybrid, as it combines ciphertext-policy attribute-based encryption (CP-ABE) with location-based encryption (LBE) on the level of symmetric keys. It can efficiently handle dynamic attributes with continuous values, like location, even in resource-constrained settings.
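To illustrate the kind of symmetric-key-level combination the abstract describes, here is a minimal sketch under our own interface assumptions: the data-encryption key is derived from both a CP-ABE-protected secret and a location-derived secret, so decryption requires both. The KDF choice (SHA-256 over a labeled concatenation) is illustrative, not the paper's construction.

```python
import hashlib

def derive_data_key(k_abe: bytes, k_location: bytes) -> bytes:
    # Both components are needed to reconstruct the symmetric data key:
    # k_abe is recovered by satisfying the CP-ABE policy, k_location by
    # being at the required location (LBE).
    return hashlib.sha256(b"CP-ABE||LBE:" + k_abe + k_location).digest()
```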
Comparing the Pairing Efficiency over Composite-Order and Prime-Order Elliptic Curves
We provide software implementation timings for pairings over composite-order and prime-order elliptic curves. Composite orders must be large enough to be infeasible to factor; in the literature they are moduli with 2 up to 5 large prime factors. Size recommendations exist for two-prime RSA moduli, and we extend the results of Lenstra concerning RSA modulus sizes to multi-prime moduli, for various security levels. We then implement a Tate pairing over a composite-order supersingular curve and an optimal ate pairing over a prime-order Barreto-Naehrig curve, both at the 128-bit security level. We use our implementation timings to deduce the total cost of the homomorphic encryption scheme of Boneh, Goh and Nissim and its translation by Freeman in the prime-order setting. We also compare the efficiency of the unbounded Hierarchical Identity Based Encryption protocol of Lewko and Waters and its translation by Lewko in the prime-order setting. Our results strengthen the previously observed inefficiency of composite-order bilinear groups and advocate the use of prime-order groups whenever possible in protocol design.
Computing on Authenticated Data for Adjustable Predicates
The notion of P-homomorphic signatures, introduced by Ahn et al. (TCC 2012), generalizes various approaches for public computations on authenticated data. For a given predicate P anyone can derive a signature for a message m' from the signatures of a set of messages M, as long as P(M,m')=1. This definition hence comprises notions and constructions for concrete predicates P such as homomorphic signatures and redactable signatures.
In our work we address the question of how to combine Pi-homomorphic schemes for different predicates P1,P2,... to create a richer and more flexible class of supported predicates. One approach is to statically combine schemes for predicates into new schemes for logical formulas over the predicates, such as a scheme for AND (P1 AND P2). The other approach for more flexibility is to derive schemes which allow the signer to dynamically decide which predicate to use when signing a message, instead of supporting only a single, fixed predicate.
We present two main results. One is to show that one can indeed devise solutions for the static combination for AND, and for dynamically adjustable solutions for choosing the predicate on the fly. Moreover, our constructions are practical and add only a negligible overhead. The other main result is an impossibility result for static combinations. Namely, we prove that, in contrast to the case of AND, many other formulas like the logical OR (P1 OR P2) and the NOT (NOT P) do not admit generic combinations through so-called canonical constructions. This implies that one cannot rely on general constructions in these cases, but must use other methods instead, like finding new predicate-specific solutions from scratch.
Election Verifiability or Ballot Privacy: Do We Need to Choose?
We propose a new encryption primitive, \emph{commitment consistent encryption} (CCE), and instances of this primitive that enable building the first universally verifiable voting schemes with a perfectly private audit trail (PPAT) and practical complexity. That is:
– the audit trail that is published for verifying elections guarantees everlasting privacy, and
– the computational load required from the participants is only increased by a small constant factor compared to traditional voting schemes, and is optimal in the sense of Cramer, Gennaro and Schoenmakers~\cite{CGS97}.
These properties make it possible to introduce election verifiability in large scale elections as a pure benefit, that is, without loss of privacy compared to a non-verifiable scheme and at a similar level of efficiency.
We propose different approaches for constructing voting schemes with PPAT from CCE, as well as two efficient CCE constructions: one is tailored for elections with a small number of candidates, while the second is suitable for elections with complex ballots.
Optical PUFs Reloaded
We revisit optical physical unclonable functions (PUFs), which were proposed by Pappu et al. in their seminal first publication on PUFs [40, 41]. The first part of the paper treats non-integrated optical PUFs. Their security against modeling attacks is analyzed, and we discuss new image transformations that maximize the PUF's output entropy while possessing similar error correction capacities as previous approaches [40, 41]. Furthermore, the influence of using more than one laser beam, varying laser diameters, and smaller scatterer sizes is systematically studied. Our findings enable the simple enhancement of an optical PUF's security without additional hardware costs. Next, we discuss the novel application of non-integrated optical PUFs as so-called "Certifiable PUFs". The latter are useful to achieve practical security in advanced PUF-protocols, as recently observed by Rührmair and van Dijk at Oakland 2013 [48]. Our technique is the first mechanism for Certifiable PUFs in the literature, answering an open problem posed in [48].
In the second part of the paper, we turn to integrated optical PUFs. We build the first prototype of an integrated optical PUF that functions without moving components and investigate its security. We show that these PUFs can surprisingly be attacked by machine learning techniques if the employed scattering structure is linear, and if the raw interference images of the PUF are available to the adversary. Our result enforces the use of non-linear scattering structures within integrated PUFs. The quest for suitable materials is identified as a central, but currently open research problem. Our work makes intensive use of two prototypes of optical PUFs. The presented integratable optical PUF prototype is, to our knowledge, the first of its kind in the literature.
Remotegrity: Design and Use of an End-to-End Verifiable Remote Voting System
We propose and implement a cryptographically end-to-end verifiable (E2E) remote voting system for absentee voters and report on its deployment in a binding municipal election in Takoma Park, Maryland. Remotegrity is a hybrid mail/internet extension to the Scantegrity in-person voting system, enabling secure, electronic return of vote-by-mail ballots. It provides voters with the ability to detect unauthorized modifications to their cast ballots made by either malicious client software or a corrupt election authority—two threats not previously studied in combination. Not only can the voter detect such changes, they can prove it to a third party without giving up ballot secrecy.
On the Impacts of Mathematical Realization over Practical Security of Leakage Resilient Cryptographic Schemes
In the real world, transforming an abstract, generic cryptographic scheme into an actual physical implementation usually involves two processes: mathematical realization at the algorithmic level and physical realization at the implementation level. In the former, the abstract, generic scheme is transformed into an exact, specific mathematical scheme; in the latter, the output of the mathematical realization is transformed into a physical cryptographic module that runs as software, hardware, or a combination of both. In the black-box model (i.e., the leakage-free setting), a cryptographic scheme can be mathematically realized without affecting its theoretical security as long as the mathematical components meet the required cryptographic properties. However, no previous work has formally shown whether a leakage resilient cryptographic scheme can be mathematically realized in existing ways without affecting its practical security.
Our results give a negative answer to this important question by introducing attacks against several kinds of mathematical realization of a practical leakage resilient cryptographic scheme. They show that there may be a big gap between the theoretical and practical tolerable leakage rates of the same leakage resilient cryptographic scheme if the mathematical components of the realization are not provably secure in the leakage setting. Therefore, on the one hand, we suggest that every (practical) leakage resilient cryptographic scheme should come with a mathematical realization for which practical security can be guaranteed. On the other hand, our results motivate the design of advanced leakage resilient cryptographic schemes whose practical security is independent of the specific details of their mathematical realization.
A Closer Look at HMAC
Bellare, Canetti and Krawczyk~\cite{FOCS:BelCanKra96} show that cascading an $\epsilon$-secure (fixed input length) PRF gives an $O(\epsilon n q)$-secure (variable input length) PRF when making at most $q$ prefix-free queries of length $n$ blocks. We observe that this translates to the same bound for NMAC (which is the cascade without the prefix-free requirement, but with an additional application of the PRF at the end), and give a matching attack, showing this bound is tight. This contradicts the $O(\epsilon n)$ bound claimed by Koblitz and Menezes~\cite{KobMen12}.
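For reference, the following is a minimal sketch of the cascade and NMAC constructions discussed above, with the fixed-input-length PRF $f$ instantiated illustratively from SHA-256; this shows the syntax of the constructions, not anything about their security.

```python
import hashlib

def f(key: bytes, block: bytes) -> bytes:
    # toy fixed-input-length PRF: 32-byte key and block, 32-byte output
    return hashlib.sha256(key + block).digest()

def cascade(key: bytes, blocks):
    # Casc_f(k, m_1..m_n): chain the PRF, keying each call with the
    # previous output
    state = key
    for b in blocks:
        state = f(state, b)
    return state

def nmac(k1: bytes, k2: bytes, blocks):
    # NMAC drops the prefix-free requirement but adds one final PRF
    # application under an independent key k2
    return f(k2, cascade(k1, blocks))
```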
A new criterion for avoiding the propagation of linear relations through an Sbox (Full version)
In several cryptographic primitives, Sboxes of small size are used to provide nonlinearity. After several iterations, all the output bits of the primitive are ideally supposed to depend in a nonlinear way on all of the input variables. However, in some cases, it is possible to find some output bits that depend in an affine way on a small number of input bits if the other input bits are fixed to a well-chosen value. Such situations are for example exploited in cube attacks or in attacks like the one presented by Fuhr against the hash function Hamsi. Here, we define a new property for nonlinear Sboxes, named $(v,w)$-linearity, which means that $2^w$ components of an Sbox are affine on all cosets of a $v$-dimensional subspace. This property is related to the generalization of the so-called Maiorana-McFarland construction for Boolean functions. We show that this concept quantifies the ability of an Sbox to propagate affine relations. As a proof of concept, we exploit this new notion for analyzing and slightly improving Fuhr's attack against Hamsi and we show that its success strongly depends on the $(v,w)$-linearity of the involved Sbox.
Cryptophia's Short Combiner for Collision-Resistant Hash Functions
A combiner for collision-resistant hash functions takes two functions as input and implements a hash function with the guarantee that it is collision-resistant if one of the functions is. It has been shown that such a combiner cannot have short output (Pietrzak, Crypto 2008); that is, its output length is lower bounded by roughly $2n$ if the ingoing functions output $n$-bit hash values.
In this paper, we present two novel definitions for hash function combiners that allow one to bypass the lower bound: the first is an extended semi-black-box definition; the second is a new game-based, fully black-box definition which allows one to better analyze combiners in idealized settings such as the random-oracle model or the indifferentiability framework (Maurer, Renner, and Holenstein, TCC 2004). We then present a new combiner which is robust for pseudorandom functions (in the traditional sense), which does not increase the output length of its underlying functions, and which is collision-resistant in the indifferentiability setting. Our combiner is particularly relevant in practical scenarios, where security proofs are often given in idealized models; in the same idealized model, our combiner yields strong security guarantees while remaining short.
New modular multiplication and division algorithms based on continued fraction expansion
In this paper, we apply results on number systems based on continued fraction expansions to modular arithmetic. We provide two new algorithms in order to compute modular multiplication and modular division. The presented algorithms are based on the Euclidean algorithm and are of quadratic complexity.
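For context, the classical Euclidean-algorithm-based modular division (via the extended GCD) to which the paper's continued-fraction algorithms relate; this is the textbook method, not the paper's new algorithm.

```python
def ext_gcd(a, b):
    # returns (g, u, v) with u*a + v*b == g == gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, u, v = ext_gcd(b, a % b)
    return g, v, u - (a // b) * v

def mod_div(x, y, n):
    # computes x / y mod n, assuming gcd(y, n) == 1
    g, u, _ = ext_gcd(y % n, n)
    if g != 1:
        raise ValueError("y is not invertible modulo n")
    return (x * u) % n

assert mod_div(10, 3, 17) == (10 * pow(3, -1, 17)) % 17
```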
CloudHKA: A Cryptographic Approach for Hierarchical Access Control in Cloud Computing
Cloud services have been blooming recently. They provide a convenient way for data accessing, sharing, and processing. A key ingredient of successful cloud services is controlling data access while considering their specific features: a great quantity of outsourced data, a large number of users, honest-but-curious cloud servers, frequently changing user sets, dynamic access control policies, and data access from lightweight mobile devices. This paper addresses a cryptographic key assignment problem for enforcing a hierarchical access control policy over cloud data.
We propose a new hierarchical key assignment scheme, CloudHKA, that observes the Bell-LaPadula security model and deals with the user revocation issue efficiently and practically. We use CloudHKA to encrypt outsourced data so that the data are secure against honest-but-curious cloud servers. CloudHKA possesses almost all advantages of the related schemes, e.g., each user needs to store only one secret key, it supports dynamic user sets and access hierarchies, and it is provably secure against collusion attacks. In particular, CloudHKA provides the following distinct features that make it more suitable for controlling access to cloud data. (1) A user needs only constant computation time for each data access. (2) The encrypted data are securely updatable, so that user revocation can prevent a revoked user from decrypting newly and previously encrypted data. Notably, the updates can be outsourced by using public information only. (3) CloudHKA is secure against the legal access attack, launched by an authorized but malicious user who pre-downloads the information needed for decrypting data ciphertexts during his authorization period and uses it for decryption even after he is revoked. Note that the pre-downloaded information is often only a small portion of the encrypted data, e.g., the header-cipher in a hybrid encrypted data ciphertext. (4) Each user can be flexibly granted the access right of Write or Read, or both.
Self-blindable Credential: Towards LightWeight Anonymous Entity Authentication
We are witnessing the rapid expansion of smart devices in our daily life. The need for individual privacy protection calls for anonymous entity authentication techniques with affordable efficiency upon the resource-constrained smart devices. Towards this objective, in this paper we propose self-blindable credential, a lightweight anonymous entity authentication primitive. We provide a formulation of the primitive and present two concrete instantiations. The first scheme implements verifier-local revocation and the second scheme enhances the former with forward security. Our analytical performance results show that our schemes outperform relevant existing schemes.
Privacy-Preserving Billing for e-Ticketing Systems in Public Transportation
Many electronic ticketing systems for public transportation have been deployed around the world. Using the example of Singapore's EZ-Link system, we show that it is easy to invade a traveller's privacy and obtain his travel records in a real-world system. We then propose encrypted bill processing of the travel records, preventing any kind of privacy breach. Clear advantages of using bill processing instead of electronic cash are the possibility of privacy-preserving data mining analyses by the transportation company, and monthly billing, entailing a tighter customer relationship and advanced tariffs. Moreover, we provide an implementation to demonstrate the feasibility of our solution.
Practical and Employable Protocols for UC-Secure Circuit Evaluation over $Z_n$
We present a set of new, efficient, universally composable two-party protocols for evaluating reactive arithmetic circuits modulo $n$, where $n$ is a safe RSA modulus of unknown factorization. Our protocols are based on a homomorphic encryption scheme with message space $Z_n$, zero-knowledge proofs of existence, and a novel "mixed" trapdoor commitment scheme. They are proven secure against adaptive corruptions (assuming secure erasures) under standard assumptions in the CRS model (without random oracles), and appear to be the most efficient protocols that satisfy these security requirements. In contrast to prior protocols, we provide facilities that allow for the use of our protocols as building blocks of higher-level protocols. An additional contribution of this paper is a universally composable construction of the variant of the Dodis-Yampolskiy oblivious pseudorandom function in a group of order $n$, as originally proposed by Jarecki and Liu.
Computing Privacy-Preserving Edit Distance and Smith-Waterman Problems on the GPU Architecture
This paper presents privacy-preserving, parallel computing algorithms on a graphics processing unit (GPU) architecture to solve the Edit-Distance (ED) and the Smith-Waterman (SW) problems. The ED and SW problems are formulated as dynamic programming (DP) problems, which are solved using Secure Function Evaluation (SFE) to meet privacy protection requirements under the semi-honest security model. Major parallelization techniques include mapping of variables to support collision-free parallel memory access, scheduling and mapping of gate garblers on GPU devices to maximize GPU device utilization, and latency minimization of context switches between computing steps in the DP matrix. A pipelined GPU-CPU interface is developed to mask the latency of CPU housekeeping components.
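For reference, this is the standard (non-private) dynamic-programming recurrence for edit distance that such a protocol evaluates obliviously inside garbled circuits; shown here in the clear, purely as an illustration of the DP formulation.

```python
def edit_distance(a: str, b: str) -> int:
    m, n = len(a), len(b)
    prev = list(range(n + 1))              # D[0][j] = j
    for i in range(1, m + 1):
        cur = [i] + [0] * n                # D[i][0] = i
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j] + 1,          # deletion
                         cur[j - 1] + 1,       # insertion
                         prev[j - 1] + cost)   # substitution / match
        prev = cur
    return prev[n]

assert edit_distance("kitten", "sitting") == 3
```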
The new solutions were tested on a Xeon E5504 at 2GHz plus a GTX-680 GPU (as generator), connected to an i7-3770K at 3.5GHz plus a GTX-680 GPU (as evaluator) via local Internet. A 5000×5000 8-bit-alphabet ED problem requires roughly 1.88 billion non-free gates and a running time of around 26 minutes (roughly 1.209×10^6 gates/second). A 60×60 SW problem is computed in around 16.79 seconds. Compared to the state-of-the-art performance [5], we achieve acceleration factors of 12.5× for the ED problem and 24.7× for the SW problem.
From oblivious AES to efficient and secure database join in the multiparty setting
The AES block cipher is an important cryptographic primitive with many applications. In this work, we describe how to efficiently implement the AES-128 block cipher in the multiparty setting where the key and the plaintext are both in secret-shared form. In particular, we study several approaches for AES S-box substitution based on oblivious table lookup and circuit evaluation. Given this secure AES implementation, we build a universally composable database join operation for secret-shared tables. The resulting protocol scales almost linearly with the database size and can join medium-sized databases with 100,000 rows in a few minutes, which makes many privacy-preserving data mining algorithms feasible in practice. All the practical implementations and performance measurements are done on the Sharemind secure multi-party computation platform.
Breaking NLM-MAC Generator
The NLM generator, designed by HoonJae Lee, SangMin Sung, and HyeongRag Kim, is a strengthened version of the LM-type summation generator with two memory bits; it uses a non-linear combination of a linear feedback shift register and a non-linear feedback shift register. Recently, the cipher, along with a message authentication function, has been proposed for a lightweight communication framework in wireless sensor networks. The generator has also been used in two different RFID mutual authentication protocols and in a protocol for secure Internet access. This paper points out critical cryptographic weaknesses leading to key recovery and forgery attacks. We prove that the internal state of NLM-$n$ can be recovered with time complexity about $n^{\log 7 \times 2}$, where the total length of the internal state is $2\cdot n+2$ bits. The attack needs about $n^2$ keystream bits. We also show that an attacker is able to forge any MAC tag in real time given only one (MAC tag, ciphertext) pair. The proposed attacks are completely practical and break the scheme with negligible error probability.
Non-malleable Codes from Additive Combinatorics
Non-malleable codes provide a useful and meaningful security guarantee in situations where traditional error-correction (and even error-detection) is impossible; for example, when the attacker can completely overwrite the encoded message. Informally, a code is non-malleable if the message contained in a modified codeword is either the original message, or a completely unrelated value. Although such codes do not exist if the family of "tampering functions" $\mathcal{F}$ is completely unrestricted, they are known to exist for many broad tampering families $\mathcal{F}$. One such natural family is the family of tampering functions in the so-called {\em split-state} model. Here the message $m$ is encoded into two shares $L$ and $R$, and the attacker is allowed to arbitrarily tamper with $L$ and $R$ {\em individually}. Split-state tampering arises in many realistic applications, such as the design of non-malleable secret sharing schemes, motivating the question of designing efficient non-malleable codes in this model.
Prior to this work, non-malleable codes in the split-state model received considerable attention in the literature, but were either (1) constructed in the random oracle model [DPW10], or (2) relied on advanced cryptographic assumptions (such as non-interactive zero-knowledge proofs and leakage-resilient encryption) [LL12], or (3) could only encode 1-bit messages [DKO13]. As our main result, we build the first efficient, multi-bit, information-theoretically-secure non-malleable code in the split-state model.
The heart of our construction uses the following new property of the inner-product function $\langle L,R\rangle$ over the vector space $\mathbb{F}_p^n$ (for any prime $p$ and large enough dimension $n$): if $L$ and $R$ are uniformly random over $\mathbb{F}_p^n$, and $f,g: \mathbb{F}_p^n \rightarrow \mathbb{F}_p^n$ are two arbitrary functions on $L$ and $R$, the joint distribution $(\langle L,R\rangle, \langle f(L),g(R)\rangle)$ is ``close'' to the convex combination of "affine distributions" $\{(U, cU+d) \mid c,d \in \mathbb{F}_p\}$, where $U$ is uniformly random in $\mathbb{F}_p$. In turn, the proof of this surprising property of the inner-product function critically relies on some results from additive combinatorics, including the so-called {\em Quasi-polynomial Freiman-Ruzsa Theorem} (which was recently established by Sanders [San12] as a step towards resolving the Polynomial Freiman-Ruzsa conjecture [Gre05]).
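The following toy sketch shows the basic shape of an inner-product split-state encoding: a message $m \in \mathbb{F}_p$ is encoded as two vectors $(L, R)$ with $\langle L,R\rangle = m$, which an attacker may tamper with only independently. The prime, dimension, and sampling details here are illustrative only; the actual construction chooses its parameters and distributions carefully.

```python
import random

p, n = 101, 8  # toy prime and dimension

def encode(m):
    L = [random.randrange(1, p) for _ in range(n)]     # keep L[-1] nonzero
    R = [random.randrange(p) for _ in range(n - 1)]
    partial = sum(l * r for l, r in zip(L, R)) % p
    # fix the last coordinate of R so that <L, R> = m
    R.append(((m - partial) * pow(L[-1], -1, p)) % p)
    return L, R

def decode(L, R):
    return sum(l * r for l, r in zip(L, R)) % p

m = 42
assert decode(*encode(m)) == m
```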
Selecting polynomials for the Function Field Sieve
The Function Field Sieve algorithm is dedicated to computing discrete logarithms in a finite field GF(q^n), where q is a small prime power. The scope of this article is to select good polynomials for this algorithm by defining and measuring the size property and the so-called root and cancellation properties. In particular, we present an algorithm for rapidly testing a large set of polynomials. Our study also explains the behaviour of inseparable polynomials; in particular, we give an easy way to see that the algorithm encompasses the Coppersmith algorithm as a particular case.
Quantum algorithms for the subset-sum problem
This paper introduces a subset-sum algorithm with heuristic asymptotic cost exponent below 0.25. The new algorithm combines the 2010 Howgrave-Graham--Joux subset-sum algorithm with a new streamlined data structure for quantum walks on Johnson graphs.
On Evaluating Circuits with Inputs Encrypted by Different Fully Homomorphic Encryption Schemes
We consider the problem of evaluating circuits whose inputs are
encrypted with possibly different encryption schemes. Let
$\mathcal{C}$ be any circuit with input $x_1, \dots, x_t \in
\{0,1\}$, and let $\mathcal{E}_i$, $1 \le i \le t$, be (possibly)
different fully homomorphic encryption schemes, whose encryption
algorithms are $\Enc_i$. Suppose $x_i$ is encrypted with
$\mathcal{E}_i$ under a public key $pk_i$, say $c_i \leftarrow
\Enc_i({pk_i}, x_i)$. Is there any algorithm $\Evaluate$ such that
$\Evaluate(\mathcal{C}, \langle \mathcal{E}_1, pk_1, c_1\rangle,
\dots, \langle \mathcal{E}_t, pk_t, c_t\rangle)$ returns a
ciphertext $c$ that, once decrypted, equals $\mathcal{C}(x_1, \dots,
x_t)$? We propose a solution to this seemingly impossible problem
with the number of different schemes and/or keys limited to a small
value. Our result also provides a partial solution to the open
problem of converting any FHE scheme to a multikey FHE scheme.
Discrete logarithm in GF(2^809) with FFS
The year 2013 has seen several major complexity advances for the discrete logarithm problem in multiplicative groups of small characteristic finite fields. These outmatch, asymptotically, the Function Field Sieve (FFS) approach, which was so far the most efficient algorithm known for this task. Yet, on the practical side, it is not clear whether the new algorithms are uniformly better than FFS. This article presents the state of the art with regard to the FFS algorithm, and reports data from a record-sized discrete logarithm computation in a prime-degree extension field.
Fast Two-Party Secure Computation with Minimal Assumptions
All recent implementations of two-party secure computation protocols require specific complexity assumptions for their correctness and/or efficiency (e.g., DDH, homomorphic encryption, Sigma protocols for specific languages). We propose and implement a Yao-based protocol for secure two-party computation against malicious adversaries that enjoys the following benefits: (1) it assumes the minimal hardness assumption, that is, oblivious transfer; (2) it has constant round complexity; (3) its overhead is O(k) times that of the Yao protocol, which is the best one could hope for using the circuit-level cut-and-choose technique to achieve malicious security; and (4) it is embarrassingly parallelizable, in that its depth complexity is roughly the same as that of the honest-but-curious Yao protocol. To achieve these properties, we use the cut-and-choose paradigm, but solve the three main problems for achieving malicious security (input consistency, selective failure, and output authentication) in a novel and efficient manner. In particular, we propose an efficient witness-indistinguishable proof for output authentication; we suggest the use of an auxiliary 2-universal circuit to ensure the generator's input consistency; and we advance the performance of the state-of-the-art approach defending against the selective failure attack. Not only does our protocol require weaker complexity assumptions, but our implementation of this protocol also demonstrates a several-factor improvement over the best prior two-party secure computation protocol, which relies on specific number-theoretic assumptions.
On the (re)design of an FPGA-based PUF
Physically Unclonable Functions (PUFs) represent a promising basis for solutions to problems such as secure key storage, and delivery of higher-level applications such as authentication. Although effective PUF designs exist for CMOS-based technologies (e.g., arbiter PUFs), their implementation on FPGAs remains a challenge (e.g., because of their routing characteristics). With this in mind, Anderson described a PUF design specifically tailored towards FPGAs. In this paper we identify and analyse a flaw in said design which renders it impractical for security-critical use. We describe two alternative solutions (relating to different tradeoffs) that eliminate this flaw.
On the Impossibility of Cryptography with Tamperable Randomness
We initiate a study of the security of cryptographic primitives in the presence of efficient tampering attacks on the randomness of honest parties. More precisely, we consider $p$-tampering attackers that may \emph{efficiently} tamper with each bit of the honest parties' random tape with probability $p$, but have to do so in an ``online'' fashion. Our main result is a strong negative result: we show that any secure encryption scheme, bit commitment scheme, or zero-knowledge protocol can be ``broken'' with probability $p$ by a $p$-tampering attacker. The core of this result is a new Fourier-analytic technique for biasing the output of bounded-value functions, which may be of independent interest.
We also show that this result cannot be extended to primitives such as signature schemes and identification protocols: assuming the existence of one-way functions, such primitives can be made resilient to $1/\mathrm{poly}(n)$-tampering attacks, where $n$ is the security parameter.
Certificateless Signatures: Structural Extensions of Security Models and New Provably Secure Schemes
Certificateless signatures (CLSs) were introduced to solve the key escrow problem of identity-based signatures. In CLS, the full private key is determined by neither the user nor the trusted third party. However, a certificate of a public key is not required in CLS schemes; therefore, anyone can replace the public key. In the formal security treatment, there are two types of adversaries: the Type I adversary acts as an outsider, and the Type II adversary as the key generation center. Huang et al. took several security issues into consideration and provided corresponding security models; they described three kinds of Type I adversaries with different security levels. Moreover, Tso et al. found the existence of another Type I adversary that was not discussed by Huang et al., although such adversaries remain subtle to define precisely. In this paper, we further consider public key replacement and strong unforgeability in certificateless signatures. All feasible situations are revisited, along with the abilities of the adversaries. Additionally, structural extensions of the security models are proposed with respect to the described public key replacement and strong unforgeability. Moreover, we present several schemes, analyze their security against different adversaries, and describe our research results. Finally, one of the proposed certificateless short signature schemes is proven to achieve the strongest security level.
A generalisation of Miller's algorithm and applications to pairing computations on abelian varieties
In this paper, we use the theory of theta functions to generalize to all abelian varieties the usual Miller's algorithm to compute a function associated to a principal divisor. We also explain how to use the Frobenius morphism on abelian varieties defined over a finite field in order to shorten the loop of the Weil and Tate pairing algorithms. This extends preceding results about ate and twisted ate pairings to all abelian varieties. Then, building upon the two preceding ingredients, we obtain a variant of optimal pairings on abelian varieties. Finally, by introducing new addition formulas, we explain how to compute optimal pairings on Kummer varieties. We compare, in terms of performance, the resulting algorithms to the algorithms already known in the genus one and two cases.
Improved Differential Fault Analysis on ARIA using Small Number of Faults
In [15], Li et al. first proposed a differential fault analysis on ARIA-128. This attack requires an average of 45 random byte fault injections. In 2012, Park et al. proposed an improved DFA using 33 random byte fault injections. Kim also proposed a differential fault analysis based on a multi-byte fault model, in which the number of fault injections is reduced to 13; if access to the decryption oracle is allowed, only 7 faults are required. In this paper, we propose an improved differential fault analysis on ARIA. Based on a random byte fault model, the proposed attacks can recover the secret key of ARIA-128/192/256 using 6 fault injections within a few minutes. Moreover, in the cases of ARIA-128 and ARIA-256, it is possible to recover the secret key using only 4 fault injections, under a fault assumption where an attacker can induce some faults during both the encryption and decryption processes.
Our results on ARIA-192/256 are the first known DFA results on these variants.
Power Analysis Attacks against FPGA Implementations of KLEIN
KLEIN is a family of block ciphers proposed by Zheng Gong et al. at RFIDSec 2011, whose lightweight features are suitable for resource-constrained devices. However, the original design of KLEIN does not consider potential power analysis attacks. This paper presents power analysis attacks against an FPGA implementation of KLEIN by the authors of KLEIN. The attacking strategy, attacking point, and complexity of our power analysis attacks against KLEIN are discussed in detail. Besides, the implementation of the attacks is described and the experimental data is given. Extensive attack experiments confirm that the success probability of our attacks is nearly 100%. Finally, a defensive countermeasure against our attacks is proposed.
Ideal and Perfect Hierarchical Secret Sharing Schemes based on MDS codes
An ideal conjunctive hierarchical secret sharing scheme, constructed from Maximum Distance Separable (MDS) codes, is proposed in this paper. The scheme is what we call computationally perfect: an authorized set can always reconstruct the secret in polynomial time, whereas for an unauthorized set this is computationally hard. Also, in our scheme, the size of the ground field is independent of the parameters of the access structure. Further, the scheme is efficient and requires $O(n^3)$ operations, where $n$ is the number of participants.
Keywords: Computationally perfect, Ideal, Secret sharing scheme, Conjunctive hierarchical access structure, Disjunctive hierarchical access structure, MDS code.
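As background for the MDS-code connection above, Shamir's threshold scheme is the classical ideal secret sharing built from an MDS (Reed-Solomon) code; the minimal sketch below illustrates that baseline and is not the paper's hierarchical construction.

```python
import random

P = 2**127 - 1  # a Mersenne prime as the field modulus (illustrative)

def share(secret, k, n):
    # random degree-(k-1) polynomial with constant term = secret;
    # shares are evaluations at x = 1..n (a Reed-Solomon codeword)
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    eval_at = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, eval_at(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at 0 over GF(P)
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

s = 123456789
assert reconstruct(share(s, 3, 5)[:3]) == s  # any 3 of 5 shares suffice
```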
A family of 6-to-4-bit S-boxes with large linear branch number
We propose a family of 6-to-4-bit S-boxes with linear branch number 3. Since they also fulfill further desirable properties, such S-boxes can serve as building blocks for various block ciphers.
Enhanced Ownership Transfer Protocol for RFID in an Extended Communication Model
Ownership transfer protocols for RFID allow transferring the rights over a tag from a current owner to a new owner in a secure and private way. Recently, Kapoor and Piramuthu proposed two schemes which overcome most of the security weaknesses detected in previously published protocols. This paper reviews that work and points out that such schemes still present some practical and security issues. In particular, they do not manage to guarantee the privacy of the new owner without the presence of a Trusted Third Party, and we find that the assumed communication model is not suitable for many practical scenarios. We then propose a lightweight protocol that can be used in a wider range of applications, and which incorporates recently defined security properties such as Tag Assurance, Undeniable Ownership Transfer, Current Ownership Proof and Owner Initiation. Finally, this protocol is complemented with a proposed Key Change Protocol, based on noisy tags, which provides privacy to the new owner without either resorting to a Trusted Third Party or assuming an Isolated Environment.
On the (Im)possibility of Projecting Property in Prime-Order Setting
Projecting bilinear pairings have frequently been used for designing cryptosystems since they were first derived from composite order bilinear groups. There have been only a few studies on the (im)possibility of projecting bilinear pairings. Groth and Sahai (EUROCRYPT 2008) showed that projecting bilinear pairings can be achieved in a prime-order group setting. They constructed both projecting asymmetric bilinear pairings and projecting symmetric bilinear pairings, where a bilinear pairing $e$ is symmetric if it satisfies $e(g,h)=e(h,g)$ for any group elements $g$ and $h$; otherwise, it is asymmetric. Subsequently, Freeman (EUROCRYPT 2010) generalized Groth-Sahai's projecting asymmetric bilinear pairings.
In this paper, we provide impossibility results on projecting bilinear pairings in a prime-order group setting. More precisely, we specify the lower bounds of
1. the image size of a projecting asymmetric bilinear pairing
2. the image size of a projecting symmetric bilinear pairing
3. the computational cost for a projecting asymmetric bilinear pairing
4. the computational cost for a projecting symmetric bilinear pairing
in a prime-order group setting naturally induced from the $k$-linear assumption, where the computational cost means the number of generic operations.
Our lower bounds regarding a projecting asymmetric bilinear pairing are tight, i.e., it is impossible to construct a more efficient projecting asymmetric bilinear pairing than the constructions of Groth-Sahai and Freeman. However, our lower bounds regarding a projecting symmetric bilinear pairing differ from Groth and Sahai's results regarding a symmetric bilinear pairing; we fill these gaps by constructing projecting symmetric bilinear pairings.
In addition, on the basis of the proposed symmetric bilinear pairings, we construct more efficient instantiations of cryptosystems that essentially use the projecting symmetric bilinear pairings in a modular fashion. Example applications include new instantiations of the Boneh-Goh-Nissim cryptosystem, the Groth-Sahai non-interactive proof system, and Seo-Cheon round optimal blind signatures proven secure under the DLIN assumption. These new instantiations are more efficient than the previous ones, which are also provably secure under the DLIN assumption. These applications are of independent interest.
Security Analysis of Linearly Filtered NLFSRs
Our contribution is to apply distinguishing attacks to linearly filtered NLFSRs, used either as stand-alone primitives or in combination with filter generators, and we extend the attack to linear combinations of linearly filtered NLFSRs as well. More generally, such structures can be examined by the proposed techniques, yielding criteria for designing secure primitives. The attacks allow an attacker to mount a linear attack to distinguish the output of the cipher and recover its internal state. We also investigate the security of a modified version of the Grain stream cipher to assess its resistance to distinguishing attacks.
The Vernam cipher is robust to small deviations from randomness
The Vernam cipher (or one-time pad) has played an important role in cryptography because it is a perfect secrecy system. For example, if an English text (presented in binary) $X_1 X_2 \ldots$ is enciphered according to the formula $Z_i = (X_i + Y_i) \bmod 2$, where $Y_1 Y_2 \ldots$ is a key sequence generated by a Bernoulli source with equal probabilities of 0 and 1, then anyone who knows $Z_1 Z_2 \ldots$ has no information about $X_1 X_2 \ldots$ without knowledge of the key $Y_1 Y_2 \ldots$ (The best strategy is to guess $X_1 X_2 \ldots$ without paying attention to $Z_1 Z_2 \ldots$.) But what should one say about the secrecy of an analogous method where the key sequence $Y_1 Y_2 \ldots$ is generated by a Bernoulli source with a small bias, say $P(0) = 0.49$, $P(1) = 0.51$? To the best of our knowledge, there are no theoretical estimates for the secrecy of such a system, nor for the general case where $X_1 X_2 \ldots$ (the plaintext) and the key sequence are described by stationary ergodic processes.
We consider running-key ciphers where the plaintext and the key are generated by stationary ergodic sources and show how to estimate the secrecy of such systems. In particular, it is shown that, in a certain sense, the Vernam cipher is robust to small deviations from randomness.
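A toy illustration of the setting above: encrypting with a slightly biased key stream leaks a small statistical signal about the plaintext, unlike the true one-time pad. The bias, sample size, and all-zeros "plaintext" below are our own illustrative choices.

```python
import random

def biased_key(n, p_one=0.51, seed=None):
    rng = random.Random(seed)
    return [1 if rng.random() < p_one else 0 for _ in range(n)]

def vernam(bits, key):
    return [(x + y) % 2 for x, y in zip(bits, key)]  # Z_i = (X_i + Y_i) mod 2

n = 1_000_000
plaintext = [0] * n                       # a maximally redundant "text"
ciphertext = vernam(plaintext, biased_key(n, seed=1))
# With an unbiased key the ones-frequency of Z would be exactly 1/2 in
# expectation regardless of X; the bias shifts it toward 0.51 here,
# revealing information about the plaintext.
print(sum(ciphertext) / n)
```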
Practical Multilinear Maps over the Integers
Extending bilinear elliptic curve pairings to multilinear maps is a long-standing open problem. The first plausible construction of such multilinear maps has recently been described by Garg, Gentry and Halevi, based on ideal lattices. In this paper we describe a different construction that works over the integers instead of ideal lattices, similar to the DGHV fully homomorphic encryption scheme. We also describe a different technique for proving the full randomization of encodings: instead of Gaussian linear sums, we apply the classical leftover hash lemma over a quotient lattice. We show that our construction is relatively practical: for reasonable security parameters, a one-round 7-party Diffie-Hellman key exchange requires about $25$ seconds per party.
Collusion-Resistant Domain-Specific Pseudonymous Signatures
At ISC 2012, Bender et al. introduced the notion of domain-specific pseudonymous signatures for ID documents. With this primitive, a user can sign with domain-specific pseudonyms, that cannot be linked across domains but that are linkable in a given domain. However, their security model assumes non-collusion of malicious users, which is a strong assumption. We therefore propose improvements to their construction. Our main contribution is a new pseudonymous signature scheme based on group signatures that is collusion-resistant.