All papers in 2017 (Page 8 of 1262 results)
Making Password Authenticated Key Exchange Suitable For Resource-Constrained Industrial Control Devices
Connectivity is becoming increasingly important even for small embedded systems such as those typically found in industrial control installations. More and more use cases require secure remote user access, increasingly via handheld human-machine interfaces over wireless links such as Bluetooth. Secure operator authentication therefore becomes of utmost importance. Unfortunately, passwords with all their well-known pitfalls often remain the only practical mechanism.
We present an assessment of the security requirements for the industrial setting, illustrating that offline attacks on password-based authentication protocols should be considered a significant threat. Correspondingly, the use of a Password Authenticated Key Exchange protocol becomes desirable. We review the significant challenges faced by implementations on resource-constrained devices.
We explore the design space and show how we succeeded in tailoring a particular variant of the Password Authenticated Connection Establishment (PACE) protocol such that acceptable user-interface responsiveness was reached even in the constrained setting of an ARM Cortex-M0+ based Bluetooth low-energy transceiver running from a power budget of 1.5 mW without notable energy buffers for covering peak power transients.
Privacy-Free Garbled Circuits for Formulas: Size Zero and Information-Theoretic
Garbled circuits are of central importance in cryptography, finding widespread application in secure computation, zero-knowledge (ZK) protocols, and verifiable outsourcing of computation, to name a few. We are interested in a particular kind of garbling scheme, termed privacy-free in the literature. We show that Boolean formulas can be garbled information-theoretically in the privacy-free setting, producing no ciphertexts at all. Existing garbling schemes either rely on cryptographic assumptions (and thus require cryptographic operations to construct and evaluate garbled circuits), produce garbled circuits of non-zero size, or are restricted to low-depth formulaic circuits. Our result has both theoretical and practical implications for garbled circuits as a primitive. On the theory front, our result breaks the known lower bound of one ciphertext for garbling an AND gate in this setting. As an interesting implication of producing size-zero garbled circuits, our scheme achieves adaptive security for free. On the practical side, our garbling scheme involves only cheap XOR operations and produces size-zero garbled circuits. As a side result, we propose several interesting extensions of our scheme; namely, we show how to garble threshold and high fan-in gates.
An aspect of our garbling scheme that we believe is of theoretical interest is that it does not maintain the invariant that the garbled circuit evaluator must not at any point be in possession of both keys of any wire in the garbled circuit.
Our scheme directly finds application in ZK protocols where the verification function of the language is representable by a formulaic circuit. Such examples include Boolean formula satisfiability. The ZK protocols obtained by plugging in our scheme in the known paradigm of building ZK protocols from garbled circuits offer better proof size, while relying on standard assumptions. Furthermore, the adaptivity of our garbling scheme allows us to cast our ZK protocols in the offline-online setting and offload circuit dependent communication and computation to the offline phase. As a result, the online phase enjoys communication and computation (in terms of number of symmetric key operations) complexity that are linearly proportional to the witness size alone.
Notes on the design and analysis of SIMON and SPECK
We discuss the design rationale and analysis of the SIMON and SPECK lightweight block ciphers.
Human Computing for Handling Strong Corruptions in Authenticated Key Exchange
We propose the first user authentication and key exchange protocols that can tolerate strong corruptions on the client side. If a user happens to log in to a server from a terminal that has been fully compromised, then the user's other past and future sessions initiated from honest terminals stay secure. We define the security model for Human Authenticated Key Exchange (HAKE) protocols and first propose two generic protocols based on a human-compatible (HC) function family, password-authenticated key exchange (PAKE), commitment, and authenticated encryption. We prove our HAKE protocols secure under reasonable assumptions and discuss efficient instantiations.
We thereafter propose a variant where the human gets help from a small device such as RSA SecurID. This permits implementing an HC function family with stronger security and thus allows weakening the required assumptions on the PAKE. This leads to a very efficient HAKE that is still secure in case of strong corruptions. We believe that our work will promote further developments in the area of human-oriented cryptography.
Detecting Large Integer Arithmetic for Defense Against Crypto Ransomware
The evolution of crypto ransomware has increasingly affected real-life systems and led to severe threats to the data security of individuals and enterprises. Crypto ransomware encrypts a victim's files using either standard or custom cryptographic functions and requests a ransom from the user to recover them. In this paper, we propose a new detection and analysis approach, called ExpMonitor, which targets the ransomware's public-key cryptographic algorithms carried out on the victim's computer. ExpMonitor is based on observing public-key encryption running on the CPU. Monitoring integer multiplication instructions can detect large integer arithmetic operations, which constitute the backbone of public-key encryption. While existing detection mechanisms can only target particular cryptographic functions, our technique complements the state of the art.
Watermarking Public-key Cryptographic Functionalities and Implementations
A watermarking scheme for a public-key cryptographic functionality enables the embedding of a mark in the instance of the secret-key algorithm such that the functionality of the original scheme is maintained, while it is infeasible for an adversary to remove the mark (unremovability) or mark a fresh object without the marking key (unforgeability). Cohen et al. [STOC'16] have provided constructions for watermarking arbitrary cryptographic functionalities; the resulting schemes rely on indistinguishability obfuscation (iO) and leave two important open questions: (i) the realization of both unremovability and unforgeability, and (ii) schemes whose security reduces to simpler hardness assumptions than iO.
In this paper we provide a new definitional framework that distinguishes between watermarking cryptographic functionalities and implementations (think of ElGamal encryption being an implementation of the encryption functionality), while at the same time providing a meaningful relaxation of the watermarking model that enables both unremovability and unforgeability under minimal hardness assumptions.
In this way we can answer questions regarding the ability to watermark a given implementation of a cryptographic functionality, which is a more refined question than whether some watermarked implementation of the functionality exists. Taking advantage of our new formulation, we present the first constructions for watermarking public-key encryption that achieve both unremovability and unforgeability under minimal hardness assumptions. Our first construction enables the watermarking of any public-key encryption implementation assuming only the existence of one-way functions, with private detection of the watermark. Our second construction is at the functionality level and uses a stronger assumption (the existence of identity-based encryption (IBE)) but supports public detection of the watermark.
Multiplication and Division over Extended Galois Field GF($p^q$): A new Approach to find Monic Irreducible Polynomials over any Galois Field GF($p^q$).
Irreducible polynomials (IPs) are of utmost importance in the generation of substitution boxes in modern cryptographic ciphers. In this paper, an algorithm entitled the Composite Algorithm, using both multiplication and division over Galois fields, is demonstrated to generate all monic IPs over the extended Galois field GF($p^q$) for large values of both p and q. A somewhat more efficient Multiplication Algorithm and a still more efficient Division Algorithm are also illustrated, each with procedures to find all monic IPs over the extended Galois field GF($p^q$) for large values of both p and q. A time-complexity analysis of the three algorithms, with a comparison to Rabin's algorithm, is also presented.
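As a point of reference, here is a minimal brute-force sketch (workable only for tiny p and q, and not the paper's Composite, Multiplication, or Division algorithms): a monic polynomial of degree q over GF(p) is irreducible iff it has no monic divisor of degree between 1 and q/2, which can be checked by trial division.

```python
# Brute-force enumeration of monic irreducible polynomials of degree q over GF(p).
# Illustrative only; infeasible for the large p, q that the paper's algorithms target.
from itertools import product

def poly_divmod(a, b, p):
    """Divide polynomial a by b over GF(p); coefficient lists, lowest degree first."""
    a = a[:]
    quot = [0] * (len(a) - len(b) + 1)
    inv = pow(b[-1], -1, p)                 # inverse of the leading coefficient of b
    for i in range(len(a) - len(b), -1, -1):
        c = (a[i + len(b) - 1] * inv) % p
        quot[i] = c
        for j, bj in enumerate(b):
            a[i + j] = (a[i + j] - c * bj) % p
    return quot, a                           # quotient, remainder

def is_zero(poly):
    return all(c == 0 for c in poly)

def monic_polys(deg, p):
    for coeffs in product(range(p), repeat=deg):
        yield list(coeffs) + [1]             # lowest-degree-first, leading coefficient 1

def monic_irreducibles(p, q):
    """All monic irreducible polynomials of degree q over GF(p), by trial division."""
    result = []
    for f in monic_polys(q, p):
        if any(is_zero(poly_divmod(f, g, p)[1])
               for d in range(1, q // 2 + 1)
               for g in monic_polys(d, p)):
            continue                          # f has a nontrivial monic divisor
        result.append(f)
    return result

print(len(monic_irreducibles(3, 2)))          # -> 3 monic irreducible quadratics over GF(3)
```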
Robust Non-Interactive Multiparty Computation Against Constant-Size Collusion
Non-Interactive Multiparty Computation (Beimel et al., Crypto 2014) is a very powerful notion, equivalent (under some corruption models) to garbled circuits, Private Simultaneous Messages protocols, and obfuscation. We present robust solutions to the problem of Non-Interactive Multiparty Computation in the computational and information-theoretic models. Our results include the first efficient and robust protocols to compute any function in $NC^1$ for constant-size collusions in the information-theoretic setting and, in the computational setting, to compute any function in $P$ for constant-size collusions, assuming the existence of one-way functions. Our constructions start from a Private Simultaneous Messages construction (Feige, Kilian, and Naor, STOC 1994, and Ishai and Kushilevitz, ISTCS 1997) and transform it into a Non-Interactive Multiparty Computation protocol for constant-size collusions.
We also present a new Non-Interactive Multiparty Computation protocol for symmetric functions with significantly better communication complexity compared to the only known one of Beimel et al.
Trapping ECC with Invalid Curve Bug Attacks
In this paper we describe how to use a secret bug as a trapdoor to design a trapped elliptic curve E(Fp). This trapdoor can be used to mount an invalid curve attack on E(Fp). The curve E(Fp) is designed to meet all ECC security criteria (prime order, high twist order, etc.), but for a secret exponent the point is projected onto another, insecure curve. We show how to use this trap with a particular type of time/memory trade-off to break the ECKCDSA verification process for any public key of the trapped curve. The process is hardly detectable: the defender's effort is quadratic in the saboteur's computational effort. This work provides a concrete, hardly detectable, and easily deniable example of cryptographic sabotage. While this proof of concept is very narrow, it highlights the necessity of Full Verifiable Randomness for ECC.
Further Analysis of a Proposed Hash-Based Signature Standard
We analyze the concrete security of a hash-based signature scheme described in the most recent Internet Draft by McGrew, Fluhrer and Curcio. We perform this analysis in the random-oracle model, where the Merkle-Damgård hash compression function is modeled as the random oracle. We show that, even with a large number of different keys the attacker can choose from, and a huge computational budget, the attacker succeeds in creating a forgery with negligible probability ($< 2^{-129}$).
Fast Secure Two-Party ECDSA Signing
ECDSA is a standard digital signature scheme that is widely used in TLS, Bitcoin and elsewhere. Unlike other schemes such as RSA and Schnorr signatures, it is particularly hard to construct efficient threshold signature protocols for ECDSA (and DSA). As a result, the best-known protocols today for secure distributed ECDSA require running heavy zero-knowledge proofs and computing many large-modulus exponentiations for every signing operation. In this paper, we consider the specific case of two parties (and thus no honest majority) and construct a protocol that is approximately two orders of magnitude faster than the previous best. Concretely, our protocol achieves good performance, with a single signing operation for curve P-256 taking approximately 37 ms between two standard machine types in Azure (utilizing a single core only). Our protocol is proven secure for sequential composition under standard assumptions using a game-based definition. In addition, we prove security by simulation under a plausible yet non-standard assumption regarding Paillier. We show that partial concurrency (where if one execution aborts then all need to abort) can also be achieved.
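As a reminder of why this is hard (standard ECDSA notation, not specific to this paper): with group order $q$, base point $G$, secret key $x$, message hash $H(m)$, and per-signature nonce $k$, a signature is
$$ r = \big(k \cdot G\big)_x \bmod q, \qquad s = k^{-1}\big(H(m) + r\,x\big) \bmod q, $$
so computing $s$ jointly requires multiplying the two shared secrets $k^{-1}$ and $x$; this is the standard explanation for why threshold signing is harder for ECDSA/DSA than for Schnorr-style schemes.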
A Fourier Analysis Based Attack against Physically Unclonable Functions
Electronic payment systems have leveraged the advantages offered by RFID technology, whose security is expected to be improved by applying the notion of Physically Unclonable Functions (PUFs). Along with the evolution of PUFs, numerous successful attacks against PUFs have been proposed in the literature. Among these are machine learning (ML) attacks, ranging from heuristic approaches to provable algorithms, which have attracted great attention. Our paper pursues this line of research by introducing a Fourier-analysis-based attack against PUFs. More specifically, this paper focuses on two main aspects of ML attacks, namely being provable and noise tolerant. In this regard, we prove that our attack fits naturally into the provable Probably Approximately Correct (PAC) learning framework. Moreover, we show that our attacks against known PUF families are effective and applicable even in the presence of noise. Our proof relies heavily on the intrinsic properties of these PUF families, namely the arbiter, Ring Oscillator (RO), and Bistable Ring (BR) PUF families. We believe that our new style of ML algorithms, which take advantage of the Fourier analysis principle, can offer better measures of PUF security.
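For context, the "Fourier analysis principle" here presumably refers to the standard Fourier expansion of Boolean functions (the abstract does not restate it): every $f:\{-1,1\}^n \to \{-1,1\}$ can be written as
$$ f(x) \;=\; \sum_{S \subseteq [n]} \hat{f}(S)\,\chi_S(x), \qquad \chi_S(x) = \prod_{i \in S} x_i, $$
and learning algorithms of this style estimate the large coefficients $\hat{f}(S)$ from challenge-response pairs.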
Committed MPC - Maliciously Secure Multiparty Computation from Homomorphic Commitments
We present a new multiparty computation protocol secure against a static and malicious dishonest majority. Unlike most previous protocols, which were based on working with MACed secret shares, our approach is based on computations on homomorphic commitments to secret shares. Specifically, we show how to realize MPC using any additively homomorphic commitment scheme, even if such a scheme is an interactive two-party protocol.
Our new approach enables us to do arithmetic computation over arbitrary finite fields. In addition, since our protocol computes over committed values, it can be readily composed within larger protocols, and can also be used for efficiently implementing committing OT or committed OT. This is done in two steps, each of independent interest:
1. Black-box extension of any (possibly interactive) two-party additively homomorphic commitment scheme to an additively homomorphic multiparty commitment scheme, only using coin-tossing and a “weak” equality evaluation functionality.
2. Realizing multiplication of multiparty commitments based on a lightweight preprocessing approach.
Finally we show how to use the fully homomorphic commitments to compute any functionality securely in the presence of a malicious adversary corrupting any number of parties.
ZeroTrace : Oblivious Memory Primitives from Intel SGX
We are witnessing a confluence between applied cryptography and secure hardware systems in enabling secure cloud computing.
On one hand, work in applied cryptography has enabled efficient, oblivious data-structures and memory primitives.
On the other, secure hardware and the emergence of Intel SGX have enabled a low-overhead and mass-market mechanism for isolated execution. By themselves, these technologies have their disadvantages.
Oblivious memory primitives carry high performance overheads, especially when run non-interactively. Intel SGX, while more efficient, suffers from numerous software-based side-channel attacks, high context switching costs, and bounded memory size.
In this work we build a new library of oblivious memory primitives, which we call ZeroTrace. ZeroTrace is designed to carefully combine state-of-the-art oblivious RAM techniques and SGX, while mitigating individual disadvantages of these technologies.
To the best of our knowledge, ZeroTrace represents the first oblivious memory primitives running on a real secure hardware platform. ZeroTrace simultaneously enables a dramatic speed-up over pure cryptography and protection from software-based side-channel attacks. The core of our design is an efficient and flexible block-level memory controller that provides oblivious execution against any active software adversary, and across asynchronous SGX enclave terminations.
Performance-wise, the memory controller can service requests for 4 B blocks in 1.2 ms and 1 KB blocks in 3.4 ms (given a 10 GB dataset).
On top of our memory controller, we evaluate Set/Dictionary/List interfaces which can all perform basic operations (e.g., get/put/insert).
Fully Homomorphic Encryption from the Finite Field Isomorphism Problem
If $q$ is a prime and $n$ is a positive integer then any two finite fields of order $q^n$ are isomorphic. Elements of these fields can be thought of as polynomials with coefficients chosen modulo $q$, and a notion of length can be associated to these polynomials. A non-trivial isomorphism between the fields, in general, does not preserve this length, and a short element in one field will usually have an image in the other field with coefficients appearing to be randomly and uniformly distributed modulo $q$. This key feature allows us to create a new family of cryptographic constructions based on the difficulty of recovering a secret isomorphism between two finite fields. In this paper we describe a fully homomorphic encryption scheme based on this new hard problem.
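To make the underlying object concrete (a standard fact about finite fields, not the paper's construction): if $f$ and $g$ are irreducible of degree $n$ over $\mathbb{F}_q$, any isomorphism between the two representations is determined by where it sends $X$, namely to a root of $f$ in the second field:
$$ \phi : \mathbb{F}_q[X]/(f) \;\xrightarrow{\ \sim\ }\; \mathbb{F}_q[Y]/(g), \qquad \phi(X) = \alpha(Y) \ \text{ with } \ f\big(\alpha(Y)\big) \equiv 0 \pmod{g(Y),\, q}. $$
The secret-isomorphism problem mentioned above asks to recover such a $\phi$ (equivalently, the polynomial $\alpha$).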
Security Analysis of an Ultra-lightweight RFID Authentication Protocol for M-commerce
Over the last few years, more and more people perform everyday activities such as mobile payment or mobile wallet use on mobile devices. Mobile commerce (m-commerce) refers to carrying out electronic commerce (e-commerce) using mobile devices and wireless networks. Radio frequency identification (RFID) is a technology that can be employed to complete payment functions in m-commerce. As RFID subsystems are applied in m-commerce and supply chains, the related security concerns are very important. Recently, Fan et al. proposed an ultra-lightweight RFID authentication scheme for m-commerce (ULRAS) and claimed that their protocol is sufficiently efficient and provides a high level of security. In this paper, we show that their protocol is vulnerable to secret disclosure and reader impersonation attacks. Finally, we improve the Fan et al. protocol to present a new one that resists the attacks presented in this paper as well as other known attacks in the context of RFID authentication. Our proposed improvement does not impose any additional workload on the RFID tag.
X509CLOUD - FRAMEWORK FOR A UBIQUITOUS PKI
The SSL protocol has been widely used for verifying digital identities and securing Internet traffic since the early days of the web. Although X.509 certificates have been in existence for more than two decades, individual user uptake has been low due to the high cost of issuing and maintaining such certificates. This has led to a situation whereby users are able to verify the identity of an organization or e-commerce retailer via its digital certificate, but organizations have to rely on weak username and password combinations to verify the identity of customers registered with their service. We propose the X509Cloud framework, which enables organizations to issue certificates to their users at zero cost, and allows them to securely store and disseminate client certificates using the Bitcoin-inspired blockchain protocol. This in turn will enable organizations and individuals to authenticate and securely communicate with other users on the Internet.
Resource-efficient OT combiners with active security
An OT-combiner takes $n$ candidate implementations of the oblivious transfer (OT) functionality, some of which may be faulty, and produces a secure instance of oblivious transfer as long as a large enough number of the candidates are secure. We see an OT-combiner as a 2-party protocol that can make several black-box calls to each of the $n$ OT candidates, and we want to protect against an adversary that can corrupt one of the parties and a certain number of the OT candidates, obtaining their inputs and (in the active case) full control of their outputs.
In this work we consider perfectly (unconditionally, zero-error) secure OT-combiners and we focus on \emph{minimizing the number of calls} to the candidate OTs.
First, we construct a single-use (one call per OT candidate) OT-combiner which is perfectly secure against active adversaries corrupting one party and a constant fraction of the OT candidates. This extends a previous result by Ishai et al. (ISIT 2014) that proves the same fact for passive adversaries.
Second, we consider a more general asymmetric corruption model where an adversary can corrupt different sets of OT candidates depending on whether it is Alice or Bob who is corrupted. We give sufficient and necessary conditions for the existence of an OT combiner with a given number of calls to the candidate OTs in terms of the existence of secret sharing schemes with certain access structures and share lengths. In some cases, this allows us to determine the optimal number of calls to the OT candidates needed to construct an OT combiner secure against a given adversary.
Securing Abe's Mix-net Against Malicious Verifiers via Witness Indistinguishability
We show that the simple and appealing unconditionally sound mix-net due to Abe (Asiacrypt'99) can be augmented to further guarantee anonymity against malicious verifiers. This additional guarantee implies, in particular, that when applying the Fiat-Shamir transform to the mix-net's underlying sub-protocols, anonymity is provably guaranteed for {\em any} hash function.
As our main contribution, we demonstrate how anonymity can be attained, even if most sub-protocols of a mix-net are merely witness indistinguishable (WI). We instantiate our framework with two variants of Abe's mix-net. In the first variant, ElGamal ciphertexts are replaced by an alternative, yet equally efficient, "lossy" encryption scheme. In the second variant, new "dummy" vote ciphertexts are injected prior to the mixing process, and then removed.
Our techniques center on new methods to introduce additional witnesses to the sub-protocols within the proof of security. This, in turn, enables us to leverage the WI guarantees against malicious verifiers. In our first instantiation, these witnesses follow somewhat naturally from the lossiness of the encryption scheme, whereas in our second instantiation they follow from leveraging combinatorial properties of the Benes-network. These approaches may be of independent interest.
Finally, we demonstrate cases in Abe's original mix-net (without modification) where only one witness exists, such that if the WI proof leaks information on the (single) witness in these cases, then the system will not be anonymous against malicious verifiers.
Identity-Based Encryption from the Diffie-Hellman Assumption
We provide the first constructions of identity-based encryption and hierarchical identity-based encryption based on the hardness of the (Computational) Diffie-Hellman Problem (without use of groups with pairings) or Factoring. Our construction achieves the standard notion of identity-based encryption as considered by Boneh and Franklin [CRYPTO 2001]. We bypass known impossibility results using garbled circuits that make a non-black-box use of the underlying cryptographic primitives.
A New Distribution-Sensitive Secure Sketch and Popularity-Proportional Hashing
Motivated by typo correction in password authentication, we investigate cryptographic error-correction of secrets in settings where the distribution of secrets is a priori (approximately) known. We refer to this as the distribution-sensitive setting.
We design a new secure sketch called the layer-hiding hash (LHH) that offers the best security to date. Roughly speaking, we show that LHH saves an additional log H_0(W) bits of entropy compared to the recent layered sketch construction due to Fuller, Reyzin, and Smith (FRS). Here H_0(W) is the size of the support of the distribution W. When supports are large, as with passwords, our new construction offers a substantial security improvement.
We provide two new constructions of typo-tolerant password-based authentication schemes. The first combines an LHH or FRS sketch with a standard slow-to-compute hash function, and the second avoids secure sketches entirely, correcting typos instead by checking all nearby passwords. Unlike the previous such brute-force-checking construction, due to Chatterjee et al., our new construction uses a hash function whose run time is proportional to the popularity of the password (forcing a longer hashing time on more popular, lower-entropy passwords). We refer to this as popularity-proportional hashing (PPH). We then introduce a framework for comparing different typo-tolerant authentication approaches. We show that PPH always offers a better time/security trade-off than the LHH and FRS constructions, and for certain distributions outperforms the Chatterjee et al. construction. Elsewhere, this latter construction offers the best trade-off. In aggregate, our results suggest that the best known secure sketches are still inferior to simpler brute-force-based approaches.
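A minimal sketch of the PPH idea in code (the cost schedule, bounds, and popularity estimates below are illustrative assumptions, not the paper's concrete scheme): hashing cost grows with the estimated popularity of the password, so attackers pay the most for the guesses most worth making.

```python
# Illustrative popularity-proportional hashing: more popular (lower-entropy) passwords
# get more key-derivation iterations. est_prob is an externally supplied popularity
# estimate (e.g., from a leaked-password model); all constants here are made up.
import hashlib, os

MIN_ITERS, MAX_ITERS = 1_000, 200_000     # illustrative bounds on the work factor

def pph_iterations(est_prob):
    # Linear in the popularity estimate, clamped to a practical range.
    return max(MIN_ITERS, min(MAX_ITERS, int(est_prob * 2e7)))

def pph_hash(password, salt, est_prob):
    iters = pph_iterations(est_prob)
    digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, iters)
    return iters, digest

salt = os.urandom(16)
print(pph_hash('123456', salt, est_prob=1e-2)[0])                 # popular: 200000 iterations
print(pph_hash('correct horse battery', salt, est_prob=1e-8)[0])  # rare: 1000 iterations
```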
Lower Bounds on Obfuscation from All-or-Nothing Encryption Primitives
Indistinguishability obfuscation (IO) enables many heretofore out-of-reach applications in cryptography. However, currently all known constructions of IO are based on multilinear maps, which are poorly understood. Hence, tremendous research effort has been put towards basing obfuscation on better-understood computational assumptions. Recently, another path to IO has emerged through functional encryption [Ananth and Jain, CRYPTO 2015; Bitansky and Vaikuntanathan, FOCS 2015], but such FE schemes are currently still based on multilinear maps. In this work, we study whether IO could be based on other powerful encryption primitives.
Separations for IO: We show that (assuming that the polynomial hierarchy does not collapse and one-way functions exist) IO cannot be constructed in a black-box manner from powerful all-or-nothing encryption primitives, such as witness encryption (WE), predicate encryption, and fully homomorphic encryption. What unifies these primitives is that they are of the ``all-or-nothing'' form, meaning either someone has the ``right key'', in which case they can decrypt the message fully, or they are not supposed to learn anything.
Stronger Model for Separations: One might argue that fully black-box uses of the considered encryption primitives limit their power too much because these primitives can easily lead to non-black-box constructions if the primitive is used in a self-feeding fashion --- namely, code of the subroutines of the considered primitive could easily be fed as input to the subroutines of the primitive itself. In fact, several important results (e.g., the construction of IO from functional encryption) follow this very recipe. In light of this, we prove our impossibility results with respect to a stronger model than the fully black-box framework of Impagliazzo and Rudich (STOC'89) and Reingold, Trevisan, and Vadhan (TCC'04) where the non-black-box technique of self-feeding is actually allowed.
Snarky Signatures: Minimal Signatures of Knowledge from Simulation-Extractable SNARKs
We construct a pairing-based simulation-extractable succinct non-interactive argument of knowledge (SE-SNARK) that consists of only 3 group elements and has highly efficient verification. By formally linking SE-SNARKs to signatures of knowledge, we then obtain a succinct signature of knowledge consisting of only 3 group elements.
SE-SNARKs enable a prover to give a proof that they know a witness to an instance in a manner which is: (1) \textit{succinct} - proofs are short and verifier computation is small; (2) \textit{zero-knowledge} - proofs do not reveal the witness; (3) \textit{simulation-extractable} - it is only possible to prove instances to which you know a witness, even when you have already seen a number of simulated proofs.
We also prove that any pairing-based signature of knowledge or SE-SNARK must have at least 3 group elements and 2 verification equations. Since our constructions match these lower bounds, we have the smallest size signature of knowledge and the smallest size SE-SNARK possible.
Public-Seed Pseudorandom Permutations
A number of cryptographic schemes are built from (keyless) permutations, which are either designed in an ad-hoc fashion or are obtained by fixing the key in a block cipher. Security proofs for these schemes, however, idealize this permutation, i.e., they model it as a random permutation accessible, as an oracle, to all parties. Finding plausible concrete assumptions on such permutations that guarantee security of the resulting schemes has remained an elusive open question.
This paper initiates the study of standard-model assumptions on permutations -- or more precisely, on families of permutations indexed by a {\em public} seed. We introduce the notion of a {\em public-seed pseudorandom permutation} (psPRP), which is inspired by the UCE notion by Bellare, Hoang, and Keelveedhi (CRYPTO '13). It considers a two-stage security game, where only the second stage learns the seed, and the first-stage adversary, known as the source, is restricted to prevent trivial attacks -- the security notion is consequently parameterized by the class of allowable sources. To this end, we define in particular unpredictable and reset-secure sources analogous to similar notions for UCEs.
We first study the relationship between psPRPs and UCEs. To start with, we provide efficient constructions of UCEs from psPRPs for both reset-secure and unpredictable sources, thus showing that most applications of the UCE framework admit instantiations from psPRPs. We also show a converse of this statement, namely that the five-round Feistel construction yields a psPRP for reset-secure sources when the round function is built from UCEs for reset-secure sources, hence making psPRP and UCE equivalent notions for such sources.
In addition to studying such reductions, we suggest generic instantiations of psPRPs from both block ciphers and (keyless) permutations, and analyze them in ideal models. Also, as an application of our notions, we show that a simple modification of a recent highly-efficient garbling scheme by Bellare et al. (S&P '13) is secure under our psPRP assumption.
New security notions and feasibility results for authentication of quantum data
We give a new class of security definitions for authentication in the quantum setting. These definitions capture and strengthen existing definitions of security against quantum adversaries for both classical message authentication codes (MACs) as well as full quantum state authentication schemes. The main feature of our definitions is that they precisely characterize the effective behavior of any adversary when the authentication protocol accepts, including correlations with the key. Our definitions readily yield a host of desirable properties and interesting consequences; for example, our security definition for full quantum state authentication implies that the entire secret key can be re-used if the authentication protocol succeeds.
Next, we present several protocols satisfying our security definitions. We show that the classical Wegman-Carter authentication scheme with 3-universal hashing is secure against superposition attacks, as well as adversaries with quantum side information. We then present conceptually simple constructions of full quantum state authentication.
Finally, we prove a lifting theorem which shows that, as long as a protocol can securely authenticate the maximally entangled state, it can securely authenticate any state, even those that are entangled with the adversary. Thus, this shows that protocols satisfying a fairly weak form of authentication security automatically satisfy a stronger notion of security (in particular, the definition of Dupuis, et al (2012)).
Information-theoretic Indistinguishability via the Chi-squared Method
Proving tight bounds on information-theoretic indistinguishability is a central problem in symmetric cryptography. This paper introduces a new method for information-theoretic indistinguishability proofs, called ``the chi-squared method''. At its core, the method requires upper bounds on the so-called $\chi^2$ divergence (due to Neyman and Pearson) between the output distributions of two systems being queried. The method morally resembles, yet also considerably simplifies, a previous approach proposed by Bellare and Impagliazzo (ePrint, 1999), while at the same time increasing its expressiveness and delivering tighter bounds.
We showcase the chi-squared method on some examples. In particular: (1) we prove an optimal bound of $q/2^n$ for the XOR of two permutations, and our proof considerably simplifies previous approaches using the $H$-coefficient method, (2) we provide improved bounds for the recently proposed encrypted Davies-Meyer PRF construction by Cogliati and Seurin (CRYPTO '16), and (3) we give a tighter bound for the Swap-or-not cipher by Hoang, Morris, and Rogaway (CRYPTO '12).
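For reference, the $\chi^2$ divergence used here is the standard quantity (the abstract does not restate it): for distributions $P$ and $Q$ over the same set with $\mathrm{supp}(P) \subseteq \mathrm{supp}(Q)$,
$$ \chi^2(P, Q) \;=\; \sum_{x} \frac{\big(P(x) - Q(x)\big)^2}{Q(x)}, $$
and by Cauchy-Schwarz the statistical distance satisfies $\Delta(P,Q) \le \tfrac{1}{2}\sqrt{\chi^2(P,Q)}$, which is how upper bounds on the divergence translate into indistinguishability bounds.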
HACL*: A Verified Modern Cryptographic Library
HACL* is a verified portable C cryptographic library that implements modern cryptographic primitives such as the ChaCha20 and Salsa20 encryption algorithms, Poly1305 and HMAC message authentication, SHA-256 and SHA-512 hash functions, the Curve25519 elliptic curve, and Ed25519 signatures.
HACL* is written in the F* programming language and then compiled to readable C code. The F* source code for each cryptographic primitive is verified for memory safety, mitigations against timing side-channels, and functional correctness with respect to a succinct high-level specification of the primitive derived from its published standard. The translation from F* to C preserves these properties, and the generated C code can itself be compiled via the CompCert verified C compiler or mainstream compilers like GCC or Clang. When compiled with GCC on 64-bit platforms, our primitives are as fast as the fastest pure C implementations in OpenSSL and Libsodium, significantly faster than the reference C code in TweetNaCl, and between 1.1x and 5.7x slower than the fastest hand-optimized vectorized assembly code in SUPERCOP.
HACL* implements the NaCl cryptographic API and can be used as a drop-in replacement for NaCl libraries like Libsodium and TweetNaCl. HACL* provides the cryptographic components for a new mandatory ciphersuite in TLS 1.3 and is being developed as the main cryptographic provider for the miTLS verified implementation. Primitives from HACL* are also being integrated within Mozilla's NSS cryptographic library. Our results show that writing fast, verified, and usable C cryptographic libraries is now practical.
ZMAC: A Fast Tweakable Block Cipher Mode for Highly Secure Message Authentication
We propose a new mode of operation, called ZMAC, for constructing a (stateless and deterministic) message authentication code (MAC) from a tweakable block cipher (TBC). When using a TBC with $n$-bit blocks and $t$-bit tweaks, our construction provides security (as a variable-input-length PRF) beyond the birthday bound with respect to the block length $n$ and processes $n+t$ bits of input per TBC call. In comparison, previous TBC-based modes such as PMAC1, the TBC-based generalization of the seminal PMAC mode (Black and Rogaway, EUROCRYPT 2002), or PMAC_TBC1k (Naito, ProvSec 2015) only process $n$ bits of input per TBC call. Since an $n$-bit block, $t$-bit tweak TBC can process at most $n+t$ bits of input per call, the efficiency of our construction is essentially optimal, while achieving beyond-birthday-bound security. The ZMAC mode is fully parallelizable and can be directly instantiated with several concrete TBC proposals, such as Deoxys and SKINNY. We also use ZMAC to construct a stateless and deterministic authenticated encryption scheme called ZAE, which is very efficient and secure beyond the birthday bound.
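In terms of TBC invocations, the stated per-call input sizes translate into roughly
$$ \left\lceil \frac{|M|}{n+t} \right\rceil \ \text{calls (ZMAC)} \qquad \text{versus} \qquad \left\lceil \frac{|M|}{n} \right\rceil \ \text{calls (PMAC1, PMAC\_TBC1k)} $$
for a message $M$; this back-of-the-envelope comparison (ignoring constant overhead) simply restates the rate advantage claimed above.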
Functional Graph Revisited: Updates on (Second) Preimage Attacks on Hash Combiners
This paper studies functional-graph-based (second) preimage attacks against hash combiners. By exploiting further properties of the cyclic nodes of a functional graph, we find an improved preimage attack against the XOR combiner with a complexity of $2^{5n/8}$, while the previous best-known complexity is $2^{2n/3}$. Moreover, we find the first generic second-preimage attack on Zipper hash with an optimal complexity of $2^{3n/5}$.
Quantum non-malleability and authentication
In encryption, non-malleability is a highly desirable property: it ensures that adversaries cannot manipulate the plaintext by acting on the ciphertext. Ambainis et al. gave a definition of non-malleability for the encryption of quantum data. In this work, we show that this definition is too weak, as it allows adversaries to ``inject'' plaintexts of their choice into the ciphertext. We give a new definition of quantum non-malleability which resolves this problem. Our definition is expressed in terms of entropic quantities, considers stronger adversaries, and does not assume secrecy. Rather, we prove that quantum non-malleability implies secrecy; this is in stark contrast to the classical setting, where the two properties are completely independent. For unitary schemes, our notion of non-malleability is equivalent to encryption with a two-design (and hence also to the definition of Ambainis et al.).
Our techniques also yield new results regarding the closely-related task of quantum authentication. We show that ``total authentication'' (a notion recently proposed by Garg et al.) can be satisfied with two-designs, a significant improvement over their eight-design-based construction. We also show that, under a mild adaptation of the rejection procedure, both total authentication and our notion of non-malleability yield quantum authentication as defined by Dupuis et al.
All-But-Many Lossy Trapdoor Functions from Lattices and Applications
``All-but-many lossy trapdoor functions'' (ABM-LTF) are a powerful cryptographic primitive studied by Hofheinz (Eurocrypt 2012). ABM-LTFs are parametrised with tags: a lossy tag makes the function lossy; an injective tag makes the function injective, and invertible with a trapdoor. Existing ABM-LTFs rely on non-standard assumptions.
Our first result is an ABM-LTF construction from lattices, based on the learning-with-errors (LWE) problem.
Unlike the previous schemes which behaved as ``encrypted signatures'', the core of our construction is an ``encrypted, homomorphic-evaluation-friendly, weak pseudorandom function''. The weak pseudorandom function outputs matrices, where the lossy tags are preimages of the zero matrix, and the injective tags are preimages of random full-rank matrices.
Our second result is a public-key system tightly secure against ``selective opening'' attacks, where an attacker gets many challenges and can ask to see the random bits of any of them. Following the steps of Hemenway et al. (Asiacrypt 2011) and Hofheinz (Eurocrypt 2012), our ABM-LTF gives the first lattice-based, compact public-key encryption (PKE) scheme that has indistinguishability against adaptive chosen-ciphertext and selective opening attacks (IND-SO-CCA2), with tight security, and whose public-key size and security reduction are independent of the number of decryption queries and ciphertext challenges.
Meanwhile, this result provides an alternative solution to the problem of building pairing-free IND-CCA2 PKE schemes with tight security in the multi-challenge setting, which was first answered by Gay et al. (Eurocrypt 2016).
Additionally, our ABM-LTF answers the open question of constructing all-but-many trapdoor function from lattices, first asked by Alperin-Sheriff and Peikert (PKC 2012).
Template Attack vs Bayes Classifier
Side-channel attacks represent one of the most powerful categories of attacks on cryptographic devices, with profiled attacks in a prominent place as the most powerful among them. Indeed, for instance, the template attack is a well-known real-world attack that is also the most powerful attack from the information-theoretic perspective. On the other hand, machine learning techniques have proven their quality in numerous applications, one of which is certainly side-channel analysis. As one could expect, most of the research concerning supervised machine learning and side-channel analysis has concentrated on more powerful machine learning techniques. Although valid from the practical perspective, such attacks often remain lacking from the more theoretical side. In this paper, we investigate several Bayes classifiers, which are simple supervised techniques with significant similarities to the template attack. More specifically, our analysis aims to investigate the influence of feature (in)dependence in datasets with different amounts of noise and to offer further insight into the efficiency of machine learning for side-channel analysis.
Non-Malleable Codes for Space-Bounded Tampering
Non-malleable codes---introduced by Dziembowski, Pietrzak and Wichs at ICS 2010---are key-less coding schemes in which mauling attempts on an encoding of a given message, w.r.t.\ some class of tampering adversaries, result in a decoded value that is either identical or unrelated to the original message. Such codes are very useful for protecting arbitrary cryptographic primitives against tampering attacks on memory.
Clearly, non-malleability is hopeless if the class of tampering adversaries includes the decoding and encoding algorithm. To circumvent this obstacle, the majority of past research focused on designing non-malleable codes for various tampering classes, albeit assuming that the adversary is unable to decode. Nonetheless, in many concrete settings, this assumption is not realistic.
In this paper, we explore one particular such scenario where the class of tampering adversaries naturally includes the decoding (but not the encoding) algorithm.
In particular, we consider the class of adversaries that are restricted in terms of memory/space. Our main contributions can be summarized as follows:
-- We initiate a general study of non-malleable codes resisting space-bounded tampering. In our model, the encoding procedure requires large space, but decoding can be done in small space, and thus can be also performed by the adversary.
Unfortunately, in such a setting it is impossible to achieve non-malleability in the standard sense, and we need to aim for slightly weaker security guarantees.
In a nutshell, our main notion (dubbed {\em leaky space-bounded non-malleability}) ensures that this is the best the adversary can do, in that space-bounded tampering attacks can be simulated given a small amount of leakage on the encoded value.
-- We provide a simple construction of a leaky space-bounded non-malleable code. Our scheme is based on any Proof of Space (PoS)---a concept recently put forward by Ateniese {\em et al.} (SCN 2014) and Dziembowski {\em et al.} (CRYPTO 2015)---satisfying a variant of soundness. As we show, our paradigm can be instantiated by extending the analysis of the PoS construction by Ren and Devadas (TCC 2016-A), based on so-called stacks of localized expander graphs.
-- Finally, we show that our flavor of non-malleability yields a natural security guarantee against memory tampering attacks, where one can trade a small amount of leakage on the secret key for protection against space-bounded tampering attacks.
Non-Full Sbox Linearization: Applications to Collision Attacks on Round-Reduced Keccak
The Keccak hash function is the winner of the SHA-3 competition and became NIST's SHA-3 standard in 2015. In this paper, we focus on practical collision attacks against the round-reduced Keccak hash function, and two main results are achieved: the first practical collision attack against 5-round Keccak-224 and an instance of the 6-round Keccak collision challenge. Both improve the number of practically attacked rounds by one. These results are obtained by carefully studying the algebraic properties of the nonlinear layer in the underlying permutation of Keccak and applying linearization to it. In particular, we propose techniques for partially linearizing the output bits of the nonlinear layer, using which attack complexities are reduced significantly compared to the previous best results.
Componentwise APNness, Walsh uniformity of APN functions and cyclic-additive difference sets
In the preprint [Characterizations of the differential uniformity of vectorial functions by the Walsh transform, IACR ePrint Archive 2017/516], the author has, for each even positive $\delta$, characterized in several ways differentially $\delta$-uniform functions by equalities satisfied by their Walsh transforms. These characterizations generalize the well-known characterization of APN functions by the fourth moment of their Walsh transform. We study two notions which are related to such known and new characterizations: (1) one for $(n,n)$-functions, that we call componentwise APNess (CAPNess), which is a stronger version of APNness related to the characterization by the fourth moment, and has been already considered in [On almost perfect nonlinear functions; Berger, Canteaut, Charpin and Laigle-Chapuy, IEEE Transactions on Information Theory 2006]; it is defined as follows: the arithmetic mean of $W_F^4(u,v)$ when $u$ ranges over ${\Bbb F}_2^n$ and $v$ is fixed nonzero in ${\Bbb F}_2^n$ equals $2^{2n+1}$, and (2) one for $(n,m)$-function ($m=n$, resp. $m= n-1$) that we call componentwise Walsh uniformity (CWU), which is a stronger version of APNness (resp. of differential 4-uniformity) related to one of the new characterizations, and is defined as follows: the arithmetic mean of $W_F^2(u_1,v_1)W_F^2(u_2,v_2)W_F^2(u_1+u_2,v_1+v_2)$ when $u_1,u_2$ range independently over ${\Bbb F}_2^n$ and $v_1,v_2$ are fixed nonzero and distinct in ${\Bbb F}_2^m$, equals $2^{3n}$. Concerning the first notion, it is easily seen, and known, that any plateaued function is CAPN if and only if it is AB and that APN power permutations are CAPN. We prove that CAPN functions can exist only if $n$ is odd; this solves an open problem by Berger et al. Concerning the second notion, we show that any APN function whose component functions are partially-bent (in particular, every quadratic APN function) is CWU, but we observe also that other APN functions like Kasami functions and the inverse of one of the Gold APN permutations are CWU. To the aim of proving these two more difficult results, we first show that the CWUness of APN power permutations is equivalent to a property which is similar to the difference set with Singer parameters property of the complement of $\Delta_F=\{F(x)+F(x+1)+1; x\in {\Bbb F}_{2^n}\}$, proved in the case of Kasami APN functions by Dillon and Dobbertin in [New cyclic difference sets with Singer parameters, FFA 2004]. This new property, that we call cyclic-additive difference set property, involves both operations of addition and multiplication and is more complex. We prove it in the case of the inverse of Gold function. In the case of Kasami functions, it seems difficult to find a direct proof, even by adapting the sophisticated proof by Dillon and Dobbertin of the cyclic difference set property. But the properties of plateaued APN functions proved recently by the author in [Boolean and vectorial plateaued functions, and APN functions, IEEE Transactions on Information Theory 2015] give more insight, for APN power functions, on the relation between the cyclic-additive difference set property and the cyclic difference set property. The author is working at finalizing this way the proof for Kasami functions in odd dimension. A stronger property proved in this same paper for the particular case of plateaued functions with unbalanced components shows a relationship between the cyclic-additive difference set property and the value set of the function. 
This hopefully will allow completing the proof in the case of even dimension.
Key Rotation for Authenticated Encryption
A common requirement in practice is to periodically rotate the keys used to encrypt stored data. Systems used by Amazon and Google do so using a hybrid encryption technique which is eminently practical but has questionable security in the face of key compromises and does not provide full key rotation. Meanwhile, symmetric updatable encryption schemes (introduced by Boneh et al., CRYPTO 2013) support full key rotation without performing decryption: ciphertexts created under one key can be rotated to ciphertexts created under a different key with the help of a re-encryption token. By design, the tokens do not leak information about keys or plaintexts and so can be given to storage providers without compromising security. But the prior work of Boneh et al. addresses relatively weak confidentiality goals and does not consider integrity at all. Moreover, as we show, a subtle issue with their concrete scheme obviates a security proof even for confidentiality against passive attacks.
This paper presents a systematic study of updatable Authenticated Encryption (AE). We provide a set of security notions that strengthen those in prior work. These notions enable us to tease out real-world security requirements of different strengths and build schemes that satisfy them efficiently. We show that the hybrid approach currently used in industry achieves relatively weak forms of confidentiality and integrity, but can be modified at low cost to meet our stronger confidentiality and integrity goals. This leads to a practical scheme that has negligible overhead beyond conventional AE. We then introduce re-encryption indistinguishability, a security notion that formally captures the idea of fully refreshing keys upon rotation. We show how to repair the scheme of Boneh et al., attaining our stronger confidentiality notion. We also show how to extend the scheme to provide integrity, and we prove that it meets our re-encryption indistinguishability notion. Finally, we discuss how to instantiate our scheme efficiently using off-the-shelf cryptographic components (AE, hashing, elliptic curves). We report on the performance of a prototype implementation, showing that fully secure key rotations can be performed at a throughput of approximately 116 kB/s.
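For concreteness, here is a minimal sketch of the industry hybrid (key-wrapping) approach criticized above, using the pyca/cryptography AES-GCM primitive; names and layout are illustrative assumptions, not any deployed system's format. It shows why this style of rotation is not full key rotation: only the wrapped data key in the small header changes, while the data key and the bulk ciphertext stay untouched.

```python
# Toy KEK/DEK rotation: data is encrypted once under a data-encryption key (DEK);
# "rotation" merely re-wraps the DEK under a new key-encryption key (KEK).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def wrap(kek, dek):
    nonce = os.urandom(12)
    return nonce + AESGCM(kek).encrypt(nonce, dek, b"dek-wrap")

def unwrap(kek, wrapped):
    return AESGCM(kek).decrypt(wrapped[:12], wrapped[12:], b"dek-wrap")

def encrypt_object(kek, plaintext):
    dek = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)
    body = nonce + AESGCM(dek).encrypt(nonce, plaintext, None)
    return wrap(kek, dek), body              # (small header, bulk ciphertext)

def rotate(old_kek, new_kek, header, body):
    # Only the header is touched; the DEK and the bulk ciphertext are unchanged.
    return wrap(new_kek, unwrap(old_kek, header)), body

kek1, kek2 = AESGCM.generate_key(bit_length=128), AESGCM.generate_key(bit_length=128)
header, body = encrypt_object(kek1, b"stored record")
header2, body2 = rotate(kek1, kek2, header, body)
assert body2 == body                         # the bulk data was never re-encrypted
```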
Evaluating web PKIs - A Survey
Certificate authorities serve as trusted parties to help secure web communications. They are a vital component for ensuring the security of cloud infrastructures and big data repositories. Unfortunately, recent attacks using mis-issued certificates show this model is severely broken.
Much research has been done to enhance certificate management in order to create more secure and reliable cloud architectures. However, none of it has been widely adopted yet, and it is hard to judge which proposal is the winner. This chapter provides a survey with critical analysis of the existing proposals for managing public key certificates. This evaluation framework should be helpful for future research on designing an alternative certificate management system to secure the Internet.
Kurosawa-Desmedt Meets Tight Security
At EUROCRYPT 2016, Gay et al. presented the first pairing-free public-key encryption (PKE) scheme with a tight security reduction to a standard assumption. Their scheme is competitive in efficiency with state-of-the-art PKE schemes and has very compact ciphertexts (of three group elements), but suffers from a large public key (of about 200 group elements).
In this work, we present an improved pairing-free PKE scheme with a tight security reduction to the Decisional Diffie-Hellman assumption, small ciphertexts (of three group elements), and small public keys (of six group elements). Compared to the work of Gay et al., our scheme thus has a considerably smaller public key and otherwise comparable characteristics, although our encryption and decryption algorithms are somewhat less efficient.
Technically, our scheme borrows ideas both from the work of Gay et al. and from a recent work of Hofheinz (EUROCRYPT 2017). The core technical novelty of our work is an efficient and compact designated-verifier proof system for an OR-like language. We show that adding such an OR-proof to the ciphertext of the state-of-the-art PKE scheme of Kurosawa and Desmedt enables a tight security reduction.
Compact Structure-preserving Signatures with Almost Tight Security
In structure-preserving cryptography, every building block shares the same bilinear groups. These groups must be generated for a specific, a priori fixed security level, and thus it is vital that the security reduction of all involved building blocks is as tight as possible. In this work, we present the first generic construction of structure-preserving signature schemes whose reduction cost is independent of the number of signing queries. Its chosen-message security is almost tightly reduced to the chosen-plaintext security of a structure-preserving public-key encryption scheme and the security of the Groth-Sahai proof system. Technically, we adapt the adaptive partitioning technique of Hofheinz (Eurocrypt 2017) to the setting of structure-preserving signature schemes. To achieve a structure-preserving scheme, our new variant of the adaptive partitioning technique relies only on generic group operations in the scheme itself. Interestingly, however, we use non-generic operations during our security analysis. Instantiated over asymmetric bilinear groups, the security of our concrete scheme is reduced to the external Diffie-Hellman assumption with a reduction cost linear in the security parameter, independently of the number of signing queries. The signatures in our schemes consist of a larger number of group elements than those in other, non-tight schemes, but can be verified faster, assuming their security reduction loss is compensated by increasing the security parameter to the next standard level.
Short, Invertible Elements in Partially Splitting Cyclotomic Rings and Applications to Lattice-Based Zero-Knowledge Proofs
Uncategorized
Uncategorized
When constructing practical zero-knowledge proofs based on the hardness of the Ring-LWE or the Ring-SIS problems over polynomial rings $Z_p[X]/(X^n+1)$, it is often necessary that the challenges come from a set $\mathcal{C}$ that satisfies three properties: the set should be large (around $2^{256}$), the elements in it should have small norms, and all the non-zero elements in the difference set $\mathcal{C}-\mathcal{C}$ should be invertible. The first two properties are straightforward to satisfy, while the third one requires us to make efficiency compromises. We can either work over rings where the polynomial $X^n+1$ only splits into two irreducible factors modulo $p$, which makes the speed of the multiplication operation in the ring sub-optimal; or we can limit our challenge set to polynomials of smaller degree, which requires them to have (much) larger norms.
In this work we show that one can use the optimal challenge sets $\mathcal{C}$ and still have the polynomial $X^n+1$ split into more than two factors. This comes as a direct application of our more general result that states that all non-zero polynomials with ``small'' coefficients in the cyclotomic ring $Z_p[X]/(\Phi_m(X))$ are invertible (where ``small'' depends on the size of $p$ and how many irreducible factors the $m^{th}$ cyclotomic polynomial $\Phi_m(X)$ splits into). We furthermore establish sufficient conditions for $p$ under which $\Phi_m(X)$ will split in such fashion.
For the purposes of implementation, if the polynomial $X^n+1$ splits into $k$ factors, we can run FFT for $\log{k}$ levels until switching to Karatsuba multiplication. Experimentally, we show that increasing the number of levels from one to three or four results in a speedup by a factor of $\approx 2$ -- $3$. We point out that this improvement comes completely for free simply by choosing a modulus $p$ that has certain algebraic properties. In addition to the speed improvement, having the polynomial split into many factors has other applications -- e.g. when one embeds information into the Chinese Remainder representation of the ring elements, the more the polynomial splits, the more information one can embed into an element.
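To make the ``algebraic properties of $p$'' concrete, the following small script (our own illustration, not code from the paper) computes how far $X^n+1$ splits modulo a prime $p$: for $n$ a power of two, $X^n+1$ is the $2n$-th cyclotomic polynomial, so it factors modulo $p$ into $n/d$ irreducible polynomials of degree $d$, where $d$ is the multiplicative order of $p$ modulo $2n$. The example primes below are chosen only for illustration.

```python
# Exploratory helper (not from the paper): how does X^n + 1 split modulo p?
# For n a power of two and an odd prime p, X^n + 1 = Phi_{2n}(X) factors mod p
# into n/d irreducible polynomials, each of degree d = ord_{2n}(p).

def multiplicative_order(p: int, m: int) -> int:
    """Smallest d >= 1 with p^d == 1 (mod m); assumes gcd(p, m) == 1."""
    d, acc = 1, p % m
    while acc != 1:
        acc = (acc * p) % m
        d += 1
    return d

def splitting_of_xn_plus_1(n: int, p: int):
    """Return (k, d): X^n + 1 splits mod p into k irreducible factors of degree d."""
    d = multiplicative_order(p, 2 * n)
    return n // d, d

if __name__ == "__main__":
    n = 256
    for p in (7681, 769, 577):  # example odd primes, picked for illustration only
        k, d = splitting_of_xn_plus_1(n, p)
        print(f"n={n}, p={p}: X^n+1 factors mod p into {k} irreducible pieces of degree {d}")
```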
On the Hardness of the Mersenne Low Hamming Ratio Assumption
In a recent paper, Aggarwal, Joux, Prakash, and Santha (AJPS) describe an ingenious public-key cryptosystem mimicking NTRU over the integers. This algorithm relies on the properties of Mersenne primes instead of polynomial rings. The security of the AJPS cryptosystem relies on the conjectured hardness of the Mersenne Low Hamming Ratio Assumption, defined in [AJPS].
This work shows that AJPS' security estimates are too optimistic and describes an algorithm that recovers the secret key from the public key much faster than foreseen in [AJPS].
In particular, our algorithm is \emph{experimentally practical} (within the reach of the computational capabilities of a large organization), at least for the parameter choice $\{n=1279,h=17\}$ conjectured in [AJPS] as corresponding to a $2^{120}$ security level. The algorithm is fully parallelizable.
Breaking the FF3 Format-Preserving Encryption Standard Over Small Domains
Uncategorized
Uncategorized
The National Institute of Standards and Technology (NIST) recently published a Format-Preserving Encryption standard accepting two Feistel-structure-based schemes called FF1 and FF3. In particular, FF3 is a tweakable block cipher based on an 8-round Feistel network. At CCS~2016, Bellare et al. gave an attack that breaks FF3 (and FF1) with time and data complexity $O(N^5\log(N))$, which is much larger than the code book (but uses many tweaks), where $N^2$ is the domain size of the Feistel network. In this work, we give a new practical total-break attack on the FF3 scheme (also known as the BPS scheme). Our FF3 attack requires $O(N^{\frac{11}{6}})$ chosen plaintexts with time complexity $O(N^{5})$. Our attack was successfully tested with $N\leq2^9$. It is a slide attack (using two tweaks) that exploits the bad domain separation in the FF3 design. Due to this weakness, we reduce the FF3 attack to an attack on a 4-round Feistel network. Biryukov et al. already gave an attack on 4-round Feistel structures in SAC~2015. However, it works with chosen plaintexts and ciphertexts, whereas we need a known-plaintext attack. Therefore, we develop a new generic known-plaintext attack on 4-round Feistel networks that reconstructs the entire tables of all round functions. It works with $N^{\frac{3}{2}} \left( \frac{N}{2} \right)^{\frac{1}{6}}$ known plaintexts and time complexity $O(N^{3})$. Our 4-round attack easily extends to five or more rounds with complexity $N^{(r-5)N+o(N)}$. It shows that FF1 with $N=7$ and FF3 with $7\leq N\leq10$ do not offer a 128-bit security. Finally, we provide an easy and intuitive fix to protect the FF3 scheme against our $O(N^{5})$ attack.
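To put the stated complexities in perspective, the following back-of-the-envelope computation (ours, not from the paper) evaluates them at $N=2^9$, the largest domain size on which the attack was reportedly tested.

```python
# Plugging N = 2^9 into the complexities quoted in the abstract (illustrative only).
import math

N = 2 ** 9

figures = {
    "FF3 attack, chosen plaintexts (N^(11/6))": N ** (11 / 6),
    "FF3 attack, time (N^5)": N ** 5,
    "4-round Feistel attack, known plaintexts (N^(3/2)*(N/2)^(1/6))": N ** 1.5 * (N / 2) ** (1 / 6),
    "4-round Feistel attack, time (N^3)": N ** 3,
}
for label, value in figures.items():
    print(f"{label}: about 2^{math.log2(value):.1f}")
```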
The Price of Low Communication in Secure Multi-Party Computation
Traditional protocols for secure multi-party computation among n parties
communicate at least a linear (in n) number of bits, even when computing very
simple functions. In this work we investigate the feasibility of protocols
with sublinear communication complexity. Concretely, we consider two clients,
one of which may be corrupted, who wish to perform some “small” joint
computation using n servers but without any trusted setup. We show that
enforcing sublinear communication complexity drastically affects the
feasibility bounds on the number of corrupted parties that can be tolerated in
the setting of information-theoretic security.
We provide a complete investigation of security in the presence of semi-honest
adversaries---static and adaptive, with and without erasures---and initiate
the study of security in the presence of malicious adversaries. For
semi-honest static adversaries, our bounds essentially match the corresponding bounds when there is no communication restriction---i.e., we can tolerate up to $t < (1/2 - \epsilon)n$ corrupted parties. For the adaptive case, however, the situation is different. We prove that without erasures even a small constant fraction of corruptions is intolerable, and---more surprisingly---when erasures are allowed, we prove that $t < (1 - \sqrt{0.5} - \epsilon)n$ corruptions can be tolerated, which we also show to be essentially optimal. The latter optimality proof hinges on a new treatment of probabilistic adversary structures that may be of independent interest. In the case of active corruptions in the sublinear communication setting, we prove that static “security with abort” is feasible when $t < (1/2 - \epsilon)n$, namely, the bound that is tight for semi-honest security. All of our negative results in fact rule out protocols with sublinear message complexity.
Efficient, Constant-Round and Actively Secure MPC: Beyond the Three-Party Case
While the feasibility of constant-round and actively secure MPC has been known for over two decades, the last few years have witnessed a flurry of designs and implementations that make its deployment a palpable reality. To our knowledge, however, existing concretely efficient MPC constructions are only for up to three parties.
In this paper we design and implement a new actively secure 5PC protocol tolerating two corruptions that requires $8$ rounds of interaction, only uses fast symmetric-key operations, and incurs~60\% less communication than the passively secure state-of-the-art solution from the work of Ben-Efraim, Lindell, and Omri [CCS 2016]. For example, securely evaluating the AES circuit when the parties are in different regions of the U.S. and Europe takes only $1.8$s, which is $2.6\times$ faster than the passively secure 5PC in the same environment.
Instrumental for our efficiency gains (less interaction, only symmetric key primitives) is a new 4-party primitive we call \emph{Attested OT}, which in addition to Sender and Receiver involves two additional ``assistant parties'' who will attest to the respective inputs of both parties, and which might be of broader applicability in practically relevant MPC scenarios. Finally, we also show how to generalize our construction to $n$ parties with similar efficiency properties where the corruption threshold is $t \approx \sqrt n$, and propose a combinatorial problem which, if solved optimally, can yield even better corruption thresholds for the same cost.
qDSA: Small and Secure Digital Signatures with Curve-based Diffie-Hellman Key Pairs
qDSA is a high-speed, high-security signature scheme that facilitates implementations with a very small memory footprint, a crucial requirement for embedded systems and IoT devices, and that uses the same public keys as modern Diffie--Hellman schemes based on Montgomery curves (such as Curve25519) or Kummer surfaces. qDSA resembles an adaptation of EdDSA to the world of Kummer varieties, which are quotients of algebraic groups by \(\pm1\). Interestingly, qDSA does not require any full group operations or point recovery: all computations, including signature verification, occur on the quotient where there is no group law. We include details on four implementations of qDSA, using Montgomery and fast Kummer surface arithmetic on the 8-bit AVR {ATmega} and 32-bit ARM Cortex~M0 platforms. We find that qDSA significantly outperforms state-of-the-art signature implementations in terms of stack usage and code size. We also include an efficient compression algorithm for points on fast Kummer surfaces, reducing them to the same size as compressed elliptic curve points for the same security level.
PRF-ODH: Relations, Instantiations, and Impossibility Results
Uncategorized
Uncategorized
The pseudorandom-function oracle-Diffie–Hellman (PRF-ODH) assumption has been introduced recently to analyze a variety of DH-based key exchange protocols, including TLS 1.2 and the TLS 1.3 candidates, as well as the extended access control (EAC) protocol. Remarkably, the assumption comes in different flavors in these settings and none of them has been scrutinized comprehensively yet. In this paper we therefore present a systematic study of the different PRF-ODH variants in the literature. In particular, we analyze their strengths relative to each other, showing that the variants form a hierarchy. We further investigate the boundaries between instantiating the assumptions in the standard model and the random oracle model. While we show that even the strongest variant is achievable in the random oracle model under the strong Diffie–Hellman assumption, we provide a negative result showing that it is implausible to instantiate even the weaker variants in the standard model via algebraic black-box reductions to common cryptographic problems.
Characterizations of the differential uniformity of vectorial functions by the Walsh transform
For all positive integers $n$, $m$ and every even positive integer $\delta$, we derive inequalities satisfied by the Walsh transforms of all vectorial $(n,m)$-functions and prove that the case of equality characterizes differential $\delta$-uniformity. This provides a generalization to all differentially $\delta$-uniform functions of the characterization of APN $(n,n)$-functions due to Chabaud and Vaudenay, by means of the fourth moment of the Walsh transform. Such a generalization has been missing since the introduction of the notion of differential uniformity by Nyberg in 1994 and since Chabaud and Vaudenay's result the same year.
For each even $\delta\geq 2$, we find several such characterizations.
In particular, when $\delta=2$ and $\delta=4$, we have that, for any $(n,n)$-function (resp. any $(n,n-1)$-function), the arithmetic mean of $W_F^2(u_1,v_1)W_F^2(u_2,v_2)W_F^2(u_1+u_2,v_1+v_2)$, when $u_1,u_2$ range independently over ${\Bbb F}_2^n$ and $v_1,v_2$ are nonzero, distinct, and range independently over ${\Bbb F}_2^m$, is at least $2^{3n}$, and that $F$ is APN (resp. differentially 4-uniform) if and only if this arithmetic mean equals $2^{3n}$ (which is the value we would get with a bent function, if such a function could exist).
These inequalities give more knowledge on the Walsh spectrum of $(n,m)$-functions. We deduce in particular a property of the Walsh support of highly nonlinear functions. We also consider the completely open question of whether the nonlinearity of APN functions is necessarily non-weak (as is the case for the known APN functions); we prove new lower bounds which cover all power APN functions (and hence a large part of the known APN functions), which explain why their nonlinearities are rather good, and we discuss the question of the nonlinearity of APN quadratic functions (since almost all other known APN functions are quadratic).
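For readers who want to see the arithmetic-mean characterization above in action, here is a small exploratory script (our own, not from the paper): it takes the cube map over the field with $2^3$ elements, a classical APN (Gold) function, computes its Walsh transform, and compares the arithmetic mean from the statement with $2^{3n}$. The field and parameter choices are ours and serve only as a toy check.

```python
# Toy check of the arithmetic-mean characterization quoted above, on the cube
# map over GF(2^3) (a Gold function, known to be APN).  Illustration only.
from itertools import product

def gf_mult(a: int, b: int, n: int = 3, poly: int = 0b1011) -> int:
    """Carry-less multiplication in GF(2^n) reduced modulo x^3 + x + 1."""
    r = 0
    for i in range(n):
        if (b >> i) & 1:
            r ^= a << i
    for i in range(2 * n - 2, n - 1, -1):  # reduce degrees n..2n-2
        if (r >> i) & 1:
            r ^= poly << (i - n)
    return r

def dot(a: int, b: int) -> int:
    """Inner product over F_2 of two bit-vectors encoded as integers."""
    return bin(a & b).count("1") & 1

def walsh(F, u, v, n):
    """W_F(u, v) = sum_x (-1)^{<v, F(x)> + <u, x>}."""
    return sum(1 - 2 * (dot(v, F[x]) ^ dot(u, x)) for x in range(1 << n))

n = 3
F = [gf_mult(gf_mult(x, x), x) for x in range(1 << n)]          # x -> x^3
W = {(u, v): walsh(F, u, v, n) for u in range(1 << n) for v in range(1 << n)}

total = count = 0
for u1, u2, v1, v2 in product(range(1 << n), range(1 << n),
                              range(1, 1 << n), range(1, 1 << n)):
    if v1 == v2:
        continue
    total += W[(u1, v1)] ** 2 * W[(u2, v2)] ** 2 * W[(u1 ^ u2, v1 ^ v2)] ** 2
    count += 1

print("arithmetic mean:", total / count, " vs  2^(3n) =", 2 ** (3 * n))
```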
Be Adaptive, Avoid Overcommitting
For many cryptographic primitives, it is relatively easy to achieve selective security (where the adversary commits a-priori to some of the choices to be made later in the attack) but appears difficult to achieve the more natural notion of adaptive security (where the adversary can make all choices on the go as the attack progresses). A series of several recent works shows how to cleverly achieve adaptive security in several such scenarios including generalized selective decryption (Panjwani, TCC '07 and Fuchsbauer et al., CRYPTO '15), constrained PRFs (Fuchsbauer et al., ASIACRYPT '14), and Yao garbled circuits (Jafargholi and Wichs, TCC '16b). Although the above works expressed vague intuition that they share a common technique, the connection was never made precise. In this work we present a new framework that connects all of these works and allows us to present them in a unified and simplified fashion. Moreover, we use the framework to derive a new result for adaptively secure secret sharing over access structures defined via monotone circuits. We envision that further applications will follow in the future.
Underlying our framework is the following simple idea. It is well known that selective security, where the adversary commits to $n$ bits of information about his future choices, automatically implies adaptive security at the cost of amplifying the adversary's advantage by a factor of up to $2^n$. However, in some cases the proof of selective security proceeds via a sequence of hybrids, where each pair of adjacent hybrids locally only requires some smaller partial information consisting of $m \ll n$ bits. The partial information needed might be completely different between different pairs of hybrids, and if we look across all the hybrids we might rely on the entire $n$-bit commitment. Nevertheless, the above is sufficient to prove adaptive security, at the cost of amplifying the adversary's advantage by a factor of only $2^m \ll 2^n$.
In all of our examples using the above framework, the different hybrids are captured by some sort of a graph pebbling game and the amount of information that the adversary needs to commit to in each pair of hybrids is bounded by the maximum number of pebbles in play at any point in time. Therefore, coming up with better strategies for proving adaptive security translates to various pebbling strategies for different types of graphs.
Identity-based Encryption from Codes with Rank Metric
Uncategorized
Uncategorized
Code-based cryptography has a long history, almost as long as the history of public-key encryption (PKE). While we can construct almost all primitives from codes, such as PKE, signatures, group signatures, etc., it has been a long-standing open problem to construct identity-based encryption from codes. We solve this problem by relying on codes with rank metric.
The concept of identity-based encryption (IBE), introduced by Shamir in 1984, allows the use of users' identifier information, such as an email address, as the public key for encryption. There are two problems that make the design of IBE extremely hard: the requirement that the public key can be an arbitrary string and the possibility to extract decryption keys from the public keys. In fact, it took nearly twenty years for the problem of designing an efficient method to implement an IBE to be solved. The known methods of designing IBE are based on different tools: elliptic curve pairings, by Sakai, Ohgishi and Kasahara and by Boneh and Franklin in 2000 and 2001 respectively; the quadratic residue problem, by Cocks in 2001; and finally the Learning-with-Errors problem, by Gentry, Peikert, and Vaikuntanathan in 2008.
Among all candidates for post-quantum cryptography, there thus exist only lattice-based IBE schemes. In this paper, we propose a new method, based on the hardness of learning problems with rank metric, to design the first code-based IBE scheme. In order to overcome the two above problems in designing an IBE scheme, we first construct a rank-based PKE, called RankPKE, whose public key space is dense and can thus be obtained from a hash of any identity. We then extract a decryption key from any public key by constructing a trapdoor function which relies on RankSign, a signature scheme from PQCrypto 2014.
In order to prove the security of our schemes, we introduce a new problem for the rank metric: the Rank Support Learning (RSL) problem. A significant technical contribution of the paper is a detailed study of the hardness of the RSL problem.
Recovering Short Generators of Principal Fractional Ideals in Cyclotomic Fields of Conductor $p^\alpha q^\beta$
Uncategorized
Uncategorized
Several recent cryptographic constructions - including a public key encryption scheme, a fully homomorphic encryption scheme, and a candidate multilinear map construction - rely on the hardness of the short generator principal ideal problem (SG-PIP): given a $\mathbb{Z}$-basis of some principal (fractional) ideal in an algebraic number field that is guaranteed to have an exceptionally short generator with respect to the logarithmic embedding, find a shortest generator of the principal ideal. The folklore approach to solve this problem is to split it into two subproblems. First, recover some arbitrary generator of the ideal, which is known as the principal ideal problem (PIP). Second, solve a bounded distance decoding (BDD) problem in the log-unit lattice to transform this arbitrary generator into a shortest generator of the ideal.
The first problem, i.e., solving the PIP, is known to be solvable in polynomial time on quantum computers for arbitrary number fields under the generalized Riemann hypothesis due to Biasse and Song. Cramer, Ducas, Peikert, and Regev showed, based on the work of Campbell, Groves, and Shepherd, that the second problem can be solved in polynomial time on classical computers for cyclotomic number fields of prime-power conductor.
In this work, we extend the work of Cramer, Ducas, Peikert, and Regev to cyclotomic number fields $K=\mathbb{Q}(\xi_m)$ of conductor $m=p^\alpha q^\beta$, where $p,q$ are distinct odd primes.
In more detail, we show that the second problem can be solved in classical polynomial time (with quantum polynomial time precomputation) under some sufficient conditions, if $(p,q)$ is an $(\alpha, \beta)$-generator prime pair, a new notion introduced in this work. We further provide experimental evidence that suggests that roughly $35\%$ of all prime pairs are $(\alpha, \beta)$-generator prime pairs for all $\alpha$ and $\beta$. Combined with the work of Biasse and Song our results show that under sufficient conditions the SG-PIP can be solved in quantum polynomial time in cyclotomic number fields of composite conductor of the form $p^\alpha q^\beta$.
Last updated: 2018-12-30
PROVABLY SECURE TWO-FACTOR AUTHENTICATION SCHEME FOR E-HEALTH USING SMART CARD
Uncategorized
Uncategorized
Nowadays, IT-enabled services gain more attention because resources can easily be accessed from remote places. IT-enabled services extend to all kinds of business and personal applications, such as e-commerce, e-business, e-transactions and e-healthcare. In India, the e-healthcare system has gained particular attention in recent years due to its effectiveness. Information assurance is an important part of any e-healthcare system, because such a system maintains sensitive health records of individuals. Any information system is subject to two different issues: 1) information handling and 2) information assurance. An e-healthcare system has to provide the necessary security factors without compromising information. Information access is one of the foremost issues in providing access rights to legitimate users. In this paper, we propose a two-factor authentication scheme using Elliptic Curve Cryptography with a smart card. The proposed authentication is based on two factors, a smart card and a password, which provides high security with minimum computational cost. The proposed scheme generates a new session key for every session with a fresh time stamp and nonce value. The proposed scheme needs minimum computation cost compared with related authentication schemes using smart cards.
State of the Art in Lightweight Symmetric Cryptography
Lightweight cryptography has been one of the "hot topics" in symmetric cryptography in the recent years. A huge number of lightweight algorithms have been published, standardized and/or used in commercial products.
In this paper, we discuss the different implementation constraints that a "lightweight" algorithm is usually designed to satisfy in both the software and the hardware case. We also present an extensive survey of all lightweight symmetric primitives we are aware of. It covers designs from the academic community, from government agencies and proprietary algorithms which were reverse-engineered or leaked. Relevant national (NIST...) and international (ISO/IEC...) standards are listed.
We identified several trends in the design of lightweight algorithms, such as the designers' preference for ARX-based and bitsliced-S-Box-based designs or simpler key schedules. We also discuss more general trade-offs facing the authors of such algorithms and suggest a clearer distinction between two subsets of lightweight cryptography. The first, ultra-lightweight cryptography, deals with primitives fulfilling a unique purpose while satisfying specific and narrow constraints. The second is ubiquitous cryptography and it encompasses more versatile algorithms both in terms of functionality and in terms of implementation trade-offs.
Hedging Public-Key Encryption in the Real World
Hedged PKE schemes are designed to provide useful security when the per-message randomness fails to be uniform, say, due to faulty implementations or adversarial actions. A simple and elegant theoretical approach to building such schemes works like this: Synthesize fresh random bits by hashing all of the encryption inputs, and use the resulting hash output as randomness for an underlying PKE scheme. The idea actually goes back to the Fujisaki-Okamoto transform for turning CPA-secure encryption into CCA-secure encryption, and is also used to build deterministic PKE schemes.
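As a rough illustration of the hash-then-encrypt idea described above (a minimal sketch under our own simplifying assumptions, not the paper's concrete construction), suppose we are given some randomized PKE encryption routine pke_encrypt(pk, msg, coins):

```python
# Minimal sketch of hedging: derive the per-encryption coins by hashing all of
# the encryption inputs.  pke_encrypt is a hypothetical randomized PKE routine;
# a real scheme would also encode the inputs unambiguously (length prefixes).
import hashlib

def hedged_encrypt(pke_encrypt, pk: bytes, msg: bytes, rand: bytes) -> bytes:
    coins = hashlib.sha256(b"hedge" + pk + msg + rand).digest()
    # Even if `rand` is biased or repeated, the coins remain hard to predict as
    # long as (pk, msg, rand) jointly has enough entropy from the attacker's view.
    return pke_encrypt(pk, msg, coins)
```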
In practice, implementing this simple construction is surprisingly difficult, as the high- and mid-level APIs presented by the most commonly used crypto libraries (e.g. OpenSSL and forks thereof) do not permit one to specify the per-encryption randomness. Thus application developers are forced to piece together low-level functionalities and attend to any associated, security-critical algorithmic choices. Other approaches to hedged PKE present similar problems in practice.
We reconsider the matter of building hedged PKE schemes, and the security notions they aim to achieve. We lift the current best-possible security notion for hedged PKE (IND-CDA) from the CPA setting to the CCA setting, and then show how to achieve it using primitives that are readily available from high-level APIs. We also propose a new security notion, MM-CCA, which generalizes traditional IND-CCA to admit imperfect randomness. Like IND-CCA, and unlike IND-CDA, our notion gives the adversary the public key. We show that MM-CCA is achieved by RSA-OAEP in the random-oracle model; this is significant in practice because RSA-OAEP is directly available from high-level APIs across all libraries we surveyed. We sort out relationships among the various notions, and also develop new results for existing hedged PKE constructions.
Quantum Security of NMAC and Related Constructions
We prove the security of NMAC, HMAC, AMAC, and the cascade construction with fixed input-length as quantum-secure pseudo-random functions (PRFs). Namely, they are indistinguishable from a random oracle against any polynomial-time quantum adversary that can make quantum superposition queries. In contrast, many blockcipher-based PRFs including CBC-MAC were recently broken by quantum superposition attacks.
Classical proof strategies for these constructions do not generalize to the quantum setting, and we observe that they sometimes even fail completely (e.g., the universal-hash then PRF paradigm for proving security of NMAC). Instead, we propose a direct hybrid argument as a new proof strategy (both classically and quantumly). We first show that a quantum-secure PRF is secure against key-recovery attacks, and remains secure under random leakage of the key. Next, as a key technical tool, we extend the oracle indistinguishability framework of Zhandry in two directions: we consider distributions on functions rather than strings, and we also consider a relative setting, where an additional oracle, possibly correlated with the distributions, is given to the adversary as well. This enables a hybrid argument to prove the security of NMAC. Security proofs for other constructions follow similarly.
Generalized Distinguishing Attack: A New Cryptanalysis of AES-like Permutations
Uncategorized
Uncategorized
We consider highly structured truncated differential paths to mount rebound attacks on hash functions based on AES-like permutations. We explain how such differential paths can be computed using a Mixed-Integer Linear Programming approach. Together with the SuperSBox description, this allows us to build a rebound attack with a $6$-round inbound phase whereas classical rebound attacks have $4$-round inbound phases. Non-square AES-like permutations seem to be more vulnerable than square ones. We illustrate this new technique by mounting the first distinguishing attack on an $11$-round version of the Gr\o{}stl-$512$ internal permutation $P_{1024}$ with $\mathit{O}(2^{72})$ computational complexity and $\mathit{O}(2^{56})$ memory complexity, to be compared with the $\mathit{O}(2^{96})$ computations required by the corresponding generic attack. Previous best results on this permutation reached $10$ rounds with a computational complexity of $\mathit{O}(2^{392})$, to be compared with $\mathit{O}(2^{448})$ required by the corresponding generic attack.
Inverted Leftover Hash Lemma
Universal hashing has found many applications in computer science. In cryptography, the most important fact about universal families is the so-called Leftover Hash Lemma, proved by Impagliazzo, Levin and Luby. In the language of modern cryptography it states that almost universal families are good extractors.
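For context, one common textbook form of the lemma (recalled here from standard sources rather than quoted from this paper) states that if $H$ is drawn from a universal family of functions from $\{0,1\}^n$ to $\{0,1\}^m$ and $X$ is a source with collision entropy $H_2(X) \ge k$, then the statistical distance between $(H, H(X))$ and $(H, U_m)$ is at most $\frac{1}{2}\sqrt{2^{m-k}}$, where $U_m$ is uniform on $\{0,1\}^m$; in other words, hashing down to $m \le k - 2\log(1/\varepsilon)$ bits yields an $\varepsilon$-close-to-uniform key.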
In this work we provide a somewhat surprising characterization in the opposite direction. Namely, every extractor with sufficiently good parameters yields a universal family on a noticeable fraction of its inputs.
Our proof technique is based on tools from extremal graph theory applied to the ``collision graph'' induced by the extractor, and may be of independent interest. We discuss implications for randomness extractors and non-malleable codes.
Last updated: 2018-10-12
Optimal Overcoming Weak Expectations
Barak et al. (CRYPTO'11) initiated the study of so-called square-friendly applications, which offer good security even for keys with an entropy deficiency (weak keys) and are for this reason important for key derivation. The state-of-the-art security bounds were established by Dodis and Yu (TCC'13) by modeling "weak" keys as distributions of high collision entropy.
In this paper we answer the question of what the minimum requirement on weak keys is for them to be "good" for these applications. The answer gives an elegant operational meaning to the notion of smooth collision entropy. Namely, smooth collision entropy is both sufficient and necessary (with essentially the same entropy parameters) to guarantee the security of square-friendly applications under weak keys. This characterization is a consequence of constrained optimization techniques.
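For reference (standard definitions, not specific to this paper): the collision entropy of a key $X$ is $H_2(X) = -\log_2 \sum_x \Pr[X=x]^2$, and $X$ has $\varepsilon$-smooth collision entropy at least $k$ if there is a distribution $Y$ within statistical distance $\varepsilon$ of $X$ with $H_2(Y) \ge k$.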
Side-Channel Attacks on BLISS Lattice-Based Signatures -- Exploiting Branch Tracing Against strongSwan and Electromagnetic Emanations in Microcontrollers
In this paper, we investigate the security of the BLISS lattice-based signature scheme, one of the most promising candidates for post-quantum-secure signatures, against side-channel attacks. Several works have been devoted to its efficient implementation on various platforms, from desktop CPUs to micro-controllers and FPGAs, and more recent papers have also considered its security against certain types of physical attacks, notably fault injection and cache attacks. We turn to more traditional side-channel analysis, and describe several attacks that can yield a full key recovery.
We first identify a serious source of leakage in the rejection sampling algorithm used during signature generation. Existing implementations of that rejection sampling step, which is essential for security, actually leak the ``relative norm'' of the secret key. We show how an extension of an algorithm due to Howgrave-Graham and Szydlo can be used to recover the key from that relative norm, at least when the absolute norm is easy to factor (which happens for a significant fraction of secret keys). We describe how this leakage can be exploited in practice both on an embedded device (an 8-bit AVR microcontroller) using electromagnetic analysis (EMA), and on a desktop computer (a recent Intel CPU running Linux) using branch tracing. The latter attack has been mounted against the open source VPN software strongSwan.
We also show that other parts of the BLISS signing algorithm can leak secrets not just for a subset of secret keys, but for 100% of them. The BLISS Gaussian sampling algorithm in strongSwan is intrinsically variable time. This would be hard to exploit using a noisy source of leakage like EMA, but branch tracing allows recovering the entire randomness and hence the key: we show that a single execution of the strongSwan signature algorithm is actually sufficient for full key recovery. We also describe a more traditional side-channel attack on the sparse polynomial multiplications carried out in BLISS: classically, such multiplications can be attacked using DPA; however, our 8-bit AVR target implementation uses repeated shifted additions instead. Surprisingly, we manage to obtain a full key recovery in that setting using integer linear programming from a single EMA trace.
A simple and compact algorithm for SIDH with arbitrary degree isogenies
Uncategorized
Uncategorized
We derive a new formula for computing arbitrary odd-degree isogenies between elliptic curves in Montgomery form. The formula lends itself to a simple and compact algorithm that can efficiently compute any low odd-degree isogenies inside the supersingular isogeny Diffie-Hellman (SIDH) key exchange protocol. Our implementation of this algorithm shows that, beyond the commonly used 3-isogenies, there is a moderate degradation in relative performance of $(2d+1)$-isogenies as $d$ grows, but that larger values of $d$ can now be used in practical SIDH implementations.
We further show that the proposed algorithm can be used to both compute isogenies of curves and evaluate isogenies at points, unifying the two main types of functions needed for isogeny-based public-key cryptography. Together, these results open the door for practical SIDH on a much wider class of curves, and allow for simplified SIDH implementations that only need to call one general-purpose function inside the fundamental computation of the large degree secret isogenies.
As an additional contribution, we also give new explicit formulas for 3- and 4-isogenies, and show that these give immediate speedups when substituted into pre-existing SIDH libraries.
Encryption Switching Protocols Revisited: Switching modulo $p$
Uncategorized
Uncategorized
At CRYPTO 2016, Couteau, Peters and Pointcheval introduced a new primitive called Encryption Switching Protocols, which allows switching ciphertexts between two encryption schemes. If such an ESP is built with two schemes that are respectively additively and multiplicatively homomorphic, it naturally gives rise to a secure 2-party computation protocol. It is thus perfectly suited for evaluating functions, such as multivariate polynomials, given as arithmetic circuits. Couteau et al. built an ESP to switch between Elgamal and Paillier encryptions, which do not naturally fit well together. Consequently, they had to design a clever variant of Elgamal over $\mathbf{Z}/n\mathbf{Z}$ with a costly shared decryption.
In this paper, we first present a conceptually simple generic construction for encryption switching protocols. We then give an efficient instantiation of our generic approach that uses two well-suited protocols, namely a variant of Elgamal in $\mathbf{Z}/p\mathbf{Z}$ and the Castagnos-Laguillaumie encryption, which is additively homomorphic over $\mathbf{Z}/p\mathbf{Z}$. Among other advantages, this allows all computations to be performed modulo a prime $p$ instead of an RSA modulus. Overall, our solution leads to significant reductions in the number of rounds as well as the number of bits exchanged by the parties during the interactive protocols. We also show how to extend its security to the malicious setting.
DeepSecure: Scalable Provably-Secure Deep Learning
This paper proposes DeepSecure, a novel framework that enables scalable execution of state-of-the-art Deep Learning (DL) models in a privacy-preserving setting. DeepSecure targets scenarios in which none of the involved parties, neither the cloud servers that hold the DL model parameters nor the delegating clients who own the data, is willing to reveal their information. Our framework is the first to enable accurate and scalable DL analysis of data generated by distributed clients without sacrificing security for efficiency. The secure DL computation in DeepSecure is performed using Yao's Garbled Circuit (GC) protocol. We devise GC-optimized realizations of various components used in DL. Our optimized implementation achieves more than 58-fold higher throughput per sample compared with the best prior solution. In addition to our optimized GC realization, we introduce a set of novel low-overhead pre-processing techniques which further reduce the overall GC runtime in the context of deep learning. Extensive evaluations of various DL applications demonstrate up to two orders of magnitude of additional runtime improvement achieved as a result of our pre-processing methodology. We also provide mechanisms to securely delegate GC computations to a third party in constrained embedded settings.
A Formal Treatment of Multi-key Channels
Secure channel protocols protect data transmission over a network from being overheard or tampered with. In the common abstraction, cryptographic models for channels involve a single key for ensuring the central security notions of confidentiality and integrity. The currently developed next version of the Transport Layer Security protocol, TLS 1.3, however introduces a key updating mechanism in order to deploy a sequence of multiple, possibly independent encryption keys in its channel sub-protocol. This design aims at achieving forward security, protecting prior communication after long-term key corruption, as well as security of individual channel phases even if the key in other phases is leaked (a property we denote as phase-key insulation). Neither of these security aspects has been treated formally in the context of cryptographic channels so far, leading to a current lack of techniques to evaluate such channel designs cryptographically.
We approach this gap by introducing the first formal model of multi-key channels, where sender and receiver can update their shared secret key during the lifetime of the channel without interrupting the communication. We present modular, game-based notions for confidentiality and integrity, integrating forward security and phase-key insulation as two advanced security aspects. As we show, our framework of notions on the lower end of its hierarchy naturally connects to the existing notions of stateful encryption established for single-key channels. Like for classical channels, it further allows for generically composing chosen-ciphertext confidentiality from chosen-plaintext confidentiality and ciphertext integrity. We instantiate the strongest security notions in our model with a construction based on authenticated encryption with associated data and a pseudorandom function. Being comparatively close, our construction additionally enables us to discuss the TLS 1.3 record protocol design.
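To give a feel for the key-updating idea (a generic sketch under our own assumptions, not the paper's construction), the update step can be pictured as a simple PRF ratchet on the shared channel key:

```python
# Generic illustration of a channel key update step; HMAC-SHA256 stands in for
# the PRF.  Sender and receiver ratchet in lockstep and erase the old key, which
# is what makes earlier phases unreadable after a later compromise.
import hmac, hashlib

def next_phase_key(current_key: bytes) -> bytes:
    """Derive the key of phase i+1 from the key of phase i."""
    return hmac.new(current_key, b"channel key update", hashlib.sha256).digest()
```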
Algebraic XOR-RKA-Secure Pseudorandom Functions from Post-Zeroizing Multilinear Maps
Uncategorized
Uncategorized
Due to the vast number of successful related-key attacks against existing block ciphers, related-key security has become a common design goal for such primitives. In these attacks, the adversary is capable of seeing the output of a function not only on inputs of its choice, but also under related keys. At Crypto 2010, Bellare and Cash proposed the first construction of a pseudorandom function that could provably withstand such attacks based on standard assumptions. Their construction, as well as several others that appeared more recently, have in common the fact that they only consider linear or polynomial functions of the secret key over complex groups. In reality, however, most related-key attacks have a simpler form, such as the XOR of the key with a known value.
To address this problem, we propose the first construction of an RKA-secure pseudorandom function for XOR relations. Our construction relies on multilinear maps and, hence, can only be seen as a feasibility result. Nevertheless, we remark that it can be instantiated under two of the existing multilinear-map candidates since it does not reveal any encodings of zero. To achieve this goal, we rely on several techniques that were used in the context of program obfuscation, but we also introduce new ones to address challenges that are specific to the related-key-security setting.
Optimal Security Reductions for Unique Signatures: Bypassing Impossibilities with A Counterexample
Uncategorized
Uncategorized
Optimal security reductions for unique signatures (Coron, Eurocrypt 2002) and their generalization, i.e., efficiently re-randomizable signatures (Hofheinz et al., PKC 2012; Bader et al., Eurocrypt 2016), have been well studied in the literature. In particular, it has been shown that under a non-interactive hard assumption, any security reduction (with or without random oracles) for a unique signature scheme or an efficiently re-randomizable signature scheme must lose a factor of at least $q_s$ in the security model of existential unforgeability against chosen-message attacks (EU-CMA), where $q_s$ denotes the number of signature queries. Note that $q_s$ can be as large as $2^{30}$ in practice. From these impossibility results, all unique signature schemes and efficiently re-randomizable signature schemes were concluded to come with loose reductions.
Somewhat surprisingly, in contrast to previous impossibility results (Coron, Eurocrypt 2002; Hofheinz et al., PKC 2012; Bader et al., Eurocrypt 2016), in this work we show that without changing the assumption type and security model, it is not always the case that any security reduction must lose a factor of at least $q_s$. As a counterexample, we propose a unique signature scheme with a tight reduction in the EU-CMA security model under the Computational Diffie-Hellman (CDH) assumption. Precisely, in the random oracle model, we can program a security reduction with a loss factor of at most $nq^{1/{n}}$, where $n$ can be any integer independent of the security parameter for the scheme construction and $q$ is the number of hash queries to random oracles. The loss factor in our reduction can be very small. Considering $n=25$ and $q=2^{50}$ as an example, the loss factor is at most $nq^{1/{n}}=100$ and therefore our security reduction is tight.
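The loss-factor arithmetic from the abstract is easy to reproduce (a two-line check of the quoted numbers):

```python
# n * q^(1/n) with n = 25 and q = 2^50 hash queries, as quoted above.
n, q = 25, 2 ** 50
print(n * q ** (1 / n))   # 25 * 2^(50/25) = 25 * 4 = 100.0
```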
Notice that the previous impossibility results are derived from proofs via a so-called meta-reduction technique. We stress that instead of indicating any flaw in their meta-reduction proofs, our counterexample merely demonstrates that the given meta-reduction proofs fail to capture all security reductions. More precisely, we adopt a reduction called a query-based reduction, where the reduction uses a hash query from the adversary to solve an underlying hard problem. We show that the meta-reduction proofs break down in our query-based reduction. The query-based reduction is not a new notion and has been adopted for encryption proofs, but this work is the first to apply query-based reductions to digital signatures.
The counterexample given in this work is of independent interest, as it implies a generic way of constructing a digital signature scheme (including unique signatures) with a tight reduction in the random oracle model from a digital signature scheme with a loose reduction. Although our proposed methodology is somewhat impractical due to the inefficiency of the signature length, it introduces a completely new approach to tight proofs that differs from traditional approaches using a random salt.
Full-State Keyed Duplex With Built-In Multi-User Support
The keyed duplex construction was introduced by Bertoni et al. (SAC 2011) and recently generalized to full-state absorption by Mennink et al. (ASIACRYPT 2015). We present a generalization of the full-state keyed duplex that natively supports multiple instances by design, and perform a security analysis that improves over that of Mennink et al. in terms of a more modular security analysis and a stronger and more adaptive security bound. Via the introduction of an additional parameter to the analysis, our bound demonstrates a significant security improvement in case of nonce-respecting adversaries. Furthermore, by supporting multiple instances by design, instead of adapting the security model to it, we manage to derive a security bound that is largely independent of the number of instances.
Time-Memory Tradeoff Attacks on the MTP Proof-of-Work Scheme
Uncategorized
Uncategorized
Proof-of-work (PoW) schemes are cryptographic primitives with numerous applications, and in particular, they play a crucial role in maintaining consensus in cryptocurrency networks. Ideally, a cryptocurrency PoW scheme should have several desired properties, including efficient verification on one hand, and high memory consumption of the prover's algorithm on the other hand, making the scheme less attractive for implementation on dedicated hardware.
At the USENIX Security Symposium 2016, Biryukov and Khovratovich presented a new promising PoW scheme called MTP (Merkle Tree Proof) that achieves essentially all desired PoW properties. As a result, MTP has received substantial attention from the cryptocurrency community. The scheme uses a Merkle hash tree construction over a large array of blocks computed by a memory consuming (memory-hard) function. Despite the fact that only a small fraction of the memory is verified by the efficient verification algorithm, the designers claim that a cheating prover that uses a small amount of memory will suffer from a significant computational penalty.
In this paper, we devise a sub-linear computation-memory tradeoff attack on MTP. We apply our attack to the concrete instance proposed by the designers which uses the memory-hard function Argon2d and computes a proof by allocating 2 gigabytes of memory. The attack computes arbitrary malicious proofs using less than a megabyte of memory (about 1/3000 of the honest prover's memory) at a relatively mild penalty of 170 in computation. This is more than 55,000 times faster than what is claimed by the designers. The attack requires a one-time precomputation step of complexity $2^{64}$, but its online cost is only increased by a factor which is less than 2 when spending $2^{48}$ precomputation time.
The main idea of the attack is to exploit the fact that Argon2d accesses its memory in a way which is determined by its previous computations. This makes it possible to inject a small fraction of carefully selected memory blocks that manipulate Argon2d's memory access patterns, significantly weakening its memory-hardness.
Modes of Operation Suitable for Computing on Encrypted Data
We examine how two parallel modes of operation for Authenticated Encryption (namely CTR+PMAC and OTR mode) work when evaluated in a multi-party computation engine. These two modes are selected because they suit the PRFs examined in previous works. In particular the modes are highly parallel, and do not require evaluation of the inverse of the underlying PRF. In order to use these modes one needs to convert them from their original instantiation, defined on binary blocks of data, to working on elements in a large prime finite field, the latter fitting the use case of many secret-sharing-based MPC engines. In doing this conversion we examine the associated security proofs of PMAC and OTR, and show that they carry over to this new setting.
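As a rough illustration of the kind of conversion involved (our own minimal sketch, not the modes or proofs from the paper), counter-mode encryption over a prime field replaces the XOR of a binary keystream with addition modulo $p$; the PRF below is a stand-in built from HMAC-SHA256 purely for illustration.

```python
# Counter mode over Z_p (illustrative sketch): c_i = m_i + F_k(nonce || i) mod p.
# A real construction would sample the keystream without the slight modular bias
# introduced by the reduction below.
import hmac, hashlib

P = 2**61 - 1  # example prime modulus

def prf(key: bytes, x: bytes) -> int:
    return int.from_bytes(hmac.new(key, x, hashlib.sha256).digest(), "big") % P

def ctr_encrypt(key: bytes, nonce: bytes, msg):
    """Encrypt a list of Z_p elements by adding a PRF keystream modulo p."""
    return [(m + prf(key, nonce + i.to_bytes(8, "big"))) % P
            for i, m in enumerate(msg)]

def ctr_decrypt(key: bytes, nonce: bytes, ct):
    return [(c - prf(key, nonce + i.to_bytes(8, "big"))) % P
            for i, c in enumerate(ct)]
```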
Multi-Key Authenticated Encryption with Corruptions: Reductions are Lossy
We study the security of symmetric encryption schemes in settings with multiple users and realistic adversaries who can adaptively corrupt encryption keys. To avoid confinement to any particular definitional paradigm, we propose a general framework for multi-key security definitions. By appropriate settings of the parameters of the framework, we obtain multi-key variants of many of the existing single-key security notions.
This framework is instrumental in establishing our main results. We show that for all single-key secure encryption schemes satisfying a minimal key uniqueness assumption and almost any instantiation of our general multi-key security notion, any reasonable reduction from the multi-key game to a standard single-key game necessarily incurs a linear loss in the number of keys. We prove this result for all three classical single-key security notions capturing confidentiality, authenticity and the combined authenticated encryption notion.
A Reaction Attack on the QC-LDPC McEliece Cryptosystem
Guo et al. recently presented a reaction attack against the QC-MDPC McEliece cryptosystem. Their attack is based on the observation that when a bit-flipping decoding algorithm is used in the QC-MDPC McEliece, then there exists a dependence between the secret matrix $H$ and the failure probability of the bit-flipping algorithm. This dependence can be exploited to reveal the matrix $H$ which constitutes the private key in the cryptosystem. It was conjectured that such dependence is present even when a soft-decision decoding algorithm is used instead of a bit-flipping algorithm.
This paper shows that a similar dependence between the secret matrix $H$ and the failure probability of a decoding algorithm is also present in the QC-LDPC McEliece cryptosystem. Unlike QC-MDPC McEliece, the secret key in QC-LDPC McEliece also contains matrices $S$ and $Q$ in addition to the matrix $H$. We observe that there also exists a dependence between the failure probability and the matrix $Q$. We show that these dependences leak enough information to allow an attacker to construct a sparse parity-check matrix for the public code. This parity-check matrix can then be used for decrypting ciphertexts.
We tested the attack on an implementation of the QC-LDPC McEliece using a soft-decision decoding algorithm. Thus we also confirmed that soft-decision decoding algorithms can be vulnerable to leaking information about the secret key.
Robust Fuzzy Extractors and Helper Data Manipulation Attacks Revisited: Theory vs Practice
Uncategorized
Uncategorized
Fuzzy extractors were proposed in 2004 by Dodis et al. as a secure way to generate cryptographic keys from noisy sources. In recent years, fuzzy extractors have become an important building block in hardware security due to their use in secure key generation based on Physical Unclonable Functions (PUFs). Fuzzy extractors are provably secure against passive attackers. A year later, Boyen et al. introduced robust fuzzy extractors, which are also provably secure against active attackers, i.e., attackers that can manipulate the helper data.
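For readers unfamiliar with the primitive, the classical (non-robust) code-offset construction of Dodis et al. can be sketched as follows; the toy repetition code and SHA-256 extractor below are our own simplifications for illustration, not what a real PUF key generator would use.

```python
# Toy code-offset fuzzy extractor (illustration only).
import hashlib, secrets

R = 5  # repetition factor of the toy error-correcting code

def encode(bits):   # repetition code: each bit repeated R times
    return [b for b in bits for _ in range(R)]

def decode(bits):   # majority vote within each block of R bits
    return [int(sum(bits[i:i + R]) > R // 2) for i in range(0, len(bits), R)]

def gen(w):
    """Enrollment: w is the noisy source reading (list of 0/1 bits, length a multiple of R)."""
    c = encode([secrets.randbelow(2) for _ in range(len(w) // R)])  # random codeword
    helper = [wi ^ ci for wi, ci in zip(w, c)]                      # helper data: w XOR c
    key = hashlib.sha256(bytes(decode(c))).digest()                 # extractor stand-in
    return key, helper

def rep(w_noisy, helper):
    """Reproduction: recover the same key from a noisy re-reading of the source."""
    c_noisy = [wi ^ hi for wi, hi in zip(w_noisy, helper)]  # = c XOR noise
    return hashlib.sha256(bytes(decode(c_noisy))).digest()
```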
In this paper we show that the original provably secure robust fuzzy extractor construction by Boyen et al. actually does not fulfill the error-correction requirements of practical PUF applications. The fuzzy extractors proposed for PUF-based key generation that do fulfill the error-correction requirements, on the other hand, cannot be extended to such robust fuzzy extractors, due to a strict bound $t$ on the number of correctable errors. While it is therefore tempting to simply ignore this strict bound, we present novel helper data manipulation attacks on fuzzy extractors that also work if a ``robust fuzzy extractor-like'' construction without this strict bound is used.
Hence, this paper can be seen as a call for action to revisit this seemingly solved problem of building robust fuzzy extractors. The new focus should be on building more efficient solutions in terms of error-correction capability, even if this might come at the costs of a proof in a weaker security model.
Reducing Communication Channels in MPC
In both information-theoretic and computationally-secure Multi-Party Computation (MPC) protocols the parties are usually assumed to be connected by a complete network of secure or authenticated channels, respectively. Taking inspiration from a recent, highly efficient, three-party honest-majority computationally-secure MPC protocol of Araki et al., we show how to perform the most costly part of a computationally secure MPC protocol for an arbitrary $Q_2$ access structure over an incomplete network. We present both passive and actively secure (with abort) variants of our protocol. In all cases we require fewer communication channels for secure multiplication than Maurer's ``MPC-Made-Simple'' protocol, at the expense of requiring pre-shared secret keys for Pseudo-Random Functions (PRFs).
Laconic Oblivious Transfer and its Applications
In this work, we introduce a novel technique for secure computation over large inputs. Specifically, we provide a new oblivious transfer (OT) protocol with a laconic receiver. Laconic OT allows a receiver to commit to a large input $D$ (of length $M$) via a short message. Subsequently, a single short message by a sender allows the receiver to learn $m_{D[L]}$, where the messages $m_0, m_1$ and the location $L \in [M]$ are dynamically chosen by the sender. All prior constructions of OT required the receiver's outgoing message to grow with $D$.
Our key contribution is an instantiation of this primitive based on the Decisional Diffie-Hellman (DDH) assumption in the common reference string (CRS) model. The technical core of this construction is a novel use of somewhere statistically binding (SSB) hashing in conjunction with hash proof systems. Next, we show applications of laconic OT to non-interactive secure computation on large inputs and multi-hop homomorphic encryption for RAM programs.
To BLISS-B or not to be - Attacking strongSwan's Implementation of Post-Quantum Signatures
In the search for post-quantum secure alternatives to RSA and ECC, lattice-based cryptography appears to be an attractive and efficient option. A particularly interesting lattice-based signature scheme is BLISS, offering key and signature sizes in the range of RSA moduli. A range of works on efficient implementations of BLISS is available, and the scheme has seen a first real-world adoption in strongSwan, an IPsec-based VPN suite. In contrast, the implementation-security aspects of BLISS, and lattice-based cryptography in general, are still largely unexplored.
At CHES 2016, Groot Bruinderink et al. presented the first side-channel attack on BLISS, thus proving that this topic cannot be neglected. Nevertheless, their attack has some limitations. First, the technique is demonstrated via a proof-of-concept experiment that was not performed under realistic attack settings. Furthermore, the attack does not apply to BLISS-B, an improved variant of BLISS and also the default option in strongSwan. This problem also applies to later works on implementation security of BLISS.
In this work, we solve both of the above problems. We present a new side-channel key-recovery algorithm against both the original BLISS and the BLISS-B variant. Our key-recovery algorithm draws on a wide array of techniques, including learning parity with noise, integer programs, maximum-likelihood tests, and lattice-basis reduction. With each application of a technique, we reveal additional information on the secret key, culminating in a complete key recovery.
Finally, we show that cache attacks on post-quantum cryptography are not only possible, but also practical. We mount an asynchronous cache attack on the production-grade BLISS-B implementation of strongSwan. The attack recovers the secret signing key after observing roughly 6000 signature generations.
Multi Collision Resistant Hash Functions and their Applications
Collision resistant hash functions are functions that shrink their input, but for which it is computationally infeasible to find a collision, namely two strings that hash to the same value (although collisions are abundant).
In this work we study multi-collision resistant hash functions (MCRH), a natural relaxation of collision resistant hash functions in which it is difficult to find a t-way collision (i.e., t strings that hash to the same value) although finding (t-1)-way collisions could be easy. We show the following:
1. The existence of MCRH follows from the average case hardness of a variant of the Entropy Approximation problem. The goal in the entropy approximation problem (Goldreich, Sahai and Vadhan, CRYPTO '99) is to distinguish circuits whose output distribution has high entropy from those having low entropy.
2. MCRH imply the existence of constant-round statistically hiding (and computationally binding) commitment schemes. As a corollary, using a result of Haitner et al. (SICOMP, 2015), we obtain a black-box separation of MCRH from any one-way permutation.
Multi-Collision Resistance: A Paradigm for Keyless Hash Functions
We introduce a new notion of multi-collision resistance for keyless hash functions. This is a natural relaxation of collision resistance where it is hard to find multiple inputs with the same hash in the following sense. The number of colliding inputs that a polynomial-time non-uniform adversary can find is not much larger than its advice. We discuss potential candidates for this notion and study its applications.
Assuming the existence of such hash functions, we resolve the long-standing question of the round complexity of zero knowledge protocols --- we construct a 3-message zero knowledge argument against arbitrary polynomial-size non-uniform adversaries. We also improve the round complexity in several other central applications, including a 3-message succinct argument of knowledge for NP, a 4-message zero-knowledge proof, and a 5-message public-coin zero-knowledge argument. Our techniques can also be applied in the keyed setting, where we match the round complexity of known protocols while relaxing the underlying assumption from collision-resistance to keyed multi-collision resistance.
The core technical contribution behind our results is a domain extension transformation from multi-collision-resistant hash functions for a fixed input length to ones with an arbitrary input length and a local opening property. The transformation is based on a combination of classical domain extension techniques, together with new information-theoretic tools. In particular, we define and construct a new variant of list-recoverable codes, which may be of independent interest.
New Linear Attacks on Block Cipher GOST
Defined in the standard GOST 28147-89, GOST is a Soviet and Russian government standard symmetric-key block cipher. GOST has a 64-bit block size and a key length of 256 bits. It is a Feistel network of 32 rounds. In 2010, GOST was submitted to ISO 18033 to become a worldwide industrial encryption standard. GOST 28147-89 has also been published as the informational RFC 5830 with the IETF.
In this paper, we study linear attacks on GOST 28147-89. Prior to us, [Shorin-Jelezniakov-Gabidulin'2001] did some analysis of the linear approximation of GOST without giving any detailed results, and claimed that the complexity of a linear attack on GOST is higher than $2^{256}$ after 5 rounds. In our work, we show that this is not true. First, we give a detailed bias analysis of the GOST round function for the first time. We show that the largest bias is $2^{-7}$. Second, we propose the first known linear attacks on GOST. The recent idea of synthetic linear analysis [Lu-Vaudenay-Meier'2012] is then successfully applied to improve the bias of the $r$-round linear approximation of GOST. In summary, our attack on 8-round GOST recovers the key in time $2^{37}$ with $2^{50}$ known plaintexts in the single-key setting. For the 16-round GOST with the last 8 rounds using subkeys in reverse order, our distinguishing attack works in time $2^{85}$ using $2^{85}$ known plaintexts, in the plain multiple-key setting without the related-key assumption. That is, the plaintexts can be encrypted under an arbitrary number of keys, with each key encrypting an arbitrary number of plaintexts, as long as we have a total of $2^{85}$ known plaintexts. For the 32-round GOST with a slightly tweaked key schedule, i.e., assuming the last 16 rounds use subkeys in reverse order, our distinguishing attack works in time $2^{170.8}$, given $2^{170.8}$ known plaintexts, in the plain multiple-key setting without the related-key assumption. To the best of our knowledge, our distinguishing attacks are the first known distinguishers on block ciphers in the plain multiple-key setting without the usual related-key assumption. Finally, for the 32-round GOST with the original key schedule, our distinguisher works in time $2^{173.8}$, given $2^{173.8}$ known plaintexts, in the related-key setting. This is the fastest attack known so far, compared with the best attacks [Dinur-Dunkelman-Shamir'2012], [Courtois'2012] on the full 32-round GOST.
Collision Resistant Hashing for Paranoids: Dealing with Multiple Collisions
A collision-resistant hash (CRH) function is one that compresses its input, yet it is hard to find a collision, i.e., $x_1 \neq x_2$ such that $h(x_1) = h(x_2)$. Collision-resistant hash functions are among the more useful cryptographic primitives both in theory and in practice; two prominent applications are signature schemes and succinct zero-knowledge arguments.
In this work we consider a relaxation of the above requirement that we call Multi-CRH: a function for which it is hard to find $x_1, x_2, \ldots, x_k$ which are all distinct, yet $h(x_1) = h(x_2) = \cdots = h(x_k)$. We show that for some of the major applications of CRH functions it is possible to replace them by the weaker notion of a Multi-CRH, albeit at the price of adding interaction: we show a statistically hiding commitment scheme with succinct interaction (committing to $\mathsf{poly}(n)$ bits requires exchanging $O(n)$ bits) that can be opened locally (without revealing the full string). This in turn can be used to provide succinct arguments for any statement. On the other hand, we show black-box separation results from standard CRH and a hierarchy of such Multi-CRHs.
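As a toy illustration of the Multi-CRH notion (not part of the paper), the sketch below searches for a $k$-way collision in a deliberately truncated 16-bit hash; a Multi-CRH is precisely a function for which such a search is infeasible for the chosen $k$. The truncation and helper names are assumptions made for the demonstration.

```python
# Toy illustration of a k-way collision search against a deliberately weak 16-bit hash.
import hashlib
from collections import defaultdict

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()[:2]       # 16-bit toy hash, for demonstration only

def find_k_collision(k: int):
    buckets = defaultdict(list)
    counter = 0
    while True:
        x = counter.to_bytes(8, "big")
        buckets[h(x)].append(x)
        if len(buckets[h(x)]) == k:
            return buckets[h(x)]                # k distinct inputs with one common digest
        counter += 1

xs = find_k_collision(3)
assert len(set(xs)) == 3 and len({h(x) for x in xs}) == 1
```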
Authenticating compromisable storage systems
A service may be implemented over several servers, and those servers may become compromised by an attacker, e.g., through software vulnerabilities. When this happens, the service manager will remove the vulnerabilities and reinstate the server. Typically, this will involve regenerating the public key by which clients authenticate the service, and revoking the old one.
This paper presents a scheme which allows a storage service composed of several servers to create a group public key in a decentralised manner, and to maintain its security even when such compromises take place. By maintaining keys for the long term, we reduce the reliance on public-key certification. The storage servers periodically update the decryption secrets corresponding to the public key, in such a way that secrets gained by an attacker are rendered useless after an update takes place. An attacker would have to compromise all the servers within the short period between two updates in order to fully compromise the system.
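The abstract does not spell out the update mechanism, so the following is only a generic sketch of the proactive-refresh idea using additive shares; the modulus, the sharing structure, and all names are assumptions made for illustration and do not describe the paper's protocol.

```python
# Generic sketch of proactive share refreshing: servers hold additive shares of a
# secret mod Q and periodically re-randomize them, so shares stolen before an
# update no longer combine with shares stolen after it.
import secrets

Q = 2**255 - 19              # an arbitrary public modulus for the sketch

def refresh(shares):
    """Add a fresh sharing of zero to every server's share."""
    n = len(shares)
    deltas = [secrets.randbelow(Q) for _ in range(n - 1)]
    deltas.append((-sum(deltas)) % Q)           # deltas sum to zero mod Q
    return [(s + d) % Q for s, d in zip(shares, deltas)]

secret = secrets.randbelow(Q)
shares = [secrets.randbelow(Q) for _ in range(3)]
shares.append((secret - sum(shares)) % Q)       # 4 servers share the secret additively

new_shares = refresh(shares)
assert sum(new_shares) % Q == secret            # the shared secret is unchanged
assert new_shares != shares                     # old shares are useless after the update
```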
Cryptanalysis of Middle Lattice on the Overstretched NTRU Problem for General Modulus Polynomial
The overstretched NTRU problem, i.e., the NTRU problem with a modulus q of super-polynomial size in n, is one of the most important candidates for higher-level cryptography. Unfortunately, Albrecht et al. (CRYPTO 2016) and Cheon et al. (ANTS 2016) proposed so-called subfield attacks, which demonstrate that overstretched NTRU instances with a power-of-two cyclotomic modulus are not secure enough for the parameters used in the GGH multilinear map and the YASHE/LTV fully homomorphic encryption schemes. Moreover, Kirchner and Fouque (EUROCRYPT 2017) presented a new cryptanalysis of the overstretched NTRU problem over a general modulus. They showed that a lattice basis reduction algorithm on the middle lattice, first presented by Howgrave-Graham (CRYPTO 2007), experimentally recovers the secret parameters of the overstretched NTRU problem.
In this paper, we revisit the middle lattice technique on the overstretched NTRU problem. Our analysis shows that the optimized middle lattice technique has the same complexity as the subfield attacks, but threatens more general base rings with poly(n) expansion factor, as is common in proposed schemes such as the original GGH map, the YASHE scheme, and NTRU Prime rings. Our new analysis implies that cryptosystems related to the overstretched NTRU problem cannot be secured by changing the base ring. In addition, we present an extended (trace/norm) subfield attack for the power-of-two cyclotomic modulus, which is also an instance of the middle lattice technique. This extended subfield attack has a similar asymptotic complexity to the previous subfield attacks, but with a smaller constant in the exponent.
A multi-start heuristic for multiplicative depth minimization of boolean circuits
In this work we propose a multi-start heuristic which aims at minimizing the multiplicative depth of boolean circuits. The multiplicative depth objective is encountered in the field of homomorphic encryption, where ciphertext size depends on the number of consecutive multiplications. The heuristic is based on rewrite operators for multiplicative depth-2 paths. Although the proposed rewrite operators are simple and easy to understand, the experimental results show that they are rather powerful: the multiplicative depth of the benchmarked circuits was substantially improved. On average, the obtained multiplicative depths were more than 3 times lower than the initial ones. The proposed rewrite operators are not limited to boolean circuits and can also be used for arithmetic circuits.
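As a reminder of the objective being minimized, the sketch below computes the multiplicative depth of a small boolean circuit, counting only AND gates (XOR is free in the homomorphic cost model). The circuit representation is an assumption made for illustration, not the paper's data structure.

```python
# Minimal sketch: multiplicative depth = largest number of AND gates on any
# input-to-output path of a boolean circuit.

def mult_depth(circuit, gate):
    """circuit maps gate -> (op, inputs); names absent from circuit are inputs."""
    op, inputs = circuit.get(gate, ("in", []))
    if op == "in":
        return 0
    d = max(mult_depth(circuit, g) for g in inputs)
    return d + 1 if op == "and" else d          # only AND increases the depth

# Example: ((a AND b) XOR c) AND d  has multiplicative depth 2.
circuit = {
    "g1": ("and", ["a", "b"]),
    "g2": ("xor", ["g1", "c"]),
    "g3": ("and", ["g2", "d"]),
}
assert mult_depth(circuit, "g3") == 2
```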
On the Statistical Leak of the GGH13 Multilinear Map and some Variants
At EUROCRYPT 2013, Garg, Gentry and Halevi proposed a candidate construction (later referred to as GGH13) of a cryptographic multilinear map (MMap). Despite weaknesses uncovered by Hu and Jia (EUROCRYPT 2016), this candidate is still used for designing obfuscators.
The naive version of the GGH13 scheme was deemed susceptible to averaging attacks, i.e., it could suffer from a statistical leak (yet no precise attack was described). A variant was therefore devised, but it remains heuristic. Recently, to obtain MMaps with low noise and modulus, two variants of this countermeasure were developed by Döttling et al. (EPRINT:2016/599).
In this work, we propose a systematic study of this statistical leak for all these GGH13 variants. In particular, we confirm the weakness of the naive version of GGH13. We also show that, among the two variants proposed by Döttling et al., the so-called conservative method is not so effective: it leaks the same value as the unprotected method. Luckily, the leak is more noisy than in the unprotected method, making the straightforward attack unsuccessful. Additionally, we note that all the other methods also leak values correlated with secrets.
As a conclusion, we propose yet another countermeasure, for which this leak is made unrelated to all secrets. On our way, we also make explicit and tighten the hidden exponents in the size of the parameters, as an effort to assess and improve the efficiency of MMaps.
A New Public-Key Cryptosystem via Mersenne Numbers
In this work, we propose a new public-key cryptosystem whose security is based on the computational intractability of the following problem: given a Mersenne number $p = 2^n - 1$, where $n$ is a prime, a positive integer $h$, and two $n$-bit integers $T, R$, decide whether there exist $n$-bit integers $F, G$, each of Hamming weight less than $h$, such that $T = F \cdot R + G \bmod p$.
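For concreteness, the sketch below generates a "yes" instance of this decision problem and checks the defining relation. The parameter choice and the naive low-weight sampler are illustrative assumptions, not the paper's key-generation procedure.

```python
# Sketch of the underlying relation: T = F*R + G mod p with p = 2^n - 1 a Mersenne
# prime and F, G of low Hamming weight.
import random

def sparse(n: int, weight: int) -> int:
    """Random n-bit integer of the given Hamming weight (illustrative sampler)."""
    bits = random.sample(range(n), weight)
    return sum(1 << b for b in bits)

n, h = 521, 10                  # 2^521 - 1 is a Mersenne prime; h bounds the weight
p = 2**n - 1

F, G = sparse(n, h - 1), sparse(n, h - 1)   # weight strictly less than h
R = random.randrange(p)
T = (F * R + G) % p                         # a "yes" instance of the decision problem

assert (T - F * R - G) % p == 0
assert bin(F).count("1") < h and bin(G).count("1") < h
```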
Sharper Bounds in Lattice-Based Cryptography using the Rényi Divergence
The Rényi divergence is a measure of divergence between distributions. It has recently found several applications in lattice-based cryptography. The contribution of this paper is twofold.
First, we give theoretical results which render it more efficient and easier to use. This is done by providing two lemmas that give tight bounds in very common situations: for distributions that are tailcut or have a bounded relative error. We then connect the Rényi divergence to the max-log distance. This allows the Rényi divergence to indirectly benefit from all the advantages of a distance.
Second, we apply our new results to five practical use cases. This allows us to claim 256 bits of security for a floating-point precision of 53 bits, in cases that until now either required more than 150 bits of precision or were limited to 100 bits of security: rejection sampling, trapdoor sampling (61 bits in this case) and a new sampler by Micciancio and Walter. We also propose a new and compact approach for table-based sampling, and we squeeze the standard deviation of trapdoor samplers by a factor that provides a gain of 30 bits of security in practice.
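For reference, the sketch below computes the Rényi divergence of order $a$ between two discrete distributions, the quantity whose bounds the paper sharpens; the example distributions are arbitrary.

```python
# Minimal sketch: Rényi divergence of order a > 1 between discrete distributions,
#   R_a(P || Q) = ( sum_x P(x)^a / Q(x)^(a-1) )^(1/(a-1)).

def renyi_divergence(P, Q, a: float) -> float:
    s = sum(P[x] ** a / Q[x] ** (a - 1) for x in P)
    return s ** (1.0 / (a - 1))

# Example: a distribution with a small relative error to the uniform one.
Q = {x: 1 / 4 for x in range(4)}
P = {0: 0.24, 1: 0.26, 2: 0.25, 3: 0.25}
print(renyi_divergence(P, Q, a=2))    # close to 1 when P is close to Q
```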
Privacy-Preserving Aggregation of Time-Series Data with Public Verifiability from Simple Assumptions
Aggregator oblivious encryption was proposed by Shi et al. (NDSS 2011), where an aggregator can compute an aggregated sum of data and is unable to learn anything else (aggregator obliviousness). Since the aggregator does not learn individual data that may reveal users' habits and behaviors, several applications, such as privacy-preserving smart metering, have been considered. In this paper, we propose aggregator oblivious encryption schemes with public verifiability, where the aggregator is required to generate a proof of the aggregated sum and anyone can verify whether the sum has been correctly computed by the aggregator. Though Leontiadis et al. (CANS 2015) considered verifiability, their scheme requires an interactive complexity assumption to provide the unforgeability of the proof. Our schemes are proven unforgeable under a static and simple assumption (a variant of the Computational Diffie-Hellman assumption). Moreover, our schemes inherit the tightness of the reduction of the Benhamouda et al. scheme (ACM TISSEC 2016) for proving aggregator obliviousness. This tight reduction allows us to employ elliptic curves of a smaller order and leads to efficient implementation.
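The following functionality-only sketch, in the spirit of the Shi et al. style of scheme that the paper builds on, shows how the per-user blinding factors cancel so that the aggregator learns only the sum. The toy group, the hash-to-group map, and all parameters are illustrative assumptions, and the paper's public-verifiability mechanism is not sketched.

```python
# Functionality-only sketch: user i encrypts x_i at time t as c_i = g^{x_i} * H(t)^{sk_i},
# the aggregator holds sk_0 = -(sk_1 + ... + sk_n) and recovers only the sum of the x_i.
import hashlib, secrets

p = 2**127 - 1                       # toy prime modulus, illustration only
g = 3

def hash_to_group(t: bytes) -> int:
    # Toy hash-to-group; a real instantiation hashes directly into the group.
    return pow(g, int.from_bytes(hashlib.sha256(t).digest(), "big") % (p - 1), p)

n = 5
user_keys = [secrets.randbelow(p - 1) for _ in range(n)]
agg_key = (-sum(user_keys)) % (p - 1)          # sk_0, so all keys sum to 0 mod p-1

t = b"period-1"
h_t = hash_to_group(t)
data = [secrets.randbelow(100) for _ in range(n)]
ciphertexts = [pow(g, x, p) * pow(h_t, sk, p) % p for x, sk in zip(data, user_keys)]

# Aggregation: the blinding factors H(t)^{sk_i} cancel, leaving g^{sum x_i}.
V = pow(h_t, agg_key, p)
for c in ciphertexts:
    V = V * c % p
total = next(s for s in range(n * 100) if pow(g, s, p) == V)   # small discrete log
assert total == sum(data)
```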
Refined Probability of Differential Characteristics Including Dependency Between Multiple Rounds
The current paper studies the probability of differential characteristics for an unkeyed (or fixed-key) construction. Most notably, it focuses on the gap between two probabilities of differential characteristics: the probability under the independent S-box assumption, $p_{ind}$, and the exact probability, $p_{exact}$. It turns out that $p_{exact}$ is larger than $p_{ind}$ in a Feistel network with an S-box-based inner function. The mechanism of this gap is then theoretically analyzed. The gap is derived from the interaction of S-boxes in three rounds, and it depends on the size and choice of the S-box. In particular, the gap can never be zero when the S-box is bigger than six bits. To demonstrate the power of this improvement, a related-key differential characteristic is proposed against the lightweight block cipher RoadRunneR. For the 128-bit key version, $p_{ind}$ of $2^{-48}$ is improved to $p_{exact}$ of $2^{-43}$. For the 80-bit key version, $p_{ind}$ of $2^{-68}$ is improved to $p_{exact}$ of $2^{-62}$. The analysis is further extended to an SPN with an almost-MDS binary matrix in the core primitive of the authenticated encryption scheme Minalpher: $p_{ind}$ of $2^{-128}$ is improved to $p_{exact}$ of $2^{-96}$, which allows the attack to be extended by two rounds.
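The gap between the two probabilities can be seen even on a toy example that is unrelated to the paper's targets: for an unkeyed two-round S-box iteration, the probability of a characteristic under the independent-S-box assumption need not match its exact probability. The S-box below (PRESENT's) and the setup are illustrative assumptions, not the paper's Feistel setting.

```python
# Toy illustration: p_ind (product of per-round DDT probabilities) vs p_exact
# (exhaustive count) for a characteristic d0 -> d1 -> d2 over two unkeyed S-box rounds.

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]     # PRESENT S-box, used as an example

def ddt_prob(din, dout):
    """One-round probability Pr_x[ S(x ^ din) ^ S(x) = dout ]."""
    return sum(SBOX[x ^ din] ^ SBOX[x] == dout for x in range(16)) / 16

def exact_prob(d0, d1, d2):
    """Exact probability that the characteristic d0 -> d1 -> d2 holds over two rounds."""
    hits = 0
    for x in range(16):
        y, y2 = SBOX[x], SBOX[x ^ d0]
        if y ^ y2 == d1 and SBOX[y] ^ SBOX[y2] == d2:
            hits += 1
    return hits / 16

# Pick the characteristic with the largest p_ind and compare against p_exact.
a, b, c, p_ind = max(((a, b, c, ddt_prob(a, b) * ddt_prob(b, c))
                      for a in range(1, 16) for b in range(1, 16) for c in range(1, 16)),
                     key=lambda t: t[3])
print("p_ind  =", p_ind)
print("p_exact =", exact_prob(a, b, c))     # the two values need not coincide
```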
Constrained Keys for Invertible Pseudorandom Functions
A constrained pseudorandom function (PRF) is a secure PRF for which one can generate constrained keys that can only be used to evaluate the PRF on a subset of the domain. Constrained PRFs are used widely, most notably in applications of indistinguishability obfuscation (iO). In this paper we show how to constrain an invertible PRF (IPF), which is significantly harder. An IPF is a secure injective PRF accompanied by an inversion algorithm. A constrained key for an IPF can only be used to evaluate the IPF on a subset S of the domain, and to invert the IPF on the image of S. We first define the notion of a constrained IPF and then give two main constructions: one for puncturing an IPF and the other for (single-key) circuit constraints. Both constructions rely on recent work on private constrained PRFs. We also show that constrained pseudorandom permutations for many classes of constraints are impossible under our definition.
Forward-Security under Continual Leakage
Current signature and encryption schemes secure against continual leakage fail completely if the key in any time period is fully exposed. We suggest forward security as a second line of defense, so that in the event of full exposure of the current secret key, at least the uses of keys from prior time periods remain secure, a big benefit in practice. (For example, if the signer is a certificate authority, full exposure of the current secret key would not invalidate certificates signed under prior keys.) We provide definitions for signatures and encryption that are forward-secure under continual leakage. Achieving these definitions turns out to be challenging, and we make initial progress with some constructions and transforms.
Security of Even--Mansour Ciphers under Key-Dependent Messages
The iterated Even--Mansour (EM) ciphers form the basis of many block cipher designs. Several results have established their security in the CPA/CCA models, under related-key attacks, and in the indifferentiability framework. In this work, we study the Even--Mansour ciphers under key-dependent message (KDM) attacks. KDM security is particularly relevant for block ciphers since non-expanding mechanisms are convenient in settings such as full-disk encryption (where various forms of key-dependency might exist). We formalize the folklore result that the ideal cipher is KDM secure. We then show that EM ciphers meet varying levels of KDM security depending on the number of rounds and permutations used. One-round EM achieves some form of KDM security, but this excludes security against offsets of keys. With two rounds we obtain KDM security against offsets, and using different round permutations we achieve KDM security against all permutation-independent claw-free functions. As a contribution of independent interest, we present a modular framework that can facilitate the security treatment of symmetric constructions in models such as RKA or KDM that allow for correlated inputs.
Insuperability of the Standard Versus Ideal Model Gap for Tweakable Blockcipher Security
Two types of tweakable blockciphers based on classical blockciphers have been presented over the last years: non-tweak-rekeyable and tweak-rekeyable, depending on whether the tweak may influence the key input to the underlying blockcipher. In the former direction, the best possible security is conjectured to be $2^{\sigma n/(\sigma+1)}$, where $n$ is the size of the blockcipher and $\sigma$ is the number of blockcipher calls. In the latter direction, Mennink and Wang et al. presented optimally secure schemes, but only in the ideal cipher model. We investigate the possibility of constructing a tweak-rekeyable cipher that achieves optimal security in the standard cipher model. As a first step, we note that all standard-model security results in the literature implicitly rely on a generic standard-to-ideal transformation that replaces all keyed blockcipher calls by random secret permutations, at the cost of the security of the blockcipher. Then, we prove that if this proof technique is adopted, tweak-rekeying will not help in achieving optimal security: if $2^{\sigma n/(\sigma+1)}$ is the best one can get without tweak-rekeying, optimal $2^n$ provable security with tweak-rekeying is impossible.
Encrypted Davies-Meyer and Its Dual: Towards Optimal Security Using Mirror Theory
At CRYPTO 2016, Cogliati and Seurin introduced the Encrypted Davies-Meyer construction, $p_2(p_1(x) \oplus x)$ for two $n$-bit permutations $p_1,p_2$, and proved security up to $2^{2n/3}$ queries. We present an improved security analysis up to $2^n/(67n)$ queries. Additionally, we introduce the dual of the Encrypted Davies-Meyer construction, $p_2(p_1(x)) \oplus p_1(x)$, and prove even tighter security for this construction: $2^n/67$ queries. We finally demonstrate that the analysis neatly generalizes to prove almost optimal security of the Encrypted Wegman-Carter with Davies-Meyer MAC construction. Central to our analysis is a modernization of Patarin's mirror theorem and an exposition of how it relates to fundamental cryptographic problems.
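For concreteness, the two constructions can be sketched with random permutations on small blocks; this is an illustration of the definitions only, not of the paper's analysis, and the block size and seed are arbitrary.

```python
# Small sketch: Encrypted Davies-Meyer, EDM(x) = p2(p1(x) ^ x), and its dual,
# EDMD(x) = p2(p1(x)) ^ p1(x), with random 8-bit permutations as p1, p2.
import random

N = 8
random.seed(1)
p1 = random.sample(range(2**N), 2**N)      # random permutation as a lookup table
p2 = random.sample(range(2**N), 2**N)

def edm(x):            # Encrypted Davies-Meyer
    return p2[p1[x] ^ x]

def edm_dual(x):       # its dual
    return p2[p1[x]] ^ p1[x]

# Both map n bits to n bits but are not permutations in general; the paper bounds
# how long they remain indistinguishable from a uniform random function.
outputs = {edm(x) for x in range(2**N)}
print(len(outputs), "distinct EDM outputs out of", 2**N)
```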
A Unified Framework for Secure Search Over Encrypted Cloud Data
This paper presents a unified framework that supports different types of privacy-preserving search queries over encrypted cloud data. In the framework, users can perform any of the multi-keyword search, range search and k-nearest neighbor search operations in a privacy-preserving manner. All three types of queries are transformed into predicate-based search leveraging bucketization, locality sensitive hashing and homomorphic encryption techniques. The proposed framework is implemented using Hadoop MapReduce, and its efficiency and accuracy are evaluated using publicly available real data sets. The implementation results show that the proposed framework can effectively be used in moderate sized data sets and it is scalable for much larger data sets provided that the number of computers in the Hadoop cluster is increased. To the best of our knowledge, the proposed framework is the first privacy-preserving solution, in which three different types of search queries are effectively applied over encrypted data.
Total Break of the Fully Homomorphic Multivariate Encryption Scheme of 2017/458: Decryption can not be of low degree
In this paper we show how to totally break the fully homomorphic encryption scheme of ePrint report 2017/458.
On the Relation Between SIM and IND-RoR Security Models for PAKEs
Password-based Authenticated Key-Exchange (PAKE) protocols allow users, who need only to share a password, to compute a high-entropy shared session key despite passwords being taken from a dictionary. Security models for PAKE protocols aim to capture the desired security properties that such protocols must satisfy when executed in the presence of an active adversary. They are usually classified into i) indistinguishability-based (IND-based) or ii) simulation-based (SIM-based). The relation between these two security notions is unclear and mentioned as a gap in the literature. In this work, we prove that SIM-BMP security from Boyko et al. (EUROCRYPT 2000) implies IND-RoR security from Abdalla et al. (PKC 2005) and that IND-RoR security is equivalent to a slightly modified version of SIM-BMP security. We also investigate whether IND-RoR security implies (unmodified) SIM-BMP security.
Short CCA-Secure Attribute-Based Encryption
Chosen-ciphertext attacks are a typical threat to public-key encryption schemes. We propose a technique for individually converting an attribute-based encryption (ABE) scheme that is secure against chosen-plaintext attacks into an ABE scheme that is secure against chosen-ciphertext attacks. Our technique is helpful when the Diffie-Hellman tuple to be verified is in the target group of a bilinear map. The employed technique, the Twin Diffie-Hellman Trapdoor Test of Cash, Kiltz and Shoup, expands the secret-key length and the decryption cost by a factor of four, while the public-key and ciphertext lengths and the encryption cost remain almost the same.
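The trapdoor test at the heart of this conversion can be sketched as follows; the toy prime-order subgroup (p = 1019, q = 509) and the variable names are illustrative assumptions chosen so the sketch runs quickly.

```python
# Sketch of the Cash-Kiltz-Shoup twin Diffie-Hellman trapdoor test: X2 is set up so
# that "Z1 = Y^x1 and Z2 = Y^x2" can be checked without knowing x2 (and, for the
# tester, even without using x1), failing only with probability 1/q.
import secrets

p, q, g = 1019, 509, 4                      # g generates the subgroup of prime order q

x1 = secrets.randbelow(q)
X1 = pow(g, x1, p)
r, s = secrets.randbelow(q), secrets.randbelow(q)
X2 = pow(g, s, p) * pow(X1, q - r, p) % p   # X2 = g^s * X1^(-r), so implicitly x2 = s - r*x1

def trapdoor_test(Y, Z1, Z2):
    """Accept iff Z1 = Y^x1 and Z2 = Y^x2, except with probability 1/q."""
    return pow(Z1, r, p) * Z2 % p == pow(Y, s, p)

y = secrets.randbelow(q)
Y = pow(g, y, p)
x2 = (s - r * x1) % q
assert trapdoor_test(Y, pow(Y, x1, p), pow(Y, x2, p))                # correct pair accepted
assert not trapdoor_test(Y, pow(Y, x1, p), pow(Y, x2, p) * g % p)    # tampered pair rejected
```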
Why Your Encrypted Database Is Not Secure
Encrypted databases, a popular approach to protecting data from compromised database management systems (DBMS’s), use abstract threat models that capture neither realistic databases, nor realistic attack scenarios. In particular, the “snapshot attacker” model used to support the security claims for many encrypted databases does not reflect the information about past queries available in any snapshot attack on an actual DBMS.
We demonstrate how this gap between theory and reality causes encrypted databases to fail to achieve their “provable security” guarantees.
Access Control Encryption for General Policies from Standard Assumptions
Functional encryption enables fine-grained access to encrypted data. In many scenarios, however, it is important to control not only what users are allowed to read (as provided by traditional functional encryption), but also what users are allowed to send. Recently, Damgård et al. (TCC 2016) introduced a new cryptographic framework called access control encryption (ACE) for restricting information flow within a system in terms of both what users can read as well as what users can write. While a number of access control encryption schemes exist, they either rely on strong assumptions such as indistinguishability obfuscation or are restricted to simple families of access control policies.
In this work, we give the first ACE scheme for arbitrary policies from standard assumptions. Our construction is generic and can be built from the combination of a digital signature scheme, a predicate encryption scheme, and a (single-key) functional encryption scheme that supports randomized functionalities. All of these primitives can be instantiated from standard assumptions in the plain model and therefore, we obtain the first ACE scheme capable of supporting general policies from standard assumptions. One possible instantiation of our construction relies upon standard number-theoretic assumptions (namely, the DDH and RSA assumptions) and standard lattice assumptions (namely, LWE). Finally, we conclude by introducing several extensions to the ACE framework to support dynamic and more fine-grained access control policies.
Tweakable Blockciphers for Efficient Authenticated Encryptions with Beyond the Birthday-Bound Security
Modular design via a tweakable blockcipher (TBC) offers efficient authenticated encryption (AE) schemes (with associated data) that call a blockcipher once for each data block (of associated data or plaintext). However, the existing efficient blockcipher-based TBCs are secure only up to the birthday bound, assuming the underlying keyed blockcipher is a secure strong pseudorandom permutation. Existing blockcipher-based AE schemes with beyond-birthday-bound (BBB) security are not efficient: they call a blockcipher twice or more for each data block.
In this paper, we present a TBC, XKX, that offers efficient blockcipher-based AE schemes with BBB security, by combining with efficient TBC-based AE schemes such as $\Theta$CB and $\mathbb{OTR}$. XKX is a combination of two TBCs: Minematsu's TBC and Liskov et al.'s TBC. In the XKX-based AE schemes, a nonce and a counter are taken as the tweak; a nonce-dependent blockcipher key is generated using a pseudorandom function $F$ (from Minematsu); the counter is input to an almost-XOR-universal hash function, and the hash value is XORed with the input and output blocks of a blockcipher keyed with the nonce-dependent key (from Liskov et al.). For each query to the AE scheme, once the nonce-dependent key has been generated it can be reused, so the blockcipher is called only once for each data block. We prove that the security bounds of the XKX-based AE schemes become roughly $\ell^2 q/2^n$, where $q$ is the number of queries to the AE scheme, $n$ is the blockcipher size, and $\ell$ is the number of blockcipher calls in one AE evaluation. Regarding the function $F$, we present two blockcipher-based instantiations: the concatenation of blockcipher calls, $F^{(1)}$, and the XOR of blockcipher calls, $F^{(2)}$, where $F^{(i)}$ calls the blockcipher $i+1$ times. By the PRF/PRP switch, the security bounds of the XKX-based AE schemes with $F^{(1)}$ become roughly $\ell^2 q/2^n + q^2/2^n$; thus if $\ell \ll 2^{n/2}$ and $q \ll 2^{n/2}$, these achieve BBB security. By the XOR construction, the security bounds of the XKX-based AE schemes with $F^{(2)}$ become roughly $\ell^2 q/2^n + q/2^n$; thus if $\ell \ll 2^{n/2}$, these achieve BBB security.
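A rough structural sketch of the tweak handling described above is given below. The PRF, the universal hash, and the "blockcipher" are hashlib-based toy stand-ins (a real XKX requires an invertible blockcipher and the paper's instantiations of $F$ and the hash), so this conveys only the data flow, not the construction itself.

```python
# Structural sketch of XKX-style tweak handling: the nonce selects a blockcipher key
# via a PRF F, and a counter-derived hash value masks the blockcipher input and output.
import hashlib

def prf_F(master_key: bytes, nonce: bytes) -> bytes:
    """Nonce-dependent blockcipher key (toy PRF stand-in)."""
    return hashlib.sha256(b"F" + master_key + nonce).digest()

def axu_hash(hash_key: bytes, counter: int) -> int:
    """Counter-dependent mask (toy stand-in for an almost-XOR-universal hash)."""
    d = hashlib.sha256(b"H" + hash_key + counter.to_bytes(8, "big")).digest()
    return int.from_bytes(d[:8], "big")

def toy_blockcipher(key: bytes, block: int) -> int:
    """Placeholder 64-bit 'blockcipher' (not invertible; structure only)."""
    d = hashlib.sha256(b"E" + key + block.to_bytes(8, "big")).digest()
    return int.from_bytes(d[:8], "big")

def xkx_encrypt(master_key, hash_key, nonce, message_blocks):
    k_nonce = prf_F(master_key, nonce)              # derived once per nonce, then reused
    out = []
    for counter, m in enumerate(message_blocks):
        mask = axu_hash(hash_key, counter)          # fresh mask per block position
        out.append(toy_blockcipher(k_nonce, m ^ mask) ^ mask)
    return out

print([hex(c) for c in xkx_encrypt(b"k" * 16, b"h" * 16, b"nonce-1", [0x0123456789ABCDEF, 0x0])])
```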
Lelantos: A Blockchain-based Anonymous Physical Delivery System
Real-world physical shopping offers customers the privilege of maintaining their privacy by giving them the option of using cash, thus providing no personal information such as their names and home addresses. On the contrary, electronic shopping mandates the use of all sorts of personally identifiable information for both billing and shipping purposes. Cryptocurrencies such as Bitcoin have stimulated the growth of private billing by enabling pseudonymous payments. However, the anonymous delivery of the purchased physical goods is still an open research problem.
In this work, we present a blockchain-based physical delivery system called Lelantos that within a realistic threat model, offers customer anonymity, fair exchange and merchant-customer unlinkability. Our system is inspired by the onion routing techniques which are used to achieve anonymous message delivery. Additionally, Lelantos relies on the decentralization and pseudonymity of the blockchain to enable pseudonymity that is hard to compromise, and the distributed consensus mechanisms provided by smart contracts to enforce fair irrefutable transactions between distrustful contractual parties.
On the Structure of Unconditional UC Hybrid Protocols
We study the problem of secure two-party computation in the presence of a trusted setup. If there is an unconditionally UC-secure protocol for $f$ that makes use of calls to an ideal $g$, then we say that $f$ reduces to $g$ (and write $f \sqsubseteq g$).
Some $g$ are complete in the sense that all functions reduce to $g$. However, almost nothing is known about the power of an incomplete $g$ in this setting. We shed light on this gap by showing a characterization of $f \sqsubseteq g$ for incomplete $g$.
Very roughly speaking, we show that $f$ reduces to $g$ if and only if it does so by the simplest possible protocol: one that makes a single call to ideal $g$ and uses no further communication. Furthermore, such simple protocols can be characterized by a natural combinatorial condition on $f$ and $g$.
Looking more closely, our characterization applies only to a very wide class of $f$, and only for protocols that are deterministic or logarithmic-round. However, we give concrete examples showing that both of these limitations are inherent to the characterization itself. Functions not covered by our characterization exhibit qualitatively different properties. Likewise, randomized, superlogarithmic-round protocols are qualitatively more powerful than deterministic or logarithmic-round ones.
Proving Resistance against Invariant Attacks: How to Choose the Round Constants
Many lightweight block ciphers apply a very simple key schedule in which the round keys only differ by addition of a round-specific constant. Generally, there is not much theory on how to choose appropriate constants. In fact, several of those schemes were recently broken using invariant attacks, i.e., invariant subspace or nonlinear invariant attacks. This work analyzes the resistance of such ciphers against invariant attacks and reveals the precise mathematical properties that render those attacks applicable. As a first practical consequence, we prove that some ciphers including Prince, Skinny-64 and Mantis7 are not vulnerable to invariant attacks. Also, we show that the invariant factors of the linear layer have a major impact on the resistance against those attacks. Most notably, if the number of invariant factors of the linear layer is small (e.g., if its minimal polynomial has a high degree), we can easily find round constants which guarantee the resistance to all types of invariant attacks, independently of the choice of the S-box layer. We also explain how to construct optimal round constants for a given, but arbitrary, linear layer.