All papers in 2018 (Page 8 of 1249 results)
Adaptive Garbled RAM from Laconic Oblivious Transfer
We give a construction of an adaptive garbled RAM scheme. In the adaptive setting, a client first garbles a ``large'' persistent database which is stored on a server. Next, the client can provide multiple adaptively and adversarially chosen garbled RAM programs that execute on and modify the stored database arbitrarily. The garbled database and the garbled program should reveal nothing more than the running time and the output of the computation. Furthermore, the sizes of the garbled database and the garbled program grow only linearly in the size of the database and the running time of the executed program, respectively (up to polylogarithmic factors). The security of our construction is based on the assumption that laconic oblivious transfer (Cho et al., CRYPTO 2017) exists. Previously, such adaptive garbled RAM constructions were known only from indistinguishability obfuscation or in the random oracle model. As an additional application, we note that this work yields the first constant-round secure computation protocol for persistent RAM programs in the malicious setting from standard assumptions. Prior works did not support persistence in the malicious setting.
From Laconic Zero-Knowledge to Public-Key Cryptography
Since its inception, public-key encryption (PKE) has been one of the main cornerstones of cryptography. A central goal in cryptographic research is to understand the foundations of public-key encryption and in particular, base its existence on a natural and generic complexity-theoretic assumption. An intriguing candidate for such an assumption is the existence of a cryptographically hard language in the intersection of NP and SZK.
In this work we prove that public-key encryption can be based on the foregoing assumption, as long as the (honest) prover in the zero-knowledge protocol is efficient and laconic. That is, messages that the prover sends should be efficiently computable (given the NP witness) and short (i.e., of sufficiently sub-logarithmic length). Actually, our result is stronger and only requires the protocol to be zero-knowledge for an honest verifier and sound against computationally bounded cheating provers.
Languages in NP with such laconic zero-knowledge protocols are known from a variety of computational assumptions (e.g., Quadratic Residuosity, Decisional Diffie-Hellman, Learning with Errors, etc.). Thus, our main result can also be viewed as giving a unifying framework for constructing PKE which, in particular, captures many of the assumptions that were already known to yield PKE.
We also show several extensions of our result. First, that a certain weakening of our assumption on laconic zero-knowledge is actually equivalent to PKE, thereby giving a complexity-theoretic characterization of PKE. Second, a mild strengthening of our assumption also yields a (2-message) oblivious transfer protocol.
Indifferentiable Authenticated Encryption
We study Authenticated Encryption with Associated Data (AEAD) from the viewpoint of composition in arbitrary (single-stage) environments.
We use the indifferentiability framework to formalize the intuition that a good AEAD scheme should have random ciphertexts subject to decryptability.
Within this framework, we can then apply the indifferentiability composition theorem to show that such schemes offer extra safeguards wherever the relevant security properties are not known, or cannot be predicted in advance, as in general-purpose crypto libraries and standards.
We show, on the negative side, that generic composition (in many of its configurations) and well-known classical and recent schemes fail to achieve indifferentiability. On the positive side, we give a provably indifferentiable Feistel-based construction, which reduces the round complexity from at least $6$, needed for blockciphers, to only $3$ for encryption.
This result is not too far off the theoretical optimum, as we give a lower bound that rules out the indifferentiability of any construction with fewer than $2$ rounds.
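To make the round-complexity comparison concrete, here is a minimal sketch of a balanced 3-round Feistel network in Python. The hash-based round function and all names are our own stand-ins for illustration, not the paper's actual indifferentiable construction (which builds its rounds from an ideal primitive and also handles associated data and tags):

    import hashlib

    def _xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def _round(key: bytes, i: int, data: bytes, out_len: int) -> bytes:
        # Stand-in round function: a hash keyed by (key, round index).
        out = b""
        while len(out) < out_len:
            out += hashlib.sha256(key + bytes([i, len(out) % 256]) + data).digest()
        return out[:out_len]

    def feistel3_encrypt(key: bytes, left: bytes, right: bytes):
        # Three Feistel rounds: (L, R) -> (R, L xor F_i(R)).
        for i in range(3):
            left, right = right, _xor(left, _round(key, i, right, len(left)))
        return left, right

    def feistel3_decrypt(key: bytes, left: bytes, right: bytes):
        # Invert the rounds in reverse order.
        for i in reversed(range(3)):
            left, right = _xor(right, _round(key, i, left, len(right))), left
        return left, right

    # Round-trip check on equal-size halves.
    k, L, R = b"k" * 16, b"A" * 16, b"B" * 16
    assert feistel3_decrypt(k, *feistel3_encrypt(k, L, R)) == (L, R)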
Quantum Lattice Enumeration and Tweaking Discrete Pruning
Enumeration is a fundamental lattice algorithm used in challenge records. We show how to speed up enumeration on a quantum computer, which affects the security estimates of several lattice-based submissions to NIST: if $T$ is the number of operations of enumeration, our quantum enumeration runs in roughly $\sqrt{T}$ operations. This applies to the two most efficient forms of enumeration known in the extreme pruning setting: cylinder pruning but also discrete pruning introduced at Eurocrypt '17. Our results are based on recent quantum tree algorithms by Montanaro and Ambainis-Kokainis. The discrete pruning case requires a crucial tweak: we modify the preprocessing so that the running time can be rigorously proved to be essentially optimal, which was the main open problem in discrete pruning. We also introduce another tweak to solve the more general problem of finding close lattice vectors.
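As a back-of-the-envelope illustration of what the quadratic speed-up means for concrete security estimates (our arithmetic, not a claim from the paper): an enumeration attack costing $T = 2^{128}$ classical operations would cost roughly $\sqrt{T} = 2^{64}$ quantum operations, i.e.,
\[ T = 2^{b} \;\Longrightarrow\; \sqrt{T} = 2^{b/2}, \]
ignoring polynomial overheads and the cost of quantum hardware.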
Fully Automated Differential Fault Analysis on Software Implementations of Block Ciphers
Differential Fault Analysis (DFA) is considered the most popular fault analysis method. While there are techniques that provide some degree of fault analysis automation at the cipher level, software implementations exhibit new vulnerabilities that cannot be found by observing the cipher design specification alone.
This work bridges the gap by providing a fully automated way to carry out DFA on assembly implementations of symmetric block ciphers. We use a customized data flow graph to represent the program and develop a novel fault analysis methodology to capture the program behavior under faults. We establish an effective description of DFA as constraints that are passed to an SMT solver. We create a tool that takes assembly code as input, analyzes the dependencies among instructions, automatically attacks vulnerable instructions using an SMT solver, and outputs the attack details that recover the last round key (and possibly the earlier keys). We support our design with evaluations on the lightweight ciphers SIMON, SPECK, and PRIDE, and a current NIST standard, AES. By automated assembly analysis, we were able to find new efficient DFA attacks on SPECK and PRIDE that exploit implementation-specific vulnerabilities, and to recover previously published DFA attacks on SIMON and AES. Moreover, we present a novel DFA on the multiplication operation, which has never been shown for symmetric block ciphers before. Our experimental evaluation also shows reasonable execution times that scale to current cipher designs and easily outclass manual analysis.
Moreover, we present a method to check countermeasure-protected implementations in a way that helps implementers decide how many rounds should be protected. We note that this is the first work that automatically carries out DFA on cipher implementations without any plaintext or ciphertext information and can therefore be applied to any input data to the cipher.
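As a rough illustration of the constraint-solving step, here is a toy DFA instance in Python using the Z3 SMT solver. Everything in it (the one-round toy cipher $c = S[x] \oplus k$ and the single-bit fault) is a hypothetical stand-in chosen for brevity, not the authors' tool or target ciphers:

    from z3 import BitVec, BitVecVal, If, Solver, sat

    # 4-bit S-box (the PRESENT S-box, used here only as an example table).
    SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
            0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

    def sbox(x):
        # Encode the table lookup as a chain of If expressions.
        expr = BitVecVal(SBOX[15], 4)
        for i in range(14, -1, -1):
            expr = If(x == i, BitVecVal(SBOX[i], 4), expr)
        return expr

    # Faulty-device simulation: toy last round is c = S[x] ^ k.
    x_true, k_true = 0xA, 0x6
    c_ok = SBOX[x_true] ^ k_true          # correct ciphertext
    c_flt = SBOX[x_true ^ 1] ^ k_true     # fault flips bit 0 of the S-box input

    # DFA as constraints: unknown state x and key k must explain both outputs.
    x, k = BitVec("x", 4), BitVec("k", 4)
    s = Solver()
    s.add(sbox(x) ^ k == c_ok)
    s.add(sbox(x ^ 1) ^ k == c_flt)

    # Enumerate all key candidates consistent with the fault observation.
    candidates = set()
    while s.check() == sat:
        m = s.model()
        candidates.add(m[k].as_long())
        s.add(k != m[k])
    print(sorted(candidates))  # the true key 0x6 is among the candidates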
Pseudorandom Quantum States
We propose the concept of pseudorandom quantum states, which appear random to any quantum polynomial-time adversary. This offers a computational approximation to perfect randomness on quantum states (analogous to a cryptographic pseudorandom generator), as opposed to statistical notions of quantum pseudorandomness in the literature, such as quantum t-designs (analogous to t-wise independent distributions).
Under the assumption that quantum-secure one-way functions exist, we present efficient constructions of pseudorandom states, showing the feasibility of our definition. We then prove several basic properties of any pseudorandom states, which further back up our definition. First, we show a cryptographic no-cloning theorem: no efficient quantum algorithm can create additional copies from polynomially many copies of pseudorandom states. Second, as expected for random quantum states, we show that pseudorandom quantum states are highly entangled on average. Finally, as a main application, we prove that any family of pseudorandom states naturally gives rise to a private-key quantum money scheme, thanks to our cryptographic no-cloning theorem.
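For reference, the indistinguishability requirement can be phrased roughly as follows (our paraphrase of the standard formulation): a keyed family $\{|\phi_k\rangle\}_{k \in \mathcal{K}}$ of efficiently generatable states is pseudorandom if for every polynomial $t(\cdot)$ and every quantum polynomial-time adversary $\mathcal{A}$,
\[ \Big| \Pr_{k \leftarrow \mathcal{K}}\big[\mathcal{A}\big(|\phi_k\rangle^{\otimes t}\big)=1\big] - \Pr_{|\psi\rangle \leftarrow \mu}\big[\mathcal{A}\big(|\psi\rangle^{\otimes t}\big)=1\big] \Big| \le \mathrm{negl}(\lambda), \]
where $\mu$ is the Haar measure on the state space.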
Practical and Tightly-Secure Digital Signatures and Authenticated Key Exchange
Tight security is increasingly gaining importance in real-world cryptography, as it allows cryptographic parameters to be chosen in a way that is supported by a security proof, without the need to sacrifice efficiency by compensating for the security loss of a reduction with larger parameters. However, for many important cryptographic primitives, including digital signatures and authenticated key exchange (AKE), we are still lacking constructions that are suitable for real-world deployment.
We construct the first truly practical signature scheme with tight security in a real-world multi-user setting with adaptive corruptions. The scheme is based on a new way of applying the Fiat-Shamir approach to construct tightly-secure signatures from certain identification schemes.
Then we use this scheme as a building block to construct the first practical AKE protocol with tight security. It allows the establishment of a key within 1 RTT in a practical client-server setting, provides forward security, is simple and easy to implement, and thus very suitable for practical deployment. It is essentially the ``signed Diffie-Hellman'' protocol, but with an additional message, which is crucial to achieve tight security. This additional message is used to overcome a technical difficulty in constructing tightly-secure AKE protocols.
For a theoretically-sound choice of parameters and a moderate number of users and sessions, our protocol has comparable computational efficiency to the simple signed Diffie-Hellman protocol with EC-DSA, while for large-scale settings our protocol has even better computational performance, at moderately increased communication complexity.
Continuously Non-Malleable Codes in the Split-State Model from Minimal Assumptions
At ICS 2010, Dziembowski, Pietrzak and Wichs introduced the notion of *non-malleable codes*, a weaker form of error-correcting codes guaranteeing that the decoding of a tampered codeword either corresponds to the original message or to an unrelated value. The last few years established non-malleable codes as one of the recently invented cryptographic primitives with the highest impact and potential, with very challenging open problems and applications.
In this work, we focus on so-called *continuously* non-malleable codes in the split-state model, as proposed by Faust et al. (TCC 2014), where a codeword is made of two shares and an adaptive adversary makes a polynomial number of attempts to tamper with the target codeword, where each attempt is allowed to modify the two shares independently (yet arbitrarily). Achieving continuous non-malleability in the split-state model has so far been very hard. Indeed, the only known constructions require strong setup assumptions (i.e., the existence of a common reference string) and strong complexity-theoretic assumptions (i.e., the existence of non-interactive zero-knowledge proofs and collision-resistant hash functions).
As our main result, we construct a continuously non-malleable code in the split-state model without setup assumptions, requiring only one-to-one one-way functions (i.e., essentially optimal computational assumptions). Our result introduces several new ideas that make progress towards understanding continuous non-malleability, and shows interesting connections with protocol-design and proof-approach techniques used in other contexts (e.g., look-ahead simulation in zero-knowledge proofs, non-malleable commitments, and leakage resilience).
Generic Attacks against Beyond-Birthday-Bound MACs
In this work, we study the security of several recent MAC constructions with provable security beyond the birthday bound. We consider block-cipher based constructions with a double-block internal state, such as SUM-ECBC, PMAC+, 3kf9, GCM-SIV2, and some variants (LightMAC+, 1kPMAC+). All these MACs have a security proof up to $2^{2n/3}$ queries, but there are no known attacks with fewer than $2^{n}$ queries.
We describe a new cryptanalysis technique for double-block MACs based on finding quadruples of messages with four pairwise collisions in halves of the state. We show how to detect such quadruples in SUM-ECBC, PMAC+, 3kf9, GCM-SIV2 and their variants with $\mathcal{O}(2^{3n/4})$ queries, and how to build a forgery attack with the same query complexity. The time complexity of these attacks is above $2^n$, but they show that the schemes do not reach full security in the information-theoretic model.
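To see why $2^{3n/4}$ is the natural threshold, here is a heuristic count (our reconstruction of the high-level argument; the per-scheme details differ): writing the two $n$-bit halves of the state as $\Sigma$ and $\Theta$, the attack looks for quadruples $(X_1,X_2,X_3,X_4)$ with
\[ \Sigma(X_1)=\Sigma(X_2),\quad \Sigma(X_3)=\Sigma(X_4),\quad \Theta(X_1)=\Theta(X_3),\quad \Theta(X_2)=\Theta(X_4). \]
Among $q$ queries there are about $q^4$ ordered quadruples, and the four collisions amount to roughly three independent $n$-bit conditions, so a quadruple is expected once $q^4 \approx 2^{3n}$, i.e., $q \approx 2^{3n/4}$.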
Surprisingly, our attack on LightMAC+ also invalidates a recent security proof by Naito.
Moreover, we give a variant of the attack against SUM-ECBC and GCM-SIV2 with time and data complexity $\tilde{\mathcal{O}}(2^{6n/7})$. As far as we know, this is the first attack with complexity below $2^n$ against a deterministic beyond-birthday-bound secure MAC.
As a side result, we also give a birthday attack against 1kf9, a single-key variant of 3kf9 that was withdrawn due to issues with its proof.
Must the Communication Graph of MPC Protocols be an Expander?
Secure multiparty computation (MPC) on incomplete communication networks has been studied within two primary models: (1) Where a partial network is fixed a priori, and thus corruptions can occur dependent on its structure, and (2) Where edges in the communication graph are determined dynamically as part of the protocol. Whereas a rich literature has succeeded in mapping out the feasibility and limitations of graph structures supporting secure computation in the fixed-graph model (including strong classical lower bounds), these bounds do not apply in the latter dynamic-graph setting, which has recently seen exciting new results, but remains relatively unexplored.
In this work, we initiate a similar foundational study of MPC within the dynamic-graph model. As a first step, we investigate the property of graph expansion. All existing protocols (implicitly or explicitly) yield communication graphs which are expanders, but it is not clear whether this is inherent.
Our results consist of two types (for a constant fraction of corruptions):
* Upper bounds: We demonstrate secure protocols whose induced communication graphs are not expander graphs, within a wide range of settings (computational, information theoretic, with low locality, even with low locality and adaptive security) each assuming some form of input-independent setup.
* Lower bounds: In the setting without setup and adaptive corruptions, we demonstrate that for certain functionalities, no protocol can maintain a non-expanding communication graph against all adversarial strategies. Our lower bound relies only on protocol correctness (not privacy), and requires a surprisingly delicate argument.
More generally, we provide a formal framework for analyzing the evolving communication graph of MPC protocols, giving a starting point for studying the relation between secure computation and further, more general graph properties.
Extracting Linearization Equations from Noisy Sources
This note was originally written under the name ``On the Security of HMFEv'' and was submitted to PQCrypto 2018. The author was informed by the referees of his oversight of an eprint work of the same name by Hashimoto, see eprint article /2017/689/, that completely breaks HMFEv, rendering the result on HMFEv obsolete. Still, the author feels that the technique used here is interesting and that, at least in principle, this method could contribute to future cryptanalysis. Thus, with a change of title indicating the direction in which this work is leading, we present the original work with all of its oversights intact and with minimal correction (only references fixed).
At PQCRYPTO 2017, a new multivariate digital signature scheme based on Multi-HFE and utilizing the vinegar modifier was proposed. The vinegar modifier increases the Q-rank of the central map, preventing a direct application of the MinRank attack that defeated Multi-HFE. The authors were, therefore, confident enough to choose aggressive parameters for the Multi-HFE component of the central map (with vinegar variables fixed). Their analysis indicated that the security of the scheme depends on the sum of the number of variables $k$ over the extension field and the number $v$ of vinegar variables, with the individual values being unimportant as long as they are not ``too small.'' We analyze the consequences of this choice of parameters and derive some new attacks showing that the parameter $v$ must be chosen with care.
Non-Malleable Codes for Partial Functions with Manipulation Detection
Non-malleable codes were introduced by Dziembowski, Pietrzak and Wichs (ICS '10), and their main application is the protection of cryptographic devices against tampering attacks on memory. In this work, we initiate a comprehensive study of non-malleable codes for the class of partial functions, which read/write an arbitrary subset of codeword bits with specific cardinality. Our constructions are efficient in terms of information rate, while allowing the attacker to access asymptotically almost the entire codeword. In addition, they satisfy a notion which is stronger than non-malleability, that we call non-malleability with manipulation detection, guaranteeing that any modified codeword decodes to either the original message or to $\bot$. Finally, our primitive implies All-Or-Nothing Transforms (AONTs), and as a result our constructions yield efficient AONTs under standard assumptions (only one-way functions), which, to the best of our knowledge, was an open question until now. In addition to this, we present a number of additional applications of our primitive in tamper resilience.
Quantum Security Analysis of CSIDH
CSIDH is a recent proposal for post-quantum non-interactive key-exchange, presented at ASIACRYPT 2018. Based on supersingular elliptic curve isogenies, it is similar in design to a previous scheme by Couveignes, Rostovtsev and Stolbunov, but aims at an improved balance between efficiency and security.
In the proposal, the authors suggest concrete parameters in order to meet some desired levels of quantum security. These parameters are based on the hardness of recovering a hidden isogeny between two elliptic curves, using a quantum subexponential algorithm of Childs, Jao and Soukharev. This algorithm combines two building blocks: first, a quantum algorithm for recovering a hidden shift in a commutative group. Second, a computation in superposition of all isogenies originating from a given curve, which the algorithm calls as a black box.
In this paper, we give a comprehensive security analysis of CSIDH. Our first step is to revisit three quantum algorithms for the abelian hidden shift problem from the perspective of non-asymptotic cost. There are many possible tradeoffs between the quantum and classical complexities of these algorithms, and all of them should be taken into account when setting security levels. Second, we complete the non-asymptotic study of the black box in the hidden shift algorithm.
This allows us to show that the parameters proposed by the authors of CSIDH do not meet their expected quantum security.
On the Hardness of the Computational Ring-LWR Problem and its Applications
In this paper, we propose a new assumption, the Computational Learning With Rounding over rings, which is inspired by the computational Diffie-Hellman problem. Assuming the hardness of ring-LWE, we prove this problem is hard when the secret is small, uniform and invertible.
From a theoretical point of view, we give examples of a key exchange scheme and a public key encryption scheme, and prove the worst-case hardness for both schemes with the help of a random oracle.
Our result improves both speed, as a result of not requiring Gaussian secret or noise, and size, as a result of rounding. In practice, our result suggests that decisional ring-LWR based schemes, such as Saber, Round2 and Lizard, which are among the most efficient solutions to the NIST post-quantum cryptography competition, stem from a provably secure design.
To the best of our knowledge, there were no hardness results on the decisional ring-LWR with polynomial modulus prior to this work.
Monero - Privacy in the Blockchain
Monero is a cryptocurrency that aims to provide transaction confidentiality and untraceability to its users. This is achieved through various cryptographic schemes, which allow concealing amounts and parties in transactions.
Unfortunately, there has been a severe lack of literature describing with sufficient precision the cryptographic schemes at hand.
With this report we aim at bridging this gap by supplying detailed descriptions of the algorithms used in Monero to guarantee privacy to its users. Furthermore, we do so while minimizing the use of technical jargon, so as to appeal to a broad readership.
Cryptographic Constructions Supporting Implicit Data Integrity
We study a methodology for supporting data integrity called ``implicit integrity'' and present cryptographic constructions supporting it. Implicit integrity allows for corruption detection without producing, storing or verifying mathematical summaries of the content such as MACs and ICVs, or any other type of message expansion. As with authenticated encryption, the main idea behind this methodology is that, whereas typical user data demonstrate patterns such as repeated bytes or words, decrypted data resulting from corrupted ciphertexts no longer demonstrate such patterns. Thus, by checking the entropy of some decrypted ciphertexts, corruption can possibly be detected.
The main contribution of this paper is a notion of security which is associated with implicit integrity, and which is different from the typical requirement that the output of cryptographic systems should be indistinguishable from the output of a random permutation. The notion of security we discuss reflects the fact that it should be computationally difficult for an adversary to corrupt some ciphertext so that the resulting plaintext demonstrates specific patterns. We introduce two kinds of adversaries. First, an input perturbing adversary performs content corruption attacks. Second, an oracle replacing adversary performs content replay attacks. We discuss requirements for supporting implicit integrity in these two adversary models, and provide security bounds for a construction called IVP, a three-level confusion-diffusion network which can support implicit integrity and is inexpensive to implement.
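As a toy illustration of the pattern-checking idea (a minimal sketch of the general methodology in Python, not the IVP construction itself), a decrypted block can be flagged as corrupted when it lacks the byte repetitions typical of real data:

    import os
    from collections import Counter

    def looks_like_plaintext(block: bytes, threshold: int = 4) -> bool:
        # Implicit-integrity heuristic: typical user data contains repeated
        # byte values, while a block decrypted from a corrupted ciphertext
        # behaves like uniform randomness, where each byte value appears
        # only about len(block)/256 times on average.
        most_common_count = Counter(block).most_common(1)[0][1]
        return most_common_count >= threshold

    assert looks_like_plaintext(b"GET /index.html HTTP/1.1        ")
    print(looks_like_plaintext(os.urandom(32)))  # almost always False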
Quantum Attacks against Indistinguishability Obfuscators Proved Secure in the Weak Multilinear Map Model
We present a quantum polynomial time attack against the GMMSSZ branching program obfuscator of Garg et al. (TCC'16), when instantiated with the GGH13 multilinear map of Garg et al. (EUROCRYPT'13). This candidate obfuscator was proved secure in the weak multilinear map model introduced by Miles et al. (CRYPTO'16).
Our attack uses the short principal ideal solver of Cramer et al. (EUROCRYPT'16), to recover a secret element of the GGH13 multilinear map in quantum polynomial time. We then use this secret element to mount a (classical) polynomial time mixed-input attack against the GMMSSZ obfuscator. The main result of this article can hence be seen as a classical reduction from the security of the GMMSSZ obfuscator to the short principal ideal problem (the quantum setting is then only used to solve this problem in polynomial time).
As an additional contribution, we explain how the same ideas can be adapted to mount a quantum polynomial time attack against the DGGMM obfuscator of Döttling et al. (ePrint 2016), which was also proved secure in the weak multilinear map model.
Ring packing and amortized FHEW bootstrapping
The FHEW fully homomorphic encryption scheme (Ducas and Micciancio, Eurocrypt 2015) offers very fast homomorphic NAND-gate computations (on encrypted data) and a relatively fast refreshing procedure that allows one to homomorphically evaluate arbitrary NAND boolean circuits. Unfortunately, the refreshing procedure needs to be executed after every single NAND computation, and each refreshing operates on a single encrypted bit, greatly decreasing the overall throughput of the scheme. We give a new refreshing procedure that simultaneously refreshes $n$ FHEW ciphertexts, at a cost comparable to a single-bit FHEW refreshing operation. As a result, the cost of each refreshing is amortized over $n$ encrypted bits, improving the throughput for the homomorphic evaluation of boolean circuits roughly by a factor $n$.
Polynomial direct sum masking to protect against both SCA and FIA
Side Channel Attacks (SCA) and Fault Injection Attacks (FIA) allow an opponent to gain partial access to the internal behavior of the hardware. Since the end of the nineties, many works have shown that this type of attack constitutes a serious threat to cryptosystems implemented in embedded devices. In the state of the art, there exist several countermeasures to protect symmetric encryption (especially AES-128). Most of them protect only against one of these two attacks (SCA or FIA). A method called ODSM has been proposed to withstand SCA and FIA, but its implementation in the whole algorithm is a big open problem when no particular hardware protection is possible. In the present paper, we propose a practical masking scheme specifying ODSM which makes it possible to protect symmetric encryption against these two attacks.
Two-Message Statistically Sender-Private OT from LWE
We construct a two-message oblivious transfer (OT) protocol without setup that guarantees statistical privacy for the sender even against malicious receivers. Receiver privacy is game based and relies on the hardness of learning with errors (LWE). This flavor of OT has been a central building block for minimizing the round complexity of witness indistinguishable and zero knowledge proof systems and multi-party computation protocols, as well as for achieving circuit privacy for homomorphic encryption in the malicious setting. Prior to this work, all candidates in the literature from standard assumptions relied on number theoretic assumptions and were thus insecure in the post-quantum setting. This work provides the first (presumed) post-quantum secure candidate and thus allows us to instantiate the aforementioned applications in a post-quantum secure manner.
Technically, we rely on the transference principle: Either a lattice or its dual must have short vectors. Short vectors, in turn, can be translated to information loss in encryption. Thus encrypting one message with respect to the lattice and one with respect to its dual guarantees that at least one of them will be statistically hidden.
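The transference principle used here can be made quantitative via Banaszczyk's theorem (our addition, for context): for every $n$-dimensional lattice $L$ with dual $L^*$,
\[ \lambda_1(L) \cdot \lambda_1(L^*) \le n, \]
so, after suitable normalization, at least one of $L$ and $L^*$ must contain a short nonzero vector; whichever side is short is the one whose encryption branch becomes statistically lossy.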
Trapdoor Functions from the Computational Diffie-Hellman Assumption
Trapdoor functions (TDFs) are a fundamental primitive in cryptography. Yet, the current set of assumptions known to imply TDFs is surprisingly limited, when compared to public-key encryption. We present a new general approach for constructing TDFs. Specifically, we give a generic construction of TDFs from any Hash Encryption (Döttling and Garg [CRYPTO '17]) satisfying a novel property which we call recyclability. By showing how to adapt current Computational Diffie-Hellman (CDH) based constructions of hash encryption to yield recyclability, we obtain the first construction of TDFs with security proved under the CDH assumption. While TDFs from the Decisional Diffie-Hellman (DDH) assumption were previously known, the possibility of basing them on CDH had remained open for more than 30 years.
Recovering short secret keys of RLCE in polynomial time
We present a key recovery attack against Y. Wang's Random Linear Code Encryption (RLCE) scheme recently submitted to the NIST call for post-quantum cryptography. This attack recovers the secret key for all the short key parameters proposed by the author.
Improved Key Recovery Attacks on Reduced-Round AES with Practical Data and Memory Complexities
Determining the security of AES is a central problem in cryptanalysis, but progress in this area had been slow and only a handful of cryptanalytic techniques led to significant advancements. At Eurocrypt 2017 Grassi et al. presented a novel type of distinguisher for AES-like structures, but so far all the published attacks which were based on this distinguisher were inferior to previously known attacks in their complexity. In this paper we combine the technique of Grassi et al. with several other techniques in a novel way to obtain the best known key recovery attack on 5-round AES in the single-key model, reducing its overall complexity from about $2^{32}$ to less than $2^{22}$. Extending our techniques to 7-round AES, we obtain the best known attacks on AES-192 which use practical amounts of data and memory, breaking the record for such attacks which was obtained in 2000 by the classical Square attack.
Towards KEM Unification
This paper highlights a particular construction of a correct KEM without failures and without ciphertext expansion from any correct deterministic PKE, and presents a simple tight proof of ROM IND-CCA2 security for the KEM assuming merely OW-CPA security for the PKE. Compared to previous proofs, this proof is simpler, and is also factored into smaller pieces that can be audited independently. In particular, this paper introduces the notion of ``IND-Hash'' security and shows that this allows a new separation between checking encryptions and randomizing decapsulations. The KEM is easy to implement in constant time, given a constant-time implementation of the PKE.
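A minimal sketch of this style of KEM in Python (our hedged rendering with stand-in components: pke_encrypt must be a correct deterministic OW-CPA-secure encryption, and SHA-256 plays the role of the random oracle; the paper's transform and its IND-Hash analysis are more refined):

    import hashlib, os

    def H(*parts: bytes) -> bytes:
        # Random-oracle stand-in.
        h = hashlib.sha256()
        for p in parts:
            h.update(p)
        return h.digest()

    def encaps(pk, pke_encrypt):
        m = os.urandom(32)           # random plaintext
        c = pke_encrypt(pk, m)       # no expansion beyond the PKE's own ciphertext
        return c, H(b"key", m, c)    # session key derived from plaintext and ciphertext

    def decaps(sk, pk, c, pke_encrypt, pke_decrypt):
        m = pke_decrypt(sk, c)
        if pke_encrypt(pk, m) != c:  # re-encryption check, possible since the PKE is deterministic
            raise ValueError("invalid ciphertext")
        return H(b"key", m, c)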
Location-Proof System based on Secure Multi-Party Computations
Location-based services are now quite popular. The variety of applications and their numerous users show it clearly. However, these applications rely on the persons' honesty to use their real location. If they are motivated to lie about their position, they can do so. A location-proof system allows a prover to obtain proofs from nearby witnesses, for being at a given location at a given time. Such a proof can be used to convince a verifier later on.
Yet, provers and witnesses may want to keep their location and their identity private. In this paper, a solution is presented in which a malicious adversary, acting as a prover, cannot cheat on his position and remain undetected. It relies on multi-party computations and group-signature schemes to protect the private information of both the prover and the witnesses against any semi-honest participant.
New Smooth Projective Hashing For Oblivious Transfer
Oblivious transfer is an important tool against malicious cloud server providers. Halevi-Kalai OT, which is based on smooth projective hashing (SPH), is a famous and the most efficient framework for $1$-out-of-$2$ oblivious transfer ($\mbox{OT}^{2}_{1}$) against malicious adversaries in the plain model. A natural question, however, which so far has not been answered, is whether its security level can be improved, i.e., whether it can be made fully-simulatable.
In this paper, we present a new SPH variant, which enables a positive answer to the above question. In more detail, it even makes fully-simulatable $\mbox{OT}^{n}_{t}$ ($n,t\in \mathbb{N}$ and $n>t$) possible. We instantiate this new SPH variant not only under the decisional Diffie-Hellman assumption, the decisional $N$-th residuosity assumption and the decisional quadratic residuosity assumption, as existing SPH constructions do, but also under the learning with errors (LWE) problem. Before this paper, it was folklore that it is technically difficult to instantiate SPH under a lattice assumption (e.g., LWE). Considering quantum adversaries in the future, lattice-based SPH is of particular importance.
Reducing Complexity of Pairing Comparisons using Polynomial Evaluation
We propose a new method for reducing the complexity of pairing comparisons based on polynomials. Though the construction introduces uncertainty into (usually deterministic) checks, this uncertainty is easily quantifiable and in most cases extremely small. The application to CL-LRSW signature verification under $n$ messages and group order $q$ allows the number of computed pairings to be reduced from $4n$ down to just $4$, while the introduced uncertainty is just $(2n-1)/q$.
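The $(2n-1)/q$ figure is exactly what the Schwartz-Zippel lemma predicts: a nonzero univariate polynomial of degree at most $2n-1$ over a field of order $q$ has at most $2n-1$ roots, so a uniformly random evaluation point makes the batched check wrongly accept with probability at most
\[ \frac{2n-1}{q}. \]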
Fast Correlation Attack Revisited -- Cryptanalysis on Full Grain-128a, Grain-128, and Grain-v1
A fast correlation attack (FCA) is a well-known cryptanalysis technique for LFSR-based stream ciphers. The correlation between the initial state of an LFSR and the corresponding key stream is exploited, and the goal is to recover the initial state of the LFSR. In this paper, we revisit the FCA from a new point of view based on a finite field, which brings a new property for the FCA when there are multiple linear approximations. Moreover, we propose a novel algorithm based on the new property, which enables us to reduce both time and data complexities. We finally apply this technique to the Grain family, a well-analyzed class of stream ciphers. There are three stream ciphers in the Grain family, Grain-128a, Grain-128, and Grain-v1; Grain-v1 is in the eSTREAM portfolio and Grain-128a is standardized by ISO/IEC. As a result, we break them all, and in particular for Grain-128a, the cryptanalysis of its full version is reported for the first time.
Ciphertext Expansion in Limited-Leakage Order-Preserving Encryption: A Tight Computational Lower Bound
Order-preserving encryption emerged as a key ingredient underlying the security of practical database management systems. Boldyreva et al. (EUROCRYPT '09) initiated the study of its security by introducing two natural notions of security. They proved that their first notion, a ``best-possible'' relaxation of semantic security allowing ciphertexts to reveal the ordering of their corresponding plaintexts, is not realizable. Later on, Boldyreva et al. (CRYPTO '11) proved that any scheme satisfying their second notion, indistinguishability from a random order-preserving function, leaks about half of the bits of a random plaintext.
This unsettling state of affairs was recently changed by Chenette et al. (FSE '16), who relaxed the above ``best-possible'' notion and constructed a scheme satisfying it based on any pseudorandom function. In addition to revealing the ordering of any two encrypted plaintexts, ciphertexts in their scheme reveal only the position of the most significant bit on which the plaintexts differ. A significant drawback of their scheme, however, is its substantial ciphertext expansion: Encrypting plaintexts of length $m$ bits results in ciphertexts of length $m \cdot \ell$ bits, where $\ell$ determines the level of security (e.g., $\ell = 80$ in practice).
In this work we prove a lower bound on the ciphertext expansion of any order-preserving encryption scheme satisfying the ``limited-leakage'' notion of Chenette et al. with respect to non-uniform polynomial-time adversaries, matching the ciphertext expansion of their scheme up to lower-order terms. This improves a recent result of Cash and Zhang (ePrint '17), who proved such a lower bound for schemes satisfying this notion with respect to computationally-unbounded adversaries (capturing, for example, schemes whose security can be proved in the random-oracle model without relying on cryptographic assumptions). Our lower bound applies, in particular, to schemes whose security is proved in the standard model.
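For concreteness (our own arithmetic, using the parameters quoted above): with $m = 64$-bit plaintexts and $\ell = 80$, the scheme of Chenette et al. produces ciphertexts of $m \cdot \ell = 5120$ bits, and the lower bound proved here says that this $\ell$-factor expansion is unavoidable, up to lower-order terms, for any scheme satisfying the same leakage notion against non-uniform polynomial-time adversaries.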
Bernstein Bound on WCS is Tight - Repairing Luykx-Preneel Optimal Forgeries
In Eurocrypt 2018, Luykx and Preneel described hash-key-recovery and forgery attacks against polynomial hash based Wegman-Carter-Shoup (WCS) authenticators. Their attacks require $2^{n/2}$ message-tag pairs and recover the hash-key with probability about $1.34 \times 2^{-n}$, where $n$ is the bit-size of the hash-key. Bernstein in Eurocrypt 2005 had provided an upper bound (known as the Bernstein bound) on the maximum forgery advantage. The bound says that all adversaries making $O(2^{n/2})$ queries of WCS have maximum forgery advantage $O(2^{-n})$. So Luykx and Preneel essentially analyze WCS in a range of query complexities where WCS is known to be perfectly secure. Here we revisit the bound and find that WCS remains secure against all adversaries making $q \ll \sqrt{n} \times 2^{n/2}$ queries, so it is meaningful to analyze adversaries with beyond birthday bound complexities.
In this paper, we show that the Bernstein bound is tight by describing two attacks (one in the ``chosen-plaintext model'' and the other in the ``known-plaintext model'') which recover the hash-key (and hence forge) with probability at least $\frac{1}{2}$ based on $\sqrt{n} \times 2^{n/2}$ message-tag pairs. We also extend the forgery adversary to the Galois Counter Mode (GCM). More precisely, we recover the hash-key of GCM with probability at least $\frac{1}{2}$ based on only $\sqrt{\frac{n}{\ell}} \times 2^{n/2}$ encryption queries, where $\ell$ is the number of blocks present in encryption queries.
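The two regimes can be read off from the shape of Bernstein's bound (a heuristic rendering, not the exact statement): if the hash family has differential probability $\epsilon$, the forgery advantage after $q$ queries is roughly
\[ \mathrm{Adv}(q) \lesssim \epsilon \cdot \Big(1 - \frac{q}{2^{n}}\Big)^{-(q+1)/2} \approx \epsilon \cdot e^{q^2/2^{n+1}}, \]
which stays $O(2^{-n})$ for $q = O(2^{n/2})$ (the factor $e^{q^2/2^{n+1}}$ is then constant), but becomes vacuous around $q \approx \sqrt{n} \times 2^{n/2}$, where $e^{q^2/2^{n+1}} = e^{n/2}$ swamps $\epsilon \approx 2^{-n}$. The attacks above show that this transition is real and not an artifact of the proof.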
Fortified Universal Composability: Taking Advantage of Simple Secure Hardware Modules
Adaptive security is the established way to capture adversaries breaking into computers during secure computations. However, adaptive security does not prevent remote hacks where adversaries learn and modify a party’s secret inputs and outputs. We initiate the study of security notions which go beyond adaptive security. To achieve such a strong security notion, we utilize realistic simple remotely unhackable hardware modules such as air-gap switches and data diodes together with isolation assumptions. Such hardware modules have, to the best of our knowledge, not been used for secure multi-party computation so far. As a result, we are able to construct protocols with very strong composable security guarantees against remote hacks, which are not provided by mere adaptive security. We call our new notion Fortified UC security.
Using only very few and very simple remotely unhackable hardware modules, we construct protocols where mounting remote attacks does not enable an adversary to learn or modify a party’s inputs and outputs unless he hacks a party via the input port before it has received its (first) input (or gains control over all parties). Hence, our protocols protect inputs and outputs against all remote attacks, except for hacks via the input port while a party is waiting for input. To achieve this level of security, the parties’ inputs and outputs are authenticated, masked and shared in our protocols in such a way that an adversary is unable to learn or modify them when gaining control over a party via a remote hack.
It is important to note that the remotely unhackable hardware modules applied in this work are based on substantially weaker assumptions than the hardware tokens proposed by Katz at EUROCRYPT '07. In particular, they are not assumed to be physically tamper-proof, can thus not be passed to other (possibly malicious) parties, and are therefore not sufficient to circumvent the impossibility results in the Universal Composability (UC) framework. Our protocols therefore rely on well-established UC-complete setup assumptions in tandem with our remotely unhackable hardware modules to achieve composability.
Secure and Reliable Key Agreement with Physical Unclonable Functions
Different transforms used in binding a secret key to correlated physical-identifier outputs are compared. Decorrelation efficiency is the metric used to determine transforms that give highly-uncorrelated outputs. Scalar quantizers are applied to transform outputs to extract uniformly distributed bit sequences to which secret keys are bound. A set of transforms that perform well in terms of the decorrelation efficiency is applied to ring oscillator (RO) outputs to improve the uniqueness and reliability of extracted bit sequences, to reduce the hardware area and information leakage about the key and RO outputs, and to maximize the secret-key length. Low-complexity error-correction codes are proposed to illustrate two complete key-binding systems with perfect secrecy, and better secret-key and privacy-leakage rates than existing methods. A reference hardware implementation is also provided to demonstrate that the transform-coding approach occupies a small hardware area.
Upper and Lower Bounds for Continuous Non-Malleable Codes
Recently, Faust et al. (TCC'14) introduced the notion of continuous non-malleable codes (CNMC), which provides stronger security guarantees than standard non-malleable codes by allowing an adversary to tamper with the codeword continuously instead of only once. They also showed that CNMC with information-theoretic security cannot be constructed in the 2-split-state tampering model, and presented a construction in the CRS (common reference string) model using collision-resistant hash functions and non-interactive zero-knowledge proofs.
In this work, we ask if it is possible to construct CNMC from weaker assumptions. We answer this question by presenting lower as well as upper bounds. Specifically, we show that it is impossible to construct 2-split-state CNMC, with no CRS, for one-bit messages from any falsifiable assumption, thus establishing the lower bound. We additionally provide an upper bound by constructing 2-split-state CNMC for one-bit messages, assuming only the existence of a family of injective one-way functions.
We also present a construction of 4-split-state CNMC for multi-bit messages in the CRS model from the same assumptions. Additionally, we present definitions of the following new primitives: (1) One-to-one commitments, and (2) Continuous Non-Malleable Randomness Encoders, which may be of independent interest.
Partial Key Exposure Attacks on RSA: Achieving the Boneh-Durfee Bound
Thus far, several lattice-based algorithms for partial key exposure attacks on RSA, i.e., factoring an RSA modulus $N$ given the most/least significant bits (MSBs/LSBs) of a secret exponent $d$, have been proposed, such as Blömer and May (Crypto'03), Ernst et al. (Eurocrypt'05), and Aono (PKC'09). Due to Boneh and Durfee's small secret exponent attack, partial key exposure attacks should always work for $d<N^{0.292}$ even without any partial information. However, it was a difficult task to make use of the given partial information without losing the quality of Boneh-Durfee's attack. In particular, known partial key exposure attacks fail to work for $d<N^{0.292}$ when given only a small amount of partial information. This unnatural situation stems from the fact that the additional information makes the underlying modular equations involved.

In this paper, we propose improved attacks for a small secret exponent $d$. Our attacks are better than all known previous attacks in the sense that they require less partial information. Specifically, our attack is better than all known ones for $d<N^{0.5625}$ and $d<N^{0.368}$ with the MSBs and the LSBs, respectively. Furthermore, our attacks fully cover the Boneh-Durfee bound, i.e., they always work for $d<N^{0.292}$.

At a high level, we obtain the improved attacks by fully utilizing the unravelled linearization technique proposed by Herrmann and May (Asiacrypt'09). Although Herrmann and May (PKC'10) already applied the technique to Boneh-Durfee's attack, we show elegant and impressive extensions that capture partial key exposure attacks. More concretely, we construct structured triangular matrices that enable us to recover more useful algebraic structures of the underlying modular polynomials. We embed the given MSBs/LSBs into the recovered algebraic structures and construct our partial key exposure attacks. In this full version, we provide overviews and explicit proofs of the triangular matrix constructions. We believe that the additional explanations will help readers to understand our techniques.
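For reference, the Boneh-Durfee exponent referred to throughout is
\[ d < N^{1 - 1/\sqrt{2}} \approx N^{0.292}, \]
so fully covering the Boneh-Durfee bound means that the attacks degrade gracefully to this exponent as the amount of known MSBs/LSBs tends to zero.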
Highly Efficient and Re-executable Private Function Evaluation with Linear Complexity
Private function evaluation aims to securely compute a function $f(x_1, \ldots, x_n)$ without leaking any information other than what is revealed by the output, where $f$ is a private input of one of the parties (say Party_1) and $x_i$ is a private input of the $i$-th party Party_i. In this work, we propose a novel and secure two-party private function evaluation (2PFE) scheme based on the DDH assumption. Our scheme introduces a reusability feature that significantly improves the state-of-the-art. Accordingly, our scheme has two variants, one utilized in the first evaluation of the function $f$, and the other utilized in its subsequent evaluations. To the best of our knowledge, this is the first and most efficient special-purpose PFE scheme that enjoys a reusability feature. Our protocols achieve linear communication and computation complexities and a constant number of rounds, which is at most three.
Weak Compression and (In)security of Rational Proofs of Storage
We point out an implicit unproven assumption underlying the security of rational proofs of storage that is related to a concept we call weak randomized compression. This is a class of interactive proofs designed in a security model with a rational prover who is encouraged to store data (possibly in a particular format), as otherwise it either fails verification or does not save any storage costs by deviating (in some cases it may even increase costs by ``wasting'' the space).
Weak randomized compression is a scheme that takes a random seed $r$ and a compressible string $s$ and outputs a compression of the concatenation $r \circ s$. Strong compression would compress $s$ by itself (and store the random seed separately).
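A toy contrast between the two notions in Python (zlib is just a stand-in compressor; an adversarial weak compressor would be tailored to its advice string):

    import os, zlib

    def weak_compress(r: bytes, s: bytes) -> bytes:
        # Weak randomized compression: only the concatenation r || s is
        # compressed; the seed r is absorbed into the output.
        return zlib.compress(r + s)

    def strong_compress(r: bytes, s: bytes) -> bytes:
        # Strong compression: r is stored verbatim and s is compressed alone.
        return r + zlib.compress(s)

    r = os.urandom(1024)   # incompressible advice, playing the seed
    s = b"A" * 4096        # compressible useful data
    print(len(weak_compress(r, s)), len(strong_compress(r, s)))
    # For zlib the two lengths are close; the question raised here is whether
    # a matching strong compressor must exist for *every* weak one.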
In the context of a storage protocol, it is plausible that the adversary knows a weak compression that uses its incompressible storage advice as a seed to help compress other useful data it is storing, and yet it does not know a strong compression that would perform just as well.
It therefore may be incentivized to deviate from the protocol in order to save space. This would be particularly problematic for proofs of replication, designed to encourage provers to store data in a redundant format, which weak compression would likely destroy.
We thus motivate the question of whether weak compression can always be used to efficiently construct strong compression, and find (negatively) that a black-box reduction would imply a universal compression scheme in the random oracle model for all compressible efficiently sampleable sources. Implausibility of universal compression aside, we conclude that constructing this black-box reduction for a class of sources is at least as hard as directly constructing a universal compression scheme for that class.
Another coin bites the dust: An analysis of dust in UTXO based cryptocurrencies
Unspent Transaction Outputs (UTXOs) are the internal mechanism used in many cryptocurrencies to represent coins. Such a representation has some clear benefits, but also entails some complexities that, if not properly handled, may leave the system in an inefficient state. Specifically, inefficiencies arise when wallets (the software responsible for transferring coins between parties) do not manage UTXOs properly when performing payments. In this paper, we study three cryptocurrencies: Bitcoin, Bitcoin Cash and Litecoin, by analyzing the actual state of their UTXO sets, that is, the status of their sets of spendable coins. These three cryptocurrencies are the top-3 UTXO based cryptocurrencies by market capitalization. Our analysis shows that the usage of each cryptocurrency is quite different and leads to different results. Furthermore, it also points out that the management of transactions has not always been performed efficiently, and that therefore the actual state of the UTXO set for each cryptocurrency is far from ideal.
Provably Secure Integration Cryptosystem on Non-Commutative Group
The braid group is an important non-commutative group. It is also an important tool of quantum field theory, and it has good topological properties. This paper focuses on the provable security of cryptosystems over braid groups, in two respects: First, we prove that the cryptosystem of Ko et al. based on braid groups, proposed at CRYPTO 2000, is secure against chosen-plaintext attacks, while it does not resist active attacks. Second, we propose a new public key cryptosystem over braid groups which is secure against adaptive chosen-ciphertext attacks. Our proofs are in the random oracle model, under the computational conjugacy search assumption. Results of this kind have not appeared before.
Return of GGH15: Provable Security Against Zeroizing Attacks
The GGH15 multilinear maps have served as the foundation for a number of cutting-edge cryptographic proposals. Unfortunately, many schemes built on GGH15 have been explicitly broken by so-called ``zeroizing attacks,'' which exploit leakage from honest zero-test queries. The precise settings in which zeroizing attacks are possible have remained unclear. Most notably, none of the current indistinguishability obfuscation (iO) candidates from GGH15 have any formal security guarantees against zeroizing attacks.
In this work, we demonstrate that all known zeroizing attacks on GGH15 implicitly construct algebraic relations between the results of zero-testing and the encoded plaintext elements. We then propose a ``GGH15 zeroizing model'' as a new general framework which greatly generalizes known attacks.
Our second contribution is to describe a new GGH15 variant, which we formally analyze in our GGH15 zeroizing model. We then construct a new iO candidate using our multilinear map, which we prove secure in the GGH15 zeroizing model. This implies resistance to all known zeroizing strategies. The proof relies on the Branching Program Un-Annihilatability (BPUA) Assumption of Garg et al. [TCC 16-B] (which is implied by PRFs in NC^1 secure against P/Poly) and the complexity-theoretic p-Bounded Speedup Hypothesis of Miles et al. [ePrint 14] (a strengthening of the Exponential Time Hypothesis).
Key-Secrecy of PACE with OTS/CafeOBJ
The ICAO-standardized Password Authenticated Connection Establishment (PACE) protocol is used all over the world to secure access to electronic passports. Key-secrecy of PACE is proven by first modeling it as an Observational Transition System (OTS) in CafeOBJ, and then proving invariant properties by induction.
Improved Collision Attack on Reduced RIPEMD-160
In this paper, we propose a new cryptanalysis method to mount a collision attack on RIPEMD-160. Firstly, we review two existing cryptanalysis methods to mount (semi-free-start) collision attacks on the MD-SHA hash family and briefly explain their advantages and disadvantages. To make the best use of the advantages of the two methods, we come up with a new method to find a collision. Applying the new technique, we improve the only existing collision attack on the first 30 steps of RIPEMD-160, presented at Asiacrypt 2017, by a factor of $2^{13}$. Moreover, our new method is much simpler than that presented at Asiacrypt 2017, and there is no need to perform the sophisticated multi-step modification even though we mount the collision attack into the second round. Besides, we further evaluate the pros and cons of the new method and describe how to carefully apply it in future research. We also implement this attack in C++ and can find the message words that ensure the dense right branch with time complexity $2^{28}$.
Cost-Effective Private Linear Key Agreement With Adaptive CCA Security from Prime Order Multilinear Maps and Tracing Traitors
Private linear key agreement (PLKA) enables a group of users to agree upon a common session key in a broadcast encryption (BE) scenario, while a traitor tracing (TT) system allows a tracer to identify a conspiracy of colluding pirate users. This paper introduces a key encapsulation mechanism in BE that provides the functionalities of both PLKA and TT in a unified cost-effective primitive. Our PLKA based traitor tracing offers a solution to the problem of achieving full collusion resistance and public traceability simultaneously, with significant efficiency and storage gains over a sequential improvement of PLKA based traitor tracing systems. Our PLKA builds on a prime order multilinear group setting employing indistinguishability obfuscation (iO) and pseudorandom functions (PRF). The resulting scheme has fair communication, storage and computational efficiency compared to schemes in composite order groups. Our PLKA is adaptively chosen ciphertext attack (CCA)-secure, based on the hardness of a multilinear assumption, namely, the Decisional Hybrid Diffie-Hellman Exponent (DHDHE) assumption in the standard model, and is so far a plausible improvement in the literature. More precisely, our PLKA design significantly reduces the ciphertext size, public parameter size and user secret key size. We frame a traitor tracing algorithm with shorter running time which can be executed publicly.
Tight Tradeoffs in Searchable Symmetric Encryption
A searchable symmetric encryption (SSE) scheme enables a client to store data on an untrusted server while supporting keyword searches in a secure manner. Recent experiments have indicated that the practical relevance of such schemes heavily relies on the tradeoff between their space overhead, locality (the number of non-contiguous memory locations that the server accesses with each query), and read efficiency (the ratio between the number of bits the server reads with each query and the actual size of the answer). These experiments motivated Cash and Tessaro (EUROCRYPT '14) and Asharov et al. (STOC '16) to construct SSE schemes offering various such tradeoffs, and to prove lower bounds for natural SSE frameworks. Unfortunately, the best-possible tradeoff has not been identified, and there are substantial gaps between the existing schemes and lower bounds, indicating that a better understanding of SSE is needed.
We establish tight bounds on the tradeoff between the space overhead, locality and read efficiency of SSE schemes within two general frameworks that capture the memory access pattern underlying all existing schemes. First, we introduce the ``pad-and-split'' framework, refining that of Cash and Tessaro while still capturing the same existing schemes. Within our framework we significantly strengthen their lower bound, proving that any scheme with locality $L$ must use space $\Omega ( N \log N / \log L )$ for databases of size $N$. This is a tight lower bound, matching the tradeoff provided by the scheme of Demertzis and Papamanthou (SIGMOD '17) which is captured by our pad-and-split framework.
Then, within the ``statistical-independence'' framework of Asharov et al. we show that their lower bound is essentially tight: We construct a scheme whose tradeoff matches their lower bound within an additive $O(\log \log \log N)$ factor in its read efficiency, once again improving upon the existing schemes. Our scheme offers optimal space and locality, and nearly-optimal read efficiency that depends on the frequency of the queried keywords: For a keyword that is associated with $n = N^{1 - \epsilon(n)}$ document identifiers, the read efficiency is $\omega(1) \cdot \epsilon(n)^{-1}+ O(\log\log\log N)$ when retrieving its identifiers (where the $\omega(1)$ term may be arbitrarily small, and $\omega(1) \cdot \epsilon(n)^{-1}$ is the lower bound proved by Asharov et al.). In particular, for any keyword that is associated with at most $N^{1 - 1/o(\log \log \log N)}$ document identifiers (i.e., for any keyword that is not exceptionally common), we provide read efficiency $O(\log \log \log N)$ when retrieving its identifiers.
Secure Two-Party Computation over Unreliable Channels
We consider information-theoretic secure two-party computation in the
plain model where no reliable channels are assumed, and all communication is performed over the binary symmetric channel (BSC) that flips each bit with fixed probability. In this reality-driven setting we investigate feasibility of communication-optimal noise-resilient semi-honest two-party computation i.e., efficient computation which is both private and correct despite channel noise.
We devise an information-theoretic technique that converts any correct, but not necessarily private, two-party protocol that assumes reliable channels into a protocol which is both correct and private against semi-honest adversaries, assuming BSC channels alone. Our results also apply to other types of noisy channels, such as elastic channels.
Our construction combines tools from the cryptographic literature with tools from the literature on interactive coding, and achieves, to our knowledge, the best known communication overhead. Specifically, if $f$ is given as a circuit of size $s$, our scheme communicates $O(s + \kappa)$ bits for $\kappa$ a security parameter.
This improves upon the state of the art (Ishai et al., CRYPTO '11), where the communication is $O(s) + \text{poly}(\kappa \cdot \text{depth}(s))$.
Improved Parallel Mask Refreshing Algorithms: Generic Solutions with Parametrized Non-Interference & Automated Optimizations
Refreshing algorithms are a critical ingredient for secure masking. They are instrumental in enabling sound composability properties for complex circuits, and their randomness requirements dominate the performance overheads in (very) high-order masking. In this paper, we improve a proposal of mask refreshing algorithms from EUROCRYPT 2017, which has excellent implementation properties in software and hardware, in two main directions. First, we provide a generic proof that this algorithm is secure at arbitrary orders -- a problem that was left open so far. We introduce Parametrized Non-Interference as a new technical ingredient for this purpose, which may be of independent interest. Second, we use automated tools to further explore the design space of such algorithms and provide the best known parallel mask refreshing gadgets for concretely relevant security orders. Incidentally, we also prove the security of a recent proposal of mask refreshing with improved resistance against horizontal attacks from CHES 2017.
Quantum Attacks on Some Feistel Block Ciphers
Post-quantum cryptography has attracted much attention from worldwide cryptologists. However, most research has been related to public-key cryptosystems, due to Shor's attacks on RSA and ECC. At CRYPTO 2016, Kaplan et al. showed that many secret-key (symmetric) systems could be broken using a quantum period-finding algorithm, which encouraged researchers to evaluate symmetric systems against quantum attackers.
In this paper, we continue the study of symmetric ciphers against quantum attackers. First, we convert the classical advanced slide attacks (introduced by Biryukov and Wagner) into quantum ones, gaining an exponential speed-up in time complexity. Thus, we can break 2/4K-Feistel and 2/4K-DES in polynomial time. Second, we give a new quantum key-recovery attack on full-round GOST, a Russian standard, with $2^{114.8}$ quantum queries of the encryption process, faster than a quantum brute-force search attack by a factor of $2^{13.2}$.
Finger Printing Data
By representing data in a unary way, the identity of the bits can be used as a printing pad to stain the data with the identity of its handlers. Passing data will identify its custodians, its pathway, and its bona fides. This technique will allow databases to recover from a massive breach, as the thieves will be caught when trying to use this 'sticky data'. Heavily traveled data on networks will accumulate the 'fingerprints' of its holders, allowing for a forensic analysis of fraud attempts or data abuse. There are special applications for the financial industry and for intellectual property management. Fingerprinting data may be used in new ways to balance privacy concerns against public statistical interests. This technique might restore the identification power of the US Social Security Number, despite the fact that millions of them have been compromised. Another specific application regards credit card fraud: once the credit card numbers are 'sticky', they are safe. The most prolific application, though, may be in conjunction with digital money technology. The BitMint protocol, for example, establishes its superior security on 'sticky digital coins'. Advanced fingerprinting applications require high-quality randomization. The price paid for the fingerprinting advantage is a larger data footprint -- more bits per unit of content -- impacting both storage and transmission. This price is reasonable relative to the gained benefit.
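To illustrate the mechanics, here is a toy sketch (our own illustration of the stated idea, not the paper's construction): a value is encoded in unary as a fixed-length string containing exactly that many set bits, so the bit count carries the content while the layout of the set bits is free to carry a handler's identity. The field size, handler IDs, and PRG-derived layout below are all hypothetical choices.

    import random

    FIELD = 64  # bits used to encode one unary value (illustrative size)

    def encode(value: int, handler_id: int) -> list:
        """Encode `value` in unary: exactly `value` ones in a FIELD-bit string.
        The *positions* of the ones are derived pseudo-randomly from the
        handler's identity, so the same value fingerprints its handler."""
        rng = random.Random(handler_id)          # toy stand-in for a strong PRG
        positions = rng.sample(range(FIELD), value)
        bits = [0] * FIELD
        for p in positions:
            bits[p] = 1
        return bits

    def decode(bits: list) -> int:
        """The content is just the count of set bits -- independent of layout."""
        return sum(bits)

    def likely_handler(bits: list, candidates: list):
        """Forensics: re-derive each candidate's expected layout and compare."""
        for hid in candidates:
            if encode(decode(bits), hid) == bits:
                return hid
        return None

    coin = encode(23, handler_id=4711)
    assert decode(coin) == 23
    assert likely_handler(coin, [1, 2, 4711]) == 4711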
Computer-aided proofs for multiparty computation with active security
Secure multi-party computation (MPC) is a general cryptographic technique that allows distrusting parties to compute a function of their individual inputs while revealing only the output of the function. It has found applications in areas such as auctioning, email filtering, and secure teleconferencing. Given their importance, it is crucial that the protocols are specified and implemented correctly. In the programming language community, it has become good practice to use computer proof assistants to verify correctness proofs. In the field of cryptography, EasyCrypt is the state-of-the-art proof assistant. It provides an embedded language for probabilistic programming, together with a specialized logic, embedded into an ambient general-purpose higher-order logic, and it allows us to conveniently express cryptographic properties. EasyCrypt has been used successfully on many applications, including public-key encryption, signatures, garbled circuits and differential privacy. Here we show for the first time that it can also be used to prove security of MPC against a malicious adversary.
We formalize additive and replicated secret sharing schemes and apply them to Maurer’s MPC protocol for secure addition and multiplication. Our method extends to general polynomial functions. We follow the insights from EasyCrypt that security proofs can often be reduced to proofs about program equivalence, a topic that is well understood in the verification of programming languages. In particular, we show that for a class of MPC protocols in the passive case the non-interference-based (NI) definition is equivalent to a standard simulation-based security definition. For the active case, we provide a new non-interference based alternative to the usual simulation-based cryptographic definition that is tailored specifically to our protocol.
Secure Grouping and Aggregation with MapReduce
The MapReduce programming paradigm allows big data sets to be processed in parallel on a large cluster. We focus on a scenario where the data owner outsources her data to an honest-but-curious server. Our aim is to evaluate grouping and aggregation with the SUM, COUNT, AVG, MIN, and MAX operations for an authorized user. For each of these five operations, we assume that the public cloud provider and the user do not collude, i.e., the public cloud does not know the secret key of the user. We prove the security of our approach for each operation.
Encrypt or Decrypt? To Make a Single-Key Beyond Birthday Secure Nonce-Based MAC
In CRYPTO 2016, Cogliati and Seurin proposed a highly secure nonce-based MAC called the Encrypted Wegman-Carter with Davies-Meyer ($\textsf{EWCDM}$) construction, defined as $\textsf{E}_{K_2}\bigl(\textsf{E}_{K_1}(N)\oplus N\oplus \textsf{H}_{K_h}(M)\bigr)$ for a nonce $N$ and a message $M$. This construction achieves roughly $2n/3$-bit MAC security, under the assumption that $\textsf{E}$ is a PRP-secure $n$-bit block cipher and $\textsf{H}$ is an almost-xor-universal $n$-bit hash function. In this paper we propose the Decrypted Wegman-Carter with Davies-Meyer ($\textsf{DWCDM}$) construction, which is structurally very similar to its predecessor $\textsf{EWCDM}$ except that the outer encryption call is replaced by decryption. The biggest advantage of $\textsf{DWCDM}$ is that we can make a truly single-key MAC: the two block cipher calls can use the same block cipher key $K=K_1=K_2$. Moreover, we can derive the hash key as $K_h=\textsf{E}_K(1)$, as long as $|K_h|=n$. Whether we use encryption or decryption in the outer layer makes a huge difference; using decryption instead enables us to apply an extended version of Patarin's mirror theory to the security analysis of the construction. $\textsf{DWCDM}$ is secure beyond the birthday bound, roughly up to $2^{2n/3}$ MAC queries and $2^n$ verification queries against nonce-respecting adversaries. $\textsf{DWCDM}$ remains secure up to $2^{n/2}$ MAC queries and $2^n$ verification queries against nonce-misusing adversaries.
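The structural difference between the two constructions is easy to see in code. Below is a toy sketch (our own illustration; the 16-bit shuffled-table PRP and the truncated SHA-256 stand in for the block cipher $\textsf{E}$ and the AXU hash $\textsf{H}$, and are not the paper's instantiation):

    import hashlib, random

    N_BITS = 16                  # toy block size; the paper's n is e.g. 128
    SIZE = 1 << N_BITS

    def make_toy_prp(key: int):
        """A toy keyed PRP and its inverse on N_BITS-bit values
        (stand-in for the block cipher E; illustration only)."""
        rng = random.Random(key)
        table = list(range(SIZE))
        rng.shuffle(table)
        inv = [0] * SIZE
        for x, y in enumerate(table):
            inv[y] = x
        return table.__getitem__, inv.__getitem__

    def axu_hash(kh: int, msg: bytes) -> int:
        """Stand-in for the almost-xor-universal hash H_{K_h}."""
        digest = hashlib.sha256(kh.to_bytes(2, "big") + msg).digest()
        return int.from_bytes(digest[:2], "big")

    E, E_inv = make_toy_prp(key=42)   # single key K = K_1 = K_2
    KH = E(1)                          # hash key derived as K_h = E_K(1)

    def ewcdm_tag(nonce: int, msg: bytes) -> int:
        return E(E(nonce) ^ nonce ^ axu_hash(KH, msg))       # outer *encryption*

    def dwcdm_tag(nonce: int, msg: bytes) -> int:
        return E_inv(E(nonce) ^ nonce ^ axu_hash(KH, msg))   # outer *decryption*

    print(hex(ewcdm_tag(7, b"msg")), hex(dwcdm_tag(7, b"msg")))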
Secure Two-party Threshold ECDSA from ECDSA Assumptions
The Elliptic Curve Digital Signature Algorithm (ECDSA) is one of the most widely used schemes in deployed cryptography. Through its applications in code and binary authentication, web security, and cryptocurrency, it is likely one of the few cryptographic algorithms encountered on a daily basis by the average person. However, its design is such that executing multi-party or threshold signatures in a secure manner is challenging: unlike other, less widespread signature schemes, secure multi-party ECDSA requires custom protocols, which has heretofore implied reliance upon additional cryptographic assumptions and primitives such as the Paillier cryptosystem.
We propose new protocols for multi-party ECDSA key-generation and signing with a threshold of two, which we prove secure against malicious adversaries in the Random Oracle Model using only the Computational Diffie-Hellman Assumption and the assumptions already relied upon by ECDSA itself. Our scheme requires only two messages, and via implementation we find that it outperforms the best prior results in practice by a factor of 56 for key generation and 11 for signing, coming to within a factor of 18 of local signatures. Concretely, two parties can jointly sign a message in just over three milliseconds.
This document is an updated version. A new preface includes errata and notes relevant to the original work, and a brief description of a revised protocol with a revised proof. The original paper appears in unedited form at the end. The authors consider this work to be fully subsumed by the more recent three-round protocol of Doerner, Kondi, Lee, and shelat (2023), and direct new readers to that work.
Modeling Soft Analytical Side-Channel Attacks from a Coding Theory Viewpoint
One important open question in side-channel analysis is to find out whether all the leakage samples in an implementation can be exploited by an adversary, as suggested by masking security proofs. For attacks exploiting a divide-and-conquer strategy, the answer is negative: only the leakages corresponding to the first/last rounds of a block cipher can be exploited. Soft Analytical Side-Channel Attacks (SASCA) have been introduced as a powerful solution to mitigate this limitation. They represent the target implementation and its leakages as a code (similar to a Low Density Parity Check code) that is decoded thanks to belief propagation. Previous works have shown the low data complexities that SASCA can reach in practice. In this paper, we revisit these attacks by modeling them with a variation of the Random Probing Model used in masking security proofs, that we denote as the Local Random Probing Model (LRPM). Our study establishes interesting connections between this model and the erasure channel used in coding theory, leading to the following benefits. First, the LRPM allows bounding the security of concrete implementations against SASCA in a fast and intuitive manner. We use it in order to confirm that the leakage of any operation in a block cipher can be exploited, although the leakages of external operations dominate in known-plaintext/ciphertext attack scenarios. Second, we show that the LRPM is a tool of choice for the (nearly worst-case) analysis of masked implementations in the noisy leakage model, taking advantage of all the operations performed, and leading to new tradeoffs between their amount of randomness and physical noise level. Third, we show that it can considerably speed up the evaluation of other countermeasures such as shuffling.
Forward Private Searchable Symmetric Encryption with Optimized I/O Efficiency
Recently, several practical attacks have raised serious concerns over the security of searchable encryption. These attacks have brought emphasis on forward privacy, which is the key concept behind solutions to adaptive leakage-exploiting attacks, and which will very likely become a must-have property of all new searchable encryption schemes. For a long time, forward privacy implied inefficiency, and thus most existing searchable encryption schemes do not support it. Very recently, Bost (CCS 2016) showed that forward privacy can be obtained without incurring a large communication overhead. However, Bost’s scheme is constructed from a relatively inefficient public-key cryptographic primitive and has poor I/O performance. Both deficiencies significantly hinder the practical efficiency of the scheme and prevent it from scaling to large data settings. To address these problems, we first present FAST, which achieves forward privacy and the same communication efficiency as Bost’s scheme, but uses only symmetric cryptographic primitives. We then present FASTIO, which retains all the good properties of FAST and further improves I/O efficiency. We implemented the two schemes and compared their performance with Bost’s scheme. The experimental results show that both our schemes are highly efficient.
Efficient Delegated Private Set Intersection on Outsourced Private Datasets
Private set intersection (PSI) is an essential cryptographic protocol that has many real world applications. As cloud computing power and popularity have been swiftly growing, it is now desirable to leverage the cloud to store private datasets and delegate PSI computation to it. Although a set of efficient PSI protocols have been designed, none support outsourcing of the datasets and the computation. In this paper, we propose two protocols for delegated PSI computation on outsourced private datasets. Our protocols have a unique combination of properties that make them particularly appealing for a cloud computing setting. Our first protocol, O-PSI, satisfies these properties by using additive homomorphic encryption and point-value polynomial representation of a set. Our second protocol, EO-PSI, is mainly based on a hash table and point-value polynomial representation and it does not require public key encryption; meanwhile, it retains all the desirable properties and is much more efficient than the first one. We also provide a formal security analysis of the two protocols in the semi-honest model and we analyze their performance utilizing prototype implementations we have developed. Our performance analysis shows that EO-PSI scales well and is also more efficient than similar state-of-the-art protocols for large set sizes.
Approximating Private Set Union/Intersection Cardinality with Logarithmic Complexity
The computation of private set union/intersection cardinality (PSU-CA/PSI-CA) is one of the most intensively studied problems in Privacy Preserving Data Mining (PPDM). However, existing protocols are computationally too expensive to be employed in real-world PPDM applications. In response, we propose efficient approximate protocols, whose accuracy can be tuned according to application requirements. We first propose a two-party PSU-CA protocol based on Flajolet-Martin sketches. The protocol has logarithmic computational/communication complexity and relies mostly on symmetric key operations. Thus, it is much more efficient and scalable than existing protocols. In addition, our protocol can hide its output. This feature is necessary in PPDM applications, since the union cardinality is often an intermediate result that must not be disclosed. We then propose a two-party PSI-CA protocol, which is derived from the PSU-CA protocol with virtually no cost. Both our two-party protocols can be easily extended to the multiparty setting. We also design an efficient masking scheme for (1,n)-OT. The scheme is used in optimizing the two-party protocols and is of independent interest, since it can speed up (1,n)-OT significantly when n is large. Last, we show through experiments the effectiveness and efficiency of our protocols.
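The reason Flajolet-Martin sketches fit the union-cardinality problem is that they compose under set union by a simple bitwise OR. A minimal plaintext sketch follows (our own illustration; the actual protocol adds the cryptographic layer and averages many sketches to reduce variance):

    import hashlib

    SKETCH_BITS = 32

    def fm_sketch(items) -> int:
        """Flajolet-Martin sketch: for each item, set the bit at the position
        of the lowest set bit of its hash. Duplicates hit the same bit, so
        the sketch depends only on the underlying set."""
        bitmap = 0
        for item in items:
            h = int.from_bytes(hashlib.sha256(item.encode()).digest()[:4], "big")
            r = (h & -h).bit_length() - 1 if h else SKETCH_BITS - 1
            bitmap |= 1 << r
        return bitmap

    def estimate(bitmap: int) -> float:
        """Estimate cardinality as 2^R / 0.77351, R = index of lowest zero bit."""
        r = 0
        while bitmap & (1 << r):
            r += 1
        return (1 << r) / 0.77351

    a = fm_sketch(f"user{i}" for i in range(5000))
    b = fm_sketch(f"user{i}" for i in range(3000, 9000))
    union = a | b   # the union of two sets is sketched by OR-ing their bitmaps
    print(estimate(union))  # rough estimate of |A union B| = 9000 (high variance)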
Order-LWE and the Hardness of Ring-LWE with Entropic Secrets
We propose a generalization of the celebrated Ring Learning with Errors (RLWE) problem (Lyubashevsky, Peikert and Regev, Eurocrypt 2010, Eurocrypt 2013), wherein the ambient ring is not the ring of integers of a number field, but rather an *order* (a full rank subring). We show that our Order-LWE problem enjoys worst-case hardness with respect to short-vector problems in invertible-ideal lattices *of the order*.
The definition allows us to provide a new analysis for the hardness of the abundantly used Polynomial-LWE (PLWE) problem (Stehlé et al., Asiacrypt 2009), different from the one recently proposed by Rosca, Stehlé and Wallet (Eurocrypt 2018).
This suggests that Order-LWE may be used to analyze and possibly *design* useful relaxations of RLWE.
We show that Order-LWE can naturally be harnessed to prove security for RLWE instances where the ``RLWE secret'' (which often corresponds to the secret-key of a cryptosystem) is not sampled uniformly as required for RLWE hardness. We start by showing worst-case hardness even if the secret is sampled from a subring of the sample space. Then, we study the case where the secret is sampled from an *ideal* of the sample space or a coset thereof (equivalently, some of its CRT coordinates are fixed or leaked). In the latter, we show an interesting threshold phenomenon where the amount of RLWE *noise* determines whether the problem is tractable.
Lastly, we address the long-standing question of whether a high-entropy secret is sufficient for RLWE to be intractable. Our result on sampling from ideals shows that simply requiring high entropy is insufficient. We therefore propose a broad class of distributions for which we conjecture that hardness should hold, and provide evidence via a reduction to a concrete lattice problem.
Out-of-Band Authentication in Group Messaging: Computational, Statistical, Optimal
Extensive efforts are currently put into securing messaging platforms, where a key challenge is that of protecting against man-in-the-middle attacks when setting up secure end-to-end channels. The vast majority of these efforts, however, have so far focused on securing user-to-user messaging, and recent attacks indicate that the security of group messaging is still quite fragile.
We initiate the study of out-of-band authentication in the group setting, extending the user-to-user setting where messaging platforms (e.g., Telegram and WhatsApp) protect against man-in-the-middle attacks by assuming that users have access to an external channel for authenticating one short value (e.g., two users who recognize each other's voice can compare a short value). Inspired by the frameworks of Vaudenay (CRYPTO '05) and Naor et al. (CRYPTO '06) in the user-to-user setting, we assume that users communicate over a completely-insecure channel, and that a group administrator can out-of-band authenticate one short message to all users. An adversary may read, remove, or delay this message (for all or for some of the users), but cannot undetectably modify it.
Within our framework we establish tight bounds on the tradeoff between the adversary's success probability and the length of the out-of-band authenticated message (which is a crucial bottleneck given that the out-of-band channel is of low bandwidth). We consider both computationally-secure and statistically-secure protocols, and for each flavor of security we construct an authentication protocol and prove a lower bound showing that our protocol achieves essentially the best possible tradeoff.
In particular, considering groups that consist of an administrator and $k$ additional users, for statistically-secure protocols we show that at least $(k+1)\cdot (\log(1/\epsilon) - \Theta(1))$ bits must be out-of-band authenticated, whereas for computationally-secure ones $\log(1/\epsilon) + \log k$ bits suffice, where $\epsilon$ is the adversary's success probability. Moreover, instantiating our computationally-secure protocol in the random-oracle model yields an efficient and practically-relevant protocol (which, alternatively, can also be based on any one-way function in the standard model).
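To illustrate the gap between the two flavors (our own numbers, plugged into the stated bounds): for a group with $k = 15$ additional users and adversarial success probability $\epsilon = 2^{-30}$, statistically-secure protocols must out-of-band authenticate at least
\[
(k+1)\cdot(\log(1/\epsilon) - \Theta(1)) = 16 \cdot (30 - \Theta(1)) \approx 480 \text{ bits},
\]
whereas a computationally-secure protocol needs only about $\log(1/\epsilon) + \log k = 30 + \log 15 \approx 34$ bits.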
New Instantiations of the CRYPTO 2017 Masking Schemes
At CRYPTO 2017, Belaïd et al. presented two new private multiplication algorithms over finite fields, to be used in secure masking schemes. To date, these algorithms have the lowest known complexity in terms of bilinear multiplications and random masks respectively, both being linear in the number of shares $d+1$. Yet, a practical drawback of both algorithms is that their safe instantiation relies on finding matrices satisfying certain conditions. In their work, Belaïd et al. only address these up to $d=2$ and $3$ for the first and second algorithm respectively, which has so far limited the practical usefulness of their constructions.
In this paper, we use in turn an algebraic, heuristic, and experimental approach to find many more safe instances of Belaïd et al.'s algorithms. This results in explicit instantiations up to order $d = 6$ over large fields, and up to $d = 4$ over practically relevant fields such as $\mathbb{F}_{2^8}$.
Conjugacy Separation Problem in Braids: an Attack on the Original Colored Burau Key Agreement Protocol
In this paper, we consider the conjugacy separation search problem in braid groups.
We deeply redesign the algorithm presented in (Myasnikov & Ushakov, 2009) and provide experimental evidence that the problem can be solved for $100\%$ of very long randomly generated instances.
The lengths of the tested randomly generated instances are increased by a factor of two compared to the lengths suggested in the original proposal for $120$ bits of security.
An implementation of our attack is freely available in CRAG.
In particular, the implementation contains all the challenging instances we had to deal with on the way to $100\%$ success.
We hope it will be useful to braid-group cryptography community.
Glitch-Resistant Masking Revisited - or Why Proofs in the Robust Probing Model are Needed
Implementing the masking countermeasure in hardware is a delicate task. Various solutions have been proposed for this purpose over the last years: we focus on Threshold Implementations (TIs), Domain-Oriented Masking (DOM), the Unified Masking Approach (UMA) and Generic Low Latency Masking (GLM). The latter generally come with innovative ideas to prevent physical defaults such as glitches. Yet, and in contrast to the situation in software-oriented masking, these schemes have not been formally proven at arbitrary security orders and their composability properties were left unclear. So far, only a 2-cycle implementation of the seminal masking scheme by Ishai, Sahai and Wagner has been shown secure and composable in the robust probing model -- a variation of the probing model aimed to capture physical defaults such as glitches -- for any number of shares. In this paper, we argue that this lack of proofs for TIs, DOM, UMA and GLM makes the interpretation of their security guarantees difficult as the number of shares increases. For this purpose, we first put forward that the higher-order variants of all these schemes are affected by (local or composability) security flaws in the (robust) probing model, due to insufficient refreshing. We then show that composability and robustness against glitches cannot be analyzed independently. We finally detail how these abstract flaws translate into concrete (experimental) attacks, and discuss the additional constraints robust probing security implies on the need of registers. Despite not systematically leading to improved complexities at low security orders, e.g., with respect to the required number of measurements for a successful attack, we argue that these weaknesses provide a case for the need of security proofs in the robust probing model at higher security orders.
Betrayal, Distrust, and Rationality: Smart Counter-Collusion Contracts for Verifiable Cloud Computing
Cloud computing has become an irreversible trend. With it comes the pressing need for verifiability, to assure the client of the correctness of computation outsourced to the cloud. Existing verifiable computation techniques all have a high overhead; deployed in the cloud, they would render cloud computing more expensive than the on-premises counterpart. To achieve verifiability at a reasonable cost, we leverage game theory and propose a smart-contract-based solution. In a nutshell, a client lets two clouds compute the same task, and uses smart contracts to stimulate tension, betrayal and distrust between the clouds, so that rational clouds will not collude and cheat. In the absence of collusion, verification of correctness can be done easily by cross-checking the results from the two clouds. We provide a formal analysis of the games induced by the contracts, and prove that the contracts will be effective under certain reasonable assumptions. By resorting to game theory and smart contracts, we are able to avoid heavy cryptographic protocols. The client only needs to pay two clouds to compute in the clear, plus a small transaction fee to use the smart contracts. We also conducted a feasibility study that involved implementing the contracts in Solidity and running them on the official Ethereum network.
Wide Tweakable Block Ciphers Based on Substitution-Permutation Networks: Security Beyond the Birthday Bound
Substitution-Permutation Networks (SPNs) refer to a family of constructions which build a $wn$-bit (tweakable) block cipher from $n$-bit public permutations. Many widely deployed block ciphers are part of this family and rely on very small public permutations. Surprisingly, this structure has seen little theoretical interest when compared with Feistel networks, another high-level structure for block ciphers.
This paper extends the work initiated by Dodis et al. in three directions. First, we make SPNs tweakable by allowing keyed tweakable permutations in the permutation layer, and prove their security as tweakable block ciphers. Second, we prove beyond-the-birthday-bound security for $2$-round non-linear SPNs with independent S-boxes and independent round keys. Our bounds also tend towards optimal security $2^n$ (in terms of the threshold number of adversarial queries) as the number of rounds increases. Finally, all our constructions admit security proofs in the multi-user setting.
As an application of our results, SPNs can be used to build provably secure wide tweakable block ciphers from several public permutations, or from a block cipher. More specifically, our construction can turn two strong public $n$-bit permutations into a tweakable block cipher working on $wn$-bit blocks and using a $6n$-bit key and an $n$-bit tweak (for any $w\geq 2$); the tweakable block cipher provides security up to $2^{2n/3}$ adversarial queries in the random permutation model, while only requiring $w$ calls to each permutation and $3w$ field multiplications for each $wn$-bit block.
Unbounded Inner-Product Functional Encryption, with Succinct Keys
In 2015, Abdalla et al. introduced Inner-Product Functional Encryption, where both ciphertexts and decryption keys are vectors of fixed size $n$, and keys enable the computation of an inner product between the two. In practice, however, the size of the data parties are dealing with may vary over time. Having a public key of size $n$ can also be inconvenient when dealing with very large vectors.
We define the Unbounded Inner-Product functionality in the context of Public-Key Functional Encryption, and introduce schemes that realize it under standard assumptions. In an Unbounded Inner-Product Functional Encryption scheme, a public key allows anyone to encrypt unbounded vectors, that are essentially mappings from $\mathbb{N}^*$ to $\mathbb{Z}_p$. The owner of the master secret key can generate functional decryption keys for other unbounded vectors. These keys enable one to evaluate the inner product between the unbounded vector underlying the ciphertext and the unbounded vector in the functional decryption key, provided certain conditions on the two vectors are met. We build Unbounded Inner-Product Functional Encryption by introducing pairings, using a technique similar to that of Boneh-Franklin Identity-Based Encryption. A byproduct of this is that our scheme can be made Identity-Based "for free". It is also the first Public-Key Inner-Product Functional Encryption Scheme with a constant-size public key (and master secret key), as well as constant-size functional decryption keys: each consisting of just one group element.
Pushing the Communication Barrier in Secure Computation using Lookup Tables
Secure two-party computation has witnessed significant efficiency improvements in recent years. Current implementations of protocols with security against passive adversaries generate and process data much faster than it can be sent over the network, even with a single thread. This paper introduces novel methods to further reduce the communication bottleneck and round complexity of semi-honest secure two-party computation. Our new methodology creates a trade-off between communication and computation, and we show that the added computing cost for each party is still feasible and practicable in light of the new communication savings. We first improve communication for Boolean circuits with 2-input gates by a factor of 1.9x when evaluated with the protocol of Goldreich-Micali-Wigderson (GMW). As a further step, we change the conventional Boolean circuit representation from 2-input gates to multi-input/multi-output lookup tables (LUTs) which can be programmed to realize arbitrary functions. We construct two protocols for evaluating LUTs, offering a trade-off between online communication and total communication. Our most efficient LUT-based protocol reduces the communication and round complexity by a factor of 2-4x for several basic and complex operations. Our proposed scheme results in a significant overall runtime decrease of up to a factor of 3x on several benchmark functions.
Towards practical key exchange from ordinary isogeny graphs
We revisit the ordinary isogeny-graph based cryptosystems of Couveignes and Rostovtsev–Stolbunov, long dismissed as impractical. We give algorithmic improvements that accelerate key exchange in this framework, and explore the problem of generating suitable system parameters for contemporary pre- and post-quantum security that take advantage of these new algorithms. We also prove the session-key security of this key exchange in the Canetti–Krawczyk model, and the IND-CPA security of the related public-key encryption scheme, under reasonable assumptions on the hardness of computing isogeny walks. Our systems admit efficient key-validation techniques that yield CCA-secure encryption, thus providing an important step towards efficient post-quantum non-interactive key exchange.
Authenticated Encryption with Nonce Misuse and Physical Leakages: Definitions, Separation Results, and Leveled Constructions
We propose definitions and constructions of authenticated encryption (AE) schemes that offer security guarantees even in the presence of nonce misuse and side-channel leakages. This is part of an important ongoing effort to make AE more robust, while preserving appealing efficiency properties. Our definitions consider an adversary enhanced with the leakages of all the computations of an AE scheme, together with the possibility to misuse nonces, be it during all queries (in the spirit of misuse-resistance), or only during training queries (in the spirit of misuse-resilience recently introduced by Ashur et al.). These new definitions offer various insights on the effect of leakages in the security landscape. In particular, we show that, in contrast with the black-box setting, leaking variants of INT-CTXT and IND-CPA security do not imply a leaking variant of IND-CCA security, and that leaking variants of INT-PTXT and IND-CCA do not imply a leaking variant of INT-CTXT. Finally, we propose first instances of modes of operation that satisfy our definitions. In order to optimize their efficiency, we aim at modes that support "leveled implementations", such that the encryption and decryption operations require the use of a small constant number of evaluations of an expensive and heavily protected component, while the bulk of the computations can be performed by cheap and weakly protected block cipher implementations.
Compact Multi-Signatures for Smaller Blockchains
We construct new multi-signature schemes that provide new functionality. Our schemes are designed to reduce the size of the Bitcoin blockchain, but are useful in many other settings where multi-signatures are needed. All our constructions support both signature compression and public-key aggregation. Hence, to verify that a number of parties signed a common message m, the verifier only needs a short multi-signature, a short aggregation of their public keys, and the message m. We give new constructions that are derived from Schnorr signatures and from BLS signatures. Our constructions are in the plain public key model, meaning that users do not need to prove knowledge or possession of their secret key.
In addition, we construct the first short accountable-subgroup multi-signature (ASM) scheme. An ASM scheme enables any subset S of a set of n parties to sign a message m so that a valid signature discloses which subset generated the signature (hence the subset S is accountable for signing m). We construct the first ASM scheme where signature size is only O(k) bits over the description of S, where k is the security parameter. Similarly, the aggregate public key is only O(k) bits, independent of n. The signing process is non-interactive. Our ASM scheme is very practical and well suited for compressing the data needed to spend funds from a t-of-n Multisig Bitcoin address, for any (polynomial size) t and n.
SPDZ2k: Efficient MPC mod 2^k for Dishonest Majority
Most multi-party computation protocols allow secure computation of arithmetic circuits over a finite field, such as the integers modulo a prime.
In the more natural setting of integer computations modulo $2^{k}$, which are useful for simplifying implementations and applications, no solutions with active security are known unless the majority of the participants are honest.
We present a new scheme for information-theoretic MACs that are homomorphic modulo $2^k$, and are as efficient as the well-known standard solutions that are homomorphic over fields.
We apply this to construct an MPC protocol for dishonest majority in the preprocessing model that has efficiency comparable to the well-known SPDZ protocol (Damgård et al., CRYPTO 2012), with operations modulo $2^k$ instead of over a field.
We also construct a matching preprocessing protocol based on oblivious transfer, which is in the style of the MASCOT protocol (Keller et al., CCS 2016) and almost as efficient.
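A minimal sketch of the homomorphic-MAC idea as we read it from the abstract (parameters $k$, $s$ are illustrative; in the real protocol the key $\alpha$, the values and the MACs are all secret-shared among the parties):

    import random

    k, s = 32, 26                   # compute mod 2^k; MACs live mod 2^(k+s)
    M = 1 << (k + s)

    alpha = random.randrange(M)     # global MAC key (secret-shared in the protocol)

    def mac(x: int) -> int:
        """Information-theoretic MAC, linear modulo 2^(k+s). The s extra bits
        compensate for the zero divisors of Z_{2^k}: an additive forgery
        succeeds only with probability about 2^(-s)."""
        return (alpha * x) % M

    # Linearity: MACs add up to the MAC of the sum, so the parties can
    # process additions (and public-scalar multiplications) locally.
    x, y = random.randrange(1 << k), random.randrange(1 << k)
    assert (mac(x) + mac(y)) % M == mac((x + y) % M)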
On the Exact Round Complexity of Secure Three-Party Computation
We settle the exact round complexity of three-party computation (3PC) in the honest-majority setting, for a range of security notions: selective abort, unanimous abort, fairness and guaranteed output delivery. Selective abort security, the weakest of the lot, allows the corrupt parties to selectively deprive some of the honest parties of the output. In the mildly stronger version of unanimous abort, either all or none of the honest parties receive the output. Fairness implies that the corrupted parties receive their output only if all honest parties receive output, and lastly, the strongest notion of guaranteed output delivery implies that the corrupted parties cannot prevent honest parties from receiving their output. It is folklore that guaranteed output delivery implies fairness, which implies unanimous abort, which in turn implies selective abort. We focus on two network settings: pairwise-private channels without and with a broadcast channel.
In the minimal setting of pairwise-private channels, 3PC with selective abort is known to be feasible in just two rounds, while guaranteed output delivery is infeasible to achieve irrespective of the number of rounds. Settling the quest for the exact round complexity of 3PC in this setting, we show that three rounds are necessary and sufficient for unanimous abort and fairness. Extending our study to the setting with an additional broadcast channel, we show that while unanimous abort is achievable in just two rounds, three rounds are necessary and sufficient for fairness and guaranteed output delivery. Our lower bound results extend to any number of parties in the honest-majority setting and imply the tightness of several known constructions.
The fundamental concept of garbled circuits underlies all our upper bounds. Concretely, our constructions involve transmitting and evaluating only a constant number of garbled circuits. Assumption-wise, our constructions rely on injective (one-to-one) one-way functions.
On Distributional Collision Resistant Hashing
Collision resistant hashing is a fundamental concept that forms the basis for many important cryptographic primitives and protocols. A collision resistant hash is a family of compressing functions such that no efficient adversary can find any collision given a random function in the family.
In this work we study a relaxation of collision resistance called distributional collision resistance, introduced by Dubrov and Ishai (STOC '06). This relaxation of collision resistance only guarantees that no efficient adversary, given a random function in the family, can sample a pair $(x,y)$ where $x$ is uniformly random and $y$ is uniformly random conditioned on colliding with $x$.
Our first result shows that distributional collision resistance can be based on the existence of a multi-collision resistant hash (with no additional assumptions). Multi-collision resistance is another relaxation of collision resistance which guarantees that an efficient adversary cannot find any tuple of $k>2$ inputs that collide relative to a random function in the family. The construction is non-explicit, non-black-box, and yields an infinitely-often secure family. This partially resolves a question of Berman et al. (EUROCRYPT '18). We further observe that in a black-box model such an implication (from multi-collision resistance to distributional collision resistance) does not exist.
Our second result is a construction of a distributional collision resistant hash from the average-case hardness of SZK. Previously, this assumption was not known to imply any form of collision resistance (other than the ones implied by one-way functions).
On the security of Jhanwar-Barua Identity-Based Encryption Scheme
In 2008, Jhanwar and Barua presented an improvement of the Boneh-Gentry-Hamburg (BGH) scheme. In addition to reducing the time complexity of the algorithm that finds a solution of the equation $ax^2+Sy^2\equiv 1 \bmod n$, their scheme reduces the number of equations to be solved by combining existing solutions. Susilo et al. extended the Jhanwar-Barua scheme, further reducing the number of equations to be solved. This paper presents a security flaw that appears in both schemes and shows that neither is IND-ID-CPA secure.
On Non-Monotonicity of the Success Probability in Linear Cryptanalysis
Like any other cryptanalytic attack, the success rate of a linear attack is expected to improve as more data becomes available. Bogdanov and Tischhauser (FSE 2013) made the rather surprising claim that the success rate of a linear attack may go down with increasing plaintext amount, after an optimal point. They supported this claim with experimental evidence from an attack on SmallPresent-20. Different explanations have been offered for this surprising phenomenon. In this note, we give quantitative values regarding when this phenomenon can be observed. We conclude that it should not be an issue for attacks in practice, except for those with a tiny bias.
CSI Neural Network: Using Side-channels to Recover Your Artificial Neural Network Information
Machine learning has become mainstream across industries. In this work we pose the following question: is it possible to reverse engineer a neural network by using only side-channel information? We answer the question affirmatively. To this end, we consider a multilayer perceptron as the machine learning architecture of choice and assume a passive attacker capable of measuring only passive side-channels like power, electromagnetic radiation, and timing.
We conduct all experiments on real data and common neural net architectures in order to properly assess the applicability and extendability of such attacks. Our experiments show that the side-channel attacker is able to obtain information about the activation functions, the number of layers and neurons in layers, the number of output classes, and weights in the neural network. Thus, the attacker can efficiently reverse engineer the network using side-channel information.
Next, we show that if the attacker has the knowledge about the neural network architecture, he/she could also recover the inputs to the network with only a single measurement. Finally, we discuss several mitigations one could use to thwart such attacks.
The Curse of Class Imbalance and Conflicting Metrics with Machine Learning for Side-channel Evaluations
We concentrate on machine learning techniques used for profiled side-channel analysis in the presence of imbalanced data. Such scenarios are realistic and occur often, for instance in the Hamming weight or Hamming distance leakage models.
In order to deal with the imbalanced data, we use various balancing techniques, and we show that most of them help in mounting successful attacks when the data is highly imbalanced. In particular, the results with the SMOTE technique are encouraging: we observe scenarios where it reduces the number of necessary measurements by more than a factor of 8 (see the sketch below).
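As a concrete illustration of the balancing step (our own simulation of a Hamming-weight-labelled dataset, not the paper's data), the widely used imbalanced-learn package applies SMOTE in a few lines:

    import numpy as np
    from imblearn.over_sampling import SMOTE  # pip install imbalanced-learn

    rng = np.random.default_rng(0)

    # Simulate a Hamming-weight leakage dataset: intermediate bytes are
    # uniform, so their HW labels 0..8 follow a binomial law -- heavily
    # imbalanced (class 4 is ~70x more frequent than class 0).
    values = rng.integers(0, 256, size=20000)
    y = np.array([bin(v).count("1") for v in values])        # HW class labels
    X = y[:, None] + rng.normal(0.0, 1.0, size=(20000, 4))   # noisy leakages

    print(np.bincount(y))            # imbalanced class counts

    # SMOTE synthesizes new minority-class points by interpolating neighbors.
    X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
    print(np.bincount(y_bal))        # all classes now equally represented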
Next, we provide extensive results on the comparison of machine learning and side-channel metrics, where we show that machine learning metrics (and especially accuracy, the most often used one) can be extremely deceptive. This finding indicates a need to revisit previous works and their results in order to properly assess the performance of machine learning in side-channel analysis.
Improved Non-Interactive Zero Knowledge with Applications to Post-Quantum Signatures
Recent work, including ZKBoo, ZKB++, and Ligero, has developed efficient non-interactive zero-knowledge proofs of knowledge (NIZKPoKs) for arbitrary Boolean circuits based on symmetric-key primitives alone, using the “MPC-in-the-head” paradigm of Ishai et al. We show how to instantiate this paradigm with MPC protocols in the preprocessing model; once optimized, this results in an NIZKPoK with shorter proofs (and comparable computation) than prior work for circuits containing roughly 300–100,000 AND gates. In contrast to prior work, our NIZKPoK also supports witness-independent preprocessing, which allows the prover to move most of its work to an offline phase before the witness is known.
We use our NIZKPoK to construct a signature scheme based only on symmetric-key primitives (and hence with “post-quantum” security). The resulting scheme has shorter signatures than the scheme built using ZKB++ (with comparable signing/verification time), and is even competitive with hash-based signature schemes.
To further highlight the flexibility and power of our NIZKPoK, we also use it to build efficient ring and group signatures based on symmetric-key primitives alone. To our knowledge, the resulting schemes are the most efficient constructions of these primitives that offer post-quantum security.
Minimising Communication in Honest-Majority MPC by Batchwise Multiplication Verification
In this paper, we present two new and very communication-efficient protocols for maliciously secure multi-party computation over fields in the honest-majority setting with abort. Our first protocol improves a recent protocol by Lindell and Nof. Using the so far overlooked tool of batchwise multiplication verification, we speed up their technique for checking correctness of multiplications (along with some other improvements), reducing communication by a factor of 2x to 7x. In particular, in the 3PC setting, each party sends only two field elements per multiplication. We also show how to achieve fairness, which Lindell and Nof left as an open problem. Our second protocol again applies batchwise multiplication verification, this time to perform 3PC by letting two parties run the SPDZ protocol using triples generated by a third party and verified batchwise. In this protocol, each party sends only 4/3 field elements during the online phase and 5/3 field elements during the preprocessing phase.
A Black-Box Construction of Fully-Simulatable, Round-Optimal Oblivious Transfer from Strongly Uniform Key Agreement
We show how to construct maliciously secure oblivious transfer (M-OT) from a strengthening of key agreement (KA) which we call *strongly uniform* KA (SU-KA), where the latter roughly means that the messages sent by one party are computationally close to uniform, even if the other party is malicious. Our transformation is black-box, almost round preserving (adding only a constant overhead of up to two rounds), and achieves standard simulation-based security in the plain model.
As we show, 2-round SU-KA can be realized from cryptographic assumptions such as low-noise LPN, high-noise LWE, Subset Sum, DDH, CDH and RSA---all with polynomial hardness---thus yielding a black-box construction of fully-simulatable, round-optimal, M-OT from the same set of assumptions (some of which were not known before).
Anonymous Multi-Hop Locks for Blockchain Scalability and Interoperability
Tremendous growth in cryptocurrency usage is exposing the inherent scalability issues with permissionless blockchain technology. Payment-channel networks (PCNs) have emerged as the most widely deployed solution to mitigate the scalability issues, allowing the bulk of payments between two users to be carried out off-chain. Unfortunately, as reported in the literature and further demonstrated in this paper, current PCNs do not provide meaningful security and privacy guarantees.
In this work, we study and design secure and privacy-preserving PCNs. We start with a security analysis of existing PCNs, reporting a new attack that applies to all major PCNs, including the Lightning Network, and allows an attacker to steal the fees from honest intermediaries in the same payment path. We then formally define anonymous multi-hop locks (AMHLs), a novel cryptographic primitive that serves as a cornerstone for the design of secure and privacy-preserving PCNs. We present several provably secure cryptographic instantiations that make AMHLs compatible with the vast majority of cryptocurrencies. In particular, we show that (linear) homomorphic one-way functions suffice to construct AMHLs for PCNs supporting a script language (e.g., Ethereum). We also propose a construction based on ECDSA signatures that does not require scripts, thus solving a prominent open problem in the field.
AMHLs constitute a generic primitive whose usefulness goes beyond multi-hop payments in a single PCN and we show how to realize atomic swaps and interoperable PCNs from this primitive. Finally, our performance evaluation on a commodity machine finds that AMHL operations can be performed in less than 100 milliseconds and require less than 500 bytes of communication overhead, even in the worst case. In fact, after acknowledging our attack, the Lightning Network developers have implemented our ECDSA-based AMHLs into their PCN. This demonstrates the practicality of our approach and its impact on the security, privacy, interoperability, and scalability of today’s cryptocurrencies.
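To give a flavor of how a linear homomorphic one-way function can chain locks, here is a toy sketch (our own illustration of the stated primitive, not the paper's full construction; the parameters are far too small for security, and in the real protocol each hop learns only its own blinder, via anonymous means):

    import random

    # Toy group: f(x) = g^x mod p is a linearly homomorphic one-way function,
    # since f(x + y) = f(x) * f(y). Demo parameters only.
    p = 2**127 - 1          # a Mersenne prime; exponents live mod p - 1
    g = 3

    def f(x: int) -> int:
        return pow(g, x, p)

    # Setup (by the payment sender): pick per-hop blinders r_i and publish
    # to hop i the lock f(y_i), where y_{i+1} = y_i + r_i.
    hops = 4
    y = [random.randrange(p - 1)]
    r = [random.randrange(p - 1) for _ in range(hops)]
    for ri in r:
        y.append((y[-1] + ri) % (p - 1))
    locks = [f(yi) for yi in y]

    # Release phase: the receiver reveals y[hops]; each hop i, knowing its
    # own blinder r_i, peels off the opening y_i = y_{i+1} - r_i and checks
    # it against its lock, enabling it to claim its payment in turn.
    opening = y[hops]
    for i in reversed(range(hops)):
        opening = (opening - r[i]) % (p - 1)
        assert f(opening) == locks[i]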
Efficient Range ORAM with $O(\log^{2}{N})$ Locality
Oblivious RAM protocols (ORAMs) allow a client to access data from an untrusted storage device without revealing to that device any information about their access pattern. Typically this is accomplished through random shuffling of the data such that the storage device cannot determine where individual blocks are located, resulting in a highly randomized access pattern. Storage devices, however, are typically optimized for \emph{sequential} access, and the large number of random disk seeks during standard ORAM operation induces a substantial overhead. In this paper, we introduce rORAM, an ORAM specifically suited for accessing ranges of \emph{sequentially logical blocks} while \emph{minimizing the number of random physical disk seeks}. rORAM obtains significantly better asymptotic efficiency than prior designs (Asharov et al., ePrint 2017; Demertzis et al., CRYPTO 2018), reducing {\em both} the number of seeks and the communication complexity by a multiplicative factor of $O(\log N)$. An rORAM prototype is 30-50x faster than Path ORAM for similar range-query workloads on local HDDs, 30x faster for local SSDs, and 10x faster for network block devices. rORAM's novel disk layout can also speed up standard ORAM constructions, e.g., resulting in a 2x faster Path ORAM variant. Importantly, experiments demonstrate suitability for real world applications -- rORAM is up to 5x faster running a file server and up to 11x faster running a range-query intensive video server workload compared to standard Path ORAM.
The Usefulness of Sparsifiable Inputs: How to Avoid Subexponential iO
We consider the problem of removing subexponential reductions to indistinguishability obfuscation (iO) in the context of obfuscating probabilistic programs. Specifically, we show how to apply complexity absorption (Zhandry, Crypto 2016) to the recent notion of probabilistic indistinguishability obfuscation (piO, Canetti et al., TCC 2015). As a result, we obtain a variant of piO which allows obfuscating a large class of probabilistic programs, from polynomially secure indistinguishability obfuscation and extremely lossy functions. In particular, our piO variant is able to obfuscate circuits with specific input domains regardless of the performed computation. We then revisit several (direct or indirect) applications of piO, and obtain
-- a fully homomorphic encryption scheme (without circular security assumptions),
-- a multi-key fully homomorphic encryption scheme with threshold decryption,
-- an encryption scheme secure under arbitrary key-dependent messages,
-- a spooky encryption scheme for all circuits,
-- a function secret sharing scheme with additive reconstruction for all circuits,
all from polynomially secure iO, extremely lossy functions, and, depending on the scheme, other (but polynomial and comparatively mild) assumptions. All of these assumptions are implied by polynomially secure iO and the (non-polynomial, but very well-investigated) exponential DDH assumption. Previously, all the above applications required assuming the subexponential security of iO (in addition to more standard assumptions).
An Abstract Model of UTxO-based Cryptocurrencies with Scripts
In [1], an abstract accounting model for UTxO-based cryptocurrencies was presented. However, that model considered only the simplest kind of transaction (known in Bitcoin as pay-to-pubkey-hash) and abstracted away all aspects related to authorization. This paper extends that model to the general case where the transaction contains validator (a.k.a. scriptPubKey) scripts and redeemer (a.k.a. scriptSig) scripts, which together determine whether the transaction’s fund transfers have been authorized.
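A minimal sketch of the extended model's authorization rule (our own rendering; the class and function names are hypothetical, chosen to mirror the validator/redeemer terminology of the abstract):

    from dataclasses import dataclass
    from typing import Callable

    @dataclass(frozen=True)
    class Output:
        value: int
        validator: Callable[[object], bool]   # scriptPubKey-style lock

    @dataclass(frozen=True)
    class Input:
        out_ref: tuple            # (tx_id, index) of the output being spent
        redeemer: object          # scriptSig-style witness

    def tx_authorized(inputs: list, utxo_set: dict) -> bool:
        """A fund transfer is authorized iff every spent output's validator
        accepts the redeemer supplied by the corresponding input."""
        return all(utxo_set[i.out_ref].validator(i.redeemer) for i in inputs)

    # Example: a pay-to-pubkey-hash-like lock reduces to a fixed validator.
    secret = "alice-signature"
    utxos = {("tx0", 0): Output(50, validator=lambda rdm: rdm == secret)}
    assert tx_authorized([Input(("tx0", 0), redeemer=secret)], utxos)
    assert not tx_authorized([Input(("tx0", 0), redeemer="forged")], utxos)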
On Beyond-Birthday-Bound Security: Revisiting the Development of ISO/IEC 9797-1 MACs
ISO/IEC 9797-1 is an international standard for block-cipher-based Message Authentication Codes (MACs). The current version, ISO/IEC 9797-1:2011, specifies six single-pass CBC-like MAC structures that are capped at birthday-bound security. For higher, beyond-birthday-bound security, it recommends using the concatenation combiner of two single-pass MACs. In this paper, we reveal the invalidity of this suggestion by presenting a birthday-bound forgery attack on the concatenation combiner, which is essentially based on Joux’s multi-collision. Notably, our new forgery attack on the concatenation of two instances of MAC Algorithm 1 with padding scheme 2 requires only 3 queries. Moreover, we look for patches by revisiting the development of ISO/IEC 9797-1 with respect to beyond-birthday-bound security. More specifically, we evaluate the XOR combiner of single-pass CBC-like MACs, which was used in a previous version of ISO/IEC 9797-1.
Error-Detecting in Monotone Span Programs with Application to Communication Efficient Multi-Party Computation
Recent improvements in the state of the art of MPC for non-full-threshold access structures introduced the idea of using a collision-resistant hash function and redundancy in the secret-sharing scheme to construct a communication-efficient MPC protocol which is computationally secure against malicious adversaries, with abort. The prior work is based on replicated secret-sharing; in this work we extend this methodology to {\em any} multiplicative LSSS implementing a $Q_2$ access structure. To do so, we need to establish a folklore property of error detection for such LSSS and their associated Monotone Span Programs. In doing so, we obtain communication-efficient online and offline protocols for MPC in the pre-processing model.
A secure end-to-end verifiable e-voting system using zero knowledge based blockchain
In this paper, we present a cryptographic technique for an authenticated, end-to-end verifiable and secret-ballot election. Voters should receive assurance that their vote is cast as intended, recorded as cast and tallied as recorded. The election system as a whole should ensure that voter coercion is unlikely, even when voters are willing to be influenced. Currently, almost all verifiable e-voting systems require trusted authorities to perform the tallying process; exceptions are the DRE-i and DRE-ip systems. The DRE-ip system removes the requirement of tallying authorities by encrypting ballots in such a way that the election tally can be publicly verified without decrypting the cast ballots. However, DRE-ip necessitates a secure bulletin board (BB) for storing the encrypted ballots, as without it the integrity of the system may be lost and the result can be compromised without detection during the audit phase. In this paper, we modify the DRE-ip system so that if any recorded ballot is tampered with by an adversary before the tallying phase, the tampering will be detected during the tallying phase. In addition, we describe a method using a zero-knowledge-based public blockchain to store these ballots so that they remain tamper-proof. To the best of our knowledge, this is the first end-to-end verifiable direct-recording electronic (DRE) based e-voting system using blockchain. In our case, we assume that the bulletin board is insecure and an adversary has read and write access to it. We also add a secure biometric and government-issued identity-card-based authentication mechanism for voter authentication. The proposed system encrypts ballots in such a way that the election tally can be publicly verified without decrypting the cast ballots, maintaining end-to-end verifiability without requiring a secure bulletin board.
A Note on the Communication Complexity of Multiparty Computation in the Correlated Randomness Model
Secure multiparty computation (MPC) addresses the challenge of evaluating functions on secret inputs without compromising their privacy. A central question in multiparty computation is to understand the amount of communication needed to securely evaluate a circuit of size $s$. In this work, we revisit this fundamental question in the setting of information-theoretically secure MPC in the correlated randomness model, where a trusted dealer distributes correlated random coins, independent of the inputs, to all parties before the start of the protocol. This setting is of strong theoretical interest, and has led to the most practically efficient MPC protocols known to date.
While it is known that protocols with optimal communication (proportional to input plus output size) can be obtained from the LWE assumption, and that protocols with sublinear communication o(s) can be obtained from the DDH assumption, the question of constructing protocols with o(s) communication remains wide open for the important case of information-theoretic MPC in the correlated randomness model. All known protocols in this model require O(s) communication in the online phase; improving this state of affairs is a long-standing open question. In this work, we exhibit the first generic multiparty computation protocol in the correlated randomness model with communication sublinear in the circuit size, for a large class of circuits. More precisely, we show the following: any size-s layered circuit (whose nodes can be partitioned into layers so that any edge connects adjacent layers) can be evaluated with O(s/log log s) communication. Our results hold for both boolean and arithmetic circuits, in the honest-but-curious setting, and do not assume an honest majority. For boolean circuits, we extend our results to handle malicious corruption.
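The layered-circuit condition is purely syntactic and easy to check; as a concrete reading of the definition above, here is a minimal, hypothetical Python helper (names and example are ours, for illustration only):

def is_layered(edges, layer):
    """A circuit is layered when every wire connects adjacent layers.
    edges: iterable of (u, v) wires; layer: dict node -> layer index."""
    return all(layer[v] == layer[u] + 1 for u, v in edges)

# Inputs x, y on layer 0 feed gate g on layer 1, which feeds out on layer 2.
edges = [("x", "g"), ("y", "g"), ("g", "out")]
print(is_layered(edges, {"x": 0, "y": 0, "g": 1, "out": 2}))   # True
print(is_layered(edges, {"x": 0, "y": 1, "g": 1, "out": 2}))   # False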
Cryptanalysis of MORUS
MORUS is a high-performance authenticated encryption algorithm submitted to the CAESAR competition, and recently selected as a finalist. There are three versions of MORUS: MORUS-640 with a 128-bit key, and MORUS-1280 with 128-bit or 256-bit keys. For all versions the security claim for confidentiality matches the key size. In this paper, we analyze the components of this algorithm (initialization, state update and tag generation), and report several results.
As our main result, we present a linear correlation in the keystream of full MORUS, which can be used to distinguish its output from random and to recover some plaintext bits in the broadcast setting. For MORUS-1280, the correlation is $2^{-76}$, which can be exploited after around $2^{152}$ encryptions, less than would be expected for a 256-bit secure cipher. For MORUS-640, the same attack results in a correlation of $2^{-73}$, which does not violate the security claims of the cipher.
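These data complexities follow the standard heuristic that distinguishing a bias of correlation $\varepsilon$ requires on the order of $\varepsilon^{-2}$ samples:
\[
N \approx \varepsilon^{-2}, \qquad \varepsilon = 2^{-76} \Rightarrow N \approx 2^{152} \;(\text{MORUS-1280}), \qquad \varepsilon = 2^{-73} \Rightarrow N \approx 2^{146} \;(\text{MORUS-640}),
\]
and $2^{146}$ lies above the $2^{128}$ bound implied by MORUS-640's 128-bit key, which is why the latter claim is not violated.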
To identify this correlation, we make use of rotational symmetries in MORUS, using linear masks that are invariant under word-rotations of the state. This motivates us to introduce single-word versions of MORUS, called MiniMORUS, which simplify the analysis. The attack has been implemented and verified on MiniMORUS, where it yields a correlation of $2^{-16}$.
We also study reduced versions of the initialization and finalization of MORUS, aiming to evaluate the security margin of these components. We show a forgery attack when finalization is reduced from 10 steps to 3, and a key-recovery attack in the nonce-misuse setting when initialization is reduced from 16 steps to 10. These additional results do not threaten the full MORUS, but studying all aspects of the design is useful to understand its strengths and weaknesses.
Generic Hardness of Inversion on Ring and Its Relation to Self-Bilinear Map
In this paper, we study the generic hardness of the inversion problem on a ring: the problem of computing the inverse of a given prime $c$ using only additions, subtractions and multiplications on the ring. If the characteristic of the underlying ring is public and coprime to $c$, then it is easy to compute the inverse of $c$ with the extended Euclidean algorithm. On the other hand, if the characteristic is hidden, computing the inverse appears difficult. To discuss the generic hardness of the inversion problem, we first extend existing generic ring models to capture a ring of unknown characteristic. Then we prove that there is no generic algorithm to solve the inversion problem in our model when the underlying ring is isomorphic to $\mathbb{Z}_p$ for a randomly chosen prime $p$, assuming the hardness of factoring an unbalanced modulus. We also study a relation between the inversion problem on a ring and a self-bilinear map. We give a ring-based construction of a self-bilinear map, and prove that natural complexity assumptions, including the multilinear computational Diffie-Hellman (MCDH) assumption, hold with respect to the resulting self-bilinear map if the inversion problem is hard on the underlying ring.
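The easy case with a public characteristic is a few lines of standard code; a minimal Python sketch (the helper names ext_gcd and inverse_mod are ours, for illustration):

def ext_gcd(a, b):
    """Extended Euclid: return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def inverse_mod(c, n):
    """Invert c modulo n; this requires KNOWING the characteristic n."""
    g, x, _ = ext_gcd(c % n, n)
    if g != 1:
        raise ValueError("c is not invertible modulo n")
    return x % n

# Example: inverting c = 7 in Z_101 (characteristic public).
assert (inverse_mod(7, 101) * 7) % 101 == 1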
Logistic regression over encrypted data from fully homomorphic encryption
One of the tasks in the $2017$ iDASH secure genome analysis competition was to enable training of logistic regression models over encrypted genomic data. More precisely, given a list of approximately $1500$ patient records, each with $18$ binary features containing information on specific mutations, the idea was for the data holder to encrypt the records using homomorphic encryption, and send them to an untrusted cloud for storage. The cloud could then apply a training algorithm on the encrypted data to obtain an encrypted logistic regression model, which can be sent to the data holder for decryption. In this way, the data holder could successfully outsource the training process without revealing either her sensitive data, or the trained model, to the cloud. Our solution to this problem has several novelties: we use a multi-bit plaintext space in fully homomorphic encryption together with fixed-point number encoding; we combine bootstrapping in fully homomorphic encryption with a scaling operation in fixed-point arithmetic; we use a minimax polynomial approximation to the sigmoid function and the $1$-bit gradient descent method to reduce the plaintext growth in the training process. As a result, our training over encrypted data takes $0.4$ -- $3.2$ hours per iteration of gradient descent.
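To make the training loop concrete, here is a plaintext (unencrypted) Python sketch of the two ingredients named above: a low-degree polynomial standing in for the sigmoid, and sign-only ("1-bit") gradient updates. The coefficients and data are illustrative, not the paper's minimax fit or the iDASH dataset.

import random

def poly_sigmoid(x):
    # Illustrative degree-3 fit to the sigmoid on roughly [-8, 8];
    # NOT the minimax polynomial used in the paper.
    return 0.5 + 0.197 * x - 0.004 * x ** 3

def train(X, Y, iters=100, lr=0.1):
    w = [0.0] * len(X[0])
    for _ in range(iters):
        grad = [0.0] * len(w)
        for x, y in zip(X, Y):
            err = poly_sigmoid(sum(wi * xi for wi, xi in zip(w, x))) - y
            for j, xj in enumerate(x):
                grad[j] += err * xj
        # 1-bit gradient descent: only the sign of each coordinate is
        # kept, which limits plaintext growth under encryption.
        w = [wi - lr * ((g > 0) - (g < 0)) for wi, g in zip(w, grad)]
    return w

# Toy data: 3 binary "mutation" features plus a constant bias column;
# the label is the majority of the three features.
random.seed(0)
X = [[random.randrange(2) for _ in range(3)] + [1.0] for _ in range(200)]
Y = [1 if sum(x[:3]) >= 2 else 0 for x in X]
w = train(X, Y)
correct = sum((poly_sigmoid(sum(wi * xi for wi, xi in zip(w, x))) > 0.5) == (y == 1)
              for x, y in zip(X, Y))
print("weights:", [round(wi, 2) for wi in w], "train accuracy:", correct / len(X))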
Continuous-Source Fuzzy Extractors: Source uncertainty and security
Fuzzy extractors (Dodis et al., Eurocrypt 2004) convert repeated noisy readings of a high-entropy source into the same uniformly distributed key. The functionality of a fuzzy extractor outputs the key when provided with a value close to the original reading of the source. A necessary condition for security, called fuzzy min-entropy, is that every ball of values of the noisy source has small probability mass.
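In the notation of Fuller, Reyzin, and Smith, fuzzy min-entropy measures the weight of the heaviest ball of radius $t$ (the error tolerance):
\[
H^{\mathrm{fuzz}}_{t,\infty}(W) = -\log\Bigl(\max_{x} \Pr\bigl[W \in B_t(x)\bigr]\Bigr),
\]
where $B_t(x)$ is the metric ball of radius $t$ centered at $x$; security requires this quantity to be high.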
Many noisy sources are best modeled using continuous metric spaces. To build continuous-source fuzzy extractors, prior work assumes that the system designer has a good model of the distribution (Verbitskiy et al., IEEE TIFS 2010). However, it is impossible to build an accurate model of a high entropy distribution just by sampling from the distribution.
Model inaccuracy may be a serious problem. We demonstrate a family of continuous distributions V that is impossible to secure. No fuzzy extractor designed for V extracts a meaningful key from an average element of V. This impossibility result is despite the fact that each element W in V has high fuzzy min-entropy. We show a qualitatively stronger negative result for secure sketches, which are used to construct most fuzzy extractors.
Our results are for the Euclidean metric and are information-theoretic in nature. To the best of our knowledge all continuous-source fuzzy extractors argue information-theoretic security.
Fuller, Reyzin, and Smith showed comparable negative results for a discrete metric space equipped with the Hamming metric (Asiacrypt 2016). Continuous Euclidean space necessitates new techniques.
RapidChain: Scaling Blockchain via Full Sharding
A major approach to overcoming the performance and scalability limitations of current blockchain protocols is to use sharding, which is to split the overheads of processing transactions among multiple, smaller groups of nodes. These groups work in parallel to maximize performance while requiring significantly smaller communication, computation, and storage per node, allowing the system to scale to large networks. However, existing sharding-based blockchain protocols still require a linear amount of communication (in the number of participants) per transaction, and hence only partially attain the potential benefits of sharding. We show that this introduces a major bottleneck to the throughput and latency of these protocols. Aside from the limited scalability, these protocols achieve weak security guarantees due to either a small fault resiliency (e.g., 1/8 and 1/4) or high failure probability, or they rely on strong assumptions (e.g., trusted setup) that limit their applicability to mainstream payment systems.
We propose RapidChain, the first sharding-based public blockchain protocol that is resilient to Byzantine faults from up to a 1/3 fraction of its participants, and achieves complete sharding of the communication, computation, and storage overhead of processing transactions without assuming any trusted setup. We introduce an optimal intra-committee consensus algorithm that can achieve very high throughputs via block pipelining, a novel gossiping protocol for large blocks, and a provably-secure reconfiguration mechanism to ensure robustness.
Using an efficient cross-shard transaction verification technique, RapidChain avoids gossiping transactions to the entire network. Our empirical evaluations suggest that RapidChain can process (and confirm) more than 7,300 tx/sec with an expected confirmation latency of roughly 8.7 seconds in a network of 4,000 nodes with an overwhelming time-to-failure of more than 4,500 years.
Supersingular Isogeny Oblivious Transfer (SIOT)
In this paper we present an Oblivious Transfer (OT) protocol that combines an OT scheme with the Supersingular Isogeny Diffie-Hellman (SIDH) primitive. Our proposal is a candidate for post-quantum secure OT and demonstrates that SIDH naturally supports OT functionality. We consider the protocol in its simplest configuration and analyze its security.
Characterizing Collision and Second-Preimage Resistance in Linicrypt
Linicrypt (Carmer & Rosulek, Crypto 2016) refers to the class of algorithms that make calls to a random oracle and otherwise manipulate values via fixed linear operations. We give a characterization of collision-resistance and second-preimage resistance for a significant class of Linicrypt programs (specifically, those that achieve domain separation on their random oracle queries via nonces). Our characterization implies that collision-resistance and second-preimage resistance are equivalent, in an asymptotic sense, for this class. Furthermore, there is a polynomial-time procedure for determining whether such a Linicrypt program is collision/second-preimage resistant.
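To make the model concrete: a hypothetical Linicrypt-style program in Python, where SHA-256 stands in for the random oracle, every query is domain-separated by a nonce, and all other computation is XOR, a fixed linear operation over GF(2).

import hashlib

def oracle(nonce, data):
    # Random-oracle call, domain-separated by a one-byte nonce
    # (SHA-256 is an illustrative instantiation).
    return hashlib.sha256(bytes([nonce]) + data).digest()

def xor(a, b):
    # The only non-oracle operation: a fixed linear map over GF(2).
    return bytes(x ^ y for x, y in zip(a, b))

def compress(m1, m2):
    # Toy two-block hash on 32-byte blocks:
    # H(m1, m2) = RO(1, m1) xor RO(2, m2) xor m1.
    return xor(xor(oracle(1, m1), oracle(2, m2)), m1)

print(compress(b"A" * 32, b"B" * 32).hex())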
From FE Combiners to Secure MPC and Back
Functional encryption (FE) has incredible applications towards computing on encrypted data. However, constructing the most general form of this primitive has remained elusive. Although some candidate constructions exist, they rely on nonstandard assumptions, and thus, their security has been questioned. An FE combiner attempts to make use of these candidates while minimizing the trust placed on any individual FE candidate. Informally, an FE combiner takes in a set of FE candidates and outputs a secure FE scheme if at least one of the candidates is secure.
Another fundamental area in cryptography is secure multi-party computation (MPC), which has been extensively studied for several decades. In this work, we initiate a formal study of the relationship between functional encryption (FE) combiners and secure multi-party computation (MPC). In particular, we show implications in both directions between these primitives. As a consequence of these implications, we obtain the following main results.
1) A two-round semi-honest MPC protocol in the plain model secure against up to (n-1) corruptions, with communication complexity proportional only to the depth of the circuit being computed, assuming LWE. Prior two-round protocols that achieved this communication complexity required a common reference string.
2) A functional encryption combiner based on pseudorandom generators (PRGs) in NC^1. Such PRGs can be instantiated from assumptions such as DDH and LWE. Previous constructions of FE combiners were known only from the learning with errors assumption. Using this result, we build a universal construction of functional encryption: an explicit construction of functional encryption based only on the assumptions that functional encryption exists and that PRGs in NC^1 exist.
An efficient structural attack on NIST submission DAGS
We present an efficient key recovery attack on code-based encryption schemes using some quasi-dyadic alternant codes with extension degree 2. This attack breaks the DAGS proposal recently submitted to NIST.
On Renyi Entropies and their Applications to Guessing Attacks in Cryptography
We consider single and multiple attacker scenarios in guessing and obtain bounds on various success parameters in terms of Renyi entropies. We also obtain a new derivation of the union bound.
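For orientation, the standard definitions underlying such bounds are the Rényi entropy of order $\alpha$ and Arikan's classical lower bound on guessing moments (background facts, not the paper's new bounds):
\[
H_\alpha(X) = \frac{1}{1-\alpha}\log\sum_{x} P_X(x)^{\alpha}, \qquad
E\bigl[G(X)^{\rho}\bigr] \;\ge\; \frac{2^{\rho\, H_{1/(1+\rho)}(X)}}{(1+\ln|\mathcal{X}|)^{\rho}},
\]
where $G(X)$ is the number of guesses an optimal strategy needs to find $X$.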
R3C3: Cryptographically secure Censorship Resistant Rendezvous using Cryptocurrencies
Cryptocurrencies and blockchains are set to play a major role in the financial and supply-chain systems. Their presence and acceptance across different geopolitical corridors, including in repressive regimes, have been one of their striking features. In this work, we leverage this popularity for bootstrapping censorship resistant (CR) communication. We formalize the notion of a stego-bootstrapping scheme and formally describe the security notions of the scheme in terms of rareness and security against chosen-covertext attacks. We present R3C3, a Cryptographically secure Censorship-Resistant Rendezvous using Cryptocurrencies. R3C3 allows a censored user to interact with a decoder entity outside the censored region, through blockchain transactions as rendezvous, to obtain bootstrapping information such as a CR proxy and its public key. Unlike the usual bootstrapping approaches (e.g., emailing), which offer heuristic security at best, R3C3 employs public-key steganography over blockchain transactions to ensure cryptographic security, while the blockchain transaction costs may deter entry-point harvesting attacks. We develop bootstrapping rendezvous over Bitcoin, Zcash, Monero and Ethereum as well as the typical mining process, and analyze their effectiveness in terms of cryptocurrency network volume and the monetary cost introduced. With its highly cryptographic structure, Zcash is the outright winner for normal users, offering 1168 bytes of bandwidth per transaction at a fee of only 0.03 USD, while mining pool managers have a free, extremely high-bandwidth rendezvous when they mine a block.
Floppy-Sized Group Signatures from Lattices
We present the first lattice-based group signature scheme whose cryptographic artifacts are of size small enough to be usable in practice: for a group of $2^{25}$ users, signatures take 910 kB and public keys are 501 kB. Our scheme builds upon two recently proposed lattice-based primitives: the verifiable encryption scheme by Lyubashevsky and Neven (Eurocrypt 2017) and the signature scheme by Boschini, Camenisch, and Neven (IACR ePrint 2017). To achieve such short signatures and keys, we first re-define verifiable encryption to allow one to encrypt a function of the witness, rather than the full witness. This definition enables more efficient realizations of verifiable encryption and is of independent interest. Second, to minimize the size of the signatures and public keys of our group signature scheme, we revisit the proof of knowledge of a signature and the proofs in the verifiable encryption scheme provided in the respective papers.
Time-space complexity of quantum search algorithms in symmetric cryptanalysis: applying to AES and SHA-2
The performance of cryptanalytic quantum search algorithms is usually inferred from query complexity, which hides the overhead induced by an implementation. To shed light on quantitative complexity analysis that removes these hidden factors, we provide a framework for estimating time-space complexity, carefully accounting for the characteristics of the target cryptographic functions. Processor and circuit parallelization methods are taken into account, resulting in time-space trade-off curves in terms of depth and qubit count. The method shows how to rank different circuit designs in order of their efficiency. The framework is applied to the representative cryptosystems that NIST refers to as a guideline for security parameters, reassessing the security strengths of AES and SHA-2.
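As a point of reference for such trade-off curves (standard facts about Grover search, not the paper's refined estimates): searching an $n$-bit key space takes about $\frac{\pi}{4}2^{n/2}$ sequential Grover iterations, and distributing the search over $S$ parallel quantum circuits reduces the depth only by a factor of $\sqrt{S}$:
\[
D \approx \frac{\pi}{4}\sqrt{\frac{2^{n}}{S}},
\]
so halving the attack depth costs a fourfold increase in circuit count, which is why depth and qubit count must be traded off jointly.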
Tighter Security Proofs for GPV-IBE in the Quantum Random Oracle Model
In (STOC, 2008), Gentry, Peikert, and Vaikuntanathan proposed the first identity-based encryption (GPV-IBE) scheme based on a post-quantum assumption, namely, the learning with errors (LWE) assumption.
Since their proof was carried out only in the random oracle model (ROM) and not in the quantum random oracle model (QROM), it remained unclear whether the scheme was truly post-quantum or not.
In (CRYPTO, 2012), Zhandry developed new techniques to be used in the QROM and proved the security of GPV-IBE in the QROM, hence answering in the affirmative that GPV-IBE is indeed post-quantum.
However, since the general technique developed by Zhandry incurred a large reduction loss, there was a wide gap between the concrete efficiency and security level provided by GPV-IBE in the ROM and QROM.
Furthermore, regardless of being in the ROM or QROM, GPV-IBE is not known to have a tight reduction in the multi-challenge setting. Considering that in the real world an adversary can obtain many ciphertexts, it is desirable to have a security proof that does not degrade with the number of challenge ciphertexts.
In this paper, we provide a much tighter proof for the GPV-IBE in the QROM in the single-challenge setting.
In addition, we also show that a slight variant of the GPV-IBE has an almost tight reduction in the multi-challenge setting both in the ROM and QROM, where the reduction loss is independent of the number of challenge ciphertexts. Our proof departs from the traditional partitioning technique and resembles the approach used in the public key encryption scheme of Cramer and Shoup (CRYPTO, 1998). Our proof strategy allows the reduction algorithm to program the random oracle the same way for all identities, and naturally fits the QROM setting, where an adversary may query a superposition of all identities in one random oracle query. Notably, our proofs are much simpler than the one by Zhandry and conceptually much easier to follow for cryptographers not familiar with quantum computation. Although at a high level the techniques used for the single- and multi-challenge settings are similar, the technical details are quite different. For the multi-challenge setting, we rely on the Katz-Wang technique (CCS, 2003) to overcome some obstacles regarding the leftover hash lemma.
From Keys to Databases -- Real-World Applications of Secure Multi-Party Computation
We discuss the rapidly expanding range of applications of a cryptographic technique called Multi-Party Computation. For many decades this was perceived to be of purely theoretical interest, but now it has started to find application in a number of use cases. We highlight in this paper a number of these, ranging from securing small high-value items such as cryptographic keys, through to securing an entire database.