All papers in 2017 (Page 9 of 1262 results)

Last updated:  2017-05-28
Leakage-Resilient Tweakable Encryption from One-Way Functions
Suvradip Chakraborty, Chester Rebeiro, Debdeep Mukhopadhyay, C. Pandu Rangan
In this paper, we initiate the study of leakage-resilient tweakable encryption schemes in the relative key-leakage model, where the adversary can obtain (arbitrary) partial information about the secret key. We also focus on the minimal and generic assumptions needed to construct such a primitive. Interestingly, we show provably secure constructions of leakage-resilient (LR) tweakable encryption based on the sole assumption that one-way functions (OWF) exist, via some interesting intermediate generic connections. A central tool used in our construction of LR tweakable encryption is the notion of a symmetric-key tweakable weak hash proof system, which we introduce. This can be seen as a generalization of the symmetric-key weak hash proof framework of Hazay et al. (Eurocrypt'13). Along the way, we also introduce a new primitive called tweakable weak pseudo-random functions (t-wPRF) and show how to generically construct it from weak PRFs. We then construct an LR version of t-wPRF and use it to construct LR tweakable encryption.
Last updated:  2018-07-11
Security Definitions For Hash Functions: Combining UCE and Indifferentiability
Daniel Jost, Ueli Maurer
Hash functions are one of the most important cryptographic primitives, but their desired security properties have proven to be remarkably hard to formalize. To prove the security of a protocol using a hash function, the random oracle model (ROM) is nowadays often used due to its simplicity and its strong security guarantees. Moreover, hash function constructions are commonly proven to be secure by showing them to be indifferentiable from a random oracle when using an ideal compression function. However, it is well known that no hash function realizes a random oracle and no real compression function realizes an ideal one. As an alternative to the ROM, Bellare et al. recently proposed the notion of universal computational extractors (UCE). This notion formalizes that a family of functions ``behaves like a random oracle'' for ``real-world'' protocols while avoiding the general impossibility results. However, in contrast to the indifferentiability framework, UCE is formalized as a multi-stage game without clear composition guarantees. As a first contribution, we introduce context-restricted indifferentiability (CRI), a generalization of indifferentiability that allows us to model that the random oracle does not compose generally but can only be used within a well-specified set of protocols run by the honest parties, thereby making the provided composition guarantees explicit. We then show that UCE and its variants can be phrased as a special case of CRI. Moreover, we show how our notion of CRI leads to generalizations of UCE. As a second contribution, we prove that the hash function constructed by Merkle-Damgård satisfies one of the well-known UCE variants if we assume that the compression function satisfies one of our generalizations of UCE, basing the overall security on a plausible assumption. This result further validates the Merkle-Damgård construction and shows that UCE-like assumptions can serve both as a valid reference point for modular protocol analyses and for the design of hash functions, linking those two aspects in a framework with explicit composition guarantees.
Last updated:  2017-05-26
Transitioning to a Quantum-Resistant Public Key Infrastructure
Nina Bindel, Udyani Herath, Matthew McKague, Douglas Stebila
To ensure uninterrupted cryptographic security, it is important to begin planning the transition to post-quantum cryptography. In addition to creating post-quantum primitives, we must also plan how to adapt the cryptographic infrastructure for the transition, especially in scenarios such as public key infrastructures (PKIs) with many participants. The use of hybrids — multiple algorithms in parallel — will likely play a role during the transition for two reasons: “hedging our bets” when the security of newer primitives is not yet certain but the security of older primitives is already in question; and achieving security and functionality both in post-quantum-aware implementations and, in a backwards-compatible way, with not-yet-upgraded software. In this paper, we investigate the use of hybrid digital signature schemes. We consider several methods for combining signature schemes, and give conditions on when the resulting hybrid signature scheme is unforgeable. Additionally, we address a new notion concerning the inability of an adversary to separate a hybrid signature into its components. For both unforgeability and non-separability, we give a novel security hierarchy based on how quantum the attack is. We then turn to three real-world standards involving digital signatures and PKI: certificates (X.509), secure channels (TLS), and email (S/MIME). We identify possible approaches to supporting hybrid signatures in these standards while retaining backwards compatibility, which we test in popular cryptographic libraries and implementations, noting especially the inability of some software to handle larger certificates.
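To make the combiner idea concrete, here is a minimal sketch of the simplest hybrid: concatenate the two component signatures and accept only if both verify. It uses two Ed25519 instances from the Python `cryptography` library purely as stand-ins; an actual hybrid of the kind studied in the paper would pair a classical scheme with a post-quantum one.

```python
# Minimal sketch of a concatenation combiner C(S1, S2): sign under both
# component schemes, accept only if BOTH signatures verify. Unforgeable as
# long as at least one component remains unbroken. Two Ed25519 instances
# stand in for the two schemes (an assumption for illustration only).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def hybrid_keygen():
    sk1, sk2 = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()
    return (sk1, sk2), (sk1.public_key(), sk2.public_key())

def hybrid_sign(sk, msg: bytes) -> bytes:
    sk1, sk2 = sk
    return sk1.sign(msg) + sk2.sign(msg)   # Ed25519 signatures are 64 bytes

def hybrid_verify(pk, msg: bytes, sig: bytes) -> bool:
    pk1, pk2 = pk
    try:
        pk1.verify(sig[:64], msg)          # both components must verify
        pk2.verify(sig[64:], msg)
        return True
    except InvalidSignature:
        return False

sk, pk = hybrid_keygen()
assert hybrid_verify(pk, b"transition", hybrid_sign(sk, b"transition"))
```

Note that the hybrid signature is simply larger than either component, which is exactly why the paper flags software that cannot handle larger certificates.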
Last updated:  2017-05-26
Security Analysis of Arbiter PUF and Its Lightweight Compositions Under Predictability Test
Phuong Ha Nguyen, Durga Prasad Sahoo, Rajat Subhra Chakraborty, Debdeep Mukhopadhyay
Unpredictability is an important security property of Physically Unclonable Functions (PUFs) in the context of statistical attacks, where the correlation between challenge-response pairs is explicitly exploited. In the existing literature on PUFs, the Hamming Distance test, denoted by $\mathrm{HDT}(t)$, was proposed to evaluate the unpredictability of PUFs; it is a simplified case of the Propagation Criterion test $\mathrm{PC}(t)$. The objective of these testing schemes is to estimate the output transition probability when $t$ or fewer bits flip, and ideally this probability should be 0.5. In this work, we show that the aforementioned two testing schemes are not enough to ensure the unpredictability of a PUF design. We propose a new test, denoted $\mathrm{HDT}(\mathbf{e},t)$. This testing scheme is a fine-tuned version of the previous schemes, as it considers the flipping bit pattern vector $\mathbf{e}$ along with the parameter $t$. As a contribution, we provide a comprehensive discussion and analytic interpretation of the $\mathrm{HDT}(t)$, $\mathrm{PC}(t)$ and $\mathrm{HDT}(\mathbf{e},t)$ test schemes for the Arbiter PUF (APUF), XOR PUF and Lightweight Secure PUF (LSPUF). Our analysis establishes that the $\mathrm{HDT}(\mathbf{e},t)$ test is more general than the $\mathrm{HDT}(t)$ and $\mathrm{PC}(t)$ tests. In addition, we demonstrate a few scenarios where the adversary can exploit the information obtained from the analysis of the $\mathrm{HDT}(\mathbf{e},t)$ properties of APUF, XOR PUF and LSPUF to develop statistical attacks on them, if the ideal value of $\mathrm{HDT}(\mathbf{e},t)=0.5$ is not achieved for a given PUF. We validate our theoretical observations using simulated and FPGA-implemented APUF, XOR PUF and LSPUF designs.
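As an illustration of what an $\mathrm{HDT}(\mathbf{e},t)$-style measurement estimates, the sketch below approximates the probability that a one-bit response flips when the challenge bits selected by a pattern $\mathbf{e}$ (with $t$ set bits) are inverted; for an unpredictable design this should be close to 0.5. A keyed hash stands in for the PUF, and the challenge length and trial count are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch of estimating HDT(e, t): the empirical probability that the
# 1-bit response flips when the challenge bits in pattern `e` are inverted.
# A keyed SHA-256 stands in for a PUF; a real test queries hardware.
import hashlib, secrets

N = 64                                  # challenge length (assumption)
KEY = secrets.token_bytes(16)           # stand-in for the PUF's randomness

def puf_response(challenge: int) -> int:
    d = hashlib.sha256(KEY + challenge.to_bytes(8, "big")).digest()
    return d[0] & 1                     # toy 1-bit "PUF" response

def hdt(e: int, trials: int = 100_000) -> float:
    flips = 0
    for _ in range(trials):
        c = secrets.randbits(N)
        flips += puf_response(c) ^ puf_response(c ^ e)
    return flips / trials

e = (1 << 5) | 1                        # flip pattern with t = 2 set bits
print(hdt(e))                           # close to 0.5 for this ideal stand-in
```

A real APUF would show pattern-dependent deviations from 0.5 here, which is exactly the leakage the paper's statistical attacks exploit.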
Last updated:  2017-06-09
Fully Homomorphic Encryption Using Multivariate Polynomials
Matthew Tamayo-Rios, Jean-Charles Faugère, Ludovic Perret, Peng Hui How, Robin Zhang
Efficient and secure third-party computation has many practical applications in cloud computing. We develop a new approach for building fully homomorphic encryption (FHE) schemes, starting with the intuition of using algebraic descriptions of the encryption and decryption functions to construct a functionally complete set of homomorphic boolean operators. We use this approach to design a compact, efficient asymmetric cryptosystem that supports secure third-party evaluation of arbitrary boolean functions. In the process, we introduce a new hard problem, a more difficult variant of the classical Isomorphism of Polynomials (IP) problem, that we call Obfuscated-IP.
Last updated:  2017-05-25
Universal Construction of Cheater-Identifiable Secret Sharing Against Rushing Cheaters without Honest Majority
Masahito Hayashi, Takeshi Koshiba
For conventional secret sharing, if cheaters can submit possibly forged shares after observing the shares of the honest users in the reconstruction phase, they can disturb the protocol and reconstruct the true secret. To overcome this problem, secret sharing schemes with cheater-identification properties have been proposed. Existing protocols for cheater-identifiable secret sharing assume non-rushing cheaters or an honest majority. In this paper, we remove both conditions simultaneously, and give a universal construction from any secret sharing scheme. To this end, we propose the concepts of "individual identification" and "agreed identification".
Last updated:  2018-01-28
Proxy Re-Encryption and Re-Signatures from Lattices
Xiong Fan, Feng-Hao Liu
Proxy re-encryption (PRE) and proxy re-signature (PRS) were introduced by Blaze, Bleumer and Strauss [Eurocrypt '98]. Basically, PRE allows a semi-trusted proxy to transform a ciphertext encrypted under one key into an encryption of the same plaintext under another key, without revealing the underlying plaintext; since then, many interesting applications have been explored, and many constructions in various settings have been proposed. PRS allows a semi-trusted proxy to transform Alice's signature on a message into Bob's signature on the same message, but the proxy cannot produce new valid signatures on new messages for either Alice or Bob. Regarding PRE-related progress, Canetti and Hohenberger [CCS '07] defined a stronger notion -- CCA security -- and constructed a bi-directional PRE scheme. Later on, several works considered CCA-secure PRE based on bilinear group assumptions. Very recently, Kirshanova [PKC '14] proposed the first single-hop CCA1-secure PRE scheme based on the learning with errors (LWE) assumption. Regarding PRS-related progress, Ateniese and Hohenberger [CCS '05] formalized this primitive and provided efficient constructions in the random oracle model. At CCS 2008, Libert and Vergnaud presented the first multi-hop uni-directional proxy re-signature scheme in the standard model, using assumptions in bilinear groups. In this work, we first point out a subtle but serious mistake in the security proof of the work by Kirshanova. This reopens the direction of lattice-based CCA1-secure constructions, even in the single-hop setting. Then we construct a single-hop PRE scheme that is proven secure in our new tag-based CCA-PRE model. Next, we construct the first multi-hop PRE construction. Lastly, we construct the first PRS scheme from lattices that is proven secure in our proposed unified security model.
Last updated:  2017-06-15
Vector Encoding over Lattices and Its Applications
Daniel Apon, Xiong Fan, Feng-Hao Liu
In this work, we design a new lattice encoding structure for vectors. Our encoding can be used to achieve a packed FHE scheme that allows some SIMD operations and can be used to improve all the prior IBE schemes and signatures in the series. In particular, with respect to the FHE setting, our method improves over the prior packed GSW structure of Hiromasa et al. (PKC '15), as we do not rely on the circular assumption required in their work. Moreover, we can use the packing and unpacking method to extract each single element, so that the homomorphic operation supports element-wise as well as cross-element computation. In the IBE scenario, we improve substantially over previous lattice-based constructions supporting $O(\lambda)$-bit length identities, such as Yamada (Eurocrypt '16), Katsumata-Yamada (Asiacrypt '16) and Yamada (Crypto '17), by shrinking the master public key to three matrices under the standard Learning With Errors assumption. Additionally, our techniques from IBE can be adapted to construct a compact digital signature scheme, which achieves existential unforgeability under the standard Short Integer Solution (SIS) assumption with small polynomial parameters.
Last updated:  2017-09-24
Algorand: Scaling Byzantine Agreements for Cryptocurrencies
Yossi Gilad, Rotem Hemo, Silvio Micali, Georgios Vlachos, Nickolai Zeldovich
Algorand is a new cryptocurrency system that can confirm transactions with latency on the order of a minute while scaling to many users. Algorand ensures that users never have divergent views of confirmed transactions, even if some of the users are malicious and the network is partitioned. In contrast, existing cryptocurrencies allow for temporary forks and therefore require a long time, on the order of an hour, to confirm transactions with high confidence. Algorand uses a new Byzantine Agreement (BA) protocol to reach consensus among users on the next set of transactions. To scale the consensus to many users, Algorand uses a novel mechanism based on Verifiable Random Functions that allows users to privately check whether they are selected to participate in the BA to agree on the next set of transactions, and to include a proof of their selection in their network messages. In Algorand's BA protocol, users do not keep any private state except for their private keys, which allows Algorand to replace participants immediately after they send a message. This mitigates targeted attacks on chosen participants after their identity is revealed. We implement Algorand and evaluate its performance on 1,000 EC2 virtual machines, simulating up to 500,000 users. Experimental results show that Algorand confirms transactions in under a minute, achieves 30$\times$ Bitcoin's throughput, and incurs almost no penalty for scaling to more users.
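The private selection check can be sketched as follows; this is a hedged illustration of VRF-based sortition, not Algorand's concrete protocol. A deterministic Ed25519 signature stands in for a real VRF, and the stake and committee-size parameters are made-up values.

```python
# Hedged sketch of "cryptographic sortition": each user privately evaluates
# a VRF on the round seed and is selected iff the output, mapped to [0, 1),
# falls below a threshold proportional to its stake; the VRF output doubles
# as the proof included in network messages. A deterministic Ed25519
# signature is only a stand-in for a real VRF here.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sortition(sk, seed: bytes, stake: float, total_stake: float,
              committee_size: int):
    proof = sk.sign(seed)                  # deterministic, hence VRF-like
    y = int.from_bytes(hashlib.sha256(proof).digest(), "big") / 2**256
    threshold = committee_size * stake / total_stake
    return proof if y < threshold else None

sk = Ed25519PrivateKey.generate()
proof = sortition(sk, b"round-42-seed", stake=1.0, total_stake=10_000.0,
                  committee_size=2_000)
print("selected" if proof else "not selected")   # ~20% chance here
```

Because the check is local and the proof is publicly verifiable against the user's public key, an adversary learns who was selected only after the message is sent, which is the replacement property the abstract describes.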
Last updated:  2017-05-25
Subtleties in Security Definitions for Predicate Encryption with Public Index
Johannes Blömer, Gennadij Liske
We take a critical look at established security definitions for predicate encryption (PE) with public index under chosen-plaintext attack (CPA) and under chosen-ciphertext attack (CCA). In contrast to conventional public-key encryption (PKE), security definitions for PE have to deal with user collusion, which is modeled by an additional key generation oracle. We identify three different formalizations of key handling in the literature implicitly assumed to lead to the same security notion. Contrary to this assumption, we prove that the corresponding models result in two different security notions under CPA and three different security notions under CCA. Similarly to the recent results for PKE and conventional key-encapsulation mechanism (KEM) (Journal of Cryptology, 2015), we also analyze subtleties in security definitions for PE and predicate key-encapsulation mechanism (P-KEM) regarding the so-called "no-challenge-decryption" condition. While the results for PE and PKE are similar, the results for P-KEM significantly differ from the corresponding results for conventional KEM. Our analysis is based on appropriate definitions of semantic security and indistinguishability of encryptions for PE under different attack scenarios. These definitions complement related security definitions for identity-based encryption and functional encryption. As a result of our work we suggest security definitions for PE and P-KEM under different attack scenarios.
Last updated:  2017-08-03
Oblivious Neural Network Predictions via MiniONN transformations
Jian Liu, Mika Juuti, Yao Lu, N. Asokan
Machine learning models hosted in a cloud service are increasingly popular but put privacy at risk: clients sending prediction requests to the service need to disclose potentially sensitive information. In this paper, we explore the problem of privacy-preserving predictions: after each prediction, the server learns nothing about the clients' input and the clients learn nothing about the model. We present MiniONN, the first approach for transforming an existing neural network into an oblivious neural network supporting privacy-preserving predictions with reasonable efficiency. Unlike prior work, MiniONN requires no change to how models are trained. To this end, we design oblivious protocols for operations commonly used in neural network prediction models. We show that MiniONN outperforms existing work in terms of response latency and message sizes. We demonstrate the wide applicability of MiniONN by transforming several typical neural network models trained on standard datasets.
Last updated:  2017-05-23
Efficient Compilers for After-the-Fact Leakage: from CPA to CCA-2 secure PKE to AKE
Suvradip Chakraborty, Goutam Paul, C. Pandu Rangan
The goal of leakage-resilient cryptography is to construct cryptographic algorithms that are secure even if the adversary obtains side-channel information from the real-world implementation of these algorithms. Most prior works on leakage-resilient cryptography consider leakage models where the adversary has access to the leakage oracle before the challenge ciphertext is generated (before-the-fact leakage). In this model, there are generic compilers that transform any leakage-resilient CPA-secure public key encryption (PKE) scheme into its CCA-2 variant using Naor-Yung-type transformations. In this work, we give an efficient generic compiler for transforming a leakage-resilient CPA-secure PKE into a leakage-resilient CCA-2 secure PKE in the after-the-fact split-state (bounded) memory leakage model, where the adversary has access to the leakage oracle even after the challenge phase. The salient feature of our transformation is that the leakage rate (defined as the ratio of the amount of leakage to the size of the secret key) of the transformed after-the-fact CCA-2 secure PKE is the same as the leakage rate of the underlying after-the-fact CPA-secure PKE, which is $1-o(1)$. We then present another generic compiler for transforming an after-the-fact leakage-resilient CCA-2 secure PKE into a leakage-resilient authenticated key exchange (AKE) protocol in the bounded after-the-fact leakage-resilient eCK (BAFL-eCK) model proposed by Alawatugoda et al. (ASIACCS'14). To the best of our knowledge, this gives the first compiler that transforms any leakage-resilient CCA-2 secure PKE into an AKE protocol in the leakage variant of the eCK model.
Last updated:  2017-05-23
Privacy-preserving biometric authentication: challenges and directions
Elena Pagnin, Aikaterini Mitrokotsa
An emerging direction for authenticating people is the adoption of biometric authentication systems. Biometric credentials are becoming increasingly popular as a means of authenticating people due to the wide range of advantages that they provide with respect to classical authentication methods (e.g., password-based authentication). The most characteristic feature of this authentication method is the naturally strong bond between a user and her biometric credentials. This very same advantageous property, however, raises serious security and privacy concerns in case the biometric trait gets compromised. In this article, we present the most challenging issues that need to be taken into consideration when designing secure and privacy-preserving biometric authentication protocols. More precisely, we describe the main threats against privacy-preserving biometric authentication systems and give directions on possible countermeasures in order to design secure and privacy-preserving biometric authentication protocols.
Last updated:  2017-05-23
Differentially 4-Uniform Permutations with the Best Known Nonlinearity from Butterflies
Shihui Fu, Xiutao Feng, Baofeng Wu
Many block ciphers use permutations defined over the finite field $\mathbb{F}_{2^{2k}}$ with low differential uniformity, high nonlinearity, and high algebraic degree to provide confusion. Due to the lack of knowledge about the existence of almost perfect nonlinear (APN) permutations over $\mathbb{F}_{2^{2k}}$, which have the lowest possible differential uniformity, when $k>3$, constructions of differentially 4-uniform permutations are usually considered. However, it is also very difficult to construct such permutations together with high nonlinearity; there are very few known families of such functions which have the best known nonlinearity and a high algebraic degree. At Crypto'16, Perrin et al. introduced a structure named butterfly, which leads to permutations over $\mathbb{F}_{2^{2k}}$ with differential uniformity at most 4 and very high algebraic degree when $k$ is odd. Determining the nonlinearity of these permutations was posed as an open problem in Perrin et al.'s paper and solved by Canteaut et al., who showed it equals $2^{2k-1}-2^k$. In this paper, we extend Perrin et al.'s work and study the functions constructed from butterflies with exponent $e=2^i+1$. It turns out that these functions over $\mathbb{F}_{2^{2k}}$ with odd $k$ have differential uniformity at most 4 and algebraic degree $k+1$. Moreover, we prove that for any integer $i$ and odd $k$ such that $\gcd(i,k)=1$, the nonlinearity equality holds, which also gives another solution to the open problem proposed by Perrin et al. This greatly expands the list of differentially 4-uniform permutations with good nonlinearity and hence provides more candidates for the design of block ciphers.
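For readers who want to check such properties numerically, the sketch below computes the differential uniformity $\max_{a \neq 0,\, b}\, \#\{x : S(x \oplus a) \oplus S(x) = b\}$ of an S-box given as a lookup table. As a small stand-in (not a butterfly itself) it uses the cube map over $\mathbb{F}_{2^5}$, a classical APN permutation, so the computed value is 2; a differentially 4-uniform permutation would yield 4.

```python
# Compute the differential uniformity of an S-box given as a table:
# max over a != 0 and b of #{x : S(x ^ a) ^ S(x) == b}.
# Stand-in S-box: the cube map x -> x^3 over GF(2^5), which is APN.

N = 5
MOD = 0b100101            # x^5 + x^2 + 1, irreducible over GF(2)

def gf_mul(a: int, b: int) -> int:
    """Carry-less multiplication modulo MOD in GF(2^5)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a >> N:        # reduce when degree reaches 5
            a ^= MOD
        b >>= 1
    return r

S = [gf_mul(x, gf_mul(x, x)) for x in range(1 << N)]   # S(x) = x^3

def differential_uniformity(S):
    n, best = len(S), 0
    for a in range(1, n):
        counts = [0] * n
        for x in range(n):
            counts[S[x ^ a] ^ S[x]] += 1
        best = max(best, max(counts))
    return best

print(differential_uniformity(S))   # -> 2 (APN)
```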
Last updated:  2018-02-28
Obfuscation of Bloom Filter Queries from Ring-LWE
Alex Davidson
We devise a virtual black-box (VBB) obfuscator for querying whether set elements are stored within Bloom filters, with security based on the Ring Learning With Errors (RLWE) problem and strongly universal hash functions. Our construction uses an abstracted encoding scheme that we instantiate using the Gentry, Gorbunov and Halevi (GGH15) multilinear map, with an explicit security reduction to RLWE. This represents an improvement on the functionality and security guarantees compared with the conjunction obfuscator introduced by Brakerski et al. (ITCS 2016), where security follows from a non-standard RLWE variant. Immediate applications of our work arise from any common usage of Bloom filters, such as efficient set intersection testing. Our obfuscated program allows this functionality to be executed in a non-interactive manner, whilst preventing the natural leakage that occurs when providing offline access to a Bloom filter. Compared to more general obfuscators for evasive functions, we demonstrate a significant asymptotic reduction in size and required computation for obfuscating set intersection queries. The obfuscator of Wichs and Zirdelis (EPRINT 2017) requires \(O(4^{n \log n})\) encodings for obfuscating circuits computing the intersection of sets of size \(n\), requiring the usage of additional primitives such as FHE to allow sets of polynomial size. Our construction requires only \(O(kn)\) total encodings and operations for evaluation, where \(k \ll n\). Moreover, the size of our obfuscator is independent of the size of the elements that are contained in the set. Our results, alongside recent and concurrent work, can be seen as another step forward in obfuscating wider classes of evasive functions using standard assumptions and models.
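For reference, the unobfuscated functionality at stake is just a Bloom filter membership query, sketched below; the parameters m and k and the SHA-256-based index derivation are illustrative assumptions, not the paper's instantiation (which uses strongly universal hashes).

```python
# A plain (unobfuscated) Bloom filter: k hash functions map each element to
# k bit positions; a query reports "present" iff all k bits are set. No
# false negatives; a tunable false-positive rate. This is the membership
# functionality whose queries the paper obfuscates.
import hashlib

class BloomFilter:
    def __init__(self, m: int = 1024, k: int = 7):   # illustrative sizes
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _indices(self, item: bytes):
        for i in range(self.k):                      # k derived indices
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes):
        for idx in self._indices(item):
            self.bits[idx] = 1

    def query(self, item: bytes) -> bool:
        return all(self.bits[idx] for idx in self._indices(item))

bf = BloomFilter()
bf.add(b"alice"); bf.add(b"bob")
print(bf.query(b"alice"), bf.query(b"carol"))        # True False (w.h.p.)
```

Offline access to the raw bit array leaks the filter's contents to dictionary guessing; the obfuscator's goal is to provide only oracle access to `query`.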
Last updated:  2017-05-23
Block Chain based Searchable Symmetric Encryption
Huige Li, Haibo Tian, Fangguo Zhang
The mechanism of traditional Searchable Symmetric Encryption (SSE) is pay-then-use: if a user wants to search for documents that contain specific keywords, he must first pay the server, and only then can he use the search service. Under this arrangement, several problems can arise. After the user pays the service fee, the server may disappear due to poor management or may return nothing; as a result, the money the user paid cannot be recovered quickly. The server may also return incorrect document sets to the user in order to save its own cost. Once such events happen, an arbitration institution must mediate, which takes a long time, and to settle the dispute the user must additionally pay the arbitration institution. Ideally, when the user realizes the server has a tendency to cheat in the search task, he should be able to immediately and automatically withdraw his money to safeguard his rights. However, existing SSE protocols cannot satisfy this demand. To resolve this dilemma, we find a compromise by introducing the block chain into SSE. Our scheme achieves the three goals stated below. Firstly, if the server returns nothing to the user after receiving the search token, the user obtains compensation from the server, because the server can infer some important information from the index and this token; moreover, the user does not pay the service charge. Secondly, if the documents that the server returns are false, the server receives no service fee and is additionally punished. Lastly, a user who receives some bitcoin from the server at the beginning may terminate the protocol early, making the server a victim; to prevent this, the server broadcasts a transaction to redeem his pledge after an appointed time.
Last updated:  2022-03-15
Secretly Embedding Trapdoors into Contract Signing Protocols
Diana Maimut, George Teseleanu
Contract signing protocols have been proposed and analyzed for more than three decades now. One of the main problems that appeared while studying such schemes is the impossibility of achieving both fairness and guaranteed output delivery. As workarounds, cryptographers have put forth three main categories of contract signing schemes: gradual release, optimistic, and concurrent or legally fair schemes. Concurrent signature schemes and legally fair protocols do not rely on trusted arbitrators and, thus, may seem more attractive to users. However, precisely because such schemes boost user trust, an attacker may cleverly devise specific applications that abuse it. Thus, our work focuses on embedding trapdoors into contract signing protocols. In particular, we describe and analyze various SETUP (Secretly Embedded Trapdoor with Universal Protection) mechanisms which can be injected into concurrent signature schemes and legally fair protocols without keystones.
Last updated:  2017-12-13
Practical Strongly Invisible and Strongly Accountable Sanitizable Signatures
Michael Till Beck, Jan Camenisch, David Derler, Stephan Krenn, Henrich C. Pöhls, Kai Samelin, Daniel Slamanig
Sanitizable signatures are a variant of digital signatures where a designated party (the sanitizer) can update admissible parts of a signed message. At PKC’17, Camenisch et al. introduced the notion of invisible sanitizable signatures, which hides from an outsider which parts of a message are admissible. Their security definition of invisibility, however, does not consider dishonest signers. Along the same lines, their signer-accountability definition does not prevent the signer from falsely accusing the sanitizer of having issued a signature on a sanitized message by exploiting the malleability of the signature itself. Both issues may limit the usefulness of their scheme in certain applications. We revise their definitional framework and present a new construction eliminating these shortcomings. In contrast to Camenisch et al.’s construction, ours requires only standard building blocks instead of chameleon hashes with ephemeral trapdoors. This makes this now even stronger primitive more attractive for practical use. We underpin the practical efficiency of our scheme with concrete benchmarks of a prototype implementation.
Last updated:  2018-07-11
CrowdBC: A Blockchain-based Decentralized Framework for Crowdsourcing
Ming Li, Jian Weng, Anjia Yang, Wei Lu, Yue Zhang, Lin Hou, Jia-Nan Liu, Yang Xiang, Robert H. Deng
Crowdsourcing systems, which utilize human intelligence to solve complex tasks, have gained considerable interest and adoption in recent years. However, the majority of existing crowdsourcing systems rely on central servers, which are subject to the weaknesses of the traditional trust-based model, such as a single point of failure. They are also vulnerable to distributed denial of service (DDoS) and Sybil attacks due to the involvement of malicious users. In addition, high service fees charged by the crowdsourcing platform may hinder the development of crowdsourcing. Addressing these potential issues has both research and practical value. In this paper, we conceptualize a blockchain-based decentralized framework for crowdsourcing named CrowdBC, in which a requester’s task can be solved by a crowd of workers without relying on any trusted third party, users’ privacy can be guaranteed, and only low transaction fees are required. In particular, we introduce the architecture of our proposed framework, based on which we give a concrete scheme. We further implement a software prototype on the Ethereum public test network with a real-world dataset. Experimental results show the feasibility, usability and scalability of our proposed crowdsourcing system.
Last updated:  2020-01-24
Practical Graphs for Optimal Side-Channel Resistant Memory-Hard Functions
Joel Alwen, Jeremiah Blocki, Ben Harsha
A memory-hard function (MHF) $f_n$ with parameter $n$ can be computed in sequential time and space $n$. Simultaneously, a high amortized parallel area-time complexity (aAT) is incurred per evaluation. In practice, MHFs are used to limit the rate at which an adversary (using a custom computational device) can evaluate a security-sensitive function that still occasionally needs to be evaluated by honest users (using an off-the-shelf general-purpose device). The most prevalent examples of such sensitive functions are key derivation functions (KDFs) and password hashing algorithms, where rate limits help mitigate off-line dictionary attacks. As the honest users' inputs to these functions are often (low-entropy) passwords, special attention is given to a class of side-channel-resistant MHFs called iMHFs. Essentially all iMHFs can be viewed as some mode of operation (making $n$ calls to some round function) given by a directed acyclic graph (DAG) with very low indegree. Recently, a combinatorial property of a DAG has been identified (called ``depth-robustness'') which results in good provable security for an iMHF based on that DAG. Depth-robust DAGs have also proven useful in other cryptographic applications. Unfortunately, until now, all known very depth-robust DAGs were impractically complicated, and little was known about their exact (i.e. non-asymptotic) depth-robustness both in theory and in practice. In this work we build and analyze (both formally and empirically) several exceedingly simple and efficiently navigable practical DAGs for use in iMHFs and other applications. For each DAG we: - Prove that its depth-robustness is asymptotically maximal. - Prove bounds at least $3$ orders of magnitude better on its exact depth-robustness compared to known bounds for other practical iMHFs. - Implement and empirically evaluate its depth-robustness and aAT against a variety of state-of-the-art (and several new) depth-reduction and low-aAT attacks. We find that, against all attacks, the new DAGs perform significantly better in practice than Argon2i, the most widely deployed iMHF in practice. Along the way we also improve the best known empirical attacks on the aAT of Argon2i by implementing and testing several heuristic versions of a (hitherto purely theoretical) depth-reduction attack. Finally, we demonstrate the practicality of our constructions by modifying the Argon2i code base to use one of the new high-aAT DAGs. Experimental benchmarks on a standard off-the-shelf CPU show that the new modifications do not adversely affect the impressive throughput of Argon2i (despite seemingly enjoying significantly higher aAT).
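The quantity at the heart of depth-robustness can be probed in a few lines: a DAG is $(e,d)$-depth-robust if, after deleting any $e$ nodes, a path of length $d$ survives. The sketch below computes the longest surviving path for one given deletion set on a toy Argon2i-style DAG (each node depends on its predecessor and one uniformly random earlier node); the graph and deletion set are illustrative assumptions, and a real depth-reduction attack searches for a small deletion set that kills all long paths.

```python
# Hedged sketch: longest path remaining in a DAG after deleting a node set.
# `parents[v]` lists the predecessors of node v; nodes are topologically
# ordered, as in an iMHF evaluation graph.
import random

def remaining_depth(parents, deleted):
    """Longest path (in edges) avoiding every node in `deleted`."""
    depth, longest = [0] * len(parents), 0
    for v in range(len(parents)):
        if v in deleted:
            continue
        d = 0
        for u in parents[v]:
            if u not in deleted:
                d = max(d, depth[u] + 1)
        depth[v] = d
        longest = max(longest, d)
    return longest

random.seed(1)
n = 1_000
# Toy Argon2i-style DAG: node v depends on v-1 and one random earlier node.
parents = [[]] + [[v - 1] + ([random.randrange(v)] if v > 1 else [])
                  for v in range(1, n)]
print(remaining_depth(parents, deleted=set()))                 # n - 1 = 999
print(remaining_depth(parents, deleted=set(range(0, n, 10))))  # much smaller
```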
Last updated:  2017-09-22
On the Depth-Robustness and Cumulative Pebbling Cost of Argon2i
Jeremiah Blocki, Samson Zhou
Argon2i is a data-independent memory hard function that won the password hashing competition. The password hashing algorithm has already been incorporated into several open source crypto libraries such as libsodium. In this paper we analyze the cumulative memory cost of computing Argon2i. On the positive side we provide a lower bound for Argon2i. On the negative side we exhibit an improved attack against Argon2i which demonstrates that our lower bound is nearly tight. In particular, we show that (1) An Argon2i DAG is $\left(e,O\left(n^3/e^3\right)\right)$-reducible. (2) The cumulative pebbling cost for Argon2i is at most $O\left(n^{1.768}\right)$. This improves upon the previous best upper bound of $O\left(n^{1.8}\right)$ [Alwen and Blocki, EURO S&P 2017]. (3) An Argon2i DAG is $\left(e,\tilde{\Omega}\left(n^3/e^3\right)\right)$-depth robust. By contrast, analysis of [Alwen et al., EUROCRYPT 2017] only established that Argon2i was $\left(e,\tilde{\Omega}\left(n^2/e^2\right)\right)$-depth robust. (4) The cumulative pebbling complexity of Argon2i is at least $\tilde{\Omega}\left( n^{1.75}\right)$. This improves on the previous best bound of $\Omega\left( n^{1.66}\right)$ [Alwen et al., EUROCRYPT 2017] and demonstrates that Argon2i has higher cumulative memory cost than competing proposals such as Catena or Balloon Hashing. We also show that Argon2i has high {\em fractional} depth-robustness which strongly suggests that data-dependent modes of Argon2 are resistant to space-time tradeoff attacks.
Last updated:  2017-10-10
New Approach to Practical Leakage-Resilient Public-Key Cryptography
Suvradip Chakraborty, Janaka Alawatugoda, C. Pandu Rangan
We present a new approach to constructing several leakage-resilient cryptographic primitives, including leakage-resilient public-key encryption (PKE) schemes, authenticated key exchange (AKE) protocols and low-latency key exchange (LLKE) protocols. To this end, we introduce a new primitive called the leakage-resilient non-interactive key exchange (LR-NIKE) protocol. We introduce a generic security model for LR-NIKE protocols, which can be instantiated in both the bounded and continuous-memory leakage ((B/C)-ML) settings. We then show a secure construction of an LR-NIKE protocol in the bounded-memory leakage (BML) setting that achieves an optimal leakage rate, i.e., $1-o(1)$. Finally, we show how to construct the aforementioned leakage-resilient primitives from such an LR-NIKE protocol, as summarized below. All the primitives also achieve the same (optimal) leakage rate as the underlying LR-NIKE protocol. We show how to construct a leakage-resilient IND-CCA-2-secure PKE scheme in the BML model generically from an LR-NIKE protocol. Our construction differs from the state-of-the-art constructions of leakage-resilient IND-CCA-2-secure PKE schemes, which use hash proof techniques to achieve leakage resilience. Moreover, our transformation preserves the leakage rate of the underlying LR-NIKE and admits a more efficient construction than previous such PKE constructions. We introduce a new leakage model for AKE protocols in the BML setting, and show how to construct a leakage-resilient AKE protocol starting from an LR-NIKE protocol. We introduce the first-ever leakage model for LLKE protocols in the BML setting, and the first construction of such a leakage-resilient LLKE from an LR-NIKE protocol.
Last updated:  2019-03-27
Cryptographic Security Analysis of T-310
Nicolas T. Courtois, Klaus Schmeh, Jörg Drobick, Jacques Patarin, Maria-Bristena Oprisanu, Matteo Scarlata, Om Bhallamudi
T-310 is an important Cold War cipher. It was the principal encryption algorithm used to protect various state communication lines in East Germany throughout the 1980s. The cipher seems to be quite robust, and until now, no cryptography researcher has proposed an attack on T-310. In this paper we provide a detailed analysis of T-310 in the context of modern cryptography research and of other important or similar ciphers developed in the same period. We introduce new notations which show the peculiar internal structure of this cipher in a new light. We point out a number of significant strong and weak properties of this cipher. Finally, we propose several new attacks on T-310.
Last updated:  2017-05-22
Practically Efficient Secure Single-Commodity Multi-Market Auctions
Abdelrahaman Aly, Mathieu Van Vyve
We study the problem of securely building single-commodity multi-market auction mechanisms. We introduce a novel greedy algorithm and its corresponding privacy-preserving implementation using secure multi-party computation. More specifically, we determine the quantity of supply and demand bids maximizing welfare. Each bid is attached to a specific market, but exchanges between different markets are allowed up to some upper limit. The general goal is for the players to bid their intended valuations without concerns about what the other players can learn. This problem is inspired by day-ahead electricity markets, where there is substantial transmission capacity between the different markets, but it applies to other commodity markets like gas. Furthermore, we provide computational results with a specific C++ implementation of our algorithm and the necessary MPC primitives. We can solve problems of 1945 bids and 4 markets in 1280 seconds when online/offline phases are considered. Finally, we report on possible set-ups, workload distributions and possible trade-offs for real-life applications of our results, based on this experimentation and prototyping.
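Stripped of the MPC layer and of the inter-market exchange limits, the greedy welfare maximization can be sketched in the clear as follows; the bid format and the numbers are illustrative assumptions, not the paper's data.

```python
# Hedged, non-secure sketch of greedy welfare maximization: match the
# highest-value demand bids with the cheapest supply bids while the demand
# price still covers the supply price. The real protocol runs this under
# MPC and enforces per-market exchange limits, both omitted here.
# Bids are (price, quantity) pairs.

def clear(demand, supply):
    demand = sorted(demand, reverse=True)     # highest willingness first
    supply = sorted(supply)                   # cheapest offers first
    welfare = traded = 0.0
    i = j = 0
    while i < len(demand) and j < len(supply):
        dp, dq = demand[i]
        sp, sq = supply[j]
        if dp < sp:                           # no more profitable trades
            break
        q = min(dq, sq)
        welfare += (dp - sp) * q
        traded += q
        demand[i] = (dp, dq - q)
        supply[j] = (sp, sq - q)
        if demand[i][1] == 0: i += 1
        if supply[j][1] == 0: j += 1
    return welfare, traded

print(clear(demand=[(50, 10), (40, 5), (20, 8)],
            supply=[(10, 6), (30, 12)]))      # -> (370.0, 15.0)
```

Running the comparisons and updates on secret-shared values is what drives the reported MPC cost.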
Last updated:  2017-05-22
GLITCH: A Discrete Gaussian Testing Suite For Lattice-Based Cryptography
James Howe, Máire O'Neill
Lattice-based cryptography is one of the most promising areas within post-quantum cryptography, and offers versatile, efficient, and high-performance security services. The aim of this paper is to verify the correctness of the discrete Gaussian sampling component, one of the most important modules within lattice-based cryptography. In this paper, the GLITCH software test suite is proposed, which performs statistical tests on discrete Gaussian sampler outputs. An incorrectly operating sampler, for example due to hardware or software errors, has the potential to leak secret-key information and could thus be a potential attack vector for an adversary. Moreover, statistical test suites are already common for pseudo-random number generators (PRNGs), and as lattice-based cryptography becomes more prevalent, it is important to develop a method to test the correctness and randomness of discrete Gaussian sampler designs. Additionally, due to the theoretical requirements for the discrete Gaussian distribution within lattice-based cryptography, certain statistical tests for distribution correctness become unsuitable; therefore, a number of tests are surveyed. The final GLITCH test suite provides 11 adaptable statistical analysis tests that assess the exactness of a discrete Gaussian sampler, and it can be used to verify any software or hardware sampler design.
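In the spirit of such a suite, here is one simple moment test: it flags a discrete Gaussian sampler whose empirical mean and variance drift from the targets of 0 and roughly $\sigma^2$. The reference sampler, the $\sigma$ value, and the slack threshold are illustrative assumptions; GLITCH itself runs a broader battery of tests.

```python
# Hedged sketch of a moment test for a discrete Gaussian sampler. The
# reference sampler draws from exact truncated weights proportional to
# exp(-x^2 / (2 sigma^2)); a faulty implementation would fail the check.
import math, random

SIGMA, TAU = 3.19, 12                    # illustrative Ring-LWE-style sigma
VALS = list(range(-int(TAU * SIGMA), int(TAU * SIGMA) + 1))
WEIGHTS = [math.exp(-x * x / (2 * SIGMA ** 2)) for x in VALS]

def sampler(n):                          # exact reference sampler
    return random.choices(VALS, weights=WEIGHTS, k=n)

def moment_test(xs, sigma, slack=0.05):
    """Flag samplers whose mean/variance drift from 0 / ~sigma^2."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    ok = (abs(mean) < slack * sigma
          and abs(var - sigma ** 2) < slack * sigma ** 2)
    return ok, round(mean, 4), round(var, 4)

print(moment_test(sampler(200_000), SIGMA))   # (True, ~0.0, ~10.18)
```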
Last updated:  2024-05-19
Slothful reduction
Michael Scott
In the implementation of many public key schemes, there is a need to implement modular arithmetic. Typically this consists of addition, subtraction, multiplication and (occasionally) division with respect to a prime modulus. To resist certain side-channel attacks it helps if implementations are ``constant time''. As the calculations proceed there is potentially a need to reduce the result of an operation to its remainder modulo the prime modulus. However often this reduction can be delayed, a process known as ``lazy reduction''. The idea is that results do not have to be fully reduced at each step, that full reduction takes place only occasionally, hence providing a performance benefit. Here we extend the idea to determine the circumstances under which reduction can be delayed to the very end of a particular public key operation.
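A minimal sketch of the lazy-reduction idea, assuming a dot product as the workload: products are accumulated without intermediate reductions and reduced once at the end. Python integers never overflow, so the explicit bound check below merely marks where a constant-time implementation must ensure the accumulator still fits its limbs.

```python
# Lazy reduction sketch: eager version reduces every step; lazy version
# reduces once at the end, which is correct as long as the intermediate
# values stay below the representation bound.
import random

P = 2**255 - 19                          # example prime modulus

def dot_mod_eager(a, b, p=P):
    acc = 0
    for x, y in zip(a, b):
        acc = (acc + x * y) % p          # reduce every iteration
    return acc

def dot_mod_lazy(a, b, p=P, bound=2**520):
    acc = 0
    for x, y in zip(a, b):
        acc += x * y                     # no reduction inside the loop
        assert acc < bound               # stand-in for "fits in our limbs"
    return acc % p                       # a single reduction at the end

a = [random.randrange(P) for _ in range(8)]
b = [random.randrange(P) for _ in range(8)]
assert dot_mod_eager(a, b) == dot_mod_lazy(a, b)
```

The performance gain in real implementations comes from skipping the per-step reduction logic, which is also easier to keep constant time.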
Last updated:  2017-05-22
A Uniform Class of Weak Keys for Universal Hash Functions
Kaiyan Zheng, Peng Wang
In this paper we investigate weak keys of universal hash functions (UHFs) from the perspective of their combinatorial properties. We find that any UHF has a general class of keys that makes the combinatorial properties disappear entirely, and even compromises the security of UHF-based schemes, such as the Wegman-Carter scheme, the UHF-then-PRF scheme, etc. From this class of keys, we obtain a general method to search for weak-key classes of UHFs, which is able to derive all previous weak-key classes of UHFs found by intuition or experience. Moreover, we give a weak-key class of the BRW polynomial function, which was once believed to have no weak-key issue, and exploit such weak keys to mount a distinguishing attack and a forgery attack against DTC - a BRW-based authenticated encryption scheme. Furthermore, in Grain-128a, with the linear structure revealed by weak-key classes of its UHF, we can recover the first $(32+b)$ bits of the UHF key, spending no more than $1$ encryption and $(2^{32} + b)$ decryption queries.
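The flavour of such weak keys can be seen in a toy example (not one of the paper's BRW or Grain-128a classes): for a plain polynomial hash, the key 0 maps every message to 0, so any two messages collide and a Wegman-Carter MAC instantiated with that key authenticates nothing.

```python
# Toy weak-key illustration for a polynomial UHF:
# H_k(m_1..m_l) = m_1*k + m_2*k^2 + ... + m_l*k^l (mod p).
# The key k = 0 sends EVERY message to 0 -- a universal collision.

P = 2**61 - 1                           # example prime modulus

def poly_hash(key, blocks, p=P):
    acc = 0
    for m in reversed(blocks):          # Horner evaluation of the polynomial
        acc = (acc + m) * key % p
    return acc

m1 = [1, 2, 3]
m2 = [9, 9, 9, 9]
assert poly_hash(0, m1) == poly_hash(0, m2) == 0   # weak key: all collide
print(poly_hash(12345, m1), poly_hash(12345, m2))  # a good key separates them
```

The paper's contribution is a systematic way of finding such collapsing key classes, including far less obvious ones than k = 0.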
Last updated:  2017-10-02
Analyzing Multi-Key Security Degradation
Atul Luykx, Bart Mennink, Kenneth G. Paterson
The multi-key, or multi-user, setting challenges cryptographic algorithms to maintain high levels of security when used with many different keys, by many different users. Its significance lies in the fact that in the real world, cryptography is rarely used with a single key in isolation. A folklore result, proved by Bellare, Boldyreva, and Micali for public-key encryption in EUROCRYPT 2000, states that the success probability in attacking any one of many independently keyed algorithms can be bounded by the success probability of attacking a single instance of the algorithm, multiplied by the number of keys present. Although sufficient for settings in which not many keys are used, once cryptographic algorithms are used on an internet-wide scale, as is the case with TLS, the effect of multiplying by the number of keys can drastically erode security claims. We establish a sufficient condition on cryptographic schemes and security games under which multi-key degradation is avoided. As illustrative examples, we discuss how AES and GCM behave in the multi-key setting, and prove that GCM, as a mode, does not have multi-key degradation. Our analysis allows limits on the amount of data that can be processed per key by GCM to be significantly increased. This leads directly to improved security for GCM as deployed in TLS on the Internet today.
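A quick back-of-the-envelope makes the erosion concrete, assuming the generic folklore bound $\mathrm{Adv}^{\mathrm{multi}} \le u \cdot \mathrm{Adv}^{\mathrm{single}}$ for $u$ keys; the $2^{-60}$ figure is purely illustrative.

```python
# Folklore hybrid bound: with u independently keyed instances, a single-key
# advantage of 2^-60 erodes u-fold. At internet scale (u ~ 2^30 TLS keys),
# only ~2^-30 of the original margin remains.
for log_u in (0, 10, 20, 30):
    print(f"u = 2^{log_u:>2}: generic multi-key bound <= 2^{log_u - 60}")
```

Schemes meeting the paper's sufficient condition, such as GCM as a mode, avoid this u-fold loss entirely.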
Last updated:  2017-08-19
FourQ on embedded devices with strong countermeasures against side-channel attacks
Zhe Liu, Patrick Longa, Geovandro Pereira, Oscar Reparaz, Hwajeong Seo
This work deals with the energy-efficient, high-speed and high-security implementation of elliptic curve scalar multiplication, elliptic curve Diffie-Hellman (ECDH) key exchange and elliptic curve digital signatures on embedded devices using FourQ and incorporating strong countermeasures to thwart a wide variety of side-channel attacks. First, we set new speed records for constant-time curve-based scalar multiplication, DH key exchange and digital signatures at the 128-bit security level with implementations targeting 8, 16 and 32-bit microcontrollers. For example, our software computes a static ECDH shared secret in 6.9 million cycles (or 0.86 seconds @8MHz) on a low-power 8-bit AVR microcontroller which, compared to the fastest Curve25519 and genus-2 Kummer implementations on the same platform, offers 2x and 1.4x speedups, respectively. Similarly, it computes the same operation in 496 thousand cycles on a 32-bit ARM Cortex-M4 microcontroller, achieving a factor-2.9 speedup when compared to the fastest Curve25519 implementation targeting the same platform. A similar speed performance is observed in the case of digital signatures. Second, we engineer a set of side-channel countermeasures taking advantage of FourQ's rich arithmetic and propose a secure implementation that offers protection against a wide range of sophisticated side-channel attacks, including differential power analysis (DPA). Despite the use of strong countermeasures, the experimental results show that our FourQ software is still efficient enough to outperform implementations of Curve25519 that only protect against timing attacks. Finally, we perform a differential power analysis evaluation of our software running on an ARM Cortex-M4, and report that no leakage was detected with up to 10 million traces. These results demonstrate the potential of deploying FourQ on low-power applications such as protocols for the Internet of Things.
Last updated:  2017-09-07
Two-Message Witness Indistinguishability and Secure Computation in the Plain Model from New Assumptions
Saikrishna Badrinarayanan, Sanjam Garg, Yuval Ishai, Amit Sahai, Akshay Wadia
We study the feasibility of two-message protocols for secure two-party computation in the plain model, for functionalities that deliver output to one party, with security against malicious parties. Since known impossibility results rule out polynomial-time simulation in this setting, we consider the common relaxation of allowing super-polynomial simulation. We first address the case of zero-knowledge functionalities. We present a new construction of two-message zero-knowledge protocols with super-polynomial simulation from any (sub-exponentially hard) game-based two-message oblivious transfer protocol, which we call Weak OT. As a corollary, we get the first two-message WI arguments for NP from (sub-exponential) DDH. Prior to our work, such protocols could only be constructed from assumptions that are known to imply non-interactive zero-knowledge protocols (NIZK), which do not include DDH. We then extend the above result to the case of general single-output functionalities, showing how to construct two-message secure computation protocols with quasi-polynomial simulation from Weak OT. This implies protocols based on sub-exponential variants of several standard assumptions, including Decisional Diffie Hellman (DDH), the Quadratic Residuosity Assumption, and the Nth Residuosity Assumption. Prior works on two-message protocols either relied on some trusted setup (such as a common reference string) or were restricted to special functionalities such as blind signatures. As a corollary, we get three-message protocols for two-output functionalities, which include coin-tossing as an interesting special case. For both types of functionalities, the number of messages (two or three) is optimal. Finally, motivated by the above, we further study the Weak OT primitive. On the positive side, we show that Weak OT can be based on any semi-honest 2-message OT with a short second message. This simplifies a previous construction of Weak OT from the Nth Residuosity Assumption. We also present a construction of Weak OT from Witness Encryption (WE) and injective one-way functions, implying the first construction of two-message WI arguments from WE. On the negative side, we show that previous constructions of Weak OT do not satisfy simulation-based security even if the simulator can be computationally unbounded.
Last updated:  2018-01-30
Statistical and Linear Independence of Binary Random Variables
Kaisa Nyberg
Linear cryptanalysis makes use of statistical models that consider linear approximations over practical and ideal block ciphers as binary random variables. Recently, more complex models have been proposed that also take into account the statistical behavior of correlations of linear approximations over the key space of the cipher and over the randomness of the ideal cipher. The goal of this ongoing work is to investigate independence properties of linear approximations and their relationships. In this third revised version we show that the assumptions of Proposition 1 of the previous version are contradictory and hence render that result useless. In particular, we prove that linear and statistical independence of binary random variables are equivalent properties in a vector space of variables if and only if all non-zero variables in this vector space are balanced, that is, correspond to components of a permutation. This study is motivated by finding reasonable wrong-key hypotheses for linear cryptanalysis and its generalizations, which will also be discussed.
Last updated:  2017-05-22
Understanding RUP Integrity of COLM
Nilanjan Datta, Atul Luykx, Bart Mennink, Mridul Nandi
The authenticated encryption scheme COLM is a third-round candidate in the CAESAR competition. Much like its antecedents COPA, ELmE, and ELmD, COLM consists of two parallelizable encryption layers connected by a linear mixing function. While COPA uses plain XOR mixing, ELmE, ELmD, and COLM use a more involved invertible mixing function. In this work, we investigate the integrity of the COLM structure when unverified plaintext is released, and demonstrate that its security highly depends on the choice of mixing function. Our results are threefold. First, we discuss the practical nonce-respecting forgery by Andreeva et al. (ASIACRYPT 2014) against COPA's XOR mixing. Then we present a nonce-misusing forgery against arbitrary mixing functions with practical time complexity. Finally, by using significantly larger queries, we can extend the previous forgery to be nonce-respecting.
Last updated:  2017-05-22
Improving TFHE: faster packed homomorphic operations and efficient circuit bootstrapping
Ilaria Chillotti, Nicolas Gama, Mariya Georgieva, Malika Izabachène
In this paper, we present several methods to improve the evaluation of homomorphic functions, both for fully and for leveled homomorphic encryption. We propose two packing methods, in order to decrease the expansion factor and optimize the evaluation of look-up tables and random functions in TRGSW-based homomorphic schemes. We also extend the automata logic, introduced in [19, 12], to the efficient leveled evaluation of weighted automata, and present a new homomorphic counter called TBSR that supports all the elementary operations that occur in a multiplication. These improvements speed up the evaluation of most arithmetic functions in a packed leveled mode, with a noise overhead that remains additive. We also present a new circuit bootstrapping that converts TLWE into low-noise TRGSW ciphertexts in just 137 ms, which makes the leveled mode of TFHE composable and is fast enough to speed up arithmetic functions, compared to the gate-by-gate bootstrapping given in [12]. Finally, we propose concrete parameter sets and timing comparisons for all our constructions.
Last updated:  2017-11-30
Strengthening Access Control Encryption
Christian Badertscher, Christian Matt, Ueli Maurer
Access control encryption (ACE) was proposed by Damgård et al. to enable the control of information flow between several parties according to a given policy specifying which parties are, or are not, allowed to communicate. By involving a special party, called the sanitizer, policy-compliant communication is enabled while policy-violating communication is prevented, even if sender and receiver are dishonest. To allow outsourcing of the sanitizer, the secrecy of the message contents and the anonymity of the involved communication partners is guaranteed. This paper shows that in order to be resilient against realistic attacks, the security definition of ACE must be considerably strengthened in several ways. A new, substantially stronger security definition is proposed, and an ACE scheme is constructed which provably satisfies the strong definition under standard assumptions. Three aspects in which the security of ACE is strengthened are as follows. First, CCA security (rather than only CPA security) is guaranteed, which is important since senders can be dishonest in the considered setting. Second, the revealing of an (unsanitized) ciphertext (e.g., by a faulty sanitizer) cannot be exploited to communicate more in a policy-violating manner than the information contained in the ciphertext. We illustrate that this is not only a definitional subtlety by showing how in known ACE schemes, a single leaked unsanitized ciphertext allows for an arbitrary amount of policy-violating communication. Third, it is enforced that parties specified to receive a message according to the policy cannot be excluded from receiving it, even by a dishonest sender.
Last updated:  2017-05-22
Optimal Ramp Schemes and Related Combinatorial Objects
Douglas R. Stinson
In 1996, Jackson and Martin proved that a strong ideal ramp scheme is equivalent to an orthogonal array. However, there was no good characterization of ideal ramp schemes that are not strong. Here we show the equivalence of ideal ramp schemes to a new variant of orthogonal arrays that we term augmented orthogonal arrays. We give some constructions for these new kinds of arrays, and, as a consequence, we also provide parameter situations where ideal ramp schemes exist but strong ideal ramp schemes do not exist.
Last updated:  2017-09-08
Grover Meets Simon - Quantumly Attacking the FX-construction
Gregor Leander, Alexander May
Using whitening keys is a well-understood means of increasing the key-length of any given cipher. Especially as it has been known ever since Grover’s seminal work that the effective key-length is reduced by a factor of two when considering quantum adversaries, it seems tempting to use this simple and elegant way of extending the key-length of a given cipher to increase the resistance against quantum adversaries. However, as we show in this work, using whitening keys does not increase the security in the quantum-CPA setting significantly. For this we present a quantum algorithm that breaks the construction with whitening keys in essentially the same time complexity as Grover’s original algorithm breaks the underlying block cipher. Technically, this result is based on combining the quantum algorithms of Grover and Simon for the first time in the cryptographic setting.
Last updated:  2017-06-17
FHPKE based on multivariate discrete logarithm problem
Masahiro Yagisawa
Previously I proposed fully homomorphic public-key encryption (FHPKE) based on the discrete logarithm problem, which is vulnerable to quantum computer attacks. In this paper I propose FHPKE based on the multivariate discrete logarithm assumption. This encryption scheme is thought to withstand quantum computer attacks. Though I can construct this scheme over many non-commutative rings, I adopt the FHPKE scheme based on the octonion ring as a typical example to show how the scheme is constructed. The multivariate discrete logarithm problem (MDLP) is defined as follows: given $f(x)$, $g(x)$, $h(x)$ and a prime $q$, the goal is to find $m_0, m_1, n_0, n_1 \in \mathbb{F}_q^*$ such that $h(x)=f^{m_0}(g^{n_0}(x))+f^{m_1}(g^{n_1}(x)) \bmod q$ over the octonion ring.
Last updated:  2017-09-26
Card-Based Protocols Using Unequal Division Shuffles
Akihiro Nishimura, Takuya Nishida, Yu-ichi Hayashi, Takaaki Mizuki, Hideaki Sone
Card-based cryptographic protocols can perform secure computation of Boolean functions. In 2013, Cheung et al. presented a protocol that securely produces a hidden AND value using five cards; however, it fails with a probability of 1/2. The protocol uses an unconventional shuffle operation called an unequal division shuffle; after a sequence of five cards is divided into a two-card portion and a three-card portion, these two portions are randomly switched so that nobody knows which is which. In this paper, we first show that the protocol proposed by Cheung et al. securely produces not only a hidden AND value but also a hidden OR value (with a probability of 1/2). We then modify their protocol such that, even when it fails, we can still evaluate the AND value in the clear. Furthermore, we present two five-card copy protocols (which can duplicate a hidden value) using unequal division shuffles. Because the most efficient copy protocol currently known requires six cards, our new protocols improve upon the existing results. We also design a general copy protocol that produces multiple copies using an unequal division shuffle. Furthermore, we show feasible implementations of unequal division shuffles through the use of card cases.
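The unequal division shuffle itself is easy to simulate, as sketched below; the simulation captures only the output distribution, whereas a physical execution must additionally hide from the players which portion is which (hence the card cases mentioned in the abstract).

```python
# Hedged simulation of an unequal division shuffle on five cards: cut the
# sequence into a 2-card portion and a 3-card portion, then swap the two
# portions (or not) with probability 1/2.
import secrets

def unequal_division_shuffle(cards):
    assert len(cards) == 5
    front, back = cards[:2], cards[2:]   # 2-card and 3-card portions
    if secrets.randbits(1):              # swap with probability 1/2
        return back + front
    return cards[:]

deck = ["c1", "c2", "c3", "c4", "c5"]
print(unequal_division_shuffle(deck))
```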
Last updated:  2017-09-24
HILA5: On Reliability, Reconciliation, and Error Correction for Ring-LWE Encryption
Markku-Juhani O. Saarinen
We describe a new reconciliation method for Ring-LWE that has a significantly smaller failure rate than previous proposals while reducing ciphertext size and the amount of randomness required. It is based on a simple, deterministic variant of Peikert's reconciliation that works with our new ``safe bits'' selection and constant-time error correction techniques. The new method does not need randomized smoothing to achieve non-biased secrets. When used with the very efficient ``New Hope'' Ring-LWE parametrization we achieve a decryption failure rate well below $2^{-128}$ (compared to $2^{-60}$ of the original), making the scheme suitable for public key encryption in addition to key exchange protocols; the reconciliation approach saves about $40 \%$ in ciphertext size when compared to the common LP11 Ring-LWE encryption scheme. We perform a combinatorial failure analysis using full probability convolutions, leading to a precise understanding of decryption failure conditions on bit level. Even with additional implementation security and safety measures the new scheme is still essentially as fast as the New Hope but has slightly shorter messages. The new techniques have been instantiated and implemented as a Key Encapsulation Mechanism (KEM) and public key encryption scheme designed to meet the requirements of NIST's Post-Quantum Cryptography effort at very high security level.
Last updated:  2020-09-17
Foundations for Actively Secure Card-based Cryptography
Alexander Koch, Stefan Walzer
Card-based cryptography, as first proposed by den Boer (EUROCRYPT 1989), enables secure multiparty computation using only a deck of playing cards. Many protocols as of yet come with an “honest-but-curious” disclaimer. However, modern cryptography aims to provide security also in the presence of active attackers that deviate from the protocol description. In the few places where authors argue for the active security of their protocols, this is done ad-hoc and restricted to the concrete operations needed, often using additional physical tools, such as envelopes or sliding cover boxes. This paper provides the first systematic approach to active security in card-based protocols. The main technical contribution concerns shuffling operations. A shuffle randomly permutes the cards according to a well-defined distribution but hides the chosen permutation from the players. We show how the large and natural class of uniform closed shuffles, which are shuffles that select a permutation uniformly at random from a permutation group, can be implemented using only a linear number of helping cards. This ensures that any protocol in the model of Mizuki and Shizuya (Int. J. Inf. Secur., 2014) can be realized in an actively secure fashion, as long as it is secure in this abstract model and restricted to uniform closed shuffles. Uniform closed shuffles are already sufficient for securely computing any circuit (Mizuki and Sone, FAW 2009). In the process, we develop a more concrete model for card-based cryptographic protocols with two players, which we believe to be of independent interest.
Last updated:  2019-03-26
PUF+IBE: Blending Physically Unclonable Functions with Identity Based Encryption for Authentication and Key Exchange in IoTs
Urbi Chatterjee, Vidya Govindan, Rajat Sadhukhan, Debdeep Mukhopadhyay, Rajat Subhra Chakraborty, Debashis Mahata, Mukesh Prabhu
Physically Unclonable Functions (PUFs) promise to be a critical hardware primitive providing unique identities to billions of connected devices in the Internet of Things (IoT). In traditional authentication protocols, a user presents a set of credentials with an accompanying proof such as a password or digital certificate. However, IoTs need more evolved methods, as these classical techniques suffer from the pressing problems of password dependency and the inability to bind access requests to the “things” from which they originate. Additionally, the protocols need to be lightweight and heterogeneous. Although PUFs seem promising for developing such mechanisms, they put forward an open problem: how to develop such a mechanism without needing to store the secret challenge-response pairs (CRPs) explicitly at the verifier end. In this paper, we develop an authentication and key exchange protocol by combining the ideas of Identity based Encryption (IBE), PUFs and keyed hash functions, and show that this combination can do away with this requirement. The security of the protocol is proved formally under Session Key Security and the Universal Composability Framework. A prototype of the protocol has been implemented to realize a secure video surveillance camera, using a combination of an Intel Edison board and a Digilent Nexys-4 FPGA board with an Artix-7 FPGA, together serving as the IoT node. We show that, although the stand-alone video camera can be subjected to a man-in-the-middle attack via IP spoofing using standard network penetration tools, the camera augmented with the proposed protocol resists such attacks, making the protocol aptly suited for deployment in IoT infrastructures.
Last updated:  2017-05-22
Exploring Naccache-Stern Knapsack Encryption
Éric Brier, Rémi Géraud, David Naccache
The Naccache–Stern public-key cryptosystem (NS) relies on the conjectured hardness of the modular multiplicative knapsack problem: Given $p,\{v_i\},\prod v_i^{m_i} \bmod p$, find the $\{m_i\}$. Given this scheme's algebraic structure it is interesting to systematically explore its variants and generalizations. In particular it might be useful to enhance NS with features such as semantic security, re-randomizability or an extension to higher-residues. This paper addresses these questions and proposes several such variants.
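To make the underlying assumption concrete, here is a hedged toy sketch of the original deterministic scheme; all parameters are illustrative and far too small for security:

```python
# A toy sketch of (deterministic) Naccache-Stern knapsack encryption,
# assuming the textbook formulation: public v_i = p_i^{s^{-1} mod (p-1)}
# mod p for small primes p_i with prod(p_i) < p, secret exponent s.
primes = [2, 3, 5, 7, 11, 13, 17, 19]   # prod = 9699690
p = 10000019                             # prime with prod(primes) < p
s = 3                                    # secret, invertible mod p-1
v = [pow(q, pow(s, -1, p - 1), p) for q in primes]

def encrypt(bits):
    c = 1
    for m, vi in zip(bits, v):           # c = prod v_i^{m_i} mod p
        if m:
            c = c * vi % p
    return c

def decrypt(c):
    t = pow(c, s, p)                     # = prod p_i^{m_i}, exact since < p
    return [1 if t % q == 0 else 0 for q in primes]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
assert decrypt(encrypt(msg)) == msg
```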
Last updated:  2017-05-22
Construction and Filtration of Lightweight Formalized MDS Matrices
Shiyi Zhang, Yongjuan Wang, Yang Gao, Tao Wang
The 4x4 MDS matrix over $\mathbb{F}_2$ is widely used in the design of block ciphers' linear diffusion layers. However, considering the implementation cost of a lightweight cipher, the number of XOR operations of an MDS matrix usually serves as the measure. In the course of research on the construction of lightweight 4x4 MDS matrices, this paper presents the concept of a formalized MDS matrix: some of the entries that make up the matrix are known, their positions are determined, and the criteria of an MDS matrix are satisfied. In this paper, using the period and minimal polynomial theory of entries over finite fields, a new construction method for formalized MDS matrices is proposed. A large number of MDS matrices can be obtained efficiently by this method, and their number distribution has significant structural features. Moreover, the algebraic structure of the lightest MDS matrices is also evident. This paper first investigates the construction of 4x4 lightweight MDS matrices, and analyzes their distribution characteristics and the feasibility of the construction method. Then, for the lightest MDS matrices obtained from the method above, the algebraic relations within and between them are studied, and an important application of the alternating group $A_4$ and its subgroup, the Klein four-group, is found.
Last updated:  2017-09-06
Efficient hash maps to $\mathbb{G}_2$ on BLS curves
Alessandro Budroni, Federico Pintore
When a pairing $e: \mathbb{G}_1 \times \mathbb{G}_2 \rightarrow \mathbb{G}_{T}$, on an elliptic curve $E$ defined over $\mathbb{F}_q$, is exploited for an identity-based protocol, there is often the need to hash binary strings into $\mathbb{G}_1$ and $\mathbb{G}_2$. Traditionally, if $E$ admits a twist $\tilde{E}$ of order $d$, then $\mathbb{G}_1=E(\mathbb{F}_q) \cap E[r]$, where $r$ is a prime integer, and $\mathbb{G}_2=\tilde{E}(\mathbb{F}_{q^{k/d}}) \cap \tilde{E}[r]$, where $k$ is the embedding degree of $E$ w.r.t. $r$. The standard approach for hashing into $\mathbb{G}_2$ is to map to a general point $P \in \tilde{E}(\mathbb{F}_{q^{k/d}})$ and then multiply it by the cofactor $c=\#\tilde{E}(\mathbb{F}_{q^{k/d}})/r$. Usually, the multiplication by $c$ is computationally expensive. In order to speed up such a computation, two different methods (by Scott et al. and by Fuentes et al.) have been proposed. In this paper we consider these two methods for BLS pairing-friendly curves having $k \in \{12,24,30,42,48\}$, providing efficiency comparisons. When $k \in \{42,48\}$, the Fuentes et al. method requires an expensive one-off pre-computation which was infeasible for the computational power at our disposal. In these cases, we theoretically obtain hashing maps that follow the idea of Fuentes et al.
Last updated:  2017-06-26
Strong Authenticated Key Exchange with Auxiliary Inputs
Rongmao Chen, Yi Mu, Guomin Yang, Willy Susilo, Fuchun Guo
Leakage attacks, including various kinds of side-channel attacks, allow an attacker to learn partial information about the internal secrets, such as the secret key and the randomness, of a cryptographic system. Designing a strong, meaningful, yet achievable security notion to capture practical leakage attacks is one of the primary goals of leakage-resilient cryptography. In this work, we revisit the modelling and design of authenticated key exchange (AKE) protocols with leakage resilience. We show that the prior works on this topic are inadequate in capturing realistic leakage attacks. To close this research gap, we propose a new security notion named \textit{leakage-resilient eCK model w.r.t. auxiliary inputs} ($\mathsf{AI\mbox{-}LR\mbox{-}eCK}$) for AKE protocols, which addresses the limitations of the previous models. Our model allows computationally hard-to-invert leakage of \textit{both the long-term secret key and the randomness}, and also addresses a limitation existing in most of the previous models, where the adversary is disallowed from making leakage queries during the challenge session. As another major contribution of this work, we present a generic framework for the construction of AKE protocols that are secure under the proposed $\mathsf{AI\mbox{-}LR\mbox{-}eCK}$ model. An instantiation based on the \textit{Decision Diffie-Hellman} (DDH) assumption in the standard model is also given to demonstrate the feasibility of our proposed framework.
Last updated:  2017-05-15
A Proof-of-Stake protocol for consensus on Bitcoin subchains
Massimo Bartoletti, Stefano Lande, Alessandro Sebastian Podda
Although the transactions on the Bitcoin blockchain have the main purpose of recording currency transfers, they can also carry a few bytes of metadata. A sequence of transaction metadata forms a subchain of the Bitcoin blockchain, and it can be used to store a tamper-proof execution trace of a smart contract. Except for the trivial case of contracts which admit any trace, in general there may exist inconsistent subchains which represent incorrect contract executions. A crucial issue is how to make it difficult, for an adversary, to subvert the execution of a contract by making its subchain inconsistent. Existing approaches either postulate that subchains are always consistent, or give weak guarantees about their security (for instance, they are susceptible to Sybil attacks). We propose a consensus protocol, based on Proof-of-Stake, that incentivizes nodes to consistently extend the subchain. We empirically evaluate the security of our protocol, and we show how to exploit it as the basis for smart contracts on Bitcoin.
Last updated:  2017-05-15
Breaking and Fixing the HB+DB protocol
Ioana Boureanu, David Gerault, Pascal Lafourcade, Cristina Onete
The HB protocol and its $HB^+$ successor are lightweight authentication schemes based on the Learning Parity with Noise (LPN) problem. They both suffer from the so-called GRS-attack whereby a man-in-the-middle (MiM) adversary can recover the secret key. At WiSec 2015, Pagnin et al. proposed the $HB+DB$ protocol: $HB^+$ with an additional distance-bounding dimension added to detect and counteract such MiM attacks. They showed experimentally that $HB+DB$ was resistant to GRS adversaries, and also advanced $HB+DB$ as a distance-bounding protocol, discussing its resistance to worst-case distance-bounding attackers. In this paper, we exhibit flaws both in the authentication and distance-bounding layers of $HB+DB$; these vulnerabilities encompass practical attacks as well as provable security shortcomings. First, we show that $HB+DB$ may be impractical as a secure distance-bounding protocol, as its distance-fraud and mafia-fraud security-levels scale poorly compared to other distance-bounding protocols. Secondly, we describe an effective MiM attack against $HB+DB$: our attack refines the GRS-strategy and still leads to key-recovery by the attacker, yet this is not deterred by $HB+DB$'s distance-bounding. Thirdly, we refute the claim that $HB+DB$'s security against passive attackers relies on the hardness of the LPN problem. We also discuss how (erroneously) requiring such hardness, in fact, lowers $HB+DB$'s efficiency and its resistance to authentication and distance-bounding attacks. Drawing on $HB+DB$'s design flaws, we also propose a new distance-bounding protocol: $\mathbb{BLOG}$. It retains parts of $HB+DB$, yet $\mathbb{BLOG}$ is provably secure, even --in particular-- against MiM attacks. Moreover, $\mathbb{BLOG}$ enjoys better practical security (asymptotical in the security parameter).
Last updated:  2020-07-14
Towards Practical PFE: An Efficient 2-Party Private Function Evaluation Protocol Based on Half Gates
Osman Bicer, Muhammed Ali Bingol, Mehmet Sabir Kiraz, Albert Levi
Private function evaluation (PFE) is a special case of secure multi-party computation (MPC), where the function to be computed is known by only one party. PFE is useful in several real-life settings where an algorithm or a function itself needs to remain secret due to its confidential classification or intellectual property. In this work, we look back at the seminal PFE framework presented by Mohassel and Sadeghian at Eurocrypt’13. We show how to adapt and utilize the well-known half gates garbling technique (Zahur et al., Eurocrypt’15) to their constant round 2-party PFE scheme. Compared to their scheme, our resulting optimization considerably improves the efficiency of both the underlying Oblivious Evaluation of Extended Permutation (OEP) and secure 2-party computation (2PC) protocol, and yields a more than 40% reduction in overall communication cost (the computation time is also slightly decreased, and the number of rounds remains unchanged).
Last updated:  2017-09-07
Symmetrically and Asymmetrically Hard Cryptography (Full Version)
Alex Biryukov, Leo Perrin
The main efficiency metrics for a cryptographic primitive are its speed, its code size and its memory complexity. For a variety of reasons, many algorithms have been proposed that, instead of minimizing these costs, deliberately increase one of them, turning it into a form of hardness. We present for the first time a unified framework for describing the hardness of a primitive along any of these three axes: code-hardness, time-hardness and memory-hardness. This unified view allows us to present modular block cipher and sponge constructions which can have any of the three forms of hardness and can be used to build any higher-level symmetric primitive: hash function, PRNG, etc. We also formalize a new concept: asymmetric hardness. It creates two classes of users: common users have to compute a function with a certain hardness, while users knowing a secret can compute the same function in a far cheaper way. Functions with such an asymmetric hardness can be directly used in both our modular structures, thus constructing any symmetric primitive with an asymmetric hardness. We also propose the first asymmetrically memory-hard function, DIODON. As illustrations of our framework, we introduce WHALE and SKIPPER. WHALE is a code-hard hash function which could be used as a key derivation function, and SKIPPER is the first asymmetrically time-hard block cipher.
Last updated:  2017-05-14
Correlation Power Analysis Attack against STT-MRAM Based Cryptosystems
Abhishek Chakraborty, Ankit Mondal, Ankur Srivastava
Emerging technologies such as Spin-transfer torque magnetic random-access memory (STT-MRAM) are considered potential candidates for implementing low-power, high density storage systems. The vulnerability of such nonvolatile memory (NVM) based cryptosystems to standard side-channel attacks must be thoroughly assessed before deploying them in practice. In this paper, we outline a generic Correlation Power Analysis (CPA) attack strategy against STT-MRAM based cryptographic designs using a new power model. In our proposed attack methodology, an adversary exploits the power consumption patterns during the write operation of an STT-MRAM based cryptographic implementation to successfully retrieve the secret key. In order to validate our proposed attack technique, we mounted a CPA attack on MICKEY-128 2.0 stream cipher design consisting of STT-MRAM cells with Magnetic Tunnel Junctions (MTJs) as storage elements. The results of the experiments show that the STT-MRAM based implementation of the cipher circuit is susceptible to standard differential power analysis attack strategy provided a suitable hypothetical power model (such as the one proposed in this paper) is selected. In addition, we also investigated the effectiveness of state-of-the-art side-channel attack countermeasures for MRAMs and found that our proposed scheme is able to break such protected implementations as well.
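As a generic illustration of the CPA strategy (not the paper's exact write-power model), a key-byte recovery loop might look like the following sketch, with a Hamming-weight model standing in for the paper's hypothetical power model:

```python
# A hedged, generic sketch of Correlation Power Analysis: for every key
# guess, correlate a hypothetical leakage model with the measured traces
# and keep the best-correlated guess.
import numpy as np

HW = np.array([bin(x).count("1") for x in range(256)])

def cpa_key_byte(traces, plaintexts, sample):
    """traces: (N, L) float array of power traces;
    plaintexts: (N,) int array of known input bytes;
    sample: index of the trace sample targeting the key-dependent write."""
    leak = traces[:, sample]
    return max(
        range(256),
        key=lambda k: abs(np.corrcoef(HW[plaintexts ^ k], leak)[0, 1]),
    )
```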
Last updated:  2018-05-16
Improved Attack on Full-round Grain-128
Ximing Fu, Xiaoyun Wang, Jiazhe Chen, Marc Stevens, Xiaoyang Dong
In this paper, we propose a series of techniques that can be used to determine the missing IV terms of a complex multivariable Boolean polynomial. Using these techniques, we revisit the dynamic cube attack on Grain-128. By choosing one more nullified state bit and one more dynamic bit, we are able to obtain the IV terms of degree $43$, combined with various reduction techniques, fast monomial-discarding techniques, and an IV representation technique for polynomials, so that the missing IV terms can be determined. As a result, we improve the time complexity of the best previous attack on Grain-128 by a factor of $2^{16}$. Moreover, our attack applies to all keys.
Last updated:  2017-06-28
A New Algorithm for Inversion mod $p^k$
Çetin Kaya Koç
A new algorithm for computing $x=a^{-1}\pmod{p^k}$ is introduced. It is based on the exact solution of linear equations using $p$-adic expansions. It starts with the initial value $c=a^{-1}\pmod{p}$ and iteratively computes the digits of the inverse $x=a^{-1}\pmod{p^k}$ in base $p$. The mod 2 version of the algorithm is significantly more efficient than the existing algorithms for small values of $k$. We also describe and analyze all existing algorithms, and compare them to the proposed algorithm. Our algorithm stands out as being the only one that works for any $p$, any $k$, and digit-by-digit. Moreover, it requires a minimal number of arithmetic operations (just a single addition) per step.
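The digit-by-digit iteration can be reconstructed from this description; the following sketch is our reading of it, not the paper's exact pseudocode:

```python
# A hedged sketch of digit-by-digit p-adic inversion: compute
# x = a^{-1} mod p^k from c = a^{-1} mod p, one base-p digit per step.
def inverse_mod_pk(a, p, k):
    c = pow(a, -1, p)          # initial value: a^{-1} mod p
    x, b, pk = 0, 1, 1         # loop invariant: 1 - a*x == b * p^i
    for i in range(k):
        d = (c * b) % p        # next p-adic digit of the inverse
        x += d * pk
        b = (b - a * d) // p   # exact: a*d == b (mod p) by choice of d
        pk *= p
    return x

a, p, k = 7, 3, 5
assert (a * inverse_mod_pk(a, p, k)) % p**k == 1
```

The invariant $1 - ax = b\,p^{i}$ guarantees that the division by $p$ is always exact, which is what makes the iteration work for any $p$ and any $k$.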
Last updated:  2017-09-06
Fast Proxy Re-Encryption for Publish/Subscribe Systems
Yuriy Polyakov, Kurt Rohloff, Gyana Sahu, Vinod Vaikuntanathan
We develop two IND-CPA-secure multi-hop unidirectional Proxy Re-Encryption (PRE) schemes by applying the Ring-LWE (RLWE) key switching approach from the homomorphic encryption literature. Unidirectional PRE is ideal for secure publish-subscribe operations where a publisher encrypts information using a public key without knowing upfront who the subscriber will be and what private key will be used for decryption. The proposed PRE schemes provide a multi-hop capability, meaning that when PRE-encrypted information is published onto a PRE-enabled server, the server can either delegate access to specific clients or enable other servers the right to delegate access. Our first scheme (which we call NTRU-ABD-PRE) is based on a variant of the NTRU-RLWE homomorphic encryption scheme. Our second and main PRE scheme (which we call BV-PRE) is built on top of the Brakerski-Vaikuntanathan (BV) homomorphic encryption scheme and relies solely on the RLWE assumption. We present an open-source C++ implementation of both schemes and discuss several algorithmic and software optimizations. We examine parameter selection tradeoffs in the context of security, runtime/latency, throughput, ciphertext expansion, memory usage, and multi-hop capabilities. Our experimental analysis demonstrates that BV-PRE outperforms NTRU-ABD-PRE both in single-hop and multi-hop settings. The BV-PRE scheme has a lower time and space complexity than existing IND-CPA-secure lattice-based PRE schemes, and requires small concrete parameters, making the scheme computationally efficient for use on low-resource embedded systems while still providing 100 bits of security. We present practical recommendations for applying the PRE schemes to several use cases of ad-hoc information sharing for publish-subscribe operations.
Last updated:  2021-07-19
Maliciously Secure Oblivious Linear Function Evaluation with Constant Overhead
Satrajit Ghosh, Jesper Buus Nielsen, Tobias Nilges
BUG REPORT: In early 2021 we were made aware of a bug in Lemma 9.1 by Carmit Hazay, Muthu Venkitasubramaniam, Laasya Bangalore, and Rishabh Bhadauria. The bug does not have an easy fix and we are currently exploring whether a different proof can be found. Until then the results of this paper should not be considered proven and in particular the protocols should not be considered secure. We will later either update the e-print version with a new proof or withdraw the paper. ORIGINAL ABSTRACT: In this work we consider the problem of oblivious linear function evaluation (OLE). OLE is a special case of oblivious polynomial evaluation (OPE) and deals with the oblivious evaluation of a linear function $f(x)=ax+b$. This problem is non-trivial in the sense that the sender chooses $a,b$ and the receiver $x$, but the receiver may only learn $f(x)$. We present a highly efficient and UC-secure construction of OLE in the OT-hybrid model that requires only $O(1)$ OTs per OLE. The construction is based on noisy encodings introduced by Naor and Pinkas (STOC'99). Our main technical contribution solves a problem left open in their work, namely we show in a generic way how to achieve full simulation-based security from noisy encodings. All previous constructions using noisy encodings achieve only passive security. Our result requires novel techniques that might be of independent interest. Using our highly efficient OLE as a black box, we obtain a direct construction of an OPE protocol that simultaneously achieves UC-security and requires only $O(d)$ OTs, where $d$ is the degree of the polynomial that shall be evaluated.
Last updated:  2019-11-14
Combinatorial Subset Difference Public Key Broadcast Encryption Scheme for Secure Multicast
Jihye Kim, Jiwon Lee, Seunghwa Lee, Hyunok Oh
Public key broadcast encryption is a cryptographic method to securely transmit a message from anyone to a group of receivers such that only privileged users can decrypt it. A secure multicast system allows a user to send a message to a dynamically changing group of users. The secure multicast can be realized by the broadcast encryption. In this paper, we propose a novel combinatorial subset difference (CSD) public key broadcast encryption covering method which allows a generalized subset difference representation in which wildcards can be placed at any position. The proposed CSD is suitable for the secure multicast while minimizing the header size compared with the existing public key broadcast encryption schemes without sacrificing key storage and encryption/decryption performance. Experimental results show that the proposed CSD scheme not only reduces the ciphertext header size by 17% and 31% but also improves encryption performance (per subset) by 6 and 1.3 times, and decryption performance by 10 and 19 times compared with existing efficient subset difference (SD) and interval schemes, respectively. Furthermore, especially for subsets represented in a non-hierarchical manner, the proposed CSD reduces the number of subsets by a factor of 1000 times compared with SD and interval approaches. We prove the semantic security of our proposed CSD scheme under the l-BDHE assumption without the random oracle model.
Last updated:  2017-05-13
SplitCommit: Implementing and Analyzing Homomorphic UC Commitments
Peter Rindal, Roberto Trifiletti
In this paper we present SplitCommit, a portable and efficient C++ implementation of the recent additively homomorphic commitment scheme of Frederiksen et al. [FJNT16]. We describe numerous optimizations that go into engineering such an implementation, including highly optimized general-purpose bit-matrix transposition and efficient ECC encoding given the associated generator matrix. We also survey and analyze in detail the applicability of [FJNT16] and include a detailed comparison to the canonical (non-homomorphic) commitment scheme based on a Random Oracle. We include performance benchmarks of the implementation in various network settings; for instance, on a 10 Gbps LAN we achieve amortized commitment and decommitment running times of $0.65\mu s$ and $0.27\mu s$, respectively. Finally, we also include an extensive tutorial on how to use the library.
Last updated:  2018-02-21
OmniLedger: A Secure, Scale-Out, Decentralized Ledger via Sharding
Eleftherios Kokoris-Kogias, Philipp Jovanovic, Linus Gasser, Nicolas Gailly, Ewa Syta, Bryan Ford
Designing a secure permissionless distributed ledger that performs on par with centralized payment processors such as Visa is challenging. Most existing distributed ledgers are unable to ``scale-out'' -- growing total processing capacity with the number of participants -- and those that do scale out compromise security or decentralization. This work presents OmniLedger, the first scale-out distributed ledger that can preserve long-term security under permissionless operation. OmniLedger ensures strong correctness and security by using a bias-resistant public randomness protocol to choose large, statistically representative shards to process transactions, and by introducing an efficient cross-shard commit protocol to handle transactions affecting multiple shards atomically. In addition, OmniLedger optimizes performance via scalable intra-shard parallel transaction processing, ledger pruning via collectively-signed state blocks, and optional low-latency ``trust-but-verify'' validation of low-value transactions. Evaluation of our working experimental prototype shows that OmniLedger's throughput scales linearly in the number of validators available, supporting Visa-level workloads and beyond, while confirming typical transactions in under two seconds.
Last updated:  2017-05-11
Security Analysis of ``PSLP: Privacy-Preserving Single-Layer Perceptron Learning for e-Healthcare''
Jingjing Wang, Xiaoyu Zhang, Jingjing Guo, Jianfeng Wang
With the parallel development of cloud computing and machine learning techniques, clients increasingly resort to cloud servers with substantial resources to train learning models. However, in this outsourcing paradigm it is of vital significance to address the privacy concerns over the client's data. Many researchers have been focusing on preserving the privacy of the client's data in the learning model. Recently, Wang et al. presented a privacy-preserving single-layer perceptron learning scheme for e-healthcare using the Paillier cryptosystem, and claimed that their scheme can protect the privacy of users' medical information. By analysing the iteration process and the communication between the cloud and the user, we show that the honest-but-curious cloud can obtain the private medical information. Besides, the leakage of medical cases leads to the exposure of the specific single-layer perceptron model for e-healthcare, which has considerable commercial value.
Last updated:  2017-05-11
Short generators without quantum computers: the case of multiquadratics
Jens Bauch, Daniel J. Bernstein, Henry de Valence, Tanja Lange, Christine van Vredendaal
Finding a short element $g$ of a number field, given the ideal generated by $g$, is a classic problem in computational algebraic number theory. Solving this problem recovers the private key in cryptosystems introduced by Gentry, Smart-Vercauteren, Gentry-Halevi, Garg-Gentry-Halevi, et al. Work over the last few years has shown that for some number fields this problem has a surprisingly low post-quantum security level. This paper shows, and experimentally verifies, that for some number fields this problem has a surprisingly low pre-quantum security level.
Last updated:  2017-05-15
Condition on composite numbers easily factored with elliptic curve method
Masaaki Shirase
For a composite integer $N$ that we would like to factor, we consider a condition under which the elliptic curve method, using $N$ as a scalar value, succeeds, and show that if $N$ has a prime factor $p$ such that $p=(DV^2+1)/4,\ V \in {\mathbb Z},\ D\in \{$3, 11, 19, 35, 43, 51, 67, 91, 115, 123, 163, 187, 235, 267, 403, 427$\}$, we can find a non-trivial divisor of $N$ (a multiple of $p$) in a short time. In the author's implementation in PARI/GP, a 1024-bit $N$ was factored in a few seconds when $p$ was 512 bits.
Last updated:  2017-08-05
A New Approach to Round-Optimal Secure Multiparty Computation
Prabhanjan Ananth, Arka Rai Choudhuri, Abhishek Jain
We present a new approach towards constructing round-optimal secure multiparty computation (MPC) protocols against malicious adversaries without trusted setup assumptions. Our approach builds on ideas previously developed in the context of covert multiparty computation [Chandran et al., FOCS'07] even though we do not seek covert security. Using our new approach, we obtain the following results: 1. A five round MPC protocol based on the Decisional Diffie-Hellman (DDH) assumption. 2. A four round MPC protocol based on one-way permutations and sub-exponentially secure DDH. This result is {\em optimal} in the number of rounds. Previously, no four-round MPC protocol for general functions was known and five-round protocols were only known based on indistinguishability obfuscation (and some additional assumptions) [Garg et al., EUROCRYPT'16].
Last updated:  2017-05-11
Synthesis of Adaptive Side-Channel Attacks
Quoc-Sang Phan, Lucas Bang, Corina S. Păsăreanu, Pasquale Malacaria, Tevfik Bultan
We present symbolic analysis techniques for detecting vulnerabilities that are due to adaptive side-channel attacks, and synthesizing inputs that exploit the identified vulnerabilities. We start with a symbolic attack model that encodes succinctly all the side-channel attacks that an adversary can make. Using symbolic execution over this model, we generate a set of mathematical constraints, where each constraint characterizes the set of secret values that lead to the same sequence of side-channel measurements. We then compute the optimal attack, i.e, the attack that yields maximum leakage over the secret, by solving an optimization problem over the computed constraints. We use information-theoretic concepts such as channel capacity and Shannon entropy to quantify the leakage over multiple runs in the attack, where the measurements over the side channels form the observations that an adversary can use to try to infer the secret. We also propose greedy heuristics that generate the attack by exploring a portion of the symbolic attack model in each step. We implemented the techniques in Symbolic PathFinder and applied them to Java programs encoding web services, string manipulations and cryptographic functions, demonstrating how to synthesize optimal side-channel attacks.
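The information-theoretic accounting is simple to illustrate: for a deterministic side channel with a uniformly distributed secret, the leakage of one run equals the Shannon entropy of the induced observation partition (toy model, our notation):

```python
# A hedged illustration of quantifying side-channel leakage: each secret
# induces one observation, the observations partition the secret space,
# and the entropy of that partition is the information gained per run
# (uniform prior assumed; not the paper's tooling).
from collections import Counter
from math import log2

def leakage(secrets, observe):
    # observe(s): the deterministic side-channel measurement for secret s
    counts = Counter(observe(s) for s in secrets)
    n = len(secrets)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Example: a timing channel revealing the number of leading zero bits
print(leakage(range(256), lambda s: 8 - s.bit_length()))
```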
Last updated:  2017-05-11
A Leakage-Abuse Attack Against Multi-User Searchable Encryption
Cédric Van Rompay, Refik Molva, Melek Önen
Searchable Encryption (SE) allows a user to upload data to the cloud and to search it in a remote fashion while preserving the privacy of both the data and the queries. Recent research results describe attacks on SE schemes using the access pattern, denoting the ids of documents matching search queries, which most SE schemes reveal during query processing. However SE schemes usually leak more than just the access pattern, and this extra leakage can lead to attacks (much) more harmful than the ones using basic access pattern leakage only. We remark that in the special case of Multi-User Searchable Encryption (MUSE), where many users upload and search data in a cloud-based infrastructure, a large number of existing solutions have a common leakage in addition to the well-studied access pattern leakage. We show that this seemingly small extra leakage allows a very simple yet powerful attack, and that the privacy degree of the affected schemes has been overestimated. We also show that this new vulnerability affects existing software. Finally we formalize the newly identified leakage profile and show how it relates to previously defined ones.
Last updated:  2017-05-09
Practical Evaluation of Masking Software Countermeasures on an IoT processor
David McCann, Elisabeth Oswald
Implementing cryptography on Internet-of-Things (IoT) devices that is resilient against side-channel analysis has so far been a task suitable only for specialist software designers with access to a sophisticated testing facility. Recently a novel tool has been developed, ELMO, which offers the potential to enable non-specialist software developers to evaluate their code w.r.t. power analysis for a popular IoT processor. We explain a crucial extension of ELMO, which enables a user to test higher-order masking schemes much more efficiently than was previously possible, and improves the ease and speed of diagnosing masking errors.
Last updated:  2018-05-16
Post-Quantum Security of Fiat-Shamir
Dominique Unruh
The Fiat-Shamir construction (Crypto 1986) is an efficient transformation in the random oracle model for creating non-interactive proof systems and signatures from sigma-protocols. In classical cryptography, Fiat-Shamir is a zero-knowledge proof of knowledge assuming that the underlying sigma-protocol has the zero-knowledge and special soundness properties. Unfortunately, Ambainis, Rosmanis, and Unruh (FOCS 2014) ruled out non-relativizing proofs under those conditions in the quantum setting. In this paper, we show under which strengthened conditions the Fiat-Shamir proof system is still post-quantum secure. Namely, we show that if we require the sigma-protocol to have computational zero-knowledge and statistical soundness, then Fiat-Shamir is a zero-knowledge simulation-sound proof system (but not a proof of knowledge!). Furthermore, we show that Fiat-Shamir leads to a post-quantum secure unforgeable signature scheme when additionally assuming a "dual-mode hard instance generator" for generating key pairs. Finally, we study the extractability (proof of knowledge) property of Fiat-Shamir. While we have no proof of the extractability itself, we show that if we can prove extractability, then other desired properties such as simulation-sound extractability (i.e., non-malleability), and unforgeable signatures follow.
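For readers who want the classical (pre-quantum) baseline, here is a hedged toy sketch of the Fiat-Shamir transform applied to the Schnorr sigma-protocol; the parameters are illustrative only and this is not the paper's construction:

```python
# A toy Fiat-Shamir signature from the Schnorr sigma-protocol: the hash
# of (commitment, message) replaces the verifier's random challenge,
# modelling the random oracle. Group parameters are tiny and insecure.
import hashlib
from secrets import randbelow

p, q, g = 1019, 509, 4          # toy Schnorr group: g has order q mod p

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = randbelow(q - 1) + 1
    return x, pow(g, x, p)      # secret x, public y = g^x

def sign(x, msg):
    r = randbelow(q - 1) + 1    # commitment randomness
    a = pow(g, r, p)            # commitment
    c = H(a, msg)               # Fiat-Shamir: challenge = hash
    return a, (r + c * x) % q   # (commitment, response)

def verify(y, msg, sig):
    a, s = sig
    return pow(g, s, p) == a * pow(y, H(a, msg), p) % p

x, y = keygen()
assert verify(y, b"hello", sign(x, b"hello"))
```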
Last updated:  2018-09-24
Efficient One-Time Signatures from Quasi-Cyclic Codes: a Full Treatment
Edoardo Persichetti
The design of a practical code-based signature scheme is an open problem in post-quantum cryptography. This paper is the full version of a work appeared at SIN'18 as a short paper, which introduced a simple and efficient one-time secure signature scheme based on quasi-cyclic codes. As such, this paper features, in a fully self-contained way, an accurate description of the scheme setting and related previous work, a detailed security analysis, and an extensive comparison and performance discussion.
Last updated:  2017-06-07
SecureML: A System for Scalable Privacy-Preserving Machine Learning
Payman Mohassel, Yupeng Zhang
Machine learning is widely used in practice to produce predictive models for applications such as image processing and speech and text recognition. These models are more accurate when trained on large amounts of data collected from different sources. However, the massive data collection raises privacy concerns. In this paper, we present new and efficient protocols for privacy-preserving machine learning for linear regression, logistic regression and neural network training using the stochastic gradient descent method. Our protocols fall into the two-server model, where data owners distribute their private data among two non-colluding servers who train various models on the joint data using secure two-party computation (2PC). We develop new techniques to support secure arithmetic operations on shared decimal numbers, and propose MPC-friendly alternatives to nonlinear functions such as sigmoid and softmax that are superior to prior work. We implement our system in C++. Our experiments validate that our protocols are several orders of magnitude faster than the state-of-the-art implementations for privacy-preserving linear and logistic regressions, and scale to millions of data samples with thousands of features. We also implement the first privacy-preserving system for training neural networks.
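As a hedged illustration of the general approach (not the paper's actual protocols), decimal values can be scaled to fixed-point ring elements and additively secret-shared, so that addition is purely local:

```python
# A toy version of shared fixed-point arithmetic in a two-server model:
# encode reals as scaled integers mod 2^64, split into additive shares,
# add shares locally, reconstruct. Parameter names are ours.
from secrets import randbelow

RING = 2 ** 64
FRAC = 13                       # fractional bits of the encoding

def encode(v):   return int(round(v * (1 << FRAC))) % RING
def decode(u):
    u = u if u < RING // 2 else u - RING     # interpret as signed
    return u / (1 << FRAC)

def share(u):
    r = randbelow(RING)
    return r, (u - r) % RING                 # two additive shares

a0, a1 = share(encode(1.25))
b0, b1 = share(encode(-0.5))
c = ((a0 + b0) + (a1 + b1)) % RING           # local adds, then reconstruct
print(decode(c))                             # ~0.75
```

SecureML's further contribution for multiplication, truncating the shares locally after each product, builds on exactly this representation.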
Last updated:  2017-11-15
Higher-Order Side-Channel Protected Implementations of Keccak
Hannes Gross, David Schaffenrath, Stefan Mangard
The efficient protection of security-critical devices against side-channel analysis attacks is a fundamental need in the age of the Internet of Things and ubiquitous computing. In this work, we introduce a configurable hardware design of Keccak (SHA-3) which can be tailored to fulfill the needs of a wide range of different applications. Our Keccak design is therefore equipped with generic side-channel protection capabilities. The design can thus be synthesized for any desired protection level by just changing one design parameter. Regardless of its generic appearance, the introduced Keccak design yields the smallest (15.7 kGE) first-order protected Keccak implementation published to date. Furthermore, it is to the best of our knowledge the first higher-order side-channel resistant implementation of Keccak. In total, we state results for four different Keccak variants up to the ninth protection order.
Last updated:  2017-05-09
Double-spending Prevention for Bitcoin zero-confirmation transactions
Cristina Pérez-Solà, Sergi Delgado-Segura, Guillermo Navarro-Arribas, Jordi Herrera-Joancomartí
Zero-confirmation transactions, i.e., transactions that have been broadcast but are still pending inclusion in the blockchain, have gained attention as a way to enable fast payments in Bitcoin, shortening the time for performing payments. Fast payments are desirable in certain scenarios, for instance, when buying from vending machines, in fast food restaurants, or when withdrawing from an ATM. Despite being propagated quickly through the network, zero-confirmation transactions are not protected against double-spending attacks, since the double-spending protection Bitcoin offers relies on the blockchain and, by definition, such transactions are not yet included in it. In this paper, we propose a double-spending prevention mechanism for Bitcoin zero-confirmation transactions. Our proposal is based on exploiting the flexibility of the Bitcoin scripting language together with a well-known vulnerability of the ECDSA signature scheme to discourage attackers from performing such an attack.
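The ECDSA weakness alluded to is presumably the classic nonce-reuse vulnerability; the algebra is easy to demonstrate with toy values (our sketch; in real ECDSA, $r$ is the $x$-coordinate of $kG$ on the curve):

```python
# Two ECDSA signatures made with the same nonce k leak the private key.
# Only the modular algebra is shown; curve arithmetic is elided.
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
d, k = 0xC0FFEE, 0xDEADBEEF          # private key and the reused nonce
r = 0x1234567890ABCDEF               # stands in for x(k*G) mod n
z1, z2 = 111, 222                    # hashes of the two signed messages

s1 = pow(k, -1, n) * (z1 + r * d) % n
s2 = pow(k, -1, n) * (z2 + r * d) % n

# Attacker's computation from (r, s1, z1) and (r, s2, z2):
k_rec = (z1 - z2) * pow(s1 - s2, -1, n) % n
d_rec = (s1 * k_rec - z1) * pow(r, -1, n) % n
assert (k_rec, d_rec) == (k, d)
```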
Last updated:  2017-05-09
Privacy-Preserving Interdomain Routing at Internet Scale
Gilad Asharov, Daniel Demmler, Michael Schapira, Thomas Schneider, Gil Segev, Scott Shenker, Michael Zohner
The Border Gateway Protocol (BGP) computes routes between the organizational networks that make up today's Internet. Unfortunately, BGP suffers from deficiencies, including slow convergence, security problems, a lack of innovation, and the leakage of sensitive information about domains' routing preferences. To overcome some of these problems, we revisit the idea of centralizing and using secure multi-party computation (MPC) for interdomain routing which was proposed by Gupta et al. (ACM HotNets'12). We implement two algorithms for interdomain routing with state-of-the-art MPC protocols. On an empirically derived dataset that approximates the topology of today's Internet (55,809 nodes), our protocols take as little as 6 s of topology-independent precomputation and only 3 s of online time. We show, moreover, that when our MPC approach is applied at country/region-level scale, runtimes can be as low as 0.17 s online time and 0.20 s pre-computation time. Our results motivate the MPC approach for interdomain routing and furthermore demonstrate that current MPC techniques are capable of efficiently tackling real-world problems at a large scale.
Last updated:  2017-05-09
Running compression algorithms in the encrypted domain: a case-study on the homomorphic execution of RLE
Sébastien Canard, Sergiu Carpov, Donald Nokam Kuate, Renaud Sirdey
This paper is devoted to the problem of running compression algorithms in the encrypted domain, using a (somewhat) Fully Homomorphic Encryption (FHE) scheme, with a particular focus on conservative compression algorithms. Despite the encrypted-domain Turing-completeness that comes with FHE operators, we show that a number of subtleties crop up when it comes to running compression algorithms and, in particular, that guaranteed conservative compression is not achievable in the FHE setting. To illustrate these points, we analyze the most elementary conservative compression algorithm of all, namely Run-Length Encoding (RLE). We first study how to regularize this algorithm in order to make it (meaningfully) fit within the constraints of an FHE execution. Second, we analyze it from the angle of optimizing the resulting structure towards (as much as possible) FHE execution efficiency. The paper concludes with concrete experimental results obtained using the Fan-Vercauteren cryptosystem as well as the Armadillo FHE compiler.
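Plain RLE makes the regularization problem tangible: its control flow and output length depend on the data, which an FHE execution cannot allow (our toy illustration, not the paper's regularized variant):

```python
# Run-Length Encoding in the clear: output length and branching both
# depend on the input, the two things an FHE circuit must avoid.
def rle(data):
    out, run = [], 1
    for prev, cur in zip(data, data[1:]):
        if cur == prev:
            run += 1
        else:
            out.append((prev, run))
            run = 1
    if data:
        out.append((data[-1], run))
    return out

print(rle("aaabccdd"))  # [('a', 3), ('b', 1), ('c', 2), ('d', 2)]
# Under FHE, the branch on `cur == prev` and the variable-length `out`
# must become data-oblivious arithmetic over encrypted values, so the
# "compressed" ciphertext is as long as the worst case.
```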
Last updated:  2017-12-09
Another Look at Success Probability in Linear Cryptanalysis
Subhabrata Samajder, Palash Sarkar
This work studies the success probability of key recovery attacks based on using a single linear approximation. Previous works had analysed success probability under different hypotheses on the distributions of correlations for the right and wrong key choices. This work puts forward a unifying framework of general key randomisation hypotheses. All previously used key randomisation hypotheses, as well as zero-correlation attacks, can be seen to be special cases of the general framework. Derivations of expressions for the success probability are carried out under both the settings of the plaintexts being sampled with and without replacement. Compared to previous analyses, we uncover several new cases which have not been considered in the literature. For most of the cases which were considered earlier, we provide complete expressions for the respective success probabilities. Finally, the complete picture of the dependence of the success probability on the data complexity is revealed. Compared to the extant literature, our work provides a deeper and more thorough understanding of the success probability of single linear cryptanalysis.
Last updated:  2018-01-01
On the Security of Classic Protocols for Unique Witness Relations
Yi Deng, Xuyang Song, Jingyue Yu, Yu Chen
We revisit the problem of whether the known classic constant-round public-coin argument/proof systems are witness hiding for languages/distributions with unique witnesses. Though strong black-box \emph{impossibility} results are known, we provide some somewhat unexpected \emph{positive} results on the witness hiding security of these classic protocols:
-- We give sufficient conditions on a hard distribution over a \emph{unique}-witness NP relation for which all witness indistinguishable protocols (including all public-coin ones, such as ZAPs, the Blum protocol and the GMW protocol) are indeed witness hiding. We also show that a wide range of cryptographic problems with unique witnesses satisfy these conditions, and thus admit constant-round public-coin witness hiding proof systems.
-- For the classic Schnorr protocol (for which the distribution of statements being proven seems not to satisfy the above sufficient conditions), we develop an embedding technique and extend the result of Bellare and Palacio to base the witness hiding property of the Schnorr protocol in the standalone setting on a \emph{relaxed} version of the one-more discrete logarithm (DL) assumption, and show that breaking this assumption would lead to some surprising consequences, such as instance compression for the DL problem, zero-knowledge protocols for the AND-DL language with extremely efficient communication, and highly non-trivial hash combiners for hash functions based on the DL problem. Similar results hold for the Guillou-Quisquater protocol.
Last updated:  2017-05-04
Decentralized Blacklistable Anonymous Credentials with Reputation
Rupeng Yang, Man Ho Au, Qiuliang Xu, Zuoxia Yu
Blacklistable anonymous credential systems provide service providers with a way to authenticate users according to their historical behaviors, while guaranteeing that all users can access services in an anonymous and unlinkable manner; they are thus potentially useful in practice. Traditionally, to protect services from illegal access, the credential issuer, which completes the registration with users, must be trusted by the service provider. However, in practice, this trust assumption is usually unsatisfied. Besides, to better evaluate users, it is desirable to use the blacklists of other service providers, which record the historical behaviors of users, but currently this threatens security unless a strong trust assumption is made. Another potential security issue in current blacklistable anonymous credential systems is the blacklist gaming attack, where the service provider attempts to compromise the privacy of users by generating blacklists maliciously. In this paper, we solve these problems and present the decentralized blacklistable anonymous credential system with reputation, which inherits nearly all features of the BLACR system presented in Au et al. (NDSS'12). However, in our new system, no trusted party is needed to register users. Moreover, blacklists from other service providers can be used safely in the new system assuming a minimal trust assumption holds. Besides, the new system is also partially resilient to the blacklist gaming attack. Technically, the main approach to solving these problems is a novel use of the blockchain technique, which serves as a public append-only ledger and is used to store credentials and blacklists. To simplify the construction, we also present a generic framework for constructing our new system. The generic framework can be instantiated from three different types of cryptographic systems, including the RSA system, the classical DL system, and the pairing-based system, and all three types of instantiations can be supported simultaneously in the framework. To demonstrate the practicability of our system, we also give a proof-of-concept implementation for the instantiation under the RSA system. The experimental results indicate that when authenticating with blacklists of reasonable size, our implementation can fulfill practical efficiency demands, and when authenticating with empty blacklists, it is more efficient than that of Garman et al. (NDSS'14), which presents a decentralized anonymous credential system without considering revocation.
Last updated:  2017-05-04
Post-Quantum Key Exchange on ARMv8-A -- A New Hope for NEON made Simple
Silvan Streit, Fabrizio De Santis
NewHope and NewHope-Simple are two recently proposed post-quantum key exchange protocols based on the hardness of the Ring-LWE problem. Due to their high security margins and performance, there have already been discussions and proposals for integrating them into Internet standards, like TLS, and anonymity network protocols, like Tor. In this work, we present constant-time and vector-optimized implementations of NewHope and NewHope-Simple for ARMv8-A 64-bit processors which target high-speed applications. This architecture is implemented in a growing number of smart phone and tablet processors, and features powerful 128-bit SIMD operations provided by the NEON engine. In particular, we propose the use of three alternative modular reduction methods, which allow us to better exploit NEON parallelism by avoiding larger data types during the Number Theoretic Transform (NTT) and remove the need to transform input coefficients into the Montgomery domain during pointwise multiplications. The NEON-vectorized NTT uses a 16-bit unsigned integer representation and executes in only 18,909 clock cycles on an ARM Cortex-A53 core. Our implementation improves previous assembly-optimized results on ARM NEON platforms by a factor of 3.4 and outperforms the C reference implementation on the same platform by a factor of 8.3. The total time spent on the key exchange was reduced by more than a factor of 3.5 for both protocols.
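Assuming the NewHope modulus $q = 12289$, a 16-bit-friendly Barrett-style reduction (a generic illustration, not necessarily one of the paper's three proposed methods) can be sketched as:

```python
# A hedged sketch of Barrett-style reduction mod the NewHope prime,
# using only operations that fit 16-bit SIMD lanes comfortably.
q = 12289

def barrett_reduce(x):
    # valid for 0 <= x < 2^16; 5 = floor(2^16 / q)
    t = (x * 5) >> 16            # approximate quotient
    r = x - t * q                # r is in [0, 2q)
    return r if r < q else r - q # one conditional subtraction

assert all(barrett_reduce(x) == x % q for x in range(1 << 16))
```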
Last updated:  2017-05-04
Homomorphically Encrypted Arithmetic Operations over the Integer Ring
Chen Xu, Jingwei Chen, Wenyuan Wu, Yong Feng
Fully homomorphic encryption allows cloud servers to evaluate any computable function for clients without revealing any information. It has attracted much attention from both the scientific community and industry since Gentry's seminal scheme. Currently, the Brakerski-Gentry-Vaikuntanathan scheme with its optimizations is one of the most potentially practical schemes and has been implemented in the homomorphic encryption C++ library HElib. HElib supplies friendly interfaces for arithmetic operations on polynomials over finite fields. Based on HElib, Chen and Gong (2015) implemented arithmetic over encrypted integers. In this paper, we revisit the HElib-based implementation of homomorphic arithmetic operations on encrypted integers. Due to several optimizations and more suitable arithmetic circuits for homomorphic evaluation, our implementation is able to homomorphically evaluate 64-bit addition/subtraction and 16-bit multiplication for encrypted integers without bootstrapping. Experiments show that our implementation significantly outperforms that of Chen and Gong.
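The flavor of circuit such an implementation evaluates can be sketched in the clear: a ripple-carry adder built only from XOR and AND, the gate types that map onto homomorphic addition and multiplication of encrypted bits (a generic illustration, not the paper's optimized circuits):

```python
# Plaintext simulation of the binary adder circuit that would be run
# gate-by-gate over encrypted bits in an FHE setting.
def ripple_carry_add(a_bits, b_bits):
    # a_bits, b_bits: little-endian lists of 0/1 of equal length
    out, carry = [], 0
    for a, b in zip(a_bits, b_bits):
        t = a ^ b
        out.append(t ^ carry)             # sum bit
        carry = (a & b) ^ (carry & t)     # carry-out (terms are disjoint)
    return out                            # wraps mod 2^len, as when encrypted

def to_bits(n, w): return [(n >> i) & 1 for i in range(w)]
def from_bits(bs): return sum(b << i for i, b in enumerate(bs))

assert from_bits(ripple_carry_add(to_bits(23, 8), to_bits(42, 8))) == 65
```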
Last updated:  2018-03-10
Four Round Secure Computation without Setup
Zvika Brakerski, Shai Halevi, Antigoni Polychroniadou
We construct a 4-round multi-party computation protocol in the plain model for any functionality, secure against a malicious adversary. Our protocol relies on the sub-exponential hardness of the Learning with Errors (LWE) problem with slightly super-polynomial noise ratio, and on the existence of adaptively secure commitments. Lin, Pass and Soni (FOCS '17) provide an adaptively secure commitment scheme from Time-Lock Puzzles. Our round complexity matches a lower bound of Garg et al. (EUROCRYPT '16), and outperforms the state of the art of 6 rounds based on similar assumptions to ours, and 5 rounds relying on indistinguishability obfuscation and other strong assumptions. To do this, we construct an LWE based multi-key FHE scheme with a very simple one-round distributed setup procedure (vs. the trusted setup required in previous LWE based constructions). This lets us construct the first 3-round semi-malicious MPC protocol without setup from standard LWE using the approach of Mukherjee and Wichs (EUROCRYPT '16). Finally, subexponential hardness and adaptive commitments are used to ''compile'' the protocol into the fully malicious setting.
Last updated:  2017-05-04
Garbled Circuits as Randomized Encodings of Functions: a Primer
Benny Applebaum
Yao's garbled circuit construction is a central cryptographic tool with numerous applications. In this tutorial, we study garbled circuits from a foundational point of view under the framework of randomized encoding (RE) of Functions. We review old and new constructions of REs, present some lower-bounds, and describe some applications. We will also discuss new directions and open problems in the foundations of REs. This is a survey that appeared in a book of surveys in honor of Oded Goldreich's 60th birthday.
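As a taste of the object being surveyed, here is a hedged, textbook-style garbling of a single AND gate (using decryption-check padding instead of the usual point-and-permute optimization; all concrete choices are ours, not the tutorial's):

```python
# A minimal garbled AND gate: four encrypted rows, one per input
# combination, shuffled so the evaluator learns only the output key.
import hashlib, secrets

def H(ka, kb):
    return hashlib.sha256(ka + kb).digest()

def xor(x, y):
    return bytes(a ^ b for a, b in zip(x, y))

PAD = bytes(16)   # all-zero padding lets the evaluator spot the right row

def garble_and():
    # one random 16-byte key per (wire, bit-value) pair
    keys = {(w, v): secrets.token_bytes(16) for w in "abc" for v in (0, 1)}
    rows = []
    for va in (0, 1):
        for vb in (0, 1):
            plain = keys[("c", va & vb)] + PAD          # 32 bytes
            rows.append(xor(H(keys[("a", va)], keys[("b", vb)]), plain))
    secrets.SystemRandom().shuffle(rows)                # hide row order
    return keys, rows

def evaluate(rows, ka, kb):
    for row in rows:
        pt = xor(H(ka, kb), row)
        if pt.endswith(PAD):                            # correct row found
            return pt[:16]
    raise ValueError("no row decrypted")

keys, rows = garble_and()
assert evaluate(rows, keys[("a", 1)], keys[("b", 1)]) == keys[("c", 1)]
assert evaluate(rows, keys[("a", 0)], keys[("b", 1)]) == keys[("c", 0)]
```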
Last updated:  2017-06-27
Time-Memory-Data Tradeoff Attacks against Small-State Stream Ciphers
Matthias Hamann, Matthias Krause, Willi Meier, Bin Zhang
Time-memory-data (TMD) tradeoff attacks limit the security level of many classical stream ciphers (like $E_0$, A5/1, Trivium, Grain) to $\frac{1}{2}n$, where $n$ denotes the inner state length of the underlying keystream generator. This implies that to withstand TMD tradeoff attacks, the state size should be at least double the key size. In 2015, Armknecht and Mikhalev introduced a new line of research, which pursues the goal of reducing the inner state size of lightweight stream ciphers below this boundary by deploying a key-dependent state update function in a Grain-like stream cipher. Although their design Sprout was broken soon after publication, it has raised interest in the design principle, and a number of related ciphers have been suggested since, including Plantlet, a follow-up of Sprout, and the cipher Fruit. In this paper, existing TMD tradeoff attacks are revisited, and new insights on distinguishers and key recovery related to small-state stream ciphers are derived. A particular result is the transfer of a generic distinguishing attack suggested in 2007 by Englund, Hell, and Johansson to this new class of lightweight ciphers. Our analysis shows that the initial hope of achieving full security against TMD tradeoff attacks by continuously using the secret key has failed. In particular, we demonstrate that there are generic distinguishing attacks against Plantlet and Fruit with complexity significantly smaller than that of exhaustive key search. However, by studying the assumptions underlying the applicability of these attacks, we are able to come up with a new design idea for small-state stream ciphers which might finally allow achieving full security against TMD tradeoff attacks. Another contribution of this paper is the first key recovery attack against the most recent version of Fruit. We show that there are at least $2^{64}$ weak keys, each of which does not provide 80-bit security as promised by the designers. This new attack against Fruit, together with previous attacks against Sprout, raises the question of whether a more complicated key schedule than the basic one used in Plantlet is actually beneficial for the security of such ciphers.
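For orientation, the classic Babbage-Golić tradeoff behind the $\frac{1}{2}n$ bound can be stated as follows (standard formulation, not specific to this paper):

```latex
% Babbage-Golic tradeoff (standard formulation): store M sampled states
% in a table (preprocessing P = M), observe D keystream windows online
% (time T = D), and expect a table hit once
\[
  D \cdot M \approx 2^{n}, \qquad \text{so choosing } D = M = 2^{n/2}
  \text{ balances the attack at cost } 2^{n/2}.
\]
```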
Last updated:  2017-05-04
Super-Isolated Elliptic Curves and Abelian Surfaces in Cryptography
Travis Scholl
We call a simple abelian variety over $\mathbb{F}_p$ super-isolated if its ($\mathbb{F}_p$-rational) isogeny class contains no other varieties. The motivation for considering these varieties comes from concerns about isogeny based attacks on the discrete log problem. We heuristically estimate that the number of super-isolated elliptic curves over $\mathbb{F}_p$ with prime order and $p \leq N$, is roughly $\tilde{\Theta}(\sqrt{N})$. In contrast, we prove that there are only 2 super-isolated surfaces of cryptographic size and near-prime order.
Last updated:  2017-05-04
A General Degenerate Grouping Power Attack with Specific Application to SIMON and SPECK
Steven Cavanaugh
A Degenerate Grouping Power Attack (DGPA) is a type of Partitioning Power Analysis (PPA) used to extract secret keys from the power side-channel signal of an encryption algorithm running on a device, together with some known, varying information such as the plaintext or ciphertext associated with each encryption. The DGPA is applied to SIMON and SPECK implementations on MSP430, PIC16F, and Spartan 6 platforms in this work. While keys are successfully recovered from unprotected implementations, guidance is given on a minimum number of rounds, $d$, to perform per clock cycle in FPGAs and ASICs so as to mitigate such attacks for a deployment-dependent maximum quantity of data to be encrypted with a given key. On the Spartan 6, full key recovery of SIMON 64/128 with $d\leq4$ and SPECK 64/128 with $d\leq3$ is trivially achieved in seconds with no more than one million random plaintexts, requiring the use of larger $d$ for most implementations. The amount of work to recover a key as a function of the amount of collected data encrypted with that key is explored. To ensure security when performing most modes of block cipher operation with an algorithm having block size $2n$, a particular key should be used to perform no more than $2^n$ encryptions. A feasible key recovery requiring less than 80 bits of work and data from fewer than $2^{32}$ encryptions is excluded for SIMON 64/128 implementations having $d\geq 9$ and for SPECK 64/128 implementations having $d\geq5$. The DGPA attack method is demonstrated to succeed against a limited data set consisting of one power sample per device clock cycle against a specifically targeted instruction. This provides a basis for low-power, field-deployed power side-channel signal capture hardware for embedded key recovery and exfiltration.
Last updated:  2017-05-01
Quantum one-way permutation over the finite field of two elements
Alexandre de Castro
In quantum cryptography, a one-way permutation is a bounded unitary operator $U: H \mapsto H$ on a Hilbert space $H$ that is easy to compute on every input, but hard to invert given the image of a random input. Levin [Probl. Inf. Transm., vol. 39 (1): 92-103 (2003)] has conjectured that the unitary transformation $g(a,x) = (a,f(x)+ax)$, where $f$ is any length-preserving function and $a,x \in GF_{2^{||x||}}$, is an information-theoretically secure operator within a polynomial factor. Here, we show that Levin's one-way permutation is provably secure because its output values are four maximally entangled two-qubit states, whose probability of factoring approaches zero faster than the multiplicative inverse of any positive polynomial $poly(x)$ over the Boolean ring of all subsets of $x$. Our results demonstrate, via well-known theorems, that the existence of classical one-way functions implies the existence of a universal quantum one-way permutation that cannot be inverted in subexponential time in the worst case.
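The classical map itself is straightforward to evaluate; a minimal Python sketch of the forward computation of $g$ over $GF(2^8)$ (the irreducible polynomial and the sample $f$ are illustrative assumptions, and none of the quantum claims are reproduced here):

    def gf_mul(a, b, n=8, poly=0x1B):   # GF(2^8) with x^8 + x^4 + x^3 + x + 1
        r = 0
        for _ in range(n):
            if b & 1:
                r ^= a
            b >>= 1
            carry = a >> (n - 1)
            a = (a << 1) & ((1 << n) - 1)
            if carry:
                a ^= poly               # reduce: x^8 = x^4 + x^3 + x + 1
        return r

    def g(a, x, f=lambda x: (x * 167 + 13) % 256):  # f: any length-preserving map
        return a, f(x) ^ gf_mul(a, x)               # (a, f(x) + a*x) over GF(2^8)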
Last updated:  2017-06-02
Watermarking Cryptographic Functionalities from Standard Lattice Assumptions
Sam Kim, David J. Wu
A software watermarking scheme allows one to embed a "mark" into a program without significantly altering the behavior of the program. Moreover, it should be difficult to remove the watermark without destroying the functionality of the program. Recently, Cohen et al. (STOC 2016) and Boneh et al. (PKC 2017) showed how to watermark cryptographic functions such as PRFs using indistinguishability obfuscation. Notably, in their constructions, the watermark remains intact even against arbitrary removal strategies. A natural question is whether we can build watermarking schemes from standard assumptions that achieve this strong mark-unremovability property. We give the first construction of a watermarkable family of PRFs that satisfy this strong mark-unremovability property from standard lattice assumptions (namely, the learning with errors (LWE) and the one-dimensional short integer solution (SIS) problems). As part of our construction, we introduce a new cryptographic primitive called a translucent PRF. Next, we give a concrete construction of a translucent PRF family from standard lattice assumptions. Finally, we show that using our new lattice-based translucent PRFs, we obtain the first watermarkable family of PRFs with strong unremovability against arbitrary strategies from standard assumptions.
Last updated:  2017-05-08
Fault attack on Supersingular Isogeny Cryptosystems
Yan Bo Ti
We present the first fault attack on cryptosystems based on supersingular isogenies. The attack aims to change the base point to a random point on the curve via a fault injection during the computation of the auxiliary points. We show that a single successful perturbation reveals the secret isogeny with high probability. We exhibit the attack against signature schemes and key-exchange protocols that have validation procedures in place. Our paper therefore demonstrates the need to incorporate checks in implementations of the cryptosystem.
Last updated:  2017-05-01
Faster Secure Multi-Party Computation of AES and DES Using Lookup Tables
Marcel Keller, Emmanuela Orsini, Dragos Rotaru, Peter Scholl, Eduardo Soria-Vazquez, Srinivas Vivek
We present an actively secure protocol for secure multi-party computation based on lookup tables, by extending the recent two-party 'TinyTable' protocol of Damgard et al. (ePrint 2016). Like TinyTable, an attractive feature of our protocol is a very fast and simple online evaluation phase. We also give a new method for efficiently implementing the preprocessing material required for the online phase using arithmetic circuits over characteristic-two fields. This improves over the suggested method from TinyTable by at least a factor of 50. As an application of our protocol, we consider secure computation of the Triple DES and AES block ciphers, computing the S-boxes via lookup tables. Additionally, we adapt a technique for evaluating (Triple) DES based on a polynomial representation of its S-boxes that was recently proposed in the side-channel countermeasures community. We compare the above two approaches with an implementation. The table lookup method leads to a very fast online throughput of over 230,000 blocks per second for AES and 45,000 for Triple DES. The preprocessing cost is not much more than that of previous methods, which have a much slower online time.
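A minimal sketch of the masked-table idea behind such a fast online phase (a trusted-dealer toy in Python with XOR sharing and a 4-bit table; the actual preprocessing protocol and security machinery are elided):

    import secrets

    def preprocess(table, nbits):
        # trusted-dealer preprocessing: a secret index mask s, an output mask w,
        # and the table scrambled so that scrambled[x ^ s] == table[x] ^ w
        s = secrets.randbelow(1 << nbits)
        w = secrets.randbelow(1 << nbits)
        scrambled = [table[j ^ s] ^ w for j in range(1 << nbits)]
        return s, w, scrambled

    def online_lookup(x_share0, x_share1, s, scrambled):
        # the parties open only h = x ^ s, which is uniformly random,
        # then read scrambled[h] == table[x] ^ w, a masked output
        h = x_share0 ^ x_share1 ^ s
        return scrambled[h]

    # toy run with a 4-bit bijective table
    sbox = [(7 * x + 3) % 16 for x in range(16)]
    s, w, T = preprocess(sbox, 4)
    x, r = 0b0110, secrets.randbelow(16)
    assert online_lookup(r, x ^ r, s, T) ^ w == sbox[x]

In a real protocol $s$, $w$, and the scrambled table are secret-shared among the parties; the online phase then consists only of one opening and a local table read, which is why it is so fast.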
Last updated:  2017-05-01
Privacy-Preserving Multi-Party Bartering Secure Against Active Adversaries
Stefan Wüller, Ulrike Meyer, Susanne Wetzel
A majority of electronic bartering transactions are carried out via online platforms. Typically, these platforms require users to disclose sensitive information about their trade capabilities, which might restrict their room for negotiation. It is in this context that we propose a novel decentralized, privacy-preserving bartering protocol for multiple parties that offers the same privacy guarantees as provided by traditional bartering and by cash payments. The proposed protocol is even secure against an active attacker who controls a majority of colluding parties.
Last updated:  2018-08-30
Determining the Minimum Degree of an S-box
P. R. Mishra, Sumanta Sarkar, Indivar Gupta
S-boxes are important building blocks in block ciphers. For a secure design, one should not choose an S-box of low degree. In this work we consider the minimum degree of an S-box, which is the minimum value of the degree of the nonzero component functions of the S-box. For an S-box $F : \mathbb{F}_2^n \rightarrow \mathbb{F}_2^m$, there are $2^m - 1$ nonzero component functions. We show that there is a better way to determine the minimum degree of an S-box, one that does not require checking all the $2^m - 1$ component functions. To the best of our knowledge, this is the best algorithm for determining the minimum degree of an S-box in the literature.
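The naive test that such an algorithm improves on can be written down directly: compute the algebraic normal form of every nonzero component function and take the smallest degree (a brute-force Python sketch, exponential in $m$):

    def anf(tt):
        # binary Moebius transform of a truth table of length 2^n
        tt = list(tt)
        n = len(tt).bit_length() - 1
        for i in range(n):
            for x in range(len(tt)):
                if x & (1 << i):
                    tt[x] ^= tt[x ^ (1 << i)]
        return tt

    def min_degree(sbox, n, m):
        best = n
        for c in range(1, 1 << m):           # every nonzero component mask c
            comp = [bin(c & sbox[x]).count("1") & 1 for x in range(1 << n)]
            coeffs = anf(comp)
            deg = max((bin(u).count("1") for u in range(1 << n) if coeffs[u]),
                      default=0)
            best = min(best, deg)
        return best

    # e.g. the identity S-box has only linear components
    assert min_degree(list(range(8)), 3, 3) == 1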
Last updated:  2018-11-12
Do you need a Blockchain?
Karl Wüst, Arthur Gervais
Blockchain is being praised as a technological innovation that could revolutionize how society trades and interacts. This reputation is in particular attributable to its property of allowing mutually mistrusting entities to exchange financial value and interact without relying on a trusted third party. A blockchain moreover provides integrity-protected data storage and offers process transparency. In this article we critically analyze whether a blockchain is indeed the appropriate technical solution for a particular application scenario. We differentiate between permissionless (e.g., Bitcoin/Ethereum) and permissioned (e.g., Hyperledger/Corda) blockchains and contrast their properties to those of a centrally managed database. We provide a structured methodology to determine the appropriate technical solution for a particular application problem. Given our methodology, we analyze in depth three use cases --- Supply Chain Management, Interbank and International Payments, and Decentralized Autonomous Organizations --- and conclude the article with an outlook on further opportunities.
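As a loose paraphrase, this kind of decision flow can be compressed into a few branches (the sketch below is an approximation, not a faithful reproduction of the published flowchart; consult the paper for the exact questions and their ordering):

    def which_technology(need_shared_state, multiple_writers, usable_online_ttp,
                         writers_known, writers_trusted, public_verifiability):
        if not need_shared_state or not multiple_writers:
            return "no blockchain needed"
        if usable_online_ttp:
            return "rely on the trusted third party (plain database)"
        if not writers_known:
            return "permissionless blockchain"
        if writers_trusted:
            return "shared central database"
        return ("public permissioned blockchain" if public_verifiability
                else "private permissioned blockchain")

    print(which_technology(True, True, False, True, False, True))
    # -> public permissioned blockchain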
Last updated:  2018-06-11
Loop-abort faults on supersingular isogeny cryptosystems
Alexandre Gélin, Benjamin Wesolowski
Cryptographic schemes based on supersingular isogenies have become an active area of research in the field of post-quantum cryptography. We investigate the resistance of these cryptosystems to fault injection attacks. It appears that the iterative structure of the secret isogeny computation renders these schemes vulnerable to loop-abort attacks. Loop-abort faults allow an attacker to perform a full key recovery, bypassing all previously introduced validation methods. Therefore, implementing additional countermeasures seems unavoidable for applications where physical attacks are relevant.
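A toy model shows why loop aborts are so damaging to an iterative secret computation: aborting after $i$ iterations makes the output depend on only the first $i$ key digits, which can then be brute-forced one at a time. In this Python sketch the isogeny step is replaced by a hash, and `oracle_abort` stands in for the faulted device:

    import hashlib

    def step(state, digit):
        # stand-in for one low-degree isogeny step selected by a key digit
        return hashlib.sha256(state + bytes([digit])).digest()

    def run(key_digits, start=b"public", abort_after=None):
        state = start
        for i, d in enumerate(key_digits):
            if abort_after is not None and i == abort_after:
                break                     # the injected fault aborts the loop
            state = step(state, d)
        return state

    def recover(oracle_abort, n_digits, base=4):
        digits, state = [], b"public"
        for i in range(1, n_digits + 1):
            target = oracle_abort(i)      # output when aborted after i steps
            for d in range(base):
                if step(state, d) == target:
                    digits.append(d)      # one digit recovered per fault
                    state = step(state, d)
                    break
        return digits

    key = [3, 1, 0, 2]
    assert recover(lambda i: run(key, abort_after=i), 4) == key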
Last updated:  2017-05-01
Fully Dynamic Multi Target Homomorphic Attribute-Based Encryption
Ryo Hiromasa, Yutaka Kawai
We propose multi-target homomorphic attribute-based encryption (MT-HABE) with fully dynamic homomorphic evaluation: it can take as input arbitrary additional ciphertexts during homomorphic computation. In the previous MT-HABE of Brakerski et al. (TCC 2016-B), the output of homomorphic computation, which is related to a policy set, cannot be computed with a fresh ciphertext whose attribute does not satisfy any policy in the set. This is because the underlying multi-key fully homomorphic encryption (MKFHE) is single-hop: some keys are related to the output of homomorphic computation, which cannot be combined with ciphertexts encrypted under other keys. To implement fully dynamic homomorphic evaluation, we construct MT-HABE from the multi-hop MKFHE proposed by Peikert and Shiehian (TCC 2016-B).
Last updated:  2017-04-28
A crossbred algorithm for solving Boolean polynomial systems
Antoine Joux, Vanessa Vitse
We consider the problem of solving multivariate systems of Boolean polynomial equations: starting from a system of $m$ polynomials of degree at most $d$ in $n$ variables, we want to find its solutions over $\mathbb{F}_2$. Except for $d=1$, the problem is known to be NP-hard, and its hardness has been used to create public cryptosystems; this motivates the search for faster algorithms to solve this problem. After reviewing the state of the art, we describe a new algorithm and show that it outperforms previously known methods in a wide range of relevant parameters. In particular, the first named author has been able to solve all the Fukuoka Type I MQ challenges, culminating with the resolution of a system of 148 quadratic equations in 74 variables in less than a day (and with a lot of luck).
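For scale, the baseline that any such algorithm must beat is plain exhaustive search over $\mathbb{F}_2^n$ (a Python sketch; the `polys` are callables evaluating each polynomial on a bit vector):

    from itertools import product

    def solve_bruteforce(polys, n):
        # enumerate all 2^n assignments; keep those where every polynomial vanishes
        return [x for x in product((0, 1), repeat=n)
                if all(p(x) % 2 == 0 for p in polys)]

    # example: x0*x1 + x2 = 0 and x0 + x1 + 1 = 0 over F_2
    sols = solve_bruteforce([lambda x: x[0] * x[1] + x[2],
                             lambda x: x[0] + x[1] + 1], 3)
    assert sols == [(0, 1, 0), (1, 0, 0)]

At $n = 74$ this costs $2^{74}$ evaluations, which is why structured algorithms exploiting the low degree of the system are needed to solve such challenges in a day.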
Last updated:  2017-06-13
On the Construction of Lightweight Orthogonal MDS Matrices
Lijing Zhou, Licheng Wang, Yiru Sun
In this paper, we investigate four problems. First, it is known that a matrix is MDS if and only if all sub-matrices of this matrix of order from 1 to $n$ have full rank. We prove that an orthogonal matrix is MDS if and only if all sub-matrices of this orthogonal matrix of order from 1 to $\lfloor\frac{n}{2}\rfloor$ have full rank; with this theorem, the computation required to construct orthogonal MDS matrices is greatly reduced. Second, although it has been proven that $2^d\times2^d$ circulant orthogonal matrices do not exist over finite fields, we show that they also do not exist over a larger set. Third, previous algorithms have to continually change entries of the matrix to construct a large number of candidates, among which only very few are orthogonal matrices. Using the matrix polynomial residue ring and the minimal polynomials of lightweight element-matrices, we propose a highly efficient algorithm for constructing $4\times4$ circulant orthogonal MDS matrices, in which every candidate is guaranteed to be a circulant orthogonal matrix. Finally, we use this algorithm to construct many lightweight results, some of which are constructed for the first time.
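To make the first result concrete, here is a Python sketch of both tests over a toy $GF(2^4)$ with modulus $x^4 + x + 1$: the naive MDS check over all orders $1$ to $n$, and the reduced check up to $\lfloor n/2 \rfloor$ that the theorem permits for orthogonal matrices (the field and parameters are illustrative assumptions):

    from itertools import combinations

    def gf_mul(a, b):                      # GF(2^4), modulus x^4 + x + 1
        r = 0
        for _ in range(4):
            if b & 1: r ^= a
            b >>= 1
            hi = a & 0x8
            a = (a << 1) & 0xF
            if hi: a ^= 0x3                # reduce: x^4 = x + 1
        return r

    def gf_pow(a, e):
        r = 1
        while e:
            if e & 1: r = gf_mul(r, a)
            a = gf_mul(a, a)
            e >>= 1
        return r

    def nonsingular(M):
        # Gaussian elimination over GF(2^4); inverses via a^14 = a^(-1)
        M, k = [row[:] for row in M], len(M)
        for c in range(k):
            piv = next((r for r in range(c, k) if M[r][c]), None)
            if piv is None:
                return False
            M[c], M[piv] = M[piv], M[c]
            inv = gf_pow(M[c][c], 14)
            for r in range(c + 1, k):
                if M[r][c]:
                    f = gf_mul(M[r][c], inv)
                    M[r] = [M[r][j] ^ gf_mul(f, M[c][j]) for j in range(k)]
        return True

    def is_mds(M, max_order=None):
        n = len(M)
        for k in range(1, (max_order or n) + 1):
            for rows in combinations(range(n), k):
                for cols in combinations(range(n), k):
                    if not nonsingular([[M[r][c] for c in cols] for r in rows]):
                        return False
        return True

    # the paper's theorem: for orthogonal M, is_mds(M, max_order=len(M) // 2) suffices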
Last updated:  2017-05-24
"The Simplest Protocol for Oblivious Transfer'' Revisited
Ziya Alper Genç, Vincenzo Iovino, Alfredo Rial
In 2015, Chou and Orlandi presented an oblivious transfer protocol that drew a lot of attention from both theorists and practitioners due to its extreme simplicity and high efficiency. Chou and Orlandi claimed that their protocol is UC-secure in the random oracle model under dynamic corruptions, which is a very strong security guarantee. Unfortunately, in this work we point out a flaw in their security proof for the case of sender corruption. We define a decisional problem and prove that, if a correct security proof is provided, then this problem can be solved correctly with overwhelming probability. Therefore, the protocol by Chou and Orlandi cannot be instantiated securely with groups for which our decisional problem cannot be solved correctly with overwhelming probability. Our decisional problem can be solved with overwhelming probability when a DDH oracle is provided. Therefore, it seems likely that the protocol by Chou and Orlandi can be instantiated securely with gap-DH groups.
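For context, the protocol under discussion is compact enough to sketch in a few lines of Python (toy modulus and generator, hashing and message encoding elided; this deliberately glosses over exactly the instantiation subtleties the paper analyzes):

    import hashlib, secrets

    p = 2**127 - 1       # toy prime modulus; NOT a safe group choice
    g = 3

    def H(x):
        return hashlib.sha256(str(x).encode()).hexdigest()

    # sender
    a = secrets.randbelow(p - 2) + 1
    A = pow(g, a, p)

    # receiver with choice bit c
    c = 1
    b = secrets.randbelow(p - 2) + 1
    B = pow(g, b, p) if c == 0 else (A * pow(g, b, p)) % p
    k_receiver = H(pow(A, b, p))

    # sender derives both keys; the receiver can compute only k_c
    k0 = H(pow(B, a, p))
    k1 = H(pow((B * pow(A, p - 2, p)) % p, a, p))   # B / A via Fermat inverse
    assert (k0 if c == 0 else k1) == k_receiver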
Last updated:  2017-05-02
Enforcing Input Correctness via Certification in Garbled Circuit Evaluation
Yihua Zhang, Marina Blanton, Fattaneh Bayatbabolghani
Secure multi-party computation allows a number of participants to securely evaluate a function on their private inputs and has a growing number of applications. Two standard adversarial models that treat the participants as semi-honest or malicious, respectively, are normally considered for showing security of constructions in this framework. In this work, we go beyond the standard security model in the presence of malicious participants and treat the problem of enforcing correct inputs to be entered into the computation. We achieve this by having a certification authority certify the user's information, which is consequently used in secure two-party computation based on garbled circuit evaluation. The focus of this work is on enforcing correctness of the garbler's inputs via certification, as prior work already allows one to achieve this goal for the circuit evaluator's input. Thus, in this work, we put forward a novel approach for certifying the user's input and tying certification to the garbler's input used during secure function evaluation based on garbled circuits. Our construction achieves notable performance, adding only one (standard) signature verification and $O(n\rho)$ symmetric key/hash operations to the cost of garbled circuit evaluation in the malicious model via cut-and-choose, in which $\rho$ circuits are garbled and $n$ is the length of the garbler's input in bits. Security of our construction is rigorously proved in the standard model.
Last updated:  2017-04-28
Analysis of Toeplitz MDS Matrices
Sumanta Sarkar, Habeeb Syed
This work considers the problem of constructing efficient MDS matrices over the field $\mathbb{F}_{2^m}$. Efficiency is measured by the metric XOR count, which was introduced by Khoo et al. at CHES 2014. Recently Sarkar and Syed (ToSC Vol. 1, 2016) have shown the existence of $4\times 4$ Toeplitz MDS matrices with optimal XOR counts. In this paper, we present some characterizations of Toeplitz matrices in light of the MDS property. Our study leads to improving the known bounds on XOR counts of $8\times 8$ MDS matrices by obtaining Toeplitz MDS matrices with lower XOR counts over $\mathbb{F}_{2^4}$ and $\mathbb{F}_{2^8}$.
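The XOR-count metric is easy to compute for a single field element: write multiplication by $\alpha$ as a binary $n \times n$ matrix and charge each row one XOR per extra one. A Python sketch over $GF(2^4)$ with modulus $x^4 + x + 1$ (the paper's object of study is the XOR count of full MDS matrices, which are built from such entries):

    def gf_mul(a, b):                          # GF(2^4) with x^4 + x + 1
        r = 0
        for _ in range(4):
            if b & 1: r ^= a
            b >>= 1
            hi = a & 0x8
            a = (a << 1) & 0xF
            if hi: a ^= 0x3
        return r

    def xor_count(alpha, n=4):
        # column i of the multiplication matrix is alpha * x^i
        cols = [gf_mul(alpha, 1 << i) for i in range(n)]
        rows = [sum((cols[i] >> j) & 1 for i in range(n)) for j in range(n)]
        return sum(w - 1 for w in rows if w)   # w ones in a row cost w - 1 XORs

    assert xor_count(1) == 0    # multiplication by 1 is free
    assert xor_count(2) == 1    # multiplication by x costs one XOR here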
Last updated:  2017-12-13
Fork-Free Hybrid Consensus with Flexible Proof-of-Activity
Zhiqiang Liu, Shuyang Tang, Sherman S. M. Chow, Zhen Liu, Yu Long
Bitcoin and its underlying blockchain mechanism have been attracting much attention. One of their core innovations, Proof-of-Work (PoW), is notoriously inefficient, which potentially motivates a centralization of computing power, defeating the original goal of decentralization. Proof-of-Stake (PoS) was later proposed to replace PoW. However, both PoW and PoS have different inherent advantages and disadvantages, as does the Proof-of-Activity (PoA) of Bentov et al. (SIGMETRICS 2014), which offers only limited combinations of the two mechanisms. On the other hand, the hybrid consensus protocol of Pass and Shi (ePrint 16/917) aims to improve efficiency by dynamically maintaining a rotating committee. Yet, there are unsatisfactory issues, including chain forks and fair committee election. In this paper, we first devise a generalized variant of PoW. We then leverage our newly proposed generalized PoW to construct a fork-free hybrid consensus protocol, which addresses the issues faced by the existing hybrid consensus mechanism. We further combine our fork-free hybrid consensus mechanism with PoS for a flexible version of PoA that offers a flexible combination of PoW and PoS. Compared with Bentov et al.'s PoA, our "flexible PoA" improves efficiency and provides more flexible combinations of PoW and PoS, resulting in a more powerful and applicable consensus protocol.
Last updated:  2017-04-28
BitFlip: A Randomness-Rich Cipher
Gideon Samid, Serguei Popov
We present a cipher that represents a novel strategy: replacing algorithmic complexity with computational simplicity while generating cryptographic efficacy through arbitrarily large quantities of randomness. The BitFlip cipher allows its user to defend herself with credibly appraised mathematical intractability, well-hinged on solid combinatorics; this is the situation when the amount of randomness is small relative to the accumulated amount of processed plaintext. Deploying more randomness, BitFlip will frustrate its cryptanalyst with terminal equivocation among two or more plausible message candidates. This equivocation defense can be increased simply by increasing the amount of deployed randomness, coming arbitrarily close to Vernam's perfect secrecy. BitFlip is structured as a super-polyalphabetic cipher where a letter comprising $2n$ bits is pointed to by any $2n$-bit string at Hamming distance $n$ from it. When a received $2n$-bit string is not at Hamming distance $n$ from any letter in the reader's alphabet, it is regarded as null. This allows for co-encryption of several messages, each over its respective alphabet, thereby offering a powerful equivocation defense because the ciphertext does not indicate which alphabet the intended reader is using. BitFlip becomes increasingly timely and practical, exploiting the advent of high-quality non-algorithmic randomness as well as the effect of Moore's law on the cost of handling large amounts of memory. BitFlip is a natural fit for what is fast emerging as the biggest customer of cryptography: the Internet of Things.
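A minimal Python sketch of the encode/decode rule described above (the shared alphabet plays the role of the key; the sizes and null-handling policy are illustrative assumptions):

    import random

    def hamming(a, b):
        return bin(a ^ b).count("1")

    def encrypt_letter(letter, n):
        # flip exactly n of the 2n bit positions, chosen at random
        c = letter
        for i in random.sample(range(2 * n), n):
            c ^= 1 << i
        return c

    def decrypt(c, alphabet, n):
        hits = [L for L in alphabet if hamming(c, L) == n]
        return hits[0] if len(hits) == 1 else None   # None: null or ambiguous

    # toy run with n = 8 (16-bit letters)
    n = 8
    alphabet = [random.getrandbits(2 * n) for _ in range(4)]
    assert decrypt(encrypt_letter(alphabet[2], n), alphabet, n) in (alphabet[2], None)

Each letter has many valid ciphertexts (all $\binom{2n}{n}$ strings at distance $n$), which is what makes the cipher polyalphabetic and enables the null and co-encryption mechanisms.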
Last updated:  2017-04-28
The Complexity of Public-Key Cryptography
Boaz Barak
We survey the computational foundations for public-key cryptography. We discuss the computational assumptions that have been used as bases for public-key encryption schemes, and the types of evidence we have for the veracity of these assumptions. This survey/tutorial was published in the book "Tutorials on the Foundations of Cryptography", dedicated to Oded Goldreich on his 60th birthday.
Last updated:  2021-02-26
Round-Preserving Parallel Composition of Probabilistic-Termination Cryptographic Protocols
Ran Cohen, Sandro Coretti, Juan Garay, Vassilis Zikas
An important benchmark for multi-party computation (MPC) protocols is their round complexity. For several important MPC tasks, such as broadcast, (tight) lower bounds on the round complexity are known. However, some of these lower bounds can be circumvented when the termination round of every party is not a priori known, and simultaneous termination is not guaranteed. Protocols with this property are called \emph{probabilistic-termination (PT)} protocols. Running PT protocols in parallel affects the round complexity of the resulting protocol in somewhat unexpected ways. For instance, an execution of $m$ protocols with constant expected round complexity might take $O(\log m)$ rounds to complete. In a seminal work, Ben-Or and El-Yaniv (Distributed Computing '03) developed a technique for parallel execution of arbitrarily many broadcast protocols, while preserving expected round complexity. More recently, Cohen et al. (CRYPTO '16) devised a framework for universal composition of PT protocols, and provided the first composable parallel-broadcast protocol with a simulation-based proof. These constructions crucially rely on the fact that broadcast is "privacy free," and do not generalize to arbitrary protocols in a straightforward way. This raises the question of whether it is possible to execute arbitrary PT protocols in parallel, without increasing the round complexity. In this paper we tackle this question and provide both feasibility and infeasibility results. We construct a round-preserving protocol compiler, secure against a dishonest minority of actively corrupted parties, that compiles arbitrary protocols into a protocol realizing their parallel composition, while having black-box access to the underlying \emph{protocols}. Furthermore, we prove that the same cannot be achieved, using known techniques, given only black-box access to the \emph{functionalities} realized by the protocols, unless merely security against semi-honest corruptions is required, for which case we provide a protocol. To prove our results, we utilize the language and results of Cohen et al., which we extend to capture parallel composition and reactive functionalities, and to handle the case of an honest majority.
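The $O(\log m)$ phenomenon is easy to observe empirically: parallel composition waits for the slowest of $m$ executions, and the maximum of $m$ geometric termination times grows logarithmically. A quick Python simulation, assuming each protocol terminates in any given round with probability $1/2$:

    import random, statistics

    def rounds_one(p=0.5):
        # rounds until a single PT protocol terminates (geometric distribution)
        r = 1
        while random.random() > p:
            r += 1
        return r

    def parallel_rounds(m):
        # naive parallel composition finishes when the slowest protocol does
        return max(rounds_one() for _ in range(m))

    for m in (1, 10, 100, 1000):
        avg = statistics.mean(parallel_rounds(m) for _ in range(2000))
        print(m, round(avg, 2))   # grows roughly like log2(m)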
Last updated:  2019-06-02
TOPPSS: Cost-minimal Password-Protected Secret Sharing based on Threshold OPRF
Stanislaw Jarecki, Aggelos Kiayias, Hugo Krawczyk, Jiayu Xu
We present TOPPSS, the most efficient Password-Protected Secret Sharing (PPSS) scheme to date. A $(t, n)$-threshold PPSS, introduced by Bagherzandi et al., allows a user to share a secret among $n$ servers so that the secret can later be reconstructed by the user from any subset of $t+1$ servers with the sole knowledge of a password. It is guaranteed that any coalition of up to $t$ corrupt servers learns nothing about the secret (or the password). In addition to providing strong protection to secrets stored online, PPSS schemes give rise to efficient Threshold PAKE (T-PAKE) protocols that armor single-server password authentication against the inherent vulnerability to offline dictionary attacks in case of server compromise. TOPPSS is password-only, i.e., it does not rely on public keys in reconstruction, and enjoys remarkable efficiency: a single communication round, a single exponentiation per server, and just two exponentiations per client, regardless of the number of servers. TOPPSS satisfies threshold security under the (Gap) One-More Diffie-Hellman (OMDH) assumption in the random-oracle model, as in several prior efficient realizations of PPSS/T-PAKE. Moreover, we show that TOPPSS realizes the Universally Composable PPSS notion of Jarecki et al. under a generalization of OMDH, the Threshold One-More Diffie-Hellman (T-OMDH) assumption. We show that the T-OMDH and OMDH assumptions are both hard in the generic group model. The key technical tool we introduce is a universally composable Threshold Oblivious PRF, which is of independent interest and applicability.
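The single-round, one-exponentiation-per-server profile comes from the blinded Diffie-Hellman evaluation at the core of such threshold OPRFs. A toy Python sketch with the key additively shared across two servers (the tiny safe-prime group and the simplified hash-to-group are illustrative assumptions; the real scheme's threshold sharing and hashing differ):

    import hashlib, secrets

    p, q, g = 1019, 509, 4        # toy group: quadratic residues mod p = 2q + 1

    def H1(pw):                    # simplified hash-to-group (NOT how it's done)
        e = int.from_bytes(hashlib.sha256(pw).digest(), "big") % (q - 1) + 1
        return pow(g, e, p)

    k1, k2 = secrets.randbelow(q), secrets.randbelow(q)   # additive shares of k
    k = (k1 + k2) % q

    # client blinds the password point with a random exponent r
    r = secrets.randbelow(q - 1) + 1
    a = pow(H1(b"hunter2"), r, p)

    # each server applies its key share: one exponentiation each, one round total
    b = (pow(a, k1, p) * pow(a, k2, p)) % p

    # client unblinds; the result equals H1(pw)^k, the OPRF value
    out = pow(b, pow(r, -1, q), p)
    assert out == pow(H1(b"hunter2"), k, p)

The servers see only the uniformly blinded value $a$, so they learn nothing about the password, while the client learns only the combined PRF output.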