All papers in 2019 (Page 10 of 1498 results)

Last updated:  2020-11-27
New Primitives for Actively-Secure MPC over Rings with Applications to Private Machine Learning
Ivan Damgård, Daniel Escudero, Tore Frederiksen, Marcel Keller, Peter Scholl, Nikolaj Volgushev
At CRYPTO 2018 Cramer et al. presented SPDZ2k, a new secret-sharing based protocol for actively secure multi-party computation against a dishonest majority, that works over rings instead of fields. Their protocol uses slightly more communication than competitive schemes working over fields. However, their approach allows for arithmetic to be carried out using native 32 or 64-bit CPU operations rather than modulo a large prime. The authors thus conjectured that the increased communication would be more than made up for by the increased efficiency of implementations. In this work we answer their conjecture in the affirmative. We do so by implementing their scheme, and designing and implementing new efficient protocols for equality test, comparison, and truncation over rings. We further show that these operations find application in the machine learning domain, and indeed significantly outperform their field-based competitors. In particular, we implement and benchmark oblivious algorithms for decision tree and support vector machine (SVM) evaluation.
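The following minimal sketch (added for illustration, not taken from the paper) shows why arithmetic over the ring Z_{2^64} maps to native CPU operations while field-based protocols must reduce modulo a large prime; all values and the choice of prime are arbitrary.

    # Illustrative sketch: arithmetic in the ring Z_{2^64} is exactly the wrap-around
    # behaviour of native 64-bit CPU words, whereas field-based protocols must reduce
    # modulo a large prime after every operation.
    MASK64 = (1 << 64) - 1          # simulates a 64-bit register in pure Python

    x = 0x8000000000003039
    y = 0x8000000000010932

    # Ring Z_{2^64}: on real hardware these are single ADD/MUL instructions.
    sum_ring  = (x + y) & MASK64
    prod_ring = (x * y) & MASK64

    # Field Z_p for a large prime p (chosen here only for illustration):
    p = (1 << 127) - 1              # a 127-bit Mersenne prime
    sum_field  = (x + y) % p
    prod_field = (x * y) % p

    print(hex(sum_ring), hex(prod_ring))
    print(hex(sum_field), hex(prod_field))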
Last updated:  2020-02-26
Partial Secret Sharing
Amir Jafari, Reza Kaboli, Shahram Khazaei
The information ratio of an access structure is an important measure of the efficiency of the best secret sharing scheme realizing it. The most common notion of secret sharing security is that of total (perfect) realization. Two well-known relaxations are the notions of statistical and quasi-total secret sharing. In this paper, we study the relation between different security notions. The most significant and technical result of this paper is that quasi-total and total information ratios coincide for linear schemes. To this end, we employ some tools from linear algebra in combination with a newly introduced relaxed security notion, called partial realization. We provide some intuition as to why proving coincidence/separation between total and quasi-total information ratios for the class of abelian schemes is probably much more challenging. We also present some additional results which shed further light on our understanding of different security notions. In particular, one of our results, in combination with a recent result, shows that statistical and total security notions coincide for the class of group-homomorphic schemes, or maybe even a larger class.
Last updated:  2019-06-02
A Candidate Access Structure for Super-polynomial Lower Bound on Information Ratio
Shahram Khazaei
The contribution vector (convec) of a secret sharing scheme is the vector of all share sizes divided by the secret size. A measure on the convec (e.g., its maximum or average) is considered as a criterion of efficiency of secret sharing schemes, which is referred to as the information ratio. It is generally believed that there exists a family of access structures such that the information ratio of any secret sharing scheme realizing it is $2^{\mathrm{\Omega}(n)}$, where the parameter $n$ stands for the number of participants. The best known lower bound, due to Csirmaz (1994), is $\mathrm{\Omega}(n/\log n)$. Closing this gap is a long-standing open problem in cryptology. Using a technique called \emph{substitution}, we recursively construct a family of access structures by starting from that of Csirmaz, which might be a candidate for super-polynomial information ratio. We provide support for this possibility by showing that our family has information ratio ${n^{\mathrm{\Omega}(\frac{\log n}{\log \log n})}}$, assuming the truth of a well-stated information-theoretic conjecture, called the \emph{substitution conjecture}. The substitution method is a technique for composition of access structures, similar to the so called block composition of Boolean functions, and the substitution conjecture is reminiscent of the Karchmer-Raz-Wigderson conjecture on depth complexity of Boolean functions. It emerges after introducing the notion of convec set for an access structure, a subset of $n$-dimensional real space, which includes all achievable convecs. We prove some topological properties about convec sets and raise several open problems.
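A small worked example of the convec notion (added here for illustration, not taken from the paper): for Shamir's $(t,n)$-threshold scheme over $\mathbb{F}_q$, the secret and every share are single field elements, so

    \mathrm{convec} = \left(\tfrac{\log q}{\log q}, \dots, \tfrac{\log q}{\log q}\right) = (1, \dots, 1),

and both the maximum and the average of the convec, i.e. the information ratio, equal $1$ (such schemes are called ideal). The candidate family constructed in the paper is conjectured to force this quantity to grow super-polynomially in $n$.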
Last updated:  2019-07-26
Discretisation and Product Distributions in Ring-LWE
Sean Murphy, Rachel Player
A statistical framework applicable to Ring-LWE was outlined by Murphy and Player (IACR eprint 2019/452). Its applicability was demonstrated with an analysis of the decryption failure probability for degree-1 and degree-2 ciphertexts in the homomorphic encryption scheme of Lyubashevsky, Peikert and Regev (IACR eprint 2013/293). In this paper, we clarify and extend results presented by Murphy and Player. Firstly, we make precise the approximation of the discretisation of a Normal random variable as a Normal random variable, as used in the encryption process of Lyubashevsky, Peikert and Regev. Secondly, we show how to extend the analysis given by Murphy and Player to degree-k ciphertexts, by precisely characterising the distribution of the noise in these ciphertexts.
Last updated:  2020-10-05
DLSAG: Non-Interactive Refund Transactions For Interoperable Payment Channels in Monero
Pedro Moreno-Sanchez, Arthur Blue, Duc V. Le, Sarang Noether, Brandon Goodell, Aniket Kate
Monero has emerged as one of the leading cryptocurrencies with privacy by design. However, this comes at the price of reduced expressiveness and interoperability as well as severe scalability issues. First, Monero is restricted to coin exchanges among individual addresses and no further functionality is supported. Second, transactions are authorized by linkable ring signatures, a digital signature scheme only available in Monero, thereby hindering interoperability with the rest of the cryptocurrencies. Third, Monero transactions have a high on-chain footprint, which leads to rapid ledger growth and thus scalability issues. In this work, we extend Monero's expressiveness and interoperability while mitigating its scalability issues. We present \emph{Dual Linkable Spontaneous Anonymous Group Signature for Ad Hoc Groups (DLSAG)}, a novel linkable ring signature scheme that enables for the first time \emph{refund transactions} natively in Monero: DLSAG can seamlessly be implemented along with other cryptographic tools already available in Monero such as commitments and range proofs. We formally prove that DLSAG achieves the same security and privacy notions introduced in the original linkable ring signature~\cite{Liu2004}, namely, unforgeability, signer ambiguity, and linkability. We have evaluated DLSAG and showed that it imposes slightly lower computation overhead and similar communication overhead compared to the current digital signature scheme in Monero, demonstrating its practicality. We further show how to leverage DLSAG to enable off-chain scalability solutions in Monero such as payment channels and payment-channel networks as well as atomic swaps and interoperable payments with virtually all cryptocurrencies available today. DLSAG is currently being discussed within the Monero community as an option for possible adoption as a key building block for expressiveness, interoperability, and scalability.
Last updated:  2022-08-25
Computing Primitive Idempotents in Finite Commutative Rings and Applications
Mugurel Barcau, Vicentiu Pasol
In this paper, we compute an algebraic decomposition of blackbox rings in the generic ring model. More precisely, we explicitly decompose a black-box ring as a direct product of a nilpotent black-box ring and local Artinian black-box rings, by computing all its primitive idempotents. The algorithm presented in this paper uses quantum subroutines for the computation of the p-power parts of a black-box ring and then classical algorithms for the computation of the corresponding primitive idempotents. As a by-product, we get that the reduction of a black-box ring is also a black-box ring. The first application of this decomposition is an extension of the work of Maurer and Raub [26] on the representation problem in black-box finite fields to the case of reduced p-power black-box rings. Another important application is an IND-CCA1 attack for any ring homomorphic encryption scheme in the generic ring model. Moreover, when the plaintext space is a finite reduced black-box ring, we present a plaintext-recovery attack based on the representation problem in black-box prime fields. In particular, if the ciphertext space has smooth characteristic, the plaintext-recovery attack is effectively computable in the generic ring model.
Last updated:  2019-06-02
On Noncommutative Cryptography and homomorphism of stable cubical multivariate transformation groups of infinite dimensional affine spaces
V. Ustimenko, M. Klisowski
Noncommutative cryptography is based on applications of algebraic structures like noncommutative groups, semigroups and noncommutative rings. Its intersection with multivariate cryptography contains studies of cryptographic applications of subsemigroups and subgroups of affine Cremona semigroups defined over finite commutative rings. Efficiently computable homomorphisms between stable subsemigroups of affine Cremona semigroups can be used in tame homomorphism protocol schemes and their inverse versions. An implementation scheme with a sequence of subgroups of the affine Cremona group, which defines a projective limit, was already suggested. We present the implementation of another scheme, which uses two projective limits defining two different infinite groups and a homomorphism between them. The security of the corresponding algorithm is based on the complexity of decomposing an element of an affine Cremona semigroup into a product of given generators. These algorithms may be used in post-quantum technologies.
Last updated:  2019-06-02
Statistical Analysis and Anonymity of TOR's Path Selection
Andrei Mogage, Emil Simion
Tor is a network based on the onion routing infrastructure and provides many advantages, including tracking avoidance, research, wider access and, unfortunately, illegal activities. To achieve this, the client connects to a Tor circuit consisting of nodes chosen under certain restrictions. The purpose of this paper is to draw attention to the narrow range of available, constraint-obedient nodes. This is of interest because it impacts the anonymity and the privacy of users and their internet traffic.
Last updated:  2019-05-30
Simulating Homomorphic Evaluation of Deep Learning Predictions
Christina Boura, Nicolas Gama, Mariya Georgieva, Dimitar Jetchev
Convolutional neural networks (CNNs) are a category of deep neural networks that are primarily used for classifying image data. Yet, their continuous gain in popularity poses important privacy concerns for the potentially sensitive data that they process. A solution to this problem is to combine CNNs with Fully Homomorphic Encryption (FHE) techniques. In this work, we study this approach by focusing on two popular FHE schemes, TFHE and HEAAN, that can work in the approximated computational model. We start by providing an analysis of the noise after each principal homomorphic operation, i.e. multiplication, linear combination, rotation and bootstrapping. Then, we provide a theoretical study of how the most important non-linear operations of a CNN (i.e. max, Abs, ReLU) can be evaluated in each scheme. Finally, we measure via practical experiments on the plaintext the robustness of different neural networks against perturbations of their internal weights that could potentially result from the propagation of large homomorphic noise. This allows us to simulate homomorphic evaluations with large amounts of noise and to predict the effect on the classification accuracy without a real evaluation of heavy and time-consuming homomorphic operations. In addition, this approach enables us to correctly choose smaller and more efficient parameter sets for both schemes.
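A toy sketch of the simulation idea described above (assumptions: synthetic data and a plain logistic regression instead of a CNN, with Gaussian weight perturbations as a stand-in for accumulated homomorphic noise; this is not the authors' experimental setup).

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic binary classification data.
    n, d = 2000, 20
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)

    # Train a logistic regression with plain gradient descent.
    w = np.zeros(d)
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
        w -= 0.1 * (X.T @ (p - y)) / n

    def accuracy(weights):
        return np.mean(((X @ weights) > 0) == (y > 0.5))

    print("clean accuracy:", accuracy(w))

    # Perturb the weights with increasing Gaussian noise and watch accuracy degrade,
    # mimicking the propagation of large homomorphic noise through the model.
    for sigma in [0.0, 0.1, 0.5, 1.0, 2.0]:
        noisy = w + sigma * rng.normal(size=d)
        print(f"sigma={sigma:4.1f}  accuracy={accuracy(noisy):.3f}")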
Last updated:  2019-09-20
Tighter proofs of CCA security in the quantum random oracle model
Nina Bindel, Mike Hamburg, Kathrin Hövelmanns, Andreas Hülsing, Edoardo Persichetti
[Modified slightly because MathJax doesn't render $U^{notbot}$ correctly] We revisit the construction of IND-CCA secure key encapsulation mechanisms (KEM) from public-key encryption schemes (PKE). We give new, tighter security reductions for several constructions. Our main result is a tight reduction for the security of the $U^{notbot}$-transform of Hofheinz, Hövelmanns, and Kiltz (TCC'17) which turns OW-CPA secure deterministic PKEs into IND-CCA secure KEMs. This result is enabled by a new one-way to hiding (O2H) lemma which gives a tighter bound than previous O2H lemmas in certain settings and might be of independent interest. We extend this result also to the case of PKEs with non-zero decryption failure probability, partially non-injective PKEs, and non-deterministic PKEs. In addition, we analyze the impact of different variations of the $U^{notbot}$-transform discussed in the literature on the security of the final scheme. We consider the difference between explicit and implicit rejection, proving that security of the former implies security of the latter. We show that the opposite direction holds if the scheme with explicit rejection also uses key confirmation. Finally, we prove that (at least from a theoretic point of view) security is independent of whether the session keys are derived from message and ciphertext or just from the message.
Last updated:  2021-03-23
A${^2}$L: Anonymous Atomic Locks for Scalability in Payment Channel Hubs
Erkan Tairi, Pedro Moreno-Sanchez, Matteo Maffei
Payment channel hubs (PCHs) constitute a promising solution to the inherent scalability problems of blockchain technologies, allowing for off-chain payments between sender and receiver through an intermediary, called the tumbler. While state-of-the-art PCHs provide security and privacy guarantees against a malicious tumbler, they do so by relying on the scripting-based functionality available in only a few cryptocurrencies, and they thus fall short of fundamental properties such as backwards compatibility and efficiency. In this work, we present the first PCH protocol to achieve all aforementioned properties. Our PCH builds upon A${^2}$L, a novel cryptographic primitive that realizes a three-party protocol for conditional transactions, where the tumbler pays the receiver only if the latter solves a cryptographic challenge with the help of the sender, which implies the sender has paid the tumbler. We prove the security and privacy guarantees of A${^2}$L (which carry over to our PCH construction) in the Universal Composability framework and present a provably secure instantiation based on adaptor signatures and randomizable puzzles. We implemented A${^2}$L and compared it to TumbleBit, the state-of-the-art Bitcoin-compatible PCH. Asymptotically, A${^2}$L has a communication complexity that is constant, as opposed to linear in the security parameter like in TumbleBit. In practice, A${^2}$L requires $\sim33$x less bandwidth than TumbleBit, while retaining the computational cost (or providing $2$x speedup with a preprocessing technique). This demonstrates that A${^2}$L (and thus our PCH construction) is ready to be deployed today. In theory, we demonstrate for the first time that it is possible to design a secure and privacy-preserving PCH while requiring only digital signatures and timelock functionality from the underlying scripting language. In practice, this result makes our PCH backwards compatible with virtually all cryptocurrencies available today, even those offering a highly restricted form of scripting language such as Ripple or Stellar. The practical appeal of our construction has resulted in a proof-of-concept implementation in the COMIT Network, a blockchain technology focused on cross-currency payments.
Last updated:  2019-05-30
Formal Notions of Security for Verifiable Homomorphic Encryption
Jakub Klemsa, Ivana Trummová
Homomorphic encryption enables computations with encrypted data; however, in its plain form, it does not guarantee that the computation has been performed honestly. For Fully Homomorphic Encryption (FHE), a verifiable variant emerged soon after the introduction of FHE itself; for single-operation homomorphic encryption (HE), a particular verifiable variant, called the VeraGreg Framework, has been introduced recently. In this paper, we identify a weakness of List Non-Malleability as defined for the VeraGreg Framework—an analogy to classical Non-Malleability—and suggest an improvement which we show cannot be extended any further in a certain sense. Next, we suggest a decomposition of the abstract VeraGreg framework, introduce novel notions of security for the resulting components and show some reductions between them and/or their combinations. Finally, we conjecture that VeraGreg achieves the strongest (and desirable) security guarantee if and only if its building blocks achieve certain, much more tangible properties, in a specific case together with an assumption on the hardness of a particular kind of the famous Shortest Vector Problem for lattices.
Last updated:  2021-01-18
Polygraph: Accountable Byzantine Agreement
Pierre Civit, Seth Gilbert, Vincent Gramoli
In this paper, we introduce \emph{Polygraph}, the first accountable Byzantine consensus algorithm. If among $n$ users $t<n/3$ are malicious then it ensures consensus; otherwise (if $t \geq n/3$), it eventually detects malicious users that cause disagreement. Polygraph is appealing for blockchain applications as it allows them to totally order blocks in a chain whenever possible, hence avoiding forks and double spending and, otherwise, to punish (e.g., via slashing) at least $n/3$ malicious users when a fork occurs. This problem is more difficult than perhaps it first appears. One could try identifying malicious senders by extending classic Byzantine consensus algorithms to piggyback signed messages. We show however that to achieve accountability the resulting algorithms would then need to exchange $\Omega(\kappa \cdot n^2)$ more bits, where $\kappa$ is the security parameter of the signature scheme. By contrast, Polygraph has communication complexity $O(\kappa \cdot n^4)$. Finally, we implement Polygraph in a blockchain committing more than 10,000\,TPS when deployed on 80 geodistributed machines.
Last updated:  2022-11-06
Simulation-Extractable zk-SNARK with a Single Verification
Jihye Kim, Jiwon Lee, Hyunok Oh
This revised paper improves the previous simulation-extractable zk-SNARK (SE-SNARK) in terms of performance and security. It removes the G_2 operation in verification, without degrading performance or size, and analyzes the nested hash collision more deeply to strengthen the security. The simulation-extractable zk-SNARK (SE-SNARK) introduces a security notion of non-malleability. The existing pairing-based zk-SNARKs designed from linear encodings are known to be vulnerable to algebraic manipulation of the proof. The latest SE-SNARKs check the proof consistency by increasing the proof size and the verification cost. In particular, the number of pairings almost doubles due to the additional verification. In this paper, we propose two novel SE-SNARK constructions with a single verification. The consistency check is subsumed in a single verification by employing a hash function. The proof size and verification time of the proposed SE-SNARK schemes are minimal in that they are the same as in the state-of-the-art zk-SNARK without non-malleability. The proof in our SE-SNARK constructions comprises only three group elements (type III) in the QAP-based scheme and two group elements (type I) in the SAP-based scheme. Verification in both requires only 3 pairings. The soundness of the proposed schemes is proven under the hash-algebraic knowledge (HAK) assumption and the collision-resistant hash assumption.
Last updated:  2019-05-30
On Misuse of Nonce-Misuse Resistance: Adapting Differential Fault Attacks on (few) CAESAR Winners
Mustafa Khairallah, Shivam Bhasin, Anupam Chattopadhyay
In this paper, we study DFA attacks on some of the CAESAR competition winners. We study the challenges imposed by the design of these modes, such as masking of the ciphertext. We also show that only a very small number of nonce repetitions and faults is required, which makes the attacks very practical. We show that OCB and COLM need only 1 nonce repetition and 3 faults to uniquely identify the key.
Last updated:  2019-12-06
2-threshold Ideal Secret Sharing Schemes Can Be Uniquely Modeled by Latin Squares
Lintao Liu, Xuehu Yan, Yuliang Lu, Huaixi Wang
In a secret sharing scheme, a secret value is encrypted into several shares, which are distributed among the corresponding participants. It is required that only predefined subsets of participants can reconstruct the secret from their shares. The general model for secret sharing schemes is provided in different forms in order to study the essential properties of secret sharing schemes. Considering that it is difficult to directly construct secret sharing schemes meeting the requirements of the general model, most current theoretical research relies on other mathematical tools, such as matroids. However, these models can only handle values in a finite field. In this paper, we first establish a one-to-one mapping between Latin squares and 2-threshold secret sharing schemes. Afterwards, we utilize properties of Latin squares to further give an exact characterization of the general model of 2-threshold ideal secret sharing schemes. Furthermore, several interesting properties of 2-threshold ideal schemes are provided, which are not obtained by any other means, especially for nonlinear schemes over an arbitrary integer domain.
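A toy illustration of one direction of this correspondence (a single concrete instance, not the paper's general characterization): any Latin square of order n yields a 2-out-of-2 ideal secret sharing scheme over {0, ..., n-1}. Here the Latin square is simply the addition table of Z_n.

    import secrets

    n = 7
    L = [[(r + c) % n for c in range(n)] for r in range(n)]   # a Latin square of order n

    def share(s):
        r = secrets.randbelow(n)            # share of participant 1 (a uniform row index)
        c = L[r].index(s)                   # share of participant 2 (the matching column)
        return r, c

    def reconstruct(r, c):
        return L[r][c]

    s = 4
    r, c = share(s)
    assert reconstruct(r, c) == s
    # Each share alone is uniformly distributed over {0,...,n-1}, so it reveals nothing
    # about s; both shares together pinpoint s = L[r][c].
    print("shares:", (r, c), "reconstructed:", reconstruct(r, c))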
Last updated:  2019-08-30
Atomic Multi-Channel Updates with Constant Collateral in Bitcoin-Compatible Payment-Channel Networks
Christoph Egger, Pedro Moreno-Sanchez, Matteo Maffei
Current cryptocurrencies provide a heavily limited transaction throughput that is clearly insufficient to cater to their growing adoption. Payment-channel networks (PCNs) have emerged as an interesting solution to the scalability issue and are currently deployed by popular cryptocurrencies such as Bitcoin and Ethereum. While PCNs do increase the transaction throughput by processing payments off-chain and using the blockchain only as a dispute arbitrator, they unfortunately require high collateral (i.e., they lock coins for a non-constant time along the payment path) and are restricted to payments in a path from sender to receiver. These issues have severe consequences in practice. The high collateral enables denial-of-service attacks that hamper the throughput and utility of the PCN. Moreover, the limited functionality hinders the applicability of current PCNs in many important application scenarios. Unfortunately, current proposals do not solve either of these issues, or they require Turing-complete language support, which severely limits their applicability. In this work, we present AMCU, the first protocol for atomic multi-channel updates and reduced collateral that is compatible with Bitcoin (and other cryptocurrencies with reduced scripting capabilities). We provide a formal model in the Universal Composability framework and show that AMCU realizes it, thus demonstrating that AMCU achieves atomicity and value privacy. Moreover, the reduced collateral mitigates the consequences of griefing attacks in PCNs while the (multi-payment) atomicity achieved by AMCU opens the door to new applications such as credit rebalancing and crowdfunding that are not possible otherwise. Finally, our evaluation results demonstrate that AMCU has a performance in line with that of the Lightning Network (the most widely deployed PCN) and thus is ready to be deployed in practice.
Last updated:  2019-05-30
EasyUC: Using EasyCrypt to Mechanize Proofs of Universally Composable Security
Ran Canetti, Alley Stoughton, Mayank Varia
We present a methodology for using the EasyCrypt proof assistant (originally designed for mechanizing the generation of proofs of game-based security of cryptographic schemes and protocols) to mechanize proofs of security of cryptographic protocols within the universally composable (UC) security framework. This allows, for the first time, the mechanization and formal verification of the entire sequence of steps needed for proving simulation-based security in a modular way: * Specifying a protocol and the desired ideal functionality. * Constructing a simulator and demonstrating its validity, via reduction to hard computational problems. * Invoking the universal composition operation and demonstrating that it indeed preserves security. We demonstrate our methodology on a simple example: stating and proving the security of secure message communication via a one-time pad, where the key comes from a Diffie-Hellman key-exchange, assuming ideally authenticated communication. We first put together EasyCrypt-verified proofs that: (a) the Diffie-Hellman protocol UC-realizes an ideal key-exchange functionality, assuming hardness of the Decisional Diffie-Hellman problem, and (b) one-time-pad encryption, with a key obtained using ideal key-exchange, UC-realizes an ideal secure-communication functionality. We then mechanically combine the two proofs into an EasyCrypt-verified proof that the composed protocol realizes the same ideal secure-communication functionality. Although formulating a methodology that is both sound and workable has proven to be a complex task, we are hopeful that it will prove to be the basis for mechanized UC security analyses for significantly more complex protocols.
Last updated:  2020-04-22
--Withdrawn--
Last updated:  2020-04-09
Omniring: Scaling Up Private Payments Without Trusted Setup - Formal Foundations and Constructions of Ring Confidential Transactions with Log-size Proofs
Russell W. F. Lai, Viktoria Ronge, Tim Ruffing, Dominique Schröder, Sri Aravinda Krishnan Thyagarajan, Jiafan Wang
Monero is the largest cryptocurrency with built-in cryptographic privacy features. The transactions are authenticated using spend proofs, which provide a certain level of anonymity by hiding the source accounts from which the funds are sent among a set (known as a ring) of other accounts. Due to its similarities to ring signatures, this core cryptographic component is called Ring Confidential Transactions (RingCT). Because of its practical relevance, several works attempt to analyze the security of RingCT. However, due to the complexity of RingCT they are either informal, miss fundamental functionalities, or introduce undesirable trusted setup assumptions. Regarding efficiency, Monero currently deploys a scheme in which the size of the spend proof is linear in the ring size. This limits the ring size to only a few accounts, which in turn limits the acquired anonymity significantly and facilitates de-anonymization attacks. As a solution to these problems, we present the first complete rigorous formalization of RingCT as a cryptographic primitive. We then propose a generic construction of RingCT and prove it secure in our formal security model. By instantiating our generic construction with new efficient zero-knowledge proofs we obtain Omniring, a fully-fledged RingCT scheme in the discrete logarithm setting that provides the highest concrete and asymptotic efficiency as of today. Omniring is the first RingCT scheme which 1) does not require a trusted setup or pairing-friendly elliptic curves, 2) has a proof size logarithmic in the size of the ring, and 3) allows sharing the same ring between all source accounts in a transaction, thereby enabling a significantly improved privacy level without sacrificing performance. Our zero-knowledge proofs rely on novel enhancements to the Bulletproofs framework (S&P 2018), which we believe are of independent interest.
Last updated:  2019-05-28
BlockQuick: Super-Light Client Protocol for Blockchain Validation on Constrained Devices
Dominic Letz
Today server authentication is largely handled through Public Key Infrastructure (PKI) in both the private and the public sector. PKI is established as the de facto standard for Internet communication through the world wide web and its usage in HTTPS and SSL/TLS (Web PKI). However, in its application to Internet of Things (IoT) devices, using Web PKI infrastructure for server authentication has several shortcomings, including issues with validity periods, identity, revocation practice, and governance. Recently, different approaches to decentralized PKI (DPKI) using Blockchain technology have been proposed, but so far they have lacked practicality in their application to devices commonly used in IoT deployments. The approaches are too resource intensive for IoT devices to handle, and even the “light client” protocols have not been resource efficient enough to be practical. We present BlockQuick, a novel protocol for a super-light client, which features reading blockchain data securely from a remote client. BlockQuick requires less data for validation than existing approaches, like PoPoW or FlyClient, while also providing effective means to protect against eclipse and MITM attacks on the network. BlockQuick clients have low kilobyte RAM requirements, which are optimal for IoT devices and applications with embedded MCUs.
Last updated:  2019-05-28
Deep Learning based Side Channel Attacks in Practice
Houssem Maghrebi
A recent line of research has investigated a new profiling technique based on deep learning as an alternative to the well-known template attack. The advantage of this new profiling approach is twofold: $(1)$ the approximation of the information leakage by a multivariate Gaussian distribution is relaxed (leading to a more generic approach) and $(2)$ the pre-processing phases such as trace realignment or the selection of the Points of Interest (PoI) are no longer mandatory, in some cases, to succeed in the key recovery (leading to a less complex security evaluation roadmap). The related published works have demonstrated that Deep Learning based Side-Channel Attacks (DL-SCA) are very efficient when targeting cryptographic implementations protected with the common side-channel countermeasures such as masking, jitter and random delays insertion. In this paper, we assess the efficiency of this new profiling attack under different realistic and practical scenarios. First, we study the impact of the intrinsic characteristics of the manipulated data-set (\emph{i.e.} distance in time samples between the PoI, the dimensionality of the area of interest and the pre-processing of the data) on the robustness of the attack. We demonstrate that the deep learning techniques are sensitive to these parameters and we suggest some practical recommendations that can be followed to enhance the profiling and the key recovery phases. Second, we discuss the tolerance of DL-SCA with respect to a deviation from the idealized leakage models and provide a comparison with the well-known stochastic attack. Our results show that DL-SCA are still efficient in such a context. Then, we target a more complex masking scheme based on Shamir's secret sharing and prove that this new profiling approach is still performing well. Finally, we conduct a security evaluation of a batch of several combinations of side-channel protections using simulations and real traces captured on the ChipWhisperer board. The experimental results obtained confirm that DL-SCA are very efficient even when a cryptographic implementation combines several side-channel countermeasures.
Last updated:  2019-10-29
Improved Multiplication Triple Generation over Rings via RLWE-based AHE
Deevashwer Rathee, Thomas Schneider, K. K. Shukla
An important characteristic of recent MPC protocols is an input-independent setup phase in which most computations are offloaded, which greatly reduces the execution overhead of the online phase where parties provide their inputs. For a very efficient evaluation of arithmetic circuits in an information-theoretic online phase, the MPC protocols consume Beaver multiplication triples generated in the setup phase. Triple generation is generally the most expensive part of the protocol, and improving its efficiency is the aim of our work. We specifically focus on computation over rings of the form $Z_{2^\ell}$ in the semi-honest model and the two-party setting, for which an Oblivious Transfer (OT)-based protocol is the currently best solution. To improve upon this method, we propose a protocol based on RLWE-based Additively Homomorphic Encryption. Our experiments show that our protocol is more scalable, and it outperforms the OT-based protocol in most cases. For example, we improve communication by up to 6.9x and runtime by up to 3.6x for 64-bit triple generation.
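The following minimal sketch (added for illustration) shows how a Beaver triple produced in the setup phase is consumed in the online phase to multiply additively shared values over Z_{2^64} (two parties, semi-honest). The paper is about generating the triple (a, b, c = a*b) more efficiently; the generation itself is not shown here.

    import secrets

    MOD = 1 << 64
    rand = lambda: secrets.randbelow(MOD)

    def share(v):                       # additive secret sharing over Z_{2^64}
        s0 = rand()
        return s0, (v - s0) % MOD

    def reconstruct(s0, s1):
        return (s0 + s1) % MOD

    # Setup phase: a correlated triple a, b, c with c = a*b mod 2^64, handed out as shares.
    a, b = rand(), rand()
    c = (a * b) % MOD
    a0, a1 = share(a)
    b0, b1 = share(b)
    c0, c1 = share(c)

    # Online phase: parties hold shares of x and y and want shares of x*y.
    x, y = 123456789, 987654321
    x0, x1 = share(x)
    y0, y1 = share(y)

    # Both parties open their local differences; d = x - a and e = y - b become public
    # (this leaks nothing about x, y because a, b are uniformly random one-time masks).
    d = (x0 - a0 + x1 - a1) % MOD
    e = (y0 - b0 + y1 - b1) % MOD

    # Each party computes its share of x*y locally; party 0 additionally adds d*e.
    z0 = (d * e + d * b0 + e * a0 + c0) % MOD
    z1 = (d * b1 + e * a1 + c1) % MOD
    assert reconstruct(z0, z1) == (x * y) % MOD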
Last updated:  2020-04-28
On Group-Characterizability of Homomorphic Secret Sharing Schemes
Reza Kaboli, Shahram Khazaei, Maghsoud Parviz
A group-characterizable (GC) random variable is induced by a finite group, called main group, and a collection of its subgroups [Chan and Yeung 2002]. The notion extends directly to secret sharing schemes (SSS). It is known that multi-linear SSSs can be equivalently described in terms of GC ones. The proof extends to abelian SSSs, a more powerful generalization of multi-linear schemes, in a straightforward way. Both proofs are fairly easy considering the notion of dual for vector spaces and Pontryagin dual for abelian groups. However, group-characterizability of homomorphic SSSs (HSSSs), which are generalizations of abelian schemes, is non-trivial, and thus the main focus of this paper. We present a necessary and sufficient condition for a SSS to be equivalent to a GC one. Then, we use this result to show that HSSSs satisfy the sufficient condition, and consequently they are GC. Then, we strengthen this result by showing that a group-characterization can be found in which the subgroups are all normal in the main group. On the other hand, GC SSSs whose subgroups are normal in the main group can easily be shown to be homomorphic. Therefore, we essentially provide an equivalent characterization of HSSSs in terms of GC schemes. We also present two applications of our equivalent definition for HSSSs. One concerns lower bounding the information ratio of access structures for the class of HSSSs, and the other is about the coincidence between statistical, almost-perfect and perfect security notions for the same class.
Last updated:  2020-02-26
On Abelian and Homomorphic Secret Sharing Schemes
Amir Jafari, Shahram Khazaei
Abelian secret sharing schemes (SSS) are generalization of multi-linear SSS and similar to them, abelian schemes are homomorphic. There are numerous results on linear and multi-linear SSSs in the literature and a few ones on homomorphic SSSs too. Nevertheless, the abelian schemes have not taken that much attention. We present three main results on abelian and homomorphic SSSs in this paper: (1) abelian schemes are more powerful than multi-linear schemes (we achieve a constant factor improvement), (2) the information ratio of dual access structures are the same for the class of abelian schemes, and (3) every ideal homomorphic scheme can be transformed into an ideal multi-linear scheme with the same access structure. Our results on abelian and homomorphic SSSs have been motivated by the following concerns and questions. All known linear rank inequities have been derived using the so-called common information property of random variables [Dougherty, Freiling and Zeger, 2009], and it is an open problem that if common information is complete for deriving all such inequalities (Q1). The common information property has also been used in linear programming to find lower bounds for the information ratio of access structures [Farràs, Kaced, Molleví and Padró, 2018] and it is an open problem that if the method is complete for finding the optimal information ratio for the class of multi-linear schemes (Q2). Also, it was realized by the latter authors that the obtained lower bound does not have a good behavior with respect to duality and it is an open problem that if this behavior is inherent to their method (Q3). Our first result provides a negative answer to Q2. Even though, we are not able to completely answer Q1 and Q3, we have some observations about them.
Last updated:  2019-06-05
Subliminal channels in post-quantum digital signature schemes
Herman Galteland, Kristian Gjøsteen
We analyze the digital signatures schemes submitted to NIST's Post-Quantum Cryptography Standardization Project in search for subliminal channels.
Last updated:  2019-11-21
Security of the Suffix Keyed Sponge
Christoph Dobraunig, Bart Mennink
We formalize and analyze the general suffix keyed sponge construction, a pseudorandom function built on top of a cryptographic permutation. The construction hashes its data using the (keyless) sponge construction, transforms part of the state using the secret key, and generates the tag from the output of a final permutation call. In its simplest form, if the key and tag size are at most the rate of the sponge, one can see the suffix keyed sponge as a simple sponge function evaluation whose input is the plaintext appended with the key. The suffix keyed sponge is, however, much more general: the key and tag size may exceed the rate without any need to make extra permutation calls. We prove that the suffix keyed sponge construction achieves birthday-bound PRF security in the capacity, even if key and tag size exceed the rate. Furthermore, we prove that if the absorption of the key into the state happens in a leakage resilient manner, the suffix keyed sponge itself is leakage resilient as well. Our findings show that the suffix keyed sponge compares favorably with the hash-then-MAC construction. For instance, to reach a security level of $k$ bits, the side-channel protected component in the suffix keyed sponge just needs to process $k$ bits of input besides the key, whereas schemes following the hash-then-MAC construction need a side-channel protected MAC function that processes $2k$ bits of input besides the key. Moreover, even if we just consider black-box attacks, the MAC function in a hash-then-MAC scheme needs to be cryptographically strong whereas in the suffix keyed sponge the key may be absorbed by a simple XOR. The security proofs are performed using the H-coefficient technique, and make effective use of the multicollision limit function results of Daemen et al. (ASIACRYPT 2017), both for arguing that state manipulation larger than the rate is tolerated after key processing and for upper bounding the amount of leakage an attacker may gain about the secret key.
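A toy rendering of the data flow described above (added for illustration): absorb the message with a keyless sponge, XOR the key into the state at the end, make one final permutation call, and output the tag. The permutation, padding and sizes below are placeholders, not a real instance of the construction analyzed in the paper.

    import hashlib

    RATE, CAPACITY = 16, 16                  # bytes; toy parameters
    WIDTH = RATE + CAPACITY

    def perm(state: bytes) -> bytes:
        # Stand-in "public permutation" (not actually a permutation; fine for
        # illustrating the data flow only).
        return hashlib.shake_256(state).digest(WIDTH)

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def suffix_keyed_sponge(key: bytes, msg: bytes, taglen: int = 16) -> bytes:
        state = bytes(WIDTH)
        msg += b"\x80" + bytes((-len(msg) - 1) % RATE)      # simple padding
        for i in range(0, len(msg), RATE):                  # keyless absorption of the data
            state = perm(xor(state, msg[i:i + RATE] + bytes(CAPACITY)))
        state = xor(state, key.ljust(WIDTH, b"\x00"))       # key absorbed last ("suffix")
        return perm(state)[:taglen]                         # tag from one final call

    print(suffix_keyed_sponge(b"sixteen-byte-key", b"hello world").hex())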
Last updated:  2019-05-27
On the Commitment Capacity of Unfair Noisy Channels
Claude Crépeau, Rafael Dowsley, Anderson C. A. Nascimento
Noisy channels are a valuable resource from a cryptographic point of view. They can be used for exchanging secret-keys as well as realizing other cryptographic primitives such as commitment and oblivious transfer. To be really useful, noisy channels have to be considered in the scenario where a cheating party has some degree of control over the channel characteristics. Damgård et al. (EUROCRYPT 1999) proposed a more realistic model where such a level of control is permitted to an adversary, the so-called unfair noisy channels, and proved that they can be used to obtain commitment and oblivious transfer protocols. Given that noisy channels are a precious resource for cryptographic purposes, one important question is determining the optimal rate at which they can be used. The commitment capacity has already been determined for the cases of discrete memoryless channels and Gaussian channels. In this work we address the problem of determining the commitment capacity of unfair noisy channels. We compute a single-letter characterization of the commitment capacity of unfair noisy channels. In the case where an adversary has no control over the channel (the fair case) our capacity reduces to the well-known capacity of a discrete memoryless binary symmetric channel.
Last updated:  2019-05-27
Multi-Party Virtual State Channels
Stefan Dziembowski, Lisa Eckey, Sebastian Faust, Julia Hesse, Kristina Hostáková
Smart contracts are self-executing agreements written in program code and are envisioned to be one of the main applications of blockchain technology. While they are supported by prominent cryptocurrencies such as Ethereum, their further adoption is hindered by fundamental scalability challenges. For instance, in Ethereum contract execution suffers from a latency of more than 15 seconds, and the total number of contracts that can be executed per second is very limited. State channel networks are one of the core primitives aiming to address these challenges. They form a second layer over the slow and expensive blockchain, thereby enabling instantaneous contract processing at negligible costs. In this work we present the first complete description of a state channel network that exhibits the following key features. First, it supports virtual multi-party state channels, i.e. state channels that can be created and closed without blockchain interaction and that allow contracts with any number of parties. Second, the worst case time complexity of our protocol is constant for arbitrary complex channels. This is in contrast to the existing virtual state channel construction that has worst case time complexity linear in the number of involved parties. In addition to our new construction, we provide a comprehensive model for the modular design.
Last updated:  2019-06-15
Bias-variance Decomposition in Machine Learning-based Side-channel Analysis
Daan van der Valk, Stjepan Picek
Machine learning techniques represent a powerful option in profiling side-channel analysis. Still, there are many settings where their performance is far from expected. In such occasions, it is very important to understand the difficulty of the problem and the behavior of the machine learning algorithm. To that end, one needs to investigate not only the performance of machine learning but also to provide insights into its explainability. One tool enabling us to do this is the bias-variance decomposition where we are able to decompose the predictive error into bias, variance, and noise. With this technique, we can analyze various scenarios and recognize what are the sources of problem difficulty and how additional measurements/features or more complex machine learning models can alleviate the problem. While such results are promising, there are still drawbacks since often it is not easy to connect the performance of side-channel attack and performance of a machine learning classifier as given by the bias-variance decomposition. In this paper, we propose a new tool for analyzing the performance of machine learning-based side-channel attacks -- the Guessing Entropy Bias-Variance Decomposition. With it, we are able to better understand the performance of various machine learning techniques and understand how a change in a setting influences the performance of an attack. To validate our claims, we give extensive experimental results for a number of different settings.
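A generic sketch of estimating a bias-variance decomposition empirically (squared loss, ridge regressors trained on independently drawn noisy training sets, synthetic data). This is the textbook decomposition, not the guessing-entropy variant proposed in the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    d, n_train, n_test, n_models = 10, 50, 500, 200

    w_true = rng.normal(size=d)
    X_test = rng.normal(size=(n_test, d))
    f_true = X_test @ w_true                              # noiseless target on the test set

    def fit_ridge(X, y, lam=1.0):
        return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

    preds = np.empty((n_models, n_test))
    for m in range(n_models):                             # each model sees a fresh noisy training set
        X = rng.normal(size=(n_train, d))
        y = X @ w_true + rng.normal(scale=2.0, size=n_train)
        preds[m] = X_test @ fit_ridge(X, y)

    mean_pred = preds.mean(axis=0)
    bias2    = np.mean((mean_pred - f_true) ** 2)         # (squared) bias of the average model
    variance = np.mean(preds.var(axis=0))                 # spread of the models around their mean
    error    = np.mean((preds - f_true) ** 2)             # expected squared error (no label noise here)

    print(f"bias^2={bias2:.3f}  variance={variance:.3f}  "
          f"bias^2+variance={bias2 + variance:.3f}  error={error:.3f}")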
Last updated:  2020-09-16
Lattice RingCT v2.0 with Multiple Input and Output Wallets
Wilson Alberto Torres, Veronika Kuchta, Ron Steinfeld, Amin Sakzad, Joseph K. Liu, Jacob Cheng
This paper presents the Lattice-based Ring Confidential Transactions (Lattice RingCT v2.0) protocol. Unlike the previous Lattice RingCT v1.0 (LRCT v1.0) protocol, the new protocol supports Multiple-Input and Multiple-Output (MIMO) wallets in transactions, and it is a fully functional protocol construction for cryptocurrency applications such as Hcash. Since the MIMO cryptocurrency setting introduces new balance security requirements (and in particular, security against (out-of-range) amount attacks), we give a refined balance security model to capture such attacks, as well as a refined anonymity model to capture amount privacy attacks. Our protocol extends a previously proposed ring signature scheme in the LRCT v1.0 protocol, to support the MIMO requirements while preserving the post-quantum security guarantees, and uses a lattice-based zero-knowledge range proof to achieve security against (out-of-range) attacks. Preliminary parameter estimates and signature sizes are proposed as a point of reference for future studies.
Last updated:  2019-05-27
Post Quantum ECC on FPGA Platform
Debapriya Basu Roy, Debdeep Mukhopadhyay
Post-quantum cryptography has gathered significant attention in recent times due to the NIST call for standardization of quantum-resistant public key algorithms. In that context, the supersingular isogeny based key exchange algorithm (SIKE) has emerged as a potential candidate to replace traditional public key algorithms like RSA and ECC. SIKE provides $\mathbf{O(\sqrt[4]{p})}$ classical security and $\mathbf{O(\sqrt[6]{p})}$ quantum security where $p$ is the characteristic of the underlying field. Additionally, SIKE has the smallest key sizes among all post-quantum public-key algorithms, making it very suitable for bandwidth-constrained environments. In this paper, we present an efficient implementation of the SIKE protocol for FPGA based applications. The proposed architecture provides the same latency as that of the best existing implementation of the SIKE protocol while consuming $48\%$ fewer DSPs and $58\%$ fewer block RAM resources. Thus, our design is substantially more efficient compared to existing implementations of SIKE.
Last updated:  2019-05-27
(Linkable) Ring Signature from Hash-Then-One-Way Signature
Xingye Lu, Man Ho Au, Zhenfei Zhang
In this paper, we revisit the generic construction of ring signatures from hash-then-one-way type ($\mathsf{Type-H}$) signatures proposed by Abe et al. (AOS) in 2004 and make the following contributions. First, we give a proof for the generic construction, in a strengthened security model. Previously, this was only done for concrete instantiations, in a weaker model. Second, we extend AOS's framework to generically construct one-time linkable ring signatures from $\mathsf{Type-H}$ signatures and one-time signatures. Lastly, we instantiate the generic construction with an NTRU-based $\mathsf{Type-H}$ signature, Falcon, and obtain a post-quantum linkable ring signature scheme. Our analysis shows that the resulting linkable signature is more efficient than any existing lattice-based solutions for a small to moderate number of users.
Last updated:  2019-09-23
Deep Learning based Model Building Attacks on Arbiter PUF Compositions
Pranesh Santikellur, Aritra Bhattacharyay, Rajat Subhra Chakraborty
Robustness to modeling attacks is an important requirement for PUF circuits. Several reported Arbiter PUF compositions have resisted modeling attacks, and often require huge computational resources for successful modeling. In this paper we present deep feedforward neural network based modeling attacks on 64-bit and 128-bit Arbiter PUFs (APUF), and several other PUFs composed of Arbiter PUFs, namely, XOR APUF, Lightweight Secure PUF (LSPUF), Multiplexer PUF (MPUF) and its variants (cMPUF and rMPUF), and the recently proposed Interpose PUF (IPUF, up to the (4,4)-IPUF configuration). The technique requires no auxiliary information (e.g. side-channel information or reliability information), while employing deep neural networks of relatively low structural complexity to achieve very high modeling accuracy at low computational overhead (compared to previously proposed approaches), and is reasonably robust to error-inflicted training datasets.
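A sketch of a model-building attack on a single Arbiter PUF (added for illustration): the standard additive delay model and parity-feature transform, with a plain logistic regression trained by gradient descent as a lightweight stand-in for the deep feedforward networks used in the paper. All parameters are illustrative.

    import numpy as np

    rng = np.random.default_rng(42)
    n_stages, n_crps = 64, 20000

    w_puf = rng.normal(size=n_stages + 1)                 # hidden delay parameters of the PUF

    def features(challenges):
        # Standard parity transform: phi_i = prod_{j>=i} (1 - 2*c_j), plus a constant 1.
        signs = 1 - 2 * challenges                        # 0/1 -> +1/-1
        phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]
        return np.hstack([phi, np.ones((len(challenges), 1))])

    C = rng.integers(0, 2, size=(n_crps, n_stages))
    Phi = features(C)
    r = (Phi @ w_puf > 0).astype(float)                   # PUF responses (noiseless model)

    # Attacker: fit a model from challenge-response pairs only.
    w = np.zeros(n_stages + 1)
    train, test = slice(0, 15000), slice(15000, None)
    for _ in range(300):
        z = np.clip(Phi[train] @ w, -30, 30)
        p = 1.0 / (1.0 + np.exp(-z))
        w -= 0.5 * (Phi[train].T @ (p - r[train])) / 15000

    acc = np.mean(((Phi[test] @ w) > 0) == (r[test] > 0.5))
    print("prediction accuracy on unseen challenges:", acc)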
Last updated:  2019-05-27
Asymmetric Message Franking: Content Moderation for Metadata-Private End-to-End Encryption
Nirvan Tyagi, Paul Grubbs, Julia Len, Ian Miers, Thomas Ristenpart
Content moderation is crucial for stopping abuse and harassment via messaging on online platforms. Existing moderation mechanisms, such as message franking, require platform providers to see user identifiers on encrypted traffic. These mechanisms cannot be used in messaging systems in which users can hide their identities, such as Signal. The key technical challenge preventing moderation is in simultaneously achieving cryptographic accountability while preserving deniability. In this work, we resolve this tension with a new cryptographic primitive: asymmetric message franking schemes (AMFs). We define strong security notions for AMFs, including the first formal treatment of deniability in moderation settings. We then construct, analyze, and implement an AMF scheme that is fast enough for deployment. We detail how to use AMFs to build content moderation for metadata-private messaging.
Last updated:  2019-08-20
Verification of Authenticated Firmware Load
Sujit Kumar Muduli, Pramod Subramanyan, Sayak Ray
An important primitive for ensuring the security of modern systems-on-chip designs is the protocol for authenticated firmware load. These loaders read a firmware binary image from an untrusted input device, authenticate the image using cryptography and load the image into memory for execution if authentication succeeds. While these protocols are an essential part of the hardware root of trust in almost all modern computing devices, verification techniques for reasoning about end-to-end security of these protocols do not exist. This paper takes a step toward addressing this gap by introducing a system model, an adversary model and an end-to-end security property that enable reasoning about the security of authenticated load protocols. We then present a decomposition of the security hyperproperty into two simpler 2-safety properties that enables more scalable verification. Experiments on a protocol model demonstrate the viability of the methodology.
Last updated:  2019-05-27
ShareLock: Mixing for Cryptocurrencies from Multiparty ECDSA
Omer Shlomovits, István András Seres
Many cryptocurrencies, such as Bitcoin and Ethereum, do not provide any financial privacy to their users. These systems cannot be used as a medium of exchange as long as they are transparent. Therefore the lack of privacy is the largest hurdle for cryptocurrency mass adoption next to scalability issues. Although many privacy-enhancing schemes have already been proposed in the literature, most of them did not gain traction, either due to their complexity or because their adoption would rely on severe changes to the base protocol. To close this gap, in this work we propose ShareLock, a practical privacy-enhancing tool for cryptocurrencies which is deployable on today's cryptocurrency networks.
Last updated:  2020-03-30
Towards More Secure Constructions of Adjustable Join Schemes
Shahram Khazaei, Mojtaba Rafiee
An adjustable join ($\mathsf{Adjoin}$) scheme [Popa-Zeldovich 2012] is a symmetric-key primitive that enables a user to securely outsource his database to a server, and later to issue join queries for a pair of columns. When queries are extended to a list of columns, the $\mathsf{3Partition}$ security of Adjoin schemes [Mironov-Segev-Shahaf 2017] does not capture the expected security. To address this deficiency, we introduce the syntax and security notion of multi-adjustable join ($\mathsf{MAdjoin}$) schemes. We propose a new security notion for this purpose, which we refer to as $\mathsf{M3Partition}$. The $\mathsf{3Partition}$ security of $\mathsf{Adjoin}$ extends to the $\mathsf{M3Partition}$ security of $\mathsf{MAdjoin}$ in a straightforward way. The gap between $\mathsf{3Partition}$ and $\mathsf{M3Partition}$ is filled with a sequence $\{\mathsf{M3P}_k\}_{k\in\mathbb{N}}$ of security definitions where $\mathsf{M3P}_1$ and $\mathsf{M3P}_\infty$, respectively, correspond to $\mathsf{3Partition}$ and $\mathsf{M3Partition}$. We propose constructions for achieving both $\mathsf{M3Partition}$ and $\mathsf{M3P}_k$ security levels. Our $\mathsf{M3Partition}$-secure scheme joins $m$ columns, each containing $n$ elements, in time $\mathcal{O}(n^{m-1})$. Our $\mathsf{M3P}_k$-secure scheme uses ideas from secret sharing in its construction and does the job in time $\mathcal{O}((m-1)n^{k}/k)$ with some leakage that we refer to as $k$-monotonous. It remains open if this barrier is inherent to the security definitions. Our schemes are substantially more efficient than previous ones.
Last updated:  2019-05-25
Faster Bootstrapping of FHE over the integers with large prime message space
Zhizhu Lian, Yupu Hu, Hu Chen, Baocang Wang
Bootstrapping of FHE over the integers with a large message space is an open problem: it requires evaluating the double modulo reduction $(c \bmod p) \bmod Q$ homomorphically for large $Q$. In this paper, we express this double modulo reduction circuit as an arithmetic circuit of degree at most $\theta^2 \log^2\theta/2$, with $O(\theta \log^2\theta)$ multiplication gates, where $\theta= \frac{\lambda}{\log \lambda}$ and $\lambda$ is the security parameter. The complexity of the decryption circuit is independent of the message space size $Q$, under the constraint $Q> \theta \log^2\theta/2$.
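A tiny worked example of the plaintext operation that bootstrapping must evaluate homomorphically (added for illustration; the moduli are toy sizes and the ciphertext form m + Q*noise + p*q is a schematic simplification of FHE over the integers).

    p, Q = 1009, 97            # secret modulus p, message modulus Q (toy sizes)
    m = 42                     # message in Z_Q
    c = m + 5 * Q + 13 * p     # schematic noisy ciphertext: m + Q*noise + p*q
    assert (c % p) % Q == m    # decryption = the double modulo reduction
    print((c % p) % Q)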
Last updated:  2019-05-25
Solutions of $x^{q^k}+\cdots+x^{q}+x=a$ in $GF(2^n)$
Kwang Ho Kim, Jong Hyok Choe, Dok Nam Lee, Dae Song Go, Sihem Mesnager
Though it is well known that the roots of any affine polynomial over a finite field can be computed by solving a system of linear equations using a normal basis of the field, such a solving approach appears to be difficult to apply when the field is fairly large. Thus, it may be of great interest to find an explicit representation of the solutions independent of the basis of the field. This was previously done only for quadratic equations over binary finite fields. This paper gives an explicit representation of the solutions for a much wider class of affine polynomials over binary finite fields.
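For contrast with the explicit representations studied in the paper, the following sketch (added for illustration) shows the classical linear-algebra approach mentioned above: over GF(2^n) the map L(x) = x^{2^k} + ... + x^2 + x is GF(2)-linear, so L(x) = a reduces to a linear system over GF(2). The field GF(2^8) and the AES reduction polynomial are arbitrary choices for the example.

    n, MOD_POLY = 8, 0x11B

    def gf_mul(a, b):                          # carry-less multiply, reduced mod MOD_POLY
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a & (1 << n):
                a ^= MOD_POLY
            b >>= 1
        return r

    def L(x, k):                               # x^{2^k} + ... + x^2 + x
        acc, t = 0, x
        for _ in range(k):
            t = gf_mul(t, t)                   # next Frobenius power x^{2^i}
            acc ^= t
        return acc ^ x

    def solve(a, k):
        # Build the augmented system M x = a over GF(2); column j of M is L(x^j).
        cols = [L(1 << j, k) for j in range(n)]
        rows = []
        for i in range(n):                     # equation for coefficient i
            coeff = 0
            for j in range(n):
                coeff |= ((cols[j] >> i) & 1) << j
            rows.append([coeff, (a >> i) & 1])
        # Gauss-Jordan elimination over GF(2).
        pivot_row_of = {}
        rank = 0
        for c in range(n):
            piv = next((i for i in range(rank, n) if (rows[i][0] >> c) & 1), None)
            if piv is None:
                continue                       # free variable (L need not be injective)
            rows[rank], rows[piv] = rows[piv], rows[rank]
            for i in range(n):
                if i != rank and (rows[i][0] >> c) & 1:
                    rows[i][0] ^= rows[rank][0]
                    rows[i][1] ^= rows[rank][1]
            pivot_row_of[c] = rank
            rank += 1
        x = 0
        for c, pr in pivot_row_of.items():     # free variables are set to 0
            x |= rows[pr][1] << c
        return x if L(x, k) == a else None     # None: no solution for this a

    k, x0 = 3, 0x57
    a = L(x0, k)
    x = solve(a, k)
    print(hex(a), hex(x), L(x, k) == a)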
Last updated:  2019-05-25
Weights on affine subspaces and some other cryptographic characteristics of Boolean functions of 5 variables
Evgeny K. Alekseev, Lyudmila A. Kushchinskaya
Recently a new key recovery method for a filter generator was proposed. It is based on so-called planar approximations of such a generator. This paper contains the numerical part of the research on the properties of Boolean functions which allow the generator to be protected against this method. The main theoretical part of this research is presented at the CTCrypt 2019 conference.
Last updated:  2019-05-25
How to not break SIDH
Chloe Martindale, Lorenz Panny
We give a number of approaches which, to a newcomer, may seem like natural ways to attack the SIDH/SIKE protocol, and explain why each of these approaches seems to fail, at least with the specific setup and parameters of SIKE. Our aim is to save some time for others who are looking to assess the security of SIDH/SIKE. We include methods that fail to attack the pure isogeny problem, namely: looking at the $\mathbb F_p$-subgraph, lifting to characteristic zero, and using Weil restrictions. We also include methods that fail to make use of the public 2-power and 3-power torsion points, namely: interpolation techniques, any purely group-theoretic approaches, and constructing an endomorphism à la Petit to exploit the auxiliary points, but with balanced parameters.
Last updated:  2020-09-20
Extended Galbraith's Test on the Anonymity of IBEs from Higher Residuosity
Xiaopeng Zhao, Zhenfu Cao, Xiaolei Dong, Jun Shao
At PKC 2019, Clear and McGoldrick presented the first identity-based encryption (IBE) scheme that supports homomorphic addition modulo a poly-sized prime $e$. Assuming that deciding solvability of a special system of multivariate polynomial equations is hard, they proved that their scheme for $e>2$ is anonymous. In this paper, we review the classical Galbraith's test on the anonymity of the first pairing-free IBE scheme due to Cocks. Viewing the test through the reciprocity law over $\mathbb{F}_\mathtt{q}[x]$, we gain a deeper understanding of it and naturally extend it to give a practical attack on the anonymity of the Clear-McGoldrick IBE scheme. Furthermore, we believe that our technique plays a crucial role in anonymizing IBE schemes from higher residuosity.
Last updated:  2019-05-25
When Encryption is Not Enough -- Effective Concealment of Communication Pattern, even Existence (BitGrey, BitLoop)
Gideon Samid
How much we say, to whom, and when, is inherently telling, even if the contents of our communication are unclear. In other words: encryption is not enough; neither to secure privacy, nor to maintain confidentiality. Years ago Adi Shamir already predicted that encryption would be bypassed. And it has. The modern dweller of cyber space is routinely violated via her data behavior. Also, an adversary often has the power to compel release of cryptographic keys over well-exposed communication. The front has shifted, and now technology must build cryptographic shields beyond content, covering the pattern and even the existence of communication. We present here tools, solutions, and methods to that end. They are based on equivocation. If a message is received by many recipients, it hides the intended one. If a protocol calls for decoy messages, then it protects the identity of the sender of the contents-laden message. BitGrey is a protocol that creates a "grey hole" (of various shades) around the communicating community, so that very little information leaks out. In addition, the BitLoop protocol constructs a fixed-rate circulating bit flow, traversing all members of a group. The looping bits appear random, and effectively hide the pattern, even the existence, of communication within the group.
Last updated:  2019-08-22
Optimal TNFS-secure pairings on elliptic curves with composite embedding degree
Georgios Fotiadis, Chloe Martindale
In this paper we present a comprehensive comparison between pairing-friendly elliptic curves, considering different curve forms and twists where possible. We define a measure of the efficiency of a parametrized pairing-friendly family that takes into account the number field sieve (NFS) attacks (unlike the $\rho$-value). This measure includes an approximation of the security of the discrete logarithm problem in $\mathbb F_{p^k}^*$, computed via the method of Barbulescu and Duquesne [4]. We compute the security of the families presented by Fotiadis and Konstantinou in [13], compute some new families, and compare the efficiency of both of these with the (adjusted) BLS, KSS, and BN families, and with the new families of [19]. Finally, we present an optimal pairing-friendly elliptic curve for security level 128 and recommend two pairing-friendly elliptic curves for security level 192.
Last updated:  2021-12-14
How to Build Pseudorandom Functions From Public Random Permutations
Yu Long Chen, Eran Lambooij, Bart Mennink
Pseudorandom functions are traditionally built upon block ciphers, but with the trend of permutation based cryptography, it is a natural question to investigate the design of pseudorandom functions from random permutations. We present a generic study of how to build beyond birthday bound secure pseudorandom functions from public random permutations. We first show that a pseudorandom function based on a single permutation call cannot be secure beyond the $2^{n/2}$ birthday bound, where n is the state size of the function. We next consider the Sum of Even-Mansour (SoEM) construction, that instantiates the sum of permutations with the Even-Mansour construction. We prove that SoEM achieves tight $2n/3$-bit security if it is constructed from two independent permutations and two randomly drawn keys. We also demonstrate a birthday bound attack if either the permutations or the keys are identical. Finally, we present the Sum of Key Alternating Ciphers (SoKAC) construction, a translation of Encrypted Davies-Meyer Dual to a public permutation based setting, and show that SoKAC achieves tight $2n/3$-bit security even when a single key is used.
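A minimal sketch of a sum-of-Even-Mansour style PRF, i.e. the XOR of two single-key Even-Mansour instances built from independent public permutations. The exact key placement in the paper's SoEM construction may differ; this toy (8-bit state, table-based permutations) only illustrates the general shape.

import random

N = 8  # toy state size in bits (the paper works with an n-bit state)

def random_permutation(seed):
    # Stand-in for a fixed public random permutation on N-bit values.
    rng = random.Random(seed)
    table = list(range(1 << N))
    rng.shuffle(table)
    return table

P1 = random_permutation("public permutation 1")
P2 = random_permutation("public permutation 2")

def even_mansour(perm, key, x):
    # Single-key Even-Mansour: whiten, apply the public permutation, whiten again.
    return perm[x ^ key] ^ key

def soem(k1, k2, x):
    # Sum (XOR) of two independent Even-Mansour instances with independent keys.
    return even_mansour(P1, k1, x) ^ even_mansour(P2, k2, x)

k1, k2 = random.randrange(1 << N), random.randrange(1 << N)
print([soem(k1, k2, x) for x in range(4)])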
Last updated:  2019-05-24
Towards post-quantum symmetric cryptography
John Gregory Underhill, Stiepan Aurélien Kovac, Xenia Bogomolec
With this work, we intend to demonstrate the need for improvements to the currently standardized AES family of cryptosystems, and provide a solution that meets the requirements of long-term security in the rapidly evolving threat landscape. The solution proposed is flexible, dramatically increases the potential security of the cipher, and strongly mitigates many of the most serious attacks on the AES family of cryptosystems. Further, our solution can be easily integrated into existing AES cryptosystem deployments, with only a few small changes required, thus preserving the large investments in this cipher both in hardware and software.
Last updated:  2019-05-24
Continuous Space-Bounded Non-Malleable Codes from Stronger Proofs-of-Space
Binyi Chen, Yilei Chen, Kristina Hostáková, Pratyay Mukherjee
Non-malleable codes are encoding schemes that provide protections against various classes of tampering attacks. Recently Faust et al. (CRYPTO 2017) initiated the study of space-bounded non-malleable codes that provide such protections against tampering within small-space devices. They put forward a construction based on any non-interactive proof-of-space (NIPoS). However, the scheme only protects against an a priori bounded number of tampering attacks. We construct non-malleable codes that are resilient to an unbounded polynomial number of space-bounded tamperings. Towards that we introduce a stronger variant of NIPoS called proof-extractable NIPoS (PExt-NIPoS), and propose two approaches of constructing such a primitive. Using a new proof strategy we show that the generic encoding scheme of Faust et al. achieves unbounded tamper-resilience when instantiated with a PExt-NIPoS. We show two methods to construct PExt-NIPoS: 1. The first method uses a special family of “memory-hard” graphs, called challenge-hard graphs (CHG), a notion we introduce here. We instantiate such family of graphs based on an extension of stack of localized expanders (first used by Ren and Devadas in the context of proof-of-space). In addition, we show that the graph construction used as a building block for the proof-of-space by Dziembowski et al. (CRYPTO 2015) satisfies challenge-hardness as well. These two CHG-instantiations lead to continuous space-bounded NMC with different features in the random oracle model. 2. Our second instantiation relies on a new measurable property, called uniqueness of NIPoS. We show that standard extractability can be upgraded to proof-extractability if the NIPoS also has uniqueness. We propose a simple heuristic construction of NIPoS, that achieves (partial) uniqueness, based on a candidate memory-hard function in the standard model and a publicly verifiable computation with small-space verification. Instantiating the encoding scheme of Faust et al. with this NIPoS, we obtain a continuous space-bounded NMC that supports the “most practical” parameters, complementing the provably secure but “relatively impractical” CHG-based constructions. Additionally, we revisit the construction of Faust et al. and observe that due to the lack of uniqueness of their NIPoS, the resulting encoding schemes yield “highly impractical” parameters in the continuous setting. We conclude the paper with a comparative study of all our non-malleable code constructions with an estimation of concrete parameters.
Last updated:  2019-05-24
A note on the correlations between NIST cryptographic statistical tests suite
Emil Simion, Paul Burciu
This paper is focused on an open question regarding the correlation and the power of the NIST statistical test suite. If we find some correlation between these statistical tests, then we can improve the testing strategy by executing only one of the tests that are correlated. Using the Galton-Pearson “product-moment correlation coefficient”, by simulation, we found a high correlation between five pairs of these statistical tests: (frequency, cumulative sums forward), (frequency, cumulative sums reverse), (cumulative sums forward, cumulative sums reverse), (random excursions, random excursions variant), and (serial 1, serial 2).
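For illustration only: computing the Galton-Pearson product-moment correlation coefficient between the p-value sequences of two tests run on the same sample of bit strings. The p-values below are made up; the paper's correlations come from simulation over the actual NIST suite.

import math

def pearson(xs, ys):
    # Product-moment correlation coefficient of two equal-length samples.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical p-values of the "frequency" and "cumulative sums (forward)" tests
# on five sequences; a value close to 1 indicates the tests are highly correlated.
frequency = [0.73, 0.12, 0.55, 0.91, 0.34]
cusum_fwd = [0.70, 0.15, 0.50, 0.88, 0.40]
print(pearson(frequency, cusum_fwd))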
Last updated:  2020-08-19
Spartan: Efficient and general-purpose zkSNARKs without trusted setup
Srinath Setty
This paper introduces Spartan, a new family of zero-knowledge succinct non-interactive arguments of knowledge (zkSNARKs) for the rank-1 constraint satisfiability (R1CS), an NP-complete language that generalizes arithmetic circuit satisfiability. A distinctive feature of Spartan is that it offers the first zkSNARKs without trusted setup (i.e., transparent zkSNARKs) for NP where verifying a proof incurs sub-linear costs—without requiring uniformity in the NP statement’s structure. Furthermore, Spartan offers zkSNARKs with a time-optimal prover, a property that has remained elusive for nearly all zkSNARKs in the literature. To achieve these results, we introduce new techniques that we compose with the sum-check protocol, a seminal interactive proof protocol: (1) computation commitments, a primitive to create a succinct commitment to a description of a computation; this technique is crucial for a verifier to achieve sub-linear costs after investing a one-time, public computation to preprocess a given NP statement; (2) SPARK, a cryptographic compiler to transform any existing extractable polynomial commitment scheme for multilinear polynomials to one that efficiently handles sparse multilinear polynomials; this technique is critical for achieving a time-optimal prover; and (3) a compact encoding of an R1CS instance as a low-degree polynomial. The end result is a public-coin succinct interactive argument of knowledge for NP (which can be viewed as a succinct variant of the sum-check protocol); we transform it into a zkSNARK using prior techniques. By applying SPARK to different commitment schemes, we obtain several zkSNARKs where the verifier’s costs and the proof size range from $O(log^2{n})$ to $O(\sqrt{n})$ depending on the underlying commitment scheme ($n$ denotes the size of the NP statement). These schemes do not require a trusted setup except for one that requires a universal trusted setup. We implement Spartan as a library in about 8,000 lines of Rust. We use the library to build a transparent zkSNARK in the random oracle model where security holds under the discrete logarithm assumption. We experimentally evaluate it and compare it with recent zkSNARKs for R1CS instance sizes up to $2^{20}$ constraints. Among transparent zkSNARKs, Spartan offers the fastest prover with speedups of $36$--$152\times$ depending on the baseline, produces proofs that are shorter by $1.2$--$416\times$, and incurs the lowest verification times with speedups of $3.6$--$1326\times$. The only exception is proof sizes under Bulletproofs, but Bulletproofs incurs slower verification both asymptotically and concretely. When compared to the state-of-the-art zkSNARK with trusted setup, Spartan’s prover is $2\times$ faster for arbitrary R1CS instances and $16\times$ faster for data-parallel workloads. Spartan’s code is available from: https://github.com/Microsoft/Spartan.
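Since Spartan's argument is described as a succinct variant of the sum-check protocol, it may help to recall the standard sum-check claim; the notation below is the textbook one and is not taken from the paper. The prover claims that
\[ T = \sum_{x_1 \in \{0,1\}} \cdots \sum_{x_m \in \{0,1\}} g(x_1, \ldots, x_m) \]
for a low-degree $m$-variate polynomial $g$. In round $i$ the prover sends the univariate polynomial $g_i(X) = \sum_{x_{i+1}, \ldots, x_m \in \{0,1\}} g(r_1, \ldots, r_{i-1}, X, x_{i+1}, \ldots, x_m)$, the verifier checks $g_{i-1}(r_{i-1}) = g_i(0) + g_i(1)$ (reading $g_0(r_0)$ as $T$) and replies with a fresh random challenge $r_i$; after $m$ rounds a single evaluation of $g$ at $(r_1, \ldots, r_m)$ remains to be checked.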
Last updated:  2019-05-23
Finding a Nash Equilibrium Is No Easier Than Breaking Fiat-Shamir
Arka Rai Choudhuri, Pavel Hubacek, Chethan Kamath, Krzysztof Pietrzak, Alon Rosen, Guy N. Rothblum
The Fiat-Shamir heuristic transforms a public-coin interactive proof into a non-interactive argument, by replacing the verifier with a cryptographic hash function that is applied to the protocol’s transcript. Constructing hash functions for which this transformation is sound is a central and long-standing open question in cryptography. We show that solving the End-of-Metered-Line problem is no easier than breaking the soundness of the Fiat-Shamir transformation when applied to the sumcheck protocol. In particular, if the transformed protocol is sound, then any hard problem in #P gives rise to a hard distribution in the class CLS, which is contained in PPAD. Our main technical contribution is a stateful incrementally verifiable procedure that, given a SAT instance over n variables, counts the number of satisfying assignments. This is accomplished via an exponential sequence of small steps, each computable in time poly(n). Incremental verifiability means that each intermediate state includes a sumcheck-based proof of its correctness, and the proof can be updated and verified in time poly(n). Combining our construction with a hash family proposed by Canetti et al. [STOC 2019] gives rise to a distribution in the class CLS, which is provably hard under the assumption that any one of a class of fully homomorphic encryption (FHE) schemes has almost-optimal security against quasi-polynomial time adversaries, and under the additional worst-case assumption that there is no polynomial time algorithm for counting the number of satisfying assignments for formulas over a polylogarithmic number of variables.
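For context, here is the standard Fiat-Shamir transformation in the simplest three-message case; this recollection is generic and not specific to the sumcheck instantiation studied in the paper. Given a public-coin protocol with transcript $(a, c, z)$ for a statement $x$, the prover derives the challenge itself as
\[ c = H(x, a), \]
and publishes $\pi = (a, z)$; the verifier recomputes $c$ with the same hash function and runs the original verification on $(a, c, z)$. Soundness of the transformed protocol, the property this paper relates to the End-of-Metered-Line problem, requires that no efficient prover can produce an accepting $\pi$ for a false statement.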
Last updated:  2019-10-29
About Wave Implementation and its Leakage Immunity
Thomas Debris-Alazard, Nicolas Sendrier, Jean-Pierre Tillich
Wave is a recent digital signature scheme. It is based on a family of trapdoor one-way Preimage Sampleable Functions and is proven EUF-CMA in the random oracle model under two code-based computational assumptions. One of its key properties is that it produces uniformly distributed signatures of fixed Hamming weight. This property implies that, if properly implemented, Wave is immune to leakage attacks. We describe here the key stages of the implementation of the Wave trapdoor inverse function needed to achieve leakage-freeness. A proof-of-concept implementation was made in SageMath and in C. It allowed us to check that properly generated Wave signatures are uniformly distributed.
Last updated:  2020-07-22
Linearly-Homomorphic Signatures and Scalable Mix-Nets
Chloé Hébant, Duong Hieu Phan, David Pointcheval
Anonymity is a primary ingredient for our digital life. Several tools have been designed to address it such as, for authentication, blind signatures, group signatures or anonymous credentials and, for confidentiality, randomizable encryption or mix-nets. When it comes to complex electronic voting schemes, random shuffling of ciphertexts with mix-nets is the only known tool. However, it requires huge and complex zero-knowledge proofs to guarantee the actual permutation of the initial ciphertexts. In this paper, we propose a new approach for proving correct shuffling: the mix-servers can simply randomize individual ballots, which means the ciphertexts, the signatures, and the verification keys, with an additional global proof of constant size, and the output will be publicly verifiable. The computational complexity for the mix-servers is linear in the number of ciphertexts. Verification is also linear in the number of ciphertexts, independently of the number of rounds of mixing. This leads to the most efficient technique, that is highly scalable. Our constructions make use of linearly-homomorphic signatures, with new features, that are of independent interest.
Last updated:  2024-09-16
Zero-Knowledge Proof-of-Identity: Sybil-Resistant, Anonymous Authentication on Permissionless Blockchains and Incentive Compatible, Strictly Dominant Cryptocurrencies
David Cerezo Sánchez
Zero-Knowledge Proof-of-Identity from trusted public certificates (e.g., national identity cards and/or ePassports; eSIM) is introduced here to permissionless blockchains in order to remove the inefficiencies of Sybil-resistant mechanisms such as Proof-of-Work (i.e., high energy and environmental costs) and Proof-of-Stake (i.e., capital hoarding and lower transaction volume). The proposed solution effectively limits the number of mining nodes a single individual would be able to run while keeping membership open to everyone, circumventing the impossibility of full decentralization and the blockchain scalability trilemma when instantiated on a blockchain with a consensus protocol based on the cryptographic random selection of nodes. Resistance to collusion is also considered. Solving one of the most pressing problems in blockchains, a zk-PoI cryptocurrency is proved to have the following advantageous properties:
- an incentive-compatible protocol for the issuing of cryptocurrency rewards based on a unique Nash equilibrium
- strict domination of mining over all other PoW/PoS cryptocurrencies, thus the zk-PoI cryptocurrency becoming the preferred choice by miners is proved to be a Nash equilibrium and the Evolutionarily Stable Strategy
- PoW/PoS cryptocurrencies are condemned to pay the Price of Crypto-Anarchy, redeemed by the optimal efficiency of zk-PoI as it implements the social optimum
- the circulation of a zk-PoI cryptocurrency Pareto dominates other PoW/PoS cryptocurrencies
- the network effects arising from the social networks inherent to national identity cards and ePassports dominate PoW/PoS cryptocurrencies
- the lower costs of its infrastructure imply the existence of a unique equilibrium where it dominates other forms of payment
Last updated:  2019-11-06
Transform-and-Encode: A Countermeasure Framework for Statistical Ineffective Fault Attacks on Block Ciphers
Sayandeep Saha, Dirmanto Jap, Debapriya Basu Roy, Avik Chakraborti, Shivam Bhasin, Debdeep Mukhopadhyay
Right from its introduction by Boneh et al., fault attacks (FA) have been established to be one of the most practical threats to both public key and symmetric key based cryptosystems. Statistical Ineffective Fault Analysis (SIFA) is a recently proposed class of fault attacks introduced at CHES 2018. The fascinating feature of this attack is that it exploits the correct ciphertexts obtained during a fault injection campaign, instead of the faulty ciphertexts. SIFA has been shown to bypass almost all of the existing fault attack countermeasures even when they are combined with provably secure masking schemes for side-channel resistance. The goal of this work is to propose a countermeasure for SIFA. It has been observed that a randomized domain transformation of the intermediate computation combined with bit-level error correction can throttle SIFA. The randomized domain transformation can be achieved by standard masking schemes. In fact, we prove that if biased faults are injected at the state register of a block cipher at a target round, then masking is sufficient to protect against SIFA, unless all the shares for a specific bit are corrupted. However, masking alone cannot prevent SIFA if the faults are injected at certain specific locations inside the S-Boxes. To address this issue, we incorporate a bit-level error-correction mechanism. The strongest advantage of the proposed countermeasure, called AntiSIFA, is that it provides provable and quantifiable security guarantees. Proof-of-concept evaluations were performed on software implementations of the block cipher PRESENT, and the results corroborate the theoretical analysis.
Last updated:  2019-05-22
Evaluation of Code-based Signature Schemes
Partha Sarathi Roy, Kirill Morozov, Kazuhide Fukushima, Shinsaku Kiyomoto
Code-based cryptographic schemes have recently risen to prominence as quantum-safe alternatives to the currently employed number-theoretic constructions, which do not resist quantum attacks. In this article, we discuss the Courtois-Finiasz-Sendrier signature scheme and derive code-based signature schemes using the Fiat-Shamir transformation from code-based zero-knowledge identification schemes, namely the Stern scheme, the Jain-Krenn-Pietrzak-Tentes scheme, and the Cayrel-Veron-El Yousfi scheme. We analyze the security of these code-based signature schemes and derive the security parameters to achieve the 80-bit and 128-bit levels of classical security. To derive the secure parameters, we have studied the hardness of the Syndrome Decoding Problem. Furthermore, we implement the Fiat-Shamir-based signature schemes mentioned above and compare their performance on a PC.
Last updated:  2019-05-22
TMPS: Ticket-Mediated Password Strengthening
John Kelsey, Dana Dachman-Soled, Sweta Mishra, Meltem Sonmez Turan
We introduce the notion of Ticket-Mediated Password Strengthening (TMPS), a technique for allowing users to derive keys from passwords while imposing a strict limit on the number of guesses of their password any attacker can make, and strongly protecting the users' privacy. We describe the security requirements of TMPS, and then a set of efficient and practical protocols to implement a TMPS scheme, requiring only hash functions, CCA2-secure encryption, and blind signatures. We provide several variant protocols, including an offline symmetric-only protocol that uses a local trusted computing environment, and online variants that use group signatures or stronger trust assumptions instead of blind signatures. We formalize the security of our scheme by defining an ideal functionality in the Universal Composability (UC) framework, and by providing game-based definitions of security. We prove that our protocol realizes the ideal functionality in the random oracle model (ROM) under adaptive corruptions with erasures, and prove that security with respect to the ideal/real definition implies security with respect to the game-based definitions.
Last updated:  2019-05-22
Formally Verified Cryptographic Web Applications in WebAssembly
Jonathan Protzenko, Benjamin Beurdouche, Denis Merigoux, Karthikeyan Bhargavan
After suffering decades of high-profile attacks, the need for formal verification of security-critical software has never been clearer. Verification-oriented programming languages like F* are now being used to build high-assurance cryptographic libraries and implementations of standard protocols like TLS. In this paper, we seek to apply these verification techniques to modern Web applications, like WhatsApp, that embed sophisticated custom cryptographic components. The problem is that these components are often implemented in JavaScript, a language that is both hostile to cryptographic code and hard to reason about. So we instead target WebAssembly, a new instruction set that is supported by all major JavaScript runtimes. We present a new toolchain that compiles Low*, a low-level subset of the F* programming language, into WebAssembly. Unlike other WebAssembly compilers like Emscripten, our compilation pipeline is focused on compactness and auditability: we formalize the full translation rules in the paper and implement them in a few thousand lines of OCaml. Using this toolchain, we present two case studies. First, we build WHACL*, a WebAssembly version of the existing, verified HACL* cryptographic library. Then, we present LibSignal*, a brand new, verified implementation of the Signal protocol in WebAssembly, that can be readily used by messaging applications like WhatsApp, Skype, and Signal.
Last updated:  2019-05-22
A Smart Contract Refereed Data Retrieval Protocol with a Provably Low Collateral Requirement
James Shook, Scott Simon, Peter Mell
We present a protocol for a cryptoeconomic fair exchange of data previously owned by the purchaser for tokens that functions even when both parties are anonymous. This enables peer-to-peer data storage without identity verification. We use a smart contract on a decentralized ledger as a trusted third party. Actual data transfer can take place with any standard anonymous exchange channel. Due to the anonymity of the parties, the smart contract cannot punish either party's off-ledger reputation. Furthermore, the contract has limited power to arbitrate fault in off-ledger disputes. Thus, an important feature of our protocol is a collateral mechanism that collectively punishes both Alice and Bob if either of them abandons the protocol or cheats. However, we prove that parameters can be chosen such that the collateral can be made, subject to data size, arbitrarily low and still result in an expected financial loss if either Alice or Bob cheats. We are able to achieve this due to our non-standard use of error-correcting codes. In addition, the protocol allows those storing the data to exchange it without the client's participation.
Last updated:  2019-05-22
A chosen key attack against the secret S-boxes of GOST
Markku-Juhani O. Saarinen
I am making this work from August 1998 available for historical reasons. It has been cited as an ``unpublished manuscript'' more than two dozen times over the years -- even though it has not been publicly available anywhere for almost 20 years. The short memo describes a simple non-intrusive reverse engineering technique against Russian GOST chips. The technique is based on a slide attack. This may be historically interesting since slide attacks had not been ``invented yet'', at least in formal sense. The brief original abstract: We show that a simple ``black box'' chosen-key attack against GOST can recover secret S-boxes with approximately $2^{32}$ encryptions.
Last updated:  2020-05-11
Cryptanalysis of FlexAEAD
Mostafizar Rahman, Dhiman Saha, Goutam Paul
This paper analyzes the internal keyed permutation of FlexAEAD, which is a round-1 candidate of the NIST Lightweight Cryptography Competition. In our analysis, we report an iterated truncated differential leveraging on a particular property of the AES S-box that becomes useful due to the particular nature of the diffusion layer of the round function. The differential holds with a low probability of $2^{-7}$ for one round which allows it to penetrate the same number of rounds as claimed by the designers, but with a much lower complexity. Moreover, it can be easily extended to a key-recovery attack at a little extra cost. We further report a Super-Sbox construction in the internal permutation, which is exploited using the Yoyo game to devise a 6-round deterministic distinguisher and a 7-round key recovery attack for the 128-bit internal permutation. Similar attacks can be mounted for the 64-bit and 256-bit variants. All these attacks outperform the existing results of the designers as well as other third-party results. The iterated truncated differentials can be tweaked to mount forgery attacks similar to the ones given by Eichlseder et al. Success probabilities of all the reported distinguishing attacks are shown to be high. All practical attacks have been experimentally verified. To the best of our knowledge, this work reports the first key-recovery attack on the internal keyed permutation of FlexAEAD.
Last updated:  2019-05-22
On Perfect Endomorphic Ciphers
Nikolay Shenets
It has been 70 years since the publication of the seminal work of Claude Elwood Shannon, in which he first gave a mathematical definition of the cryptosystem and introduced the concept of perfect ciphers. He also examined the conditions under which such ciphers exist. Shannon's results in one form or another are presented in almost all books on cryptography. One of his results deals with so-called endomorphic ciphers in which the cardinalities of the message space $\mathcal{M}$ and the ciphertext space $\mathcal{C}$ are the same. The Vernam cipher (one-time pad) is the most famous representative of such ciphers. Moreover, it's the only one known to be perfect. Surprisingly, we have found a mistake in Shannon's result. Namely, Shannon stated that an endomorphic cipher, in which the keyspace $\mathcal{K}$ has the same cardinality as the message space, is perfect if and only if two conditions are satisfied. The first suggests that for any plaintext-ciphertext pair there exists only one key that translates this plaintext into this ciphertext. The second argues that the key distribution must be uniform. We show that these two conditions are not really enough. We prove in three different ways that the plaintexts must also be equally probable. Moreover, we study the general endomorphic cipher and get the same result. It follows that in practice perfect endomorphic ciphers do not exist.
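As a reminder of the notions involved (a standard formulation, not a restatement of the paper's argument), perfect secrecy and the Vernam cipher can be written as follows. A cipher is perfect if for every message $m$ and every ciphertext $c$ with $\Pr[C = c] > 0$,
\[ \Pr[M = m \mid C = c] = \Pr[M = m], \]
i.e. observing the ciphertext does not change the a posteriori distribution of the plaintext. The Vernam cipher encrypts as $c = m \oplus k$, where $k$ is a fresh, uniformly random key of the same length as the message.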
Last updated:  2019-05-23
Efficient Search for Optimal Diffusion Layers of Generalized Feistel Networks
Patrick Derbez, Pierre-Alain Fouque, Baptiste Lambin, Victor Mollimard
The Feistel construction is one of the most studied ways of building block ciphers. Several generalizations were then proposed in the literature, leading to the Generalized Feistel Network, where the round function first applies a classical Feistel operation in parallel on an even number of blocks, and then a permutation is applied to this set of blocks. In 2010 at FSE, Suzaki and Minematsu studied the diffusion of such construction, raising the question of how many rounds are required so that each block of the ciphertext depends on all blocks of the plaintext. They thus gave some optimal permutations, with respect to this diffusion criteria, for a Generalized Feistel Network consisting of 2 to 16 blocks, as well as giving a good candidate for 32 blocks. Later at FSE'19, Cauchois et al. went further and were able to propose optimal even-odd permutations for up to 26 blocks. In this paper, we complete the literature by building optimal even-odd permutations for 28, 30, 32, 36 blocks which to the best of our knowledge were unknown until now. The main idea behind our constructions and impossibility proof is a new characterization of the total diffusion of a permutation after a given number of rounds. In fact, we propose an efficient algorithm based on this new characterization which constructs all optimal even-odd permutations for the 28, 30, 32, 36 blocks cases and proves a better lower bound for the 34, 38, 40 and 42 blocks cases. In particular, we improve the 32 blocks case by exhibiting optimal even-odd permutations with diffusion round of 9. The existence of such a permutation was an open problem for almost 10 years and the best known permutation in the literature had a diffusion round of 10. Moreover, our characterization can be implemented very efficiently and allows us to easily re-find all optimal even-odd permutations for up to 26 blocks with a basic exhaustive search.
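As a small illustration of the diffusion-round notion discussed above, the following sketch propagates input-block dependencies through a generalized Feistel round. The Feistel convention (F of each even block XORed into the odd block to its right, then a block permutation) and the example permutation are assumptions chosen for illustration and may differ from the exact conventions of the paper.

def diffusion_round(pi, max_rounds=64):
    # pi[i] is the position that block i moves to after the Feistel layer.
    n = len(pi)                      # number of blocks (even)
    # deps[j] = set of input-block indices that block j currently depends on
    deps = [{j} for j in range(n)]
    for r in range(1, max_rounds + 1):
        # Feistel layer: odd block 2j+1 absorbs the dependencies of block 2j
        for j in range(0, n, 2):
            deps[j + 1] = deps[j + 1] | deps[j]
        # Permutation layer: block at position i moves to position pi[i]
        new_deps = [None] * n
        for i in range(n):
            new_deps[pi[i]] = deps[i]
        deps = new_deps
        # Full (forward) diffusion: every block depends on every input block
        if all(len(d) == n for d in deps):
            return r
    return None

# Example: a cyclic shift on 8 blocks, chosen purely for illustration.
cyclic = [(i - 1) % 8 for i in range(8)]
print(diffusion_round(cyclic))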
Last updated:  2024-06-07
Protecting against Statistical Ineffective Fault Attacks
Joan Daemen, Christoph Dobraunig, Maria Eichlseder, Hannes Gross, Florian Mendel, and Robert Primas
At ASIACRYPT 2018 it was shown that Statistical Ineffective Fault Attacks (SIFA) pose a threat for many practical implementations of symmetric primitives. In particular, countermeasures against both power analysis and fault attacks typically do not prevent straightforward SIFA attacks that require only very limited knowledge about the concrete attacked implementation. Consequently, the exploration of countermeasures against SIFA that do not rely on protocols or physical protection mechanisms is of great interest. In this paper, we explore different countermeasure strategies against SIFA. First, we introduce an abstraction layer between the algorithmic specification of a cipher and its implementation in hardware or software to study and describe resistance against SIFA. We then show that by basing the masked implementation on permutations as building blocks, we can build circuits that withstand single-fault SIFA and DPA attacks. We show how this approach can be applied to 3-bit, 4-bit, and 5-bit S-boxes and the AES S-box. Additionally, we present a strategy based on fine-grained fault detection suitable for protecting any circuit against SIFA attacks. Although this approach may lead to a higher implementation cost due to the fine-grained detection needed, it can be used to protect arbitrary circuits and can be generalized to cover multi-fault SIFA.
Last updated:  2019-05-22
SIKE Round 2 Speed Record on ARM Cortex-M4
Hwajeong Seo, Amir Jalali, Reza Azarderakhsh
We present the first practical software implementation of Supersingular Isogeny Key Encapsulation (SIKE) round 2, targeting NIST’s 1, 2, and 5 security levels on 32-bit ARM Cortex-M4 microcontrollers. The proposed library introduces a new speed record for the SIKE protocol on the target platform. We achieved this record by adopting several state-of-the-art engineering techniques as well as a highly-optimized, hand-crafted assembly implementation of finite field arithmetic. In particular, we carefully redesign the previous optimized implementations of field arithmetic on the 32-bit ARM Cortex-M4 platform and propose a set of novel techniques which are explicitly suitable for SIKE/SIDH primes. Moreover, the proposed arithmetic implementations are fully scalable to larger bit-length integers and can be adapted to different security levels. The benchmark result on the STM32F4 Discovery board equipped with 32-bit ARM Cortex-M4 microcontrollers shows that the entire key encapsulation over p434 takes about 326 million clock cycles (i.e. 1.94 seconds @168MHz). In contrast to the previous optimized implementation of the isogeny-based key exchange on low-power 32-bit ARM Cortex-M4, our performance evaluation shows the feasibility of using the SIKE mechanism on the target platform. In comparison to most of the post-quantum candidates, SIKE requires an excessive number of arithmetic operations, resulting in significantly slower timings. However, its small key size makes this scheme a promising candidate on low-end microcontrollers in the quantum era by ensuring lower energy consumption for key transmission than other schemes.
Last updated:  2019-05-22
Theoretical and Practical Approaches for Hardness Amplification of PUFs
Fatemeh Ganji, Shahin Tajik, Pascal Stauss, Jean-Pierre Seifert, Domenic Forte, Mark Tehranipoor
The era of PUFs has been characterized by the efforts put into research and the development of PUFs that are robust against attacks, in particular, machine learning (ML) attacks. In the lack of systematic and provable methods for this purpose, we have witnessed the ever-continuing competition between PUF designers/ manufacturers, cryptanalysts, and of course, adversaries that maliciously break the security of PUFs. This is despite a series of acknowledged principles developed in cryptography and complexity theory, under the umbrella term ``hardness amplification." The goal of studies on the hardness amplification is to build a strongly secure construction out of considerably weaker primitives. This paper aims at narrowing the gap between these studies and hardware security, specifically for applications in the domain of PUFs. To this end, we first review an example of practical efforts made to construct more secure PUFs, namely the concept of rolling PUFs. Based on what can be learned from this and central insights provided by the ML and complexity theory, we propose a new PUF-based scheme built around the idea of using a new function, namely, the Tribes function, which combines the outputs of a set of PUFs to generate the final response. Our theoretical findings are discussed in an exhaustive manner and supported by the results of experiments, conducted extensively on real-world PUFs.
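A minimal sketch of the Tribes function as a combiner of PUF response bits, i.e. an OR of ANDs over disjoint blocks. The block width and the wiring of individual PUF outputs below are hypothetical choices for illustration, not the parameters proposed in the paper.

def tribes(bits, w):
    # Split the response bits into blocks of width w; output 1 iff some block
    # is all ones (OR of per-block ANDs).
    blocks = [bits[i:i + w] for i in range(0, len(bits), w)]
    return int(any(all(block) for block in blocks))

# Hypothetical combined response of 12 component PUFs, blocks of width 3.
responses = [1, 0, 1,   1, 1, 1,   0, 0, 1,   1, 1, 0]
print(tribes(responses, w=3))   # -> 1, since the second block is all ones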
Last updated:  2019-05-22
Stopping time signatures for some algorithms in cryptography
Percy Deift, Stephen D. Miller, Thomas Trogdon
We consider the normalized distribution of the overall running times of some cryptographic algorithms, and what information they reveal about the algorithms. Recent work of Deift, Menon, Olver, Pfrang, and Trogdon has shown that certain numerical algorithms applied to large random matrices exhibit a characteristic distribution of running times, which depends only on the algorithm and is independent of the choice of probability distributions for the matrices. Different algorithms often exhibit different running time distributions, and so the histograms for these running time distributions provide a "time-signature" for the algorithms, making it possible, in many cases, to distinguish one algorithm from another. In this paper we extend this analysis to cryptographic algorithms, and present examples of such algorithms with time-signatures that are indistinguishable, and others with time-signatures that are clearly distinct.
Last updated:  2020-02-16
Concretely-Efficient Zero-Knowledge Arguments for Arithmetic Circuits and Their Application to Lattice-Based Cryptography
Carsten Baum, Ariel Nof
In this work we present a new interactive Zero-Knowledge Argument of knowledge for general arithmetic circuits. Our protocol is based on the ``MPC-in-the-head''-paradigm of Ishai et al. (STOC 2009) and follows the recent ``MPC-in-the-head with Preprocessing'' as proposed by Katz, Kolesnikov and Wang (ACM CCS 2018). However, in contrast to Katz et al. who used the ``cut-and-choose'' approach for pre-processing, we show how to incorporate the well-known ``sacrificing'' paradigm into ``MPC-in-the-head'', which reduces the proof size when working over arithmetic circuits. Our argument system uses only lightweight symmetric-key primitives and utilizes a simplified version of the so-called SPDZ-protocol. Based on specific properties of our protocol we then show how it can be used to construct an efficient Zero-Knowledge Argument of Knowledge for instances of the Short Integer Solution (SIS) problem. We present different protocols that are tailored to specific uses of SIS, while utilizing the advantages of our scheme. In particular, we present a variant of our argument system that allows the parties to sample the circuit ``on the fly'', which may be of independent interest. We furthermore implemented our Zero-Knowledge argument for SIS and show that using our protocols it is possible to run a complete interactive proof, even for general SIS instances which result in a circuit with $>10^6$ gates, in less than 0.5 seconds.
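For reference, the Short Integer Solution (SIS) problem that the tailored argument systems target is the standard lattice problem below; the parameter names are generic rather than the paper's. SIS with parameters $n, m, q, \beta$: given a uniformly random matrix $A \in \mathbb{Z}_q^{n \times m}$, find a nonzero vector $x \in \mathbb{Z}^m$ with
\[ A x \equiv 0 \pmod{q} \quad \text{and} \quad \|x\| \le \beta . \]
Roughly, proving knowledge of such an $x$ in zero-knowledge amounts to proving satisfiability of an arithmetic circuit that checks the linear relation together with the norm bound, which is the shape of statement the protocols above are designed for.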
Last updated:  2019-09-04
How to Correct Errors in Multi-Server PIR
Kaoru Kurosawa
Suppose that there exist a user and $\ell$ servers $S_1, \ldots, S_{\ell}$. Each server $S_j$ holds a copy of a database $x=(x_1, \ldots, x_n) \in \{0,1\}^n$, and the user holds a secret index $i_0 \in \{1, \ldots, n\}$. A $b$-error-correcting $\ell$-server PIR (Private Information Retrieval) scheme allows a user to retrieve $x_{i_0}$ correctly even if $b$ or fewer servers return false answers, while each server learns no information on $i_0$ in the information theoretic sense. Although there exists such a scheme with total communication cost $O(n^{1/(2k-1)} \times k\ell \log{\ell})$ where $k=\ell-2b$, the decoding algorithm is very inefficient. In this paper, we show an efficient decoding algorithm for this $b$-error-correcting $\ell$-server PIR scheme. It runs in time $O(\ell^3)$.
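To make the stated bound concrete, here is a worked instance (an illustration, not a result from the paper): with $\ell = 5$ servers and $b = 1$ tolerated faulty answer, $k = \ell - 2b = 3$, so the total communication is
\[ O\!\left(n^{1/(2k-1)} \cdot k \ell \log \ell\right) = O\!\left(n^{1/5} \cdot 15 \log 5\right), \]
and the decoding algorithm given here runs in time $O(\ell^3)$, i.e. on the order of $5^3 = 125$ basic operations, independent of $n$.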
Last updated:  2019-05-21
Speed-up of SCA attacks on 32-bit multiplications
Robert Nguyen, Adrien Facon, Sylvain Guilley, Guillaume Gautier, Safwan El Assad
Many cryptographic algorithms, deep-learning and DSP workloads compute on words larger than 8 bits. SCA attacks can easily be mounted on Boolean operations like XOR, AND and OR, and on substitution operations like S-boxes, P-boxes or Q-boxes, as hypotheses of 8 bits or less are enough to forge attacks. However, attacking a larger hypothesis word exponentially increases the required resources in memory and computation power. For multiplication, a 32-bit operation implies $2^{32}$ hypotheses, so a direct SCA attack cannot be performed efficiently. We propose instead to perform 4 small 8-bit SCA attacks, reducing the complexity of the 32-bit attack to that of 8-bit attacks.
Last updated:  2019-05-20
UC-Commitment Schemes with Phase-Adaptive Security from Trapdoor Functions
Pedro Branco, Manuel Goulão, Paulo Mateus
We propose a generic framework for perfectly hiding UC-Commitment schemes in the Global Random Oracle model of Canetti \textit{et al.} (CCS 14). The main building block of our construction is a novel primitive called Sampleable-Range Trapdoor Function, that is, a trapdoor function for which there is a non-negligible probability of finding preimages when given a uniformly chosen element of its codomain and the corresponding trapdoor. To show the versatility of the framework, we give concrete instantiations based on factoring, code-based, and lattice-based hardness assumptions. Our construction yields the first lattice-based UC-Commitment scheme (not constructed via generic transformations, such as via Oblivious Transfer), and achieves what we call \textit{phase-adaptive security}, a novel security notion we introduce which is stronger than static security. Achieving adaptive security for UC-Commitment schemes is non-trivial and, usually, comes at the price of efficiency. Phase-adaptive security stands between adaptive and static security, and may be of independent interest. In this model, adversaries can corrupt parties at the beginning or between the commitment and opening phases of the protocol, but not during their execution. This new model is motivated by the fact that, in practice, it is more likely that parties are corrupted between phases of the protocol (where a relatively long period may elapse) than during their execution.
Last updated:  2019-09-10
Anomalies and Vector Space Search: Tools for S-Box Analysis (Full Version)
Xavier Bonnetain, Léo Perrin, Shizhu Tian
S-boxes are functions with an input so small that the simplest way to specify them is their lookup table (LUT). Unfortunately, some algorithm designers exploit this fact to avoid providing the algorithm used to generate said lookup table. In this paper, we provide tools for finding the hidden structure in an S-box or to identify it as the output of a complex generation process rather than a random sample. We introduce various "anomalies". These real numbers are such that a property with an anomaly equal to $a$ should be found roughly once in a set of $2^{a}$ random S-boxes. First, we revisit the literature on S-box reverse-engineering to present statistical anomalies based on the distribution of the coefficients in the difference distribution table, linear approximation table, and for the first time, the boomerang connectivity table. We then count the number of S-boxes that have block-cipher like structures to estimate the anomaly associated to those. In order to recover these structures, we show that the most general tool for decomposing S-boxes is an algorithm efficiently listing all the vector spaces of a given dimension contained in a given set, and we present such an algorithm. Finally, we propose general methods to formally quantify the complexity of any S-box. It relies on the production of the smallest program evaluating it and on combinatorial arguments. Combining these approaches, we show that all permutations that are actually picked uniformly at random always have essentially the same cryptographic properties, and can never be decomposed in a simpler way. These conclusions show that multiple claims made by the designers of the latest Russian standards are factually incorrect.
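One natural way to read the quantitative definition of an anomaly sketched above (our paraphrase, not the paper's exact formalism) is
\[ \mathrm{anomaly}(P) = -\log_2 \Pr_{S}\big[\, S \text{ has property } P \,\big], \]
where $S$ is drawn uniformly at random, so a property with anomaly $a$ is expected to occur about once among $2^{a}$ random S-boxes.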
Last updated:  2019-05-20
Revisiting Privacy-aware Blockchain Public Key Infrastructure
Olamide Omolola, Paul Plessing
Privacy-aware Blockchain Public Key Infrastructure (PB- PKI) is a recent proposal by Louise Axon (2017) to create a privacy-preserving Public Key Infrastructure on the Blockchain. However, PB-PKI suffers from operational problems. We found that the most important change, i.e., the key update process proposed in PB-PKI for privacy is broken. Other issues include authenticating a user during key update and ensuring proper key revocation. In this paper, we provide solutions to the problems of PB-PKI. We suggest generating fresh keys during key update. Furthermore, we use ring signatures for authenticating the user requesting key updates and use Asynchronous accumulators to handle the deletion of revoked keys. We show that the approach is feasible and implement a proof of concept.
Last updated:  2020-01-29
Prime, Order Please! Revisiting Small Subgroup and Invalid Curve Attacks on Protocols using Diffie-Hellman
Cas Cremers, Dennis Jackson
Diffie-Hellman groups are a widely used component in cryptographic protocols in which a shared secret is needed. These protocols are typically proven to be secure under the assumption they are implemented with prime order Diffie Hellman groups. However, in practice, many implementations either choose to use non-prime order groups for reasons of efficiency, or can be manipulated into operating in non-prime order groups. This leaves a gap between the proofs of protocol security, which assume prime order groups, and the real world implementations. This is not merely a theoretical possibility: many attacks exploiting small subgroups or invalid curve points have been found in the real world. While many advances have been made in automated protocol analysis, modern tools such as Tamarin and ProVerif represent DH groups using an abstraction of prime order groups. This means they, like many cryptographic proofs, may miss practical attacks on real world protocols. In this work we develop a novel extension of the symbolic model of Diffie-Hellman groups. By more accurately modelling internal group structure, our approach captures many more differences between prime order groups and their actual implementations. The additional behaviours that our models capture are surprisingly diverse, and include not only attacks using small subgroups and invalid curve points, but also a range of proposed mitigation techniques, such as excluding low order elements, single coordinate ladders, and checking the elliptic curve equation. Our models thereby capture a large family of attacks that were previously outside the symbolic model. We implement our improved models in the Tamarin prover. We find a new attack on the Secure Scuttlebutt Gossip protocol, independently discover a recent attack on Tendermint’s secure handshake, and evaluate the effectiveness of the proposed mitigations for recent Bluetooth attacks.
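As a toy illustration of the kind of behaviour the refined symbolic model is meant to capture, the snippet below shows small-subgroup confinement in a Diffie-Hellman group with a non-trivial cofactor; the numbers are deliberately tiny and unrelated to any real protocol.

# Toy small-subgroup confinement: p = 23, so the multiplicative group has
# order 22 = 2 * 11, and the element 22 (= -1 mod 23) has order 2.
p = 23
low_order = p - 1          # attacker-supplied "public key" of order 2
victim_secret = 7          # victim's exponent

# The victim dutifully exponentiates whatever public value it receives:
shared = pow(low_order, victim_secret, p)

# shared is 1 if the secret is even and p-1 if it is odd, leaking one bit.
print(shared, victim_secret % 2)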
Last updated:  2019-05-20
Misuse Attacks on Post-Quantum Cryptosystems
Ciprian Băetu, F. Betül Durak, Loïs Huguenin-Dumittan, Abdullah Talayhan, Serge Vaudenay
Many post-quantum cryptosystems which have been proposed in the National Institute of Standards and Technology (NIST) standardization process follow the same meta-algorithm, but in different algebras or different encoding methods. They usually propose two constructions, one being weaker and the other requiring a random oracle. We focus on the weak version of nine submissions to NIST. Submitters claim no security when the secret key is used several times. In this paper, we analyze how easy it is to run a key recovery under multiple key reuse. We mount a classical key recovery under plaintext checking attacks (i.e., with a plaintext checking oracle saying if a given ciphertext decrypts well to a given plaintext) and a quantum key recovery under chosen ciphertext attacks. In the latter case, we assume quantum access to the decryption oracle.
Last updated:  2019-09-19
Efficient Multi-Key Homomorphic Encryption with Packed Ciphertexts with Application to Oblivious Neural Network Inference
Hao Chen, Wei Dai, Miran Kim, Yongsoo Song
Homomorphic Encryption (HE) is a cryptosystem which supports computation on encrypted data. López-Alt et al. (STOC 2012) proposed a generalized notion of HE, called Multi-Key Homomorphic Encryption (MKHE), which is capable of performing arithmetic operations on ciphertexts encrypted under different keys. In this paper, we present multi-key variants of two HE schemes with packed ciphertexts. We present new relinearization algorithms which are simpler and faster than the previous method by Chen et al. (TCC 2017). We then generalize the bootstrapping techniques for HE to obtain multi-key fully homomorphic encryption schemes. We provide a proof-of-concept implementation of both MKHE schemes using Microsoft SEAL. For example, when the dimension of the base ring is 8192, homomorphic multiplication between multi-key BFV (resp. CKKS) ciphertexts associated with four parties followed by a relinearization takes about 116 (resp. 67) milliseconds. Our MKHE schemes have a wide range of applications in secure computation between multiple data providers. As a benchmark, we homomorphically classify an image using a pre-trained neural network model, where input data and model are encrypted under different keys. Our implementation takes about 1.8 seconds to evaluate one convolutional layer followed by two fully connected layers on an encrypted image from the MNIST dataset.
Last updated:  2020-05-22
Threshold ECDSA from ECDSA Assumptions: The Multiparty Case
Jack Doerner, Yashvanth Kondi, Eysa Lee, abhi shelat
Cryptocurrency applications have spurred a resurgence of interest in the computation of ECDSA signatures using threshold protocols---that is, protocols in which the signing key is secret-shared among $n$ parties, of which any subset of size $t$ must interact in order to compute a signature. Among the resulting works to date, that of Doerner et al. requires the most natural assumptions while also achieving the best practical signing speed. It is, however, limited to the setting in which the threshold is two. We propose an extension of their scheme to arbitrary thresholds, and prove it secure against a malicious adversary corrupting up to one party less than the threshold under only the Computational Diffie-Hellman Assumption in the Global Random Oracle model, an assumption strictly weaker than those under which ECDSA is proven. We implement our scheme and evaluate it among groups of up to 256 co-located and geographically-distributed parties, and among small groups of embedded devices. In the LAN setting, our scheme outperforms all prior works by orders of magnitude, and it is efficient enough for use even on smartphones or hardware tokens. In the WAN setting, our protocol outperforms the best constant-round protocols in realistic scenarios, despite its logarithmic round count.
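To see why thresholdizing ECDSA is delicate, it helps to recall the standard signing equation (textbook ECDSA, not the paper's protocol): for a group of prime order $q$ with generator $G$, secret key $sk$ and message hash $h = H(m)$, the signer samples a nonce $k$, sets $r = (k \cdot G)_x \bmod q$ and
\[ s = k^{-1}\,(h + r \cdot sk) \bmod q , \]
outputting $(r, s)$. In a threshold protocol both $sk$ and $k$ are secret-shared, so the parties must jointly compute the inverted nonce and the product $r \cdot sk$ without ever reconstructing either secret.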
Last updated:  2019-05-20
Secret-Sharing from Robust Conditional Disclosure of Secrets
Amos Beimel, Naty Peter
A secret-sharing scheme is a method by which a dealer, holding a secret string, distributes shares to parties such that only authorized subsets of parties can reconstruct the secret. The collection of authorized subsets is called an access structure. Secret-sharing schemes are an important tool in cryptography and they are used as a building box in many secure protocols. In the original constructions of secret-sharing schemes by Ito et al. [Globecom 1987], the share size of each party is $\tilde{O}(2^{n})$ (where $n$ is the number of parties in the access structure). New constructions of secret-sharing schemes followed; however, the share size in these schemes remains basically the same. Although much efforts have been devoted to this problem, no progress was made for more than 30 years. Recently, in a breakthrough paper, Liu and Vaikuntanathan [STOC 2018] constructed a secret-sharing scheme for a general access structure with share size $\tilde{O}(2^{0.994n})$. The construction is based on new protocols for conditional disclosure of secrets (CDS). This was improved by Applebaum et al. [EUROCRYPT 2019] to $\tilde{O}(2^{0.892n})$. In this work, we construct improved secret-sharing schemes for a general access structure with share size $\tilde{O}(2^{0.762n})$. Our schemes are linear, that is, the shares are a linear function of the secret and some random elements from a finite field. Previously, the best linear secret-sharing scheme had shares of size $\tilde{O}(2^{0.942n})$. Most applications of secret-sharing require linearity. Our scheme is conceptually simpler than previous schemes, using a new reduction to two-party CDS protocols (previous schemes used a reduction to multi-party CDS protocols). In a CDS protocol for a function $f$, there are $k$ parties and a referee; each party holds a private input and a common secret, and sends one message to the referee (without seeing the other messages). On one hand, if the function $f$ applied to the inputs returns $1$, then it is required that the referee, which knows the inputs, can reconstruct the secret from the messages. On the other hand, if the function $f$ applied to the inputs returns $0$, then the referee should get no information on the secret from the messages. However, if the referee gets two messages from a party, corresponding to two different inputs (as happens in our reduction from secret-sharing to CDS), then the referee might be able to reconstruct the secret although it should not. To overcome this problem, we define and construct $t$-robust CDS protocols, where the referee cannot get any information on the secret when it gets $t$ messages for a set of zero-inputs of $f$. We show that if a function $f$ has a two-party CDS protocol with message size $c_f$, then it has a two-party $t$-robust CDS protocol with normalized message size $\tilde{O}(t c_f)$. Furthermore, we show that every function $f:[N] \times [N]\rightarrow \{0,1\}$ has a multi-linear $t$-robust CDS protocol with normalized message size $\tilde{O}(t+\sqrt{N})$. We use a variant of this protocol (with $t$ slightly larger than $\sqrt{N}$) to construct our improved linear secret-sharing schemes. Finally, we construct robust $k$-party CDS protocols for $k>2$.
Last updated:  2019-05-20
Fully Homomorphic Encryption with k-bit Arithmetic Operations
Benjamin M. Case, Shuhong Gao, Gengran Hu, Qiuxia Xu
We present a fully homomorphic encryption scheme continuing the line of works of Ducas and Micciancio (2015, [DM15]), Chillotti et al. (2016, [CGGI16a]; 2017, [CGGI17]; 2018, [CGGI18a]), and Gao (2018, [Gao18]). Ducas and Micciancio (2015) show that homomorphic computation of a one-bit operation on LWE ciphers can be done in less than a second, which is then reduced by Chillotti et al. (2016, 2017, 2018) to 13ms. According to Chillotti et al. (2018, [CGGI18b]), the cipher expansion for TFHE is still 8000. The ciphertext expansion problem was greatly reduced by Gao (2018) to 6 with private-key encryption and 20 for public key encryption. The bootstrapping in Gao (2018) is only done one bit at a time, and the bootstrapping design matches the previous two works in efficiency. Our contribution is to present a fully homomorphic encryption scheme based on these preceding schemes that generalizes the Gao (2018) scheme to perform operations on $k$-bit encrypted data and also removes the need for the Independence Heuristic of the Chillotti et al. papers. The amortized cost of computing $k$ bits at a time improves the efficiency. Operations supported include addition and multiplication modulo $2^k$, addition and multiplication in the integers as well as exponentiation, field inversion and the machine learning activation function RELU. The ciphertext expansion factor is also further improved: for $k = 4$, our scheme achieves a ciphertext expansion factor of 2.5 under secret key and 6.5 under public key. Asymptotically, as $k$ increases, our scheme achieves the optimal ciphertext expansion factor of 1 under private key encryption and 2 under public key encryption. We also introduce techniques for reducing the size of the bootstrapping key. Keywords. FHE, lattices, learning with errors (LWE), ring learning with errors (RLWE), TFHE, data security, RELU, machine learning
Last updated:  2019-05-20
A Note on Sub-Gaussian Random Variables
Benjamin M. Case, Colin Gallagher, Shuhong Gao
A sub-Gaussian distribution is any probability distribution that has tails bounded by a Gaussian and has a mean of zero. It is well known that the sum of independent sub-Gaussians is again sub-Gaussian. This note generalizes this result to sums of sub-Gaussians that may not be independent, under the assumption that a certain conditional distribution is also sub-Gaussian. This general result is useful in the study of noise growth in (fully) homomorphic encryption schemes [CGHX19, CGGI17], and hopefully useful for other applications.
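One common formalization of these notions (the note may use an equivalent tail-based definition) is the following. A mean-zero random variable $X$ is $\sigma$-sub-Gaussian if
\[ \mathbb{E}\big[e^{tX}\big] \le e^{\sigma^2 t^2 / 2} \quad \text{for all } t \in \mathbb{R}. \]
For independent $X$ and $Y$ that are $\sigma_X$- and $\sigma_Y$-sub-Gaussian, the moment generating function factors, $\mathbb{E}[e^{t(X+Y)}] = \mathbb{E}[e^{tX}]\,\mathbb{E}[e^{tY}] \le e^{(\sigma_X^2 + \sigma_Y^2) t^2 / 2}$, so $X+Y$ is $\sqrt{\sigma_X^2 + \sigma_Y^2}$-sub-Gaussian; it is precisely this factoring step that fails without independence, which is why an assumption on the conditional distribution is needed.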
Last updated:  2025-04-09
Security in the Presence of Key Reuse: Context-Separable Interfaces and their Applications
Christopher Patton and Thomas Shrimpton
Key separation is often difficult to enforce in practice. While key reuse can be catastrophic for security, we know of a number of cryptographic schemes for which it is provably safe. But existing formal models, such as the notions of joint security (Haber-Pinkas, CCS ’01) and agility (Acar et al., EUROCRYPT ’10), do not address the full range of key-reuse attacks—in particular, those that break the abstraction of the scheme, or exploit protocol interactions at a higher level of abstraction. This work attends to these vectors by focusing on two key elements: the game that codifies the scheme under attack, as well as its intended adversarial model; and the underlying interface that exposes secret key operations for use by the game. Our main security experiment considers the implications of using an interface (in practice, the API of a software library or a hardware platform such as TPM) to realize the scheme specified by the game when the interface is shared with other unspecified, insecure, or even malicious applications. After building up a definitional framework, we apply it to the analysis of two real-world schemes: the EdDSA signature algorithm and the Noise protocol framework. Both provide some degree of context separability, a design pattern for interfaces and their applications that aids in the deployment of secure protocols.
Last updated:  2020-10-23
Fast Database Joins and PSI for Secret Shared Data
Payman Mohassel, Peter Rindal, Mike Rosulek
We present a scalable protocol for database joins on secret shared data in the honest-majority three-party setting. The key features of our protocol are a rich set of SQL-like join/select queries and the ability to compose join operations together due to the inputs and outputs being generically secret shared between the parties. Provided that all joins operate on unique primary keys, no information is revealed to any party during the protocol. In particular, not even the sizes of intermediate joins are revealed. All of our protocols are constant-round and achieve $O(n)$ communication and computation overhead for joining two tables of $n$ rows. These properties make our protocol ideal for outsourced secure computation. In this setting several non-colluding servers are setup and the input data is shared among them. These servers then perform the relevant secret shared computation and output the result. This model has recently been gaining traction in industry, e.g. Facebook's Crypten, Cape Privacy's TFEncrypted, Mozilla Telemetry. We additionally implement two applications on top of our framework. The first application detects voter registration errors within and between agencies of 50 US states, in a privacy-preserving manner. The second application allows several organizations to compare network security logs to more accurately identify common security threats, e.g. the IP addresses of a bot net. In both cases, the practicality of these applications depends on efficiently performing joins on millions of secret shared records. For example, our three party protocol can perform a join on two sets of 1 million records in 4.9 seconds or, alternatively, compute the cardinality of this join in just 3.1 seconds.
Last updated:  2019-05-24
Mobile Private Contact Discovery at Scale
Daniel Kales, Christian Rechberger, Thomas Schneider, Matthias Senker, Christian Weinert
Mobile messengers like WhatsApp perform contact discovery by uploading the user's entire address book to the service provider. This allows the service provider to determine which of the user's contacts are registered to the messaging service. However, such a procedure poses significant privacy risks and legal challenges. As we find, even messengers with privacy in mind currently do not deploy proper mechanisms to perform contact discovery privately. The most promising approaches addressing this problem revolve around private set intersection (PSI) protocols. Unfortunately, even in a weak security model where clients are assumed to follow the protocol honestly, previous protocols and implementations turned out to be far from practical when used at scale. This is due to their high computation and/or communication complexity as well as lacking optimization for mobile devices. In our work, we remove most obstacles for large-scale global deployment by significantly improving two promising protocols by Kiss et al. (PoPETS'17) while also allowing for malicious clients. Concretely, we present novel precomputation techniques for correlated oblivious transfers (reducing the online communication by a factor of 2), Cuckoo filter compression (with a compression ratio of $\approx 70\%$), as well as 4.3x smaller Cuckoo filter updates. In a protocol performing oblivious PRF evaluations via garbled circuits, we replace AES as the evaluated PRF with a variant of LowMC (Albrecht et al., EUROCRYPT'15) for which we determine optimal parameters, thereby reducing the communication by a factor of 8.2. Furthermore, we implement both protocols with security against malicious clients in C/C++ and utilize the ARM Cryptography Extensions available in most recent smartphones. Compared to previous smartphone implementations, this yields a performance improvement by a factor of 1000 for circuit evaluations. The online phase of our fastest protocol takes only 2.92s measured on a real WiFi connection (6.53s on LTE) to check 1024 client contacts against a large-scale database with $2^{28}$ entries. As a proof-of-concept, we integrate our protocols in the client application of the open-source messenger Signal.
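Since Cuckoo filters are central to the compression and update improvements, here is a minimal sketch of the textbook data structure (plain partial-key cuckoo hashing with illustrative parameters; it is not the paper's compressed variant, and the hashes and sizes below are placeholders).

```python
# Minimal Cuckoo filter sketch: each item is stored as a short fingerprint in one
# of two buckets; the second bucket index is derived by XORing the hash of the
# fingerprint into the first index (so num_buckets should be a power of two).
import hashlib
import random

class CuckooFilter:
    def __init__(self, num_buckets=1024, bucket_size=4, max_kicks=500):
        self.num_buckets = num_buckets
        self.bucket_size = bucket_size
        self.max_kicks = max_kicks
        self.buckets = [[] for _ in range(num_buckets)]

    def _fingerprint(self, item):
        h = hashlib.sha256(item.encode()).digest()
        return h[0] | 1                       # 8-bit fingerprint, never zero

    def _index(self, item):
        h = hashlib.sha256(b"idx" + item.encode()).digest()
        return int.from_bytes(h[:4], "big") % self.num_buckets

    def _alt_index(self, index, fp):
        h = hashlib.sha256(bytes([fp])).digest()
        return (index ^ int.from_bytes(h[:4], "big")) % self.num_buckets

    def insert(self, item):
        fp = self._fingerprint(item)
        i1 = self._index(item)
        i2 = self._alt_index(i1, fp)
        for i in (i1, i2):
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        i = random.choice((i1, i2))           # relocate existing fingerprints
        for _ in range(self.max_kicks):
            j = random.randrange(len(self.buckets[i]))
            fp, self.buckets[i][j] = self.buckets[i][j], fp
            i = self._alt_index(i, fp)
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        return False                          # filter too full

    def contains(self, item):
        fp = self._fingerprint(item)
        i1 = self._index(item)
        i2 = self._alt_index(i1, fp)
        return fp in self.buckets[i1] or fp in self.buckets[i2]

cf = CuckooFilter()
cf.insert("+1-555-0100")
print(cf.contains("+1-555-0100"), cf.contains("+1-555-0199"))
```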
Last updated:  2019-05-20
CellTree: A New Paradigm for Distributed Data Repositories
Anasuya Acharya, Manoj Prabhakaran, Akash Trehan
We present CellTree, a new architecture for distributed data repositories. The repository allows data to be stored in largely independent, and highly programmable cells, which are “assimilated” into a tree structure. The data in the cells are allowed to change over time, subject to each cell’s own policies; a cell’s policies also govern how the policies themselves can evolve. A design goal of the architecture is to let a CellTree evolve organically over time, and adapt itself to multiple applications. Different parts of the tree may be maintained by different sets of parties interested in the respective parts, and the core mechanisms used for maintaining the tree can also vary across the tree and over time. We present provable guarantees of liveness, correctness and consistency (the last one being a generalization of the typical blockchain guarantee of “persistence,” when data is dynamic), when the CellTree architecture is instantiated using a simple set of modules. These properties can be guaranteed for individual cells that satisfy requisite trust assumptions, even if these trust assumptions do not hold for other cells in the tree. We also discuss several features of a CellTree that can be exploited by applications. We leave it for future work to develop full-fledged applications on top of this powerful architecture.
Last updated:  2022-09-13
A Countermeasure Against Statistical Ineffective Fault Analysis
Jakub Breier, Mustafa Khairallah, Xiaolu Hou, Yang Liu
When considering practical attacks against cryptographic implementations, Fault Injection Attacks (FIA) constitute a powerful tool that can recover the secret key within a few encryptions. Over the past few decades they have become a well-studied topic among both academic and industry practitioners. Current state-of-the-art countermeasures against Fault Injection Attacks (FIA) provide good protection against analysis methods that require the differences in the correct and faulty ciphertext to derive the secret information, such as Differential Fault Analysis (DFA) or collision fault analysis. However, recent progress in Ineffective Fault Analysis (IFA) and Statistical IFA (SIFA) constitutes a real threat against cryptographic implementations. Such methods cannot be thwarted by standard FIA countermeasures that focus on detecting the change in the intermediate data. In this paper, we present a novel method based on error correcting codes that protects implementations against SIFA. We design a set of universal error-correcting gates that can be used for block cipher implementations. We analyze a hardware implementation of protected GIFT-64 and show that our method provides 100% protection against SIFA.
Last updated:  2019-12-03
Pixel: Multi-signatures for Consensus
Manu Drijvers, Sergey Gorbunov, Gregory Neven, Hoeteck Wee
In Proof-of-Stake (PoS) and permissioned blockchains, a committee of verifiers agrees on and signs every new block of transactions. These blocks are validated, propagated, and stored by all users in the network. However, posterior corruptions pose a common threat to these designs, because the adversary can corrupt committee verifiers after they certified a block and use their signing keys to certify a different block. Designing efficient and secure digital signatures for use in PoS blockchains can substantially reduce bandwidth, storage and computing requirements from nodes, thereby enabling more efficient applications. We present Pixel, a pairing-based forward-secure multi-signature scheme optimized for use in blockchains, that achieves substantial savings in bandwidth, storage requirements, and verification effort. Pixel signatures consist of two group elements, regardless of the number of signers, can be verified using three pairings and one exponentiation, and support non-interactive aggregation of individual signatures into a multi-signature. Pixel signatures are also forward-secure and let signers evolve their keys over time, such that new keys cannot be used to sign on old blocks, protecting against posterior corruption attacks on blockchains. We show how to integrate Pixel into any PoS blockchain. Next, we evaluate Pixel in a real-world PoS blockchain implementation, showing that it yields notable savings in storage, bandwidth, and block verification time. In particular, Pixel signatures reduce the size of blocks with 1500 transactions by 35% and reduce block verification time by 38%.
Last updated:  2019-09-10
New Code-Based Privacy-Preserving Cryptographic Constructions
Khoa Nguyen, Hanh Tang, Huaxiong Wang, Neng Zeng
Code-based cryptography has a long history but has suffered from periods of slow development. The field has recently attracted a lot of attention as one of the major branches of post-quantum cryptography. However, its subfield of privacy-preserving cryptographic constructions is still rather underdeveloped, e.g., important building blocks such as zero-knowledge range proofs and set membership proofs, and even proofs of knowledge of a hash preimage, have not been known under code-based assumptions. Moreover, almost no substantial technical development has been introduced in the last several years. This work introduces several new code-based privacy-preserving cryptographic constructions that considerably advance the state-of-the-art in code-based cryptography. Specifically, we present $3$ major contributions, each of which potentially yields various other applications. Our first contribution is a code-based statistically hiding and computationally binding commitment scheme with companion zero-knowledge (ZK) argument of knowledge of a valid opening that can be easily extended to prove that the committed bits satisfy other relations. Our second contribution is the first code-based zero-knowledge range argument for committed values, with communication cost logarithmic in the size of the range. A special feature of our range argument is that, while previous works on range proofs/arguments (in all branches of cryptography) only address ranges of non-negative integers, our protocol can handle signed fractional numbers, and hence, can potentially find a larger scope of applications. Our third contribution is the first code-based Merkle-tree accumulator supported by a ZK argument of membership, which has been known to enable various interesting applications. In particular, it allows us to obtain the first code-based ring signatures and group signatures with logarithmic signature sizes.
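As background for the third contribution, the following is a minimal Python sketch of a generic hash-based Merkle-tree accumulator (the accumulator value is the root, a membership witness is the authentication path); it is not the paper's code-based instantiation and omits the zero-knowledge layer entirely, and all parameters are illustrative.

```python
# Generic Merkle-tree accumulator: build, prove membership, verify.
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    level = [H(x) for x in leaves]              # assumes len(leaves) is a power of two
    levels = [level]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels                               # levels[-1][0] is the accumulator/root

def prove(levels, index):
    path = []
    for level in levels[:-1]:                   # collect one sibling per level
        path.append(level[index ^ 1])
        index //= 2
    return path

def verify(root, leaf, index, path):
    node = H(leaf)
    for sibling in path:
        node = H(sibling + node) if index & 1 else H(node + sibling)
        index //= 2
    return node == root

leaves = [bytes([i]) * 4 for i in range(8)]     # 8 accumulated values
levels = build_tree(leaves)
root = levels[-1][0]
path = prove(levels, 5)
print(verify(root, leaves[5], 5, path))         # True
```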
Last updated:  2019-05-23
Tight Leakage-Resilient CCA-Security from Quasi-Adaptive Hash Proof System
Shuai Han, Shengli Liu, Lin Lyu, Dawu Gu
We propose the concept of quasi-adaptive hash proof system (QAHPS), where the projection key is allowed to depend on the specific language for which hash values are computed. We formalize leakage-resilient (LR) ardency for QAHPS by defining two statistical properties, including LR-<L_0,L_1>-universal and LR-<L_0,L_1>-key-switching. We provide a generic approach to tightly leakage-resilient CCA (LR-CCA) secure public-key encryption (PKE) from LR-ardent QAHPS. Our approach is reminiscent of the seminal work of Cramer and Shoup (Eurocrypt'02), and employs three QAHPS schemes, one for generating a uniform string to hide the plaintext, and the other two for proving the well-formedness of the ciphertext. The LR-ardency of QAHPS is what makes the tight LR-CCA security possible. We give instantiations based on the standard k-Linear (k-LIN) assumptions over asymmetric and symmetric pairing groups, respectively, and obtain fully compact PKE with tight LR-CCA security. The security loss is O(log Q_e) where Q_e denotes the number of encryption queries. Specifically, our tightly LR-CCA secure PKE instantiation from SXDH has only 4 group elements in the public key and 7 group elements in the ciphertext, and is thus the most efficient one.
Last updated:  2020-08-19
GALACTICS: Gaussian Sampling for Lattice-Based Constant-Time Implementation of Cryptographic Signatures, Revisited
Gilles Barthe, Sonia Belaïd, Thomas Espitau, Pierre-Alain Fouque, Mélissa Rossi, Mehdi Tibouchi
In this paper, we propose a constant-time implementation of the BLISS lattice-based signature scheme. BLISS is possibly the most efficient lattice-based signature scheme proposed so far, with a level of performance on par with widely used pre-quantum primitives like ECDSA. It is one of the few postquantum signatures to have seen real-world deployment, as part of the strongSwan VPN software suite. The outstanding performance of the BLISS signature scheme stems in large part from its reliance on discrete Gaussian distributions, which allow for better parameters and security reductions. However, that advantage has also proved to be its Achilles’ heel, as discrete Gaussians pose serious challenges in terms of secure implementations. Implementations of BLISS so far have included secret-dependent branches and memory accesses, both as part of the discrete Gaussian sampling and of the essential rejection sampling step in signature generation. These defects have led to multiple devastating timing attacks, and were a key reason why BLISS was not submitted to the NIST postquantum standardization effort. In fact, almost all of the actual candidates chose to stay away from Gaussians despite their efficiency advantage, due to the serious concerns surrounding implementation security. Moreover, naive countermeasures will often not cut it: we show that a reasonable-looking countermeasure suggested in previous work to protect the BLISS rejection sampling can again be defeated using novel timing attacks, in which the timing information is fed to a phase retrieval machine learning algorithm in order to achieve a full key recovery. Fortunately, we also present careful implementation techniques that allow us to describe an implementation of BLISS with complete timing attack protection, achieving the same level of efficiency as the original unprotected code, without resorting to floating point arithmetic or platform-specific optimizations like AVX intrinsics. These techniques, including a new approach to the polynomial approximation of transcendental functions, can also be applied to the masking of the BLISS signature scheme, and will hopefully make more efficient and secure implementations of lattice-based cryptography possible going forward.
Last updated:  2019-05-20
Tweaking the Asymmetry of Asymmetric-Key Cryptography on Lattices: KEMs and Signatures of Smaller Sizes
Jiang Zhang, Yu Yu, Shuqin Fan, Zhenfeng Zhang, Kang Yang
Lattice-based cryptosystems are less efficient than their number-theoretic counterparts (based on RSA, discrete logarithm, etc.) in terms of key and ciphertext (signature) sizes. For adequate security the former typically needs thousands of bytes while in contrast the latter only requires at most hundreds of bytes. This significant difference has become one of the main concerns in replacing currently deployed public-key cryptosystems with lattice-based ones. Observing the inherent asymmetries in existing lattice-based cryptosystems, we propose asymmetric variants of the (module-)LWE and (module-)SIS assumptions, which yield KEM and signature schemes that are more size-optimized than those from the standard counterparts. Following the framework of Lindner and Peikert (CT-RSA 2011) and the Crystals-Kyber proposal (EuroS&P 2018), we propose an IND-CCA secure KEM scheme from the hardness of the asymmetric module-LWE (AMLWE), whose asymmetry is fully exploited to obtain shorter public keys and ciphertexts. Targeting 128-bit security, the public key (resp., ciphertext) of our KEM has only 896 bytes (resp., 992 bytes), which gives an improvement of 192 bytes (resp., 160 bytes) over Kyber. Our signature scheme bears the most resemblance to and improves upon the Crystals-Dilithium scheme (ToCHES 2018). By making full use of the underlying asymmetric module-LWE and module-SIS assumptions and carefully selecting the parameters, we obtain a better compromise between computational costs, storage overheads and security, and therefore construct an SUF-CMA secure signature scheme with shorter public keys and signatures. For 128-bit security, the public key (resp., signature) of our signature scheme has only 1312 bytes (resp., 2445 bytes), which gives an improvement of 160 bytes (resp., 256 bytes) over Dilithium. We adapt the best known attacks and their variants to our AMLWE and AMSIS problems and conduct a comprehensive and thorough analysis of several parameter choices (aiming at different security strengths) and their impacts on the sizes, security and error probability of lattice-based cryptosystems. Our analysis demonstrates that AMLWE and AMSIS problems admit more flexible and size-efficient choices of parameters than the respective standard versions. Furthermore, implementations of our proposed schemes appear to be (slightly) more computationally efficient than their counterparts.
Last updated:  2019-10-04
New Slide Attacks on Almost Self-Similar Ciphers
Orr Dunkelman, Nathan Keller, Noam Lasry, Adi Shamir
The slide attack is a powerful cryptanalytic tool which has the unusual property that it can break iterated block ciphers with a complexity that does not depend on their number of rounds. However, it requires complete self-similarity in the sense that all the rounds must be identical. While this can be the case in Feistel structures, this rarely happens in SP networks since the last round must end with an additional post-whitening subkey. In addition, in many SP networks the final round has additional asymmetries -- for example, in AES the last round omits the MixColumns operation. Such asymmetry in the last round can make it difficult to utilize most of the advanced tools which were developed for slide attacks, such as deriving from one slid pair additional slid pairs by repeatedly re-encrypting their ciphertexts. In this paper we overcome this "last round problem" by developing four new types of slide attacks. We demonstrate their power by applying them to many types of AES-like structures (with and without linear mixing in the last round, with known or secret S-boxes, with 1, 2 and 3 periodicity in their subkeys, etc.). In most of these cases, the time complexity of our attack is close to $2^{n/2}$, which is the smallest possible complexity for slide attacks. Our new slide attacks have several unique properties: The first attack uses slid sets in which each plaintext from the first set forms a slid pair with some plaintext from the second set, but without knowing the exact correspondence. The second attack makes it possible to create from several slid pairs an exponential number of new slid pairs which form a hypercube spanned by the given pairs. The third attack has the unusual property that it is always successful, and the fourth attack can use known messages instead of chosen messages, with only slightly higher time complexity.
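For orientation, the classical slid-pair relation that slide attacks exploit reads as follows (standard material, not specific to this paper): if the cipher is an iteration of one identical keyed round, a pair of plaintexts related by one round slides through the whole cipher.

```latex
\[
  E_K = \underbrace{f_K \circ \cdots \circ f_K}_{r \text{ rounds}},
  \qquad
  P' = f_K(P) \;\Longrightarrow\; C' = E_K(P') = f_K\big(E_K(P)\big) = f_K(C).
\]
```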
Last updated:  2019-09-17
RingCT 3.0 for Blockchain Confidential Transaction: Shorter Size and Stronger Security
Tsz Hon Yuen, Shi-feng Sun, Joseph K. Liu, Man Ho Au, Muhammed F. Esgin, Qingzhao Zhang, Dawu Gu
In this paper, we propose the most competent blockchain ring confidential transaction protocol (RingCT3.0) for protecting the privacy of the sender's identity, the recipient's identity and the confidentiality of the transaction amount. For a typical 2-input transaction with a ring size of 1024, the ring signature size of our RingCT3.0 protocol is 98% less than the ring signature size of the original RingCT1.0 protocol used in Monero. Taking advantage of our compact RingCT3.0 transcript size, privacy-preserving cryptocurrencies can enjoy a much lower transaction fee which will have a significant impact on the crypto-economy. Our implementation result shows that our protocol outperforms existing solutions, in terms of efficiency and security. In addition to the significant improvement in terms of efficiency, our scheme is proven secure in a stronger security model. We remove the trusted setup assumption used in RingCT2.0. Our scheme is anonymous against ring insiders (non-signing users who are included in the ring), while we show that RingCT1.0 is not secure in this strong model. Our RingCT3.0 protocol relies on our newly designed ring signature scheme as an underlying primitive, which is believed to be the most efficient ring signature scheme to date (in terms of signature size) without trusted setup. Our ring signature scheme is derived from our novel design of an efficient set membership proof of n public keys, with a proof size of O(log n). It is the first set membership proof without trusted setup for public keys in the base group, instead of in the exponent. These two primitives are of independent interest.
Last updated:  2019-08-02
Simple Schemes in the Bounded Storage Model
Jiaxin Guan, Mark Zhandry
The bounded storage model promises unconditional security proofs against computationally unbounded adversaries, so long as the adversary’s space is bounded. In this work, we develop simple new constructions of two-party key agreement, bit commitment, and oblivious transfer in this model. In addition to simplicity, our constructions have several advantages over prior work, including an improved number of rounds and enhanced correctness. Our schemes are based on Raz’s lower bound for learning parities.
Last updated:  2024-01-17
Forward Security with Crash Recovery for Secure Logs
Erik-Oliver Blass and Guevara Noubir
Logging is a key mechanism in the security of computer systems. Beyond supporting important forward security properties, it is critical that logging withstands both failures and intentional tampering to prevent subtle attacks leaving the system in an inconsistent state with inconclusive evidence. We propose new techniques combining forward security with crash recovery for secure log data storage. As the need to support forward integrity specifically and the online nature of logging prevent the use of conventional coding, we propose and analyze a coding scheme resolving these unique design constraints. Specifically, our coding enables forward integrity, online encoding, and most importantly a constant number of operations per encoding. It adds a new log item by XORing it to $k$ cells of a table. If up to a certain threshold of cells is modified by the adversary, or lost due to a crash, we still guarantee recovery of all stored log items. The main advantage of the coding scheme is its efficiency and compatibility with forward integrity. The key contribution of the paper is the use of spectral graph theory techniques to prove that $k$ is constant in the number $n$ of all log items ever stored and small in practice, e.g., $k=5$. Moreover, we prove that to cope with up to $\sqrt{n}$ modified or lost log items, storage expansion is constant in $n$ and small in practice. For $k=5$, the size of the table is only $12\%$ more than the simple concatenation of all $n$ items. We propose and evaluate original techniques to scale the computation cost of recovery to several GBytes of security logs. We instantiate our scheme into an abstract data structure which allows one to either detect adversarial modifications to log items or treat modifications like data loss in a system crash. The data structure can recover lost log items, thereby effectively reverting adversarial modifications.
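To make the XOR-into-$k$-cells step concrete, here is a minimal Python sketch of the encoding idea only; the cell choices below are simply pseudorandom and all constants are illustrative, whereas the paper derives them so that recovery of all items is guaranteed (via spectral graph theory) and also handles decoding and integrity.

```python
# Illustrative online encoding: each appended log item is XORed into K cells.
import hashlib

K = 5                 # cells touched per log item (the paper reports k = 5)
TABLE_SIZE = 64       # number of cells; the paper's table is only ~12% larger than the data
ITEM_LEN = 16         # fixed item length in bytes, for simplicity

def cells_for(item_index: int) -> list[int]:
    out, ctr = [], 0
    while len(out) < K:                       # derive K distinct cell indices
        h = hashlib.sha256(f"{item_index}:{ctr}".encode()).digest()
        c = int.from_bytes(h[:4], "big") % TABLE_SIZE
        if c not in out:
            out.append(c)
        ctr += 1
    return out

def xor_into(table, item_index: int, item: bytes):
    for c in cells_for(item_index):
        table[c] = bytes(a ^ b for a, b in zip(table[c], item))

table = [bytes(ITEM_LEN) for _ in range(TABLE_SIZE)]
log_items = [hashlib.sha256(str(i).encode()).digest()[:ITEM_LEN] for i in range(10)]
for i, item in enumerate(log_items):
    xor_into(table, i, item)                  # O(k) = O(1) work per appended log item
```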
Last updated:  2021-04-12
DL-LA: Deep Learning Leakage Assessment: A modern roadmap for SCA evaluations
Thorben Moos, Felix Wegener, Amir Moradi
In recent years, deep learning has become an attractive ingredient to side-channel analysis (SCA) due to its potential to improve the success probability or enhance the performance of certain frequently executed tasks. One task that is commonly assisted by machine learning techniques is the profiling of a device's leakage behavior in order to carry out a template attack. At CHES 2019, deep learning has also been applied to non-profiled scenarios for the first time, extending its reach within SCA beyond template attacks. The proposed method, called DDLA, has some tempting advantages over traditional SCA due to merits inherited from (convolutional) neural networks. Most notably, it greatly reduces the need for pre-processing steps when the SCA traces are misaligned or when the leakage is of a multivariate nature. However, similar to traditional attack scenarios the success of this approach highly depends on the correct choice of a leakage model and the intermediate value to target. In this work we explore, for the first time in the literature, whether deep learning can similarly be used as an instrument to advance another crucial (non-profiled) discipline of SCA which is inherently independent of leakage models and targeted intermediates, namely leakage assessment. In fact, given the simple classification-based nature of common leakage assessment techniques, in particular distinguishing between two groups, fixed-vs-random or fixed-vs-fixed, it comes as a surprise that machine learning has not yet been brought into this context. Our contribution is the development of the first full leakage assessment methodology based on deep learning. It gives the evaluator the freedom to not worry about location, alignment and statistical order of the leakages and easily covers multivariate and horizontal patterns as well. We test our approach against a number of case studies based on FPGA, ASIC and µC implementations of the PRESENT block cipher, equipped with state-of-the-art SCA countermeasures. Our results clearly show that the proposed methodology and network structures are robust across all case studies and outperform the classical detection approaches ($t$-test and $\chi^2$-test) in all considered scenarios.
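To illustrate the core classification-based idea on toy data (this is not the paper's DL-LA methodology, network, or confidence metric; the traces, leak location and classifier below are invented for illustration): if a classifier can distinguish traces recorded with a fixed input from traces recorded with random inputs at better than chance accuracy, the measurements are data-dependent, i.e., they leak.

```python
# Toy fixed-vs-random leakage assessment via classification on synthetic traces.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, length = 2000, 100
labels = rng.integers(0, 2, n)                   # 0 = fixed input, 1 = random input
traces = rng.normal(0, 1, (n, length))
traces[:, 37] += 0.5 * labels                    # a small data-dependent leak at sample 37

X_train, X_test, y_train, y_test = train_test_split(
    traces, labels, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(f"validation accuracy: {clf.score(X_test, y_test):.2f}")
# accuracy well above 0.5 => the two groups are distinguishable => leakage detected
```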
Last updated:  2020-11-17
Afgjort: A Partially Synchronous Finality Layer for Blockchains
Thomas Dinsdale-Young, Bernardo Magri, Christian Matt, Jesper Buus Nielsen, Daniel Tschudi
Most existing blockchains either rely on a Nakamoto-style consensus, where the chain can fork and produce rollbacks, or on a committee-based Byzantine fault tolerant (CBFT) consensus, where no rollbacks are possible. While the latter ones offer better consistency, the former tolerate more corruptions. To achieve the best of both worlds, we initiate the formal study of finality layers. Such a finality layer can be combined with a Nakamoto-style blockchain (NSB) and periodically declare blocks as final, preventing rollbacks beyond final blocks. As conceptual contributions, we formalize the concept of a finality layer and identify the following properties to be crucial for finality layers: finalized blocks form a chain (chain-forming), all parties agree on the finalized blocks (agreement), the last finalized block does not fall too far behind the last block in the underlying blockchain (updated), and all finalized blocks at some point have been on the chain adopted by honest parties holding at least $k$ units of the resource on which consensus is based, e.g., stake or computing power ($k$-support). As technical contributions, we propose two variants of a finality layer protocol we call Afgjort. The first variant satisfies all of the aforementioned requirements when combined with an arbitrary blockchain that satisfies the usual common-prefix, chain-growth, and chain-quality properties. The second one needs an additional, mild assumption on the underlying blockchain, but is more efficient and has higher support. For both variants, we prove these properties in the setting with less than $1/3$ corruption among the finalizers and a partially synchronous network. We further show that tolerating less than $1/3$ corruption is optimal for partially synchronous finality layers. Finally, we provide data from experiments run with an implementation of our protocol; the data confirms that finality is reached much faster than without our finality layer.
Last updated:  2020-07-09
Two-Party ECDSA from Hash Proof Systems and Efficient Instantiations
Guilhem Castagnos, Dario Catalano, Fabien Laguillaumie, Federico Savasta, Ida Tucker
ECDSA is a widely adopted digital signature standard. Unfortunately, efficient distributed variants of this primitive are notoriously hard to achieve and known solutions often require expensive zero knowledge proofs to deal with malicious adversaries. For the two party case, Lindell [Lin17] recently managed to get an efficient solution which, to achieve simulation-based security, relies on an interactive, non-standard assumption on Paillier's cryptosystem. In this paper we generalize Lindell's solution using hash proof systems. The main advantage of our generic method is that it results in a simulation-based security proof without resorting to non-standard interactive assumptions. Moving to concrete constructions, we show how to instantiate our framework using class groups of imaginary quadratic fields. Our implementations show that the practical impact of dropping such interactive assumptions is minimal. Indeed, while for 128-bit security our scheme is marginally slower than Lindell's, for 256-bit security it turns out to be better both in key generation and signing time. Moreover, in terms of communication cost, our implementation significantly reduces both the number of rounds and the transmitted bits without exception.
Last updated:  2019-05-20
A refined analysis of the cost for solving LWE via uSVP
Shi Bai, Shaun Miller, Weiqiang Wen
The learning with errors (LWE) problem (STOC'05) introduced by Regev is one of the fundamental problems in lattice-based cryptography. One standard strategy to solve the LWE problem is to reduce it to a unique SVP (uSVP) problem via Kannan's embedding and then apply a lattice reduction to solve the uSVP problem. There are two methods for estimating the cost for solving LWE via this strategy: the first method considers the size of the gap in the uSVP problem (Gama-Nguyen, Eurocrypt'08) and the second method (Alkim et al., USENIX'16) considers the shortness of the projection of the shortest vector to the Gram-Schmidt vectors. These two estimates have been investigated by Albrecht et al. (Asiacrypt'16) who present a sound analysis and show that the lattice reduction experiments fit more consistently with the second estimate. They also observe that in some cases the lattice reduction even behaves better than the second estimate, perhaps due to the second intersection of the projected vector with the Gram-Schmidt vectors. In this work, we revisit the work of Alkim et al. and Albrecht et al. We first report further experiments providing more comparisons and suggest that the second estimate leads to a more accurate prediction in practice. We also present empirical evidence confirming the assumptions used in the second estimate. Furthermore, we examine the gaps in uSVP derived from the embedded lattice and explain why it is preferable to use an embedding height equal to 1 for the embedded lattice. This shows that there is a coherent relation between the second estimate and the gaps in uSVP. Finally, it has been conjectured by Albrecht et al. that the second intersection will not happen for large parameters. We will show that this is indeed the case: there is no second intersection as the block size goes to infinity.
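To fix notation, the standard embedding construction reads as follows (recalled here for orientation rather than taken from the paper): given LWE samples $(\mathbf{A}, \mathbf{b} = \mathbf{A}\mathbf{s} + \mathbf{e} \bmod q)$, Kannan's embedding appends the target $\mathbf{b}$ and an embedding height $t$ to a basis $\mathbf{B}$ of the $q$-ary lattice, so that the error vector becomes an unusually short vector of the embedded lattice.

```latex
\[
  \Lambda_q(\mathbf{A}) = \{\, \mathbf{x} \in \mathbb{Z}^m : \mathbf{x} \equiv \mathbf{A}\mathbf{z} \pmod{q} \text{ for some } \mathbf{z} \in \mathbb{Z}^n \,\},
  \qquad
  \mathbf{B}' =
  \begin{pmatrix}
    \mathbf{B} & \mathbf{b} \\
    \mathbf{0} & t
  \end{pmatrix},
\]
\[
  (\mathbf{e} \,\|\, t) \in \mathcal{L}(\mathbf{B}') \text{ is unusually short; the abstract argues for taking the embedding height } t = 1.
\]
```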
Last updated:  2021-12-09
Optimal Merging in Quantum k-xor and k-sum Algorithms
María Naya-Plasencia, André Schrottenloher
The k-xor or Generalized Birthday Problem aims at finding, given k lists of bit-strings, a k-tuple among them XORing to 0. If the lists are unbounded, the best known classical (exponential) time complexity has not been improved since Wagner's CRYPTO 2002 paper. If the lists are bounded (of the same size) and such that there is a single solution, the dissection algorithms of Dinur et al. (CRYPTO 2012) improve the memory usage over a simple meet-in-the-middle. In this paper, we study quantum algorithms for the k-xor problem. With unbounded lists and quantum access, we improve previous work by Grassi et al. (ASIACRYPT 2018) for almost all k. Next, we extend our study to lists of any size and with classical access only. We define a set of "merging trees" which represent the best known strategies for quantum and classical merging in k-xor algorithms, and prove that our method is optimal among these. Our complexities are confirmed by a Mixed Integer Linear Program that computes the best strategy for a given k-xor problem. All our algorithms apply also when considering modular additions instead of bitwise xors. This framework enables us to give new improved quantum k-xor algorithms for all k and list sizes. Applications include the subset-sum problem, LPN with limited memory and the multiple-encryption problem.
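As a point of reference, the classical bound usually attributed to Wagner reads as follows (recalled from folklore rather than taken from this paper): for $k = 2^t$ unbounded lists of uniformly random $n$-bit strings, the generalized birthday algorithm finds a XOR-to-zero $k$-tuple with time and memory roughly

```latex
\[
  T \;=\; M \;=\; O\!\left(k \cdot 2^{\,n/(t+1)}\right), \qquad k = 2^t,
\]
```

using lists of size about $2^{n/(t+1)}$.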
Last updated:  2019-05-20
An HPR variant of the FV scheme: Computationally Cheaper, Asymptotically Faster
Jean-Claude Bajard, Julien Eynard, Paulo Martins, Leonel Sousa, Vincent Zucca
State-of-the-art implementations of homomorphic encryption exploit the Fan and Vercauteren (FV) scheme and the Residue Number System (RNS). While the RNS breaks down large integer arithmetic into smaller independent channels, its non-positional nature makes operations such as division and rounding hard to implement, and makes the representation of small values inefficient. In this work, we propose the application of the Hybrid Position-Residues Number System representation to the FV scheme. This is a positional representation of large radix where the digits are represented in RNS. It inherits the benefits of RNS and makes it possible to accelerate the critical division and rounding operations while also making the representation of smaller values more compact. This directly benefits the decryption and the homomorphic multiplication procedures, reducing their asymptotic complexity, in dimension $n$, from $\mathcal{O} (n^2 \log n)$ to $\mathcal{O} (n \log n)$ and from $\mathcal{O}(n^3 \log n)$ to $\mathcal{O} (n^{3})$, respectively. This has also resulted in noticeable speedups when experimentally compared to related RNS implementations in the literature.
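To make the representation concrete, here is a minimal Python sketch of the general hybrid position-residues idea (illustrative moduli and radix, not the paper's parameterization or algorithms): a large integer is split into radix-$B$ digits, and each digit is stored channel-wise modulo a few pairwise coprime moduli.

```python
# Hybrid Position-Residues (HPR) representation: positional digits, each in RNS form.
from math import prod

MODULI = (2**16, 2**16 - 1, 2**16 + 1)   # pairwise coprime channels (illustrative choice)
M = prod(MODULI)                          # dynamic range of one digit's RNS representation
RADIX = 2**32                             # positional radix B; must satisfy RADIX <= M

def to_hpr(x: int) -> list[tuple[int, ...]]:
    digits = []
    while x:
        d, x = x % RADIX, x // RADIX
        digits.append(tuple(d % m for m in MODULI))   # digit stored channel-wise
    return digits or [tuple(0 for _ in MODULI)]

def digit_from_rns(residues: tuple[int, ...]) -> int:
    # Chinese Remainder Theorem reconstruction of a single digit.
    total = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        total += r * Mi * pow(Mi, -1, m)
    return total % M

def from_hpr(digits) -> int:
    return sum(digit_from_rns(d) * RADIX**i for i, d in enumerate(digits))

x = 123456789012345678901234567890
assert from_hpr(to_hpr(x)) == x
print(to_hpr(x)[:2])   # first two HPR digits, each as an RNS tuple
```

Per-digit arithmetic thus reduces to small independent channel operations while the positional layer keeps a notion of magnitude, which is the property the abstract credits for making division and rounding tractable.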