All papers in 2018 (Page 6 of 1249 results)
Prime and Prejudice: Primality Testing Under Adversarial Conditions
This work provides a systematic analysis of primality testing under adversarial conditions, where the numbers being tested for primality are not generated randomly, but instead provided by a possibly malicious party. Such a situation can arise in secure messaging protocols where a server supplies Diffie-Hellman parameters to the peers, or in a secure communications protocol like TLS where a developer can insert such a number to be able to later passively spy on client-server data. We study a broad range of cryptographic libraries and assess their performance in this adversarial setting. As examples of our findings, we are able to construct 2048-bit composites that are declared prime with probability \(1/16\) by OpenSSL's primality testing in its default configuration; the advertised failure rate is \(2^{-80}\). We can also construct 1024-bit composites that always pass the primality testing routine in GNU GMP when configured with the recommended minimum number of rounds. And, for a number of libraries (Cryptlib, LibTomCrypt, JavaScript Big Number, WolfSSL), we can construct composites that always pass the supplied primality tests. We explore the implications of these security failures in applications, focusing on the construction of malicious Diffie-Hellman parameters. We show that, unless careful primality testing is performed, an adversary can supply parameters $(p,q,g)$ which on the surface look secure, but where the discrete logarithm problem in the subgroup of order $q$ generated by $g$ is easy. We close by making recommendations for users and developers. In particular, we promote the Baillie-PSW primality test which is both efficient and conjectured to be robust even in the adversarial setting for numbers up to a few thousand bits.
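The failures described above hit Miller-Rabin variants with fixed or small bases; the sketch below shows the test with uniformly random bases, the configuration that resists adversarially chosen inputs. This is a minimal illustration, not any library's implementation, and it omits the Lucas test that Baillie-PSW adds on top of a single base-2 Miller-Rabin round.

```python
import random

def miller_rabin(n, rounds=40):
    """Miller-Rabin primality test with uniformly random bases.

    Fixed- or small-base variants are the ones the adversarially
    constructed composites fool; random bases bound the error at
    4^(-rounds) for *any* input, even a maliciously chosen one.
    """
    if n < 2:
        return False
    for sp in (2, 3, 5, 7, 11, 13):
        if n % sp == 0:
            return n == sp
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)   # fresh random base each round
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness: n is definitely composite
    return True  # probably prime
```

With 40 random rounds the error bound is at most $4^{-40}$, independent of how the input was generated, which is the property the fixed-base library configurations studied in the paper lack.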
Definitions for Plaintext-Existence Hiding in Cloud Storage
Cloud storage services use deduplication for saving bandwidth and storage. An adversary can exploit side-channel information in several attack scenarios when deduplication takes place at the client side, leaking information on whether a specific plaintext exists in the cloud storage. Generalising existing security definitions, we introduce formal security games for a number of possible adversaries in this domain, and show that games representing all natural adversarial behaviors are in fact equivalent. These results allow users and practitioners alike to accurately assess the vulnerability of deployed systems to this real-world concern.
Pseudo Constant Time Implementations of TLS Are Only Pseudo Secure
Today, about 10% of TLS connections are still using CBC-mode cipher suites, despite a long history of attacks and the availability of better options (e.g. AES-GCM). In this work, we present three new types of attack against four popular fully patched implementations of TLS (Amazon's s2n, GnuTLS, mbed TLS and wolfSSL) which elected to use ``pseudo constant time'' countermeasures against the Lucky 13 attack on CBC-mode. Our attacks combine several variants of the PRIME+PROBE cache timing technique with a new extension of the original Lucky 13 attack. They apply in a cross-VM attack setting and are capable of recovering most of the plaintext whilst requiring only a moderate number of TLS connections. Along the way, we uncovered additional serious (but easy to patch) bugs in all four of the TLS implementations that we studied; in three cases, these bugs lead to Lucky 13 style attacks that can be mounted remotely with no access to a shared cache. Our work shows that adopting pseudo constant time countermeasures is not sufficient to attain real security in TLS implementations in CBC mode.
Secret Sharing with Binary Shares
Shamir's celebrated secret sharing scheme provides an efficient method for encoding a secret of arbitrary length $\ell$ among any $N \leq 2^\ell$ players such that for a threshold parameter $t$, (i) the knowledge of any $t$ shares does not reveal any information about the secret and (ii) any choice of $t+1$ shares fully reveals the secret. It is known that any such threshold secret sharing scheme necessarily requires shares of length $\ell$, and in this sense Shamir's scheme is optimal. The more general notion of ramp schemes requires the reconstruction of the secret from any $t+g$ shares, for a positive integer gap parameter $g$. A ramp secret sharing scheme necessarily requires shares of length $\ell/g$. Beyond the bound related to the secret length $\ell$, the share length of ramp schemes cannot go below a quantity that depends only on the gap ratio $g/N$. In this work, we study secret sharing in the extremal case of bit-long shares and arbitrarily small gap ratio $g/N$, where standard ramp secret sharing becomes impossible. We show, however, that a slightly relaxed but equally effective notion of semantic security for the secret, together with a negligible reconstruction error probability, eliminates the impossibility. Moreover, we provide explicit constructions of such schemes. One consequence of our relaxation is that, unlike standard ramp schemes with perfect secrecy, adaptive and non-adaptive adversaries require different analyses and constructions. For non-adaptive adversaries, we explicitly construct secret sharing schemes that provide secrecy against any $\tau$ fraction of observed shares, and reconstruction from any $\rho$ fraction of shares, for any choices of $0 \leq \tau < \rho \leq 1$. Our construction achieves secret length $N(\rho-\tau-o(1))$, which we show to be optimal. For adaptive adversaries, we construct explicit schemes attaining secret length $\Omega(N(\rho-\tau))$.
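For reference, the classical scheme against which the abstract compares is easy to state concretely. Below is a minimal sketch of Shamir's threshold scheme over a prime field (the field modulus is an illustrative choice, not one from the paper): a random degree-$t$ polynomial with the secret as constant term, so any $t+1$ shares interpolate the secret and any $t$ reveal nothing.

```python
import random

P = 2**127 - 1  # a Mersenne prime used as the field modulus (illustrative choice)

def share(secret, t, n):
    """Split `secret` into n shares of a random degree-t polynomial.

    Any t+1 shares reconstruct the secret; any t shares reveal nothing.
    """
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # den^(P-2) mod P is the modular inverse of den (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Each share here is a full field element of length $\ell$, which is exactly the share-length lower bound the abstract cites; the paper's contribution is the regime of single-bit shares, where this construction cannot operate.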
Achilles' Heel: the Unbalanced Mask Sets May Destroy a Masking Countermeasure
Low Entropy Masking Schemes (LEMS) have attracted wide attention in Side-Channel Analysis (SCA) for their low cost, owing to their small fixed mask sets. To achieve the expected side-channel security, it is necessary to find a balanced mask set that reduces the correlations between key-dependent variables and their corresponding leakages. However, the security proof of LEMS rests on an inadequate assumption, which has led to mask sets being proposed without the balance property, yielding vulnerable LEMS implementations. This paper, focusing on correcting and improving this scheme, first gives formal definitions of the univariate balance property of mask sets and extends them to multivariate settings. From these definitions, we propose three fundamental properties for analyzing the balance of mask sets in Rotating Sbox Masking (RSM), the most popular LEMS implementation. To demonstrate the definitions and properties, three state-of-the-art RSM mask sets were selected as research objects. The corresponding attacks, mounted when any property is violated, clearly indicate the necessity of evaluating the balance property of a mask set in advance (during the design phase). However, we find it impossible to obtain an RSM mask set satisfying all three properties, which means the vulnerabilities caused by RSM's unbalanced mask set are unavoidable. Thus, this promising masking scheme may be broken by its unqualified mask set.
BAdASS: Preserving Privacy in Behavioural Advertising with Applied Secret Sharing
Online advertising is a multi-billion dollar industry, forming the primary source of income for many publishers offering free web content. Serving advertisements tailored to users' interests greatly improves the effectiveness of advertisements, which benefits both publishers and users. The privacy of users, however, is threatened by the widespread collection of data that is required for behavioural advertising. In this paper, we present BAdASS, a novel privacy-preserving protocol for Online Behavioural Advertising that achieves significant performance improvements over the state of the art without disclosing any information about user interests to any party. BAdASS ensures user privacy by processing data within the secret-shared domain, using the heavily fragmented shape of the online advertising landscape to its advantage and combining efficient secret-sharing techniques with a machine learning method commonly encountered in existing advertising systems. Our protocol serves advertisements within a fraction of a second, based on highly detailed user profiles and widely used machine learning methods.
On the Leakage of Corrupted Garbled Circuits
Secure two-party computation provides a way for two parties to compute a function, that depends on the two parties' inputs, while keeping them private. Known since the 1980s, Yao's garbled circuits appear to be a general solution to this problem, in the semi-honest model. Decades of optimizations have made this tool a very practical solution. However, it is well known that a malicious adversary could modify a garbled circuit before submitting it. Many protocols, mostly based on cut-&-choose, have been proposed to secure Yao's garbled circuits in the presence of malicious adversaries. Nevertheless, how much an adversary can modify a circuit and make it still executable has not been studied yet. The main contribution of this paper is to prove that any modification made by an adversary is equivalent to adding/removing NOT gates arbitrarily in the original circuit, otherwise the adversary can get caught. Thereafter, we study some evaluation functions for which, even without using cut-&-choose, no adversary can gain more information about the inputs by modifying the circuit. We also give an improvement over most recent cut-&-choose solutions by requiring that different circuits of the same function are used instead of just one.
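The object being tampered with above is a garbled circuit in the style of Yao. The sketch below garbles a single AND gate in the textbook hash-based way (this is an illustrative construction, not the paper's scheme): one random label per (wire, bit), with each table row encrypting the correct output label under the two matching input labels.

```python
import hashlib, secrets, random

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and_gate():
    """Garble one AND gate: a random 16-byte label per (wire, bit) pair."""
    labels = {w: (secrets.token_bytes(16), secrets.token_bytes(16))
              for w in "abc"}                     # a, b inputs; c output
    table = []
    for x in (0, 1):
        for y in (0, 1):
            pad = hashlib.sha256(labels["a"][x] + labels["b"][y]).digest()
            # Encrypt the output label plus 16 zero bytes of redundancy,
            # so the evaluator can recognize the correctly decrypted row.
            table.append(xor(pad, labels["c"][x & y] + b"\x00" * 16))
    random.shuffle(table)   # hide which row corresponds to which inputs
    return labels, table

def evaluate(table, la, lb):
    """Given one label per input wire, recover exactly one output label."""
    pad = hashlib.sha256(la + lb).digest()
    for ct in table:
        pt = xor(pad, ct)
        if pt[16:] == b"\x00" * 16:   # redundancy check marks the right row
            return pt[:16]
```

A malicious garbler who alters table entries changes which output labels the evaluator can recover; the paper's result characterizes exactly how far such modifications can go (equivalent to inserting or removing NOT gates) before the evaluator catches the cheating.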
Witness-Indistinguishable Arguments with $\Sigma$-Protocols for Bundled Witness Spaces and its Application to Global Identities
We propose a generic construction of a $\Sigma$-protocol of commit-and-prove type, which is an AND-composition of $\Sigma$-protocols on statements that include a common commitment. Our protocol enables a prover to convince a verifier that the prover knows a bundle of witnesses that have a common component which we call a base witness point. When the component $\Sigma$-protocols are of witness-indistinguishable argument systems, our $\Sigma$-protocol is also a witness-indistinguishable argument system as a whole. As an application, we propose a decentralized multi-authority anonymous authentication scheme. We first give a syntax and security definitions of the scheme. Then we give a generic construction of the scheme. There a witness is a bundle of witnesses each of which decomposes into a common global identity string and a digital signature on it. We mention an instantiation in the setting of bilinear groups.
LP Solutions of Vectorial Integer Subset Sums - Cryptanalysis of Galbraith's Binary Matrix LWE
We consider Galbraith's space efficient LWE variant, where the $(m \times n)$-matrix $A$ is binary. In this binary case, solving a vectorial subset sum problem over the integers allows for decryption. We show how to solve this problem using (Integer) Linear Programming. Our attack requires only a fraction of a second for all instances in a regime for $m$ that cannot be attacked by current lattice algorithms. E.g.\ we are able to solve 100 instances of Galbraith's small LWE challenge $(n,m) = (256, 400)$ all in a fraction of a second. We also show under a mild assumption that instances with $m \leq 2n$ can be broken in polynomial time via LP relaxation. Moreover, we develop a method that identifies weak instances for Galbraith's large LWE challenge $(n,m)=(256, 640)$.
FairSwap: How to fairly exchange digital goods
We introduce FairSwap -- an efficient protocol for fair exchange of digital goods using smart contracts.
A fair exchange protocol allows a sender S to sell a digital commodity x for a fixed price p to a receiver R.
The protocol is said to be secure if R only pays if he receives the correct x.
Our solution guarantees fairness by relying on smart contracts executed over decentralized cryptocurrencies, where the contract takes the role of an external judge that completes the exchange in case of disagreement.
While there have been several past proposals for building fair exchange protocols over cryptocurrencies, our solution has two distinctive features that make it particularly attractive when users deal with large commodities.
These advantages are: (1) minimizing the cost for running the smart contract on the blockchain, and (2) avoiding expensive cryptographic tools such as zero-knowledge proofs.
In addition to our new protocols, we provide formal security definitions for smart contract based fair exchange, and prove security of our construction.
Finally, we illustrate several applications of our basic protocol and evaluate practicality of our approach via a prototype implementation for fairly selling large files over the cryptocurrency Ethereum.
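The judge's role can be illustrated with a deliberately simplified hash-locked exchange. The `JudgeContract` class below is a hypothetical stand-in, not FairSwap's contract: the real protocol transfers the commodity encrypted off-chain and settles disputes with a compact proof of misbehavior rather than revealing `x` on-chain, which is exactly how it keeps contract costs low.

```python
import hashlib

class JudgeContract:
    """Toy stand-in for the on-chain judge (hypothetical API, not FairSwap's).

    It releases the payment only when the revealed commodity matches the
    commitment both parties agreed on up front.
    """
    def __init__(self, commitment: bytes, price: int):
        self.commitment = commitment
        self.price = price
        self.paid_out = 0

    def reveal(self, x: bytes) -> bool:
        if hashlib.sha256(x).digest() == self.commitment:
            self.paid_out = self.price   # pay the seller S
            return True
        return False                     # receiver R keeps the funds

# Setup: S and R agree on a commitment to the commodity and a price.
commodity = b"large digital file"
contract = JudgeContract(hashlib.sha256(commodity).digest(), price=42)
```

Even this toy version shows the security condition from the abstract: R pays if and only if the revealed `x` is the committed one.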
Using MILP in Analysis of Feistel Structures and Improving Type II GFS by Switching Mechanism
Certain features of Feistel structures have led to their being considered an efficient choice for the design of block ciphers. Although several structures based on the Feistel structure have been proposed, the type-II generalized Feistel structures (GFS) based on SP-functions are the most prominent. Because of the difference cancellation that occurs in Feistel structures, their resistance against differential and linear attacks is weaker than expected. To date, two methods have been proposed to improve the immunity of Feistel structures against differential and linear attacks: one uses multiple MDS matrices, and the other changes the permutations of sub-blocks.
In this paper, using MILP and the summation representation method, we propose a technique to count the active S-boxes. In some cases, our results improve on those proposed by Shibutani at SAC 2010. We also apply multiple MDS matrices to GFS and, relying on a newly proposed approach, extract the new inequalities related to using multiple MDS matrices and evaluate the results of using them in type-II GFS. Finally, we present results related to linear cryptanalysis. Our results show that using multiple MDS matrices leads to 22% and 19% improvements in the differential cryptanalysis of the standard and improved 8-sub-block structures, respectively, after 18 rounds.
Towards Static Assumption Based Cryptosystem in Pairing Setting: Further Applications of DéjàQ and Dual-Form Signature
A large number of parameterized complexity assumptions have been introduced in the bilinear pairing setting to design novel cryptosystems, and an important question is whether such ``$q$-type'' assumptions can be replaced by some static one. Recently, Ghadafi and Groth captured several such parameterized assumptions in the pairing setting in a family called the bilinear target assumption (BTA). We apply the DéjàQ techniques to all $q$-type assumptions in the BTA family. In this process, we first formalize the notion of the extended adaptive parameter-hiding property and use it in Chase and Meiklejohn's DéjàQ framework to reduce those $q$-type assumptions from the subgroup hiding assumption in the asymmetric composite-order pairing setting. In addition, we extend the BTA family further into BTA1 and BTA2 and study the relations between the different BTA variants. We also discuss the inapplicability of DéjàQ techniques to the $q$-type assumptions that belong to the BTA1 or BTA2 family. We then provide one further application of Gerbush et al.'s dual-form signature techniques to remove the dependence on a $q$-type assumption for which existing DéjàQ techniques are not applicable. This yields a variant of Abe et al.'s structure-preserving signature with security based on a static assumption in the composite-order setting.
Steady: A Simple End-to-End Secure Logging System
We present Steady: an end-to-end secure logging system engineered to be simple in terms of design, implementation, and assumptions for real-world use. Steady gets its name from being based on a steady (heart)beat of events from a forward-secure device sent over an untrusted network through untrusted relays to a trusted collector. Properties include optional encryption and compression (with loss of confidentiality but significant gain in goodput), detection of tampering, relays that can function in unidirectional networks (e.g., as part of a data diode), cost-effective use of cloud services for relays, and publicly verifiable proofs of event authenticity. The design is formalized and security proven in the standard model. Our prototype implementation (about 2,200 loc) shows reliable goodput of over 1M events/s (about 160 MiB/s) for a realistic dataset with commodity hardware for a device on a GigE network using 16 MiB of memory connected to a relay running at Amazon EC2.
Improved Signature Schemes for Secure Multi-Party Computation with Certified Inputs
The motivation for this work comes from the need to strengthen security of secure multi-party protocols with the ability to guarantee that the participants provide their truthful inputs in the computation. This is outside the traditional security models even in the presence of malicious participants, but input manipulation can often lead to privacy and result correctness violations. Thus, in this work we treat the problem of combining secure multi-party computation (SMC) techniques based on secret sharing with signatures to enforce input correctness in the form of certification. We modify two currently available signature schemes to achieve private verification and efficiency of batch verification and show how to integrate them with two prominent SMC protocols.
Last updated: 2018-11-07
AntNest: Fully Non-interactive Secure Multi-party Computation
In this paper, we focus on non-interactive secure multi-party computation (MPC). First, we propose a fully homomorphic non-interactive verifiable secret sharing (FHNVSS) scheme. In this scheme, shareholders can generate shares of any-degree polynomials of shared numbers without interaction, and the dealer can verify the correctness of the shares sent by shareholders without interaction. We implemented the FHNVSS scheme in Python with a detailed performance evaluation. According to our tests, the performance of FHNVSS is satisfactory: for instance, when the request is a 10-degree polynomial of the secret value, generating a response takes about 0.0017263 s, verifying a response about 0.1221394 s, and recovering a result about 0.0003862 s. In addition, we extend the FHNVSS scheme to obtain a fully non-interactive secure multi-party computation scheme, called AntNest. In AntNest, mutually distrustful parties can jointly compute any negotiated function of any degree, whose inputs are the inputs of all parties, without interaction, and each party can verify the correctness of the responses sent by the other parties without interaction. To the best of our knowledge, this is the first work to realize that parties can jointly compute a function of any degree, whose inputs are the inputs of all parties, without interaction.
Random Number Generators Can Be Fooled to Behave Badly
In this paper, we extend the work on purely mathematical Trojan horses initially presented by Young and Yung. This kind of mechanism affects the statistical properties of an infected random number generator (RNG) by making it very sensitive to input entropy. Thereby, when inputs have the correct distribution the Trojan has no effect, but when the distribution becomes biased the Trojan worsens it. Besides its obvious malicious usage, this mechanism can also be applied to devise lightweight health tests for RNGs. Currently, RNG designs are required to implement an early detection mechanism for entropy failure, and this class of Trojan horses is perfect for this job.
Threshold Partially-Oblivious PRFs with Applications to Key Management
An Oblivious PRF (OPRF) is a protocol between a server holding a key to a PRF and a user holding an input. At the end of the interaction, the user learns the output of the OPRF on its input and nothing else. The server learns nothing, including nothing about the user's input or the function's output. OPRFs have found many applications in multiple areas of cryptography. Everspaugh et al. (Usenix 2015) introduced Partially Oblivious PRF (pOPRF) in which the OPRF accepts an additional non-secret input that can be chosen by the server itself, and showed applications in the setting of password hardening protocols. We further investigate pOPRFs showing new constructions, including distributed multi-server schemes, and new applications. We build simple pOPRFs from regular OPRFs, in particular obtaining very efficient DH-based pOPRFs, and provide (n,t)-threshold implementation of such schemes.
We apply these schemes to build Oblivious Key Management Systems (KMS) as a much more secure alternative to traditional wrapping-based KMS. The new system hides keys and object identifiers from the KMS, offers unconditional security for key transport, enables forward security, provides key verifiability, reduces storage, and more. Further, we show how to provide all these features in a distributed threshold implementation that additionally protects the service against server compromise. Finally, we extend the scheme to a threshold Oblivious KMS with updatable encryption so that upon the periodic change of OPRF keys by the server, an efficient update procedure allows a client of the KMS service to non-interactively update all its encrypted data to be decryptable only by the new key. Our techniques improve on the efficiency and security of several recent works on updatable encryption from Crypto and Eurocrypt.
We report on an implementation of the above schemes and their performance, showing their practicality and readiness for use in real-world systems. In particular, our pOPRF constructions achieve speedups of over an order of magnitude relative to previous pOPRF schemes.
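The DH-based blind-evaluation pattern underlying such constructions is compact enough to sketch. Below is an illustrative OPRF over a toy safe-prime group (the group, parameters, and function names are assumptions for exposition, far too small for real use; a pOPRF would additionally mix a public server-chosen input into the key): the user blinds $H(x)^r$, the server exponentiates by its key $k$ without learning $x$, and the user unblinds to obtain $H(x)^k$.

```python
import hashlib, random

# Toy safe-prime group, for illustration only: p = 2q + 1 with q prime.
# Real deployments use large groups or elliptic curves.
p = 2879
q = (p - 1) // 2   # prime order of the subgroup of squares mod p

def hash_to_group(x: bytes) -> int:
    """Hash an input into the order-q subgroup (squaring maps into it)."""
    h = int.from_bytes(hashlib.sha256(x).digest(), "big") % p
    return pow(h, 2, p) or 4   # fall back to the square 4 if h = 0 mod p

def blind(x: bytes):
    """User: hide x from the server behind a random exponent r."""
    r = random.randrange(1, q)
    return r, pow(hash_to_group(x), r, p)

def evaluate(k: int, blinded: int) -> int:
    """Server: apply the PRF key k; sees neither x nor the output."""
    return pow(blinded, k, p)

def unblind(r: int, evaluated: int) -> int:
    """User: strip r to obtain H(x)^k, the OPRF output."""
    r_inv = pow(r, -1, q)          # inverse of r modulo the group order
    return pow(evaluated, r_inv, p)
```

Because $H(x)$ has order dividing $q$, unblinding with $r^{-1} \bmod q$ cancels the blinding exactly, and a threshold variant can split $k$ additively among servers whose partial results are multiplied together.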
Data Oblivious Genome Variants Search on Intel SGX
We show how to build a practical, private data oblivious genome variants search using Intel SGX. More precisely, we consider the problem posed in Track 2 of the iDash Privacy and Security Workshop 2017 competition, which was to search for variants with high $\chi^{2}$ statistic among certain genetic data over two populations. The winning solution of this iDash competition (developed by Carpov and Tortech) is extremely efficient, but not memory oblivious, which potentially made it vulnerable to a whole host of memory- and cache-based side channel attacks on SGX. In this paper, we adapt a framework in which we can exactly quantify this leakage. We provide a memory oblivious implementation with reasonable information leakage at the cost of some efficiency. Our solution is roughly an order of magnitude slower than the non-memory oblivious implementation, but still practical and much more efficient than naive memory-oblivious solutions--it solves the iDash problem in approximately 5 minutes. In order to do this, we develop novel definitions and models for oblivious dictionary merging, which may be of independent theoretical interest.
An Optimal Distributed Discrete Log Protocol with Applications to Homomorphic Secret Sharing
The distributed discrete logarithm (DDL) problem was introduced by Boyle et al. at CRYPTO 2016. A protocol solving this problem was the main tool used in the share conversion procedure of their homomorphic secret sharing (HSS) scheme which allows non-interactive evaluation of branching programs among two parties over shares of secret inputs.
Let $g$ be a generator of a multiplicative group $\mathbb{G}$. Given a random group element $g^{x}$ and an unknown integer $b \in [-M,M]$ for a small $M$, two parties $A$ and $B$ (that cannot communicate) successfully solve DDL if $A(g^{x}) - B(g^{x+b}) = b$. Otherwise, the parties err. In the DDL protocol of Boyle et al., $A$ and $B$ run in time $T$ and have error probability that is roughly linear in $M/T$. Since it has a significant impact on the HSS scheme's performance, a major open problem raised by Boyle et al. was to reduce the error probability as a function of $T$.
In this paper we devise a new DDL protocol that substantially reduces the error probability to $O(M \cdot T^{-2})$. Our new protocol improves the asymptotic evaluation time complexity of the HSS scheme by Boyle et al. on branching programs of size $S$ from $O(S^2)$ to $O(S^{3/2})$. We further show that our protocol is optimal up to a constant factor for all relevant cryptographic group families, unless one can solve the discrete logarithm problem in a \emph{short} interval of length $R$ in time $o(\sqrt{R})$.
Our DDL protocol is based on a new type of random walk that is composed of several iterations in which the expected step length gradually increases. We believe that this random walk is of independent interest and will find additional applications.
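The basic distinguished-point approach that the paper improves upon can be sketched directly (this follows the spirit of the original Boyle et al. protocol with error roughly $M/T$, not the paper's iterated random walk; group and parameters are toy choices): each party walks forward from its own group element until it hits a "distinguished" point, and outputs its step count. If both walks reach the same distinguished point, the difference of the counts is exactly $b$.

```python
import hashlib

# Toy group Z_p^* with a Mersenne prime; real schemes use crypto-size groups.
p = 2**61 - 1
g = 3

def distinguished(h: int, d: int = 8) -> bool:
    """An element is 'distinguished' if its hash has d leading zero bits."""
    digest = hashlib.sha256(str(h).encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - d) == 0

def ddl_output(h: int, max_steps: int = 1 << 20) -> int:
    """Walk h, h*g, h*g^2, ... and return the number of steps taken
    to reach the first distinguished point."""
    steps = 0
    while not distinguished(h) and steps < max_steps:
        h = h * g % p
        steps += 1
    return steps
```

Party $A$ runs this on $g^x$ and party $B$ on $g^{x+b}$; since $B$'s walk is a suffix of $A$'s, the outputs satisfy $A - B = b$ unless $A$'s walk hits a distinguished point within its first $b$ steps, which is the error event whose probability the paper's new protocol drives down to $O(M \cdot T^{-2})$.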
Supersingular Isogeny Diffie-Hellman Authenticated Key Exchange
We propose two authenticated key exchange protocols from supersingular isogenies. Our protocols are the first one-round Diffie-Hellman-type post-quantum authenticated key exchange protocols with the following properties: one is secure in the quantum random oracle model, and the other resists maximum exposure, where a non-trivial combination of secret keys is revealed. The security of the former and the latter is proven under isogeny versions of the decisional and gap Diffie-Hellman assumptions, respectively. We also propose a new approach for invalidating the Galbraith-Vercauteren-type attack on the gap problem.
Last updated: 2020-04-14
Towards Lightweight Cryptographic Primitives with Built-in Fault-Detection
We introduce a novel approach for designing symmetric ciphers to resist fault injection. The approach is fairly generic and applies to round functions of block ciphers, cryptographic permutations and stream ciphers. We showcase our method with a new permutation called FRIT and perform fault analysis on a simulated hardware and actual software implementation. We present performance results for software and hardware implementations with and without the fault detection mechanism. On a Cortex-M4 platform the overhead of the countermeasure in cycles is 83%. The penalty on resources for hardware implementations depends on the hardware and can be as low as 56%.
A $k$-out-of-$n$ Ring Signature with Flexible Participation for Signers
A $k$-out-of-$n$ ring signature is a kind of anonymous signature that
can be performed by any member in a group.
This signature allows the creation of valid signatures if and only if
actual signers more than or equal to $k$ sign the message among $n$
possible signers.
In this paper, we present a new $k$-out-of-$n$ ring signature.
Our signature has a remarkable property: When the signature is
updated from $k$-out-of-$n$ to $(k+\alpha)$-out-of-$n$, the previous
signers do not need to sign a message again.
Our scheme can ``reuse'' the old signature, whereas the previous schemes
revoke it and create a signature from scratch.
We call this property ``{{flexibility}}'' and formalize it rigorously.
Our signature scheme has a multiple ring structure, each ring of which
is based on $1$-out-of-$n$ ring signature. The structure of our scheme
is completely different from that of conventional schemes, such as a
secret-sharing type. The signers' keys are mostly independent of each
user, thanks to a part of keys which use a special hash function.
We give the results of provable security for our scheme.
DiSE: Distributed Symmetric-key Encryption
Threshold cryptography provides a mechanism for protecting secret keys by sharing them among multiple parties, who then jointly perform cryptographic operations. An attacker who corrupts up to a threshold number of parties cannot recover the secrets or violate security. Prior works in this space have mostly focused on definitions and constructions for public-key cryptography and digital signatures, and thus do not capture the security concerns and efficiency challenges of symmetric-key-based applications, which commonly use long-term (unprotected) master keys to protect data at rest, authenticate clients on enterprise networks, and secure data and payments on IoT devices.
We put forth the first formal treatment for distributed symmetric-key encryption, proposing new notions of correctness, privacy and authenticity in presence of malicious attackers. We provide strong and intuitive game-based definitions that are easy to understand and yield efficient constructions.
We propose a generic construction of threshold authenticated encryption based on any distributed pseudorandom function (DPRF). When instantiated with the two different DPRF constructions proposed by Naor, Pinkas and Reingold (Eurocrypt 1999) and our enhanced versions, we obtain several efficient constructions meeting different security definitions. We implement these variants and provide extensive performance comparisons. Our most efficient instantiation uses only symmetric-key primitives and achieves a throughput of up to 1 million encryptions/decryptions per second, or alternatively a sub-millisecond latency with up to 18 participating parties.
Towards Key-Dependent Integral and Impossible Differential Distinguishers on 5-Round AES
Reduced-round AES has been a popular underlying primitive to design new cryptographic schemes and thus its security including distinguishing properties deserves more attention.
At Crypto'16, a key-dependent integral distinguisher on 5-round AES was put forward, which opened up a new direction to take more insights into the distinguishing properties of AES.
After that, two key-dependent impossible differential (ID) distinguishers on 5-round AES were proposed at FSE'16 and CT-RSA'18, respectively.
It is strange that the current key-dependent integral distinguisher requires significantly higher complexities than the key-dependent ID distinguishers, even though they are constructed with the same property of MixColumns ($2^{128} \gg 2^{98.2}$).
Proposers of the 5-round key-dependent distinguishers claimed that the corresponding integral and ID distinguishers can only work under chosen-ciphertext and chosen-plaintext settings, respectively, which is very different from the situations of traditional key-independent distinguishers.
In this paper, we first construct a novel key-dependent integral distinguisher on 5-round AES with $2^{96}$ chosen plaintexts, which is much better than the previous key-dependent integral distinguisher that requires the full codebook proposed at Crypto'16.
Secondly, we show that both distinguishers are valid under either the chosen-plaintext or the chosen-ciphertext setting, contrary to the claims of previous cryptanalysis. However, across the two settings the complexities of the key-dependent integral distinguishers differ greatly, while those of the key-dependent ID distinguishers are almost the same. We analyze the reasons for this.
Round5: KEM and PKE based on GLWR
Standardization bodies such as NIST and ETSI are currently seeking quantum resistant alternatives to vulnerable RSA and elliptic curve-based public-key algorithms. In this context, we present Round5, a lattice-based cryptosystem providing a key encapsulation mechanism and a public-key encryption scheme. Round5 is based on the General Learning with Rounding problem, unifying non-ring and ring lattice rounding problems into one. Usage of rounding combined with a tight analysis leads to significantly reduced bandwidth and randomness requirements. Round5's reliance on prime-order cyclotomic rings offers a large design space allowing fine-grained parameter optimization. The use of sparse-ternary secret keys improves performance and significantly reduces decryption failure rates at minimal additional cost. The use of error-correcting codes, in combination with ring multiplications in $\mathbb{Z}[x]/(x^{n+1}-1)$ that ensures non-correlated errors, further improves the latter. Round5 parameters have been carefully optimized for bandwidth, while the design facilitates efficient implementation.
As a result, Round5 has leading performance characteristics among all NIST post-quantum candidates, and at the same time attains conservative security levels that fully fit NIST's security categories. Round5's schemes share common building blocks, simplifying (security and operational) analysis and code review. Finally, Round5 proposes various approaches of refreshing the system public parameter A, which efficiently prevent precomputation and back-door attacks.
Disclaimer: This is a draft version; not all sections are included.
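The rounding idea behind (G)LWR-style schemes like Round5 can be illustrated with a toy sample generator. This is a hedged sketch, not Round5's actual algorithm: the moduli `q`, `p` and the dimension `n` are illustrative, and the function name is made up.

```python
import random

def lwr_sample(s, q=2048, p=256, n=8):
    """One Learning-With-Rounding sample: a random public vector a and the
    deterministically rounded inner product round((p/q) * <a, s>) mod p.
    Unlike LWE, no explicit error is added; rounding supplies the noise."""
    a = [random.randrange(q) for _ in range(n)]
    inner = sum(ai * si for ai, si in zip(a, s)) % q
    b = (inner * p + q // 2) // q % p  # round(p/q * inner), then reduce mod p
    return a, b

# Sparse-ternary secret, as favored by Round5 for performance
secret = [random.choice([-1, 0, 0, 1]) for _ in range(8)]
a, b = lwr_sample(secret)
assert 0 <= b < 256
```

Dropping the low-order bits of the inner product both hides the exact value and shrinks the sample, which is the source of the bandwidth savings the abstract describes.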
Rethinking Secure FPGAs: Towards a Cryptography-friendly Configurable Cell Architecture and its Automated Design Flow
This work proposes the first fine-grained configurable cell array specifically tailored for cryptographic implementations. The proposed architecture can be added to future FPGAs as an application-specific configurable building block, or to an ASIC as an embedded FPGA (eFPGA). The goal is to map cryptographic ciphers on combinatorial cells that are more efficient than general purpose lookup tables in terms of silicon area, configuration memory and combinatorial delay. As a first step in this research direction, we focus on block ciphers and we derive the most suitable cell structure for mapping state-of-the-art algorithms. We develop the related automated design flow, exploiting the synthesis capabilities of Synopsys Design Compiler and the routing capabilities of Xilinx ISE. Our solution is the first cryptography-oriented fine-grained architecture that can be configured using common hardware description languages. We evaluate the performance of our solution by mapping a number of well-known block ciphers onto our new cells. The obtained results show that our proposed architecture drastically outperforms commercial FPGAs in terms of silicon area and configuration memory resources, while obtaining a similar throughput.
Shorter Messages and Faster Post-Quantum Encryption with Round5 on Cortex M
Round5 is a Public Key Encryption and Key Encapsulation Mechanism (KEM)
based on General Learning with Rounding (GLWR), a lattice problem.
We argue that the ring variant of GLWR is better suited for embedded
targets than the more common RLWE (Ring Learning With Errors) due to
significantly shorter keys and messages. Round5 incorporates GLWR with
error correction, building on design features from NIST Post-Quantum
Standardization candidates Round2 and Hila5. The proposal avoids
Number Theoretic Transforms (NTT), allowing more flexibility in
parameter selection and making it simpler to implement. We discuss
implementation techniques for the Round5 ring variants and compare
them to other NIST PQC candidates on the lightweight Cortex-M4 platform. We
show that the current development version of Round5 offers not only
the shortest key and ciphertext sizes among Lattice-based candidates, but
also has leading performance and implementation size characteristics.
uMine: a Blockchain based on Human Miners
Blockchain technology, as popularized by Bitcoin, is a rapidly growing field of research that has found a wide array of applications. However, the power consumption of the mining process in the Bitcoin blockchain alone is estimated to be at least as high as the electricity consumption of Ireland, which constitutes a serious liability to the widespread adoption of blockchain technology.
We propose a novel instantiation of a proof of human-work, which is a cryptographic proof that an amount of human work has been exercised, and show its use in the mining process of a blockchain. Besides our instantiation, only one other is known, and it relies on indistinguishability obfuscation, a cryptographic primitive whose existence is only conjectured.
In contrast, our construction is based on the cryptographic technique of multiparty computation (which we use in a black-box manner) and thus is the first known feasible proof-of-human-work scheme.
Our blockchain mining algorithm, called uMine, can be regarded as an alternative, energy-efficient approach to mining.
Transparency Logs via Append-only Authenticated Dictionaries
Transparency logs allow users to audit a potentially malicious service, paving the way towards a more accountable Internet.
For example, Certificate Transparency (CT) enables domain owners to audit Certificate Authorities (CAs) and detect impersonation attacks.
Yet, to achieve their full potential, transparency logs must be bandwidth-efficient when queried by users.
Specifically, everyone should be able to efficiently look up log entries by their key and efficiently verify that the log remains append-only.
Unfortunately, without additional trust assumptions, current transparency logs cannot provide both small-sized lookup proofs and small-sized append-only proofs.
In fact, one of the proofs always requires bandwidth linear in the size of the log, making it expensive for everyone to query the log.
In this paper, we address this gap with a new primitive called an append-only authenticated dictionary (AAD).
Our construction is the first to achieve (poly)logarithmic size for both proof types and helps reduce bandwidth consumption in transparency logs.
This comes at the cost of increased append times and high memory usage, both of which remain to be improved to make practical deployment possible.
Adiantum: length-preserving encryption for entry-level processors
We present HBSH, a simple construction for tweakable length-preserving encryption which
supports the fastest options for hashing and stream encryption for processors
without AES or other crypto instructions, with a provable
quadratic advantage bound. Our composition Adiantum uses NH, Poly1305, XChaCha12,
and a single AES invocation. On an ARM Cortex-A7 processor, Adiantum decrypts
4096-byte messages at 10.6 cycles per byte, over five times faster than
AES-256-XTS, with a constant-time implementation. We also define HPolyC which is
simpler and has excellent key agility at 13.6 cycles per byte.
Data Recovery on Encrypted Databases With k-Nearest Neighbor Query Leakage
Recent works by Kellaris et al. (CCS’16) and Lacharité et al. (SP’18) demonstrated data-recovery attacks on encrypted databases that support rich queries such as range queries. In this paper, we develop the first data-recovery attacks on encrypted databases supporting one-dimensional k-nearest-neighbor (k-NN) queries, which are widely used in spatial data management. Our attacks exploit a generic k-NN query leakage profile: the attacker observes the identifiers of matched records. We consider both unordered responses, where the leakage is a set, and ordered responses, where the leakage is a k-tuple ordered by distance from the query point.
As a first step, we perform a theoretical feasibility study on exact reconstruction, i.e., recovery of the exact plaintext values of the encrypted database. For ordered responses, we show that exact reconstruction is feasible if the attacker has additional access to some auxiliary information that is normally not available in practice. For unordered responses, we prove that exact reconstruction is impossible due to the infinite number of valid reconstructions. As a next step, we propose practical and more realistic approximate reconstruction attacks so as to recover an approximation of the plaintext values. For ordered responses, we show that after observing enough query responses, the attacker can approximate the client’s encrypted database with considerable accuracy. For unordered responses we characterize the set of valid reconstructions as a convex polytope in a k-dimensional space and present a rigorous attack that reconstructs the plaintext database with bounded approximation error.
As multidimensional spatial data can be efficiently processed by mapping it to one dimension via Hilbert curves, we demonstrate our approximate reconstruction attacks on privacy-sensitive geolocation data. Our experiments on real-world datasets show that our attacks reconstruct the plaintext values with relative error ranging from 2.9% to 0.003%.
Cryptographic Pseudorandom Generators Can Make Cryptosystems Problematic
Randomness is an essential resource for cryptography. For practical randomness generation, the security notion of pseudorandom generators (PRGs) is intended to automatically preserve the (computational) security of cryptosystems when PRGs are used in implementations. Nevertheless, counterexamples are known, such as in computational randomness extractors (Barak et al., CRYPTO 2011), where security can be lost even when secure PRGs are applied; these cases have not yet been systematically studied. The present paper aims to advance the observation and understanding of this phenomenon; we reveal such situations at the layers of primitives and protocols as well, not just of building blocks like randomness extractors. We present three typical types of such cases: (1) adversaries can legally see the seed of the PRG (including the case of randomness extractors); (2) the set of "bad'' randomness may not be efficiently recognizable; (3) the formulation of a desired property implicitly involves non-uniform distinguishers for PRGs. We point out that the semi-honest security of multiparty computation belongs to Type 1, while correctness with negligible decryption-error probability for public-key encryption belongs to Types 2 and 3. We construct examples for each type where a secure PRG (secure against uniform distinguishers only, for Type 3) does not preserve the security/correctness of the original scheme, and discuss some countermeasures to avoid this issue.
Key Extraction using Thermal Laser Stimulation: A Case Study on Xilinx Ultrascale FPGAs
Thermal laser stimulation (TLS) is a failure analysis technique which can be deployed by an adversary to localize and read out stored secrets in the SRAM of a chip. To date, a few proof-of-concept experiments based on TLS or similar approaches have been reported in the literature, none of which reflects a real attack scenario. Therefore, it is still questionable whether this attack technique is applicable to modern ICs equipped with side-channel countermeasures. The primary aim of this work is to assess the feasibility of launching a TLS attack against a device with robust security features. To this end, we select a modern FPGA, and more specifically, its key memory, the so-called battery-backed SRAM (BBRAM), as a target. We demonstrate that an attacker is able to extract the stored 256-bit AES key used for the decryption of the FPGA’s bitstream, by conducting just a single non-invasive measurement. Moreover, it becomes evident that conventional countermeasures are incapable of preventing our attack since the FPGA is turned off during key recovery. Based on our time measurements, the required effort to develop the attack is shown to be less than 7 hours. To avert this powerful attack, we propose a low-cost and CMOS compatible countermeasure circuit, which is capable of protecting the BBRAM from TLS attempts even when the FPGA is powered off. Using a proof-of-concept prototype of our countermeasure, we demonstrate its effectiveness against TLS key extraction attempts.
Lattice-Based Zero-Knowledge Arguments for Integer Relations
We provide lattice-based protocols for proving relations among committed integers. While the most general zero-knowledge proof techniques can handle arithmetic circuits in the lattice setting, adapting them to prove statements over the integers is non-trivial, at least if we want to handle exponentially large integers while working with a polynomial-size modulus $q$. For a polynomial $L$, we provide zero-knowledge arguments allowing a prover to convince a verifier that committed $L$-bit bitstrings $x$, $y$ and $z$ are the binary representations of integers $X$, $Y$ and $Z$ satisfying $Z=X+Y$ over $\mathbb{Z}$. The complexity of our arguments is only linear in $L$. Using them, we construct arguments for proving inequalities $X<Z$ among committed integers, as well as arguments showing that a committed $X$ belongs to a public interval $[\alpha,\beta]$, where $\alpha$ and $\beta$ can be arbitrarily large. Our range arguments have logarithmic cost (i.e., linear in $L$) in the maximal range magnitude. Using these tools, we obtain zero-knowledge arguments showing that a committed element $X$ does not belong to a public set $S$ using $\widetilde{\mathcal{O}}(n \cdot \log |S|)$ bits of communication, where $n$ is the security parameter. We finally give a protocol for arguing that committed $L$-bit integers $X$, $Y$ and $Z$ satisfy multiplicative relations $Z=XY$ over the integers, with communication cost subquadratic in $L$. To this end, we use our protocol for integer addition to prove the correct recursive execution of Karatsuba's multiplication algorithm. The security of our protocols relies on standard lattice assumptions with polynomial modulus and polynomial approximation factor.
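The additive relation on binary representations can be checked bit by bit with a carry chain, which is, in plain form, the statement the argument proves over committed bitstrings. The following is an illustrative Python sketch with made-up names, not the lattice protocol itself:

```python
def check_binary_addition(x_bits, y_bits, z_bits):
    """Check that L-bit little-endian bitstrings satisfy Z = X + Y over the
    integers, propagating a carry exactly as schoolbook binary addition does."""
    carry = 0
    for xb, yb, zb in zip(x_bits, y_bits, z_bits):
        s = xb + yb + carry
        if s % 2 != zb:      # the low bit of the column sum must match z's bit
            return False
        carry = s // 2       # the high bit becomes the next column's carry
    return carry == 0        # no overflow past L bits

# 5 + 3 = 8 with L = 4 bits (little-endian: 5 -> [1,0,1,0], 3 -> [1,1,0,0], 8 -> [0,0,0,1])
assert check_binary_addition([1, 0, 1, 0], [1, 1, 0, 0], [0, 0, 0, 1])
```

Each step of this loop is a small relation among three bits and a carry, which is what makes the committed version provable with cost only linear in $L$.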
Practical Attacks on Relational Databases Protected via Searchable Encryption
Searchable symmetric encryption (SSE) schemes are commonly proposed to enable search over protected unstructured documents, such as email archives or any set of sensitive text files. Recently, however, some SSE schemes have been proposed to protect relational databases. Most previous attacks on SSE schemes have only targeted their common use case, protecting unstructured data. In this work, we propose a new inference attack on relational databases protected via SSE schemes. Our inference attack enables a passive adversary with only basic knowledge of the meta-data of the target relational database to recover the attribute names of some observed queries. This violates query privacy, since the attribute name of a query is secret.
PKP-Based Signature Scheme
In this document, we introduce PKP-DSS: a Digital Signature Scheme based on the Permuted Kernel Problem (PKP).
PKP is a simple NP-hard combinatorial problem that consists of finding a kernel vector for a publicly known matrix such that the kernel vector is a permutation of a publicly known vector. This problem was used to develop an identification scheme which has a very efficient implementation on low-cost smart cards. From this zero-knowledge identification scheme, we derive PKP-DSS with the traditional Fiat-Shamir transform. Thus, PKP-DSS has security that can be provably reduced, in the classical random oracle model, to the hardness of random instances of PKP (or, if desired, to any specific family of PKP instances). We propose parameter sets following an analysis of state-of-the-art attacks on PKP. We show that PKP-DSS is competitive with other signatures derived from zero-knowledge identification schemes. In particular, PKP-DSS-128 gives a signature size of approximately 20 KBytes for 128 bits of classical security, which is approximately 30% smaller than MQDSS. Moreover, our proof-of-concept implementation shows that PKP-DSS-128 is an order of magnitude faster than MQDSS, which in turn is faster than Picnic2, SPHINCS,...
Since PKP is NP-hard and there are no known quantum attacks that solve PKP significantly better than classical attacks, we believe that our scheme is post-quantum secure.
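A PKP solution is easy to verify, which is what makes the problem attractive for identification schemes: given the public matrix A, the public vector v, and the secret permutation, one checks that the permuted vector lies in the kernel of A modulo p. A toy sketch with made-up small parameters:

```python
def pkp_verify(A, v, perm, p):
    """Check A . v_perm = 0 (mod p), where v_perm permutes v according to perm.
    A is a list of rows; perm[i] gives the index of v placed at position i."""
    v_perm = [v[i] for i in perm]
    return all(sum(a * x for a, x in zip(row, v_perm)) % p == 0 for row in A)

# Toy instance over GF(7): the secret permutation maps v = [1,3,6] to [3,6,1],
# and 1*3 + 1*6 + 5*1 = 14 = 0 (mod 7).
A = [[1, 1, 5]]
v = [1, 3, 6]
assert pkp_verify(A, v, perm=[1, 2, 0], p=7)
assert not pkp_verify(A, v, perm=[0, 1, 2], p=7)  # wrong permutation fails
```

The hardness lies entirely in finding the permutation; verification is a single matrix-vector product, which matches the abstract's point about efficient smart-card implementations.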
On CCZ-Equivalence, Extended-Affine Equivalence, and Function Twisting
Two vectorial Boolean functions are ``CCZ-equivalent'' if there exists an affine permutation mapping the graph of one to the other. It preserves many of the cryptographic properties of a function such as its differential and Walsh spectra, which is why it could be used by Dillon et al. to find the first APN permutation on an even number of variables. However, the meaning of this form of equivalence remains unclear. In fact, to the best of our knowledge, it is not known how to partition a CCZ-equivalence class into its Extended-Affine (EA) equivalence classes; EA-equivalence being a simple particular case of CCZ-equivalence.
In this paper, we characterize CCZ-equivalence as a property of the zeroes in the Walsh spectrum of a function $F : \mathbb{F}_2^{n} \to \mathbb{F}_2^{m}$ or, equivalently, of the zeroes in its Difference Distribution Table. We use this framework to show how to efficiently upper bound the number of distinct EA-equivalence classes in a given CCZ-equivalence class. More importantly, we prove that it is possible to go from a specific member of any EA-equivalence class to a specific member of another EA-equivalence class in the same CCZ-equivalence class using an operation called \emph{twisting}; so that CCZ-equivalence can be reduced to the association of EA-equivalence and twisting. Twisting a function is a simple process and its possibility is equivalent to the existence of a particular decomposition of the function considered. Using this knowledge, we revisit several results from the literature on CCZ-equivalence and show how they can be interpreted in light of our new framework.
Our results rely on a new concept, the ``thickness'' of a space (or linear permutation), which can be of independent interest.
A Survey of Two Verifiable Delay Functions
A verifiable delay function (VDF) is an important tool used for adding delay in decentralized
applications. This short note briefly surveys and compares two recent beautiful Verifiable Delay
Functions (VDFs), one due to Pietrzak and the other due to Wesolowski. We also provide a
new computational proof of security for one of them, and compare the complexity assumptions
needed for both schemes.
Cryptanalysis of a Group Key Transfer Protocol Based on Secret Sharing: Generalization and Countermeasures
A group key distribution protocol is a mechanism in which a group key is generated and distributed by a key generation center (KGC) to a set of communicating parties in a group. This group key generally ensures secure communication among the parties over an insecure channel. The Harn-Lin protocol is one such mechanism, based on Shamir's secret sharing scheme. Nam et al. exposed a vulnerability in the Harn-Lin protocol through their replay attack and proposed a countermeasure using a nonce mechanism. In this paper, we generalize the replay attack of Nam et al. and propose an alternative countermeasure that does not use a nonce mechanism. The novelty of our countermeasure is that the KGC is not required to detect replayed messages, and hence each user does not need to compute an authentication message as in Nam et al. The proposed countermeasure thereby reduces the computational complexity of the scheme.
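Shamir's secret sharing, on which the Harn-Lin protocol is built, can be sketched in a few lines. This is a hedged, minimal illustration of the underlying (t, n) threshold scheme only, not of the group key transfer protocol; the prime and function names are illustrative.

```python
import random

P = 2**31 - 1  # a Mersenne prime; illustrative field size

def share(secret, t, n):
    """Shamir (t, n) sharing: the secret is f(0) of a random
    degree-(t-1) polynomial over GF(P); shares are points (x, f(x))."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t shares."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        # Fermat inversion: den^(P-2) = den^(-1) mod P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = share(12345, t=3, n=5)
assert reconstruct(shares[:3]) == 12345   # any 3 of 5 shares suffice
```

Fewer than t shares reveal nothing about the secret, which is the property the group key transfer protocol relies on when distributing the group key.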
Fast Secure Computation for Small Population over the Internet
Secure Multi-Party Computation (MPC) with small number of parties is an interesting area of research, primarily due to its ability to model most real-life MPC applications and the simplicity and efficiency of the resulting protocols. In this work, we present efficient, constant-round 3-party (3PC) and 4-party (4PC) protocols in the honest-majority setting that achieve strong security notions of fairness (corrupted parties receive their output only if all honest parties receive output) and guaranteed output delivery (corrupted parties cannot prevent honest parties from receiving their output). Being constant-round, our constructions are suitable for Internet-like high-latency networks and are built from garbled circuits (GC).
Assuming the minimal model of pairwise-private channels, we present two protocols that involve the computation and communication of a single GC: (a) a 4-round 3PC with fairness, (b) a 5-round 4PC with guaranteed output delivery. Empirically, our protocols are on par with the best known 3PC protocol of Mohassel et al. [CCS 2015] that only achieves security with selective abort, in terms of the computation time, LAN runtime, WAN runtime and communication cost. In fact, our 4PC outperforms the 3PC of Mohassel et al. significantly in terms of per-party computation and communication cost. With an extra GC, we improve the round complexity of our 4PC to four rounds. The only 4PC in our setting, given by Ishai et al. [CRYPTO 2015], involves 12 GCs.
Assuming an additional broadcast channel, we present a 5-round 3PC with guaranteed output delivery that involves computation and communication of a single GC. A broadcast channel is inevitable in this setting for achieving guaranteed output delivery, owing to an impossibility result in the literature. The overall broadcast communication of our protocol is nominal and most importantly, is independent of the circuit size. This protocol too induces a nominal overhead compared to the protocol of Mohassel et al.
Simple oblivious transfer protocols compatible with Kummer and supersingular isogenies
The key exchange protocol of Diffie and Hellman, which can be defined for any group, has the special feature of using only exponentiations. In particular, it can also be instantiated in Kummer varieties, which are not groups, and in the post-quantum isogeny-based setting with the supersingular isogeny DH scheme of De Feo, Jao and Plût (SIDH).
In this article, we propose a new simple oblivious transfer (OT) protocol, based on the Diffie-Hellman key exchange, that only uses exponentiations; we also revisit the older Wu-Zhang-Wang scheme. Both protocols can be directly instantiated on fast Kummer varieties; more importantly, they can also be transposed in the post-quantum SIDH setting. The semantic security of our proposals relies on the hardness of non-standard versions of the (supersingular) Diffie-Hellman problem, that are investigated within this article. To the best of our knowledge, these protocols are the simplest secure discrete-log based OT schemes using only exponentiations, and the first isogeny-based OT schemes.
Masking the Lightweight Authenticated Ciphers ACORN and Ascon in Software
The ongoing CAESAR competition aims at finding authenticated encryption schemes that offer advantages over AES-GCM for several use-cases, including lightweight applications. ACORN and Ascon are the two finalists for this profile. Our paper compares these two candidates according to their resilience against differential power analysis and their ability to integrate countermeasures against such attacks. Especially, we focus on software implementations and provide benchmarks for several security levels on an ARM Cortex-M3 embedded microprocessor.
Function Secret Sharing: Improvements and Extensions
Function Secret Sharing (FSS), introduced by Boyle et al. (Eurocrypt 2015), provides a way for additively secret-sharing a function from a given function family $\mathcal F$. More concretely, an $m$-party FSS scheme splits a function $f:\{0,1\}^n\to \mathbb G$, for some abelian group $\mathbb G$, into functions $f_1,\ldots,f_m$, described by keys $k_1,\ldots,k_m$, such that $f=f_1+\ldots+f_m$ and every strict subset of the keys hides $f$. A Distributed Point Function (DPF) is a special case where $\mathcal F$ is the family of point functions, namely functions $f_{\alpha,\beta}$ that evaluate to $\beta$ on the input $\alpha$ and to 0 on all other inputs.
FSS schemes are useful for applications that involve privately reading from or writing to distributed databases while minimizing the amount of communication. These include different flavors of private information retrieval (PIR), as well as a recent application of DPF for large-scale anonymous messaging.
We improve and extend previous results in several ways:
- Simplified FSS constructions. We introduce a tensoring operation for FSS which is used to obtain a conceptually simpler derivation of previous constructions and present our new constructions.
- Improved 2-party DPF. We reduce the key size of the PRG-based DPF scheme of Boyle et al. roughly by a factor of 4 and optimize its computational cost. The optimized DPF significantly improves the concrete costs of 2-server PIR and related primitives.
- FSS for new function families. We present an efficient PRG-based 2-party FSS scheme for the family of decision trees, leaking only the topology of the tree and the internal node labels. We apply this towards FSS for multi-dimensional intervals. We also present a general technique for obtaining more expressive FSS schemes by increasing the number of parties.
- Verifiable FSS. We present efficient protocols for verifying that keys $(k^*_1,\ldots,k^*_m)$, obtained from a potentially malicious user, are consistent with some $f\in\mathcal F$. Such a verification may be critical for applications that involve private writing or voting by many users.
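The additive splitting of a point function can be illustrated naively over a small domain: one key is a random table and the other is its pointwise complement, adjusted at $\alpha$. This toy sketch conveys only the correctness property $f = f_1 + f_2$; the paper's PRG-based DPF achieves keys of size logarithmic in the domain, which this table-based version deliberately does not.

```python
import random

def share_point_function(alpha, beta, domain_size, modulus=2**16):
    """Split the point function f_{alpha,beta} into two tables k1, k2 with
    k1[x] + k2[x] = beta if x == alpha, else 0 (all mod modulus).
    Each table alone is uniformly random, hiding alpha and beta."""
    k1 = [random.randrange(modulus) for _ in range(domain_size)]
    k2 = [(-k1[x]) % modulus for x in range(domain_size)]
    k2[alpha] = (beta - k1[alpha]) % modulus
    return k1, k2

k1, k2 = share_point_function(alpha=5, beta=42, domain_size=16)
assert all((k1[x] + k2[x]) % 2**16 == (42 if x == 5 else 0) for x in range(16))
```

In 2-server PIR, each server evaluates its own table on the whole database and returns an inner product; summing the two answers isolates the record at index $\alpha$ without either server learning which index was queried.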
Efficient 3-Party Distributed ORAM
Distributed Oblivious RAM (DORAM) protocols---in which parties obliviously access a shared location in a shared array---are a fundamental component of secure-computation protocols in the RAM model. We show here an efficient, 3-party DORAM protocol with semi-honest security for a single corrupted party. To the best of our knowledge, ours is the first protocol for this setting that runs in constant rounds, requires sublinear communication and linear work, and makes only black-box use of cryptographic primitives. We believe our protocol is also concretely more efficient than existing solutions.
As a building block of independent interest, we construct a 3-server distributed point function with security against two colluding servers that is simpler and has better concrete efficiency than prior work.
Subvector Commitments with Application to Succinct Arguments
We put forward the notion of subvector commitments (SVC): An SVC allows one to open a committed vector at a set of positions, where the opening size is independent of the length of the committed vector and of the number of positions to be opened. We propose two constructions under variants of the root assumption and the CDH assumption, respectively. We further generalize SVC to a notion called linear map commitments (LMC), which allows one to open a committed vector to its images under linear maps with a single short message, and propose a construction over pairing groups.
Equipped with these newly developed tools, we revisit the ``CS proofs'' paradigm [Micali, FOCS 1994] which turns any arguments with public-coin verifiers into non-interactive arguments using the Fiat-Shamir transform in the random oracle model. We propose a compiler that turns any (linear, resp.) PCP into a non-interactive argument, using exclusively SVCs (LMCs, resp.). For an approximate $80$ bits of soundness, we highlight the following new implications:
- There exists a succinct non-interactive argument of knowledge (SNARK) with public-coin setup with proofs of size 5360 bits, under the adaptive root assumption over class groups of imaginary quadratic orders against adversaries with runtime $2^{128}$. At the time of writing, this is the shortest SNARK with public-coin setup.
- There exists a non-interactive argument with private-coin setup, where proofs consist of 2 group elements and 3 field elements, in the generic bilinear group model.
Verifiable Sealed-Bid Auction on the Ethereum Blockchain
The success of the Ethereum blockchain as a decentralized application platform with a distributed consensus protocol has led many organizations to start running their business on top of it. Technically, the most impressive feature behind Ethereum's success is its support for a Turing-complete language. On the other hand, the inherent transparency and, consequently, the lack of privacy pose a great challenge for many financial applications.
In this paper, we tackle this challenge and present a smart contract for a verifiable sealed-bid auction on the Ethereum blockchain. In a nutshell, initially, the bidders submit homomorphic commitments to their sealed-bids on the contract. Subsequently, they reveal their commitments secretly to the auctioneer via a public key encryption scheme. Then, according to the auction rules, the auctioneer determines and claims the winner of the auction. Finally, we utilize interactive zero-knowledge proof protocols between the smart contract and the auctioneer to verify the correctness of such a claim. The underlying protocol of the proposed smart contract is partially privacy-preserving. To be precise, no information about the losing bids is leaked to the bidders. We provide an analysis of the proposed protocol and the smart contract design, in addition to the estimated gas costs associated with the different transactions.
New Protocols for Secure Linear Algebra: Pivoting-Free Elimination and Fast Block-Recursive Matrix Decomposition
Cramer and Damgård were the first to propose a constant-rounds protocol for securely solving a linear system of unknown rank over a finite field in multiparty computation (MPC). For $m$ linear equations and $n$ unknowns, and for the case $m\leq n$, the computational complexity of their protocol is $O(n^5)$. Follow-up work (by Cramer, Kiltz, and Padró) proposes another constant-rounds protocol for solving this problem, which has complexity $O(m^4+n^2 m)$. For certain applications, such asymptotic complexities might be prohibitive. In this work, we improve the asymptotic computational complexity of solving a linear system over a finite field, thereby sacrificing the constant-rounds property. We propose two protocols: (1) a protocol based on pivoting-free Gaussian elimination with computational complexity $O(n^3)$ and linear round complexity, and (2) a protocol based on block-recursive matrix decomposition, having $O(n^2)$ computational complexity (assuming ``cheap'' secure inner products as in Shamir's secret-sharing scheme) and $O(n^{1.585})$ (super-linear) round complexity.
Tight Proofs of Space and Replication
We construct a concretely practical proof-of-space (PoS) with arbitrarily tight security based on stacked depth-robust graphs and constant-degree expander graphs. A proof-of-space (PoS) is an interactive proof system in which a prover demonstrates that it is persistently using space to store information. A PoS is arbitrarily tight if the honest prover uses exactly $N$ space and, for any $\epsilon > 0$, the construction can be tuned such that no adversary can pass verification using less than $(1-\epsilon)N$ space. Most notably, the degree of the graphs in our construction is independent of $\epsilon$, and the number of layers is only $O(\log(1/\epsilon))$. The proof size is $O(d/\epsilon)$. The degree $d$ depends on the depth-robust graphs, which are only required to maintain $\Omega(N)$ depth in subgraphs on 80% of the nodes. Our tight PoS is also secure against parallel attacks.
Tight proofs of space are necessary for proof-of-replication (PoRep), which is a publicly verifiable proof that the prover is dedicating unique resources to storing one or more retrievable replicas of a file. Our main PoS construction can be used as a PoRep, but data extraction is as inefficient as replica generation. We present a second variant of our construction called ZigZag PoRep that has fast/parallelizable data extraction compared to replica generation and maintains the same space tightness while only increasing the number of levels by roughly a factor two.
Last updated: 2019-11-16
Secure Sketch for All Noisy Sources
A secure sketch produces public information about its input $w$ without revealing it, yet allows the exact recovery of $w$ given another value $w'$ that is close to $w$. It can therefore be used to reliably reproduce any error-prone secret source (e.g., biometrics) stored in secret storage. However, some sources have lower entropy than the error itself, a situation formally called ``more error than entropy'', and a standard secure sketch cannot fulfill its security promise for such sources. This paper focuses on secure sketches. We propose a concrete construction and show its security for all noisy sources, including the trivial source with zero min-entropy. In addition, our construction comes with an efficient recovery algorithm that operates in time polynomial in the sketch size and can tolerate error rates arbitrarily close to 1/2. The above results build on our derivation of a solution to a well-known NP-complete coding problem, implying $P=NP$.
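For context, the classic code-offset construction illustrates how a public sketch enables exact recovery of $w$ from a noisy $w'$. This is a toy sketch using a 5-fold repetition code, not this paper's construction, and the names are illustrative:

```python
import random

R = 5  # repetition factor; majority decoding corrects up to 2 flips per block

def encode(bits):
    """Repetition-code encoding: each bit is repeated R times."""
    return [b for bit in bits for b in [bit] * R]

def decode(bits):
    """Majority decoding per R-bit block."""
    return [int(sum(bits[i:i + R]) > R // 2) for i in range(0, len(bits), R)]

def sketch(w):
    """Code-offset sketch: SS(w) = w XOR c for a random codeword c."""
    c = encode([random.randint(0, 1) for _ in range(len(w) // R)])
    return [wi ^ ci for wi, ci in zip(w, c)]

def recover(w_noisy, s):
    """Rec(w', s): decode w' XOR s back to the codeword, then XOR off s."""
    c = encode(decode([wi ^ si for wi, si in zip(w_noisy, s)]))
    return [ci ^ si for ci, si in zip(c, s)]

w = [random.randint(0, 1) for _ in range(20)]
s = sketch(w)
w_noisy = w.copy()
w_noisy[3] ^= 1; w_noisy[12] ^= 1  # two bit errors, in different blocks
assert recover(w_noisy, s) == w
```

The sketch leaks the syndrome of $w$ with respect to the code, which is exactly why standard constructions fail when the source has less entropy than that leakage, the "more error than entropy" regime this paper targets.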
SIDH on ARM: Faster Modular Multiplications for Faster Post-Quantum Supersingular Isogeny Key Exchange
We present high-speed implementations of the post-quantum supersingular isogeny Diffie-Hellman key exchange (SIDH) and the supersingular isogeny key encapsulation (SIKE) protocols for 32-bit ARMv7-A processors with NEON support. The high performance of our implementations is mainly due to carefully optimized multiprecision and modular arithmetic that finely integrates both ARM and NEON instructions in order to reduce the number of pipeline stalls and memory accesses, and a new Montgomery reduction technique that combines the use of the UMAAL instruction with a variant of the hybrid-scanning approach. In addition, we present efficient implementations of SIDH and SIKE for 64-bit ARMv8-A processors, based on a high-speed Montgomery multiplication that leverages the power of 64-bit instructions. Our experimental results consolidate the practicality of supersingular isogeny-based protocols for many real-world applications. For example, a full key-exchange execution of SIDHp503 is performed in about 176 million cycles on an ARM Cortex-A15 from the ARMv7-A family (i.e., 88 milliseconds @2.0GHz). On an ARM Cortex-A72 from the ARMv8-A family, the same operation can be carried out in about 90 million cycles (i.e., 45 milliseconds @1.992GHz). All our software is protected against timing and cache attacks. The techniques for modular multiplication presented in this work have broad applications to other cryptographic schemes.
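The Montgomery reduction that the UMAAL-based technique accelerates can be sketched at the word level as follows. This is the textbook algorithm written in Python with toy parameters, purely for illustration; it is unrelated to the paper's actual ARM/NEON code and parameter sizes.

```python
# Textbook Montgomery reduction, shown in Python for clarity; the paper's
# contribution is an ARM/NEON-optimized variant of this idea.

def montgomery_reduce(t, p, r, p_inv_neg):
    """Compute t * r^{-1} mod p, given r = 2^k > p and p_inv_neg = -p^{-1} mod r."""
    m = (t * p_inv_neg) % r          # m = t * (-p^{-1}) mod r
    u = (t + m * p) // r             # exact division: the low k bits cancel
    return u - p if u >= p else u

# Toy parameters (assumptions for illustration, not SIDHp503 values):
p = 23
r = 32                               # 2^5 > p
p_inv_neg = (-pow(p, -1, r)) % r
a, b = 17, 19
# To multiply in Montgomery form: map x -> x*r mod p, multiply, reduce.
a_m, b_m = (a * r) % p, (b * r) % p
prod = montgomery_reduce(a_m * b_m, p, r, p_inv_neg)   # = a*b*r mod p
result = montgomery_reduce(prod, p, r, p_inv_neg)      # back to normal form
assert result == (a * b) % p
```

The point of the Montgomery form is that the reduction uses only multiplications and shifts by $r = 2^k$, never a division by $p$, which maps well onto multiply-accumulate instructions such as UMAAL.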
Correlated Sequence Attack on Reduced-Round Simon-32/64 and Simeck-32/64
In this paper, we propose a novel cryptanalytic technique called the correlated sequence attack on block ciphers. Our attack exploits the properties of given key-dependent sequences of length $t$ to obtain other keyed sequences of the same length with $\sigma$ ($0\le \sigma < t$) computations of the non-linear function. We call these sequences $(\sigma,t)$-correlated sequences, and utilize them in a meet-in-the-middle attack for $2t$ rounds. We apply this technique to the Simon-32/64 and Simeck-32/64 block ciphers, construct $(1, 8)$-correlated sequences and present the first 25-round attack on both ciphers. Next, we analyze the 8-th element of these sequences by considering the key scheduling algorithms and differential properties, and show that the attack can be improved by two rounds with the same complexities as the 25-round attack. Overall, our technique can attack up to 27 rounds of both Simon-32/64 and Simeck-32/64 with a time complexity less than that of average exhaustive search and a data complexity of 3.
Our attack extends the number of previously attacked rounds by 4 and has a success probability of 1. This reduces the security margin of both these ciphers to 16%. To the best of our knowledge, this is currently the best attack on Simon-32/64 and Simeck-32/64.
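For context, the non-linear function whose computations the correlated sequences save is the public Simon round function. A minimal sketch of that round (16-bit words, as in Simon-32) follows; it reproduces only the public specification, not the attack:

```python
# Simon-32/64 round function (public specification), 16-bit words.
# The correlated sequences in the paper are built from keyed iterations
# of exactly this non-linear step.

MASK = 0xFFFF  # 16-bit word size for Simon-32

def rotl(x, r):
    return ((x << r) | (x >> (16 - r))) & MASK

def simon_round(x, y, k):
    """One Feistel round: returns the next state (x', y')."""
    f = (rotl(x, 1) & rotl(x, 8)) ^ rotl(x, 2)
    return (y ^ f ^ k) & MASK, x

def simon_encrypt_rounds(x, y, round_keys):
    for k in round_keys:
        x, y = simon_round(x, y, k)
    return x, y
```

Simeck uses the same Feistel skeleton with rotation amounts (0, 5, 1) in place of (1, 8, 2), which is why the attack applies to both ciphers.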
Parameter-Hiding Order Revealing Encryption
Order-revealing encryption (ORE) is a popular primitive for outsourcing encrypted databases, as it allows for efficiently performing range queries over encrypted data. Unfortunately, a series of works, starting with Naveed et al. (CCS 2015), have shown that when the adversary has a good estimate of the distribution of the data, ORE provides little protection. In this work, we consider the case that the database entries are drawn identically and independently from a distribution of known shape, but for which the mean and variance are not (and thus the attacks of Naveed et al. do not apply). We define a new notion of security for ORE, called parameter-hiding ORE, which maintains the secrecy of these parameters. We give a construction of ORE satisfying our new definition from bilinear maps.
AUDIT: Practical Accountability of Secret Processes
The US federal court system is exploring ways to improve the accountability of electronic surveillance, an opaque process often involving cases sealed from public view and tech companies subject to gag orders against informing surveilled users. One judge has proposed publicly releasing some metadata about each case on a paper cover sheet as a way to balance the competing goals of (1) secrecy, so the target of an investigation does not discover and sabotage it, and (2) accountability, to assure the public that surveillance powers are not misused or abused.
Inspired by the courts' accountability challenge, we illustrate how accountability and secrecy are simultaneously achievable when modern cryptography is brought to bear. Our system improves configurability while preserving secrecy, offering new tradeoffs potentially more palatable to the risk-averse court system. Judges, law enforcement, and companies publish commitments to surveillance actions, argue in zero-knowledge that their behavior is consistent, and compute aggregate surveillance statistics by multi-party computation (MPC).
We demonstrate that these primitives perform efficiently at the scale of the federal judiciary. To do so, we implement a hierarchical form of MPC that mirrors the hierarchy of the court system. We also develop statements in succinct zero-knowledge (SNARKs) whose specificity can be tuned to calibrate the amount of information released. All told, our proposal not only offers the court system a flexible range of options for enhancing accountability in the face of necessary secrecy, but also yields a general framework for accountability in a broader class of "secret information processes."
Unbounded Inner Product Functional Encryption from Bilinear Maps
Inner product functional encryption (IPFE), introduced by Abdalla et al. (PKC 2015), is a kind of functional encryption supporting only the inner product functionality. All previous IPFE schemes are bounded schemes, meaning that the vector length that can be handled in the scheme is fixed in the setup phase. In this paper, we propose the first unbounded IPFE schemes, in which we do not have to fix the lengths of vectors in the setup phase and can handle (a priori) unbounded polynomial lengths of vectors. Our first scheme is private-key based and fully function hiding; that is, secret keys hide the information of the associated function. Our second scheme is public-key based and provides adaptive security in the indistinguishability-based security definition. Both our schemes are based on SXDH, a well-studied standard assumption, and are secure in the standard model. Furthermore, our schemes are quite efficient, incurring an efficiency loss of only a small constant factor over previous bounded function-hiding schemes.
SPHINX: A Password Store that Perfectly Hides Passwords from Itself
Password managers (aka stores or vaults) allow a user to store and retrieve (usually high-entropy) passwords for her multiple password-protected services by interacting with a "device" serving the role of the manager (e.g., a smartphone or an online third-party service) on the basis of a single memorable (low-entropy) master password. Existing password managers work well to defeat offline dictionary attacks upon web service compromise, assuming the use of high-entropy passwords is enforced. However, they are vulnerable to leakage of all passwords in the event the device is compromised, due to the need to store the passwords encrypted under the master password and/or the need to input the master password to the device (as in smartphone managers). Evidence exists that password managers can be attractive attack targets.
In this paper, we introduce a novel approach to password management, called SPHINX, which remains secure even when the password manager itself has been compromised. In SPHINX the information stored on the device is information theoretically independent of the user's master password --- an attacker breaking into the device learns no information about the master password or the user's site-specific passwords. Moreover, an attacker with full control of the device, even at the time the user interacts with it, learns nothing about the master password --- the password is not entered into the device in plaintext form or in any other way that may leak information on it. Unlike existing managers, SPHINX produces strictly high-entropy passwords and makes it compulsory for the users to register these randomized passwords with the web services, hence fully defeating offline dictionary attack upon service compromise. The design and security of SPHINX is based on the device-enhanced PAKE model of Jarecki et al. that provides the theoretical basis for this construction and is backed by rigorous cryptographic proofs of security.
While SPHINX is suitable for different device and online platforms, in this paper, we report on its concrete instantiation on smartphones given their popularity and trustworthiness as password managers (or even two-factor authentication). We present the design, implementation and performance evaluation of SPHINX, offering prototype browser plugins, smartphone apps and transparent device-client communication. Based on our inspection analysis, the overall user experience of SPHINX improves upon current managers. We also report on a lab-based usability study of SPHINX, which indicates that users' perception of SPHINX security and usability is high and satisfactory when compared to regular password-based authentication. Finally, we discuss how SPHINX may be extended to an online service for the purpose of back-up or as an independent password manager.
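The interaction pattern behind such a device-enhanced design can be illustrated with a blinded-exponentiation PRF: the client blinds a hash of the master password, the device exponentiates with its secret key, and the client unblinds, so the device never sees the password or the result in the clear. The sketch below is a toy illustration with an insecurely small group; it is not the actual SPHINX protocol, whose precise construction and proofs follow Jarecki et al.

```python
# Toy blinded-PRF round trip. P is a small safe prime (P = 2Q + 1 with
# Q prime) chosen ONLY for illustration; a real instantiation uses an
# elliptic-curve group of cryptographic size.

import hashlib
import secrets

P = 2039            # toy safe prime (assumption for illustration)
Q = (P - 1) // 2    # = 1019, order of the subgroup of squares

def hash_to_group(pw: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(pw).digest(), "big") % P
    return pow(h, 2, P)                   # square into the order-Q subgroup

def client_blind(pw: bytes):
    r = secrets.randbelow(Q - 1) + 1      # blinding exponent in [1, Q-1]
    return r, pow(hash_to_group(pw), r, P)

def device_evaluate(k: int, alpha: int) -> int:
    return pow(alpha, k, P)               # device sees only the blinded value

def client_finalize(pw: bytes, r: int, beta: int) -> bytes:
    unblinded = pow(beta, pow(r, -1, Q), P)   # = H(pw)^k, exponents mod Q
    return hashlib.sha256(pw + unblinded.to_bytes(16, "big")).digest()
```

Because the blinding exponent $r$ is fresh and random, the value the device sees is uniformly distributed regardless of the password, which is the information-theoretic independence claimed in the abstract.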
Faster Privacy-Preserving Location Proximity Schemes
In the last decade, location information has become easily obtainable using off-the-shelf mobile devices. This gave momentum to developing Location-Based Services (LBSs) such as location proximity detection, which can be used to find friends or taxis nearby. LBSs can, however, be easily misused to track users, which draws attention to the need to protect the privacy of these users.
In this work, we address this issue by designing, implementing, and evaluating multiple algorithms for Privacy-Preserving Location Proximity (PPLP) that are based on different secure computation protocols. Our PPLP protocols are well-suited for different scenarios: for saving bandwidth, energy/computational power, or for faster runtimes. Furthermore, our algorithms have runtimes of a few milliseconds to hundreds of milliseconds and bandwidth of hundreds of bytes to one megabyte. In addition, the computationally most expensive parts of the PPLP computation can be precomputed in our protocols, such that the input-dependent online phase runs in just a few milliseconds.
Efficient Side-Channel Protections of ARX Ciphers
The current state of the art of Boolean masking for the modular addition operation in software has a very high performance overhead. Firstly, the instruction count is very high compared to a normal addition operation. Secondly, until recently, the entropy consumed by such protections was also quite high. Our paper significantly improves both aspects, by applying the Threshold Implementation (TI) methodology with two shares and by reusing internal values as a randomness source in such a way that uniformity is always preserved. Our approach performs considerably faster than the previously known masked addition and subtraction algorithms by Coron et al. and Biryukov et al., improving the state of the art by 36% if we only consider the number of ARM assembly instructions. Furthermore, similar to the masked adder of Biryukov et al., we reduce the amount of randomness and require only one additional bit of entropy per addition, which is a good trade-off for the improved performance. We applied our improved masked adder to ChaCha20, for which we provide two new first-order protected implementations, and achieve a 36% improvement over the best published result for ChaCha20 on an ARM Cortex-M4 microprocessor.
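Why masked addition is the expensive step in ARX ciphers can be seen in a few lines: a first-order Boolean mask passes transparently through the XOR and rotation layers, but not through addition modulo $2^{32}$, whose carries mix the shares. A minimal demonstration (illustrative only; the paper's contribution is an efficient two-share TI adder in ARM assembly, not shown here):

```python
# A first-order Boolean mask x = s0 ^ s1 commutes with XOR and rotation
# (the linear parts of ARX), but not with modular addition.

import secrets

W = 0xFFFFFFFF  # 32-bit words, as in ChaCha20

x, y = 0x12345678, 0x9ABCDEF0
mx, my = secrets.randbelow(2**32), secrets.randbelow(2**32)
x0, x1 = x ^ mx, mx          # Boolean shares of x
y0, y1 = y ^ my, my          # Boolean shares of y

# XOR: operating share-wise is already correct.
assert (x0 ^ y0) ^ (x1 ^ y1) == x ^ y

# Addition: adding share-wise does NOT unmask to x + y in general,
# because the carry chain mixes the two shares.
shared_sum = ((x0 + y0) & W) ^ ((x1 + y1) & W)
# shared_sum equals (x + y) & W only by coincidence; dedicated masked
# adders (Coron et al., Biryukov et al., and this paper's two-share TI
# adder) exist precisely to produce Boolean shares of x + y securely.
```

This is why the instruction count of protected ARX implementations is dominated by the masked adders rather than by the linear layers.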
New Configurations of Grain Ciphers: Security Against Slide Attacks
eSTREAM brought to the attention of the cryptographic community a number of stream ciphers including Grain v0 and its revised version Grain v1. The latter was selected as a finalist of the competition's hardware-based portfolio. The Grain family includes two more instantiations, namely Grain 128 and Grain 128a.
The scope of our paper is to provide insight into how to obtain secure configurations of the Grain family of stream ciphers. We propose different variants for Grain and analyze their security with respect to slide attacks. More precisely, as various attacks against the initialization algorithms of Grain have been discussed in the literature, we study the security impact of various parameters which may influence the LFSR's initialization scheme.
DIZK: A Distributed Zero Knowledge Proof System
Recently there has been much academic and industrial interest in practical implementations of *zero knowledge proofs*. These techniques allow a party to *prove* to another party that a given statement is true without revealing any additional information. In a Bitcoin-like system, this allows a payer to prove validity of a payment without disclosing the payment's details.
Unfortunately, the existing systems for generating such proofs are very expensive, especially in terms of memory overhead. Worse yet, these systems are "monolithic", so they are limited by the memory resources of a single machine. This severely limits their practical applicability.
We describe DIZK, a system that *distributes* the generation of a zero knowledge proof across machines in a compute cluster. Using a set of new techniques, we show that DIZK scales to computations of up to billions of logical gates (100x larger than prior art) at a cost of 10$\mu$s per gate (100x faster than prior art). We then use DIZK to study various security applications.
Efficient KEA-Style Lattice-Based Authenticated Key Exchange
Lattice-based cryptographic primitives are believed to be resistant to attacks by quantum computers. In this work, we present a KEA-style authenticated key exchange protocol based on the ring learning with errors problem whose security is proven in the BR model with weak perfect forward secrecy. Along with the properties of KEA, such as implicit key authentication and simplicity, our protocol also enjoys many properties of lattice-based cryptography, namely asymptotic efficiency, conceptual simplicity, worst-case hardness assumptions, and resistance to attacks by quantum computers. Our lattice-based authenticated key exchange protocol is more efficient than the protocol of Zhang et al. (EUROCRYPT 2015), with a more concise structure, smaller key sizes and lower bandwidth. Our protocol also enjoys optimal online efficiency, and we further improve it with pre-computation.
Mind the Gap - A Closer Look at the Security of Block Ciphers against Differential Cryptanalysis
Resistance against differential cryptanalysis is an important design criterion for any modern block cipher, and most designs rely on finding some upper bound on the probability of single differential characteristics. However, already at EUROCRYPT'91, Lai et al. observed that differential cryptanalysis actually uses differentials rather than single characteristics.
In this paper, we consider exactly the gap between these two approaches and investigate this gap in the context of recent lightweight cryptographic primitives. This shows that for many recent designs like Midori, Skinny or Sparx one has to be careful as bounds from counting the number of active S-boxes only give an inaccurate evaluation of the best differential distinguishers. For several designs we found new differential distinguishers and show how this gap evolves. We found an 8-round differential distinguisher for Skinny-64 with a probability of $2^{-56.93}$, while the best single characteristic only suggests a probability of $2^{-72}$. Our approach is integrated into publicly available tools and can easily be used when developing new cryptographic primitives.
Moreover, as differential cryptanalysis is critically dependent on the distribution over the keys for the probability of differentials, we provide experiments for some of these new differentials found, in order to confirm that our estimates for the probability are correct. While for Skinny-64 the distribution over the keys follows a Poisson distribution, as one would expect, we noticed that Speck-64 follows a bimodal distribution, and the distribution of Midori-64 suggests a large class of weak keys.
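The gap studied above starts at the S-box level: a difference distribution table (DDT) gives the exact one-round probabilities from which characteristics are built, and a differential's probability is the sum over all characteristics sharing its input and output difference. A short sketch computing a DDT, using the public PRESENT S-box purely as an example (it is not one of the paper's targets):

```python
# Difference distribution table (DDT) of a 4-bit S-box.
# T[dx][dy] counts inputs x with S(x) ^ S(x ^ dx) == dy, so the
# one-round probability of the transition dx -> dy is T[dx][dy] / 16.

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # PRESENT S-box (public)

def ddt(sbox):
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for x in range(n):
        for dx in range(n):
            dy = sbox[x] ^ sbox[x ^ dx]
            table[dx][dy] += 1
    return table

T = ddt(SBOX)
# Best non-trivial one-round transition probability is best/16.
best = max(T[dx][dy] for dx in range(1, 16) for dy in range(16))
print(best)  # 4 -> the best single S-box transition has probability 2^-2
```

Multiplying such per-S-box probabilities along one trail gives a characteristic's probability; the paper's point is that summing over all trails of a differential can give a far larger value, as in the Skinny-64 example ($2^{-56.93}$ versus $2^{-72}$).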
Finding Integral Distinguishers with Ease
The division property method is a technique to determine integral distinguishers on block ciphers. While finding these distinguishers is computationally expensive, it has recently been shown that MILP and SAT solvers can find them efficiently. In this paper, we provide a framework to automatically find such distinguishers which solely requires a description of the cryptographic primitive. We demonstrate this by finding integral distinguishers for 30 primitives with different design strategies.
We provide several new or improved bit-based division property distinguishers for ChaCha, Chaskey, DES, GIFT, LBlock, Mantis, Qarma, RoadRunner, Salsa and SM4. Furthermore, we present an algorithm to find distinguishers with lower data complexity more efficiently.
Assessing the Feasibility of Single Trace Power Analysis of Frodo
Lattice-based schemes are among the most promising post-quantum schemes, yet the effect of both parameter and implementation choices on their side-channel resilience is still poorly understood. Aysu et al. (HOST'18) recently investigated single-trace attacks against the core lattice operation, namely multiplication between a public matrix and a "small" secret vector, in the context of a hardware implementation. We complement this work by considering single-trace attacks against software implementations of "ring-less" LWE-based constructions. Specifically, we target Frodo, one of the submissions to the standardisation process of NIST, when implemented on an (emulated) ARM Cortex M0 processor. We confirm Aysu et al.'s observation that a standard divide-and-conquer attack is insufficient and instead we resort to a sequential, extend-and-prune approach. In contrast to Aysu et al. we find that, in our setting where the power model is far from being as clear as theirs, both profiling and less aggressive pruning are needed to obtain reasonable key recovery rates for SNRs of practical relevance. Our work drives home the message that parameter selection for LWE schemes is a double-edged sword: the schemes that are deemed most secure against (black-box) lattice attacks can provide the least security when considering side-channels. Finally, we suggest some easy countermeasures that thwart standard extend-and-prune attacks.
Standard Lattice-Based Key Encapsulation on Embedded Devices
Lattice-based cryptography is one of the most promising candidates being considered to replace current public-key systems in the era of quantum computing. In 2016, Bos et al. proposed the key exchange scheme FrodoCCS, which is also a submission to the NIST post-quantum standardization process, modified as a key encapsulation mechanism (FrodoKEM). The security of the scheme is based on standard lattices and the learning with errors problem. Due to the large parameters, standard lattice-based schemes have long been considered impractical on embedded devices. The FrodoKEM proposal actually comes with parameters that bring standard lattice-based cryptography within reach of being feasible on constrained devices. In this work, we take the final step of efficiently implementing the scheme on low-cost FPGA and microcontroller devices, thus making conservative post-quantum cryptography practical on small devices. Our FPGA implementation of the decapsulation (the computationally most expensive operation) needs 7,220 look-up tables (LUTs), 3,549 flip-flops (FFs), a single DSP, and only 16 block RAM modules. The maximum clock frequency is 162 MHz and it takes 20.7 ms for the execution of the decapsulation. Our microcontroller implementation has a 66% reduced peak stack usage in comparison to the reference implementation and needs 266 ms for key pair generation, 284 ms for encapsulation, and 286 ms for decapsulation. Our results contribute to the practical evaluation of a post-quantum standardization candidate.
On Trade-offs of Applying Block Chains for Electronic Voting Bulletin Boards
This paper takes a critical look at the recent trend of building electronic voting systems on top of block chain technology. Even though block chains are very appealing from the election integrity perspective, they have numerous technical, economical and even political drawbacks that need to be taken into account. Selecting a good trade-off between desirable properties and restrictions imposed by different block chain implementations is a highly non-trivial task. This paper aims at bringing some clarity into performing this task. We will mostly concentrate on public permissionless block chains and their applications as bulletin board implementations, as these are the favourite choices in the majority of recent block chain based voting protocol proposals.
PIEs: Public Incompressible Encodings for Decentralized Storage
We present a new primitive supporting file replication in distributed storage networks (DSNs) called a Public Incompressible Encoding (PIE). PIEs operate in the challenging public DSN setting where files must be encoded and decoded with public randomness—i.e., without encryption—and retention of redundant data must be publicly verifiable. They prevent undetectable data compression, allowing DSNs to use monetary rewards or penalties in incentivizing economically rational servers to properly replicate data. Their definition also precludes critical, demonstrated attacks involving parallelism via ASICs and other custom hardware.
Our PIE construction is the first to achieve experimentally validated near-optimal performance—within a factor of 4 of optimal by one metric. It also allows decoding orders of magnitude faster than encoding, unlike other comparable constructions. We achieve this high security and performance using a graph construction called a Dagwood Sandwich Graph (DSaG), built from a novel interleaving of depth-robust graphs and superconcentrators.
PIEs' performance makes them appealing for DSNs, such as the proposed Filecoin system and Ethereum data sharding. Conversely, their near-optimality establishes concerning bounds on the practical financial and energy costs of DSNs allowing arbitrary data.
Usability is not Enough: Lessons Learned from 'Human Factors in Security' Research for Verifiability
A well-known issue in electronic voting is the risk of manipulation of the cast vote. To counter this risk, a number of methods have been proposed that enable voters to verify that their cast vote actually represents their intention, so-called cast-as-intended verification.
Yet, empirical studies on voters' behaviour towards using these methods show that often only a small proportion of voters attempt the verification or succeed in performing it. Poor usability of the verification procedure has often been named as the main reason for this failure of voters to verify.
Research into human factors in other security domains, however, reveals further reasons aside from poor usability that hinder the proper adoption of security practices among end users. In this paper we discuss these factors with respect to their applicability to cast-as-intended verification. Our results indicate that many of these factors are potentially relevant in the electronic voting context, too. Correspondingly, we conclude that additional measures aside from ensuring the usability of cast-as-intended verification mechanisms are required in order to make sure that voters successfully verify the integrity of their votes. As such, corresponding mechanisms are proposed.
Saber on ARM: CCA-secure module lattice-based key encapsulation on ARM
The CCA-secure lattice-based post-quantum key encapsulation scheme Saber is a candidate in the NIST post-quantum cryptography standardization process. In this paper, we study the implementation aspects of Saber on resource-constrained microcontrollers from the ARM Cortex-M series, which are very popular for realizing IoT applications. We carefully optimize various parts of Saber for speed and memory. We exploit digital signal processing instructions and efficient memory access for a fast implementation of polynomial multiplication. We also use a memory-efficient Karatsuba and a just-in-time strategy for generating the public matrix of the module lattice to reduce the memory footprint. We further show that our optimizations can be combined with each other seamlessly to provide various speed-memory trade-offs. Our speed-optimized software takes just 1,147K, 1,444K, and 1,543K clock cycles on a Cortex-M4 platform for key generation, encapsulation and decapsulation respectively. Our memory-efficient software takes 4,786K, 6,328K, and 7,509K clock cycles on an ultra resource-constrained Cortex-M0 platform for key generation, encapsulation, and decapsulation respectively, while consuming only 6.2 KB of memory at most. These results show that lattice-based key encapsulation schemes are perfectly practical for securing IoT devices against quantum computing attacks.
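The Karatsuba multiplication mentioned above trades one multiplication for a few additions at each recursion level: three half-size products instead of four. A plain Python sketch of the recursion (power-of-two input lengths and plain integer coefficients are assumed; the real Saber code additionally reduces modulo $x^{256}+1$ and modulo powers of two):

```python
# One-level-at-a-time Karatsuba for polynomial multiplication.
# Inputs are coefficient lists of equal power-of-two length.

def poly_mul_schoolbook(a, b):
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] += ai * bj
    return res

def poly_mul_karatsuba(a, b):
    n = len(a)
    if n <= 2:
        return poly_mul_schoolbook(a, b)
    h = n // 2
    a0, a1 = a[:h], a[h:]
    b0, b1 = b[:h], b[h:]
    z0 = poly_mul_karatsuba(a0, b0)                      # low halves
    z2 = poly_mul_karatsuba(a1, b1)                      # high halves
    mid = poly_mul_karatsuba([x + y for x, y in zip(a0, a1)],
                             [x + y for x, y in zip(b0, b1)])
    z1 = [m - p - q for m, p, q in zip(mid, z0, z2)]     # cross terms
    res = [0] * (2 * n - 1)
    for i, c in enumerate(z0):
        res[i] += c
    for i, c in enumerate(z1):
        res[i + h] += c
    for i, c in enumerate(z2):
        res[i + 2 * h] += c
    return res
```

The memory challenge on a Cortex-M0/M4 is that the intermediate lists `z0`, `z2`, and `mid` must live somewhere; the paper's memory-efficient variant schedules these buffers so that the recursion fits alongside the just-in-time matrix generation.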
A Reusable Fuzzy Extractor with Practical Storage Size
After the concept of a Fuzzy Extractor (FE) was first introduced by Dodis et al., it has been regarded as one of the candidate solutions for key management utilizing biometric data. With a noisy input such as biometrics, an FE generates a public helper value and a random secret key which is reproducible given another input similar to the original input. However, helper values may cause some leakage of information when generated repeatedly from correlated inputs, so reusability should be considered an important property. Recently, Canetti et al. (Eurocrypt 2016) proposed an FE satisfying both reusability and robustness with inputs from low-entropy distributions. Their strategy, the so-called Sample-then-Lock method, is to sample many partial strings from a noisy input string and to lock one secret key with each partial string independently.
In this paper, modifying this reusable FE, we propose a new FE with size-reduced helper data obtained by applying a threshold scheme. Our new FE also satisfies both reusability and robustness, and requires much less storage than the original. To show the advantages of this scheme, we analyze and compare our scheme with the original for concrete parameters of the biometric IrisCode. As a result, on 1024-bit inputs, with false rejection rate 0.5 and error tolerance 0.25, while the original requires about 1 TB for each helper value, our scheme requires only 300 MB, plus an additional 1.35 GB of common data which can be used for all helper values.
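The Sample-then-Lock strategy referred to above can be sketched with a hash standing in for a digital locker: sample many random substrings of the noisy input and lock the same key under each, so that any substring untouched by errors recovers it. All parameters below are toy values chosen for illustration; this is neither the paper's construction nor Canetti et al.'s exact scheme.

```python
# Hash-based sketch of Sample-then-Lock. A SHA-256 pad plays the role of
# an idealized digital locker; eight zero check bytes detect wrong opens.
# The secret key must be 24 bytes so that key + check bytes fill the pad.

import hashlib
import secrets

def lock(w: str, key: bytes, n_lockers=32, subset_size=8):
    helpers = []
    rng = secrets.SystemRandom()
    for _ in range(n_lockers):
        pos = sorted(rng.sample(range(len(w)), subset_size))
        sub = "".join(w[i] for i in pos).encode()
        nonce = secrets.token_bytes(16)
        pad = hashlib.sha256(nonce + sub).digest()
        ct = bytes(a ^ b for a, b in zip(pad, key + b"\x00" * 8))
        helpers.append((pos, nonce, ct))
    return helpers

def unlock(w2: str, helpers):
    for pos, nonce, ct in helpers:
        sub = "".join(w2[i] for i in pos).encode()
        pad = hashlib.sha256(nonce + sub).digest()
        pt = bytes(a ^ b for a, b in zip(pad, ct))
        if pt.endswith(b"\x00" * 8):     # check bytes verify this locker opened
            return pt[:-8]
    return None
```

The storage cost is one helper tuple per sampled substring, which is exactly what balloons to terabytes for IrisCode parameters and what the paper's threshold-scheme modification compresses.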
Related-Tweakey Impossible Differential Attack on Reduced-Round Deoxys-BC-256
Deoxys-BC is the internal tweakable block cipher of Deoxys, a third-round authenticated encryption candidate in the CAESAR competition. In this study, by carefully analyzing the tweakey schedule, we obtain a six-round related-tweakey impossible distinguisher of Deoxys-BC-256, transformed from a 3.5-round single-key impossible distinguisher of AES by application of the mixed integer linear programming (MILP) method. We present a detailed description of this transformation method and the MILP-modeling process.
Based on this distinguisher, we mount a key-recovery attack on 10 (out of 14) rounds of Deoxys-BC-256.
Compared to previous results that are valid only when the key size $>204$ and the tweak size $<52$, our method can attack 10-round Deoxys-BC-256 as long as the key size $\geq174$ and the tweak size $\leq82$. For the popular setting in which the key size is 192 bits, we can attack one round more than previous works.
This version gives the distinguisher and the attack differential following the description of the $h$ permutation in the Deoxys document, instead of that in the Deoxys reference implementation in the SUPERCOP package, which the designers have confirmed to be incorrect.
Note that this work only gives a more accurate security evaluation and does not threaten the security of full-round Deoxys-BC-256.
DeepChain: Auditable and Privacy-Preserving Deep Learning with Blockchain-based Incentive
Deep learning can achieve higher accuracy than traditional machine learning algorithms in a variety of machine learning tasks. Recently, privacy-preserving deep learning, in which neither the training data nor the trained model is expected to be exposed, has drawn tremendous attention from the information security community. Federated learning is a popular learning mechanism, where multiple parties upload local gradients to a server and the server updates the model parameters with the collected gradients. However, many security problems are neglected in federated learning; for example, the participants may behave incorrectly in gradient collecting or parameter updating, and the server may be malicious as well. In this paper, we present a distributed, secure, and fair deep learning framework named DeepChain to solve these problems. DeepChain provides a value-driven incentive mechanism based on Blockchain to force the participants to behave correctly. Meanwhile, DeepChain guarantees data privacy for each participant and provides auditability for the whole training process. We implement a DeepChain prototype and conduct experiments on a real dataset under different settings, and the results show that DeepChain is promising.
PoReps: Proofs of Space on Useful Data
A proof-of-replication (PoRep) is an interactive proof system in which a prover defends a publicly verifiable claim that it is dedicating unique resources to storing one or more retrievable replicas of a data file.
In this sense a PoRep is both a proof of space (PoS) and a proof of retrievability (PoR).
This paper is a foundational study of PoReps, exploring both their capabilities and their limitations. While PoReps may unconditionally demonstrate possession of data, they fundamentally cannot guarantee that the data is stored redundantly. Furthermore, as PoReps are proofs of space, they must rely either on rational time/space tradeoffs or timing bounds on the online prover's runtime.
We introduce a rational security notion for PoReps called epsilon-rational replication based on the notion of an epsilon-Nash equilibrium, which captures the property that a server does not gain any significant advantage by storing its data in any other (non-redundant) format.
We apply our definitions to formally analyze two recently proposed PoRep constructions based on verifiable delay functions and depth robust graphs.
Lastly, we reflect on a notable application of PoReps---its unique suitability as a Nakamoto consensus mechanism that replaces proof-of-work with PoReps on real data, simultaneously incentivizing and subsidizing the cost of file storage.
Module-lattice KEM Over a Ring of Dimension 128 for Embedded Systems
Following the development of quantum computing, the demand for post-quantum alternatives to current cryptosystems has increased markedly in recent years. The main disadvantage of those schemes is the amount of resources needed to implement them in comparison to their classical counterparts. In conjunction with the growth of the Internet of Things, it is crucial to know whether post-quantum algorithms can run in constrained environments without incurring an unacceptable performance penalty. In this paper, we propose an instantiation of a module-lattice-based KEM working over a ring of dimension 128 using a limited amount of memory at runtime. It can be seen as a lightweight version of Kyber or a module version of Frodo. We propose parameters targeting popular 8-bit AVR microcontrollers and security level 1 of NIST. Our implementation fits in around 2 KB of RAM while still providing reasonable efficiency and 128 bits of security, but at the cost of reduced correctness.
Static Power Side-Channel Analysis - An Investigation of Measurement Factors
The static power consumption of modern CMOS devices has become a substantial concern in the context of the side-channel security of cryptographic hardware. Its continuous growth in nanometer-scaled technologies is not only inconvenient for effective low-power designs, but also creates a new target for power analysis adversaries. Notably, several of the numerous sources of static power dissipation in CMOS circuits exhibit an exponential dependency on environmental factors that a classical power analysis adversary controls, namely the operating temperature and the supply voltage. Furthermore, if the adversary controls the clock, the measurement interval can be adjusted arbitrarily. Our experiments on a 150nm CMOS ASIC reveal that, with respect to the signal-to-noise ratio in static power side-channel analyses, stretching the measurement interval decreases the noise exponentially and, even more importantly, raising the working temperature increases the signal exponentially. Control over the supply voltage has a far smaller, but still noticeable, positive impact as well. In summary, a static power analysis adversary can physically force a device to leak more information by controlling its operating environment, and can furthermore measure these leakages with arbitrary precision by modifying the interval length.
A signature scheme from the finite field isomorphism problem
In a recent paper, the authors and their collaborators proposed a new hard problem, called the finite field isomorphism problem, and used it to construct a fully homomorphic encryption scheme. In this paper, we investigate how one might build a digital signature scheme from this new problem. Intuitively, the hidden field isomorphism allows us to convert short vectors in the underlying lattice of one field into generic-looking vectors in an isomorphic field.
Practical Fault Injection Attacks on SPHINCS
The majority of currently deployed cryptographic public-key schemes are at risk of becoming insecure once large scale quantum computers become practical. Therefore, substitutes resistant to quantum attacks, known as post-quantum cryptography, are required. In particular, hash-based signature schemes appear to be the most conservative choice for post-quantum digital signatures. In this work, we mount the first practical fault attack against hash-based cryptography. The attack was originally proposed by Castelnovi, Martinelli, and Prest [9] and allows the creation of a universal signature forgery that applies to all current standardisation candidates (XMSS, LMS, SPHINCS+, and Gravity-SPHINCS). We perform the attack on an Arduino Due board featuring an ARM Cortex-M3 microprocessor running the original stateless scheme SPHINCS with a focus on practicality. We describe how the attack is mountable with a simple voltage glitch injection on the targeted platform, which allowed us to collect enough faulty signatures to create a universal forgery within seconds. As the attack also applies to stateful schemes, we show how caching one-time signatures can entirely prevent the attack for stateful schemes, such as XMSS and LMS. However, we discuss how protecting stateless schemes, like SPHINCS, SPHINCS+, and Gravity-SPHINCS, is more challenging, as this countermeasure does not apply as efficiently as in stateful schemes.
Differential Power Analysis of XMSS and SPHINCS
Quantum computing threatens conventional public-key cryptography. In response, standards bodies such as NIST increasingly focus on post-quantum cryptography. In particular, hash-based signature schemes are notable candidates for deployment. No rigorous side-channel analysis of hash-based signature schemes has been conducted so far. This work bridges this gap. We analyse the stateful hash-based signature schemes XMSS and XMSS^MT, which are currently undergoing standardisation at IETF, as well as SPHINCS — the only practical stateless hash-based scheme. While timing and simple power analysis attacks are unpromising, we show that the differential power analysis resistance of XMSS can be reduced to the differential power analysis resistance of the underlying pseudorandom number generator. This first systematic analysis helps to further increase confidence in XMSS, supporting current standardisation efforts. Furthermore, we show that at least a 32-bit chunk of the SPHINCS secret key can be recovered using a differential power analysis attack due to its stateless construction. We present novel differential power analyses on a SHA-2-based pseudorandom number generator for XMSS and a BLAKE-256-based pseudorandom function for SPHINCS-256 in the Hamming weight model. The first attack does not threaten current versions of XMSS, unless a customised pseudorandom number generator is used. The second one compromises the security of a hardware implementation of SPHINCS-256. Our analysis is supported by a power simulator implementation of SHA-2 for XMSS and a hardware implementation of BLAKE for SPHINCS. We also provide recommendations for XMSS implementers.
Cold Boot Attacks on Ring and Module LWE Keys Under the NTT
In this work, we consider the ring and module variants of the LWE problem and investigate cold boot attacks on cryptographic schemes based on these problems, wherein an attacker is faced with the problem of recovering a scheme's secret key from a noisy version of that key. The leakage resilience of cryptography based on the learning with errors (LWE) problem has been studied before, but there are only limited results considering the parameters observed in cold boot attack scenarios. There are two main encodings for storing ring- and module-LWE keys, and, as we show, the performance of cold boot attacks can be highly sensitive to the exact encoding used. The first encoding stores polynomial coefficients directly in memory. The second encoding performs a number theoretic transform (NTT) before storing the key, a commonly used method leading to more efficient implementations. We first give estimates for a cold boot attack complexity on the first encoding method based on standard algorithms; this analysis confirms that this encoding method is vulnerable to cold boot attacks only at very low bit-flip rates. We then show that, for the second encoding method, the structure introduced by using an NTT is exploitable in the cold boot setting: we develop a bespoke attack strategy that is much cheaper than our estimates for the first encoding when considering module-LWE keys. For example, at a \(1\%\) bit-flip rate (which corresponds roughly to what can be achieved in practice for cold boot attacks when applying cooling), a cold boot attack on Kyber KEM parameters has a cost of \(2^{43}\) operations when the second, NTT-based encoding is used for key storage, compared to \(2^{70}\) operations with the first encoding. On the other hand, in the case of the ring-LWE-based KEM, New Hope, the cold boot attack complexities are similar for both encoding methods.
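The difference between the two key-storage encodings can be illustrated with a toy number theoretic transform over a small prime. The parameters below (q = 17, n = 8) are chosen purely for illustration and are far smaller than those of Kyber or New Hope:

```python
# Toy illustration of the two key-storage encodings (not real parameters).
q, n = 17, 8          # q ≡ 1 (mod n), so an n-th root of unity exists mod q
w = 9                 # element of multiplicative order n = 8 modulo 17
w_inv, n_inv = 2, 15  # 9*2 ≡ 1 (mod 17) and 8*15 ≡ 1 (mod 17)

def ntt(a):
    """Naive forward number theoretic transform (cyclic, O(n^2))."""
    return [sum(a[i] * pow(w, i * j, q) for i in range(n)) % q
            for j in range(n)]

def intt(A):
    """Inverse transform: recovers the coefficient encoding."""
    return [n_inv * sum(A[j] * pow(w_inv, i * j, q) for j in range(n)) % q
            for i in range(n)]

secret = [3, 1, 4, 1, 5, 9, 2, 6]   # encoding 1: coefficients stored directly
secret_ntt = ntt(secret)            # encoding 2: NTT-domain storage
assert intt(secret_ntt) == secret   # both encodings describe the same key
```

The two in-memory representations of the same key are completely different bit patterns, which is why bit flips in a cold boot image interact so differently with the two encodings.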
A Systematic Study of the Impact of Graphical Models on Inference-based Attacks on AES
Belief propagation, or the sum-product algorithm, is a powerful and well known method for inference on probabilistic graphical models, which has been proposed for the specific use in side channel analysis by Veyrat-Charvillon et al.
We define a novel metric to capture the importance of variable nodes in factor graphs, we propose two improvements to the sum-product algorithm for the specific use case in side channel analysis, and we explicitly define and examine different ways of combining information from multiple side channel traces. With these new considerations we systematically investigate a number of graphical models that "naturally" follow from an implementation of AES. Our results are unexpected: neither a larger graph (i.e. more side channel information) nor greater connectedness necessarily leads to significantly better attacks. In fact our results demonstrate that in practice the (on balance) best choice is to utilise an acyclic graph in an independent graph combination setting, which gives us provable convergence to the correct key distribution. We provide evidence using both extensive simulations and a final confirmatory analysis on real trace data.
Public Key Compression for Constrained Linear Signature Schemes
We formalize the notion of a constrained linear trapdoor as an abstract strategy for the generation of signature schemes, concrete instantiations of which can be found in MQ-based, code-based, and lattice-based cryptography. Moreover, we revisit and expand on a transformation by Szepieniec et al. to shrink the public key at the cost of a larger signature while reducing their combined size. This transformation can be used in a way that is provably secure in the random oracle model, and in a more aggressive variant whose security remained unproven. In this paper we show that this transformation applies to any constrained linear trapdoor signature scheme, and prove the security of the first mode in the quantum random oracle model. Moreover, we identify a property of constrained linear trapdoors that is sufficient (and necessary) for the more aggressive variant to be secure in the quantum random oracle model. We apply the transformation to an MQ-based scheme, a code-based scheme and a lattice-based scheme targeting 128 bits of post-quantum security, and we show that in some cases the combined size of a signature and a public key can be reduced by more than a factor of 300.
Faster cofactorization with ECM using mixed representations
This paper introduces a novel implementation of the elliptic curve factoring method specifically designed for medium-size integers such as those arising by the billions in the cofactorization step of the number field sieve. In this context, our algorithm requires fewer modular multiplications than any other publicly available implementation. The main ingredients are: the use of batches of primes, fast point tripling, optimal double-base decompositions and Lucas chains, and a good mix of Edwards and Montgomery representations.
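For intuition, a double-base decomposition writes a scalar as a sum of terms of the form 2^a * 3^b, so that a scalar multiplication [k]P can be scheduled from cheap doublings and triplings. The naive greedy sketch below is only an illustration of the idea; the paper's decompositions are far more refined:

```python
# Naive greedy double-base decomposition: write k as a sum of 2^a * 3^b terms.
def double_base(k):
    terms = []
    while k > 0:
        # largest 2^a * 3^b not exceeding k (brute-force search, toy only)
        best = max((1 << a) * 3 ** b
                   for a in range(k.bit_length())
                   for b in range(k.bit_length())
                   if (1 << a) * 3 ** b <= k)
        terms.append(best)
        k -= best
    return terms

assert double_base(895) == [864, 27, 4]   # 895 = 2^5*3^3 + 3^3 + 2^2
```

Each term then costs only a doublings and b triplings plus one addition to accumulate, which is the source of the multiplication savings.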
Breaking Message Integrity of an End-to-End Encryption Scheme of LINE
In this paper, we analyze the security of the end-to-end encryption (E2EE) scheme of LINE, a.k.a. Letter Sealing. LINE is one of the most widely deployed instant messaging applications, especially in East Asia. By a close inspection of its protocols, we give several attacks against the message integrity of Letter Sealing. Specifically, we propose forgery and impersonation attacks on the one-to-one message encryption and the group message encryption.
All of our attacks are feasible with the help of an end-to-end (E2E) adversary who has access to the inside of the LINE server (e.g. the service provider LINE itself). We stress that the main purpose of E2EE is to provide protection against exactly this adversary. In addition, we found some attacks that do not even need the help of the E2E adversary, which reveals a critical security flaw in the protocol. Our results show that the E2EE scheme of LINE does not sufficiently guarantee the integrity of messages compared to state-of-the-art E2EE schemes such as Signal, which is used by WhatsApp and Facebook Messenger. We also provide countermeasures against our attacks.
We shared our findings with LINE Corporation in advance. LINE Corporation has confirmed that our attacks are valid as long as the E2E adversary is involved, and officially recognizes our results as a vulnerability that breaks the encryption.
On Hardware Implementation of Tang-Maitra Boolean Functions
In this paper, we investigate the hardware circuit complexity of the class of Boolean functions recently introduced by Tang and Maitra (IEEE-TIT 64(1): 393-402, 2018). While this class of functions has very good cryptographic properties, the exact hardware requirement is an immediate concern, as noted in the paper itself. In this direction, we study different candidate circuit architectures for implementing these functions, all based on finite field arithmetic and Boolean optimization, and provide an estimation of the circuit complexity for any input size n. We also present implementations for both ASIC and FPGA, providing further analysis on the practical aspects of the functions in question and the relation between these implementations and the theoretical bound. The practical results show that the Tang-Maitra functions are quite competitive in terms of area, while still maintaining an acceptable level of throughput performance for both ASIC and FPGA implementations.
Reproducible Families of Codes and Cryptographic Applications
In this paper we study structured linear block codes, starting from well known examples and generalizing them to a wide class of codes that we call reproducible codes. These codes have the property that they can be entirely generated from a small number of signature vectors, and consequently admit matrices that can be described in a very compact way. We then show some cryptographic applications of this class of codes and explain why the general framework we introduce may pave the way for future developments of code-based cryptography based on structured codes.
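The simplest instance of such a compact description is a circulant matrix, which is fully determined by a single signature vector (its first row). This toy sketch illustrates the principle; the paper's reproducible codes generalize well beyond circulant blocks:

```python
# A circulant matrix is reproducible from one signature vector: row i is the
# signature cyclically shifted right by i positions.
def circulant(sig):
    n = len(sig)
    return [[sig[(j - i) % n] for j in range(n)] for i in range(n)]

M = circulant([1, 0, 1, 1, 0])
assert M[0] == [1, 0, 1, 1, 0]   # row 0 is the signature itself
assert M[1] == [0, 1, 0, 1, 1]   # row 1 is the signature shifted right once
```

An n x n block is thus stored with n entries instead of n^2, which is the source of the compact keys in quasi-cyclic code-based schemes.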
Multiparty Non-Interactive Key Exchange and More From Isogenies on Elliptic Curves
We describe a framework for constructing an efficient non-interactive key exchange (NIKE) protocol for n parties for any n >= 2. Our approach is based on the problem of computing isogenies between isogenous elliptic curves, which is believed to be difficult. We do not obtain a working protocol because of a missing step that is currently an open problem. What we need to complete our protocol is an efficient algorithm that takes as input an abelian variety presented as a product of isogenous elliptic curves, and outputs an isomorphism invariant of the abelian variety.

Our framework builds a cryptographic invariant map, which is a new primitive closely related to a cryptographic multilinear map, but whose range does not necessarily have a group structure. Nevertheless, we show that a cryptographic invariant map can be used to build several cryptographic primitives, including NIKE, that were previously constructed from multilinear maps and indistinguishability obfuscation.
Public Accountability vs. Secret Laws: Can They Coexist?
Post 9/11, journalists, scholars and activists have pointed out that secret laws --- a body of law whose details, and sometimes whose mere existence, are classified as top secret --- were on the rise in all three branches of the US government due to growing national security concerns. Amid heated current debates on governmental wishes for exceptional access to encrypted digital data, one of the key issues is: which mechanisms can be put in place to ensure that government agencies follow agreed-upon rules in a manner which does not compromise national security objectives? This promises to be especially challenging when the rules, according to which access to encrypted data is granted, may themselves be secret.

In this work we show how the use of cryptographic protocols, and in particular zero-knowledge proofs, can ensure accountability and transparency of the government in this extraordinary, seemingly deadlocked, setting. We propose an efficient record-keeping infrastructure with versatile publicly verifiable audits that preserve perfect (information-theoretic) secrecy of record contents as well as of the rules by which the records are attested to abide. Our protocol is based on existing blockchain and cryptographic tools, including commitments and zero-knowledge SNARKs, and satisfies the properties of indelibility (i.e., no back-dating), perfect data secrecy, public auditability of secret data with secret laws, accountable deletion, and succinctness. We also propose a variant scheme where entities can be required to pay fees based on record contents (e.g., for violating regulations) while still preserving data secrecy. Our scheme can be directly instantiated on the Ethereum blockchain (and a simplified version with weaker guarantees can be instantiated with Bitcoin).
Fast Secure Matrix Multiplications over Ring-Based Homomorphic Encryption
Secure matrix computation is one of the most fundamental and useful operations for statistical analysis and machine learning while protecting the confidentiality of input data. Secure computation can be achieved by homomorphic encryption, which supports meaningful operations over encrypted data. HElib is a software library that implements the Brakerski-Gentry-Vaikuntanathan (BGV) homomorphic scheme, in which secure matrix-vector multiplication is proposed for operating on matrices. Recently, Duong et al. (Tatra Mt. Publ. 2016) proposed a new method for secure single-matrix multiplication over a ring-LWE-based scheme. In this paper, we generalize Duong et al.'s method to secure multiple-matrix multiplications over the BGV scheme. We also implement our method using HElib and show that it is much faster than HElib's matrix-vector multiplication for secure matrix multiplications.
Efficient Logistic Regression on Large Encrypted Data
Machine learning on encrypted data is a cryptographic method for analyzing private and/or sensitive data while preserving privacy. In the training phase, it takes encrypted training data as input and outputs an encrypted model without using the decryption key. In the prediction phase, it uses the encrypted model to predict results on new encrypted data.
In neither phase is the decryption key needed, and thus the privacy of the data is guaranteed as long as the underlying encryption is secure. It has many applications in areas with sensitive private data, such as finance, education, genomics, and medicine. While several studies have been reported on the prediction phase, few studies have been conducted on the training phase due to the inefficiency of homomorphic encryption (HE), leaving machine learning training on encrypted data as only a long-term goal.
In this paper, we propose an efficient algorithm for logistic regression on encrypted data, and evaluate our algorithm on real financial data consisting of 422,108 samples over 200 features. Our experiment shows that an encrypted model with a sufficient Kolmogorov-Smirnov statistic value can be obtained in $\sim$17 hours on a single machine. We also evaluate our algorithm on the public MNIST dataset, and it takes $\sim$2 hours to learn an encrypted model with 96.4% accuracy. Considering the inefficiency of HE, our result is encouraging and demonstrates, to the best of our knowledge for the first time, the practical feasibility of logistic regression training on large encrypted data.
Exploring Deployment Strategies for the Tor Network
In response to upcoming performance and security challenges of anonymity networks like Tor, it will be of crucial importance to be able to develop and deploy performance improvements and state-of-the-art countermeasures. In this paper, we therefore explore different deployment strategies and review their applicability to the Tor network. In particular, we consider flag day, dual stack, translation, and tunneling strategies and discuss their impact on the network, as well as common risks associated with each of them. In a simulation-based evaluation, which builds on historical data of Tor, we show that they can practically be applied to realize significant protocol changes in Tor. However, our results also indicate that during the transitional phase a certain degradation of anonymity is unavoidable with current viable deployment strategies.
A New Blind ECDSA Scheme for Bitcoin Transaction Anonymity
In this paper, we consider a scenario where a bitcoin liquidity provider sells bitcoins to clients. When a client pays for a bitcoin online, the provider is able to link the client's payment information to the bitcoin sold to that client. To address the clients' privacy concern, it is desirable for the provider to perform the bitcoin transaction with blind signatures. However, existing blind signature schemes are incompatible with the Elliptic Curve Digital Signature Algorithm (ECDSA) used by the existing Bitcoin protocol, and thus cannot be applied directly in Bitcoin. In this paper, we propose a new blind signature scheme that allows generating a blind signature compatible with standard ECDSA. We then make use of the new scheme to achieve bitcoin transaction anonymity. The new scheme is built on a variant of the Paillier cryptosystem and its homomorphic properties. As long as the modified Paillier cryptosystem is semantically secure, the new blind signature scheme provides blindness and unforgeability.
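For intuition, the additive homomorphism that such Paillier-based constructions rely on can be sketched with the textbook (unmodified) Paillier cryptosystem and toy parameters. The tiny primes below are illustration only; real deployments use moduli of 2048+ bits:

```python
import math
import random

# Toy textbook Paillier (tiny primes, illustration only).
p, q = 11, 13
n, n2 = p * q, (p * q) ** 2
g = n + 1                                # standard choice of generator
lam = math.lcm(p - 1, q - 1)             # Carmichael function of n
L = lambda x: (x - 1) // n
mu = pow(L(pow(g, lam, n2)), -1, n)      # decryption constant

def enc(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:           # r must be invertible mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Multiplying ciphertexts adds the underlying plaintexts modulo n.
c_sum = (enc(37) * enc(58)) % n2
assert dec(c_sum) == (37 + 58) % n
```

This additive property lets one party manipulate a hidden value inside a ciphertext without learning it, which is the kind of building block a blinding protocol composes.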
On the Menezes-Teske-Weng’s conjecture
In 2003, Alfred Menezes, Edlyn Teske and Annegret Weng presented a conjecture on properties of the solutions of a type of quadratic equation over binary extension fields. The conjecture was supported by extensive experiments, but its proof had remained unknown until now. We prove that this conjecture is correct. Furthermore, using the proved conjecture, we completely determine the null space of a class of linear polynomials.
Blockchained Post-Quantum Signatures
Inspired by the blockchain architecture and existing Merkle tree based signature schemes, we propose BPQS, an extensible post-quantum (PQ) resistant digital signature scheme best suited to blockchain and distributed ledger technologies (DLTs). One of the unique characteristics of the protocol is that it can take advantage of application-specific chain/graph structures in order to decrease key generation, signing and verification costs as well as signature size. Compared to recent improvements in the field, BPQS outperforms existing hash-based algorithms when a key is reused for reasonable numbers of signatures, while it supports a fallback mechanism to allow for a practically unlimited number of signatures if required. To our knowledge, this is the first signature scheme that can utilise an existing blockchain or graph structure to reduce the signature cost to one OTS, even when we plan to sign many times. This makes existing many-time stateful signature schemes obsolete for blockchain applications.
We provide an open source implementation of the scheme and benchmark it.
Platform-independent Secure Blockchain-Based Voting System
Cryptographic techniques are employed to ensure the security of voting systems and to encourage their wide adoption. However, in such electronic voting systems, the public bulletin board that is hosted by a third party for publishing and auditing the voting results must be trusted by all participants. Recently, a number of blockchain-based solutions have been proposed to address this issue. However, these systems are impractical to use due to the limits on the number of voters and candidates supported, and their security framework, which highly depends on the underlying blockchain protocol and suffers from potential attacks (e.g., force-abstention attacks). To deal with these two issues, we propose a practical, platform-independent, secure and verifiable voting system that can be deployed on any blockchain that supports the execution of smart contracts. Verifiability is inherently provided by the underlying blockchain platform, whereas cryptographic techniques like Paillier encryption, proofs of knowledge, and linkable ring signatures are employed to provide a framework for system security and user privacy that is independent of the security and privacy features of the blockchain platform. We analyse the correctness and coercion-resistance of our proposed voting system. We employ Hyperledger Fabric to deploy our voting system and analyse the performance of the deployed scheme numerically.
FPGA Cluster based high performance Cryptanalysis framework
In this paper, an ‘FPGA cluster’-based framework for high-performance cryptanalysis is proposed. The framework abstracts the underlying networked FPGA cluster into a unified acceleration resource. It does so by implementing the requested number of computation kernels (cryptographic modules) and managing the efficient distribution of the network bandwidth between the inter-FPGA and intra-FPGA computation kernels. Furthermore, an agile methodology for developing such networked computation kernels using a high-level-language (Python) based HDL library, and its seamless integration with a user-space cryptanalysis application, is discussed. A 40-bit partial-key attack on AES-256 is demonstrated as a proof of capability. Performance higher than clustered CPUs and GPUs at lower cost and power is reported.
Loamit: A Blockchain-based Residual Loanable-limit Query System
Currently, blockchain technology is experiencing exponential growth in academia and industry. Blockchains may provide fault-tolerance, tamper-resistance, credibility and privacy to users. In this paper, we propose a blockchain-based residual loanable-limit query system, called Loamit. Firstly, to the best of our knowledge, it is the first work to prevent a client who lacks the ability to repay from borrowing an un-repayable amount of money from independent banks, without releasing the client's personal privacy. Specifically, if a client wants to borrow a certain amount of money from a bank, the bank can get the client's residual loanable-limit from the alliance of banks without knowing the details of the client's previous loans and repayments. Secondly, most of the data in Loamit is verifiable, so malicious banks can be detected. Thirdly, Loamit is fault-tolerant, since it works smoothly as long as a certain number of banks are active and honest. Finally, we deploy the Loamit system on an Ethereum private blockchain and give the corresponding performance evaluation.
Proofs of Replicated Storage Without Timing Assumptions
In this paper we provide a formal treatment of proofs of replicated storage, a novel cryptographic primitive recently proposed in the context of a novel cryptocurrency, namely Filecoin. In a nutshell, a proof of replicated storage is a solution to the following problem: a user stores a file $m$ on $n$ different servers to ensure that the file will be available even if some of the servers fail. Using proofs of retrievability, the user could check that every server is indeed storing the file. However, what if the servers collude and, in order to save on resources, decide to store only one copy of the file? A proof of replicated storage guarantees that, unless the servers are indeed reserving the space necessary to store $n$ copies of the file, the user will not accept the proof. While some candidate proofs of replicated storage have already been proposed, their soundness relies on timing assumptions, i.e., the user must reject the proof if the prover does not reply within a certain time bound. In this paper we provide the first construction of a proof of replication which does not rely on any timing assumptions.
Homomorphic Evaluation of Lattice-Based Symmetric Encryption Schemes
Optimizing the performance of Fully Homomorphic Encryption (FHE) is an active trend of research in cryptography. One avenue of improvement is to use a hybrid construction with a classical symmetric encryption scheme to transfer encrypted data to the Cloud. This reduces the required bandwidth, since the expansion factor of symmetric schemes (the ratio between the ciphertext and plaintext lengths) is close to one, whereas for FHE schemes it is on the order of 1,000 to 1,000,000. However, such a construction requires the decryption circuit of the symmetric scheme to be easy to evaluate homomorphically. Several works have studied the cost of homomorphically evaluating classical block ciphers, and some recent works have suggested new homomorphic-oriented constructions of block ciphers or stream ciphers. Since the multiplication gate of FHE schemes significantly increases the noise of the ciphertext, we cannot afford too many multiplication stages in the decryption circuit. Consequently, FHE-friendly symmetric encryption schemes have a decryption circuit with small multiplicative depth.
We aim at minimizing the cost of the homomorphic evaluation of the decryption of symmetric encryption schemes. To do so, we focus on schemes based on learning problems: Learning With Errors (LWE), Learning Parity with Noise (LPN) and Learning With Rounding (LWR). We show that they have lower multiplicative depth than usual block ciphers, and hence allow more FHE operations before a heavy bootstrapping becomes necessary. Moreover, some of them come with a security proof. Finally, we implement our schemes in HElib. Experimental evidence shows that they achieve lower amortized and total running time than previously reported in the literature: our schemes are 10 to 10,000 times more efficient in time per bit, and the total running time is also reduced by a factor of 20 to 10,000. Of independent interest, the security of our LWR-based scheme is related to LWE, and we provide an efficient security proof that allows smaller parameters to be taken.
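To see why learning-based schemes have shallow decryption circuits, consider a generic toy LWE-style symmetric encryption (made-up parameters, not secure, and not the paper's concrete scheme): decryption is just an inner product with the secret followed by a rounding step, with no iterated round function at all.

```python
import random

# Toy LWE-based symmetric encryption of a single bit (illustrative only).
q, t, n_dim = 2048, 2, 16
s = [random.randrange(q) for _ in range(n_dim)]   # secret key

def encrypt(m):                      # m in {0, 1}
    a = [random.randrange(q) for _ in range(n_dim)]
    e = random.randrange(-4, 5)      # small noise term
    b = (sum(x * y for x, y in zip(a, s)) + e + m * (q // t)) % q
    return a, b

def decrypt(a, b):
    v = (b - sum(x * y for x, y in zip(a, s))) % q
    return round(t * v / q) % t      # rounding removes the small noise

for m in (0, 1):
    assert decrypt(*encrypt(m)) == m
```

Since the inner product is linear in the secret, homomorphically evaluating this decryption costs essentially no ciphertext-ciphertext multiplications, in contrast to the many nonlinear rounds of a classical block cipher.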
Efficient Collision Attack Frameworks for RIPEMD-160
RIPEMD-160 is an ISO/IEC standard and has been applied to generate the Bitcoin address with SHA-256. Due to the complex dual-stream structure, the first collision attack on reduced RIPEMD-160 presented by Liu, Mendel and Wang at Asiacrypt 2017 only reaches 30 steps, having a time complexity of $2^{70}$. Apart from that, several semi-free-start collision attacks have been published for reduced RIPEMD-160 with the start-from-the-middle method. Inspired by such start-from-the-middle structures, we propose two novel efficient collision attack frameworks for reduced RIPEMD-160 by making full use of the weakness of its message expansion. These two frameworks are called the dense-left-and-sparse-right (DLSR) framework and the sparse-left-and-dense-right (SLDR) framework. As it turns out, the DLSR framework is more efficient than the SLDR framework, since one more step can be fully controlled, though with an extra $2^{32}$ memory complexity. To construct the best differential characteristics for the DLSR framework, we carefully build the linearized part of the characteristics and then solve the corresponding nonlinear part using a guess-and-determine approach. Based on the newly discovered differential characteristics, we provide colliding message pairs for the first practical collision attacks on 30 and 31 (out of 80) steps of RIPEMD-160 with time complexity $2^{35.9}$ and $2^{41.5}$ respectively. In addition, benefiting from partial calculation, we can attack 33 and 34 (out of 80) steps of RIPEMD-160 with time complexity $2^{67.1}$ and $2^{74.3}$ respectively. When applying the SLDR framework to the differential characteristic used in the Asiacrypt 2017 paper, we significantly improve the time complexity by a factor of $2^{13}$. However, it still cannot compete with the results obtained from the DLSR framework.
To the best of our knowledge, these are the best collision attacks on reduced RIPEMD-160 with respect to the number of steps, including the first colliding message pairs for 30 and 31 steps of RIPEMD-160.
Side-Channel Analysis of SM2: A Late-Stage Featurization Case Study
SM2 is a public key cryptography suite originating from Chinese standards, including digital signatures and public key encryption. Ahead of schedule, code for this functionality was recently mainlined in OpenSSL, marked for the upcoming 1.1.1 release.
We perform a security review of this implementation, uncovering various deficiencies ranging from traditional software quality issues to side-channel risks.
To assess the latter, we carry out a side-channel security evaluation and discover that the implementation hits every pitfall seen for OpenSSL's ECDSA code in the past decade. We carry out remote timings, cache timings, and EM analysis, with accompanying empirical data to demonstrate secret information leakage during execution of both digital signature generation and public key decryption.
Finally, we propose, implement, and empirically evaluate countermeasures.
Designing Efficient Dyadic Operations for Cryptographic Applications
Cryptographic primitives from coding theory are some of the most promising candidates for NIST's Post-Quantum Cryptography Standardization process. In this paper, we introduce a variety of techniques to improve operations on dyadic matrices, a particular type of symmetric matrices that appear in the automorphism group of certain linear codes. Besides the independent interest, these techniques find an immediate application in practice. In fact, one of the candidates for the Key Exchange functionality, called DAGS, makes use of quasi-dyadic matrices to provide compact keys for the scheme.