All papers in 2021 (Page 7 of 1705 results)
Improved Linear Approximations of SNOW-V and SNOW-Vi
Abstract. In this paper, we improve the linear approximations of SNOW-V and SNOW-Vi. We optimize the automatic search program by replacing the S-box part with accurate characterizations of the Walsh spectra of the S-boxes, which yields a series of trails with higher correlations. Building on existing results, we investigate the common features of linear approximation trails with high correlation and search for more trails by exhausting free masks. By summing the correlations of trails with the same input and output masks, we get closer to the real correlation. As a result, we obtain a linear approximation with correlation -2^{-47.76}, which leads to a correlation attack on SNOW-V and SNOW-Vi with time complexity 2^{246.53}, data complexity 2^{237.5}, and memory complexity 2^{238.77}.
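To make the S-box characterization concrete, here is a minimal sketch (our illustration, not the authors' search tool) of a Walsh spectrum computation: the coefficient W(a,b) determines the correlation of the linear approximation a·x ⊕ b·S(x), which is what the optimized model feeds to the trail search. The 4-bit S-box below is an arbitrary example, not a SNOW-V component.

    # Walsh spectrum of a small S-box; the S-box is illustrative only.
    SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
            0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]
    n = 4

    def dot(a, x):
        """Parity of the bitwise AND, i.e. the GF(2) inner product a.x."""
        return bin(a & x).count("1") & 1

    def walsh(a, b, sbox):
        """Walsh coefficient W(a,b) = sum_x (-1)^(a.x XOR b.S(x))."""
        return sum((-1) ** (dot(a, x) ^ dot(b, sbox[x])) for x in range(2 ** n))

    # Correlation of the approximation (a -> b) is W(a,b) / 2^n.
    best = max(((a, b, walsh(a, b, SBOX)) for a in range(2 ** n)
                for b in range(1, 2 ** n)), key=lambda t: abs(t[2]))
    a, b, w = best
    print(f"best masks a={a:#x}, b={b:#x}, correlation={w / 2 ** n:+.4f}")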
New Cryptanalysis of ZUC-256 Initialization Using Modular Differences
ZUC-256 is a stream cipher designed for 5G applications by the ZUC team. Together with AES-256 and SNOW-V, it is currently under evaluation for standardization in 5G mobile telecommunications by the Security Algorithms Group of Experts (SAGE). A notable feature of the round update function of ZUC-256 is that many operations are defined over different fields, which significantly increases the difficulty of analyzing the algorithm.
As a main contribution, with the tools of the modular difference, signed difference and XOR difference, we develop new techniques to carefully control the interactions between these operations defined over different fields. At first glance, our techniques are somewhat similar to those developed by Wang et al. for the MD-SHA hash family. However, as ZUC-256 is quite different from the MD-SHA hash family and its round function is much more complex, we are indeed dealing with different problems and overcoming new obstacles.
As main results, by utilizing complex input differences, we present the first distinguishing attacks on 31 out of 33 rounds of ZUC-256 and 30 out of 33 rounds of the new version of ZUC-256, called ZUC-256-v2, with low time and data complexities, respectively. These attacks target the initialization phase and work in the related-key model with weak keys. Moreover, with a novel IV-correcting technique, we show how to efficiently recover at least 16 key bits for 15-round ZUC-256 and 14-round ZUC-256-v2 in the related-key setting, respectively. It is unclear whether our attacks can be further extended to more rounds with more advanced techniques. Based on the current attacks, we believe that the full 33 initialization rounds provide only marginal security.
Exploring Differential-Based Distinguishers and Forgeries for ASCON
Automated methods have become crucial components when searching for distinguishers against symmetric-key cryptographic primitives. While MILP and SAT solvers are among the most popular tools to model ciphers and perform cryptanalysis, other methods with different performance profiles are appearing. In this article, we explore the use of Constraint Programming (CP) for differential cryptanalysis on the ASCON authenticated encryption family (first choice of the CAESAR lightweight applications portfolio and current finalist of the NIST LWC competition) and its internal permutation. We first present a search methodology for finding differential characteristics for ASCON with CP, which can easily find the best differential characteristics already reported by the ASCON designers. This shows the capability of CP in generating easily good differential results compared to dedicated search heuristics. Based on our tool, we also parametrize the search strategies in CP to generate other differential characteristics with the goal of forming limited-birthday distinguishers for 4, 5, 6 and 7 rounds and rectangle attacks for 4 and 5 rounds of the ASCON internal permutation. We propose a categorization of the distinguishers into black-box and non-black-box to better differentiate them as they are often useful in different contexts. We also obtained limited-birthday distinguishers which represent currently the best known distinguishers for 4, 5 and 6 rounds under the category of non-black-box distinguishers. Leveraging again our tool, we have generated forgery attacks against both reduced-rounds ASCON-128 and ASCON-128a, improving over the best reported results at the time of writing. Finally, using the best differential characteristic we have found for 2 rounds, we could also improve a recent attack on round-reduced ASCON-Hash.
Last updated: 2021-09-12
Construction and Implementation of Practical Reusable and Robust Fuzzy Extractors for Fingerprint
Among the various authentication methods, biometrics provide good user friendliness. However, the non-renewability of biometrics leads to the problem that they might be stolen. The emergence of fuzzy extractors is a promising solution to this problem. Fuzzy extractors can extract uniformly distributed keys from various noisy random sources (such as biometrics, physical unclonable functions, and quantum bits). However, research on fuzzy extractors has mainly focused on the theoretical level and has not considered how the extracted biometrics should be encoded and implemented. This paper first introduces a method of feature selection and encoding for fingerprints, together with a secure sketch based on Chebyshev distance in a rectangular coordinate system. Then we present a construction of reusable and robust fuzzy extractors (rrFE). Meanwhile, we prove that our secure sketch scheme has sufficient security. Finally, we present the complete experimental process and a demo program, and test the performance of our proposed fuzzy extractors. Compared with other schemes, our scheme has lower storage overhead.
Differential Privacy in Constant Function Market Makers
Constant function market makers (CFMMs) are the most popular mechanism for facilitating decentralized trading. While these mechanisms have facilitated hundreds of billions of dollars of trades, they provide users with little to no privacy. Recent work illustrates that privacy cannot be achieved in CFMMs without forcing worse pricing and/or latency on end users. This paper more precisely quantifies the trade-off between pricing and privacy in CFMMs. We analyze a simple privacy-enhancing mechanism called Uniform Random Execution and prove that it provides $(\epsilon, \delta)$-differential privacy. The privacy parameter $\epsilon$ depends on the curvature of the CFMM trading function and the number of trades executed. This mechanism can be implemented in any blockchain system that allows smart contracts to access a verifiable random function. We also investigate the worst case complexity over all private CFMM mechanisms using recent results from private PAC learning. These results suggest that one cannot do much better than Uniform Random Execution in CFMMs with non-zero curvature. Our results provide an optimistic outlook on providing partial privacy in CFMMs.
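As a toy illustration of the mechanism (our sketch under simplifying assumptions, not the paper's exact construction), Uniform Random Execution can be pictured on a constant-product market maker: trades are split into small pieces and executed in a uniformly random order, so individual price moves are harder to attribute to individual traders. Pool values and chunk size are arbitrary.

    import random

    def swap_x_for_y(pool, dx):
        """Trade dx of asset X into the pool; return the amount of Y received."""
        x, y = pool
        k = x * y
        new_x = x + dx
        new_y = k / new_x
        out = y - new_y
        pool[0], pool[1] = new_x, new_y
        return out

    def uniform_random_execution(pool, trades, chunk=1.0):
        """Split each trade into chunk-sized pieces and execute all pieces
        in a uniformly random order, obscuring who moved the price when."""
        pieces = []
        for trader, amount in trades:
            n_full, rem = divmod(amount, chunk)
            pieces += [(trader, chunk)] * int(n_full)
            if rem > 1e-12:
                pieces.append((trader, rem))
        random.shuffle(pieces)
        received = {}
        for trader, dx in pieces:
            received[trader] = received.get(trader, 0.0) + swap_x_for_y(pool, dx)
        return received

    pool = [1000.0, 1000.0]
    print(uniform_random_execution(pool, [("alice", 5.0), ("bob", 3.5)]))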
REDsec: Running Encrypted Discretized Neural Networks in Seconds
Machine learning as a service (MLaaS) has risen to become a prominent technology due to the large development time, amount of data, hardware costs, and level of expertise required to develop a machine learning model. However, privacy concerns prevent the adoption of MLaaS for applications with sensitive data. A promising privacy preserving solution is to use fully homomorphic encryption (FHE) to perform the ML computations. Recent advancements have lowered computational costs by several orders of magnitude, opening doors for secure practical applications to be developed. In this work we introduce the REDsec framework that optimizes FHE-based private machine learning inference by leveraging ternary neural networks. Such neural networks, whose weights are constrained to {-1,0,1}, have special properties that we exploit to operate efficiently in the homomorphic domain. REDsec introduces novel features, including a new data re-use scheme that enables bidirectional bridging between the integer and binary domains for the first time in FHE. This enables us to implement very efficient binary operations for multiplication and activations, as well as efficient integer domain additions. Our approach is complemented by a new GPU acceleration library, dubbed (RED)cuFHE, which supports both binary and integer operations on multiple GPUs. REDsec brings unique benefits by supporting user-defined models as input (bring-your-own-network), automation of plaintext training, and efficient evaluation of private inference leveraging TFHE. In our analysis, we perform inference experiments with the MNIST, CIFAR-10, and ImageNet datasets and report performance improvements compared to related works.
MILP modeling of Boolean functions by minimum number of inequalities
This work presents techniques for modeling Boolean functions by mixed-integer linear programming (MILP) inequalities on binary variables in-place (without auxiliary variables), reaching the minimum possible number of inequalities for small functions and providing meaningful lower bounds on the number of inequalities when reaching the minimum is infeasible. While the minimum number of inequalities does not directly translate to the best performance in MILP applications, it nonetheless provides a useful benchmark. We remark that our framework is heuristic and relies on SAT solvers and MILP optimization, and so its feasibility is limited.
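A minimal sketch of what "in-place modeling" means here (our toy example, not the paper's technique): the three classic inequalities for an AND gate use only the input/output variables themselves, and a brute-force check confirms they carve out exactly the function's graph.

    from itertools import product

    # Each inequality is (coeffs, const) meaning coeffs . vars + const >= 0.
    AND_MODEL = [
        ((1, 0, -1), 0),   # x1 - y >= 0            (y <= x1)
        ((0, 1, -1), 0),   # x2 - y >= 0            (y <= x2)
        ((-1, -1, 1), 1),  # -x1 - x2 + y + 1 >= 0  (y >= x1 + x2 - 1)
    ]

    def satisfies(point, model):
        return all(sum(c * v for c, v in zip(coeffs, point)) + const >= 0
                   for coeffs, const in model)

    target = {(x1, x2, x1 & x2) for x1, x2 in product((0, 1), repeat=2)}
    solutions = {p for p in product((0, 1), repeat=3) if satisfies(p, AND_MODEL)}
    assert solutions == target, "model does not describe the AND function"
    print("3 inequalities exactly describe the AND gate in-place")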
Individual Verifiability and Revoting in the Estonian Internet Voting System
Individual verifiability remains one of the main practical challenges in e-voting systems and, despite the central importance of this property, countries that sought to implement it faced repeated security problems.
In this note, we revisit this property in the context of the IVXV version of the Estonian voting system, which was used for the Estonian municipal elections of 2017 and for the Estonian and European parliamentary elections of 2019.
We show that a compromised voter device can defeat the individual verifiability mechanism of the current Estonian voting system. Our attack takes advantage of the revoting option that is available in the Estonian voting system, and only requires compromise of the voting client application: it does not require compromising the mobile device verification app, or any server side component.
Last updated: 2021-12-07
The Hadamard square of concatenated linear codes
The paper is devoted to the Hadamard square of concatenated linear codes. Such codes consist of codewords obtained by concatenating parts of codewords from other codes. It is proved that if the sum of the dimensions of the Hadamard squares of the codes used in the concatenation is slightly less than the dimension of the entire space, then the Hadamard square of the concatenated code is equal to the Cartesian product of the Hadamard squares of the code-components.
This means that the cryptanalysis of many code-based post-quantum cryptographic mechanisms built on concatenated codes is equivalent to the cryptanalysis of these mechanisms built on the code-components. So using a concatenation of codes from different classes instead of a single class of codes does not, generally speaking, increase the cryptographic strength of the mechanisms.
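For readers unfamiliar with the object, a short sketch (illustrative toy code): the Hadamard (Schur) square of a code is spanned by the coordinatewise products of pairs of codewords, so products of generator rows give a spanning set whose GF(2) rank is its dimension. The generator matrix below is arbitrary.

    import numpy as np

    def rank_gf2(rows):
        """Rank over GF(2) via Gaussian elimination."""
        rows = [r.copy() for r in rows]
        rank, n = 0, len(rows[0])
        for col in range(n):
            pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
            if pivot is None:
                continue
            rows[rank], rows[pivot] = rows[pivot], rows[rank]
            for i in range(len(rows)):
                if i != rank and rows[i][col]:
                    rows[i] ^= rows[rank]
            rank += 1
        return rank

    def hadamard_square_basis(G):
        """Coordinatewise products of pairs of generator rows span C * C."""
        k = len(G)
        return [G[i] & G[j] for i in range(k) for j in range(i, k)]

    G = [np.array(r, dtype=np.uint8) for r in
         [[1, 0, 0, 1, 1, 0, 1],
          [0, 1, 0, 1, 0, 1, 1],
          [0, 0, 1, 0, 1, 1, 1]]]
    sq = hadamard_square_basis(G)
    print("dim C =", rank_gf2(G), " dim C*C =", rank_gf2(sq))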
Mt. Random: Multi-Tiered Randomness Beacons
Many decentralized applications require a common source of randomness that cannot be biased or predicted by any single party. Randomness beacons provide such a functionality, allowing parties to periodically obtain fresh random outputs and verify that they are computed correctly.
In this work, we propose Mt. Random, a multi-tiered randomness beacon that combines Publicly Verifiable Secret Sharing (PVSS) and (Threshold) Verifiable Random Function (VRF) techniques in order to provide efficiency/randomness quality trade-offs with security under the standard DDH assumption (in the random oracle model) using only a bulletin board as setup (a requirement for the vast majority of beacons). Each tier provides a constant stream of random outputs offering progressive efficiency vs. quality trade-offs: true uniform randomness is refreshed less frequently than pseudorandomness, which in turn is refreshed less frequently than (bounded) biased randomness. This wide span of efficiency/quality allows applications to consume random outputs from an optimal point in this trade-off spectrum. In order to achieve these results, we construct two new building blocks of independent interest: GULL, a PVSS-based beacon that preprocesses a large batch of random outputs but allows for the gradual release of smaller ``sub-batches'', which is a first in the literature of randomness beacons; and a publicly verifiable and unbiasable Distributed Key Generation (DKG) protocol, which is significantly more efficient than most previous DKGs secure under standard assumptions and closely matches the efficiency of the currently most efficient biasable DKG protocol.
We showcase the efficiency of our novel building blocks and of the Mt. Random beacon via benchmarks made with a prototype implementation.
Analyzing Masked Ciphers Against Transition and Coupling Effects
This paper discusses how to analyze the probing security of masked symmetric primitives against the leakage effects from CHES 2018: glitches, transitions, and coupling effects. This is illustrated on several architectures of ciphers such as PRESENT, AES, and ASCON, where we transform glitch-extended probing secure maskings into transition- and/or coupling-secure ones. The analysis uses linear cryptanalytic methods and the diffusion layers of the cipher to efficiently protect against the advanced leakage effects.
Resilient Uniformity: Applying Resiliency in Masking
Threshold Implementations are a well-known countermeasure against side-channel attacks based on masking techniques. While sufficient properties are known to defend against first-order side-channel attacks, it is not known how to achieve higher-order security. This work generalizes the Threshold Implementation notion of uniformity and proves that it achieves second-order protection. The notion is applied to create a second-order masking of the PRESENT cipher with a low randomness cost.
Quantum Algorithms for Variants of Average-Case Lattice Problems via Filtering
We show polynomial-time quantum algorithms for the following problems:
(*) Short integer solution (SIS) problem under the infinity norm, where the public matrix is very wide, the modulus is a polynomially large prime, and the bound on the infinity norm is set to half of the modulus minus a constant.
(*) Learning with errors (LWE) problem given LWE-like quantum states with polynomially large moduli and certain error distributions, including bounded uniform distributions and Laplace distributions.
(*) Extrapolated dihedral coset problem (EDCP) with certain parameters.
The SIS, LWE, and EDCP problems in their standard forms are as hard as solving lattice problems in the worst case. However, the variants that we can solve are not in the parameter regimes known to be as hard as solving worst-case lattice problems. Still, no classical or quantum polynomial-time algorithms were known for the variants of SIS and LWE we consider. For EDCP, our quantum algorithm slightly extends the result of Ivanyos et al. (2018).
Our algorithms for variants of SIS and EDCP use the existing quantum reductions from those problems to LWE, or more precisely, to the problem of solving LWE given LWE-like quantum states. Our main contribution is solving LWE given LWE-like quantum states with interesting parameters using a filtering technique.
SoK: Deep Learning-based Physical Side-channel Analysis
Side-channel attacks have represented a realistic and serious threat to the security of embedded devices for almost three decades. A wide variety of attacks and targets to which they apply has been introduced, and while the area of side-channel attacks and mitigations is very well researched, it is yet to be consolidated.
Deep learning-based side-channel attacks entered the field in recent years with the promise of more competitive performance and enlarged attackers' capabilities compared to other techniques. At the same time, the new attacks bring new challenges and complexities to the domain, making a systematization of the existing knowledge ever more necessary.
In this SoK, we do exactly that, and by bringing new insights, we systematically structure the current knowledge of deep learning in side-channel analysis. We first dissect deep learning-assisted attacks into different phases and map those phases to the efforts conducted so far in the domain. For each of the phases, we identify the weaknesses and challenges that triggered the known open problems.
We connect the attacks to the existing threat models and evaluate their advantages and drawbacks. We finish by discussing other threat models that should be investigated and propose directions for future works.
No (Good) Loss no Gain: Systematic Evaluation of Loss functions in Deep Learning-based Side-channel Analysis
Deep learning is a powerful direction for profiling side-channel analysis as it can break targets protected with countermeasures even with a relatively small number of attack traces.
Still, it is necessary to conduct hyperparameter tuning for strong attack performance, which can be far from trivial.
Besides a plethora of options stemming from the machine learning domain, recent years also brought neural network elements specially designed for side-channel analysis.
An important hyperparameter is the loss function, which calculates the error or loss between the actual and desired output. The resulting loss is used to update the weights associated with the connections between the neurons or filters of the deep learning neural network. Unfortunately, despite the loss function being a highly relevant hyperparameter, there is no systematic comparison among different loss functions.
This work provides a detailed study on the performance of different loss functions in the SCA context. We evaluate five loss functions commonly used in machine learning and two loss functions proposed for SCA.
Our results show that one of the SCA-specific loss functions (called CER) performs very well and outperforms other loss functions in most evaluated settings. Finally, our results show that categorical cross-entropy represents a good option for most settings, especially if there is a requirement to work well with different neural network architectures.
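As a small illustration of the role described above (our example; the formula is standard, the numbers invented), categorical cross-entropy scores the network's predicted class probabilities against the true labels, and its value is what the weight updates minimize:

    import numpy as np

    def categorical_cross_entropy(p, y_onehot):
        """Mean of -log probability assigned to the true class."""
        eps = 1e-12  # avoid log(0)
        return -np.mean(np.sum(y_onehot * np.log(p + eps), axis=1))

    p = np.array([[0.7, 0.2, 0.1],   # predicted class probabilities per trace
                  [0.1, 0.8, 0.1]])
    y = np.array([[1, 0, 0],         # true (one-hot) labels
                  [0, 1, 0]])
    print(categorical_cross_entropy(p, y))  # lower is better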
Towards Accountability in CRS Generation
It is well known that several cryptographic primitives cannot be achieved without a common reference string (CRS). Those include, for instance, non-interactive zero-knowledge for NP, or maliciously secure computation in fewer than four rounds. The security of those primitives heavily relies on the assumption that the trusted authority, who generates the CRS, does not misuse the randomness used in the CRS generation. However, we argue that there is no such thing as an unconditionally trusted authority and every authority must be held accountable for any trust to be well-founded. Indeed, a malicious authority can, for instance, recover private inputs of honest parties given transcripts of the protocols executed with respect to the CRS it has generated.
While eliminating trust in the trusted authority may not be entirely feasible, can we at least move towards achieving some notion of accountability? We propose a new notion in which, if the CRS authority releases the private inputs of protocol executions to others, we can then provide a publicly-verifiable proof that certifies that the authority misbehaved. We study the feasibility of this notion in the context of non-interactive zero knowledge and two-round secure two-party computation.
Threshold scheme to share a secret by means of sound ranging
In this short note we consider the scheme to share a bitstring secret among $n$ parties such that any $m$ of them, cooperating, can reconstruct it, but any $m - 1$ of them cannot (a so-called $(m,n)$-threshold scheme). The scheme is based on the sound ranging problem, which is to determine the unknown position of the source and the unknown instant when it emitted the sound from known instants when the sound wave reached known sensors. The features are 1) shadows are computed not so much by the secret dealer, but rather by environment where the sound propagates, so the amount of computations performed by the dealer is $O(1)$ instead of $O(n)$ as $n \rightarrow \infty$, and 2) the dealer does not need to establish separate secure channel with each party. There are severe inherent drawbacks though.
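To make the underlying sound ranging problem concrete, a toy numerical sketch (our illustration of the classical problem, not the sharing scheme itself, using SciPy's generic least-squares solver): given known sensor positions and arrival instants t_i = t0 + |p - s_i|/c, recover the source position p and the emission instant t0.

    import numpy as np
    from scipy.optimize import least_squares

    c = 343.0  # speed of sound, m/s
    sensors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
    true_p, true_t0 = np.array([37.0, 62.0]), 1.5
    arrivals = true_t0 + np.linalg.norm(sensors - true_p, axis=1) / c

    def residuals(v):
        # v packs the unknowns: position (2 coords) and emission instant t0.
        p, t0 = v[:2], v[2]
        return t0 + np.linalg.norm(sensors - p, axis=1) / c - arrivals

    sol = least_squares(residuals, x0=[50.0, 50.0, 0.0]).x
    print("recovered position:", sol[:2], " emission time:", sol[2])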
Studying Bitcoin privacy attacks and their Impact on Bitcoin-based Identity Methods
The Bitcoin blockchain was the first publicly verifiable and distributed ledger, where everyone can download and check the full history of all data records from the genesis block. These properties led to the emergence of new types of applications and the redesign of traditional systems that no longer respond to current business needs (e.g., transparency, protection against censorship, decentralization). One particular application is the use of blockchain technology to enable decentralized and self-sovereign identities, including new mechanisms for creating, resolving, and revoking them.
The public availability of data records has, in turn, paved the way for new kinds of attacks that combine sophisticated heuristics with auxiliary information to compromise users' privacy and deanonymize their identities.
In this paper, we review and categorize Bitcoin privacy attacks, investigate their impact on one of the Bitcoin-based identity methods namely did:btcr, and analyze and discuss its privacy properties.
Methods for Decentralized Identities: Evaluation and Insights
Recent technological shifts have pressured businesses to reshape the way they operate and transact. At the heart of this restructuring, identity management established itself as an essential building block in both B2C and B2B business models. Trustworthy identities may refer to customers, businesses, suppliers or assets, and enable trusted communications between different actors. Unfortunately, traditional identity management systems rely on centralized architectures and trust in third-party services. With the inception of blockchain technology, new methods for managing identity emerged, which promise better decentralization and self-sovereignty. This paper provides an evaluation of a selection of distributed identity methods, and analyzes their properties based on the categorization specified in the W3C recommendation rubric.
How do the Arbiter PUFs Sample the Boolean Function Class?
An Arbiter-based Physical Unclonable Function (sometimes called Physically Unclonable Function, or in short PUF) is a hardware-based pseudorandom bit generator. The pseudorandomness in the output bits depends on device-specific parameters. For example, based on the delay parameters, an $n$-length Arbiter PUF can be considered as an $n$-variable Boolean function. We note that the random variation of the delay parameters cannot exhaust all the Boolean functions and the achievable class is significantly smaller as well as restricted. While this is expected (as the autocorrelation property in certain cases is quite biased), we present a more disciplined and first theoretical combinatorial study in this domain. Our work shows how one can explore the functions achieved through an Arbiter-based PUF construction with random delay parameters. Our technique mostly shows the limitation of such functions from the angle of cryptographic evaluation, as the subclass of Boolean functions can be identified with much better efficiency (much less complexity) than random. On the other hand, we note that under certain constraints on the weights of inputs, such a simple model of Arbiter PUFs provides good cryptographic parameters in terms of differential analysis. In this regard, we theoretically solve the problem of autocorrelation properties in a restricted space of input variables with a fixed weight. Experimental evidence complements our theoretical findings.
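The "PUF as Boolean function" view is usually made concrete via the standard additive delay model, sketched below (our illustration of that well-known model, not the paper's combinatorial analysis): the response is the sign of an inner product between device-specific weights and a parity feature vector of the challenge.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 16
    w = rng.normal(size=n + 1)  # device-specific delay parameters

    def features(challenge):
        """Phi_i = prod_{j>=i} (1 - 2*c_j), plus a constant 1 for the last stage."""
        signs = 1 - 2 * np.asarray(challenge)
        phi = np.cumprod(signs[::-1])[::-1]
        return np.append(phi, 1.0)

    def respond(challenge):
        return int(np.dot(w, features(challenge)) > 0)

    # The same challenge always yields the same bit; different devices (weights)
    # realize different Boolean functions, but only a restricted subclass.
    c = rng.integers(0, 2, size=n)
    print(c, "->", respond(c))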
Homomorphic Encryption for Multiple Users with Less Communications
Keeping privacy for every entity in outsourced computation is always a crucial issue. For efficient secure computation, homomorphic encryption (HE) is one nice solution. In particular, multikey homomorphic encryption (MKHE), which allows homomorphic evaluation on data encrypted under different keys, is one of the simplest solutions for secure computation that handles multiple users' data. However, the current main problem of MKHE is that the dimension of its evaluated ciphertexts grows with the number of users. To solve this problem, there are several variants of multikey homomorphic encryption schemes that keep the size of the ciphertext constant for a fixed number of users. However, users must interact with one another before computation to provide their inputs, which increases setup complexity. Moreover, all the existing MKHE schemes and their variants have unique benefits that cannot easily be achieved at the same time in one scheme. In other words, each type of scheme has a computational scenario in which it performs best.
In this paper, we suggest more efficient evaluation key generation algorithms (relinearization key and bootstrapping key) for the existing variants of MKHE schemes that have no ciphertext expansion for a fixed number of users. Our method only requires a very simple and minor pre-processing step, distributing public keys, which is not counted as a round at all in many other applications. Regarding bootstrapping, we are the first to provide an efficient bootstrapping for multiple users whose cost matches that of the base single-key scheme, thanks to our simplified key generation method, without communication.
As a result, participants incur lower communication, computation, and memory costs in the online phase. Moreover, we provide a practical conversion algorithm between the two types of schemes in order to \emph{efficiently} utilize the advantages of both schemes together in a wider range of applications. We also provide a detailed comparison among similar results so that users can choose a suitable scheme for their homomorphic encryption based application scenarios.
Towards the Least Inequalities for Describing a Subset in $Z_2^n$
Mixed-Integer Linear Programming (MILP) solvers have become one of the most powerful tools for searching for cryptographic characteristics, including differentials, impossible differentials, and division trails. Generally, one MILP problem can be formulated by several different MILP models, and the models with fewer constraints and variables are usually easier to solve. How to model a subset of $Z_2^n$ with the least number of constraints is also an interesting mathematical problem. In this paper, we discuss this problem in a general form. Specifically, given a set $C \subset Z_2^n$, let $L$ be a set of inequalities, and we say $L$ describes $C$ if the inequalities in $L$ only involve $n$ variables and the solution set of $L$ is exactly $C$. Our goal is to find such a set $L$ with the least number of inequalities. We present a brand new approach, named the SuperBall approach, for resolving this problem completely. Our approach is able to generate all available inequalities. Once these inequalities are obtained, Sasaki and Todo's method is used to find the smallest subset of inequalities that describes $C$. If Sasaki and Todo's method succeeds, the found subset is proved to be the smallest. As a result, we found the smallest subsets of inequalities that describe the S-boxes of Keccak and APN-6. The previous best results were 34 and 167 inequalities, presented at FSE 2020, and we decrease these numbers to 26 and 145. Moreover, we can prove that these numbers are the smallest if no dummy variables are used when generating inequalities.
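A toy sketch of the selection step named in the abstract (our simplification: a greedy cover stands in for Sasaki and Todo's exact method, and the candidate pool is a small brute-force enumeration rather than the SuperBall generator): every candidate inequality is valid on C and cuts off some points outside C, so picking the fewest inequalities is a set-cover problem over the complement.

    from itertools import product

    n = 3
    C = {(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)}   # toy target set
    complement = set(product((0, 1), repeat=n)) - C

    def holds(ineq, point):
        coeffs, const = ineq
        return sum(a * x for a, x in zip(coeffs, point)) + const >= 0

    # Candidate pool: all inequalities with small coefficients valid on C.
    candidates = [((a, b, c), d)
                  for a in range(-2, 3) for b in range(-2, 3)
                  for c in range(-2, 3) for d in range(-2, 3)
                  if all(holds(((a, b, c), d), p) for p in C)]

    # Greedy set cover: repeatedly pick the inequality cutting the most
    # still-uncovered points of the complement.
    uncovered, chosen = set(complement), []
    while uncovered:
        best = max(candidates,
                   key=lambda q: sum(not holds(q, p) for p in uncovered))
        chosen.append(best)
        uncovered -= {p for p in uncovered if not holds(best, p)}
    print(len(chosen), "inequalities describe C:", chosen)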
Modular Design of Secure Group Messaging Protocols and the Security of MLS
The Messaging Layer Security (MLS) project is an IETF effort aiming to establish an industry-wide standard for secure group messaging (SGM). Its development is supported by several major secure-messaging providers (with a combined user base in the billions) and a growing body of academic research.
MLS has evolved over many iterations to become a complex, non-trivial, yet relatively ad-hoc cryptographic protocol. In an effort to tame its complexity and build confidence in its security, past analyses of MLS have restricted themselves to sub-protocols of MLS—most prominently a type of sub-protocol embodying so-called continuous group key agreement (CGKA). However, to date the task of proving or even defining the security of the full MLS protocol has been left open.
In this work, we fill in this missing piece. First, we formally capture the security of SGM protocols by defining a corresponding security game, which is parametrized by a safety predicate that characterizes the exact level of security achieved by a construction. Then, we cast MLS as an SGM protocol, showing how to modularly build it from the following three main components (and some additional standard cryptographic primitives) in a black-box fashion: (a) CGKA, (b) forward-secure group AEAD (FS-GAEAD), which is a new primitive and roughly corresponds to an "epoch" of group messaging, and (c) a so-called PRF-PRNG, which is a two-input hash function that is a pseudorandom function (resp. generator with input) in its first (resp. second) input. Crucially, the security predicate for the SGM security of MLS can be expressed purely as a function of the security predicates of the underlying primitives, which allows one to swap out any of the components and immediately obtain a security statement for the resulting SGM construction. Furthermore, we provide instantiations of all component primitives, in particular of CGKA with MLS's TreeKEM sub-protocol (which we prove adaptively secure) and of FS-GAEAD with a novel construction (which has already been adopted by MLS).
Along the way we introduce a collection of new techniques, primitives, and results with applications to other SGM protocols and beyond. For example, we extend the Generalized Selective Decryption proof technique (which is central in the CGKA literature) and prove adaptive security for another (practical) more secure CGKA protocol called RTreeKEM (Alwen et al., CRYPTO '20). The modularity of our approach immediately yields a corollary characterizing the security of an SGM construction using RTreeKEM.
Some remarks on how to hash faster onto elliptic curves
This article proposes four optimizations of indifferentiable hashing onto (prime-order subgroups of) ordinary elliptic curves over finite fields $\mathbb{F}_{\!q}$. One of them is dedicated to elliptic curves $E$ without non-trivial automorphisms provided that $q \equiv 2 \ (\mathrm{mod} \ 3)$. The second deals with $q \equiv 2, 4 \ (\mathrm{mod} \ 7)$ and an elliptic curve $E_7$ of $j$-invariant $-3^3 5^3$. The corresponding section plays a rather theoretical role, because (the quadratic twist of) $E_7$ is not used in real-world cryptography. The other two optimizations take place for the subgroups $\mathbb{G}_1$, $\mathbb{G}_2$ of pairing-friendly curves. The performance gain comes from the smaller number of required exponentiations in $\mathbb{F}_{\!q}$ for hashing to $E(\mathbb{F}_{\!q})$, $E_7(\mathbb{F}_{\!q})$, and $\mathbb{G}_2$ as well as from the absence of necessity to hash directly onto $\mathbb{G}_1$ in certain settings. In particular, the last insight allows one to drastically speed up verification of the aggregate BLS signature incorporated in many blockchain technologies. The new results affect, for example, the pairing-friendly curve BLS12-381 (the most popular in practice at the moment) and a few plain curves from the American standard NIST SP 800-186. Among other things, a taxonomy of state-of-the-art hash functions to elliptic curves is presented. Finally, the article discusses how to hash over highly $2$-adic fields $\mathbb{F}_{\!q}$.
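For orientation, a baseline sketch (our illustration, and emphatically not one of the article's optimized indifferentiable maps): generic try-and-increment hashing onto a toy short-Weierstrass curve, whose cost is dominated by square-root exponentiations in the field, the kind of exponentiation count the article's optimizations target.

    import hashlib

    q, a, b = 10007, 1, 6  # toy parameters, not a cryptographic curve

    def sqrt_mod(n, q):
        """Square root modulo q for q = 3 (mod 4), via one exponentiation."""
        assert q % 4 == 3
        r = pow(n, (q + 1) // 4, q)
        return r if r * r % q == n % q else None

    def hash_to_curve(msg):
        """Try-and-increment: hash to x, retry until x^3 + ax + b is a square."""
        ctr = 0
        while True:
            h = hashlib.sha256(msg + ctr.to_bytes(4, "big")).digest()
            x = int.from_bytes(h, "big") % q
            y = sqrt_mod((x ** 3 + a * x + b) % q, q)
            if y is not None:
                return (x, y)
            ctr += 1

    print(hash_to_curve(b"hello"))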
OnionPIR: Response Efficient Single-Server PIR
This paper presents OnionPIR and stateful OnionPIR, two single-server PIR schemes that significantly improve the response size and computation cost over state-of-the-art schemes. The OnionPIR scheme utilizes recent advances in somewhat homomorphic encryption (SHE) and carefully composes two lattice-based SHE schemes and homomorphic operations to control the noise growth and response size. Stateful OnionPIR uses a technique based on the homomorphic evaluation of copy networks.
OnionPIR achieves a response overhead of just $4.2$x over the insecure baseline, in contrast to the $100$x response overhead of state-of-the-art schemes. Our stateful OnionPIR scheme improves upon the recent stateful PIR framework of Patel et al. and drastically reduces its response overhead by avoiding downloading the entire database in the offline stage.
Compared to stateless OnionPIR, stateful OnionPIR reduces the computation cost by $1.8$-$22$x for different database sizes.
SplitGuard: Detecting and Mitigating Training-Hijacking Attacks in Split Learning
Distributed deep learning frameworks, such as split learning, have recently been proposed to enable a group of participants to collaboratively train a deep neural network without sharing their raw data. Split learning in particular achieves this goal by dividing a neural network between a client and a server so that the client computes the initial set of layers, and the server computes the rest. However, this method introduces a unique attack vector for a malicious server attempting to steal the client's private data: the server can direct the client model towards learning a task of its choice. With a concrete example already proposed, such training-hijacking attacks present a significant risk for the data privacy of split learning clients.
In this paper, we propose SplitGuard, a method by which a split learning client can detect whether it is being targeted by a training-hijacking attack or not. We experimentally evaluate its effectiveness, and discuss in detail various points related to its use. We conclude that SplitGuard can effectively detect training-hijacking attacks while minimizing the amount of information recovered by the adversaries.
The Exact Complexity of Pseudorandom Functions and Tight Barriers to Lower Bound Proofs
How much computational resource do we need for cryptography? This is an important question of both theoretical and practical interests. In this paper, we study the problem on pseudorandom functions (PRFs) in the context of circuit complexity. Perhaps surprisingly, we prove extremely tight upper and lower bounds in various circuit models.
* In general $B_2$ circuits, assuming the existence of PRFs, PRFs can be constructed in $2n + o(n)$ size, simplifying and improving the $O(n)$ bound by Ishai et al. (STOC 2008). We show that such a construction is almost optimal by giving an unconditional $2n-O(1)$ lower bound.
* In logarithmic depth circuits, assuming the existence of $NC^1$ PRFs, PRFs can be constructed in $2n + o(n)$ size and $(1+\epsilon) \log n$ depth simultaneously.
* In constant depth linear threshold circuits, assuming the existence of $TC^0$ PRFs, PRFs can be constructed with wire complexity $n^{1+O(1.61^{-d})}$. We also give an $n^{1+\Omega(c^{-d})}$ wire complexity lower bound for some constant $c$.
The upper bounds are proved with generalized Levin's trick and novel constructions of "almost" universal hash functions; the lower bound for general circuits is proved via a tricky but elementary wire-counting argument; and the lower bound for $TC^0$ circuits is proved by extracting a "black-box" property of $TC^0$ circuits from the "white-box" restriction lemma of Chen, Santhanam, and Srinivasan (Theory Comput. 2018). As a byproduct, we prove unconditional tight upper and lower bounds for "almost" universal hashing, which we believe to be of independent interest.
Following Natural Proofs by Razborov and Rudich (J. Comput. Syst. Sci. 1997), our results make progress in explaining the difficulty of improving known circuit lower bounds, which has recently become significant due to the discovery of several "bootstrapping results". In $TC^0$, this reveals the limitation of the current restriction-based methods; in particular, it brings new insights into understanding the strange phenomenon of "sharp threshold results" such as the one presented by Chen and Tell (STOC 2019).
Reflection, Rewinding, and Coin-Toss in EasyCrypt
In this paper we derive a suite of lemmas which allows users to internally reflect EasyCrypt programs into distributions which correspond to their denotational semantics (probabilistic reflection). Based on this, we develop techniques for reasoning about rewinding of adversaries in EasyCrypt, a technique widely used in cryptology. We use our reflection and rewindability results to prove the security of a coin-toss protocol.
MProve+ : Privacy Enhancing Proof of Reserves Protocol for Monero
Proof of reserves protocols enable cryptocurrency exchanges to prove solvency, i.e. prove that they have enough reserves to meet their liabilities towards their customers. MProve (EuroS&PW, 2019) was the first proof of reserves protocol for Monero which provided some privacy to the exchanges' addresses. As the key images and the addresses are inherently linked in the MProve proof, an observer could easily recognize the exchange-owned address when a transaction spending from it appears on the blockchain. This is detrimental for an exchange's privacy and becomes a natural reason for exchanges to not adopt MProve. To this end, we propose MProve+, a Bulletproofs-based (S&P, 2018) NIZK protocol, which unlinks the key images and the addresses, thus alleviating the drawback of MProve. Furthermore, MProve+ presents a promising alternative to MProve due to an order of magnitude smaller proof sizes along with practical proof generation and verification times.
Hardness of KT Characterizes Parallel Cryptography
A recent breakthrough of Liu and Pass (FOCS'20) shows that one-way functions exist if and only if the (polynomial-)time-bounded Kolmogorov complexity, ${\rm K}^t$, is bounded-error hard on average to compute. In this paper, we strengthen this result and extend it to other complexity measures:
* We show, perhaps surprisingly, that the $\rm KT$ complexity is bounded-error average-case hard if and only if there exist one-way functions in *constant parallel time* (i.e., ${\sf NC}^0$). This result crucially relies on the idea of *randomized encodings*. Previously, a seminal work of Applebaum, Ishai, and Kushilevitz (FOCS'04; SICOMP'06) used the same idea to show that ${\sf NC}^0$-computable one-way functions exist if and only if logspace-computable one-way functions exist.
* Inspired by the above result, we present randomized average-case reductions among the ${\sf NC}^1$-versions and logspace-versions of ${\rm K}^t$ complexity, and the $\rm KT$ complexity. Our reductions preserve both bounded-error average-case hardness and zero-error average-case hardness. To the best of our knowledge, this is the first reduction between the $\rm KT$ complexity and a variant of ${\rm K}^t$ complexity.
* We prove tight connections between the hardness of ${\rm K}^t$ complexity and the hardness of (the hardest) one-way functions. In analogy with the Exponential-Time Hypothesis and its variants, we define and motivate the *Perebor Hypotheses* for complexity measures such as ${\rm K}^t$ and $\rm KT$. We show that a Strong Perebor Hypothesis for ${\rm K}^t$ implies the existence of (weak) one-way functions of near-optimal hardness $2^{n-o(n)}$. To the best of our knowledge, this is the first construction of one-way functions of near-optimal hardness based on a natural complexity assumption about a search problem.
* We show that a Weak Perebor Hypothesis for ${\rm MCSP}$ implies the existence of one-way functions, and establish a partial converse. This is the first unconditional construction of one-way functions from the hardness of ${\rm MCSP}$ over a natural distribution.
* Finally, we study the average-case hardness of ${\rm MKtP}$. We show that it characterizes cryptographic pseudorandomness in one natural regime of parameters, and complexity-theoretic pseudorandomness in another natural regime.
The security of the code-based signature scheme based on the Stern identification protocol
The paper provides a complete description of the digital signature scheme based on the Stern identification protocol. We also present the proof of the existential unforgeability of the scheme under the chosen message attack (EUF-CMA) in the random oracle model (ROM) under assumptions of hardness of syndrome decoding and hash function collision finding problems. Finally, we discuss the choice of the signature parameters and introduce a parameter set providing 80-bit security.
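As background, a toy sketch of one round of the underlying Stern identification protocol (the signature scheme applies Fiat-Shamir to many such rounds; the hash-based commitment and the tiny parameters are our illustrative choices, not the paper's parameter set):

    import hashlib
    import secrets
    import numpy as np

    rng = np.random.default_rng(1)
    n, k, w = 16, 8, 4
    H = rng.integers(0, 2, size=(n - k, n), dtype=np.uint8)  # public parity-check
    s = np.zeros(n, dtype=np.uint8)
    s[rng.choice(n, size=w, replace=False)] = 1              # secret of weight w
    y = H @ s % 2                                            # public syndrome

    def com(*parts):
        """Hash commitment over byte-serialized parts (toy instantiation)."""
        h = hashlib.sha256()
        for part in parts:
            h.update(np.asarray(part, dtype=np.uint8).tobytes())
        return h.digest()

    # Prover commits.
    u = rng.integers(0, 2, size=n, dtype=np.uint8)
    sigma = rng.permutation(n)
    c1 = com(sigma, H @ u % 2)
    c2 = com(u[sigma])
    c3 = com((u ^ s)[sigma])

    # Verifier picks a random challenge b in {0, 1, 2}.
    b = secrets.randbelow(3)

    # Prover opens two of the three commitments; verifier re-checks them.
    if b == 0:    # open sigma and u: check c1, c2
        ok = c1 == com(sigma, H @ u % 2) and c2 == com(u[sigma])
    elif b == 1:  # open sigma and v = u ^ s: check c1 (via syndrome y), c3
        v = u ^ s
        ok = c1 == com(sigma, (H @ v + y) % 2) and c3 == com(v[sigma])
    else:         # open sigma(u), sigma(u ^ s): check c2, c3, weight of sigma(s)
        t2, t3 = u[sigma], (u ^ s)[sigma]
        ok = c2 == com(t2) and c3 == com(t3) and int((t2 ^ t3).sum()) == w
    print("challenge:", b, "accepted:", ok)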
UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning
Training deep neural networks requires large-scale data, which often forces users to work in a distributed or outsourced setting, accompanied by privacy concerns. The split learning framework aims to address these concerns by splitting up the model between the client and the server. The idea is that since the server does not have access to the client's part of the model, the scheme supposedly provides privacy. We show that this is not true via two novel attacks. (1) We show that an honest-but-curious split learning server, equipped only with the knowledge of the client neural network architecture, can recover the input samples and also obtain a functionally similar model to the client model, without the client being able to detect the attack. (2) Furthermore, we show that if split learning is used naively to protect the training labels, the honest-but-curious server can infer the labels with perfect accuracy. We test our attacks using three benchmark datasets and investigate various properties of the overall system that affect the attacks' effectiveness. Our results show that the plaintext split learning paradigm can pose serious security risks and provide no more than a false sense of security.
"Act natural!": Having a Private Chat on a Public Blockchain
Messengers have become an essential means of interpersonal interaction. Yet untraceable private communication remains an elusive goal, as most messengers hide content, but not communication patterns. The knowledge of communication patterns can by itself reveal too much, as happened, e.g., in the context of the Arab Spring. Subliminal channels in cryptographic systems enable untraceable private communication in plain sight. In this context, bulletin boards in the form of blockchains are a natural object for subliminal communication: accessing them is innocuous, as they rely on distributed access for verification and extension. At the same time, blockchain users generate hundreds of thousands of transactions per day that are individually signed and placed on the blockchain. Thus, blockchains may serve as an innocuous repository of publicly accessible cryptographic transactions in which subliminal channels can be placed, significantly increasing the availability of such transactions.
In this paper, we propose a public-key subliminal channel using secret-recoverable splittable signature schemes on blockchains and prove that our construction is undetectable in the random oracle model under common cryptographic assumptions. Our approach is applicable to any secret-recoverable splittable signature scheme and introduces a constant overhead of a single signature per message. Such schemes are used by 98 of the top 100 cryptocurrencies. We also analyze the applicability of our approach to the Bitcoin, Monero, and RippleNet networks and present proof of concept implementations for Bitcoin and RippleNet.
Streaming SPHINCS+ for Embedded Devices using the Example of TPMs
We present an implementation of the hash-based post-quantum signature scheme SPHINCS+ that enables heavily memory-restricted devices to sign messages by streaming-out a signature during its computation and to verify messages by streaming-in a signature. We demonstrate our implementation in the context of Trusted Platform Modules (TPMs) by proposing a SPHINCS+ integration and a streaming extension for the TPM specification. We evaluate the overhead of our signature-streaming approach for a stand-alone SPHINCS+ implementation and for its integration in a proof-of-concept TPM with the proposed streaming extension running on an ARM Cortex-M4 platform. Our streaming interface greatly reduces the memory requirements without introducing a significant performance penalty. This is achieved not only by removing the need to store an entire signature but also by reducing the stack requirements of the key generation, sign, and verify operations. Therefore, our streaming interface enables small embedded devices that do not have sufficient memory to store an entire SPHINCS+ signature or that previously were only able to use a parameter set that results in smaller signatures to sign and verify messages using all SPHINCS+ variants.
Improved Verifiability for BeleniosVS
The BeleniosVS electronic voting scheme offers an attractive mix of verifiability and privacy properties. Moreover, using the ProVerif protocol-verification tool, BeleniosVS has an automatic machine-aided analysis of (end-to-end) verifiability in 96 different threat models, with the analysis finding proofs in 22 cases and attacks in the remaining 74. The high number of threat models covered by ProVerif delivers a much richer security analysis than the norm.
We revisit the BeleniosVS scheme and propose several refinements to the ProVerif security model and scheme which increase the number of threat models in which the scheme has verifiability from 22 to 28. Our new ProVerif security model also implies end-to-end verifiability but the requirements are easier to satisfy. Interestingly, in all six improvements, both the changes to the security model and one or more changes to the scheme are necessary to prove verifiability.
Onyx: New Encryption and Signature Schemes with Multivariate Public Key in Degree 3
In this paper, we present a new secret trapdoor function for the design of multivariate schemes that we call ``Onyx'', suitable for encryption and signature. It has been inspired by the schemes presented in ``Ariadne Thread and Pepper: New multivariate cryptographic schemes with public keys in degree 3''.
From this idea, we present some efficient encryption and signature multivariate schemes with explicit parameters that resist all known attacks. In particular they resist the two main (and often very powerful) attacks in this area: the Gröbner attacks (to compute a solution of the system derived from the public key) and the MinRank attacks (to recover the secret key). Specific attacks due to the properties of the function and its differential are also addressed in this paper.
The ``Onyx'' schemes have public key equations of degree 3. Despite this, the size of the public key may still be reasonable since we can use larger fields and smaller extension degrees. Onyx signatures can be as short as the ``birthday paradox'' allows, i.e. twice the security level, or even shorter thanks to the Feistel-Patarin construction, like many other signature schemes based on multivariate equations.
Djed: A Formally Verified Crypto-Backed Pegged Algorithmic Stablecoin
This paper describes Djed, an algorithmic stablecoin protocol that behaves like an autonomous bank that buys and sells stablecoins for a price in a range that is pegged to a target price. It is crypto-backed in the sense that the bank keeps a volatile cryptocurrency in its reserve. The reserve is used to buy stablecoins from users that want to sell them, and revenue from sales of stablecoins to users is stored in the reserve. Besides stablecoins, the bank also trades reservecoins in order to capitalize itself and maintain a reserve ratio significantly greater than one. To the best of our knowledge, this is the first stablecoin protocol where stability claims are precisely and mathematically stated and proven. Furthermore, the claims and their proofs are formally verified using two different techniques: bounded model checking, to exhaustively search for counter-examples to the claims; and interactive theorem proving, to build rigorous formal proofs using a proof assistant with automated theorem proving features.
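A toy sketch of the described bank behaviour (our illustration based on the abstract; the fee logic and numbers are invented, not Djed's specification): stablecoins are sold and bought back near the target price while a minimum reserve ratio is enforced.

    class ToyBank:
        def __init__(self, target_price, min_ratio=4.0, fee=0.01):
            self.reserve = 0.0          # base cryptocurrency held
            self.n_stable = 0           # stablecoins in circulation
            self.target = target_price  # peg, in base-crypto units per stablecoin
            self.min_ratio = min_ratio
            self.fee = fee

        def ratio(self):
            liabilities = self.n_stable * self.target
            return float("inf") if liabilities == 0 else self.reserve / liabilities

        def buy_stablecoins(self, amount):
            """User buys stablecoins from the bank slightly above the peg."""
            cost = amount * self.target * (1 + self.fee)
            self.reserve += cost
            self.n_stable += amount
            # Refuse sales that would push the reserve ratio below the minimum.
            assert self.ratio() >= self.min_ratio, "reserve ratio too low"
            return cost

        def sell_stablecoins(self, amount):
            """User sells stablecoins back to the bank slightly below the peg."""
            assert amount <= self.n_stable
            payout = amount * self.target * (1 - self.fee)
            self.reserve -= payout
            self.n_stable -= amount
            return payout

    bank = ToyBank(target_price=2.0)
    bank.reserve = 100.0                  # capital from reservecoin holders
    print("cost:", bank.buy_stablecoins(10), "ratio:", bank.ratio())
    print("payout:", bank.sell_stablecoins(4), "ratio:", bank.ratio())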
A Simple Post-Quantum Non-Interactive Zero-Knowledge Proof from Garbled Circuits
We construct a simple public-coin zero-knowledge proof system based solely on symmetric primitives, to which we can apply the Fiat-Shamir heuristic to make it non-interactive. Our construction can be regarded as a simplified cut-and-choose-based maliciously secure two-party computation for the zero-knowledge functionality. Our protocol is suitable for pedagogical purposes due to its simplicity (the code is only 728 lines).
A Tale of Twin Primitives: Single-chip Solution for PUFs and TRNGs
Physically Unclonable Functions (PUFs) and True Random Number Generators (TRNGs) are two highly useful hardware primitives to build up the root-of-trust for an embedded device. PUFs are designed to offer repetitive and instance-specific randomness, whereas TRNGs are expected to be invariably random. In this paper, we present a dual-mode PUF-TRNG design that utilises two different hardware-intrinsic properties, i.e. the oscillation frequency of the Transition Effect Ring Oscillator (TERO) cell and the propagation delay of a buffer within the cell, to serve the purpose of both PUF and TRNG depending on the exact requirement of the application. The PUF design is also proposed to have built-in resistance to machine learning (ML) and deep learning (DL) attacks, whereas the TRNG exhibits sufficient randomness.
Taphonomical Security: (DNA) Information with Foreseeable Lifespan
This paper introduces the concept of information with a foreseeable lifespan and explains how to achieve this primitive via a new method for encoding and storing information in DNA-RNA sequences.
The storage process can be divided into three time-frames. Within the first (life), we can easily read out the stored data with high probability. The second time-frame (agony) is a parameter-dependent state of uncertainty; the data is not easily accessible, but still cannot be guaranteed to be inaccessible. During the third (death), the data can with high probability not be recovered without a large computational effort which can be controlled via a security parameter.
The quality of such a system, in terms of a foreseeable lifespan, depends on the brevity of the agony time-frame, and we show how to optimise this.
In the present paper, we analyse the use of synthetic DNA and RNA as a storage medium since it is a suitable information carrier and we can manipulate the RNA nucleotide degradation rate to help control the lifespan of the message embedded in the synthesized DNA/RNA molecules.
Other media such as Bisphenol A thermal fax paper or unstable nonvolatile memory technologies can be used to implement the same principle but the decay models of each of those phenomena should be re-analysed and the formulae given in this paper adapted correspondingly.
Time, Privacy, Robustness, Accuracy: Trade Offs for the Open Vote Network Protocol
The Open Vote Network (OV-Net) is a secure multi-party protocol allowing the computation of a sum of integer votes without revealing their values. As such, it has several applications in social choice and finance.
An inherent limitation of OV-Net is its lack of robustness against denial-of-service attacks, which occur when at least one of the voters initiates the protocol but (maliciously or accidentally) does not complete it. Unfortunately, such a situation is very likely to occur in any real-world implementation of the protocol, causing serious time delays from waiting for the failing parties and perhaps having to perform extra protocol rounds with the remaining participants.
This paper provides a solution to this problem by extending OV-Net with mechanisms tolerating a number of unresponsive participants. The price to pay is a carefully controlled privacy loss, an increase in computation, and a statistical loss in accuracy.
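For context, a toy sketch of the OV-Net tallying idea that the paper extends (the two-round protocol of Hao, Ryan and Zieliński; the group parameters below are illustrative, not secure): pairwise-cancelling masks hide each vote, yet the product of all ballots reveals g raised to the sum of votes, which is small enough to brute-force.

    import random

    p = 2 ** 64 - 59  # a prime; toy modulus, not a secure group choice
    g = 5

    def ovnet_tally(votes):
        n = len(votes)
        x = [random.randrange(1, p - 1) for _ in range(n)]  # secret exponents
        gx = [pow(g, xi, p) for xi in x]                    # round 1: publish g^x_i
        ballots = []
        for i in range(n):
            num, den = 1, 1
            for j in range(i):
                num = num * gx[j] % p
            for j in range(i + 1, n):
                den = den * gx[j] % p
            Yi = num * pow(den, -1, p) % p                  # g^{y_i}; sum x_i*y_i = 0
            ballots.append(pow(Yi, x[i], p) * pow(g, votes[i], p) % p)  # round 2
        total = 1
        for ballot in ballots:
            total = total * ballot % p
        # Masks cancel: total = g^{sum votes}; the sum is small, so brute-force it.
        return next(s for s in range(n + 1) if pow(g, s, p) == total)

    print(ovnet_tally([1, 0, 1, 1, 0]))  # expected tally: 3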
Glowworm Attack: Optical TEMPEST Sound Recovery via a Device’s Power Indicator LED
Two main classes of optical TEMPEST attacks against the confidentiality of information processed/delivered by devices have been demonstrated in the past two decades: the first class includes methods for recovering content from monitors, and the second class includes methods for recovering keystrokes from physical and virtual keyboards. In this paper, we identify a new class of optical TEMPEST attacks: recovering sound by analyzing optical emanations from a device's power indicator LED. We analyze the response of the power indicator LED of various devices to sound and show that there is an optical correlation between the sound that is played by connected speakers and the intensity of their power indicator LED, due to the facts that: (1) the power indicator LED of various devices is connected directly to the power line, (2) the intensity of a device's power indicator LED is correlated with the power consumption, and (3) many devices lack a dedicated means of countering this phenomenon. Based on our findings, we present the Glowworm attack, an optical TEMPEST attack that can be used by eavesdroppers to recover sound by analyzing optical measurements obtained via an electro-optical sensor directed at the power indicator LED of various devices (e.g., speakers, USB hub splitters, and microcontrollers). We propose an optical-audio transformation (OAT) to recover sound, in which we isolate the speech from optical measurements obtained by directing an electro-optical sensor at a device's power indicator LED. Finally, we test the performance of the Glowworm attack in various experimental setups and show that an eavesdropper can apply the attack to recover speech from a speaker's power indicator LED with good intelligibility from a distance of 15 meters and with fair intelligibility from 35 meters.
Cairo – a Turing-complete STARK-friendly CPU architecture
Proof systems allow one party to prove to another party that a certain statement is true. Most existing practical proof systems require that the statement be represented in terms of polynomial equations over a finite field. This makes the process of representing a statement that one wishes to prove or verify rather complicated, as this process requires a new set of equations for each statement.
Various approaches to deal with this problem have been proposed.
We present Cairo, a practically-efficient Turing-complete STARK-friendly CPU architecture. We describe a single set of polynomial equations for the statement that the execution of a program on this architecture is valid. Given a statement one wishes to prove, Cairo allows writing a program that describes that statement, instead of writing a set of polynomial equations.
On the Nonsingularity and Equivalence of NFSRs
Nonlinear feedback shift registers (NFSRs) are used in many stream ciphers as their main building blocks. In particular, Galois NFSRs with terminal bits are used in the typical stream ciphers Grain and Trivium. One security criterion for the design of stream ciphers is to ensure that the NFSRs they use are nonsingular. Nonsingularity is well understood for Fibonacci NFSRs, whereas it is not for Galois NFSRs. In addition, some types of Galois NFSRs equivalent to Fibonacci ones have been found. However, whether there exist new types of such Galois NFSRs remains unknown. The paper first considers the nonsingularity of Galois NFSRs; some necessary/sufficient conditions are presented. The paper then concentrates on the equivalence between Galois NFSRs and Fibonacci ones. Some necessary conditions for Galois NFSRs equivalent to Fibonacci ones are provided. The Galois NFSRs with terminal bits equivalent to a given Fibonacci one are enumerated. Moreover, two classes of nonsingular Galois NFSRs with terminal bits are found to be new types of Galois NFSRs equivalent to Fibonacci ones.
Edwards curves and FFT-based multiplication
Uncategorized
Uncategorized
This paper introduces fast algorithms for performing group operations on Edwards curves using FFT-based multiplication. Previously known algorithms can use such multiplication too, but better results can be achieved if particular properties of FFT-based arithmetic are accounted for. The introduced algorithms perform operations in extended Edwards coordinates and in the Montgomery single-coordinate representation.
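As a reference point, a minimal sketch of the Edwards group law that the paper's algorithms compute (plain affine arithmetic over a toy prime field, our illustration; the paper's contribution, exploiting FFT-based multiplication, is not attempted here):

    p = 1009        # toy prime field; real curves use large primes
    a, d = 1, 673   # twisted Edwards curve a*x^2 + y^2 = 1 + d*x^2*y^2 (toy values)

    def on_curve(P):
        x, y = P
        return (a * x * x + y * y - 1 - d * x * x * y * y) % p == 0

    def edwards_add(P, Q):
        """Unified affine addition; also handles doubling and the identity (0, 1)."""
        (x1, y1), (x2, y2) = P, Q
        t = d * x1 * x2 * y1 * y2 % p
        x3 = (x1 * y2 + y1 * x2) * pow(1 + t, -1, p) % p
        y3 = (y1 * y2 - a * x1 * x2) * pow(1 - t, -1, p) % p
        return (x3, y3)

    P = (2, 3)
    assert on_curve(P)
    Q = edwards_add(P, P)           # doubling uses the same formula
    assert on_curve(Q) and edwards_add(P, (0, 1)) == P
    print("2P =", Q)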
Discovering New $L$-Function Relations Using Algebraic Sieving
We report the discovery of new results relating $L$-functions, which typically encode interesting information about mathematical objects,
obtained in a \emph{semi-automated} fashion using an algebraic sieving technique.
Algebraic sieving originally comes from cryptanalysis, where it is used to factor integers, compute discrete logarithms, or produce signature forgeries in cryptosystems such as RSA. We repurpose the technique here to provide candidate identities, which can be tested and ultimately formally proven.
A limitation of our technique is the need for human intervention in the post-processing phase, to determine the most general form of the conjectured identities and to provide proofs for them. Nevertheless, we report 29 identities that have hitherto never appeared in the literature, 9 of which we could completely prove, the remainder being numerically valid over all tested values.
This work complements other instances in the literature where this type of automated symbolic computation has served as a
productive step toward theorem proving; it can be extremely helpful in figuring out what it is that one should attempt to prove.
Revisiting cryptanalysis on ChaCha from Crypto 2020 and Eurocrypt 2021
ChaCha has been one of the prominent ARX designs of the last few years because of its use in several systems. The cryptanalysis of ChaCha involves a differential attack which exploits the idea of Probabilistic Neutral Bits (PNBs). For a long period, the single-bit distinguisher in this differential attack was known only up to 3 rounds. At Crypto 2020, Beierle et al. introduced for the first time single-bit distinguishers for $3.5$ rounds, which significantly revived research in this direction. This discovery was the primary factor behind the huge improvement in the key recovery attack complexity in that work. It was followed by another work at Eurocrypt 2021, where a single-bit distinguisher at round $3.5$ was used to produce a 7-round distinguisher of ChaCha and a further improvement in key recovery.
In the first part of this paper, we provide the theoretical framework for the distinguisher given by Beierle et al. We mathematically derive the observed differential correlation for the particular position where the output difference is observed at $3.5$ rounds. Beierle et al. also mentioned the issue of the availability of proper IVs to produce such distinguishers, and pointed out that not all keys have such IVs available; here we provide theoretical insight into this issue.
Next, we revisit the work of Coutinho et al. (Eurocrypt 2021). Using differential-linear attacks against ChaCha, they claimed a distinguisher and key recovery with complexities $2^{218}$ and $2^{228.51}$, respectively. We show that the differential correlation for $3.5$ rounds is much smaller than claimed by Coutinho et al., which makes the attack complexities much higher than reported.
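For readers who want the object of study in front of them, the ARX core that all of these attacks trace differences through is ChaCha's quarter-round, which uses only 32-bit addition, rotation, and XOR; a plain Python rendition of the standard algorithm (not tied to either paper's analysis) follows.

```python
MASK = 0xFFFFFFFF  # ChaCha works on 32-bit words

def rotl(x, n):
    return ((x << n) | (x >> (32 - n))) & MASK

def quarter_round(a, b, c, d):
    """One ChaCha quarter-round: only additions, rotations, and XORs."""
    a = (a + b) & MASK; d = rotl(d ^ a, 16)
    c = (c + d) & MASK; b = rotl(b ^ c, 12)
    a = (a + b) & MASK; d = rotl(d ^ a, 8)
    c = (c + d) & MASK; b = rotl(b ^ c, 7)
    return a, b, c, d

# A full double round applies quarter_round to the columns, then to the
# diagonals, of the 4x4 state; the attacks above trace differences through
# roughly 3.5 such rounds.
```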
Cryptanalysis of Caesar using Quantum Support Vector Machine
Recently, artificial intelligence-based cryptanalysis techniques have been studied. In this paper, we recover the key of the Caesar cipher, a classical cipher, by using a quantum machine learning algorithm that learns with a parameterized quantum circuit instead of a classical neural network. For a 4-bit plaintext and key, results could not be obtained due to the limitations of the cloud environment; however, for a 2-bit plaintext and key, an accuracy of 1.0 was achieved, and for a 3-bit plaintext and key, an accuracy of 0.84. In addition, cryptanalysis of a 2-bit dataset on IBM's real quantum processor achieved a classification accuracy of 0.93. In future work, we will research qubit reduction methods for cryptanalysis of longer plaintexts and keys, and techniques for maintaining accuracy on real quantum hardware.
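To make the learning target concrete: if the toy cipher is the classical Caesar rule, i.e., addition of a fixed key modulo $2^n$ (an assumption; the abstract does not spell out the encoding), then the key is recoverable classically from a single known plaintext-ciphertext pair, as the sketch below shows for the 3-bit case.

```python
def caesar_encrypt(p, k, nbits):
    return (p + k) % (1 << nbits)   # shift-by-key, the classical Caesar rule

def recover_key(p, c, nbits):
    """One known plaintext/ciphertext pair determines the key exactly."""
    return (c - p) % (1 << nbits)

# 3-bit toy instance: encrypt 0b101 under key 0b011, then recover the key.
p, k = 0b101, 0b011
c = caesar_encrypt(p, k, 3)
assert recover_key(p, c, 3) == k
```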
An Efficient Data Protection Scheme Based on Hierarchical ID-Based Encryption for Message Queueing Telemetry Transport
As the Internet of Things (IoT) thrives worldwide, more and more IoT devices and IoT-based protocols have been designed and proposed to meet people's needs. Among those protocols, Message Queueing Telemetry Transport (MQTT) is one of the most promising emerging protocols, providing many-to-many message transmission based on the ``publish/subscribe'' mechanism. It has been widely used in industries such as energy, chemical engineering, and autonomous driving.
For transporting important messages, the MQTT specification recommends the use of the TLS protocol; however, the computational cost of TLS is heavy. Since topics in a broker are stored in a hierarchical structure, we propose in this manuscript a novel data protection protocol for MQTT based on hierarchical ID-based encryption. Our protocol adopts the intrinsic hierarchical structure of MQTT and achieves constant-size keys, i.e., keys whose size is independent of the depth in the hierarchy.
Revocable Attribute-Based Encryption for Multi-Keyword Search in Clouds
With the rapid advancement of cloud computing, users upload their files to the cloud server so that they can be accessed remotely. To assure data security, the data owner typically encrypts the data before outsourcing it to the cloud server. In addition, the encryption mechanism needs to enable users to search the encrypted data in cloud storage efficiently by keyword, i.e., searchable encryption. However, most searchable encryption schemes are impractical due to several limitations, such as the requirement of an online fully trusted third party, poor efficiency, high overhead in user revocation, support for only single-keyword search, etc. To mitigate these limitations, an attribute-based encryption scheme with fine-grained multi-keyword search is proposed. The new scheme supports user revocation. In addition, the lengths of the ciphertext and the secret key do not grow linearly with the size of the attribute set. The performance of the proposed scheme is better than that of other related schemes, so one can easily adopt it for real-life applications thanks to its flexibility in terms of features, security, and efficiency.
Threshold Schnorr with Stateless Deterministic Signing from Standard Assumptions
Schnorr's signature scheme permits an elegant threshold signing protocol due to its linear signing equation. However, each new signature consumes fresh randomness, which can be a major attack vector in practice. Sources of randomness in deployments are frequently either unreliable, or require state continuity, i.e., reliable fresh state resilient to rollbacks. State continuity is a notoriously difficult guarantee to achieve in practice, due to system crashes caused by software errors, malicious actors, or power supply interruptions (Parno et al., S&P '11). This is a non-issue for Schnorr variants such as EdDSA, which is specified to derive nonces deterministically as a function of the message and the secret key. However, it is challenging to translate these benefits to the threshold setting, specifically to construct a threshold Schnorr scheme where signing neither requires parties to consume fresh randomness nor to update long-term secret state.
In this work, we construct a dishonest majority threshold Schnorr protocol that enables such stateless deterministic nonce derivation using standardized block ciphers. Our core technical ingredients are new tools for the zero-knowledge from garbled circuits (ZKGC) paradigm to aid in verifying correct nonce derivation:
- A mechanism based on UC Commitments that allows a prover to commit once to a witness, and prove an unbounded number of statements online with only cheap symmetric key operations.
- A garbling gadget to translate intermediate garbled circuit wire labels to arithmetic encodings.
Our scheme prioritizes computation cost, with each proof requiring only a small constant number of exponentiations.
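The starting point, deterministic nonce derivation as in EdDSA, is easy to state: the nonce is a PRF of the long-term key and the message, so signing consumes no fresh randomness and keeps no state. Below is a single-party sketch using HMAC-SHA-256 as the PRF over a toy Schnorr group; the paper's contribution is verifying a block-cipher-based derivation of this kind across parties inside ZKGC, which this sketch does not capture.

```python
import hashlib, hmac

# Toy Schnorr group: q divides p - 1 and g has order q. Real deployments
# use standardized groups or elliptic curves; these values are illustrative.
p, q, g = 23, 11, 4

def h(*parts):
    return int.from_bytes(hashlib.sha256(b"|".join(parts)).digest(), "big")

def sign(sk, msg):
    # Deterministic nonce: a PRF of the long-term key and the message,
    # so signing is stateless and consumes no fresh randomness.
    k = int.from_bytes(
        hmac.new(sk.to_bytes(4, "big"), msg, hashlib.sha256).digest(), "big") % q
    R = pow(g, k, p)
    e = h(str(R).encode(), msg) % q
    s = (k + e * sk) % q
    return R, s

def verify(pk, msg, sig):
    R, s = sig
    e = h(str(R).encode(), msg) % q
    return pow(g, s, p) == (R * pow(pk, e, p)) % p

sk = 7
pk = pow(g, sk, p)
assert verify(pk, b"hello", sign(sk, b"hello"))
```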
One-time Traceable Ring Signatures
A ring signature allows a party to sign messages anonymously on behalf of a group, called a ring. Traceable ring signatures are a variant of ring signatures that limits the anonymity guarantees, enforcing that a member can sign anonymously at most one message per tag. Namely, if a party signs two different messages for the same tag, it will be de-anonymized. This property is very useful in decentralized platforms, allowing members to anonymously endorse statements in a controlled manner.
In this work we introduce one-time traceable ring signatures, where a member can sign anonymously only one message. This natural variant suffices in many applications for which traceable ring signatures are useful, and enables us to design a scheme that only requires a few hash evaluations and outperforms existing (non one-time) schemes.
Our one-time traceable ring signature scheme presents many advantages: it is fast,
with a signing time of less than 1 second for a ring of $2^{10}$ signers (and much less for smaller rings); it is {\em post-quantum resistant}, as it only requires hash evaluations; it is extremely simple, as it requires only a black-box access to a generic hash function (modeled as a random oracle) and no other cryptographic operation is involved.
From a theoretical standpoint our scheme is also the first anonymous signature scheme based on a black-box access to a symmetric-key primitive. All existing anonymous signatures are either based on specific hardness assumptions (e.g., LWE, SIS, etc.) or use the underlying symmetric-key primitive in a non-black-box way, i.e., they leverage the circuit representation of the primitive.
XDIVINSA: eXtended DIVersifying INStruction Agent to Mitigate Power Side-Channel Leakage
Side-channel analysis (SCA) attacks pose a major threat to embedded systems because they are easy to mount. Realising SCA-resilient cryptographic algorithms on embedded systems under tight intrinsic constraints, such as low area cost, limited computational ability, etc., is extremely challenging and often not possible. We propose a seamless and effective approach to realise a generic countermeasure against SCA attacks. XDIVINSA, an extended diversifying instruction agent, is introduced to realise the countermeasure at the microarchitecture level, based on the combined concepts of diversified instruction set extension (ISE) and hardware diversification. XDIVINSA is developed as a lightweight co-processor that is tightly coupled with a RISC-V processor. The proposed method can be applied to various algorithms without requiring software developers to undertake substantial design effort to harden their implementations against SCA. XDIVINSA has been implemented on the SASEBO G-III board, which hosts a Kintex-7 XC7K160T FPGA device, for SCA mitigation evaluation. Experimental results based on non-specific t-statistic tests show that our solution can achieve leakage mitigation on the power side channel of different cryptographic kernels, i.e., Speck, ChaCha20, AES, and RSA, with an acceptable performance overhead compared to existing countermeasures.
Comparing Lattice Families for Bounded Distance Decoding near Minkowski’s Bound
In this report we analyse and compare the complexity of solving the Bounded Distance Decoding problem in two families of discrete logarithm lattices. Our algorithm uses the internal structure of the lattice to efficiently decode an error close to Minkowski's bound. This procedure can be used as the decryption algorithm of an encryption scheme, where the internal structure of the lattice serves as the secret key. In addition, one of these lattices was used in [1] to construct a family of one-way functions. We present a cryptanalysis of that scheme and prove that the stated key sizes are insufficient for the required security level.
Collisions in Supersingular Isogeny Graphs and the SIDH-based Identification Protocol
The digital signature schemes that have been proposed so far in the setting of the Supersingular Isogeny Diffie-Hellman scheme (SIDH) were obtained by applying the Fiat-Shamir transform - and a quantum-resistant analog, the Unruh transform - to an interactive identification protocol introduced by De Feo, Jao and Plût. The security of the resulting schemes is therefore deduced from that of the base identification protocol.
In this paper, we revisit the proofs that have appeared in the literature for the special soundness property of the aforementioned SIDH-based identification protocol. All such proofs consider the same extraction algorithm, which is claimed to always extract the witness for a statement x when given two valid transcripts, with the same commitment and different challenges, relative to x itself. We show that this is not always the case, with some explicit counterexamples. The general argument fails due to some special cycles, which we call collisions, in supersingular isogeny graphs.
We provide some theoretical results on their existence, and discuss their impact on the security of the SIDH-based digital signatures. Relying on the Generalised Riemann Hypothesis, we also introduce an alternative extractor for which we rigorously prove the special soundness property.
Privacy-Enhancing Group Signcryption Scheme
In the last decades, several signcryption schemes have been proposed for different privacy-enhancing purposes. In this paper, we propose a new privacy-enhancing group signcryption scheme that provides: unforgeability, confidentiality, ciphertext and sender anonymity, traceability, unlinkability, exculpability, coalition-resistance, and unforgeable tracing verification.
It is important to note that the proposed scheme allows a signer to anonymously signcrypt a message on the group's behalf (i.e., sender anonymity). A security analysis of the scheme is also provided.
Our proposal is proven to be strongly existentially unforgeable under an adaptive chosen message attack, indistinguishable under an adaptive chosen ciphertext attack, and to provide ciphertext anonymity under an adaptive chosen ciphertext attack.
Furthermore, the scheme is extended to work in a multi-receiver scenario, where an authorized group of receivers is able to unsigncrypt the ciphertext.
The experimental results show that our scheme is efficient even on computationally restricted devices and can therefore be used in many IoT applications. The Signcrypt protocol on smart cards takes less than 1~s (including communication overhead), and the running time of the Unsigncrypt protocol on current ARM devices is negligible (less than 40 ms).
Binary Search in Secure Computation
Binary search is one of the most popular algorithms in computer science. Realizing it in the context of secure multiparty computation, which demands data-oblivious execution, however, is extremely non-trivial. It has previously been implemented only using oblivious RAM (ORAM) for secure computation; in this work, we initiate the study of this topic using conventional secure computation techniques based on secret sharing. We develop a suite of protocols with different properties and structures for searching a private dataset of $m$ elements by a private numeric key. Our protocols achieve $O(m)$ and $O(\sqrt{m})$ communication using only standard and readily available operations based on secret sharing. We further extend our protocols to support write operations, namely, binary search that obliviously updates the selected element, and realize two variants: updating non-key fields and updating the key field. Our implementation results indicate that, even after applying known and our own optimizations to the fastest ORAM constructions, our solutions are faster than optimized ORAM schemes for datasets of up to $2^{30}$ elements, by up to two orders of magnitude. We hope that this work will prompt further interest in seeking efficient realizations of this important problem.
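To see why binary search resists data-oblivious execution, note that the classical algorithm's memory accesses depend on the secret key. The sketch below illustrates, in the clear, the arithmetic-selection idea behind an $O(m)$ oblivious search: every element is touched and the answer is extracted with inner products rather than branches; in the real protocols the comparisons and products run on secret shares.

```python
def oblivious_search(data, key):
    """Touch every element; combine with arithmetic, never with branches.

    less[i] is 1 iff data[i] < key, computed here in the clear but realized
    with secret-shared comparison sub-protocols in MPC. The sum of the
    less-bits is the rank of the key, and the matching element is extracted
    with an equality-indicator inner product instead of a data-dependent
    memory access.
    """
    m = len(data)
    less = [int(data[i] < key) for i in range(m)]  # comparison sub-protocol
    idx = sum(less)                                # rank of the key
    # Select data[idx] without an array lookup at a secret index:
    value = sum(data[i] * int(i == idx) for i in range(m))
    return idx, value

assert oblivious_search([2, 5, 8, 13], 8) == (2, 8)
```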
Aggregating and thresholdizing hash-based signatures using STARKs
This work presents an approach for compressing hash-based signatures using STARKs (Ben-Sasson et al., 2018). We focus on constructing a hash-based t-of-n threshold signature scheme, as well as an aggregate signature scheme. In both constructions, an aggregator collects individual one-time hash-based signatures and outputs a STARK proof attesting that the signatures are valid and meet the required thresholds. This proof then serves the role of the aggregate or threshold signature. We demonstrate the concrete performance of such constructions, having implemented the algebraic intermediate representations (AIR) for them, along with an experimental evaluation over our implementation of the STARK protocol.
We find that, even when we aggregate thousands of signatures, the final aggregated size ranges between 100KB and 200KB. This makes our schemes attractive when there exist at least $50$ one-or-few-times hash-based signatures -- such as in the blockchain setting.
We also observe that for STARK-based signature aggregation, the size of individual signatures is less important than the number of hash invocations and the complexity of the signature verification algorithm. This implies that simple hash-based signature variants (e.g. Lamport, HORST, BPQS) are well-suited for aggregation, as their large individual signatures serve only as witnesses to the ZKP circuit and are not needed for aggregate signature verification.
Our constructions are directly applicable as scalable solutions for post-quantum secure blockchains which typically employ blocks of hundreds or thousands of signed transactions. Moreover, stateful hash-based one-or-few-times signatures are already used in some PQ-ready blockchains, as address reuse is typically discouraged for privacy reasons.
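As a reminder of how small the verification circuit of a simple hash-based signature is, here is a minimal sketch of Lamport's one-time scheme for short messages (toy parameters; real use signs a 256-bit digest, and a key pair must never sign two different messages):

```python
import hashlib, os

H = lambda x: hashlib.sha256(x).digest()
BITS = 16  # toy message length; real schemes sign a 256-bit message digest

def keygen():
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(BITS)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(sk, msg_bits):
    # Reveal one preimage per message bit; the other preimage stays secret.
    return [sk[i][b] for i, b in enumerate(msg_bits)]

def verify(pk, msg_bits, sig):
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(msg_bits))

msg = [1, 0] * (BITS // 2)
sk, pk = keygen()
assert verify(pk, msg, sign(sk, msg))
```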
A Correlation Attack on Full SNOW-V and SNOW-Vi
In this paper, a method for searching for correlations between the binary stream of the Linear Feedback Shift Register (LFSR) and the keystream of SNOW-V and SNOW-Vi is presented, based on the technique of approximation of composite functions. With the aid of the linear relationship between the four taps of the LFSR input into the Finite State Machine (FSM) at three consecutive clocks, we present an automatic search model based on the SAT/SMT technique and find a series of linear approximation trails with high correlation. By exhausting the intermediate masks, we find a binary linear approximation with a correlation $-2^{-47.76}$. Using this approximation, we propose a correlation attack on SNOW-V with an expected time complexity $2^{246.53}$, a memory complexity $2^{238.77}$, and $2^{237.5}$ keystream words generated by the same key and Initial Vector (IV). For SNOW-Vi, we provide a binary linear approximation with the same correlation and mount a correlation attack with the same complexity as that on SNOW-V. To the best of our knowledge, this is the first known attack on full SNOW-V and SNOW-Vi that is better than exhaustive key search with respect to time complexity. The results indicate that neither SNOW-V nor SNOW-Vi can guarantee the 256-bit security level if we ignore the design constraint that the maximum length of keystream for a single pair of key and IV is less than $2^{64}$.
On the modifier Q for multivariate signature schemes
At PQCrypto 2021, Smith-Tone proposed a new modifier, called ``Q'', to construct a fast multivariate signature scheme from a known scheme. In the present paper, we propose an approach that weakens the security of this modifier.
An improvement of algorithms to solve under-defined systems of multivariate quadratic equations
The problem of solving a system of multivariate quadratic equations over a finite field is known to be hard in general. However, several algorithms can solve such systems efficiently when the number of variables is sufficiently larger than the number of equations (e.g., Kipnis et al., Eurocrypt 1999; Thomae-Wolf, PKC 2012; Cheng et al., PQCrypto 2014; and Furue et al., PQCrypto 2021). In the present paper, we propose a new algorithm that is applicable when the number of variables is smaller than what the previously given algorithms require. We also analyze the security of MAYO, a variant of UOV proposed at SAC 2021 and submitted to NIST's standardization project of additional digital signature schemes for post-quantum cryptography.
On the security of Hufu-UOV
In 2019, Tao proposed a new variant of UOV with small keys, called Hufu-UOV. This paper studies its security.
Brakedown: Linear-time and field-agnostic SNARKs for R1CS
This paper introduces Brakedown, the first implemented system that provides linear-time SNARKs for NP, meaning the prover incurs $O(N)$ finite field operations to prove the satisfiability of an $N$-sized R1CS instance. Brakedown’s prover is faster, both concretely and asymptotically, than prior SNARK implementations. Brakedown does not require a trusted setup and is plausibly post-quantum secure. Furthermore, it is compatible with arbitrary finite fields of sufficient size; this property is new among implemented arguments with sublinear proof sizes.
To design Brakedown, we observe that recent work of Bootle, Chiesa, and Groth (BCG, TCC 2020) provides a polynomial commitment scheme that, when combined with the linear-time interactive proof system of Spartan (CRYPTO 2020), yields linear-time IOPs and SNARKs for R1CS (a similar theoretical result was previously established by BCG, but our approach is conceptually simpler, and crucial for achieving high-speed SNARKs). A core ingredient in the polynomial commitment scheme that we distill from BCG is a linear-time encodable code. Existing constructions of such codes are believed to be impractical. Nonetheless, we design and engineer a new one that is practical in our context.
We also implement a variant of Brakedown that uses Reed-Solomon codes instead of our linear-time encodable codes; we refer to this variant as Shockwave. Shockwave is not a linear-time SNARK, but it provides shorter proofs and lower verification times than Brakedown (it also provides a faster prover than prior plausibly post-quantum SNARKs).
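For readers unfamiliar with R1CS: an instance consists of matrices $A$, $B$, $C$, and a vector $z$ satisfies it when the entrywise product $(Az) \circ (Bz)$ equals $Cz$. A toy check follows (over the integers for readability; real instances live over a finite field):

```python
import numpy as np

def r1cs_satisfied(A, B, C, z):
    """Witness z satisfies the R1CS iff (A z) * (B z) == C z entrywise."""
    return np.array_equal((A @ z) * (B @ z), C @ z)

# One-constraint toy instance encoding x * x = y, with the conventional
# witness layout z = (1, x, y).
A = np.array([[0, 1, 0]])
B = np.array([[0, 1, 0]])
C = np.array([[0, 0, 1]])
print(r1cs_satisfied(A, B, C, np.array([1, 3, 9])))   # True
print(r1cs_satisfied(A, B, C, np.array([1, 3, 10])))  # False
```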
Rate One-Third Non-malleable Codes
At ITCS 2010, Dziembowski, Pietrzak, and Wichs introduced Non-malleable Codes (NMCs) which protect against tampering of a codeword of a given message into the codeword of a related message. A well-studied model of tampering is the $2$-split-state model where the codeword consists of two independently tamperable states. As with standard error-correcting codes, it is of great importance to build codes with high rates.
Following a long line of work, Aggarwal and Obremski (FOCS 2020) showed the first constant-rate non-malleable code in the $2$-split-state model; however, this constant was a minuscule $10^{-6}$! In this work, we build a non-malleable code with rate $1/3$. This nearly matches the rate-$1/2$ upper bound for this model due to Cheraghchi and Guruswami (ITCS 2014).
Our construction is simple, requiring just an inner-product extractor, a seeded extractor, and an affine-evasive function.
On the Multiplicative Complexity of Cubic Boolean Functions
Multiplicative complexity is a relevant complexity measure for many advanced cryptographic protocols such as multi-party computation, fully homomorphic encryption, and zero-knowledge proofs, where processing AND gates is more expensive than processing XOR gates. For Boolean functions, multiplicative complexity is defined as the minimum number of AND gates that are sufficient to implement a function with a circuit over the basis (AND, XOR, NOT). In this paper, we study the multiplicative complexity of cubic Boolean functions. We propose a method to implement a cubic Boolean function with a small number of AND gates and provide upper bounds on the multiplicative complexity that are better than the known generic bounds.
MUSE: Secure Inference Resilient to Malicious Clients
The increasing adoption of machine learning inference in applications has led to a corresponding increase in concerns surrounding the privacy guarantees offered by existing mechanisms for inference. Such concerns have motivated the construction of efficient secure inference protocols that allow parties to perform inference without revealing their sensitive information. Recently, there has been a proliferation of such proposals, rapidly improving efficiency. However, most of these protocols assume that the client is semi-honest, that is, the client does not deviate from the protocol; yet in practice, clients are many, have varying incentives, and can behave arbitrarily.
To demonstrate that a malicious client can completely break the security of semi-honest protocols, we first develop a new model-extraction attack against many state-of-the-art secure inference protocols. Our attack enables a malicious client to learn model weights with 22x-312x fewer queries than the best black-box model-extraction attack and scales to much deeper networks.
Motivated by the severity of our attack, we design and implement MUSE, an efficient two-party secure inference protocol resilient to malicious clients. MUSE introduces a novel cryptographic protocol for conditional disclosure of secrets to switch between authenticated additive secret shares and garbled circuit labels, and an improved Beaver's triple generation procedure which is 8x-12.5x faster than existing techniques.
These protocols allow MUSE to push a majority of its cryptographic overhead into a preprocessing phase: compared to the equivalent semi-honest protocol (which is close to state-of-the-art), MUSE's online phase is only 1.7x-2.2x slower and uses 1.4x more communication. Overall, MUSE is 13.4x-21x faster and uses 2x-3.6x less communication than existing secure inference protocols which defend against malicious clients.
Neyman’s Smoothness Test: a Trade-off between Moment-based and Distribution-based Leakage Detections
Leakage detection tests have become an indispensable tool for testing implementations featuring side-channel countermeasures such as masking. Whilst moment-based techniques such as Welch's t-test are universally powerful if there is leakage in a central moment, they naturally fail if this is not the case. Distribution-based techniques such as the $\chi^2$-test then come to the rescue, but they have been shown not to be robust with regard to noise. In this paper, we propose a novel leakage detection technique based on Neyman's smoothness test. We find that our new test is robust with respect to noise (similar to the merit of Welch's t-test), and can pick up on leakage that is not located in central moments (similar to the merit of the $\chi^2$-test). We also find that there is a sweet spot where Neyman's test outperforms both the t-test and the $\chi^2$-test. Realistic measurements confirm that such a sweet spot is relevant in practice for detecting implementation flaws.
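For comparison, the moment-based baseline mentioned above, Welch's t-test in the usual fixed-vs-random TVLA setup, is computed per sample point from two sets of traces; values beyond the customary $\pm 4.5$ threshold flag a first-order leak. A sketch with synthetic traces (the threshold and setup are standard conventions, not details of this paper):

```python
import numpy as np

def welch_t(traces_a, traces_b):
    """Per-sample Welch's t-statistic between two sets of traces
    (rows = traces, columns = sample points)."""
    ma, mb = traces_a.mean(axis=0), traces_b.mean(axis=0)
    va, vb = traces_a.var(axis=0, ddof=1), traces_b.var(axis=0, ddof=1)
    na, nb = len(traces_a), len(traces_b)
    return (ma - mb) / np.sqrt(va / na + vb / nb)

# Fixed-vs-random TVLA: |t| > 4.5 at any sample point flags first-order leakage.
rng = np.random.default_rng(0)
fixed = rng.normal(0.0, 1.0, (1000, 50))
random = rng.normal(0.0, 1.0, (1000, 50))
random[:, 20] += 0.5                       # plant a synthetic leak at sample 20
print(np.where(np.abs(welch_t(fixed, random)) > 4.5)[0])  # -> [20]
```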
Reinforced Concrete: A Fast Hash Function for Verifiable Computation
We propose a new hash function, Reinforced Concrete, which is the first general-purpose hash that is fast both for a zero-knowledge prover and in native x86 computations. It is suitable for a wide range of zero-knowledge proofs and protocols, from set membership to general-purpose verifiable computation. Being up to 15x faster than its predecessor, the Poseidon hash, Reinforced Concrete inherits security from traditional time-tested schemes such as AES, while taking its zero-knowledge performance from a novel and efficient decomposition of a prime field into compact buckets.
The new hash function is suitable for a wide range of applications like privacy-preserving cryptocurrencies, verifiable encryption, protocols with set membership proofs, or verifiable computation. It may serve as a drop-in replacement for various prime-field hashes such as variants of MiMC, Poseidon, Pedersen hash, and others.
Randomness Bounds for Private Simultaneous Messages and Conditional Disclosure of Secrets
In cryptography, the private simultaneous messages (PSM) and conditional disclosure of secrets (CDS) are closely related fundamental primitives. We consider $k$-party PSM and CDS protocols for a function $f$ with a common random string, where each party $P_i$ generates a message and sends it to a referee $P_0$. We consider bounds for the optimal length $\rho$ of the common random string among $k$ parties (or, {\it randomness complexity}) in PSM and CDS protocols with perfect and statistical privacy through combinatorial and entropic arguments. ($i$) We provide general connections from the optimal total length $\lambda$ of the messages (or, {\it communication complexity}) to the randomness complexity $\rho$. ($ii$) We also prove randomness lower bounds in PSM and CDS protocols for general functions. ($iii$) We further prove randomness lower bounds for several important explicit functions. They include the following results.
For PSM protocols with perfect privacy, we prove $\lambda-1 \le \rho$ as the general connection. From the general lower bound, we prove $\rho\ge 2^{(k-1)n}-1$ for a general function $f:(\{0,1\}^n)^k\rightarrow\{0,1\}$ under universal reconstruction, in which $P_0$ is independent of $f$. This implies that the Feige-Kilian-Naor PSM protocol for a general function [Proc.~STOC '94, pp.554--563] is optimal with respect to randomness complexity. We also provide a randomness lower bound $\rho> kn-2$ for a generalized inner product function. This implies the optimality of the $2$-party PSM protocol for the inner-product function of Liu, Vaikuntanathan, and Wee [Proc.~CRYPTO 2017, pp.758--790].
For CDS protocols with perfect privacy, we show $\rho\ge\lambda-\sigma$ as the general connection by arguments similar to those for PSM protocols, where $\sigma$ is the length of the secrets. We also obtain randomness lower bounds $\rho\ge (k-1)\sigma$ for XOR, AND, and generalized inner product functions. These imply the optimality of Applebaum and Arkis's $k$-party CDS protocol for a general function [Proc. TCC 2018, pp.317--344] up to a constant factor for large $k$.
Lelantus-CLA
This article presents Lelantus-CLA, an adaptation of Lelantus for use with the Mimblewimble protocol and confidential assets. Whereas Mimblewimble achieves a limited amount of privacy by merging transactions that occur in the same block, Lelantus uses a logarithmic-size proof of membership to effectively enable merging across different blocks. At a high level, this allows value to be added to a common pool and then spent privately, but only once. We explain how to adapt this construction to Mimblewimble, while at the same time simplifying the protocol where possible.
Confidential assets is a mechanism that allows multiple currencies to co-exist in the same ledger and (optionally) enables transactions to be conducted without disclosing the currency.
Finally, we also describe how we can use Bulletproof “coloring” to enable offline payments, thus addressing one of the original shortcomings of Mimblewimble.
SoK: Cryptanalysis of Encrypted Search with LEAKER - A framework for LEakage AttacK Evaluation on Real-world data
An encrypted search algorithm (ESA) allows a user to encrypt its data while preserving the ability to search over it. As all practical solutions leak some information, cryptanalysis plays an important role in the area of encrypted search. Starting with the work of Islam et al. (NDSS'12), many attacks have been proposed that exploit different leakage profiles under various assumptions. While these attacks improve our understanding of leakage, it can sometimes be difficult to draw definite conclusions about their practical performance. This is due to several reasons, including a lack of open-source implementations (which are needed to reproduce results), empirical evaluations that are conducted on restricted datasets, and in some cases reliance on relatively strong assumptions that can significantly affect accuracy.
In this work, we address these limitations. First, we design and implement LEAKER, an open-source framework that evaluates the major leakage attacks against any dataset and that we hope will serve the community as a common way to evaluate leakage attacks. We identify new real-world datasets that capture different use cases for ESAs and, for the first time, include real-world user queries. Finally, we use LEAKER to systematically evaluate known attacks on our datasets, uncovering sometimes unexpected properties that increase or diminish accuracy. Our evaluation shows that some attacks work better on real-world data than previously thought and that others perform worse.
Optimal encodings to elliptic curves of $j$-invariants $0$, $1728$
This article provides new constant-time encodings $\mathbb{F}_{\!q}^* \to E(\mathbb{F}_{\!q})$ to ordinary elliptic $\mathbb{F}_{\!q}$-curves $E$ of $j$-invariants $0$, $1728$ having a small prime divisor of the Frobenius trace. Therefore, all curves of $j = 1728$ are covered. This is also true for the Barreto--Naehrig curves BN512, BN638 from the international cryptographic standards ISO/IEC 15946-5, TCG Algorithm Registry, and FIDO ECDAA Algorithm. Many $j = 1728$ curves, as well as BN512 and BN638, are not appropriate for the most efficient prior encodings, so in fact only the universal SW (Shallue--van de Woestijne) encoding was previously applicable to them. However, that encoding (in contrast to the new ones) cannot be computed at the cost of one exponentiation in the field $\mathbb{F}_{\!q}$.
Limits of Polynomial Packings for $\mathbb{Z}_{p^k}$ and $\mathbb{F}_{p^k}$
We formally define polynomial packing methods and initiate a unified study of related concepts in various contexts of cryptography. This includes homomorphic encryption (HE) packing and reverse multiplication-friendly embedding (RMFE) in information-theoretically secure multi-party computation (MPC). We prove several upper bounds and impossibility results on packing methods for $\mathbb{Z}_{p^k}$ or $\mathbb{F}_{p^k}$-messages into $\mathbb{Z}_{p^t}[x]/f(x)$ in terms of (i) packing density, (ii) level-consistency, and (iii) surjectivity. These results have implications on recent development of HE-based MPC over $\mathbb{Z}_{2^k}$ secure against actively corrupted majority and provide new proofs for upper bounds on RMFE.
AdVeil: A Private Targeted Advertising Ecosystem
This paper presents AdVeil, a private targeted advertising ecosystem with strong security guarantees for end users.
AdVeil is built around an untrusted advertising network which targets relevant ads to users and processes metrics without learning any of the users’ personal information in the process.
Our targeting protocol combines private information retrieval with locality-sensitive hashing for nearest neighbor search.
User data is kept locally on the client, giving users full control over and transparency into the contents of their targeting profiles.
AdVeil supports private billing metrics, allowing the ad network to correctly charge advertisers and pay websites for publishing ads. This is done without the ad network learning which user interacted with which ads. AdVeil achieves this using an anonymizing proxy (e.g., Tor) along with unlinkable anonymous tokens to identify and prevent fraud.
We build a prototype implementation of AdVeil to demonstrate its potential for real-world deployment. Our evaluation shows that AdVeil scales to ad networks with millions of targeting categories. Targeting from a set of 1 million possible categories takes roughly 1.6 seconds with a single 16-core server and is highly parallelizable. Targeting is performed out-of-band (e.g., on a daily basis) while ad delivery happens in real time as users browse the web. Verifying reports (for fraud prevention) requires less than 300 microseconds per report.
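To illustrate the locality-sensitive-hashing half of the targeting protocol, here is a generic random-hyperplane (SimHash) sketch for cosine similarity, where nearby profile vectors land in nearby buckets with high probability; the abstract does not specify which LSH family AdVeil uses, so this is a representative instance only.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM, NBITS = 64, 16
planes = rng.normal(size=(NBITS, DIM))  # shared random hyperplanes

def simhash(v):
    """Bucket label: one sign bit per hyperplane."""
    bits = (planes @ v) > 0
    return int("".join("1" if b else "0" for b in bits), 2)

profile = rng.normal(size=DIM)
neighbor = profile + 0.05 * rng.normal(size=DIM)   # a close profile
stranger = rng.normal(size=DIM)                    # an unrelated profile

print(bin(simhash(profile) ^ simhash(neighbor)).count("1"))  # few differing bits
print(bin(simhash(profile) ^ simhash(stranger)).count("1"))  # roughly NBITS/2
```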
Commitment Schemes from Supersingular Elliptic Curve Isogeny Graphs
In this work we present two commitment schemes based on hardness assumptions arising from supersingular elliptic curve isogeny graphs, which possess strong security properties. The first is based on the CGL hash function, while the second is based on the SIDH framework; both require a trusted third party for the setup phase. The proofs of security of these protocols depend on properties of non-backtracking random walks on regular graphs. The optimal efficiency of these protocols depends on the size of a certain constant, defined in the paper, related to the relevant isogeny graphs, for which we give conjectural upper bounds.
A lightweight ISE for ChaCha on RISC-V
ChaCha is a high-throughput stream cipher designed with the aim of ensuring high security margins while achieving high performance on software platforms. RISC-V, an emerging, free, and open Instruction Set Architecture (ISA), is being developed with many instruction set extensions (ISEs). ISEs are a native concept in RISC-V, allowing the relatively small base ISA to be tailored to different use cases, including cryptographic acceleration via either standard or custom ISEs. This paper proposes a lightweight ISE to support ChaCha on RISC-V architectures, targeting embedded computing systems such as IoT edge devices that do not include a vector engine. The proposed ISE is designed to accelerate the computation of the ChaCha block function and to align with the RISC-V design principles. We show that our proposed ISE helps to improve the efficiency of the ChaCha block function: the ISE-assisted implementation of ChaCha encryption is at least $5.4\times$ and $3.4\times$ faster than the OpenSSL baseline and an ISA-based optimised implementation, respectively. For encrypting short messages, the ISE-assisted implementation achieves performance comparable to implementations using vector extensions with very high area overhead.
LOVE a pairing
The problem of securely outsourcing the computation of a bilinear pairing has been widely investigated in the literature. Designing an efficient protocol with the desired functionality has, however, been an open challenge for a long time. Recently, Di Crescenzo et al. (CARDIS’20) proposed the first suite of protocols for securely and efficiently delegating pairings with online inputs in the presence of a malicious server. We progress along this path with the aim of LOVE (Lowering the cost of Outsourcing and Verifying Efficiently) a pairing. Our contributions are threefold. First, we propose a protocol (LOVE) that improves the efficiency of Di Crescenzo et al.’s proposal for securely delegating pairings with online, public inputs. Second, we provide the first implementation of efficient protocols in this setting. Finally, we evaluate the performance of our LOVE protocol in different application scenarios by benchmarking an implementation using BN, BLS12 and BLS24 pairing-friendly curves. Interestingly, compared to Di Crescenzo et al.’s protocol, LOVE is up to 29.7% faster for the client, up to 24.9% faster for the server, and incurs 23-24% less communication, depending on the choice of parameters. Furthermore, we note that our LOVE protocol is especially suited for subgroup-secure groups: checking the correctness of the delegated pairing requires up to 56.2% less computation than evaluating the pairing locally (no delegation). This makes LOVE the most efficient protocol to date for securely outsourcing the computation of a pairing with online public inputs, even when the server is malicious.
Structural Attack (and Repair) of Diffused-Input-Blocked-Output White-Box Cryptography
In some practical enciphering frameworks, operational constraints may require that a secret key be embedded into the cryptographic algorithm.
Such implementations are referred to as White-Box Cryptography (WBC).
One technique consists of tabulating the algorithm specialized for its key, and then obfuscating the resulting tables.
The obfuscation consists of the application of invertible diffusion and confusion layers at the interface between tables
so that the analysis of input/output does not provide exploitable information about the concealed key material.
Several such protections have been proposed in the past and have already been cryptanalyzed through analyses of the complete WBC schemes.
In this article, we study a particular pattern for local protection (which can be leveraged for robust WBC); we formalize it as DIBO (for Diffused-Input-Blocked-Output).
This notion has been explored (albeit without having been nicknamed DIBO) in previous works.
However, we notice that guidelines for adequately selecting the invertible diffusion $\phi$ and the blocked bijections $B$ were missing.
Therefore, all choices of $\phi$ and $B$ were assumed to be suitable.
In fact, we show that most configurations can be attacked, and we give a mathematical proof of the attack.
The cryptanalysis tool is the number of zeros in a Walsh-Hadamard spectrum.
This ``spectral distinguisher'' improves on the previously known one (Sasdrich, Moradi, G{ü}neysu, FSE 2016).
However, we show that such an attack does not always work (even though it works most of the time).
Therefore, on the defense side, we give a straightforward rationale for the WBC implementations to be secure against such spectral attacks:
the random diffusion part $\phi$ shall be selected such that the rank of each restriction to bytes is full.
In AES's case, this seldom happens if $\phi$ is selected at random as a linear bijection of $\mathbb{F}_2^{32}$.
Thus, specific care shall be taken. Notice that the entropy of the resulting $\phi$ (suitable for WBC against spectral attacks) is still sufficient to design acceptable WBC schemes.
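The distinguisher statistic is the number of zeros in a Walsh-Hadamard spectrum, which is cheap to compute: the sketch below applies the standard fast Walsh-Hadamard transform to a Boolean function given as a truth table and counts the zero coefficients (the example function is illustrative).

```python
def walsh_spectrum(truth_table):
    """Fast Walsh-Hadamard transform of a Boolean function given as a 0/1
    truth table of length 2^n; entry u is sum over x of (-1)^(f(x) XOR u.x)."""
    w = [1 - 2 * b for b in truth_table]   # map {0,1} -> {+1,-1}
    h = 1
    while h < len(w):
        for i in range(0, len(w), 2 * h):
            for j in range(i, i + h):
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    return w

def zero_count(truth_table):
    return walsh_spectrum(truth_table).count(0)

# 3-bit example: f(x2, x1, x0) = x0 AND x1 (truth table indexed by integer x).
tt = [(x & 1) & ((x >> 1) & 1) for x in range(8)]
print(zero_count(tt))  # -> 4
```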
On Fingerprinting Attacks and Length-Hiding Encryption
It is well-known that already the length of encrypted messages may reveal sensitive information about encrypted data. Fingerprinting attacks enable an adversary to determine web pages visited by a user and even the language and phrases spoken in voice-over-IP conversations.
Prior research has established the general perspective that a length-hiding padding which is long enough to improve security significantly incurs an unfeasibly large bandwidth overhead. We argue that this perspective is a consequence of the choice of security models considered in prior works, which are based on classical indistinguishability of two messages, and that this does not reflect the attacker model of typical fingerprinting attacks well. Furthermore, these works also consider a model where the attacker is restricted to choosing messages of bounded length difference, depending on a given length-hiding padding of the encryption scheme. This restriction seems difficult to enforce in practice, because application-layer protocols are typically unaware of the concrete length-hiding padding applied by an underlying encryption protocol, such as TLS. We also do not want to make application-layer messages dependent on the underlying encryption scheme, but instead want to provide length-hiding encryption that satisfies the requirements of the given application.
Therefore, we propose a new perspective on length-hiding encryption, which aims to capture security against fingerprinting attacks more accurately. This makes it possible to concretely quantify the security provided by length-hiding padding against fingerprinting attacks, depending on the real message distribution of an application. We find that for many real-world applications (such as webservers with static content, DNS requests, Google search terms, or Wikipedia page visits) and their specific message distributions, even length-hiding padding with a relatively small bandwidth overhead of only 2-5% can already significantly improve security against fingerprinting attacks. This gives rise to a new perspective on length-hiding encryption, which helps in understanding how and under what conditions length-hiding encryption can be used to improve security.
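A toy calculation of the kind of trade-off quantified above: pad every message length up to the next multiple of a small block, then measure the bandwidth overhead and how many distinct lengths collapse onto each padded length. The length distribution below is synthetic; the paper's evaluation uses real application data.

```python
from collections import Counter

def pad_to(length, block):
    return ((length + block - 1) // block) * block

# Synthetic message-length distribution (bytes); real evaluations use actual
# application data such as DNS requests or static page sizes.
lengths = [517, 530, 541, 602, 618, 625, 631, 700, 711, 719]
BLOCK = 64

padded = [pad_to(l, BLOCK) for l in lengths]
overhead = sum(padded) / sum(lengths) - 1
buckets = Counter(padded)   # messages sharing a padded length are indistinguishable

print(f"bandwidth overhead: {overhead:.1%}")
print("anonymity buckets:", dict(buckets))
```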
On the Hardness of Ring/Module/Polynomial LWR Problems
The Learning with Rounding (LWR) problem is an important variant of the Learning with Errors (LWE) problem. Recently, Liu {\it{et al.}} presented a comprehensive study of LWR problems defined over algebraic number fields in CRYPTO 2020. However, their search-to-decision reductions of LWR problems depend heavily on the existence of the so-called {\it{Normal Integral Basis}} (NIB). A further deficiency is the lack of discussion of the choice of the secret $s$, and one could not show the {\it{worst-case}} hardness of decision LWR problems {\it{strictly}}, even for fields with NIB. In this paper, we give a more refined analysis of reductions between different LWR problems. Our contributions are summarized as follows: (1) We give a search-to-decision reduction of ring/module LWR problems defined over {\it{any}} number field $K=\QQ[x]/(\Phi(x))$ which is {\it{Galois}} over $\QQ$ with suitable parameters, {\it{regardless of the existence of NIB}}. (2) To the best of our knowledge, we give the first reduction from search ring/module LWE problems to the corresponding search/decision LWR problems. Hence, combining known hardness results for LWE problems, we can reduce {\it{worst-case}} ideal/module lattice problems to search/decision LWR problems {\it{strictly}}. (3) For the first time, we show the {\it{worst-case}} hardness of search/decision polynomial LWR problems defined over polynomial rings $\ZZ_q[x]/(\Phi(x))$ with {\it{comparably small parameters}}, which can be regarded as theoretical support for some ring/module LWR-based cryptosystems, e.g., the NIST Round $3$ candidate Saber. Finally, we also give some hardness results for middle-product polynomial LWR problems.
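Recall the basic sample underlying all of these LWR variants: for a secret $s$, a sample is $(a, \lfloor (p/q)\langle a, s\rangle \rceil \bmod p)$, so deterministic rounding from modulus $q$ down to $p < q$ replaces the explicit noise of LWE. A plain-integer sketch with toy parameters:

```python
import random

q, p, n = 1024, 16, 8   # toy moduli and dimension; real schemes use larger ones

def lwr_sample(s):
    a = [random.randrange(q) for _ in range(n)]
    inner = sum(ai * si for ai, si in zip(a, s)) % q
    b = round(p * inner / q) % p   # deterministic rounding replaces LWE noise
    return a, b

secret = [random.randrange(q) for _ in range(n)]
print(lwr_sample(secret))
```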
Efficient Information-Theoretic Multi-Party Computation over Non-Commutative Rings
We construct the first efficient MPC protocol that only requires black-box access to a non-commutative ring $R$. Previous results in the same setting were efficient only either for a constant number of corruptions or when computing branching programs and formulas. Our techniques are based on a generalization of Shamir's secret sharing to non-commutative rings, which we derive from the work on Reed--Solomon codes by Quintin, Barbier and Chabot (IEEE Transactions on Information Theory, 2013). When the center of the ring contains a set $A = \{\alpha_0, \ldots, \alpha_n\}$ such that $\forall i \neq j, \alpha_i - \alpha_j \in R^*$, the resulting secret sharing scheme is strongly multiplicative and we can generalize existing constructions over finite fields without much trouble.
Most of our work is devoted to the case where the elements of $A$ do not commute with all of $R$, but only commute with each other. For such rings, the secret sharing scheme cannot be linear ``on both sides'' and furthermore it is not multiplicative. Nevertheless, we are still able to build MPC protocols with a concretely efficient online phase and black-box access to $R$. As an example we consider the ring $\mathcal{M}_{m\times m}(\mathbb{Z}/2^k\mathbb{Z})$, for which, when $m > \log(n+1)$, we obtain protocols that require around $\lceil\log(n+1)\rceil/2$ times less communication and $2\lceil\log(n+1)\rceil$ times less computation than the state-of-the-art protocol based on Circuit Amortization Friendly Encodings (Dalskov, Lee and Soria-Vazquez, ASIACRYPT 2020).
In this setting with a ``less commutative'' $A$, our black-box preprocessing phase has a less practical complexity of $\poly(n)$. Due to this, we additionally provide specialized, concretely efficient preprocessing protocols for $R = \mathcal{M}_{m\times m}(\mathbb{Z}/2^k\mathbb{Z})$ that exploit the structure of the matrix ring.
Efficient Implementation of Lightweight Hash Functions on GPU and Quantum Computers for IoT Applications
Secure communication is an important aspect of Internet of Things (IoT) applications, needed to avoid cyber-security attacks and privacy issues. One of the key security aspects is data integrity, which can be protected by employing cryptographic hash functions. Recently, the US National Institute of Standards and Technology (NIST) initiated a competition to standardize lightweight hash functions targeting constrained devices, which can be used in IoT applications. Communication in IoT involves various hardware platforms, from low-end microcontrollers to high-end cloud servers with accelerators like GPUs. In this paper, we show that with carefully crafted implementation techniques, all the finalist hash function candidates in the NIST standardization can achieve high throughput on GPUs. This research output can be used in IoT gateway devices and cloud servers to perform data integrity checks at high speed. On top of that, we also present the first implementation of these hash functions on a quantum computer (IBM ProjectQ). The efficient implementation of these hash functions on GPUs and quantum computers is useful in evaluating their strength against brute-force attacks, which is important for protecting secure communication in IoT.
SIDH Proof of Knowledge
We show that the soundness proof for the De Feo-Jao-Plut identification scheme (the basis for supersingular isogeny Diffie--Hellman (SIDH) signatures) contains an invalid assumption, and we provide a counterexample for this assumption---thus showing the proof of soundness is invalid. As this proof was repeated in a number of works by various authors, multiple pieces of literature are affected by this result. Due to the importance of being able to prove knowledge of an SIDH key (for example, to prevent adaptive attacks), soundness is a vital property. Surprisingly, the problem of proving knowledge of a specific isogeny turns out to be considerably more difficult than was perhaps anticipated. The main results of this paper are a sigma protocol to prove knowledge of a walk of specified length in a supersingular isogeny graph, and a second one to additionally prove that the isogeny maps some torsion points to some other torsion points (as seen in SIDH public keys). Our scheme also avoids the SIDH identification scheme soundness issue raised by Ghantous, Pintore and Veroni. In particular, our protocol provides a non-interactive way of verifying correctness of SIDH public keys, and related statements, as protection against adaptive attacks.
Post-scriptum: Some months after this work was completed and made public, the SIDH assumption was broken in a series of papers by several authors. Hence, in the standard SIDH setting, some of the statements studied here now have trivial polynomial time non-interactive proofs. Nevertheless our first sigma protocol is unaffected by the attacks, and our second protocol may still be useful in present and future variants of SIDH that escape the attacks.
Zero-Knowledge Middleboxes
This paper initiates research on zero-knowledge middleboxes (ZKMBs). A ZKMB is a network middlebox that enforces network usage policies on encrypted traffic. Clients send the middlebox zero-knowledge proofs that their traffic is policy-compliant; these proofs reveal nothing about the client’s communication except that it complies with the policy. We show how to make ZKMBs work with unmodified encrypted-communication protocols (specifically TLS 1.3), making ZKMBs invisible to servers. As a contribution of independent interest, we design optimized zero-knowledge proofs for TLS 1.3 session keys.
We apply the ZKMB paradigm to several case studies. Experimental results suggest that in certain settings, performance is in striking distance of practicality; an example is a middlebox that filters domain queries (each query requiring a separate proof) when the client has a long-lived TLS connection with a DNS resolver. In such configurations, the middlebox’s overhead is 2–5 ms of running time per proof, and client latency to create a proof is several seconds. On the other hand, clients may have to store hundreds of MBs depending on the underlying zero-knowledge proof machinery, and for some applications, latency is tens of seconds.
Power-based Side Channel Attack Analysis on PQC Algorithms
Power-based side-channel attacks have been successfully conducted against proven cryptographic algorithms, including standardized algorithms such as AES and RSA. These algorithms are now supported by best practices in hardware and software to defend against malicious attacks. As NIST conducts the third round of the post-quantum cryptography (PQC) standardization process, a key task is to identify the security that candidate algorithms have against side-channel attacks, and the tradeoffs that must be made to obtain that level of protection. In this work, we document the development of a multi-target and multi-tool platform to conduct test vector leakage assessment of the candidate algorithms. The long-term goals of the platform are to 1) quantify test vector leakage of each of the primary and alternate candidates, 2) quantify test vector leakage of each of the candidates when adjustments and adaptations (e.g., masking) are applied, and 3) assess the equivalent security levels when tools of varying sophistication are used in the attack (e.g., commodity vs. specialized hardware). The goal of this work is to document the progress towards that standardized platform and to invite discussion on how to extend, refine, and distribute our tools.
Designing a Practical Code-based Signature Scheme from Zero-Knowledge Proofs with Trusted Setup
This paper defines a new practical construction for a code-based signature scheme. We introduce a new protocol that is designed to follow the recent paradigm known as ``Sigma protocol with helper'', and prove that the protocol's security reduces directly to the Syndrome Decoding Problem. The protocol is then converted to a full-fledged signature scheme via a sequence of generic steps that include: removing the role of the helper; incorporating a variety of protocol optimizations (using e.g., Merkle trees); applying the Fiat-Shamir transformation. The resulting signature scheme is EUF-CMA secure in the QROM, with the following advantages:
a) Security relies on only minimal assumptions and is backed by a long-studied NP-complete problem;
b) the trusted setup structure allows for obtaining an arbitrarily small soundness error.
This minimizes the required number of repetitions, thus alleviating a major bottleneck associated with Fiat-Shamir schemes.
We outline an initial performance estimation to confirm that our scheme is competitive with respect to existing solutions of similar type.
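The underlying hard problem, syndrome decoding, is easy to state: given a random binary parity-check matrix $H$ and a syndrome $s$, find a low-weight $e$ with $He = s$. The sketch below generates an instance the honest way, by planting a secret weight-$w$ error (sizes are toy values, far below cryptographic parameters):

```python
import random

n, k, w = 24, 12, 3   # toy code length, dimension, and error weight

def random_instance(seed=0):
    rng = random.Random(seed)
    H = [[rng.randrange(2) for _ in range(n)] for _ in range(n - k)]
    e = [0] * n
    for i in rng.sample(range(n), w):   # plant a secret weight-w error
        e[i] = 1
    s = [sum(H[r][c] & e[c] for c in range(n)) % 2 for r in range(n - k)]
    return H, s, e

H, s, e = random_instance()
# The prover knows e; a verifier checks H e = s and weight(e) = w.
assert sum(e) == w
```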
Implementing and Measuring KEMTLS
KEMTLS is a novel alternative to the Transport Layer Security (TLS) handshake that integrates post-quantum algorithms. It uses key encapsulation mechanisms (KEMs) for both confidentiality and authentication, achieving post-quantum security while obviating the need for expensive post-quantum signatures. The original KEMTLS paper presents a security analysis, a Rust implementation, and benchmarks over emulated networks. In this work, we provide full Go implementations of KEMTLS and other post-quantum handshake alternatives, describe their integration into a distributed system, and provide performance evaluations under real network conditions. We compare the standard (non-quantum-resistant) TLS 1.3 handshake with three alternatives: one that uses post-quantum signatures in combination with post-quantum KEMs (PQTLS), one that uses KEMTLS, and one that is a reduced-round-trip version of KEMTLS (KEMTLS-PDK). In addition to the performance evaluations, we discuss how the design of these protocols impacts TLS from an implementation and configuration perspective.
Obfustopia Built on Secret-Key Functional Encryption
We show that indistinguishability obfuscation (IO) for all circuits can be constructed solely from secret-key functional encryption (SKFE). In the construction, SKFE needs to be secure against an unbounded number of functional key queries, that is, collusion-resistant. Our strategy is to replace public-key functional encryption (PKFE) in the construction of IO proposed by Bitansky and Vaikuntanathan (FOCS 2015) with puncturable SKFE. Bitansky and Vaikuntanathan introduced the notion of puncturable SKFE and observed that the strategy works. However, it has not been clear whether we can construct puncturable SKFE without assuming PKFE. In particular, it has not been known whether puncturable SKFE is constructed from standard SKFE. In this work, we show that a relaxed variant of puncturable SKFE can be constructed from collusion-resistant SKFE. Moreover, we show that the relaxed variant of puncturable SKFE is sufficient for constructing IO.
In addition, we also study the relation of collusion-resistance and succinctness for SKFE. Functional encryption is said to be weakly succinct if the size of its encryption circuit is sub-linear in the size of functions. We show that collusion-resistant SKFE can be constructed from weakly succinct SKFE supporting only one functional key.
By combining the above two results, we show that IO for all circuits can be constructed from weakly succinct SKFE supporting only one functional key.
Improve Neural Distinguisher for Cryptanalysis
At CRYPTO'19, Gohr built a bridge between deep learning and cryptanalysis. Based on deep neural networks, he trained neural distinguishers for Speck32/64 using a plaintext difference and a single ciphertext pair. Compared with purely differential distinguishers, neural distinguishers successfully exploit features of the ciphertext pairs. Moreover, with the help of neural distinguishers, he attacked 11-round Speck32/64 using Bayesian optimization. At EUROCRYPT'21, Benamira et al. presented a detailed analysis of the inner workings of Gohr's distinguishers. Although these works opened a new direction of machine-learning-aided cryptanalysis, two research gaps remain that researchers are eager to fill: (1) How can neural distinguishers be further improved? (2) Can we conduct effective key recovery on large-size block ciphers using neural distinguishers?
In this paper, we propose a new algorithm and model to improve neural distinguishers in terms of accuracy and the number of rounds, and we present effective neural-aided attacks on large-size block ciphers. First, we design an algorithm based on SAT to improve neural distinguishers. With the help of a SAT/SMT solver, we obtain new effective neural distinguishers for SIMON using the input differences of high-probability differential characteristics. Second, we propose a new neural distinguisher model that uses multiple output differences. Inspired by Benamira's work and by data augmentation in deep learning, we use the output differences to exploit more derived features when training neural distinguishers, splicing the output differences into a matrix to form one sample. Based on this new model, we construct neural distinguishers for SIMON and Speck with improvements in both the number of rounds and accuracy. Using our neural distinguishers, we can better distinguish reduced-round NSA block ciphers from pseudo-random permutations.
Moreover, we perform practical key recovery attacks on different versions of SIMON. For SIMON32/64 and SIMON48/96, we prepend additional 2-round optimal characteristics, found with a SAT/SMT solver, to our neural distinguishers and attack 13-round SIMON32/64 and 14-round SIMON48/96 using Gohr's key recovery framework. For SIMON64/128, the precomputation, especially of the wrong-key response profile, is prohibitively time-consuming for most researchers. However, we show experimentally that the distribution of the wrong-key profile is pseudo-periodic. Based on this, we use a partial wrong-key profile to describe the whole wrong-key response profile and then propose a generic key recovery attack scheme that can attack large-size block ciphers. As an application, we perform a key recovery attack on 13-round SIMON64/128 using an 11-round neural distinguisher. All our results are confirmed by experiments (source code available online).
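As a rough illustration of the multiple-output-difference sample format (a minimal sketch under our own assumptions; the cipher below is a stand-in, not SIMON or Speck, and the paper's actual pipeline, labels, and network are not reproduced):

    import secrets
    import numpy as np

    def toy_encrypt(p: int, key: int) -> int:
        """Placeholder for a round-reduced 32-bit block cipher."""
        x = (p ^ key) & 0xFFFFFFFF
        for _ in range(3):
            x = (((x << 7) | (x >> 25)) ^ (key * 0x9E3779B9)) & 0xFFFFFFFF
        return x

    def make_sample(key: int, delta: int, k: int = 4) -> np.ndarray:
        """Splice k output differences (as bit rows) into one matrix sample."""
        rows = []
        for _ in range(k):
            p0 = secrets.randbits(32)
            d = toy_encrypt(p0, key) ^ toy_encrypt(p0 ^ delta, key)
            rows.append([(d >> i) & 1 for i in range(32)])
        return np.array(rows, dtype=np.uint8)      # shape (k, 32): one sample

    sample = make_sample(key=secrets.randbits(32), delta=0x00400000)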
Quantum collision finding for homomorphic hash functions
Hash functions are a basic cryptographic primitive. Certain hash functions aim to prove security against collision and preimage attacks via reductions to known hard problems, and they usually have additional properties that enable those reductions. Hash functions that are additive or multiplicative are vulnerable to a quantum attack based on the hidden subgroup problem algorithm. Using a quantum oracle for the hash, we can reconstruct the kernel of the hash function, which is enough to find collisions and second preimages. When the hash function is additive with respect to the group operation in an Abelian group, there is always an efficient implementation of this attack. We present concrete attacks against provably secure hash functions, including a preimage attack on $\oplus$-linear hash functions and attacks on certain multiplicative homomorphic hash schemes.
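To see why knowing the kernel suffices, consider a toy $\oplus$-linear hash (our illustration, not the paper's algorithm; the quantum step recovers a kernel element, which we simply construct by hand here). For any $\mathbb{F}_2$-linear $H$ and kernel element $k$, linearity gives $H(x \oplus k) = H(x) \oplus H(k) = H(x)$:

    import secrets

    def H(x: int) -> int:
        """Toy xor-linear compressing hash: 32-bit input -> 16-bit output."""
        return (x & 0xFFFF) ^ (x >> 16)

    # A nonzero kernel element: equal low and high halves cancel under H.
    c = secrets.randbelow(0xFFFF) + 1
    k = c | (c << 16)
    assert H(k) == 0

    x = secrets.randbits(32)
    assert H(x) == H(x ^ k)   # collision: x and x ^ k hash identically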
Look-up the Rainbow: Efficient Table-based Parallel Implementation of Rainbow Signature on 64-bit ARMv8 Processors
The Rainbow signature scheme is one of the finalists in the National Institute of Standards and Technology (NIST) standardization process. It is also the only signature candidate designed on the hardness of the multivariate quadratic problem. Rainbow is known to have a very small signature size compared to other post-quantum candidates. In this paper, we propose an efficient implementation technique to improve the performance of Rainbow signature schemes.
We propose a parallel polynomial multiplication for 64-bit ARMv8 processors in which a look-up table is created by pre-calculating all $4\times4$-bit multiplication results. This technique was developed from the observation that the existing implementation of Rainbow's polynomial multiplication relies on the Karatsuba algorithm, which is not optimal here because its divide-and-conquer steps split operations over $\mathbb{F}_{16}$ into many small operations over the sub-fields $\mathbb{F}_{4}$ and $\mathbb{F}_{2}$. Further investigation reveals that when the polynomial multiplication in the Rainbow signature operates on $\mathbb{F}_{16}$, its operands are 4-bit values. Since a $4\times4$-bit multiplication has only 256 possible input combinations, we construct a 256-byte look-up table. For a given 4-bit constant, only 16 bytes are loaded from the table at a time, and the time-consuming multiplication is replaced by table look-ups. In addition, up to 16 results are computed per register using the vector registers available on 64-bit ARMv8 processors.
With the proposed fast polynomial multiplication technique, we implemented optimized versions of Rainbow III and V. These two parameter sets operate on $\mathbb{F}_{256}$ but use the sub-field $\mathbb{F}_{16}$ in the multiplication process; the sub-field multiplication can therefore be replaced with the proposed table look-up technique, which eliminates a significant number of operations. Our experiments on the Apple M1 processor show up to 167.2$\times$ and 51.6$\times$ performance improvements for the multiplier and the Rainbow signature, respectively, compared to the previous implementation.
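As a rough illustration of the table idea (a minimal sketch under our own assumptions, in Python rather than the authors' ARMv8 code; we assume the reduction polynomial $x^4+x+1$ for $\mathbb{F}_{16}$, and on ARMv8 the 16 parallel lookups would be done with the TBL instruction):

    def gf16_mul(a: int, b: int) -> int:
        """Schoolbook multiplication in F16 = F2[x]/(x^4 + x + 1)."""
        r = 0
        for i in range(4):
            if (b >> i) & 1:
                r ^= a << i
        for i in range(7, 3, -1):              # reduce degrees 7..4
            if (r >> i) & 1:
                r ^= 0b10011 << (i - 4)        # x^4 + x + 1 = 0b10011
        return r

    # All 256 possible 4-bit x 4-bit products fit in one 256-byte table.
    TABLE = bytes(gf16_mul(a, b) for a in range(16) for b in range(16))

    def mul(a: int, b: int) -> int:
        return TABLE[(a << 4) | b]             # one lookup replaces the multiply

    assert all(mul(a, b) == gf16_mul(a, b) for a in range(16) for b in range(16))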
SoC Security Properties and Rules
The security of a system-on-chip (SoC) can be weakened by exploiting the potential vulnerabilities of the intellectual property (IP) cores used to implement the design and of the interaction among the IPs. These vulnerabilities not only increase the security verification effort but can also increase design complexity and time-to-market. Design and verification engineers should be knowledgeable about potential vulnerabilities and threat models early in the SoC design life cycle to protect their designs from potential attacks. However, there is currently no publicly available repository that can be used as a base to develop such knowledge in practice. To address this need, we develop the ‘SoC Security Property/Rule Database’ and make it publicly available to all researchers to facilitate and extend security verification efforts. The database gathers a comprehensive list of security vulnerabilities and properties. It also provides, for each vulnerability, the corresponding design behavior that should hold in the design to ensure the vulnerability does not exist. The database currently contains 67 different vulnerability scenarios, for which 105 corresponding security properties have been developed so far. This paper reviews the database and presents the methodologies we used to gather vulnerabilities and develop such comprehensive security properties. Additionally, it discusses the challenges of security verification and how this database can be utilized to overcome them.
Iterative Oblivious Pseudo-Random Functions and Applications
We consider the problem of a client querying an encrypted binary tree structure, outsourced to an untrusted server. While the server must not learn the contents of the binary tree, we also want to prevent the client from maliciously crafting a query that traverses the tree out-of-order. That is, the client should not be able to retrieve nodes outside one contiguous path from the root to a leaf. Finally, the server should not learn which path the client accesses, but is guaranteed that the access corresponds to one valid path in the tree. This is an extension of protocols such as structured encryption, where it is only guaranteed that the tree's encrypted data remains hidden from the server.
To this end, we initiate the study of Iterative Oblivious Pseudorandom Functions (iOPRFs), new primitives providing two-sided, fully malicious security for these types of applications. We present a first efficient iOPRF construction, secure against both malicious clients and servers in the standard model, based on the DDH assumption. We demonstrate that iOPRFs are useful for implementing several interesting applications, including an RFID authentication protocol and a protocol for private evaluation of outsourced decision trees. Finally, we implement and evaluate our full iOPRF construction and show that it is efficient in practice.
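For intuition, here is a minimal sketch of a single blinded-exponentiation OPRF round under DDH (a standard textbook OPRF, not the paper's iOPRF construction, which adds iterative evaluation and malicious security on top; the group parameters below are toy values):

    import hashlib, secrets

    q = 509                  # toy subgroup order -- use >= 256-bit primes in practice
    p = 2 * q + 1            # safe prime: quadratic residues form a group of order q

    def hash_to_group(x: bytes) -> int:
        """Map an input into the order-q subgroup by squaring a hash value."""
        h = int.from_bytes(hashlib.sha256(x).digest(), "big") % p
        return pow(h, 2, p)  # assume h != 0 (holds except with negligible probability)

    # Client: blind the input so the server never sees H(x).
    x = b"client input"
    r = secrets.randbelow(q - 1) + 1
    blinded = pow(hash_to_group(x), r, p)

    # Server: apply the secret PRF key k to the blinded element.
    k = secrets.randbelow(q - 1) + 1
    evaluated = pow(blinded, k, p)

    # Client: unblind. ((H(x)^r)^k)^(r^-1 mod q) = H(x)^k, the PRF output.
    prf_out = pow(evaluated, pow(r, -1, q), p)
    assert prf_out == pow(hash_to_group(x), k, p)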
A Formal Security Analysis of the W3C Web Payment APIs: Attacks and Verification
Payment is an essential part of e-commerce. Merchants usually rely on third parties, so-called payment processors, who take care of transferring the payment from the customer to the merchant. How a payment processor interacts with the customer and the merchant varies widely: each payment processor typically invents its own protocol that has to be integrated into the merchant’s application and presents the user with a new, potentially unknown and confusing user experience.
Pushed by major companies, including Apple, Google, Mastercard, and Visa, the W3C is currently developing a new set of standards to unify the online checkout process and “streamline the user’s payment experience”. The main idea is to integrate payment as a native functionality into web browsers, referred to as the Web Payment APIs. While this new checkout process will indeed be simple and convenient from an end-user perspective, the technical realization requires rather significant changes to browsers.
Many major browsers, such as Chrome, Firefox, Edge, Safari, and Opera, already implement these new standards, and many payment processors, such as Google Pay, Apple Pay, or Stripe, support the use of Web Payment APIs for payments. The ecosystem is constantly growing, meaning that the Web Payment APIs will likely be used by millions of people worldwide.
So far, there has been no in-depth security analysis of these new standards. In this paper, we present the first such analysis of the Web Payment APIs standards: a rigorous formal analysis. It is based on the Web Infrastructure Model (WIM), the most comprehensive model of the web infrastructure to date, which we extend, among other things, to integrate the new payment functionality into the generic browser model.
Our analysis reveals two new critical vulnerabilities that allow a malicious merchant to over-charge an unsuspecting customer. We have verified our attacks using the Chrome implementation and reported these problems to the W3C as well as the Chrome developers, who have acknowledged these problems. Moreover, we propose fixes to the standard, which by now have been adopted by the W3C and Chrome, and prove that the fixed Web Payment APIs indeed satisfy strong security properties.
A Fast and Flexible Multi-Client Functional Encryption for Set Intersection
A Multi-Client Functional Encryption (MCFE) scheme for set intersection is a cryptographic primitive that enables an evaluator to learn the intersection of the sets of a pre-determined number of clients, without needing to learn the plaintext set of any individual client. In this paper, we propose a flexible version of MCFE for set intersection, called Flexible Multi-Client Functional Encryption for Set Intersection (FMCFE). In our FMCFE scheme, the evaluator can learn the intersection of any flexible choice of sets (instead of all sets). To this end, we redefine the syntax and security notions of MCFE schemes for FMCFE schemes.
In the literature, solving the multi-client set intersection problem in polynomial time, such that only the intersection result (and no additional information) is revealed, is an open problem. In this paper, we propose a relaxed solution, using FMCFE schemes, that solves secure set intersection in polynomial time.
We argue that this relaxation is necessary for the practical use of secure multi-client set intersection.
We also show that our scheme achieves adaptive indistinguishability-based security under passive corruption. Our proof relies on the Symmetric eXternal Diffie-Hellman (SXDH) assumption in the standard model.
Circuit friendly, post-quantum dynamic accumulators from RingSIS with logarithmic prover time
Mainstream hash functions such as SHA or BLAKE, while generally efficient in their implementations, are not suitable for zero-knowledge boolean or arithmetic circuits due to their reliance on CPU-oriented designs. As a candidate hash function that uses only trivial arithmetic and can be generalized to zero-knowledge circuits, the Ajtai lattice SIS hash has been proposed. In this paper, we review Micciancio's R-SIS generalization and discuss its circuit complexity; we then show how this R-SIS hasher can be used as a universal dynamic hash accumulator with constant-time update and revocation complexity, which can run on 16-bit hardware as well as in smart contracts.
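For reference, a minimal sketch of the underlying Ajtai-style SIS hash (our illustration with toy, insecure parameters; the paper's R-SIS variant replaces the random matrix with structured polynomial rows for efficiency). Finding a collision means finding a short $z$ with $Az = 0 \pmod{q}$, which is exactly the SIS problem:

    import numpy as np

    n, m, q = 8, 256, 3329                  # toy parameters, not secure
    rng = np.random.default_rng(0)
    A = rng.integers(0, q, size=(n, m))     # public random matrix

    def sis_hash(msg: bytes) -> np.ndarray:
        """Hash up to m message bits to n Z_q entries: H(x) = A x mod q."""
        bits = np.unpackbits(np.frombuffer(msg, dtype=np.uint8))[:m]
        x = np.zeros(m, dtype=np.int64)
        x[:len(bits)] = bits                # binary (hence short) input vector
        return (A @ x) % q

    digest = sis_hash(b"hello, accumulator")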
Polynomial Representation Is Tricky: Maliciously Secure Private Set Intersection Revisited
Private Set Intersection protocols (PSIs) allow parties to compute the intersection of their private sets, such that nothing about the sets’ elements beyond the intersection is revealed. PSIs have a variety of applications, primarily in efficiently supporting data sharing in a privacy-preserving manner. At Eurocrypt 2019, Ghosh and Nilges proposed three efficient PSIs based on the polynomial representation of sets and proved their security against active adversaries. In this work, we show that these three PSIs are susceptible to several serious attacks. The attacks let an adversary (1) learn the correct intersection while making its victim believe that the intersection is empty, (2) learn a certain element of its victim’s set beyond the intersection, and (3) delete multiple elements of its victim’s input set. We explain why the proofs did not identify these attacks and propose a set of mitigations.
Public-key Authenticated Encryption with Keyword Search: Cryptanalysis, Enhanced Security, and Quantum-resistant Instantiation
With the rapid development of cloud computing, an increasing number of companies are adopting cloud storage technology to reduce overhead. However, to ensure the privacy of sensitive data, the data need to be encrypted before being outsourced to the cloud. The concept of public-key encryption with keyword search (PEKS) was introduced by Boneh \textit{et al.} to provide flexible use of encrypted data. Unfortunately, most PEKS schemes are not secure against inside keyword guessing attacks (IKGA), so the keyword information of the trapdoor may be leaked to the adversary. To solve this issue, Huang and Li presented public-key authenticated encryption with keyword search (PAEKS), in which the trapdoor generated by the receiver is valid only for authenticated ciphertexts.
Following their seminal work, many PAEKS schemes have been introduced to enhance the security of PAEKS. Some of them further consider upcoming quantum attacks. However, our cryptanalysis indicates that, in fact, these schemes cannot withstand IKGA.
To resist attacks from quantum adversaries and to support privacy-preserving search functionality, we first introduce a novel generic PAEKS construction in this work. We then present the first quantum-resistant PAEKS instantiation, based on lattices. The security proofs show that our instantiation not only satisfies the basic requirements but also achieves enhanced security notions, namely multi-ciphertext indistinguishability and trapdoor privacy. Furthermore, our comparative results indicate that, with only moderate additional cost, the proposed instantiation provides more security properties, making it suitable for more diverse application environments.
Provably Solving the Hidden Subset Sum Problem via Statistical Learning
At Crypto '99, Nguyen and Stern described a lattice-based algorithm for solving the hidden subset sum problem, a variant of the classical subset sum problem where the $n$ weights are also hidden. As an application, they showed how to break the fast generator of random pairs $(x, g^x \pmod{p})$ due to Boyko et al. The Nguyen-Stern algorithm works quite well in practice for moderate values of $n$, but its complexity is exponential in $n$. A polynomial-time variant based on a multivariate technique was recently described at Crypto 2020, but that approach is only heuristic. In this paper, we describe a provably polynomial-time algorithm for solving the hidden subset sum problem, based on statistical learning. In addition, we show that the statistical approach is also quite efficient in practice: using the FastICA algorithm, we can reach $n = 250$ in reasonable time.
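To make the problem concrete, a minimal sketch of a hidden subset sum instance in the Boyko et al. setting (our illustration of the problem statement only; the paper's algorithm, which recovers the hidden data via orthogonal-lattice preprocessing and a statistical step such as FastICA, is not reproduced):

    import secrets

    p = (1 << 127) - 1            # toy modulus
    n, m = 16, 64                 # n hidden weights, m observed samples

    alpha = [secrets.randbelow(p) for _ in range(n)]              # hidden weights
    X = [[secrets.randbelow(2) for _ in range(n)] for _ in range(m)]
    h = [sum(a * x for a, x in zip(alpha, row)) % p for row in X]
    # The attacker sees only h (and p) and must recover X and alpha.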
UOV-Pepper: New Public Key Short Signature in Degree 3
In this paper, we present a new perturbation for the design of multivariate schemes that we call ``Pepper''.
From this idea, we present some efficient multivariate signature schemes with explicit parameters that resist all known attacks. In particular, they resist the two main (and often very powerful) attacks in this area: Gröbner basis attacks (computing a solution of the system derived from the public key) and MinRank attacks (recovering the secret key). Pepper can also be seen as a new perturbation that can be used to strengthen many other multivariate schemes.
The ``Pepper'' perturbation works only for public key equations of degree (at least) 3. Despite this, the size of the public key may still be reasonable, since we can use larger fields (and possibly non-dense equations). Furthermore, the signatures can be very short.