Papers updated in last 365 days (Page 29 of 3020 results)
Functional Bootstrapping for Packed Ciphertexts via Homomorphic LUT Evaluation
Fully Homomorphic Encryption (FHE) enables the computation of an arbitrary function over encrypted data without decrypting it. In particular, bootstrapping is a core building block of FHE that reduces the noise of a ciphertext, thereby restoring its computational capability.
This paper introduces a new bootstrapping framework for the Fan-Vercauteren (FV) scheme, called functional bootstrapping, which provides more generic and advanced functionality than the ordinary bootstrapping method. More specifically, functional bootstrapping allows us to evaluate an arbitrary function while removing the error of an input ciphertext. We therefore achieve better depth consumption and computational complexity, as the evaluation of a circuit can be integrated into the functional bootstrapping procedure. In particular, our approach extends the functionality of FV, since it is applicable even to functions between different plaintext spaces.
At the heart of our functional bootstrapping framework is a homomorphic Look-Up Table (LUT) evaluation method in which we represent any LUT using only the operations supported by the FV scheme. Finally, we provide a proof-of-concept implementation and present benchmarks of the functional bootstrapping. In concrete examples, such as the delta and sign functions, our functional bootstrapping takes about 46.5s or 171.4s for a 9-bit or 13-bit plaintext modulus, respectively.
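The classical fact underpinning any such representation is that, over a prime plaintext modulus $t$, every function on $\mathbb{Z}_t$ agrees with a polynomial of degree below $t$, so a LUT compiles down to the additions and multiplications FV natively supports. Below is a minimal plaintext-level sketch of this idea in plain Python; it is not an FV implementation, and the paper's packed-ciphertext encoding is more involved.

```python
def poly_mul(f, g, t):
    # schoolbook product of coefficient lists modulo t
    h = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] = (h[i + j] + fi * gj) % t
    return h

def lut_to_polynomial(lut, t):
    # Lagrange interpolation over Z_t (t prime): the unique polynomial P of
    # degree < t with P(a) = lut[a] for every a in Z_t
    coeffs = [0] * t
    for a in range(t):
        basis, denom = [1], 1
        for b in range(t):
            if b != a:
                basis = poly_mul(basis, [(-b) % t, 1], t)  # factor (x - b)
                denom = denom * (a - b) % t
        scale = lut[a] * pow(denom, -1, t) % t
        for i, c in enumerate(basis):
            coeffs[i] = (coeffs[i] + scale * c) % t
    return coeffs

def horner(coeffs, x, t):
    # evaluation uses only + and *: exactly what FV evaluates homomorphically
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % t
    return acc

t = 17                                                # toy prime plaintext modulus
sign_lut = [0] + [1] * (t // 2) + [t - 1] * (t // 2)  # a sign-like function
P = lut_to_polynomial(sign_lut, t)
assert all(horner(P, a, t) == sign_lut[a] for a in range(t))
```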
DVA: Dangerous Variations of ALTEQ
In this paper, we present three types of variations of the ALTEQ cryptosystem, a recent submission to NIST's additional call for signatures. We name these Dangerous Variations of ALTEQ (DVA), as there is always a certain danger in stepping out of usual constructions, although we attempt to maintain heuristic security.
First, we present DVA-GG (Graph Generalization), which can be seen as a more abstract point of view on the operations performed in ALTEQ and encourages more research on algebraic variants. In particular, we show this approach can lead to a patch countering Beullens' recent seed-collision attack on ALTEQ that depends only on the primitive, and we showcase some fancy uses of the primitive for experimental protocols.
Second, we present DVA-PC (Precomputations), which is ``likely'' as secure as ALTEQ in the random oracle model and drastically reduces the intermediate memory requirements within both the signing and verification processes through an easily parallelizable extra operation. In particular, this facilitates precomputation variants with online phases that depend only on the complexity of basic matrix operations. We can then choose between either a tiny offline memory per signature or one of the fastest online signing speeds in post-quantum cryptography.
Third, we present DVA-DM (Distinct Matrices), a set of cryptanalytic targets that deviate from ALTEQ's original algebraic structure. These structures can serve as plain computational accelerations or simply compress data sizes, and they provide good options to motivate the study of specialized cryptanalysis for ALTEQ: if they are safe, then ALTEQ gains safe variants; otherwise, we gain further understanding of the underlying problems. The ideas apply beyond ALTEQ itself, and hopefully extend to MEDS, LESS, and group-action-based cryptography.
Zero-knowledge IOPs Approaching Witness Length
Interactive Oracle Proofs (IOPs) allow a probabilistic verifier interacting with a prover to verify the validity of an NP statement while reading only a few bits from the prover's messages. IOPs generalize standard Probabilistically-Checkable Proofs (PCPs) to the interactive setting, and in the few years since their introduction have already exhibited major improvements in the main parameters of interest (such as proof length and prover and verifier running times), which in turn led to significant improvements in constructions of succinct arguments. Zero-Knowledge (ZK) IOPs additionally guarantee that the view of any query-bounded (possibly malicious) verifier can be efficiently simulated. ZK-IOPs are the main building block of succinct ZK arguments that use the underlying cryptographic object (e.g., a collision-resistant hash function) as a black box.
In this work, we construct the first ZK-IOPs approaching the witness length for a natural NP problem. More specifically, we design constant-query and constant-round IOPs for 3SAT in which the total communication is $(1+\gamma)m$, where $m$ is the number of variables and $\gamma>0$ is an arbitrarily small constant, and ZK holds against verifiers querying $m^\beta$ bits from the prover's messages, for a constant $\beta>0$. This gives a ZK variant of a recent result of Ron-Zewi and Rothblum (FOCS '20), who construct (non-ZK) IOPs approaching the witness length for a large class of NP languages. Previous constructions of ZK-IOPs incurred an (unspecified) large constant multiplicative overhead in the proof length, even when restricting to ZK against the honest verifier.
We obtain our ZK-IOPs by improving the two main building blocks underlying most ZK-IOP constructions, namely ZK codes and ZK-IOPs for sumcheck. More specifically, we give the first ZK-IOPs for sumcheck that achieve both sublinear communication for sumchecking a general tensor code, and a ZK guarantee. We also show a strong ZK preservation property for tensors of ZK codes, which extends a recent result of Bootle, Chiesa, and Liu (EC '22). Given the central role of these objects in designing ZK-IOPs, these results might be of independent interest.
Faster verifications and smaller signatures: Trade-offs for ALTEQ using rejections
In this paper, we introduce a new probability-function parameter into the instantiations of the Goldreich-Micali-Wigderson scheme with Fiat-Shamir and unbalanced challenges used in ALTEQ, a recent NIST PQC candidate in the call for additional signatures. Setting this probability to 100% does not change the scheme; below 100%, it modifies the public challenge generation process by injecting potential rejections into otherwise completely valid inputs.
From a theoretical point of view, this does not improve the asymptotic hardness of the scheme, negatively affects the efficiency of the signer, and might itself seem trivial. However, from a practical point of view, implementation-wise and performance-wise, this triviality allows an extra degree of freedom in optimizing parameters, as the heuristic security level against forgers is also increased: previously valid combinations can now be deemed invalid. This allows us to make trade-offs that reduce the computational load on verifiers, accelerating verification and marginally reducing the signature size, at the cost of making signing slower and unlikely to be constant-time.
In particular, this extra degree of freedom allows implementation choices that enable smoother and faster executions of the aforementioned protocols, especially in the context of parallelization using vectorized instructions. We also demonstrate the usefulness of our proposal to ALTEQ for other options, when slowing down the signing process is not an issue: significantly smaller signatures but longer verification, or smaller public key sizes.
The ideas presented apply to any primitive, and can be used beyond ALTEQ.
Admissible Parameter Sets and Complexity Estimation of Crossbred Algorithm
The Crossbred algorithm, proposed by Joux and Vitse in 2017, is one of the algorithms for solving systems of polynomial equations. It has been implemented in the Fukuoka MQ challenge, which is related to the security of multivariate cryptography, and holds several records. A framework for estimating its complexity was provided by Chen et al. in 2017. However, it is generally unknown which parameters are actually available. This paper investigates how to select available parameters for the Crossbred algorithm. As a result, we provide formulae that give an available parameter set and estimate the complexity of the Crossbred algorithm.
Probabilistically Checkable Arguments for all NP
A probabilistically checkable argument (PCA) is a computational relaxation of PCPs, where soundness is guaranteed to hold only for false proofs generated by a computationally bounded adversary. The advantage of PCAs is that they are able to overcome the limitations of PCPs. A succinct PCA has a proof length that is polynomial in the witness length (and is independent of the non-deterministic verification time), which is impossible for PCPs, under standard complexity assumptions. Bronfman and Rothblum (ITCS 2022) constructed succinct PCAs for NC that are publicly-verifiable and have constant query complexity under the sub-exponential hardness of LWE.
We construct a publicly-verifiable succinct PCA with constant query complexity for all NP in the adaptive security setting. Our PCA scheme offers several improvements compared to the Bronfman and Rothblum construction: (1) it applies to all problems in NP, (2) it achieves adaptive security, and (3) it can be realized under any of the following assumptions: the polynomial hardness of LWE; $O(1)$-LIN on bilinear maps; or sub-exponential DDH.
Moreover, our PCA scheme has a succinct prover, which means that for any NP relation that can be verified in time $T$ and space $S$, the proof can be generated in time $O_{\lambda,m}(T\cdot\mathrm{polylog}(T))$ and space $O_{\lambda,m}(S\cdot\mathrm{polylog}(T))$. Here, ${O}_{\lambda,m}$ accounts for polynomial factors in the security parameter and in the size of the witness. En route, we construct a new complexity-preserving RAM delegation scheme that is used in our PCA construction and may be of independent interest.
Generic MitM Attack Frameworks on Sponge Constructions
This paper proposes general meet-in-the-middle (MitM) attack frameworks for preimage and collision attacks on hash functions based on the (generalized) sponge construction.
As the first contribution, our MitM preimage attack framework covers a wide range of sponge-based hash functions, especially those whose claimed preimage security level is lower than their output size. Such hash functions have been widely standardized (e.g., Ascon-Hash, PHOTON, etc.), but are rarely studied against preimage attacks. Even the recent MitM attack framework on the sponge construction by Qin et al. (EUROCRYPT 2023) cannot attack these hash functions. As the second contribution, our MitM collision attack framework provides a different tool for collision cryptanalysis of the sponge construction, whereas previous collision attacks on the sponge construction are mainly based on differential attacks.
Most of the results in this paper are the first third-party cryptanalysis results. Where previous cryptanalysis exists, our new results significantly improve it, e.g., improving the previous 2-round collision attack on Ascon-Hash to the current 4 rounds, and improving the previous 3.5-round quantum preimage attack on SPHINCS$^+$-Haraka to our 4-round classical preimage attack.
Nonadaptive One-Way to Hiding Implies Adaptive Quantum Reprogramming
An important proof technique in the random oracle model involves reprogramming it on hard-to-predict inputs and arguing that an attacker cannot detect that this occurred. In the quantum setting, a particularly challenging version of this considers adaptive reprogramming, wherein the points to be reprogrammed (or the output values they should be programmed to) depend on choices made by the adversary. Frameworks for analyzing adaptive reprogramming were given, e.g., by Unruh (CRYPTO 2014), Grilo-Hövelmanns-Hülsing-Majenz (ASIACRYPT 2021), and Pan-Zeng (PKC 2024).
We show, counterintuitively, that these adaptive results follow directly from the non-adaptive one-way to hiding theorem of Ambainis-Hamburg-Unruh (CRYPTO 2019). These implications contradict beliefs (whether stated explicitly or implicitly) that some properties of the adaptive frameworks cannot be provided by the Ambainis-Hamburg-Unruh result.
Measure-Rewind-Extract: Tighter Proofs of One-Way to Hiding and CCA Security in the Quantum Random Oracle Model
The One-Way to Hiding (O2H) theorem, first given by Unruh (J ACM 2015) and then restated by Ambainis et al. (CRYPTO 2019), is a crucial technique for solving the reprogramming problem in the quantum random oracle model (QROM). It provides an upper bound $d\cdot\sqrt{\epsilon}$ for the distinguisher's advantage, where $d$ is the query depth and $\epsilon$ denotes the advantage of a one-wayness attacker. Later, in order to obtain a tighter upper bound, Kuchta et al. (EUROCRYPT 2020) proposed the Measure-Rewind-Measure (MRM) technique and then proved the Measure-Rewind-Measure O2H (MRM-O2H) theorem, which provides the upper bound $d\cdot\epsilon$. They also proposed an open question: Can we combine their MRM technique with Ambainis et al.'s semi-classical oracle technique (CRYPTO 2019) or Zhandry's compressed oracle technique (CRYPTO 2019) to prove a new O2H theorem with an upper bound even tighter than $d\cdot\epsilon$?
In this paper, we give an affirmative answer to the above question. We propose a new technique named Measure-Rewind-Extract (MRE) by combining the MRM technique with the semi-classical oracle technique. Using the MRE technique, we prove the Measure-Rewind-Extract O2H (MRE-O2H) theorem, which provides the upper bound $\sqrt{d}\cdot\epsilon$.
As an important application of our MRE-O2H theorem, for the $FO^{\cancel{\bot}}$, $FO_m^{\cancel{\bot}}$, $FO^{\bot}$ and $FO_m^\bot$ proposed by Hofheinz et al. (TCC 2017), i.e., the key encapsulation mechanism (KEM) variants of the Fujisaki-Okamoto transformation, we prove the following results in the QROM:
- Their IND-CCA security can be reduced to the IND-CPA security of the underlying public key encryption (PKE) scheme without the square-root advantage loss. In particular, compared with the IND-CCA proof of $FO^{\cancel{\bot}}$ given by Kuchta et al. (EUROCRYPT 2020), ours removes the injectivity assumption and has a tighter security bound.
- Under the assumption that the underlying PKE scheme is unique randomness recoverable, we prove for the first time that their IND-CCA security can be reduced to the OW-CPA security of the underlying PKE scheme without the square-root advantage loss.
Succinct Homomorphic Secret Sharing
This work introduces homomorphic secret sharing (HSS) with succinct share size. In HSS, private inputs are shared between parties, who can then homomorphically evaluate a function on their shares, obtaining a share of the function output. In succinct HSS, a portion of the inputs can be distributed using shares whose size is sublinear in the number of such inputs. The parties can then locally evaluate a function $f$ on the shares, with the restriction that $f$ must be linear in the succinctly shared inputs.
We construct succinct, two-party HSS for branching programs, based on either the decisional composite residuosity assumption, a DDH-like assumption in class groups, or learning with errors with a superpolynomial modulus-to-noise ratio. We then give several applications of succinct HSS, which were only previously known using fully homomorphic encryption, or stronger tools:
- Succinct vector oblivious linear evaluation (VOLE): Two parties can obtain secret shares of a long, arbitrary vector $\vec x$, multiplied by a scalar $\Delta$, with communication sublinear in the size of the vector (the target correlation is sketched after this list).
- Batch, multi-party distributed point functions: A protocol for distributing a batch of secret, random point functions among $N$ parties, for any polynomial $N$, with communication sublinear in the number of DPFs.
- Sublinear MPC for any number of parties: Two new constructions of MPC with sublinear communication complexity, with $N$ parties for any polynomial $N$: (1) For general layered Boolean circuits of size $s$, with communication $O(N s/\log\log s)$, and (2) For layered, sufficiently wide Boolean circuits, with communication $O(N s/\log s)$.
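For concreteness, the VOLE target correlation of the first item is easy to state: one party ends up with $\vec u = \Delta\cdot\vec x + \vec v$, the other with $(\Delta, \vec v)$. The sketch below generates this correlation with a trusted dealer (the modulus and names are our own illustrative choices); the point of succinct HSS is to realize it with communication sublinear in $|\vec x|$, which the sketch does not attempt.

```python
import secrets

q = 2**61 - 1  # an illustrative prime modulus

def deal_vole(x, delta):
    # trusted-dealer version of the VOLE correlation:
    # party A learns u, party B learns (delta, v), with u = delta*x + v (mod q)
    v = [secrets.randbelow(q) for _ in x]
    u = [(delta * xi + vi) % q for xi, vi in zip(x, v)]
    return u, v

x = [secrets.randbelow(q) for _ in range(8)]  # the long vector
delta = secrets.randbelow(q)
u, v = deal_vole(x, delta)
# u and v are additive shares of the scalar-vector product delta * x
assert all((u[i] - v[i]) % q == delta * x[i] % q for i in range(8))
```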
Detecting Rogue Decryption in (Threshold) Encryption via Self-Incriminating Proofs
Keeping decrypting parties accountable in public key encryption is notoriously hard, since the secret key owner can decrypt any arbitrary ciphertext. Threshold encryption aims to solve this issue by distributing the power to decrypt among a set of parties, who must interact via a decryption protocol. However, such parties can employ cryptographic tools such as Multiparty Computation (MPC) to decrypt arbitrary ciphertexts without being detected. We introduce the notion of (threshold) encryption with Self-Incriminating Proofs, where parties must produce a self-incriminating proof of decryption whenever they decrypt a ciphertext.

In the standard public key encryption case, the adversary could destroy these proofs, so we strengthen our notion to guarantee that the proofs are published when decryption succeeds. This creates a decryption audit trail, which is useful in scenarios where decryption power is held by a single trusted party (e.g., a Trusted Execution Environment) who must be kept accountable. In the threshold case, we ensure that at least one of the parties who execute the decryption protocol will learn a self-incriminating proof, even if they employ advanced tools such as MPC. The fact that a party learns the proof and may leak it at any moment functions as a deterrent for parties who do not wish to be identified as malicious decryptors (e.g., a commercial operator of a service based on threshold encryption).

We investigate the (im)possibility and applications of our notions while providing matching constructions under appropriate assumptions. In the threshold case, we build on recent results on Individual Cryptography (CRYPTO 2023).
SoK: Public Randomness
Public randomness is a fundamental component in many cryptographic protocols and distributed systems and often plays a crucial role in ensuring their security, fairness, and transparency properties. Driven by the surge of interest in blockchain and cryptocurrency platforms and the usefulness of such a building block in those areas, designing secure protocols to generate public randomness in a distributed manner has received considerable attention in recent years. This paper presents a systematization of knowledge on the topic of public randomness with a focus on cryptographic tools providing public verifiability and key themes underlying these systems. We provide concrete insights on how state-of-the-art protocols achieve this task efficiently in an adversarial setting and present various research gaps that may be of interest for future research.
The Perils of Limited Key Reuse: Adaptive and Parallel Mismatch Attacks with Post-processing Against Kyber
In this paper, we study the robustness of Kyber, the Learning With Errors (LWE)-based Key Encapsulation Mechanism (KEM) chosen for standardization by NIST, against key mismatch attacks. We demonstrate that Kyber's security levels can be compromised with a few mismatch queries by striking a balance between the parallelization level and the cost of lattice reduction for post-processing. This highlights the imperative need to strictly prohibit key reuse in CPA-secure Kyber.
We further propose an adaptive method to enhance parallel mismatch attacks, initially proposed by Shao et al. at AsiaCCS 2024, thereby significantly reducing query complexity. This method combines the adaptive attack with post-processing via lattice reduction to retrieve the final secret key entries. Our method proves its efficacy by reducing query complexity by 14.6% for Kyber512 and 7.5% for Kyber768/Kyber1024.

Furthermore, this approach has the potential to substantially improve multi-value Plaintext-Checking (PC) oracle-based side-channel attacks against the CCA-secure version of Kyber KEM.
Closing the Efficiency Gap between Synchronous and Network-Agnostic Consensus
In the consensus problem, $n$ parties want to agree on a common value, even if some of them are corrupt and arbitrarily misbehave. If the parties have a common input $m$, then they must agree on $m$.
Protocols solving consensus assume either a synchronous communication network, where messages are delivered within a known time, or an asynchronous network with arbitrary delays. Asynchronous protocols only tolerate $t_a < n/3$ corrupt parties. Synchronous ones can tolerate $t_s < n/2$ corruptions with setup, but their security completely breaks down if the synchrony assumptions are violated.
Network-agnostic consensus protocols, as introduced by Blum, Katz, and Loss [TCC'19], are secure regardless of network conditions, tolerating up to $t_s$ corruptions with synchrony and $t_a$ without, under provably optimal assumptions $t_a \leq t_s$ and $2t_s + t_a < n$. Despite efforts to improve their efficiency, all known network-agnostic protocols fall short of the asymptotic complexity of state-of-the-art purely synchronous protocols.
In this work, we introduce a novel technique to compile any synchronous and any asynchronous consensus protocols into a network-agnostic one. This process only incurs a small constant number of overhead rounds, so that the compiled protocol matches the optimal round complexity for synchronous protocols. Our compiler also preserves, under a variety of assumptions, the asymptotic communication complexity of state-of-the-art synchronous and asynchronous protocols. Hence, it closes the current efficiency gap between synchronous and network-agnostic consensus.
As a plus, our protocols support $\ell$-bit inputs, and can be extended to achieve communication complexity $O(n^2\kappa + \ell n)$ under the assumptions for which this is known to be possible for purely synchronous protocols.
Reducing Overdefined Systems of Polynomial Equations Derived from Small Scale Variants of the AES via Data Mining Methods
This paper deals with reducing the secret key computation time of small scale variants of the AES cipher using algebraic cryptanalysis, which is accelerated by data mining methods. This work is based on the known plaintext attack and aims to speed up the calculation of the secret key by processing the polynomial equations extracted from plaintext-ciphertext pairs.

Specifically, we propose to transform the overdefined system of polynomial equations over GF(2) into a new system so that the computation of the Gröbner basis using the F4 algorithm takes less time than in the case of the original system. The main idea is to group similar polynomials into clusters, and for each cluster, sum the two most similar polynomials, resulting in simpler polynomials. We compare different data mining techniques for finding similar polynomials, such as clustering or locality-sensitive hashing (LSH).

Experimental results show that using the LSH technique, we get a system of equations for which we can calculate the Gröbner basis the fastest compared to the other methods that we consider in this work. Experimental results also show that the time to calculate the Gröbner basis for the transformed system of equations is significantly reduced compared to the case when the Gröbner basis was calculated from the original non-transformed system. This paper demonstrates that reducing an overdefined system of equations reduces the computation time for finding a secret key.
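The core pairing step can be sketched generically. The sketch below is our own toy rendering: polynomials are sets of monomials over GF(2), similarity is Jaccard, and a single crude LSH probe stands in for the paper's clustering/LSH pipeline. Replacing one polynomial of a similar pair $p, q$ by the sum $p+q$ is an elementary row operation, so the solution set of the system is unchanged.

```python
import random

def jaccard(p, q):
    return len(p & q) / len(p | q)

def reduce_system(polys, seed=0):
    # polys: GF(2) polynomials, each a frozenset of monomials
    rng = random.Random(seed)
    monomials = sorted({m for p in polys for m in p})
    probes = rng.sample(monomials, min(4, len(monomials)))
    # crude LSH: bucket polynomials by which probe monomials they contain
    buckets = {}
    for p in polys:
        buckets.setdefault(tuple(m in p for m in probes), []).append(p)
    out = []
    for group in buckets.values():
        while len(group) >= 2:
            # pair the two most similar polynomials in the bucket
            i, j = max(((i, j) for i in range(len(group))
                        for j in range(i + 1, len(group))),
                       key=lambda ij: jaccard(group[ij[0]], group[ij[1]]))
            p, q = group[i], group[j]
            del group[j], group[i]          # j > i, so delete j first
            out += [p, frozenset(p ^ q)]    # sum over GF(2) = symmetric difference
        out += group
    return out

# toy run: monomials are tuples of variable indices, e.g. (0, 1) means x0*x1
f = frozenset({(0,), (0, 1), (2,)})
g = frozenset({(0,), (0, 1), (3,)})
# if f and g land in the same bucket, g is replaced by f+g = {(2,), (3,)}
print(reduce_system([f, g]))
```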
Arma: Byzantine Fault Tolerant Consensus with Horizontal Scalability
Arma is a Byzantine Fault Tolerant (BFT) consensus system designed to achieve horizontal scalability across all hardware resources: network bandwidth, CPU, and disk I/O. As opposed to preceding BFT protocols, Arma separates the dissemination and validation of client transactions from the consensus process, restricting the latter to totally ordering only metadata of batches of transactions. This separation enables each party to distribute compute and storage resources for transaction validation, dissemination, and disk I/O among multiple machines, resulting in horizontal scalability.

Additionally, Arma ensures censorship resistance by imposing a maximum time limit on the inclusion of client transactions. We built and evaluated two Arma prototypes: the first is an independent system handling over 200,000 transactions per second; the second is integrated into Hyperledger Fabric, speeding up its consensus by an order of magnitude.
A Simple Noncommutative UOV Scheme
In this paper, we propose a simple noncommutative-ring-based UOV signature scheme with key-randomness alignment: Simple NOVA, which can be viewed as a simplified version of NOVA [48]. We simplify the design of NOVA by skipping the perturbation trick used in NOVA, which shortens key generation and accelerates signing and verification. With a small corresponding modification, this alternative version of NOVA is also secure and may be more suitable for practical use. We also implement the scheme in Magma and give a detailed security analysis against known major attacks.
Resettable Statistical Zero-Knowledge for NP
Resettable statistical zero-knowledge [Garg--Ostrovsky--Visconti--Wadia, TCC 2012] is a strong privacy notion that guarantees statistical zero-knowledge even when the prover uses the same randomness in multiple proofs.
In this paper, we show an equivalence of resettable statistical zero-knowledge arguments for $NP$ and witness encryption schemes for $NP$.
- Positive result: For any $NP$ language $L$, a resettable statistical zero-knowledge argument for $L$ can be constructed from a witness encryption scheme for $L$ under the assumption of the existence of one-way functions.
- Negative result: The existence of even resettable statistical witness-indistinguishable arguments for $NP$ implies the existence of witness encryption schemes for $NP$ under the assumption of the existence of one-way functions.
The positive result is obtained by naturally extending existing techniques (and is likely to be already well-known among experts). The negative result is our main technical contribution.
To explore workarounds for the negative result, we also consider resettable security in a model where the honest party's randomness is only reused with fixed inputs. We show that resettable statistically hiding commitment schemes are impossible even in this model.
DiTRU: A Resurrection of NTRU over Dihedral Group
NTRU-like cryptosystems are among the most studied lattice-based post-quantum candidates. While most NTRU proposals have been introduced over a commutative ring of quotient polynomials, other rings can be used. Noncommutative algebra was endorsed long ago as a direction for building new variants of NTRU. The first attempt to construct a noncommutative variant was due to Hoffstein and Silverman, motivated by stronger resistance to lattice attacks. Their scheme was built over the group ring of a dihedral group. However, its design differed from standard NTRU and was soon found vulnerable to algebraic attacks. In this work, we revive the group ring NTRU over the dihedral group as an instance of the GR-NTRU framework.
Unlike many proposals of noncommutative variants in the literature, our work focuses on putting the scheme into practice. We address all the aspects needed to make our scheme implementable: we propose an efficient inversion algorithm over the new setting of the noncommutative ring, describe the decryption failure model, and analyze the lattice associated with our instantiation. Finally, we discuss the best-known attacks against our scheme and provide an implementation targeting the 128-bit, 192-bit, and 256-bit security levels as proof of its practicality.
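To make the underlying arithmetic concrete: in the group ring $\mathbb{Z}_q[D_N]$ of the dihedral group $D_N = \langle r, s \mid r^N = s^2 = 1,\ srs = r^{-1}\rangle$, multiplication is a noncommutative convolution. The following is a minimal sketch of this arithmetic only; DiTRU's key generation, inversion algorithm, and parameter choices are the subject of the paper itself.

```python
def dihedral_mul(g, h, N):
    # elements are written r^i s^j with 0 <= i < N, j in {0, 1};
    # the relation s r = r^{-1} s gives
    # (r^i1 s^j1)(r^i2 s^j2) = r^(i1 +- i2) s^(j1 xor j2)
    (i1, j1), (i2, j2) = g, h
    return ((i1 - i2) % N if j1 else (i1 + i2) % N, j1 ^ j2)

def ring_mul(a, b, N, q):
    # group-ring product in Z_q[D_N]: coefficients indexed by group elements
    c = {}
    for g, ag in a.items():
        for h, bh in b.items():
            k = dihedral_mul(g, h, N)
            c[k] = (c.get(k, 0) + ag * bh) % q
    return {k: v for k, v in c.items() if v}

N, q = 5, 7
r = {(1, 0): 1}  # the rotation r
s = {(0, 1): 1}  # the reflection s
# noncommutativity in action: s*r = r^{-1}*s, which differs from r*s
print(ring_mul(s, r, N, q))  # {(4, 1): 1}, i.e. r^4 s = r^{-1} s
print(ring_mul(r, s, N, q))  # {(1, 1): 1}, i.e. r s
```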
Differential Cryptanalysis on Quantum Computers
As quantum computing progresses, extensive research has been conducted to find quantum advantages in the field of cryptography. Combining quantum algorithms with classical cryptographic analysis methods, such as differential cryptanalysis and linear cryptanalysis, has the potential to reduce complexity.
In this paper, we present a quantum differential-finding circuit for differential cryptanalysis. In our quantum circuit, both the plaintext and the input difference are in a superposition state. While our method does not achieve a direct speedup over classical computing, it offers a different perspective by relying on quantum probability in a superposition state.

For the quantum simulation, given the limited number of qubits available, we simulate our quantum circuit by implementing it for Toy-ASCON.
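The classical statistic that differential cryptanalysis (and hence such a circuit) is built around is the difference distribution table (DDT). A small classical sketch follows, using the 4-bit PRESENT S-box purely as an example; Toy-ASCON's components are not reproduced here.

```python
def ddt(sbox):
    # DDT[dx][dy] = #{x : S(x) XOR S(x XOR dx) = dy}
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for x in range(n):
        for dx in range(n):
            table[dx][sbox[x] ^ sbox[x ^ dx]] += 1
    return table

# the 4-bit S-box of PRESENT, used here only as an illustration
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
table = ddt(SBOX)
# the best non-trivial differential (dx, dy) maximizes table[dx][dy], dx != 0
best = max(((dx, dy) for dx in range(1, 16) for dy in range(16)),
           key=lambda p: table[p[0]][p[1]])
print(best, table[best[0]][best[1]])
```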
Can We Beat Three Halves Lower Bound?: (Im)Possibility of Reducing Communication Cost for Garbled Circuits
Recent improvements to garbled circuits have mainly focused on reducing their size. The state-of-the-art construction of Rosulek and Roy (Crypto 2021) requires $1.5\kappa$ bits for garbling AND gates in the free-XOR setting. This is below the previously proven lower bound of $2\kappa$ in the linear garbling model of Zahur, Rosulek, and Evans (Eurocrypt 2015). Whether their construction is optimal in a more inclusive model than the linear garbling model remained open.

This paper begins by providing a comprehensive model for a large class of practical garbling schemes and proves a lower bound on the size of garbled AND gates in our model. We show that garbled AND gates require at least $1.5\kappa$ bits in our new model with the free-XOR setting. It is remarkable that the construction by Rosulek and Roy is already optimal, despite the fact that our model possibly captures any potential extension of their construction.
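For readers unfamiliar with the setting: in free-XOR garbling (due to Kolesnikov and Schneider), the two labels of every wire differ by one global secret offset $\Delta$, which is what makes XOR gates free and is the structural assumption behind both the $2\kappa$ and the $1.5\kappa$ bounds above. A minimal sketch of the label structure follows (of the setting only, not of the $1.5\kappa$ AND-gate garbling itself).

```python
import secrets

KAPPA = 16  # label length in bytes (128 bits)

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

delta = secrets.token_bytes(KAPPA)  # global offset, secret to the garbler

def fresh_wire():
    # the two labels of every wire differ by delta
    zero = secrets.token_bytes(KAPPA)
    return zero, xor(zero, delta)

a0, a1 = fresh_wire()
b0, b1 = fresh_wire()
# an XOR gate needs no ciphertexts: its output labels are derived locally
c0 = xor(a0, b0)          # label for output bit 0
c1 = xor(c0, delta)       # label for output bit 1
assert xor(a1, b1) == c0  # 1 XOR 1 = 0: the two deltas cancel
assert xor(a0, b1) == c1  # 0 XOR 1 = 1
```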
Quantum Public-Key Encryption with Tamper-Resilient Public Keys from One-Way Functions
We construct quantum public-key encryption from one-way functions. In our construction, public keys are quantum, but ciphertexts are classical. Quantum public-key encryption from one-way functions (or weaker primitives such as pseudorandom function-like states) has also been proposed in recent works [Morimae-Yamakawa, eprint:2022/1336; Coladangelo, eprint:2023/282; Barooti-Grilo-Malavolta-Sattath-Vu-Walter, eprint:2023/877]. However, those constructions have a major drawback: they are secure only when quantum public keys can be transmitted to the sender (who runs the encryption algorithm) without being tampered with by the adversary, which seems to require unsatisfactory physical setup assumptions such as secure quantum channels.

Our construction is free from such a drawback: it guarantees the secrecy of the encrypted messages even if we assume only unauthenticated quantum channels. Thus, the encryption is done with adversarially tampered quantum public keys. Our construction is the first quantum public-key encryption that achieves the goal of classical public-key encryption, namely, establishing secure communication over insecure channels, based only on one-way functions. Moreover, we show a generic compiler to upgrade security against chosen plaintext attacks (CPA security) into security against chosen ciphertext attacks (CCA security) using only one-way functions. As a result, we obtain CCA-secure quantum public-key encryption based only on one-way functions.
LegoSNARK: Modular Design and Composition of Succinct Zero-Knowledge Proofs
We study the problem of building SNARKs modularly by linking small specialized "proof gadget" SNARKs in a lightweight manner.
Our motivation is both theoretical and practical. On the theoretical side, modular SNARK designs would be flexible and reusable.
In practice, specialized SNARKs have the potential to be more efficient than general-purpose schemes, on which most existing works have focused. If a computation naturally presents different "components" (e.g. one arithmetic circuit and one boolean circuit), a general-purpose scheme would homogenize them to a single representation with a subsequent cost in performance. Through a modular approach one could instead exploit the nuances of a computation and choose the best gadget for each component.
Our contribution is LegoSNARK, a "toolbox" (or framework) for commit-and-prove zkSNARKs (CP-SNARKs) that includes:
1) General composition tools: build new CP-SNARKs from proof gadgets for basic relations $\mathit{simply}$.
2) A "lifting" tool: add commit-and-prove capabilities to a broad class of existing zkSNARKs $\mathit{efficiently}$. This makes them interoperable (linkable) within the same computation. For example, one QAP-based scheme can be used prove one component; another GKR-based scheme can be used to prove another.
3) A collection of succinct proof gadgets for a variety of relations.
Additionally, through our framework and gadgets, we are able to obtain new succinct proof systems. Notably:
– $\mathsf{LegoGro16}$, a commit-and-prove version of the Groth16 zkSNARK, which operates over data committed with a classical Pedersen vector commitment and achieves a 5000$\times$ speedup in proving time.
– $\mathsf{LegoUAC}$, a pairing-based SNARK for arithmetic circuits that has a universal, circuit-independent, CRS, and proving time linear in the number of circuit gates (vs. the recent scheme of Groth et al. (CRYPTO'18) with quadratic CRS and quasilinear proving time).
– CP-SNARKs for matrix multiplication that achieve optimal proving complexity.
4) A codebase written in C$\mathsf{++}$ for highly composable zkSNARKs with commit-and-prove capabilities$^*$.
_______________
$^*$ Available at https://github.com/imdea-software/legosnark .
Hardness of Non-Interactive Differential Privacy from One-Way Functions
A central challenge in differential privacy is to design computationally efficient non-interactive algorithms that can answer large numbers of statistical queries on a sensitive dataset. That is, we would like to design a differentially private algorithm that takes a dataset $D \in X^n$ consisting of some small number of elements $n$ from some large data universe $X$, and efficiently outputs a summary that allows a user to efficiently obtain an answer to any query in some large family $Q$.
Ignoring computational constraints, this problem can be solved even when $X$ and $Q$ are exponentially large and $n$ is just a small polynomial; however, all algorithms with remotely similar guarantees run in exponential time. There have been several results showing that, under the strong assumption of indistinguishability obfuscation (iO), no efficient differentially private algorithm exists when $X$ and $Q$ can be exponentially large. However, there are no strong separations between information-theoretic and computationally efficient differentially private algorithms under any standard complexity assumption.
In this work we show that, if one-way functions exist, there is no general purpose differentially private algorithm that works when $X$ and $Q$ are exponentially large, and $n$ is an arbitrary polynomial. In fact, we show that this result holds even if $X$ is just subexponentially large (assuming only polynomially-hard one-way functions). This result solves an open problem posed by Vadhan in his recent survey.
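As a point of reference for the problem statement, the textbook Laplace mechanism answers a single counting query with $\epsilon$-differential privacy; the question studied in this line of work is whether one efficiently computable summary can answer exponentially many such queries. A minimal sketch of that standard mechanism follows (it is background, not a construction from this paper).

```python
import math
import random

def laplace_sample(scale):
    # inverse-CDF sampling from the Laplace distribution with the given scale
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_counting_query(dataset, predicate, epsilon):
    # q(D) = (1/n) * #{x in D : predicate(x)} has sensitivity 1/n,
    # so adding Laplace noise of scale 1/(n*epsilon) gives epsilon-DP
    n = len(dataset)
    true_answer = sum(1 for x in dataset if predicate(x)) / n
    return true_answer + laplace_sample(1.0 / (n * epsilon))

data = [random.randrange(100) for _ in range(10_000)]
print(private_counting_query(data, lambda x: x < 50, epsilon=0.1))
```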
Algebraic Structure of the Iterates of $\chi$
We consider the map $\chi:\mathbb{F}_2^n\to\mathbb{F}_2^n$ for $n$ odd given by $y=\chi(x)$ with $y_i=x_i+x_{i+2}(1+x_{i+1})$, where the indices are computed modulo $n$. We suggest a generalization of the map $\chi$ which we call generalized $\chi$-maps. We show that these maps form an Abelian group which is isomorphic to the group of units in $\mathbb{F}_2[X]/(X^{(n+1)/2})$. Using this isomorphism we easily obtain closed-form expressions for iterates of $\chi$ and explain their properties.
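A minimal executable sketch of the map, plus a brute-force check of the order of $\chi$ for small odd $n$ (feasible only for tiny $n$, since it enumerates all of $\mathbb{F}_2^n$):

```python
from itertools import product

def chi(x):
    # y_i = x_i + x_{i+2} * (1 + x_{i+1}) over F_2, indices mod n (n odd)
    n = len(x)
    return tuple(x[i] ^ (x[(i + 2) % n] & (1 ^ x[(i + 1) % n]))
                 for i in range(n))

def order_of_chi(n):
    # smallest k with chi^k = identity on F_2^n
    pts = list(product((0, 1), repeat=n))
    cur, k = pts, 0
    while True:
        cur = [chi(x) for x in cur]
        k += 1
        if cur == pts:
            return k

# the order comes out as a power of two, consistent with the iterates living
# in the 2-group of units of F_2[X]/(X^((n+1)/2))
print(order_of_chi(5), order_of_chi(7))
```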
Symmetric Signcryption and E2EE Group Messaging in Keybase
We introduce a new cryptographic primitive called symmetric signcryption, which differs from traditional signcryption because the sender and recipient share a secret key. We prove that a natural composition of symmetric encryption and signatures achieves strong notions of security against attackers that can learn and control many keys. We then identify that the core encryption algorithm of the Keybase encrypted messaging protocol can be modeled as a symmetric signcryption scheme. We prove the security of this algorithm, though our proof requires assuming non-standard, brittle security properties of the underlying primitives.
Time-Based Cryptography From Weaker Assumptions: Randomness Beacons, Delay Functions and More
The assumption that certain computations inherently require some sequential time has established itself as a powerful tool for cryptography. It allows for security and liveness guarantees in distributed protocols that are impossible to achieve with classical hardness assumptions. Unfortunately, many constructions from the realm of time-based cryptography are based on new and poorly understood hardness assumptions, which tend not to stand the test of time (cf. Leurent et al. 2023, Peikert & Tang 2023).
In this work, we make progress on several fronts. We formally define the concept of a delay function and present a construction thereof from minimal assumptions. We show that these functions, in combination with classical cryptographic objects that satisfy certain efficiency criteria, would allow for constructing delay encryption, which is otherwise only known to exist based on a new hardness assumption about isogenies. We formally define randomness beacons as they are used in the context of blockchains, and we show that (linearly homomorphic) time-lock puzzles allow for efficiently constructing them.
Our work puts time-based cryptography on a firmer theoretical footing, provides new constructions from simpler assumptions, and opens new avenues for constructing delay encryption.
Weak Consistency mode in Key Transparency: OPTIKS
The need for third-party auditors in privacy-preserving Key Transparency (KT) systems presents a deployment challenge. In this short note, we take a simple privacy-preserving KT system that provides strong security guarantees in the presence of an honest auditor (OPTIKS) and show how to add an auditor-free mode to it. The auditor-free mode offers slightly weaker security. We formalize this security property and prove that our proposed protocol satisfies our security definition.
New Limits of Provable Security and Applications to ElGamal Encryption
We provide new results showing that ElGamal encryption cannot be proven CCA1-secure – a long-standing open problem in cryptography. Our result follows from a very broad, meta-reduction-based impossibility result on random self-reducible relations with efficiently re-randomizable witnesses. The techniques that we develop allow, for the first time, to provide impossibility results for very weak security notions where the challenger outputs fresh challenge statements at the end of the security game. This can be used to finally tackle encryption-type definitions that have remained elusive in the past. We show that our results have broad applicability by casting several known cryptographic setups as instances of random self-reducible and re-randomizable relations. These setups include general semi-homomorphic PKE and the large class of certified homomorphic one-way bijections. As a result, we also obtain new impossibility results for the IND-CCA1 security of the PKEs of Paillier and Damgård–Jurik, and many one-more inversion assumptions like the one-more DLOG or the one-more RSA assumption.
Precio: Private Aggregate Measurement via Oblivious Shuffling
We introduce Precio, a new secure aggregation method for computing layered histograms and sums over secret shared data in a client-server setting.
Precio is motivated by ad conversion measurement scenarios, where online advertisers and ad networks want to measure the performance of ad campaigns without requiring privacy-invasive techniques, such as third-party cookies.
Precio has linear (time and communication) complexity in the number of data points and guarantees differentially private outputs. We formally analyze its security and privacy and present a thorough performance evaluation. The protocol supports much larger domains than Prio and much more flexible aggregates than the DPF-based solution, with up to four orders of magnitude better performance in some settings.
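The baseline primitive underneath such systems is easy to state: each client one-hot-encodes its value and additively secret-shares it between two servers, so no single server learns anything, yet the sum of the servers' aggregates is the histogram. Below is a generic two-server sketch in the style of Prio; Precio's actual contributions (oblivious shuffling, layered histograms, and differentially private outputs) sit on top of this and are not shown.

```python
import secrets

Q = 2**32  # share arithmetic is modulo Q

def share_value(value, num_buckets):
    # one-hot encode, then split into two additive shares, one per server
    one_hot = [int(i == value) for i in range(num_buckets)]
    s0 = [secrets.randbelow(Q) for _ in range(num_buckets)]
    s1 = [(x - r) % Q for x, r in zip(one_hot, s0)]
    return s0, s1  # each share alone is uniformly random

def server_aggregate(shares):
    # each server sums, component-wise, the shares it received from all clients
    total = [0] * len(shares[0])
    for s in shares:
        total = [(t + x) % Q for t, x in zip(total, s)]
    return total

clients = [share_value(v, num_buckets=4) for v in [1, 1, 3, 0, 1]]
agg0 = server_aggregate([s0 for s0, _ in clients])
agg1 = server_aggregate([s1 for _, s1 in clients])
print([(a + b) % Q for a, b in zip(agg0, agg1)])  # [1, 3, 0, 1]
```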
Anamorphic Encryption, Revisited
An anamorphic encryption scheme allows two parties who share a so-called double key to embed covert messages in ciphertexts of an established PKE scheme. This protects against a dictator that can force the receiver to reveal the secret keys for the PKE scheme, but who is oblivious about the existence of the double key. We identify two limitations of the original model by Persiano, Phan, and Yung (EUROCRYPT 2022). First, in their definition a double key can only be generated once, together with a key-pair. This has the drawback that a receiver who wants to use the anamorphic mode after a dictator comes to power, needs to deploy a new key-pair, a potentially suspicious act. Second, a receiver cannot distinguish whether or not a ciphertext contains a covert message.
In this work we propose a new model that overcomes these limitations. First, we allow multiple double keys to be associated with a key-pair after its deployment. This also enables deniability in case the double key only depends on the public key. Second, we propose a natural robustness notion, which guarantees that anamorphically decrypting a regularly encrypted message results in a special symbol indicating that no covert message is contained, which also eliminates certain attacks.
Finally, to instantiate our new, stronger definition of anamorphic encryption, we provide generic and concrete constructions. Concretely, we show that ElGamal and Cramer-Shoup satisfy a new condition, selective randomness recoverability, which enables robust anamorphic extensions, and we also provide a robust anamorphic extension for RSA-OAEP.
Hide-and-Seek and the Non-Resignability of the BUFF Transform
The BUFF transform, due to Cremers et al. (S&P'21), is a generic transformation for digital signature schemes, with the purpose of obtaining additional security guarantees beyond unforgeability: exclusive ownership, message-bound signatures, and non-resignability. Non-resignability (which essentially challenges an adversary to re-sign an unknown message for which it only obtains the signature) turned out to be a delicate matter, as recently Don et al. (CRYPTO'24) showed that the initial definition is essentially unachievable; in particular, it is not achieved by the BUFF transform. This led to the introduction of new, weakened versions of non-resignability, which are (potentially) achievable. In particular, it was shown that a salted variant of the BUFF transform does achieve some weakened version of non-resignability. However, the salting requires additional randomness and leads to slightly larger signatures. Whether the original BUFF transform also achieves some meaningful notion of non-resignability remained a natural open question.
In this work, we answer this question in the affirmative. We show that the BUFF transform satisfies the (almost) strongest notions of non-resignability one can hope for, facing the known impossibility results. Our results cover both the statistical and the computational case, and both the classical and the quantum setting. At the core of our analysis lies a new security game for random oracles that we call Hide-and-Seek. While seemingly innocent at first glance, it turns out to be surprisingly challenging to rigorously analyze.
Suboptimality in DeFi
The decentralized finance (DeFi) ecosystem has proven to be popular in facilitating financial operations, such as token exchange and lending. The public availability of DeFi platforms’ code, together with real-time data on all user interactions with them, has given rise to complex tools that find and seize profit opportunities on behalf of users.
In this work, we show that both users and the aforementioned tools sometimes act suboptimally: their profits can be increased by more than 100%, with the highest amount of missed revenue by a suboptimal action reaching 428.14 ETH ($517K). To reach these findings, we examine core DeFi primitives used by over 850 platforms that are responsible for a daily volume of more than 100 million USD in Ethereum alone: (1) lending and borrowing funds, (2) using flashswaps to close arbitrage opportunities between decentralized exchanges (DEXs), (3) liquidation of insolvent loans using flashswaps, which combines the previous two. The profit that can be made from each primitive is then cast as an optimization problem that can be solved. We show that missed opportunities to make a profit are noticed by others and are sometimes followed by back-running transactions that extract profits using similar actions. By analyzing these events, we find that some transactions are circumstantially tied to specific miners and hypothesize that they use their knowledge of private orderflow for a profit. Essentially, this is an instance of miner-extractable value (MEV) "in action".
On the (In)Security of the BUFF Transform
The BUFF transform is a generic transformation for digital signature schemes, with the purpose of obtaining additional security properties beyond standard unforgeability, e.g., exclusive ownership and non-resignability. In the call for additional post-quantum signatures, these were explicitly mentioned by the NIST as ``additional desirable security properties'', and some of the submissions indeed refer to the BUFF transform with the purpose of achieving them, while some other submissions follow the design of the BUFF transform without mentioning it explicitly.
In this work, we show the following negative results regarding the non-resignability property in general, and the BUFF transform in particular. In the plain model, we observe by means of a simple attack that any signature scheme for which the message has a high entropy given the signature does not satisfy the non-resignability property (while non-resignability is trivially not satisfied if the message can be efficiently computed from its signature). Given that the BUFF transform has high entropy in the message given the signature, it follows that the BUFF transform does not achieve non-resignability whenever the random oracle is instantiated with a hash function, no matter what hash function.
When considering the random oracle model (ROM), the matter becomes slightly more delicate since prior works did not rigorously define the non-resignability property in the ROM. For the natural extension of the definition to the ROM, we observe that our impossibility result still holds, despite there having been positive claims about the non-resignability of the BUFF transform in the ROM. Indeed, prior claims of the non-resignability of the BUFF transform rely on faulty argumentation.
On the positive side, we prove that a salted version of the BUFF transform satisfies a slightly weaker variant of non-resignability in the ROM, covering both classical and quantum attacks, if the entropy requirement in the (weakened) definition of non-resignability is statistical; for the computational variant, we show yet another negative result.
Algebraically Structured LWE, Revisited
In recent years, there has been a proliferation of *algebraically structured* Learning With Errors (LWE) variants, including Ring-LWE, Module-LWE, Polynomial-LWE, Order-LWE, and Middle-Product LWE, and a web of reductions to support their hardness, both among these problems themselves and from related worst-case problems on structured lattices. However, these reductions are often difficult to interpret and use, due to the complexity of their parameters and analysis, and most especially their (frequently large) blowup and distortion of the error distributions.
In this paper we unify and simplify this line of work. First, we give a general framework that encompasses *all* proposed LWE variants (over commutative base rings), and in particular unifies all prior ``algebraic'' LWE variants defined over number fields. We then use this framework to give much simpler, more general, and tighter reductions from Ring-LWE to other algebraic LWE variants, including Module-LWE, Order-LWE, and Middle-Product LWE. In particular, all of our reductions have easy-to-analyze and frequently small error expansion; in most cases they even leave the error unchanged. A main message of our work is that it is straightforward to use the hardness of the original Ring-LWE problem as a foundation for the hardness of all other algebraic LWE problems defined over number fields, via simple and rather tight reductions.
Menhir: An Oblivious Database with Protection against Access and Volume Pattern Leakage
Analyzing user data while protecting the privacy of individuals remains a big challenge. Trusted execution environments (TEEs) are a possible solution as they protect processes and Virtual Machines (VMs) against malicious hosts. However, TEEs can leak access patterns to code and to the data being processed. Furthermore, when data is stored in a TEE database, the data volume required to answer a query is another unwanted side channel that contains sensitive information. Both types of information leaks, access patterns and volume patterns, allow for database reconstruction attacks.
In this paper, we present Menhir, an oblivious TEE database that hides access patterns with ORAM guarantees and volume patterns through differential privacy. The database allows range and point queries with SQL-like WHERE-clauses. It builds on the state-of-the-art oblivious AVL tree construction Oblix (S&P'18), which by itself does not protect against volume leakage. We show how volume leakage can be exploited in range queries and improve the construction to mitigate this type of attack. We prove the correctness and obliviousness of Menhir. Our evaluation shows that our approach is feasible and scales well with the number of rows and columns in the database.
Physical Ring Signature
Ring signatures allow members of a group (called a "ring") to sign a message anonymously within the group, which is chosen ad hoc at the time of signing (the members do not need to have interacted before). In this paper, we propose a physical version of ring signatures. Our signature is based on one-out-of-many signatures, a method used in many real cryptographic ring signatures. It consists of boxes containing coins locked with padlocks that can only be opened by a particular group member. To sign a message, a group member shakes the boxes of the other members of the group so that their coins are in a random state ("heads" or "tails", corresponding to bits $0$ and $1$), and opens their own box to arrange the coins so that the exclusive-or of all the coins corresponds to the bits of the message they wish to sign. We present a prototype that can be used with coins, or with dice for messages encoded in larger (non-binary) alphabets. We suggest that this system can be used to explain ring signatures to the general public in a fun way. Finally, we propose a semi-formal analysis of the security of our signature, based on real cryptographic security proofs.
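A small simulation of the coin mechanics follows (a digital stand-in for the physical boxes, with parameter names of our own choosing): the signer shakes everyone else's coins into random states, then sets their own coins so that each column XORs to the corresponding message bit. Anonymity comes from the fact that the final transcript is distributed identically regardless of which ring member signed.

```python
import secrets

def sign(message_bits, ring_size, signer):
    # one box of coins per ring member; coin j of box i is boxes[i][j]
    boxes = [[secrets.randbelow(2) for _ in message_bits]
             for _ in range(ring_size)]
    for j, m in enumerate(message_bits):
        others = 0
        for i in range(ring_size):
            if i != signer:
                others ^= boxes[i][j]
        boxes[signer][j] = others ^ m  # force the column XOR to equal m
    return boxes

def verify(boxes, message_bits):
    for j, m in enumerate(message_bits):
        col = 0
        for box in boxes:
            col ^= box[j]
        if col != m:
            return False
    return True

msg = [1, 0, 1, 1, 0, 0, 1, 0]
boxes = sign(msg, ring_size=4, signer=2)
assert verify(boxes, msg)
```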
A Fault-Resistant NTT by Polynomial Evaluation and Interpolation
In computer arithmetic, the Number Theoretic Transform (NTT) plays a significant role in the efficient implementation of cyclic and nega-cyclic convolutions, with applications to multiplying large integers and large-degree polynomials. Multiplying polynomials is a common operation in lattice-based cryptography; hence, the NTT is a core component of several lattice-based cryptographic algorithms. Two well-known examples are the key encapsulation mechanism Kyber and the digital signature algorithm Dilithium.

In this work, we introduce a novel and efficient method for safeguarding the NTT against fault attacks. This new countermeasure is based on polynomial evaluation and interpolation. We prove its error detection capability, calculate the required additional computational effort, and show how to concretely use it to secure the NTT in Kyber and Dilithium against fault injection attacks. Finally, we provide concrete implementation results of the proposed technique on a resource-constrained ARM Cortex-M4 microcontroller; for example, the technique exhibits a 72% relative overhead when applied to Dilithium.
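To illustrate the general flavor of evaluation-based redundancy (a much weaker toy than the paper's interpolation-based countermeasure, which we do not reproduce): for the cyclic NTT, output coordinate $j$ is the evaluation of the input polynomial at $\omega^j$, so a cheap Horner evaluation at a randomly chosen point can spot-check the transform.

```python
import secrets

def horner(coeffs, x, q):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % q
    return acc

def ntt(a, omega, q):
    # naive cyclic NTT, O(n^2), for illustration: a_hat[j] = a(omega^j) mod q
    return [horner(a, pow(omega, j, q), q) for j in range(len(a))]

def spot_check(a, a_hat, omega, q):
    # re-evaluate the input at one random NTT point and compare; a fault in
    # that coordinate is caught, and repeated probes amplify the detection rate
    j = secrets.randbelow(len(a))
    return horner(a, pow(omega, j, q), q) == a_hat[j]

q, n = 257, 8      # toy parameters; 4 is a primitive 8th root of unity mod 257
omega = 4
a = [secrets.randbelow(q) for _ in range(n)]
a_hat = ntt(a, omega, q)
assert spot_check(a, a_hat, omega, q)
a_hat[3] ^= 1      # inject a single-coordinate fault
print(sum(not spot_check(a, a_hat, omega, q) for _ in range(1000)))  # ~1000/8
```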
Cryptanalysis of rank-2 module-LIP in Totally Real Number Fields
At Asiacrypt 2022, Ducas, Postlethwaite, Pulles, and van Woerden introduced the Lattice Isomorphism Problem for module lattices in a number field $K$ (module-LIP). In this article, we describe an algorithm solving module-LIP for modules of rank $2$ in $K^2$, when $K$ is a totally real number field.
Our algorithm exploits the connection between this problem, relative norm equations and the decomposition of algebraic integers as sums of two squares. For a large class of modules (including $\mathcal{O}_K^2$), and a large class of totally real number fields (including the maximal real subfield of cyclotomic fields) it runs in classical polynomial time in the degree of the field and the residue at 1 of the Dedekind zeta function of the field (under reasonable number theoretic assumptions).
We provide a proof-of-concept code running over the maximal real subfield of some cyclotomic fields. As a side contribution, we also provide some algorithmic and theoretical tools for the future study of the module-LIP problem.
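The rational-integer prototype of the sums-of-two-squares step is Cornacchia's algorithm, shown here as a minimal sketch; the paper works with algebraic integers in totally real number fields, which this toy does not capture.

```python
from math import isqrt

def two_squares(p):
    # Cornacchia: write a prime p = 1 (mod 4) as a^2 + b^2
    assert p % 4 == 1
    # x = z^((p-1)/4) for a quadratic non-residue z satisfies x^2 = -1 (mod p)
    x = next(pow(z, (p - 1) // 4, p) for z in range(2, p)
             if pow(z, (p - 1) // 2, p) == p - 1)
    a, b = p, x
    while b * b > p:   # Euclidean descent until the remainder drops below sqrt(p)
        a, b = b, a % b
    return b, isqrt(p - b * b)

print(two_squares(13))    # (3, 2): 13 = 9 + 4
print(two_squares(1009))  # 1009 = 15**2 + 28**2
```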
A new attack against search-LWE using Diophantine approximations
In this paper, we present a new attack against search-LWE instances with a small secret key. The method consists of lifting the public key to $\mathbb Z$ and finding a good Diophantine approximation of the public key divided by the modulus $a$. This is done using lattice reduction algorithms. The lattice considered, and the approximation quality needed is similar to known decision-LWE attacks for small keys. However, we do not require an in-depth analysis of the reduction algorithm (any reduction algorithm giving small enough vectors is enough for us), and our method solves the search problem directly, which is harder than the decision problem.
Modelling Ciphers with Overdefined Systems of Quadratic Equations: Application to Friday, Vision, RAIN and Biscuit
It is well known that a system of equations becomes easier to solve when it is overdefined. In this work, we study how to overdefine the systems of equations describing the arithmetic-oriented (AO) ciphers Friday, Vision, and RAIN, as well as a special system of quadratic equations over $\mathbb F_{2^{\ell}}$ used in the post-quantum signature scheme Biscuit. Our method is inspired by Courtois-Pieprzyk's and Murphy-Robshaw's methods to model AES with overdefined systems of quadratic equations over $\mathbb F_2$ and $\mathbb F_{2^8}$, respectively. However, our method is more refined and much simpler than Murphy-Robshaw's, since it can take full advantage of the low-degree $\mathbb F_2$-linearized affine polynomials used in Friday and Vision, and the overdefined system of equations over $\mathbb F_{2^{\ell}}$ can be described cleanly with our method. For RAIN, we instead consider quadratic Boolean equations rather than equations over the large finite field $\mathbb F_{2^{\ell}}$. Specifically, we demonstrate that the special structure of RAIN allows us to set up many more linearly independent quadratic Boolean equations than can be obtained with Courtois-Pieprzyk's method alone. Moreover, we further demonstrate that the underlying key-recovery problem in Biscuit (NIST PQC Round 1 Additional Signatures) can also be described by a much overdefined system of quadratic equations over $\mathbb F_{2^{\ell}}$.

On the downside, the constructed systems of quadratic equations for these ciphers cannot be viewed as semi-regular, which makes it challenging to upper bound the complexity of the Gröbner basis attack. However, this new modelling method can significantly improve the lower bound on the complexity of Gröbner basis attacks on these ciphers, i.e., we view the complexity of solving a random system of quadratic equations of the same scale as the lower bound. How to better estimate the upper and lower bounds of Gröbner basis attacks on these ciphers based on our modelling method is left as an open problem.
SmartBean: Transparent, Concretely Efficient, Polynomial Commitment Scheme with Logarithmic Verification and Communication Costs that Runs on Any Group
We introduce a new, concretely efficient, transparent polynomial commitment scheme with logarithmic verification time and communication cost that can run on any group. Existing group-based polynomial commitment schemes must use less efficient groups, such as class groups of unknown order or pairing-based groups to achieve transparency (no trusted setup), making them expensive to adopt in practice.
We offer the first group-based polynomial commitment scheme that can run on any group, such that it neither relies on expensive pairing operations nor requires class groups of unknown order to achieve transparency, while still providing logarithmic verifier time and communication cost.
The prover work of our scheme is dominated by a multi-exponentiation of size $4n$ in $\mathbb{G}$, the verifier work is dominated by $4 \log n$ exponentiations in $\mathbb{G}$, and the communication cost is $4 \log n$ elements of $\mathbb{G}$. Since our protocol can run on fast groups such as Curve25519, we can easily accelerate the multi-exponentiations with Pippenger's algorithm. The concrete performance of our work shows a significant improvement over the current state of the art in almost every aspect.
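As a quick illustration (not taken from the paper), the following minimal Python sketch shows the bucket method behind Pippenger-style multi-scalar multiplication, written additively over a stand-in group (integers mod Q under addition) so the group law is plain modular addition; all names and parameters are illustrative, and a real implementation would operate on curve points.

    # Sketch of Pippenger-style multi-scalar multiplication (bucket method)
    # over a toy additive group. Illustrative only.
    Q = 2**255 - 19  # stand-in group order

    def msm_pippenger(scalars, points, window=4):
        """Compute sum(s_i * P_i) using fixed-size windows and buckets."""
        nbits = max(s.bit_length() for s in scalars)
        nwindows = (nbits + window - 1) // window
        result = 0
        for w in reversed(range(nwindows)):
            for _ in range(window):          # shift result left by one window
                result = (result + result) % Q
            buckets = [0] * (1 << window)
            for s, p in zip(scalars, points):
                idx = (s >> (w * window)) & ((1 << window) - 1)
                buckets[idx] = (buckets[idx] + p) % Q
            running, acc = 0, 0              # sum_j j*bucket[j], additions only
            for b in reversed(buckets[1:]):
                running = (running + b) % Q
                acc = (acc + running) % Q
            result = (result + acc) % Q
        return result

    assert msm_pippenger([3, 5], [7, 11]) == (3 * 7 + 5 * 11) % Q

The running-sum pass at the end computes the weighted bucket total using only group additions, which is what makes the bucket method cheaper than exponentiating each term separately.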
Universal Blockchain Assets
We present a novel protocol for issuing and transferring tokens across blockchains without the need for a trusted third party or cross-chain bridge. In our scheme, the blockchain is used for double-spend protection only, while the authorisation of token transfers is performed off-chain. Due to the universality of our approach, it works in almost all blockchain settings. It can be implemented immediately on UTXO blockchains such as Bitcoin without modification, and on account-based blockchains such as Ethereum by introducing a smart contract that mimics the properties of a UTXO. We provide a proof-of-concept implementation of an NFT that is issued on Bitcoin, transferred to Ethereum, and then transferred back to Bitcoin. Our new approach means that users no longer need to be locked into one blockchain when issuing and transferring tokens.
Quantum Advantage from One-Way Functions
Is quantum computing truly faster than classical computing? Demonstrating unconditional quantum computational advantage lies beyond the reach of current complexity theory, and therefore we have to rely on some complexity assumptions. While various results on quantum advantage have been obtained, all necessitate relatively strong or less standard assumptions in complexity theory or classical cryptography. In this paper, we show quantum advantage based on several fundamental assumptions, specifically relying solely on the existence of classically-secure one-way functions. Given the fact that one-way functions are necessary for almost all classical cryptographic primitives, our findings yield a surprising implication: if there is no quantum advantage, then there is no classical cryptography! More precisely, we introduce inefficient-verifier proofs of quantumness (IV-PoQ), and construct them from statistically-hiding and computationally-binding classical bit commitments. An IV-PoQ is an interactive protocol between a verifier and a quantum polynomial-time prover consisting of two phases. In the first phase, the verifier is classical probabilistic polynomial-time, and it interacts with the quantum polynomial-time prover over a classical channel. In the second phase, the verifier becomes inefficient, and makes its decision based on the transcript of the first phase. If the quantum prover is honest, the inefficient verifier accepts with high probability, but any classical probabilistic polynomial-time malicious prover only has a small probability of being accepted by the inefficient verifier. In our construction, the inefficient verifier can be a classical deterministic polynomial-time algorithm that queries an NP oracle. Our construction demonstrates the following results based on the known constructions of statistically-hiding and computationally-binding commitments from one-way functions or distributional collision-resistant hash functions:
• If one-way functions exist, then IV-PoQ exist.
• If distributional collision-resistant hash functions exist (which exist if hard-on-average problems in SZK exist), then constant-round IV-PoQ exist.
We also demonstrate quantum advantage based on worst-case-hard assumptions. We define auxiliary-input IV-PoQ (AI-IV-PoQ) that only require that for any malicious prover, there exist infinitely many auxiliary inputs under which the prover cannot cheat. We construct AI-IV-PoQ from an auxiliary-input version of commitments in a similar way, showing that
• If auxiliary-input one-way functions exist (which exist if CZK ⊈ BPP), then AI-IV-PoQ exist.
• If auxiliary-input collision-resistant hash functions exist (which is equivalent to PWPP ⊈ FBPP) or SZK ⊈ BPP, then constant-round AI-IV-PoQ exist.
Finally, we also show that some variants of PoQ can be constructed from quantum-evaluation one-way functions (QE-OWFs), which are similar to classically-secure classical one-way functions except that the evaluation algorithm is not classical but quantum. QE-OWFs appear to be weaker than classically-secure classical one-way functions, and therefore it demonstrates quantum advantage based on assumptions even weaker than one-way functions.
Generating Supersingular Elliptic Curves over $\mathbb{F}_p$ with Unknown Endomorphism Ring
A number of supersingular isogeny based cryptographic protocols require the endomorphism ring of the initial elliptic curve to be either unknown or random in order to be secure. To instantiate these protocols, Basso et al. recently proposed a secure multiparty protocol that generates supersingular elliptic curves defined over
$\mathbb{F}_{p^2}$ of unknown endomorphism ring as long as at least one party acts honestly. However, there are many protocols that specifically require curves defined over $\mathbb{F}_p$, for which the Basso et al. protocol cannot be used. Also, the simple solution of using a signature scheme such as CSI-FiSh or SeaSign for proof of knowledge either requires extensive precomputation of large ideal class groups or is too slow for everyday applications.
In this paper, we present CSIDH-SCG, a new multiparty protocol that generates curves of unknown endomorphism ring defined over $\mathbb{F}_p$. This protocol relies on CSIDH-IP, a new CSIDH-based proof of knowledge. We also present CSIDH-CR, a multiparty algorithm that can be used in conjunction with CSIDH-SCG to generate a random curve over $\mathbb{F}_p$ while still keeping the endomorphism ring unknown.
Towards Unclonable Cryptography in the Plain Model
By leveraging the no-cloning principle of quantum mechanics, unclonable cryptography enables us to achieve novel cryptographic protocols that are otherwise impossible classically.
The two most notable examples of unclonable cryptography are quantum copy-protection and unclonable encryption.
Most known constructions rely on the quantum random oracle model (as opposed to the plain model), in which all parties have access in superposition to a powerful random oracle.
Despite receiving a lot of attention in recent years, two important open questions still remain: copy-protection for point functions in the plain model, which is usually considered a feasibility demonstration, and unclonable encryption with unclonable indistinguishability security in the plain model.
A core ingredient of these protocols is the so-called monogamy-of-entanglement property. The associated games quantify the correlations between the outcomes of multiple non-communicating parties sharing entanglement in a particular context. Specifically, we define games between a challenger and three players in which the first player is asked to split and share a quantum state between the other two, who are then simultaneously asked a question and must output the correct answer.
In this work, by relying on previous works of Coladangelo, Liu, Liu, and Zhandry (Crypto'21) and Culf and Vidick (Quantum'22), we establish a new monogamy-of-entanglement property for subspace coset states, which allows us to progress towards the aforementioned goals.
However, it is not sufficient on its own, and we present two conjectures that would allow us to show, first, that copy-protection of point functions exists in the plain model with different challenge distributions (including arguably the most natural ones), and then that unclonable encryption with unclonable indistinguishability security exists in the plain model.
We believe our new monogamy-of-entanglement property to be of independent interest, and it could be useful in other applications as well.
To highlight this last point, we leverage our new monogamy-of-entanglement property to show the existence of a tokenized signature scheme with a new security definition, called unclonable unforgeability.
How to Recover a Cryptographic Secret From the Cloud
Clouds have replaced most local backup systems as they offer strong availability and reliability guarantees. Clouds, however, are not (and should not be) used as backup for cryptographic secrets. Cryptographic secrets might control financial assets (e.g., crypto wallets), hence, storing such secrets on the cloud corresponds to sharing ownership of the financial assets with the cloud, and makes the cloud a more attractive target for insider attacks.
Can we have the best of the two worlds, where a user, Alice, can conveniently store a copy of her cryptographic secrets on the cloud and she is the only one who can recover them (without trusting any entity)? Can she do so even when she loses her devices and forgets all credentials, while at the same time retaining full ownership of her secrets?
In this paper, we provide a cloud-based secret-recovery mechanism where confidentiality is always guaranteed when Alice has not lost her credentials, even in the presence of a malicious cloud. If Alice loses all her credentials, she can still recover her secrets (in most circumstances). This is in contrast with all previous work that relies on the assumption that Alice remembers some authentication secret.
We prove our system secure in the Universally Composable framework. Further, we implement our protocols and evaluate their performance.
Boosting the Performance of High-Assurance Cryptography: Parallel Execution and Optimizing Memory Access in Formally-Verified Line-Point Zero-Knowledge
Despite the notable advances in the development of high-assurance, verified implementations of cryptographic protocols, such implementations typically face significant performance overheads, particularly due to the penalties induced by formal verification and automated extraction of executable code. In this paper, we address some core performance challenges facing computer-aided cryptography by presenting a formal treatment for accelerating such verified implementations based on multiple generic optimizations covering parallelism and memory access. We illustrate our techniques for addressing such performance bottlenecks using the Line-Point Zero-Knowledge (LPZK) protocol as a case study. Our starting point is a new verified implementation of LPZK that we formalize and synthesize using EasyCrypt; our first implementation is developed to reduce the proof effort and without considering the performance of the extracted executable code. We then show how such (automatically) extracted code can be optimized in three different ways to obtain a 3000x speedup, thus matching the performance of the manual implementation of LPZK. We obtain such performance gains by first modifying the algorithmic specifications, then by adopting a provably secure parallel execution model, and finally by optimizing the memory access structures. All optimizations are first formally verified inside EasyCrypt, and then executable code is automatically synthesized from each step of the formalization. For each optimization, we analyze performance gains resulting from it and also address challenges facing the computer-aided security proofs thereof, and challenges facing automated synthesis of executable code with such an optimization.
Adaptively Secure Functional Encryption for Finite Languages from DLIN Assumption
In this paper, we present Functional Encryption (FE) schemes for finite languages from a standard static assumption, viz., the Decisional Linear (DLIN) assumption. These finite languages are described by deterministic finite automata. Our first scheme is ciphertext-policy functional encryption (CP-FE), where a key $\mathsf{sk}_w$ is labeled with a string $w$ over a fixed alphabet $\Sigma$ and a ciphertext $\mathsf{ct}_M$ is associated with a deterministic finite automaton (DFA) $M$ over the same alphabet $\Sigma$. The key $\mathsf{sk}_w$ can extract the message from the ciphertext $\mathsf{ct}_M$ if the DFA $M$ accepts the string $w$. This CP-FE scheme is constructed based on the attribute-based encryption (ABE) structure of Okamoto-Takashima (Asiacrypt 2012). To achieve adaptive security, we put bounds on the number of occurrences of any symbol in a string and in the set of transition tuples of a DFA. Due to this restriction, the key space (where the keys are indexed by strings) becomes finite. Hence, the functional scope of any DFA in our system captures only a finite language. Similarly, we obtain our second adaptively secure FE scheme, in the key-policy flavor, from the DLIN assumption. Both schemes are proven secure in the standard model.
Doubly-Efficient Batch Verification in Statistical Zero-Knowledge
A sequence of recent works, concluding with Mu et al. (Eurocrypt 2024), has shown that every problem $\Pi$ admitting a non-interactive statistical zero-knowledge proof (NISZK) has an efficient zero-knowledge batch verification protocol. Namely, an NISZK protocol for proving that $x_1,\dots,x_k \in \Pi$ with communication that only scales poly-logarithmically with $k$. A caveat of this line of work is that the prover runs in exponential time, whereas for NP problems it is natural to hope to obtain a doubly-efficient proof - that is, a prover that runs in polynomial time given the $k$ NP witnesses.
In this work we show that every problem in $NISZK \cap UP$ has a doubly-efficient interactive statistical zero-knowledge proof with communication $poly(n,\log(k))$ and $poly(\log(k),\log(n))$ rounds. The prover runs in time $poly(n,k)$ given access to the $k$ UP witnesses. Here $n$ denotes the length of each individual input, and UP is the subclass of NP relations in which YES instances have unique witnesses.
This result yields doubly-efficient statistical zero-knowledge batch verification protocols for a variety of concrete and central cryptographic problems from the literature.
Information-theoretic Multi-server Private Information Retrieval with Client Preprocessing
A private information retrieval (PIR) protocol allows a client to fetch any entry from single or multiple servers who hold a public database (of size $n$) while ensuring no server learns any information about the client's query. Initial works on PIR were focused on reducing the communication complexity of PIR schemes. However, standard PIR protocols are often impractical to use in applications involving large databases, due to its inherent large server-side computation complexity, that's at least linear in the database size. Hence, a line of research has focused on considering alternative PIR models that can achieve improved server complexity.
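For readers unfamiliar with the model, the classic two-server information-theoretic PIR (the textbook folklore construction, not this paper's protocol) fits in a few lines of Python and makes the linear server cost concrete:

    import secrets

    def two_server_pir(db, i):
        """Textbook 2-server information-theoretic PIR over an n-bit database:
        each server sees a uniformly random subset of indices, so neither
        learns i, but each must touch the whole database (linear work)."""
        n = len(db)
        s1 = [secrets.randbelow(2) for _ in range(n)]  # random subset for server 1
        s2 = list(s1)
        s2[i] ^= 1                                     # same subset with index i flipped
        a1 = 0
        for bit, sel in zip(db, s1):                   # server 1's answer
            a1 ^= bit & sel
        a2 = 0
        for bit, sel in zip(db, s2):                   # server 2's answer
            a2 ^= bit & sel
        return a1 ^ a2                                 # the answers differ exactly in db[i]

    db = [1, 0, 1, 1, 0, 0, 1, 0]
    assert two_server_pir(db, 3) == db[3]

Preprocessing-based protocols, like the one in this paper, aim to beat the linear online work of this baseline.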
The model of private information retrieval with client preprocessing has received a lot of interest beginning with the work of Corrigan-Gibbs and Kogan (Eurocrypt 2020). In this model, the client interacts with two servers in an offline phase and stores a local state, which it uses in the online phase to perform PIR queries. Constructions in this model achieve online client/server computation and bandwidth that are sublinear in the database size, at the cost of a one-time expensive offline phase. To date, all known constructions in this model are based on symmetric-key primitives or on stronger public-key assumptions like Decisional Diffie-Hellman (DDH) and Learning with Errors (LWE). This work initiates the study of unconditional PIR with client preprocessing - where we avoid using any cryptographic assumptions. We present a new PIR protocol for $2t$ servers (where $t \in [2,\log_2 n/2]$) with threshold 1, where client and server online computation is $\widetilde{O}(\sqrt{n})$ (the $\widetilde{O}(\cdot)$ notation hides $\textsf{poly}\log$ factors) - matching the computation costs of other works based on cryptographic assumptions. The client storage and online communication complexity are $\widetilde{O}(n^{0.5+1/2t})$ and $\widetilde{O}(n^{1/2})$ respectively. Compared to previous works, our PIR with client preprocessing protocol also has a very concretely efficient client/server online computation phase, which is dominated by xor operations, as opposed to cryptographic operations that are orders of magnitude slower. As a building block for our construction, we introduce a new information-theoretic primitive called a privately multi-puncturable random set, which might be of independent interest. This new primitive can be viewed as a generalization of a privately puncturable pseudo-random set, which is the key cryptographic building block used in previous works on PIR with client preprocessing.
Byzantine Reliable Broadcast with One Trusted Monotonic Counter
Byzantine Reliable Broadcast is one of the most popular communication primitives in distributed systems. Byzantine reliable broadcast ensures that processes agree to deliver a message from an initiator even if some processes (perhaps including the initiator) are Byzantine. In asynchronous settings it is known since the prominent work of Bracha [Bracha87] that Byzantine reliable broadcast can be implemented deterministically if $n \geq 3t+1$ where $t$ is an upper bound on the number of Byzantine processes. Here, we study Byzantine Reliable Broadcast when processes are equipped with trusted execution environments (TEEs), special software or hardware designed to prevent equivocation. Our contribution is twofold. First, we show that, despite common belief, when each process is equipped with a TEE, Bracha's algorithm still needs $n \geq 3t+1$. Second, we present a novel algorithm that uses a single TEE (at the initiator) that implements Byzantine Reliable Asynchronous Broadcast with $n \geq 2t+1$.
Mempool Privacy via Batched Threshold Encryption: Attacks and Defenses
With the rising popularity of DeFi applications, it is important to implement protections for regular users of these DeFi platforms against large parties with massive amounts of resources that allow them to engage in market-manipulation strategies such as frontrunning/backrunning. Moreover, there are many situations (such as recovery of funds from vulnerable smart contracts) where a user may not want to reveal their transaction until it has been executed. As such, it is clear that preserving the privacy of transactions in the mempool is an important goal.
In this work we focus on achieving mempool transaction privacy through a new primitive that we term batched-threshold encryption, which is a variant of threshold encryption with strict efficiency requirements to better model the needs of resource-constrained environments such as blockchains. Unlike the naive use of threshold encryption, which requires communication proportional to $O(nB)$ to decrypt $B$ transactions with a committee of $n$ parties, our batched-threshold encryption scheme only needs $O(n)$ communication. We additionally discuss pitfalls in prior approaches that use (vanilla) threshold encryption for mempool privacy.
To show that our scheme is concretely efficient, we implement our scheme and find that transactions can be encrypted in under 6 ms, independent of committee size, and the communication required to decrypt an entire batch of $B$ transactions is 80 bytes per party, independent of the number of transactions $B$, making it an attractive choice when communication is very expensive. If deployed on Ethereum, which processes close to 500 transactions per block, it takes close to 2.8 s for each committee member to compute a partial decryption and under 3.5 s to decrypt all transactions for a block in single-threaded mode.
Thwarting Last-Minute Voter Coercion
Counter-strategies are key components of coercion-resistant voting schemes, allowing voters to submit votes that represent their own intentions in an environment controlled by a coercer. By deploying a counter-strategy a voter can prevent the coercer from learning if the voter followed the coercer’s instructions or not. Two effective counter-strategies have been proposed in the literature, one based on fake credentials and another on revoting. While fake-credential schemes assume that voters hide cryptographic keys away from the coercer, revoting schemes assume that voters can revote after being coerced.
In this work, we present a new counter-strategy technique that enables flexible vote updating, that is, a revoting approach that provides protection against coercion even if the adversary is able to coerce a voter at the very last minute of the voting phase. We demonstrate that our technique is effective by implementing it in Loki, an Internet-based coercion-resistant voting scheme that allows revoting. We prove that Loki satisfies a game-based definition of coercion-resistance that accounts for flexible vote updating. To the best of our knowledge, we provide the first technique that enables deniable coercion-resistant voting and that can evade last-minute voter coercion.
High-assurance field inversion for curve-based cryptography
The security of modern cryptography depends on multiple factors, from sound hardness assumptions to correct implementations that resist side-channel cryptanalysis. Curve-based cryptography is no different in this regard, and substantial progress in the last few decades has been achieved in both selecting parameters and devising secure implementation strategies. In this context, the security of implementations of field inversion is sometimes overlooked in the research literature, because
(i) the approach based on Fermat's Little Theorem (FLT) suffices performance-wise for many parameters used in practice;
(ii) it is typically invoked only at the very end of a cryptographic computation, with a small impact on performance;
(iii) it is challenging to implement securely for general parameters without a significant performance penalty.
However, field inversion can process sensitive information and must be protected with side-channel countermeasures like any other cryptographic operation, as illustrated by recent attacks. In this work, we focus on implementing field inversion for primes of cryptographic interest with security against timing attacks, irrespective of whether the FLT-based inversion can be efficiently implemented. We extend the Fiat-Crypto framework, which synthesizes provably correct-by-construction implementations, to implement the Bernstein-Yang inversion algorithm as a step towards this goal. This allows a correct implementation of prime field inversion to be synthesized for any prime. We benchmark the implementations across a range of primes for curve-based cryptography and they outperform traditional FLT-based approaches in most cases, with observed speedups of up to $2\times$ for the largest parameters. Our work is already used in production in the MirageOS unikernel operating system, the $\mathtt{zig}$ programming language, and the ECCKiila framework.
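As a one-line illustration of the FLT-based approach in (i): for a prime $p$ and $a$ not divisible by $p$, Fermat's little theorem gives $a^{p-2} \equiv a^{-1} \pmod p$. The Python below shows only this identity; pow() is not constant-time, which is precisely the gap that synthesized Bernstein-Yang implementations address.

    # Fermat's-little-theorem inversion: a^(p-2) = a^(-1) (mod p) for prime p.
    # Python's pow() is not constant-time; this shows the identity only.
    p = 2**255 - 19
    a = 123456789
    a_inv = pow(a, p - 2, p)
    assert (a * a_inv) % p == 1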
Faster constant-time evaluation of the Kronecker symbol with application to elliptic curve hashing
We generalize the Bernstein-Yang (BY) algorithm for constant-time modular inversion to compute the Kronecker symbol, of which the Jacobi and Legendre symbols are special cases. We start by developing a basic and easy-to-implement divstep version of the algorithm defined in terms of full-precision division steps. We then describe an optimized version due to Hamburg over word-sized inputs, similar to the jumpdivstep version of the BY algorithm, and formally verify its correctness. Along the way, we introduce a number of optimizations for implementing both versions in constant time and at high speed. The resulting algorithms are particularly suitable for the special case of computing the Legendre symbol with a dense prime $p$, where no efficient addition chain is known for the conventional approach of exponentiation to $\frac{p-1}{2}$. This is often the case for the base field of popular pairing-friendly elliptic curves. Our high-speed implementation for a range of parameters shows that the new algorithm is up to 40 times faster than the conventional exponentiation approach, and up to 25.7\% faster than the previous state of the art. We illustrate the performance of the algorithm with an application for hashing to elliptic curves, where the observed savings amount to 14.7\%--48.1\% when used for testing quadratic residuosity within the SwiftEC hashing algorithm. We also apply our techniques to the CTIDH isogeny-based key exchange, with savings of 3.5--13.5\%.
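For reference, the conventional (variable-time) route looks as follows: Euler's criterion computes the Legendre symbol by exponentiation to $\frac{p-1}{2}$, and the textbook Jacobi-symbol algorithm uses quadratic reciprocity. The Python sketch below implements only these classical versions; the paper's contribution is a constant-time divstep-based replacement, which this sketch does not attempt.

    def jacobi(a, n):
        """Textbook (variable-time) Jacobi symbol via quadratic reciprocity."""
        assert n > 0 and n % 2 == 1
        a %= n
        t = 1
        while a != 0:
            while a % 2 == 0:                 # pull out factors of two
                a //= 2
                if n % 8 in (3, 5):           # (2|n) = -1 in these cases
                    t = -t
            a, n = n, a                       # quadratic reciprocity
            if a % 4 == 3 and n % 4 == 3:
                t = -t
            a %= n
        return t if n == 1 else 0

    # For an odd prime p this agrees with Euler's criterion a^((p-1)/2) mod p.
    p = 10007
    for a in range(1, 20):
        assert jacobi(a, p) == (1 if pow(a, (p - 1) // 2, p) == 1 else -1)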
PriFHEte: Achieving Full-Privacy in Account-based Cryptocurrencies is Possible
In cryptocurrencies, all transactions are public. For their adoption, it is important that these transactions, while publicly verifiable, do not leak information about the identity and the balances of the transactors.
For UTXO-based cryptocurrencies, there are well-established approaches (e.g., ZCash) that guarantee full privacy to the transactors. Full privacy in UTXO means that each transaction is anonymous within the set of all private transactions ever posted on the blockchain.
In contrast, for account-based cryptocurrencies (e.g., Ethereum) full privacy, that is, privacy within the set of all accounts, seems to be impossible to achieve within the constraints of blockchain transactions (e.g., they have to fit in a block).
Indeed, every approach proposed in the literature achieves only a much weaker privacy guarantee called $k$-anonymity, where a transactor is private within a set of $k$ account holders.
$k$-anonymity is achieved by adding $k$ accounts to the transaction, which concretely limits the anonymity guarantee to a very small constant (e.g., $\sim$64 for QuisQuis and $\sim$256 for anonymous Zether), compared to the set of all possible accounts.
In this paper, we propose a completely new approach that does not achieve anonymity by including more accounts in the transaction, but instead makes the transaction itself ``smarter''.
Our key contribution is to provide a mechanism whereby a compact transaction can be used to correctly update all accounts. Intuitively, this guarantees that all accounts are equally likely to be the recipients/sender of such a transaction.
We therefore provide the first protocol, PriFHEte, that guarantees full privacy in account-based cryptocurrencies.
The contribution of this paper is theoretical. Our main objective is to demonstrate that achieving full privacy in account-based cryptocurrencies is actually possible.
We see our work as opening the door to new possibilities for anonymous account-based cryptocurrencies.
Nonetheless, in this paper, we also discuss PriFHEte's potential to be developed in practice by leveraging the power of off-chain scalability solutions such as zk rollups.
Fuzzy Private Set Intersection with Large Hyperballs
Traditional private set intersection (PSI) involves a receiver and a sender holding sets $X$ and $Y$, respectively, with the receiver learning only the intersection $X\cap Y$.
We turn our attention to its fuzzy variant, where the receiver holds $|X|$ hyperballs of radius $\delta$ in a metric space and the sender has $|Y|$ points.
Representing the hyperballs by their centers, the receiver learns the points $x\in X$ for which there exists $y\in Y$ such that $\mathsf{dist}(x,y)\leq \delta$ with respect to some distance metric.
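In plaintext terms, the receiver's output is the following (a reference semantics only; the protocol computes this obliviously, revealing nothing else):

    # Plaintext reference semantics of fuzzy PSI matching under an L_p metric.
    def dist(x, y, p):
        if p == float("inf"):
            return max(abs(a - b) for a, b in zip(x, y))
        return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1 / p)

    def fuzzy_psi_output(X, Y, delta, p=2):
        return [x for x in X if any(dist(x, y, p) <= delta for y in Y)]

    X = [(0, 0), (10, 10)]   # receiver's hyperball centers
    Y = [(1, 1), (50, 50)]   # sender's points
    assert fuzzy_psi_output(X, Y, delta=2.0) == [(0, 0)]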
Previous approaches either require general-purpose multi-party computation (MPC) techniques like garbled circuits or fully homomorphic encryption (FHE), leak details about the sender’s precise inputs, support limited distance metrics, or scale poorly with the hyperballs' volume.
This work presents the first black-box construction for fuzzy PSI (including other variants such as PSI cardinality, labeled PSI, and circuit PSI), which can handle polynomially large radius and dimension (i.e., a potentially exponentially large volume) in two interaction messages, supporting general $L_{p\in[1,\infty]}$ distance, without relying on garbled circuits or FHE. The protocol excels in both asymptotic and concrete efficiency compared to existing works. For security, we rely solely on the assumption that Decisional Diffie-Hellman (DDH) holds in the random oracle model.
The Ouroboros of ZK: Why Verifying the Verifier Unlocks Longer-Term ZK Innovation
Verifying the verifier in the context of zero-knowledge proofs is an essential part of ensuring the long-term integrity of the zero-knowledge ecosystem. This is vital for zero-knowledge rollups as well as other industrial applications of ZK. In addition to further minimizing the required trust and reducing the trusted computing base (TCB), having a verified verifier opens the door to decentralized proof generation by potentially untrusted parties. We outline a research program and justify the need for more work at the intersection of ZK and formal verification research.
Transmitter Actions for Secure Integrated Sensing and Communication
This work models a secure integrated sensing and communication (ISAC) system as a wiretap channel with action-dependent channel states and channel output feedback, e.g., obtained through reflections. The transmitted message is split into a common and a secure message, both of which must be reliably recovered at the legitimate receiver, while the secure message needs to be kept secret from the eavesdropper. The transmitter actions, such as beamforming vector design, affect the corresponding state at each channel use. The action sequence is modeled to depend on both the transmitted message and channel output feedback. For perfect channel output feedback, the secrecy-distortion regions are provided for physically-degraded and reversely-physically-degraded secure ISAC channels with transmitter actions. The corresponding rate regions when the entire message should be kept secret are also provided. The results are illustrated through characterizing the secrecy-distortion region of a binary example.
Slothful reduction
In the implementation of many public key schemes, there is a need to implement modular arithmetic. Typically this consists of addition, subtraction, multiplication and (occasionally) division with respect to a prime modulus. To resist certain side-channel attacks it helps if implementations are ``constant time''. As the calculations proceed there is potentially a need to reduce the result of an operation to its remainder modulo the prime modulus. However, this reduction can often be delayed, a process known as ``lazy reduction'': results do not have to be fully reduced at each step, and full reduction takes place only occasionally, providing a performance benefit. Here we extend the idea to determine the circumstances under which reduction can be delayed to the very end of a particular public key operation.
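As a toy illustration of the idea (using Python big integers rather than fixed-width limbs, so overflow is not modelled): an inner product can be accumulated unreduced and reduced once at the end.

    # Lazy reduction, toy version: defer the modular reduction across a whole
    # inner product. A real fixed-width implementation must verify that the
    # unreduced intermediates never exceed the available headroom.
    p = 2**255 - 19
    a = [1234567 * i + 42 for i in range(8)]
    b = [7654321 * i + 17 for i in range(8)]

    eager = 0
    for x, y in zip(a, b):
        eager = (eager + x * y) % p               # reduce at every step

    lazy = sum(x * y for x, y in zip(a, b)) % p   # reduce once at the end

    assert eager == lazy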
Group Oblivious Message Retrieval
Anonymous message delivery, as in private communication and privacy-preserving blockchain applications, ought to protect recipient metadata: a message should not be inadvertently linkable to its destination. But how can messages then be delivered to each recipient, without each recipient scanning all messages? Recent work constructed Oblivious Message Retrieval (OMR) protocols that outsource this job to untrusted servers in a privacy-preserving manner.
We consider the case of group messaging, where each message may have multiple recipients (e.g., in a group chat or blockchain transaction). Direct use of prior OMR protocols in the group setting increases the servers' work linearly in the group size, rendering it prohibitively costly for large groups.
We thus devise new protocols where the servers' cost grows very slowly with the group size, while recipients' cost is low and independent of the group size. Our approach uses Fully Homomorphic Encryption and other lattice-based techniques, building on and improving on prior work. The efficient handling of groups is attained by encoding multiple recipient-specific clues into a single polynomial or multilinear function that can be efficiently evaluated under FHE, and via preprocessing and amortization techniques.
We formally study several variants of Group Oblivious Message Retrieval (GOMR) and describe corresponding GOMR protocols. Our implementation and benchmarks show, for parameters of interest, cost reductions of orders of magnitude compared to prior schemes. For example, the servers' cost is ${\sim}\$3.36$ per million messages scanned, where each message may address up to $15$ recipients.
Admissible Parameters for the Crossbred Algorithm and Semi-regular Sequences over Finite Fields
Multivariate public key cryptography (MPKC) is one of the most promising alternatives to build quantum-resistant signature schemes, as evidenced in NIST's call for additional post-quantum signature schemes. The main assumption in MPKC is the hardness of the Multivariate Quadratic (MQ) problem, which asks for a common root of a system of quadratic polynomials over a finite field. Although the Crossbred algorithm is among the most efficient algorithms to solve MQ over small fields, its complexity analysis stands on shaky ground. In particular, it is not clear for what parameters it works and under what assumptions.
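To make the MQ problem concrete, here is a toy instance generator in Python: random quadratic polynomials over $\mathbb F_2$ with a planted common root (the sizes are illustrative; real instances use cryptographic parameters).

    import random

    # Toy MQ instance over GF(2): m random quadratic polynomials in n
    # variables with a planted common root s. Sizes are illustrative only.
    n, m = 6, 8
    s = [random.randrange(2) for _ in range(n)]

    def evaluate(quad, lin, x):
        v = sum(c * x[i] * x[j] for (i, j), c in quad.items())
        v += sum(c * xi for c, xi in zip(lin, x))
        return v % 2

    system = []
    for _ in range(m):
        quad = {(i, j): random.randrange(2) for i in range(n) for j in range(i, n)}
        lin = [random.randrange(2) for _ in range(n)]
        const = evaluate(quad, lin, s)    # pick the constant so that s is a root
        system.append((quad, lin, const))

    assert all(evaluate(q, l, s) == c for q, l, c in system)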
In this work, we provide a rigorous analysis of the Crossbred algorithm over any finite field. We provide a complete explanation of the series of admissible parameters proposed in previous literature and explicitly state the regularity assumptions required for its validity. Moreover, we show that the series does not tell the whole story, hence we propose an additional condition for Crossbred to work. Additionally, we define and characterize a notion of regularity for systems over a small field, which is one of the main building blocks in the series of admissible parameters.
ScionFL: Efficient and Robust Secure Quantized Aggregation
Secure aggregation is commonly used in federated learning (FL) to alleviate privacy concerns related to the central aggregator seeing all parameter updates in the clear. Unfortunately, most existing secure aggregation schemes ignore two critical orthogonal research directions that aim to (i) significantly reduce client-server communication and (ii) mitigate the impact of malicious clients. However, both of these additional properties are essential to facilitate cross-device FL with thousands or even millions of (mobile) participants.
In this paper, we unite both research directions by introducing ScionFL, the first secure aggregation framework for FL that operates efficiently on quantized inputs and simultaneously provides robustness against malicious clients. Our framework leverages (novel) multi-party computation (MPC) techniques and supports multiple linear (1-bit) quantization schemes, including ones that utilize the randomized Hadamard transform and Kashin's representation.
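As a rough sketch of 1-bit quantization with a randomized Hadamard transform (the general technique mentioned above, not ScionFL's exact scheme): rotate the update by HD for a shared random sign vector D, transmit only the signs plus one scale, and invert the rotation on the receiver side. All parameters below are illustrative.

    import random

    def fwht(v):
        """Unnormalized fast Walsh-Hadamard transform; len(v) a power of two.
        Applying it twice multiplies by len(v), so H/len(v) inverts H."""
        v = list(v)
        h = 1
        while h < len(v):
            for i in range(0, len(v), 2 * h):
                for j in range(i, i + h):
                    x, y = v[j], v[j + h]
                    v[j], v[j + h] = x + y, x - y
            h *= 2
        return v

    n = 8
    x = [0.3, -1.2, 0.7, 0.1, -0.4, 2.0, -0.9, 0.5]
    d = [random.choice([-1, 1]) for _ in range(n)]   # shared random signs
    rot = fwht([di * xi for di, xi in zip(d, x)])    # rotate: H * D * x
    scale = sum(abs(r) for r in rot) / n             # one float per vector
    bits = [1 if r >= 0 else -1 for r in rot]        # the 1-bit payload

    # Receiver: rescale the signs and undo the rotation (D^-1 = D).
    est = [di * e / n for di, e in zip(d, fwht([scale * b for b in bits]))]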
Our theoretical results are supported by extensive evaluations.
We show that with no overhead for clients and moderate overhead for the server compared to transferring and processing quantized updates in plaintext, we obtain comparable accuracy for standard FL benchmarks. Moreover, we demonstrate the robustness of our framework against state-of-the-art poisoning attacks.
(Strong) aPAKE Revisited: Capturing Multi-User Security and Salting
Asymmetric Password-Authenticated Key Exchange (aPAKE) protocols, particularly Strong aPAKE (saPAKE) have enjoyed significant attention, both from academia and industry, with the well-known OPAQUE protocol currently undergoing standardization. In (s)aPAKE, a client and a server collaboratively establish a high-entropy key, relying on a previously exchanged password for authentication. A main feature is its resilience against offline and precomputation (for saPAKE) attacks. OPAQUE, as well as most other aPAKE protocols, have been designed and analyzed in a single-user setting, i.e., modelling that only a single user interacts with the server. By the composition framework of UC, security for the actual multi-user setting is then conjectured. As any real-world (s)aPAKE instantiation will need to cater multiple users, this introduces a dangerous gap in which developers are tasked to extend the single-user protocol securely and in a UC-compliant manner.
In this work, we extend the (s)aPAKE definition to directly model the multi-user setting, and explicitly capture the impact that a server compromise has across user accounts. We show that the currently standardized multi-user version of OPAQUE might not provide the expected security, as it is insecure against offline attacks as soon as the file for one user in the system is compromised. This is due to using shared state among different users, which violates the UC composition framework. However, we show that another change introduced in the standardization draft which also involves a shared state does not compromise security. When extending the aPAKE security in the multi-client setting, we notice that the widely used security definition captures significantly weaker security guarantees than what is offered by many protocols. Essentially, the aPAKE definition assumes that the server stores unsalted password-hashes, whereas several protocols explicitly use a salt to protect against precomputation attacks. We therefore propose a definitional framework that captures different salting approaches -- thus showing that the security gap between aPAKE and saPAKE can be smaller than expected.
Efficient Second-Order Masked Software Implementations of Ascon in Theory and Practice
In this paper, we present efficient protected software implementations of the authenticated cipher Ascon, the recently announced winner of the NIST standardization process for lightweight cryptography.
Our implementations target theoretical and practical security against second-order power analysis attacks.
First, we propose an efficient second-order extension of a previously presented first-order masking of the Keccak S-box that does not require online randomness.
The extension itself is inspired by a previously presented second-order masking of an AND-XOR construction.
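For background, the classical ISW multiplication gadget is the standard way to mask an AND over Boolean shares; note that, unlike the construction discussed above, it consumes fresh randomness for every AND. A minimal Python sketch (functional correctness only; real masked software must also control evaluation order and register transitions):

    import secrets

    def isw_and(a, b):
        """ISW gadget over GF(2): XOR-sharings of bits a and b in, an
        XOR-sharing of a AND b out. Fresh randomness per share pair."""
        n = len(a)
        c = [a[i] & b[i] for i in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                r = secrets.randbelow(2)
                c[i] ^= r
                c[j] ^= (r ^ (a[i] & b[j])) ^ (a[j] & b[i])
        return c

    def share3(x):
        """Split bit x into 3 XOR shares (second-order masking)."""
        r1, r2 = secrets.randbelow(2), secrets.randbelow(2)
        return [r1, r2, x ^ r1 ^ r2]

    for x in (0, 1):
        for y in (0, 1):
            c = isw_and(share3(x), share3(y))
            assert c[0] ^ c[1] ^ c[2] == (x & y)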
We then discuss implementation tricks that further improve performance and reduce the chance of unintended combination of shares during the execution of masked software on microprocessors.
This allows us to retain the theoretical protection orders of masking in practice with low performance overhead, which we also confirm via TVLA on ARM microprocessors.
The formal correctness of our designs is additionally verified using Coco on the netlist of a RISC-V IBEX core.
We benchmark our masked software designs on 32-bit ARM and RISC-V microprocessor platforms.
On both platforms, we can perform Ascon-128 authenticated encryption with a throughput of about 300 or 550 cycles/byte when operating on 2 or 3 shares.
When utilizing a leveled implementation technique, the performance of our masked implementations generally improves to about 90 cycles/byte.
We publish our masked software implementations together with a generic software framework for evaluating performance and side-channel resistance of various masked cryptographic implementations.
Simultaneous Haar Indistinguishability with Applications to Unclonable Cryptography
Unclonable cryptography is concerned with leveraging the no-cloning principle to build cryptographic primitives that are otherwise impossible to achieve classically. Understanding the feasibility of unclonable encryption, one of the key unclonable primitives, satisfying indistinguishability security in the plain model has been a major open question in the area. So far, the existing constructions of unclonable encryption are either in the quantum random oracle model or are based on new conjectures.
We present a new approach to unclonable encryption via a reduction to a novel question about nonlocal quantum state discrimination: how well can non-communicating -- but entangled -- players distinguish between different distributions over quantum states? We call this task simultaneous state indistinguishability. Our main technical result is showing that the players cannot distinguish between each player receiving independently-chosen Haar random states versus all players receiving the same Haar random state.
We leverage this result to present the first construction of unclonable encryption satisfying indistinguishability security, with quantum decryption keys, in the plain model. We also show other implications to single-decryptor encryption and leakage-resilient secret sharing.
Reducing the CRS Size in Registered ABE Systems
Attribute-based encryption (ABE) is a generalization of public-key encryption that enables fine-grained access control to encrypted data. In (ciphertext-policy) ABE, a central trusted authority issues decryption keys for attributes $x$ to users. In turn, ciphertexts are associated with a decryption policy $\mathcal{P}$. Decryption succeeds and recovers the encrypted message whenever $\mathcal{P}(x) = 1$. Recently, Hohenberger, Lu, Waters, and Wu (Eurocrypt 2023) introduced the notion of registered ABE, which is an ABE scheme without a trusted central authority. Instead, users generate their own public/secret keys (just like in public-key encryption) and then register their keys (and attributes) with a key curator. The key curator is a transparent and untrusted entity.
Currently, the best pairing-based registered ABE schemes support monotone Boolean formulas and an a priori bounded number of users $L$. A major limitation of existing schemes is that they require a (structured) common reference string (CRS) of size $L^2 \cdot |\mathcal{U}|$ where $|\mathcal{U}|$ is the size of the attribute universe. In other words, the size of the CRS scales quadratically with the number of users and multiplicatively with the size of the attribute universe. The large CRS makes these schemes expensive in practice and limited to a small number of users and a small universe of attributes.
In this work, we give two ways to reduce the CRS size in pairing-based registered ABE schemes. First, we introduce a combinatoric technique based on progression-free sets that enables registered ABE for the same class of policies but with a CRS whose size is sub-quadratic in the number of users. Asymptotically, we obtain a scheme where the CRS size is nearly linear in the number of users $L$ (i.e., $L^{1 + o(1)}$). If we take a more concrete-efficiency-oriented focus, we can instantiate our framework to obtain a construction with a CRS of size $L^{\log_2 3} \approx L^{1.6}$. For instance, in a scheme for 100,000 users, our approach reduces the CRS by a factor of over $115\times$ compared to previous approaches (and without incurring any overhead in encryption/decryption time). Our second approach for reducing the CRS size is to rely on a partitioning-based argument when arguing security of the registered ABE scheme. Previous approaches took a dual-system approach. Using a partitioning-based argument yields a registered ABE scheme where the size of the CRS is independent of the size of the attribute universe. The cost is that the resulting scheme satisfies a weaker notion of static security. Our techniques for reducing the CRS size can be combined, and taken together, we obtain a pairing-based registered ABE scheme that supports monotone Boolean formulas with a CRS size of $L^{1 + o(1)}$. Notably, this is the first pairing-based registered ABE scheme that does not require imposing a bound on the size of the attribute universe during setup time.
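One standard progression-free set with exactly the growth rate quoted above (assuming the classic construction; the paper may use others) is the set of integers whose ternary expansion uses only the digits 0 and 1: it contains no 3-term arithmetic progression, and its $L$-th element is about $L^{\log_2 3} \approx L^{1.585}$, which is where the exponent comes from. A small Python check:

    def progression_free(L):
        """First L integers whose base-3 digits are all 0 or 1; a classic
        3-term-AP-free set whose L-th element grows like L^(log_2 3)."""
        out = []
        for k in range(L):
            v, pow3 = 0, 1
            while k:
                if k & 1:
                    v += pow3
                pow3 *= 3
                k >>= 1
            out.append(v)
        return out

    S = progression_free(64)
    # No three elements a < b < c satisfy a + c == 2*b.
    triples = ((a, b, c) for i, a in enumerate(S)
               for j, b in enumerate(S[i + 1:], i + 1) for c in S[j + 1:])
    assert all(a + c != 2 * b for a, b, c in triples)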
As an additional application, we also show how to apply our techniques based on progression-free sets to the batch argument (BARG) for $\mathsf{NP}$ scheme of Waters and Wu (Crypto 2022) to obtain a scheme with a nearly-linear CRS without needing to rely on non-black-box bootstrapping techniques.
PERK: Compact Signature Scheme Based on a New Variant of the Permuted Kernel Problem
In this work we introduce PERK, a compact digital signature scheme based on the hardness of a new variant of the Permuted Kernel Problem (PKP). PERK achieves the smallest signature sizes for any PKP-based scheme for NIST category I security with 6 kB, while obtaining competitive signing and verification timings. PERK also compares well with the general state-of-the-art. To substantiate those claims we provide an optimized constant-time AVX2 implementation, a detailed performance analysis and different size-performance trade-offs.
Technically our scheme is based on a Zero-Knowledge Proof of Knowledge following the MPC-in-the-Head paradigm and employing the Fiat-Shamir transform. We provide comprehensive security proofs, ensuring EUF-CMA security for PERK in the random oracle model. The efficiency of PERK stems largely from our particular choice of PKP variant, which allows for an application of the challenge-space amplification technique due to Bidoux-Gaborit (C2SI 2023).
Our second main contribution is an in-depth study of the hardness of the introduced problem variant. First, we establish a link between the hardness of our problem variant and the hardness of standard PKP. Then, we initiate an in-depth study of the concrete complexity to solve our variant. We present a novel algorithm which outperforms previous approaches for certain parameter regimes. However, the proximity of our problem variant to the standard variant can be controlled via a specific parameter. This enables us to effectively safeguard against our new attack and potential future extensions by a choice of parameters that ensures only a slight variation from standard PKP.
Leakage-Tolerant Circuits
A leakage-resilient circuit for $f:\{0,1\}^n\to\{0,1\}^m$ is a randomized Boolean circuit $C$ mapping a randomized encoding of an input $x$ to an encoding of $y=f(x)$, such that applying any leakage function $L\in \cal L$ to the wires of $C$ reveals essentially nothing about $x$. A leakage-tolerant circuit achieves the stronger guarantee that even when $x$ and $y$ are not protected by any encoding, the output of $L$ can be simulated by applying some $L'\in \cal L$ to $x$ and $y$ alone. Thus, $C$ is as secure as an ideal hardware implementation of $f$ with respect to leakage from $\cal L$.
Leakage-resilient circuits were constructed for low-complexity classes $\cal L$, including (length-$t$ output) $\mathcal{AC}^0$ functions, parities, and functions with bounded communication complexity. In contrast, leakage-tolerant circuits were only known for the simple case of probing leakage, where $L$ outputs the values of $t$ wires in $C$.
We initiate a systematic study of leakage-tolerant circuits for natural classes $\cal L$ of global leakage functions, obtaining the following main results.
$\textbf{Leakage-tolerant circuits for depth-1 leakage.}$ Every circuit $C_f$ for $f$ can be efficiently compiled into an $\cal L$-tolerant circuit $C$ for $f$, where $\cal L$ includes all leakage functions $L$ that output either $t$ parities or $t$ disjunctions (alternatively, conjunctions) of any number of wires or their negations. In the case of parities, our simulator runs in $2^{O(t)}$ time. We provide partial evidence that this may be inherent.
$\textbf{Application to stateful leakage-resilient circuits.}$ We present a general transformation from (stateless) leakage-tolerant circuits to stateful leakage-resilient circuits. Using this transformation, we obtain the first constructions of stateful $t$-leakage-resilient circuits that tolerate a continuous parity/disjunction/conjunction leakage in which the circuit size grows sub-quadratically with $t$. Interestingly, here we can obtain $\mathtt{poly}(t)$-time simulation even in the case of parities.
MQ on my Mind: Post-Quantum Signatures from the Non-Structured Multivariate Quadratic Problem
This paper presents MQ on my Mind (MQOM), a digital signature scheme based on the difficulty of solving multivariate systems of quadratic equations (MQ problem). MQOM has been submitted to the NIST call for additional post-quantum signature schemes. MQOM relies on the MPC-in-the-Head (MPCitH) paradigm to build a zero-knowledge proof of knowledge (ZK-PoK) for MQ which is then turned into a signature scheme through the Fiat-Shamir heuristic. The underlying MQ problem is non-structured in the sense that the system of quadratic equations defining an instance is drawn uniformly at random. This is one of the hardest and most studied problems from multivariate cryptography which hence constitutes a conservative choice to build candidate post-quantum cryptosystems. For the efficient application of the MPCitH paradigm, we design a specific MPC protocol to verify the solution of an MQ instance. Compared to other multivariate signature schemes based on non-structured MQ instances, MQOM achieves the shortest signatures (6.3--7.8 kB) while keeping very short public keys (a few dozen bytes). Other multivariate signature schemes are based on structured MQ problems (less conservative) which either have large public keys (e.g., UOV) or use recently proposed variants of these MQ problems (e.g., MAYO).
Improved Conditional Cube Attacks on Ascon AEADs in Nonce-Respecting Settings -- with a Break-Fix Strategy
The best-known distinguisher on 7-round Ascon-128 and Ascon-128a AEAD uses a 60-dimensional cube where the nonce bits are set to be equal in the third and fourth rows of the Ascon state during initialization (Rohit et al. ToSC 2021/1).
It was not known how to use this distinguisher to mount key-recovery attacks.
In this paper, we investigate this problem using a new strategy called \textit{break-fix} for the conditional cube attack. The idea is to introduce slightly-modified cubes which increase the degrees of 7-round output bits to more than 59 (break phase) and then find key conditions which can bring the degree back to 59 (fix phase).
Using this idea, key-recovery attacks on 7-round Ascon-128, Ascon-128a and Ascon-80pq are proposed.
The attacks have better time/memory complexities than the existing attacks, and in some cases improve the weak-key attacks as well.
A Deniability Analysis of Signal's Initial Handshake PQXDH
Many people use messaging apps such as Signal to exercise their right to private communication. To cope with the advent of quantum computing, Signal employs a new initial handshake protocol called PQXDH for post-quantum confidentiality, yet keeps guarantees of authenticity and deniability classical. Compared to its predecessor X3DH, PQXDH includes a KEM encapsulation and a signature on the ephemeral key. In this work we show that PQXDH does not meet the same deniability guarantees as X3DH due to the signature on the ephemeral key. Our analysis relies on plaintext awareness of the KEM, which Signal's implementation of PQXDH does not provide. As for X3DH, both parties (initiator and responder) obtain different deniability guarantees due to the asymmetry of the protocol.
For our analysis of PQXDH, we introduce a new model for deniability of key exchange that allows a more fine-grained analysis. Our deniability model picks up on the ideas of prior work and facilitates new combinations of deniability notions, such as deniability against malicious adversaries in the big brother model, i.e. where the distinguisher knows all secret keys. Our model may be of independent interest.
One vector to rule them all: Key recovery from one vector in UOV schemes
Unbalanced Oil and Vinegar is a multivariate signature scheme that was introduced in 1999.
Most multivariate candidates for signature schemes at NIST's PQC standardization process are either based on UOV or closely related to it.
The UOV trapdoor is a secret subspace, the "oil subspace".
We show how to recover an equivalent secret key from the knowledge of a single vector in the oil subspace in any characteristic.
The reconciliation attack was previously sped up by adding bilinear equations to the subsequent computations, and it was able to conclude only after two such vectors were found.
We show here that these bilinear equations contain enough information to dismiss the quadratic equations and retrieve the secret subspace with linear algebra for practical parametrizations of UOV, in at most 15 seconds for modern instantiations.
This proves that the security of the UOV scheme lies in the complexity of finding exactly one vector in the oil space.
In addition, we deduce a key recovery attack from any forgery attack by applying a corollary of our main result.
We show how to extend this result to schemes related to UOV, such as MAYO and VOX.
Constant Input Attribute Based (and Predicate) Encryption from Evasive and Tensor LWE
Constructing advanced cryptographic primitives such as obfuscation or broadcast encryption from standard hardness assumptions in the post quantum regime is an important area of research, which has met with limited success despite significant effort. It is therefore extremely important to find new, simple to state assumptions in this regime which can be used to fill this gap. An important step was taken recently by Wee (Eurocrypt '22) who identified two new assumptions from lattices, namely evasive ${\sf LWE}$ and tensor ${\sf LWE}$, and used these to construct broadcast encryption and ciphertext policy attribute based encryption for ${\sf P}$ with optimal parameters. Independently, Tsabary formulated a similar assumption and used it to construct witness encryption (Crypto '22). Following Wee's work, Vaikuntanathan, Wee and Wichs independently provided a construction of witness encryption (Asiacrypt '22).
In this work, we advance this line of research by providing the first construction of multi-input attribute based encryption (${\sf MIABE}$) for the function class ${\sf NC_1}$ for any constant arity from evasive ${\sf LWE}$. Our construction can be extended to support the function class ${\sf P}$ by using evasive and a suitable strengthening of tensor ${\sf LWE}$. In more detail, our construction supports $k$ encryptors, for any constant $k$, where each encryptor uses the master secret key ${\sf msk}$ to encode its input $(\mathbf{x}_i, m_i)$, the key generator computes a key ${\sf sk}_f$ for a function $f \in {\sf NC}_1$ and the decryptor can recover $(m_1,\ldots,m_k)$ if and only if $f(\mathbf{x}_1,\ldots,\mathbf{x}_k)=1$. The only known construction for ${\sf MIABE}$ for ${\sf NC}_1$ by Agrawal, Yadav and Yamada (Crypto '22) supports arity $2$ and relies on pairings in the generic group model (or with a non-standard knowledge assumption) in addition to ${\sf LWE}$. Furthermore, it is completely unclear how to go beyond arity $2$ using this approach due to the reliance on pairings.
Using a compiler from Agrawal, Yadav and Yamada (Crypto '22), our ${\sf MIABE}$ can be upgraded to multi-input predicate encryption for the same arity and function class. Thus, we obtain the first constructions for constant-arity predicate and attribute based encryption for a generalized class such as ${\sf NC}_1$ or ${\sf P}$ from simple assumptions that may be conjectured post-quantum secure. Along the way, we show that the tensor ${\sf LWE}$ assumption can be reduced to standard ${\sf LWE}$ in an important special case which was not known before. This adds confidence to the plausibility of the assumption and may be of wider interest.
BGJ15 Revisited: Sieving with Streamed Memory Access
The focus of this paper is to tackle the issue of memory access within sieving algorithms for lattice problems. We have conducted an in-depth analysis of an optimized BGJ sieve (Becker-Gama-Joux 2015), and our findings suggest that its inherent structure is significantly more memory-efficient compared to the asymptotically fastest BDGL sieve (Becker-Ducas-Gama-Laarhoven 2016). Specifically, it necessitates merely $2^{0.2075n + o(n)}$ streamed (non-random) main memory accesses for the execution of an $n$-dimensional sieving. We also provide evidence that the time complexity of this refined BGJ sieve could potentially be $2^{0.292n + o(n)}$, or at least something remarkably close to it. Actually, it outperforms the BDGL sieve in all dimensions that are practically achievable. We hope that this study will contribute to the resolution of the ongoing debate regarding the measurement of RAM access overhead in large-scale, sieving-based lattice attacks.
The concept above is also supported by our implementation. In fact, we provide a CPU-based implementation of the refined BGJ sieve within an optimized sieving framework that is highly efficient in both time and memory. This implementation results in approximately 40% savings in RAM usage and is at least $2^{4.5}$ times more efficient in terms of gate count compared to the previous 4-GPU implementation (Ducas-Stevens-Woerden 2021). Notably, we have successfully solved the 183-dimensional SVP Darmstadt Challenge in 30 days using a 112-core server and approximately 0.87 TB of RAM. The majority of previous sieving-based SVP computations relied on the HK3-sieve (Herold-Kirshanova 2017), hence this implementation could offer further insights into the behavior of these asymptotically faster sieving algorithms when applied to large-scale problems. Moreover, our refined cost estimation of SVP based on this implementation suggests that some of the NIST PQC candidates, such as Falcon-512, are unlikely to achieve NIST's security requirements.
An update on Keccak performance on ARMv7-M
This note provides an update on Keccak performance on ARMv7-M processors. Starting from the XKCP implementation, we have applied architecture-specific optimizations that yield a performance gain of up to 21% for the largest permutation instance.
Provable Security for PKI Schemes
PKI schemes provide a critical foundation for applied cryptographic protocols.
However, there are no rigorous security specifications for realistic PKI schemes, and therefore no PKI scheme has been proven secure.
Cryptographic systems that use PKI are analyzed by adopting overly simplified models of the PKI, often simply assuming securely distributed public keys. This is problematic given the extensive reliance on PKI, the multiple failures of PKI systems, and the complexity of both proposed and deployed systems, which involve complex requirements and models.
We present game-based security specifications for PKI schemes, and analyze important and widely deployed PKIs: PKIX and two variants of Certificate Transparency (CT). All PKIs are based on the X.509v3 standard and its CRL revocation mechanism. Our analysis identified a few subtle vulnerabilities, and provides reduction-based proofs showing that the PKIs ensure specific requirements under specific models (assumptions).
To our knowledge, this is the first reduction-based proof of security for a realistic PKI scheme, e.g., supporting certificate chains.
A Detailed Analysis of Fiat-Shamir with Aborts
Lyubashevsky's signatures are based on the Fiat-Shamir with Aborts paradigm. It transforms an interactive identification protocol that has a non-negligible probability of aborting into a signature scheme by repeating executions until a loop iteration does not trigger an abort. Interaction is removed by replacing the verifier's challenge with the evaluation of a hash function, modeled as a random oracle in the analysis. The access to the random oracle is classical (ROM), resp. quantum (QROM), if one is interested in security against classical, resp. quantum, adversaries. Most analyses in the literature consider a setting with a bounded number of aborts (i.e., signing fails if no signature is output within a prescribed number of loop iterations), while practical instantiations (e.g., Dilithium) run until a signature is output (i.e., loop iterations are unbounded).
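The abort loop can be made concrete with a toy Schnorr-style example (a minimal sketch with a one-bit challenge and integer rejection bounds chosen so that accepted responses are distributed independently of the secret; the parameters are purely illustrative and this is not the lattice-based setting of Dilithium):

    # Toy Fiat-Shamir-with-Aborts signing loop; insecure toy parameters.
    import hashlib, secrets

    p, g = 2**127 - 1, 3          # toy group for the commitment (not secure)
    S, B = 2**16, 2**18           # secret bound and mask bound; abort prob. S/B

    def H(w, msg):                # hash replaces the verifier's challenge
        d = hashlib.sha256(f"{w}|{msg}".encode()).digest()
        return int.from_bytes(d, "big") & 1   # toy one-bit challenge

    def keygen():
        s = secrets.randbelow(S)              # small secret
        return s, pow(g, s, p)                # (sk, pk)

    def sign(sk, msg, max_iters=None):        # max_iters=None: unbounded variant
        i = 0
        while max_iters is None or i < max_iters:
            y = secrets.randbelow(B)          # masking randomness
            w = pow(g, y, p)                  # commitment (first flow)
            c = H(w, msg)                     # non-interactive challenge
            z = y + c * sk                    # candidate response
            if S <= z < B:                    # abort unless z hides sk:
                return (c, z)                 # accepted z is uniform on [S, B)
            i += 1
        return None                           # bounded variant: signing failed

    def verify(pk, msg, sig):
        c, z = sig
        w = pow(g, z, p) * pow(pk, -c, p) % p # recompute the commitment
        return S <= z < B and H(w, msg) == c

The range check on $z$ is the abort step: conditioned on acceptance, $z$ is uniform on $[S, B)$ regardless of the secret, which is exactly the leakage-prevention property the rejection sampling enforces.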
In this work, we emphasize that combining random oracles with loop iterations induces numerous technicalities for analyzing the correctness, run-time, and security of the resulting schemes, in both the bounded and unbounded cases. As a first contribution, we shed light on errors in all existing analyses. We then provide two detailed analyses in the QROM for the bounded case, adapted from Kiltz, Lyubashevsky, and Schaffner [EUROCRYPT'18] and from Grilo, Hövelmanns, Hülsing, and Majenz [ASIACRYPT'21]. In the process, we prove that the underlying $\Sigma$-protocol achieves a stronger zero-knowledge property than usually considered for $\Sigma$-protocols with aborts, which enables a corrected analysis. A further contribution is a detailed analysis of the case of unbounded aborts, the latter inducing several additional subtleties.
Quantum Oblivious LWE Sampling and Insecurity of Standard Model Lattice-Based SNARKs
The Learning With Errors ($\mathsf{LWE}$) problem asks to find $\mathbf{s}$ from an input of the form $(\mathbf{A}, \mathbf{b} = \mathbf{A}\mathbf{s}+\mathbf{e}) \in (\mathbb{Z}/q\mathbb{Z})^{m \times n} \times (\mathbb{Z}/q\mathbb{Z})^{m}$, for a vector $\mathbf{e}$ that has small-magnitude entries. In this work, we do not focus on solving $\mathsf{LWE}$ but on the task of sampling instances. As these are extremely sparse in their range, it may seem plausible that the only way to proceed is to first create $\mathbf{s}$ and $\mathbf{e}$ and then set $\mathbf{b} = \mathbf{A}\mathbf{s}+\mathbf{e}$. In particular, such an instance sampler knows the solution. This raises the question of whether it is possible to obliviously sample $(\mathbf{A}, \mathbf{A}\mathbf{s}+\mathbf{e})$, namely, without knowing the underlying $\mathbf{s}$. A variant of the assumption that oblivious $\mathsf{LWE}$ sampling is hard has been used in a series of works to analyze the security of candidate constructions of Succinct Non-interactive Arguments of Knowledge (SNARKs). As the assumption is related to $\mathsf{LWE}$, these SNARKs have been conjectured to be secure in the presence of quantum adversaries.
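As a point of reference, the naive sampler described above is trivial to write down (a numpy sketch with toy parameters; the point is precisely that it cannot avoid learning $\mathbf{s}$):

    # Naive LWE instance sampler: it creates s and e first, hence knows them.
    import numpy as np

    def sample_lwe(n=16, m=32, q=3329, eta=3, rng=np.random.default_rng()):
        A = rng.integers(0, q, size=(m, n))        # uniform public matrix
        s = rng.integers(0, q, size=n)             # secret (known to the sampler)
        e = rng.integers(-eta, eta + 1, size=m)    # small-magnitude error
        b = (A @ s + e) % q                        # b = A s + e (mod q)
        return (A, b), s                           # the sampler learns the solution

    (A, b), s = sample_lwe()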
Our main result is a quantum polynomial-time algorithm that samples well-distributed $\mathsf{LWE}$ instances while provably not knowing the solution, under the assumption that $\mathsf{LWE}$ is hard. Moreover, the approach works for a vast range of $\mathsf{LWE}$ parametrizations, including those used in the above-mentioned SNARKs. This invalidates the assumptions used in their security analyses, although it does not yield attacks against the constructions themselves.
Quantum Key-Revocable Dual-Regev Encryption, Revisited
Quantum information can be used to achieve novel cryptographic primitives that are impossible to achieve classically. A recent work by Ananth, Poremba, and Vaikuntanathan (TCC 2023) equips the dual-Regev encryption scheme, introduced by Gentry, Peikert, and Vaikuntanathan (STOC 2008), with key revocation capabilities using quantum information. They further showed that the key-revocable dual-Regev scheme implies fully homomorphic encryption and pseudorandom functions, both likewise equipped with key revocation capabilities. Unfortunately, they were only able to prove the security of their schemes based on new conjectures, and left open the problem of basing the security of key-revocable dual-Regev encryption on well-studied assumptions.
In this work, we resolve this open problem. Assuming polynomial hardness of learning with errors (over sub-exponential modulus), we show that key-revocable dual-Regev encryption is secure. As a consequence, for the first time, we achieve the following results:
1. Key-revocable public-key encryption and key-revocable fully-homomorphic encryption satisfying classical revocation security and based on polynomial hardness of learning with errors. Prior works either did not achieve classical revocation or were based on sub-exponential hardness of learning with errors.
2. Key-revocable pseudorandom functions satisfying classical revocation from the polynomial hardness of learning with errors. Prior works relied upon unproven conjectures.
Updatable Encryption from Group Actions
Updatable Encryption (UE) allows rotating the encryption key in the outsourced storage setting while minimizing the bandwidth used. The server can update ciphertexts to the new key using a token provided by the client. UE schemes should provide strong confidentiality guarantees against an adversary that can corrupt keys and tokens.
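To fix ideas, here is a minimal toy of the update mechanic in the style of exponentiation-based schemes such as SHINE (deterministic, unauthenticated, toy modulus; a sketch of the mechanics only, not the paper's group-action construction):

    # Toy ciphertext rotation: C = m^k mod p, token = k_new/k_old mod (p-1).
    import secrets
    from math import gcd

    p = 2**127 - 1                      # toy prime modulus (not a secure choice)

    def keygen():
        while True:
            k = secrets.randbelow(p - 1)
            if gcd(k, p - 1) == 1:      # k must be invertible mod p-1
                return k

    def encrypt(k, m):                  # m in Z_p^*; no padding/randomness (toy)
        return pow(m, k, p)

    def token(k_old, k_new):            # computed by the client at rotation time
        return k_new * pow(k_old, -1, p - 1) % (p - 1)

    def update(c, delta):               # server: re-encrypt without m or keys
        return pow(c, delta, p)         # (m^k_old)^(k_new/k_old) = m^k_new

    def decrypt(k, c):
        return pow(c, pow(k, -1, p - 1), p)

    k1, k2 = keygen(), keygen()
    c = encrypt(k1, 123456789)
    assert decrypt(k2, update(c, token(k1, k2))) == 123456789

The server applies the token without learning the plaintext or either key, which is the bandwidth-saving pattern UE formalizes.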
This paper studies the problem of building UE in the group action framework. We introduce a new notion of Mappable Effective Group Action (MEGA) and show that we can build CCA-secure UE from a MEGA by generalizing the SHINE construction of Boyd et al. at Crypto 2020.
Unfortunately, we do not know how to instantiate this new construction in the post-quantum setting. Doing so would solve the open problem of building a CCA-secure post-quantum UE scheme.
Isogeny-based group actions are the most studied post-quantum group actions. Unfortunately, the resulting group actions are not mappable. We show that we can still build UE from isogenies by introducing a new algebraic structure called Effective Triple Orbital Group Action (ETOGA). We prove that UE can be built from an ETOGA and show how to instantiate this abstract structure from isogeny-based group actions. This new construction solves two open problems in ciphertext-independent post-quantum UE.
First, this is the first post-quantum UE scheme that supports an unbounded number of updates. Second, our isogeny-based UE scheme is the first post-quantum UE scheme not based on lattices. The security of this new scheme holds under an extended version of the weak pseudorandomness of the standard isogeny group action.
Pianist: Scalable zkRollups via Fully Distributed Zero-Knowledge Proofs
In the past decade, blockchains have seen various financial and technological innovations, with cryptocurrencies reaching a market cap of over 1 trillion dollars. However, scalability is one of the key issues hindering the deployment of blockchains in many applications. To improve transaction throughput, zkRollup and zkEVM techniques based on the cryptographic primitive of zero-knowledge proofs (ZKPs) have been proposed, and many companies are adopting these technologies in their layer-2 solutions. However, in these technologies, ZKP proof generation is the bottleneck, and companies have to deploy powerful machines with terabytes of memory to batch a large number of transactions into a single ZKP.
In this work, we improve the scalability of these techniques by proposing new schemes of fully distributed ZKPs. Our schemes improve the efficiency and scalability of ZKPs using multiple machines, while keeping the communication among the machines minimal. With our schemes, ZKP generation can be distributed to multiple participants in a model similar to mining pools. Our protocols are based on Plonk, an efficient zero-knowledge proof system with a universal trusted setup. The first protocol is for data-parallel circuits.
For a computation of $M$ sub-circuits of size $T$ each, using $M$ machines, the prover time is $O(T\log T + M \log M)$, while the prover time of the original Plonk on a single machine is $O(MT\log (MT))$. Our protocol incurs only $O(1)$ communication per machine, and the proof size and verifier time are both $O(1)$, the same as the original Plonk. Moreover, we show that with minor modifications, our second protocol can support general circuits with arbitrary connections while preserving the same proving, verifying, and communication complexity. The technique is general and may be of independent interest for other applications of ZKP.
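To make the speedup concrete (a back-of-the-envelope illustration using the stated complexities with ideal constants, not a figure from the paper): with $M = 64$ machines and sub-circuits of size $T = 2^{19}$ (so $MT = 2^{25}$ gates in total), a single Plonk prover performs work on the order of $MT\log(MT) = 2^{25} \cdot 25$, while each distributed prover performs $T\log T + M\log M \approx 2^{19} \cdot 19$, a per-machine reduction by a factor of roughly $M \cdot \log(MT)/\log T \approx 64 \cdot 25/19 \approx 84$ in this idealized count, of the same order as the measured $64\times$ improvement reported below.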
We implement Pianist (Plonk vIA uNlimited dISTribution), a fully distributed ZKP system using our protocols. Pianist can generate the proof for 8192 transactions in 313 seconds on 64 machines. This improves the scalability of the Plonk scheme by 64$\times$. The communication per machine is only 2.1 KB, regardless of the number of machines and the size of the circuit. The proof size is 2.2 KB and the verifier time is 3.5 ms. We further show that Pianist has similar improvements for general circuits. On a randomly generated circuit with $2^{25}$ gates, it only takes 5s to generate the proof using 32 machines, 24.2$\times$ faster than Plonk on a single machine.
Secret Sharing with Certified Deletion
Secret sharing allows a user to split a secret into many shares so that the secret can be recovered if, and only if, an authorized set of shares is collected. Although secret sharing typically does not require any computational hardness assumptions, its security does require that an adversary cannot collect an authorized set of shares. Over long periods of time where an adversary can benefit from multiple data breaches, this may become an unrealistic assumption.
We initiate the systematic study of secret sharing with certified deletion in order to achieve security even against an adversary that eventually collects an authorized set of shares. In secret sharing with certified deletion, a (classical) secret $s$ is split into quantum shares that can be destroyed in a manner verifiable by the dealer.
We put forth two natural definitions of security. No-signaling security roughly requires that if multiple non-communicating adversaries delete sufficiently many shares, then their combined view contains negligible information about $s$, even if the total set of corrupted parties forms an authorized set. Adaptive security requires privacy of $s$ against an adversary that can continuously and adaptively corrupt new shares and delete previously-corrupted shares, as long as the total set of corrupted shares minus deleted shares remains unauthorized.
Next, we show that these security definitions are achievable: we show how to construct (i) a secret sharing scheme with no-signaling certified deletion for any monotone access structure, and (ii) a threshold secret sharing scheme with adaptive certified deletion. Our first construction uses Bartusek and Khurana's (CRYPTO 2023) 2-out-of-2 secret sharing scheme with certified deletion as a building block, while our second construction is built from scratch and requires several new technical ideas. For example, we significantly generalize the ``XOR extractor'' of Agarwal, Bartusek, Khurana, and Kumar (EUROCRYPT 2023) in order to obtain better seedless extraction from certain quantum sources of entropy, and show how polynomial interpolation can double as a high-rate randomness extractor in our context of threshold sharing with certified deletion.
Secure Multiparty Computation in the Presence of Covert Adaptive Adversaries
We design a new MPC protocol for arithmetic circuits secure against erasure-free covert adaptive adversaries with deterrence 1/2. The new protocol has the same asymptotic communication cost, number of PKE operations, and number of exponentiation operations as the most efficient MPC protocol for arithmetic circuits secure against covert static adversaries. That is, the new protocol upgrades covert static security to covert adaptive security almost for free. For MPC problems where the number of parties $n$ is much larger than the number of multiplication gates $M$, the new protocol also asymptotically improves communication complexity over the most efficient MPC protocol for arithmetic circuits secure against erasure-free active adaptive adversaries.
Proof of Stake and Activity: Rewarding On-Chain Activity Through Consensus
We introduce a novel consensus protocol for blockchain, called Proof of Stake and Activity (PoSA), which augments traditional Proof of Stake methods by integrating a unique Proof of Activity system. PoSA offers a compelling economic model that promotes decentralization by rewarding validators based on both their staked capital and the business value they contribute to the chain. The protocol has already been implemented in a fully-fledged blockchain platform called Bahamut (www.bahamut.io), which boasts hundreds of thousands of active users.
Multivariate Blind Signatures Revisited
In 2017, Petzoldt, Szepieniec, and Mohamed proposed a blind signature scheme, based on multivariate cryptography. This construction has been expanded on by several other works. This short paper shows that their construction is susceptible to an efficient polynomial-time attack. The problem is that the authors implicitly assumed that for a random multivariate quadratic map $\mathcal{R}:\mathbb{F}_q^m \rightarrow \mathbb{F}_q^m$ and a collision-resistant hash function $H: \{0,1\}^* \rightarrow \mathbb{F}_q^m$, the function $\mathsf{Com}(m;\mathbf{r}) := H(m) - \mathcal{R}(\mathbf{r})$ is a binding commitment, which is not the case. There is a "folklore" algorithm that can be used to, given any pair of messages, efficiently produce a commitment that opens to both of them. We hope that by pointing out that multivariate quadratic maps are not binding, similar problems can be avoided in the future.
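For context, a plausible instantiation of the folklore attack is the standard polarization trick (a sketch under the assumption that $\mathcal{R}$ is quadratic, not taken from the paper): given messages $m_1, m_2$, producing a commitment that opens to both amounts to finding $\mathbf{r}_1, \mathbf{r}_2$ with $\mathcal{R}(\mathbf{r}_1) - \mathcal{R}(\mathbf{r}_2) = H(m_1) - H(m_2) =: \mathbf{t}$. Setting $\mathbf{r}_1 = \mathbf{u} + \mathbf{v}$ and $\mathbf{r}_2 = \mathbf{u}$ for a fixed random $\mathbf{v} \in \mathbb{F}_q^m$, the map $\mathbf{u} \mapsto \mathcal{R}(\mathbf{u}+\mathbf{v}) - \mathcal{R}(\mathbf{u})$ is affine in $\mathbf{u}$, since all quadratic terms cancel. Hence $\mathcal{R}(\mathbf{u}+\mathbf{v}) - \mathcal{R}(\mathbf{u}) = \mathbf{t}$ is a system of $m$ linear equations in the $m$ coordinates of $\mathbf{u}$, solvable by Gaussian elimination (retrying with a fresh $\mathbf{v}$ if the system is inconsistent), so the commitment is not binding.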
Quasi-Optimal Permutation Ranking and Applications to PERK
A ranking function for permutations maps every permutation of length $n$ to a unique integer between $0$ and $n!-1$. For permutations of the sizes of interest in cryptographic applications, evaluating such a function requires multiple-precision arithmetic. This work introduces a quasi-optimal ranking technique that allows us to rank a permutation efficiently without needing a multiple-precision arithmetic library. We present experiments that show the computational advantage of our method compared to the standard lexicographic optimal permutation ranking. As an application of our result, we show how this technique improves the signature sizes and the efficiency of the PERK digital signature scheme.
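For comparison, the standard lexicographic ranking via the Lehmer code looks as follows (a straightforward quadratic-time Python sketch; the big-integer arithmetic that Python provides natively is exactly what such a ranking needs for large $n$):

    from math import factorial

    def lex_rank(perm):
        """Lexicographic rank of a permutation of 0..n-1 via its Lehmer code."""
        n, rank = len(perm), 0
        for i, p in enumerate(perm):
            smaller = sum(q < p for q in perm[i+1:])   # Lehmer digit
            rank += smaller * factorial(n - 1 - i)     # multi-precision for large n
        return rank                                    # integer in [0, n!-1]

    assert lex_rank([0, 1, 2]) == 0 and lex_rank([2, 1, 0]) == 5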
Communication-Efficient Secure Logistic Regression
We present a novel construction that enables two parties to securely train a logistic regression model on private secret-shared data. Our goal is to minimize online communication and round complexity, while still allowing for an efficient offline phase.
As part of our construction, we develop several building blocks of independent interest. These include a new approximation technique for the sigmoid function that yields a secure protocol with better communication, protocols for securely evaluating powers and splines on fixed-point values, and a new comparison protocol that optimizes online communication. We also present a new two-party protocol for generating keys for distributed point functions (DPFs) over arithmetic sharing, where previous constructions do this only for Boolean outputs.
We implement our protocol in an end-to-end system and benchmark its efficiency. We can securely evaluate a batch of $10^3$ sigmoids with $\approx 0.5$ MB of online communication, $4$ online rounds, and $\approx 1.6$ seconds of online time over WAN. This is $\approx 30\times$ less online communication, $\approx 31\times$ fewer online rounds, and $\approx 5.5\times$ less online time than the well-known MP-SPDZ protocol. Our system can train a logistic regression model over $6$ epochs on a database containing $70,000$ samples and $15$ features with $208.09$ MB of online communication and $9.68$ minutes of online time. We compare our logistic regression training against MP-SPDZ over a synthetic dataset of $1000$ samples and $10$ features and show an improvement of $\approx 130\times$ in online communication and $\approx 4.75\times$ in online time over WAN. In all cases, we converge to virtually the same model as plaintext training. We open-source our system and include extensive tests.
Covert Adaptive Adversary Model: A New Adversary Model for Multiparty Computation
In the covert adversary model, corrupted parties can behave arbitrarily, like active adversaries, but any party that attempts to cheat is guaranteed to be caught by the honest parties with at least a fixed probability, called the deterrence factor. Security-wise, the covert adversary model is stronger than the passive model and weaker than the active model; it is more realistic than the passive model, and protocols for covert adversaries are significantly more efficient than protocols for active adversaries. The covert adversary model is defined only for static corruption, whereas adaptive corruption is more realistic. In this article, we define a new adversary model, the covert adaptive adversary model, by generalizing the covert adversary model to the more realistic adaptive corruption. We prove security relations between the new covert adaptive adversary model and existing adversary models, namely the passive adaptive, active adaptive, and covert static adversary models. We also prove a sequential composition theorem for the new adversary model, which is necessary to allow modular design of protocols in this model.
Modeling Mobile Crash in Byzantine Consensus
Targeted Denial-of-Service (DoS) attacks have been a practical concern for permissionless blockchains. Potential solutions, such as random sampling, are adopted by blockchains. However, the associated security guarantees have only been informally discussed in prior work. This is because existing adversary models either do not fully capture this attack, give up certain design choices (as in the sleepy model or the asynchronous network model), or are too strong to be practical (as in the mobile Byzantine adversary model).
This paper provides theoretical foundations and desired properties for consensus protocols that resist targeted DoS attacks. In particular, we define the Mobile Crash Adaptive Byzantine (MCAB) model to capture such attacks. In addition, we identify and formalize two properties for consensus protocols under the MCAB model, and analyze their trade-offs.
As case studies, we prove that Ouroboros Praos and Algorand are secure in our MCAB model, giving the first formal proofs supporting their security guarantee against targeted DoS attacks, which were previously only informally discussed.
We also illustrate an application of our properties by securing a streamlined BFT protocol, chained HotStuff, against targeted DoS attacks.
Categorization of Faulty Nonce Misuse Resistant Message Authentication
A growing number of lightweight block ciphers are proposed for environments such as the Internet of Things. An important contribution to the reduced implementation cost is a block length $n$ of 64 or 96 bits rather than 128 bits. As a consequence, encryption modes and message authentication code (MAC) algorithms require security beyond the $2^{n/2}$ birthday bound. This paper provides an extensive treatment of MAC algorithms that offer beyond-birthday-bound PRF security for both nonce-respecting and nonce-misusing adversaries. We study constructions that use two block cipher calls, one universal hash function call, and an arbitrary number of XOR operations.
We start with the separate problem of generically identifying all possible secure $n$-to-$n$-bit pseudorandom functions (PRFs) based on two block cipher calls. The analysis shows that the existing constructions EDM, SoP, and EDMD are the only constructions of this kind that achieve beyond-birthday-bound security.
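For reference, these three constructions combine two independent $n$-bit permutations $E_1, E_2$ (e.g., a block cipher under two independent keys) as follows (a schematic Python sketch; inputs and outputs are $n$-bit values encoded as integers):

    # The three beyond-birthday-bound-secure n-to-n-bit PRFs named above.
    def EDM(E1, E2, x):     # Encrypted Davies-Meyer
        return E2(E1(x) ^ x)

    def SoP(E1, E2, x):     # Sum of Permutations
        return E1(x) ^ E2(x)

    def EDMD(E1, E2, x):    # EDM Dual
        return E2(E1(x)) ^ E1(x)

    # Toy 8-bit permutations, purely to make the sketch executable.
    E1 = lambda x: (3 * x + 1) % 256    # affine bijection on Z_256
    E2 = lambda x: (5 * x + 7) % 256
    print(EDM(E1, E2, 42), SoP(E1, E2, 42), EDMD(E1, E2, 42))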
Subsequently, we deliver an exhaustive treatment of MAC algorithms in which the outcome of a universal hash function evaluation on the message may be entered at any point in the computation of the PRF. We conclude that there are a total of nine schemes achieving beyond-birthday-bound security, plus a tenth construction that cannot be proven using currently known proof techniques. Among the nine MAC algorithms, three constructions achieve optimal $n$-bit security in the nonce-respecting setting but are completely insecure if the nonce is reused. The remaining six constructions have $3n/4$-bit security in the nonce-respecting setting, and only four of these six still achieve beyond-birthday-bound security in the case of nonce misuse.
Let Attackers Program Ideal Models: Modularity and Composability for Adaptive Compromise
We show that the adaptive compromise security definitions of Jaeger and Tyagi (Crypto '20) cannot be applied in several natural use cases, including proving multi-user security from single-user security, the security of the cascade PRF, and the security of schemes sharing the same ideal primitive. We provide new variants of the definitions and show that they resolve these composition issues. Extending the definitions to the asymmetric setting, we establish the security of the modular KEM/DEM and Fujisaki-Okamoto approaches to public-key encryption in the full adaptive compromise setting. This allows instantiations that are more efficient and standard than prior constructions.
Hardness of Range Avoidance and Remote Point for Restricted Circuits via Cryptography
A recent line of research has introduced a systematic approach to explore the complexity of explicit construction problems through the use of meta problems, namely, the range avoidance problem (abbrev. $\textsf{Avoid}$) and the remote point problem (abbrev. $\textsf{RPP}$). The upper and lower bounds for these meta problems provide a unified perspective on the complexity of specific explicit construction problems that were previously studied independently. An interesting question largely unaddressed by previous works is whether $\textsf{Avoid}$ and $\textsf{RPP}$ are hard for simple circuits such as low-depth circuits.
In this paper, we demonstrate, under plausible cryptographic assumptions, that neither the range avoidance problem nor the remote point problem can be efficiently solved by nondeterministic search algorithms, even when the input circuits are as simple as constant-depth circuits. This extends a hardness result established by Ilango, Li, and Williams (STOC '23) against deterministic algorithms, which employs witness encryption for $\textsf{NP}$ and applies when the inputs to $\textsf{Avoid}$ are general Boolean circuits.
Our primary technical contribution is a novel construction of witness encryption, inspired by public-key encryption, for a certain promise language in $\textsf{NP}$ that is unlikely to be $\textsf{NP}$-complete. We introduce a generic approach that transforms a public-key encryption scheme with particular properties into a witness encryption scheme for a promise language related to the initial public-key encryption scheme. Based on this translation and variants of standard lattice-based or coding-based PKE schemes, we obtain, under plausible assumptions, a provably secure witness encryption scheme for some promise language in $\textsf{NP}\setminus \textsf{coNP}_{/\textsf{poly}}$. Additionally, we show that our constructions of witness encryption are plausibly secure against nondeterministic adversaries under a generalized notion of security in the spirit of Rudich's super-bits (RANDOM '97), which is crucial for demonstrating the hardness of $\textsf{Avoid}$ and $\textsf{RPP}$ against nondeterministic algorithms.
Challenger: Blockchain-based Massively Multiplayer Online Game Architecture
We propose Challenger, a peer-to-peer, blockchain-based middleware architecture for narrative games, and discuss its resilience to cheating attacks. Our architecture orchestrates nine services in a fully decentralized manner, where nodes are aware of neither the entire composition of the system nor its size. All these components are orchestrated together to obtain (strong) resilience to cheaters.
The main contribution of the paper is to provide, for the first time, a blockchain-agnostic architecture for narrative games that brings together several distinct research areas, namely distributed ledgers, peer-to-peer networks, multiplayer online games, and resilience to attacks.
Multi User Security of LightMAC and LightMAC_Plus
In FSE'16, Luykx et al. proposed $\textsf{LightMAC}$, which provably achieves a query-length-independent PRF security bound. To be precise, the construction achieves security roughly on the order of $O(q^2/2^n)$ when instantiated with two independently keyed $n$-bit block ciphers, where $q$ is the total number of queries made by the adversary. Subsequently, in ASIACRYPT'17, Naito proposed a beyond-birthday-bound variant of the $\textsf{LightMAC}$ construction, dubbed $\textsf{LightMAC_Plus}$, that is built on three independently keyed $n$-bit block ciphers and achieves $2n/3$-bit PRF security. Security analyses of these two constructions have been conducted in the single-user setting, where the adversary has access to a single instance of the construction. In this paper, we investigate, for the first time, the security of the $\textsf{LightMAC}$ and $\textsf{LightMAC_Plus}$ constructions in the multi-user setting, where the adversary has access to more than one instance of the construction. In particular, we show that, in the ideal-cipher model, $\textsf{LightMAC}$ remains secure roughly up to $2^{n/2}$ construction queries and $2^k$ ideal-cipher queries, and $\textsf{LightMAC_Plus}$ maintains security up to approximately $2^{2n/3}$ construction queries and $2^{2k/3}$ ideal-cipher queries, where $n$ denotes the block size and $k$ denotes the key size of the block cipher.
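For reference, the single-user LightMAC construction that both results build on can be sketched as follows (a schematic byte-level Python version with illustrative parameters $n = 128$, $s = 32$; details such as the handling of maximal-length final blocks are elided):

    # Schematic LightMAC: counter-prefixed chunks under K1, XOR-summed and
    # encrypted under K2. E(key, block) is any n-bit block cipher.
    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def lightmac(E, k1, k2, msg, n_bytes=16, s_bytes=4):
        chunk = n_bytes - s_bytes                     # (n-s)-bit message chunks
        blocks = [msg[i:i+chunk] for i in range(0, len(msg), chunk)] or [b""]
        *full, last = blocks
        acc = bytes(n_bytes)
        for i, m_i in enumerate(full, start=1):
            ctr = i.to_bytes(s_bytes, "big")          # s-bit block counter
            acc = xor(acc, E(k1, ctr + m_i))          # E_{K1}(<i>_s || M_i)
        pad = last + b"\x80" + bytes(n_bytes - len(last) - 1)   # 10* padding
        return E(k2, xor(acc, pad))                   # final call under K2

    # e.g., with pycryptodome:  from Crypto.Cipher import AES
    # E = lambda k, b: AES.new(k, AES.MODE_ECB).encrypt(b)
    # tag = lightmac(E, b"K1" * 8, b"K2" * 8, b"message bytes")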
Massive Superpoly Recovery with a Meet-in-the-middle Framework -- Improved Cube Attacks on Trivium and Kreyvium
The cube attack extracts information about secret key bits by recovering the coefficient, called the superpoly, of the output bit with respect to a subset of plaintext/IV bits, called a cube. While the division property provides an efficient way to detect the structure of the superpoly, superpoly recovery can still be prohibitively costly if the number of rounds is sufficiently high. In particular, Core Monomial Prediction (CMP) was proposed at ASIACRYPT 2022 as a scaled-down version of Monomial Prediction (MP); it sacrifices accuracy for efficiency but ultimately gets stuck at 848 rounds of Trivium.
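The identity underlying the attack is easy to state and check: XORing the output bit over all assignments of the cube variables yields the superpoly evaluated on the remaining variables. A toy Python sketch (with a hypothetical output function f over key and IV bits):

    from itertools import product

    def cube_sum(f, key, iv, cube):
        """XOR f(key, iv) over all 2^|cube| assignments of the cube IV bits."""
        acc = 0
        for bits in product([0, 1], repeat=len(cube)):
            v = list(iv)
            for idx, b in zip(cube, bits):
                v[idx] = b
            acc ^= f(key, v)
        return acc   # = the cube's superpoly, evaluated at this key

    # Toy check: f = k0*v0*v1 ^ k1*v0 ^ v2 has superpoly k0 for cube {v0, v1}.
    f = lambda k, v: (k[0] & v[0] & v[1]) ^ (k[1] & v[0]) ^ v[2]
    for k0, k1 in product([0, 1], repeat=2):
        assert cube_sum(f, [k0, k1], [0, 0, 0], [0, 1]) == k0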
In this paper, we provide new insights into CMP by elucidating the algebraic meaning of core monomial trails. We prove that extracting all core monomial trails suffices to recover the superpoly, an approach based solely on CMP, thus demonstrating that CMP can achieve perfect accuracy just as MP does. We further reveal that CMP is still MP in essence, but with variable substitutions on the target function. Inspired by the divide-and-conquer strategy widely used in previous literature, we design a meet-in-the-middle (MITM) framework in which the CMP-based approach can be embedded to achieve a speedup.
To illustrate the power of these new techniques, we apply the MITM framework to Trivium, Grain-128AEAD, and Kreyvium. As a result, not only can the previous computational cost of superpoly recovery be reduced (e.g., 5$\times$ faster for superpoly recovery on 192-round Grain-128AEAD), but we also succeed in recovering superpolies for up to 851 rounds of Trivium and up to 899 rounds of Kreyvium, surpassing the previous best results by 3 and 4 rounds, respectively. Using the memory-efficient Möbius transform proposed at EUROCRYPT 2021, we can perform key-recovery attacks on the target ciphers even though the superpoly may contain over $2^{40}$ monomials. This leads to the best cube attacks on the target ciphers.
Efficient Hardware Implementation for Maiorana-McFarland type Functions
Maiorana--McFarland type constructions essentially concatenate the truth tables of linear functions in a smaller number of variables to obtain highly nonlinear functions on larger inputs. Such functions and their variants have significant applications in cryptology and coding theory. The straightforward hardware implementation of such functions using decoders (Khairallah et al., WAIFI 2018; Tang et al., SIAM Journal on Discrete Mathematics, 2019) requires resources exponential in the number of inputs. In this paper, we study such constructions in detail and provide implementation strategies for a selected subset of this class using polynomially many gates in the number of inputs. We demonstrate that such implementations cover the requirements of cryptographic primitives to a great extent. Several existing constructions are revisited in this direction, and exact implementations are provided with specific depth and gate counts for hardware implementation. Related combinatorial results of a theoretical nature are also analyzed. Finally, we present a novel construction of a new class of balanced Boolean functions with very low absolute indicators and very high nonlinearity that can be implemented in circuits of size polynomial in the number of inputs. We underline that these constructions have immediate applications in resisting signature generation in Differential Fault Attacks (DFA) and in implementing functions on a large number of variables for ciphers designed for the Fully Homomorphic Encryption (FHE) paradigm.
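To make the construction concrete, the classical Maiorana--McFarland function $f(x, y) = \langle x, \phi(y)\rangle \oplus g(y)$ on $\mathbb{F}_2^r \times \mathbb{F}_2^s$ can be sketched as follows (a minimal Python sketch with inputs encoded as integer bit masks; with $r = s$, $\phi$ a bijection, and arbitrary $g$, the result is bent):

    # Maiorana-McFarland: for each fixed y, the restriction x -> <x, phi(y)>
    # is linear, so the truth table concatenates 2^s linear-function tables.
    def mm_function(phi, g):
        def f(x, y):
            inner = bin(x & phi(y)).count("1") & 1   # <x, phi(y)> over F_2
            return inner ^ g(y)
        return f

    # Toy bent function on 4 variables (r = s = 2): phi = identity, g = 0.
    f = mm_function(phi=lambda y: y, g=lambda y: 0)
    table = [f(x, y) for y in range(4) for x in range(4)]  # concatenated tables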
Orca: FSS-based Secure Training and Inference with GPUs
Secure Two-party Computation (2PC) allows two parties to compute any function on their private inputs without revealing their inputs to each other. In the offline/online model for 2PC, correlated randomness, which is independent of all inputs to the computation, is generated in a preprocessing (offline) phase, and this randomness is then utilized in the online phase once the inputs to the parties become available. Most 2PC works focus on optimizing the online time, as this overhead lies on the critical path. A recent paradigm for obtaining efficient 2PC protocols with low online cost is based on the cryptographic technique of function secret sharing (FSS).
We build an end-to-end system, ORCA, to accelerate the computation of FSS-based 2PC protocols with GPUs. We observe that the main performance bottleneck in such accelerated protocols is storage (due to the large amount of correlated randomness), and we design new FSS-based 2PC protocols for several key functionalities in ML that reduce storage by up to 5$\times$. Compared to the prior state of the art on secure training accelerated with GPUs in the same computation model (PIRANHA, Usenix Security 2022), ORCA has 4% higher accuracy, 98$\times$ less communication, and is 26$\times$ faster on CIFAR-10. Moreover, maintaining training accuracy with fixed-point arithmetic requires stochastic truncations, and all prior works on secure fixed-point training (including PIRANHA) use insecure protocols for them. We provide the first secure protocol for stochastic truncations and build on it to provide the first evaluation of training with end-to-end security. For secure ImageNet inference, ORCA achieves sub-second latency for VGG-16 and ResNet-50, and outperforms the state of the art by 8-103$\times$.
Shorter VOLEitH Signature from Multivariate Quadratic
The VOLE-in-the-Head paradigm, recently introduced by Baum et al. (Crypto 2023), is a compiler that uses SoftSpokenOT (Crypto 2022) to transform any VOLE-based designated-verifier zero-knowledge protocol into a publicly verifiable one. Combined with the Fiat-Shamir transformation, it yields FAEST (faest.info), a new digital signature scheme that outperforms all MPC-in-the-Head signatures.
We propose a new candidate post-quantum signature scheme based on the Multivariate Quadratic (MQ) problem in the VOLE-in-the-Head framework, which significantly reduces the signature size compared to previous works. We achieve signature sizes ranging from 3.5 KB to 6 KB at the 128-bit security level. Compared to state-of-the-art MQ-based signature schemes and existing VOLE-in-the-Head signatures, our scheme achieves the smallest signature size (1.5 to 2 times smaller than MQ-based schemes) while keeping computational efficiency competitive.