
Last updated:  2024-05-26
DVA: Dangerous Variations of ALTEQ
Arnaud Sipasseuth
In this paper, we present three types of variations of the ALTEQ cryptosystem, a recent submission to NIST's additional call for signatures. We name these Dangerous Variations of ALTEQ (DVA), as there is always a certain danger in stepping out of usual constructions, although we attempt to maintain heuristic security. First, we present DVA-GG (Graph Generalization), which can be seen as a more abstract point of view on the operations done in ALTEQ and encourages more research on the algebraic variants. In particular, we show this approach can lead to a patch countering Beullens' recent seed collision attack on ALTEQ that depends only on the primitive, and showcase some novel uses of the primitive for experimental protocols. Second, we present DVA-PC (Precomputations), which is ``likely'' as secure as ALTEQ in the random oracle model, and drastically reduces the intermediate memory requirements within both the signature and verification process through an easily parallelizable extra operation. In particular, this facilitates precomputation variants with online phases that depend only on the complexity of basic matrix operations. We can then choose between either a tiny offline memory per signature or one of the fastest online signing speeds in post-quantum cryptography. Third, we present DVA-DM (Distinct Matrices), some cryptanalytic targets that deviate from ALTEQ's original algebraic structure. These structures can serve for plain computational acceleration or simply to compress data sizes, and provide good options to motivate the study of specialized cryptanalysis for ALTEQ: if they are safe, then ALTEQ gains safe variants; otherwise, we gain further understanding of the problems. In particular, the ideas apply beyond ALTEQ and hopefully extend to MEDS, LESS, and group-action-based cryptography.
Last updated:  2024-05-26
Zero-knowledge IOPs Approaching Witness Length
Noga Ron-Zewi and Mor Weiss
Interactive Oracle Proofs (IOPs) allow a probabilistic verifier interacting with a prover to verify the validity of an NP statement while reading only a few bits from the prover messages. IOPs generalize standard Probabilistically-Checkable Proofs (PCPs) to the interactive setting, and in the few years since their introduction have already exhibited major improvements in main parameters of interest (such as the proof length and prover and verifier running times), which in turn led to significant improvements in constructions of succinct arguments. Zero-Knowledge (ZK) IOPs additionally guarantee that the view of any query-bounded (possibly malicious) verifier can be efficiently simulated. ZK-IOPs are the main building block of succinct ZK arguments which use the underlying cryptographic object (e.g., a collision-resistant hash function) as a black box. In this work, we construct the first ZK-IOPs approaching the witness length for a natural NP problem. More specifically, we design constant-query and constant-round IOPs for 3SAT in which the total communication is $(1+\gamma)m$, where $m$ is the number of variables and $\gamma>0$ is an arbitrarily small constant, and ZK holds against verifiers querying $m^\beta$ bits from the prover's messages, for a constant $\beta>0$. This gives a ZK variant of a recent result of Ron-Zewi and Rothblum (FOCS '20), who construct (non-ZK) IOPs approaching the witness length for a large class of NP languages. Previous constructions of ZK-IOPs incurred an (unspecified) large constant multiplicative overhead in the proof length, even when restricting to ZK against the honest verifier. We obtain our ZK-IOPs by improving the two main building blocks underlying most ZK-IOP constructions, namely ZK codes and ZK-IOPs for sumcheck. More specifically, we give the first ZK-IOPs for sumcheck that achieve both sublinear communication for sumchecking a general tensor code and a ZK guarantee. We also show a strong ZK preservation property for tensors of ZK codes, which extends a recent result of Bootle, Chiesa, and Liu (EC '22). Given the central role of these objects in designing ZK-IOPs, these results might be of independent interest.
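For readers unfamiliar with sumcheck, a minimal (non-ZK, non-succinct) Python sketch of the round-by-round claim reduction, with a hypothetical multilinear polynomial standing in for the paper's tensor-code setting:

```python
# Minimal (non-ZK) sumcheck sketch for a multilinear polynomial g over F_P.
# The prover convinces the verifier that the sum of g over {0,1}^n is `claim`.
# Hypothetical example polynomial; not the construction from the paper.
import random

P = 2**61 - 1  # a prime field modulus

def g(x):  # multilinear example: g(x1, x2, x3) = x1*x2 + x3 (mod P)
    return (x[0] * x[1] + x[2]) % P

N_VARS = 3

def partial_sum(prefix, xi):
    # Sum g(prefix, xi, b) over all Boolean suffixes b.
    i = len(prefix)
    total = 0
    for suffix in range(2 ** (N_VARS - i - 1)):
        bits = [(suffix >> j) & 1 for j in range(N_VARS - i - 1)]
        total += g(prefix + [xi] + bits)
    return total % P

def sumcheck():
    claim = sum(g([(m >> j) & 1 for j in range(N_VARS)])
                for m in range(2 ** N_VARS)) % P  # prover's claimed sum
    prefix = []
    for _ in range(N_VARS):
        g0, g1 = partial_sum(prefix, 0), partial_sum(prefix, 1)  # prover msg
        assert (g0 + g1) % P == claim     # verifier's consistency check
        r = random.randrange(P)           # verifier's random challenge
        claim = (g0 + (g1 - g0) * r) % P  # linear interpolation at r
        prefix.append(r)
    assert g(prefix) == claim  # final check: a single evaluation of g
    return True

print(sumcheck())
```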
Last updated:  2024-05-26
Faster verifications and smaller signatures: Trade-offs for ALTEQ using rejections
Arnaud Sipasseuth
In this paper, we introduce a new probability function parameter in the instantiations of the Goldreich-Micali-Wigderson protocol with Fiat-Shamir and unbalanced challenges used in ALTEQ, a recent NIST PQC candidate in the call for additional signatures. Set at 100%, this probability does not change the scheme; below 100%, it modifies the public challenge generation process by injecting potential rejections into otherwise completely valid inputs. From a theoretical point of view, this does not improve the asymptotic hardness of the scheme, negatively affects the efficiency of the signatory, and might itself seem trivial. However, from a practical point of view, implementation-wise and performance-wise, this triviality allows an extra degree of freedom in optimizing parameters, as the heuristic security level against forgers is also increased: previously valid combinations can now be deemed invalid. This allows us to make trade-offs that reduce the computational load on verifiers, accelerating verification and marginally reducing the signature size, at the cost of making signing slower and unlikely to be constant-time. In particular, this extra degree of freedom allows implementation choices that enable smoother and faster executions of the aforementioned protocols, especially in the context of parallelization using vectorized instructions. We also demonstrate the usefulness of our proposal to ALTEQ for other options, when slowing down the signing process is not an issue: significantly smaller signatures but longer verification, or lower public key sizes. The ideas presented apply to any primitive and can be used beyond ALTEQ.
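To illustrate the general idea of rejections in challenge generation (a generic Fiat-Shamir sketch with hypothetical parameters and hash layout, not ALTEQ's concrete instantiation):

```python
# Generic Fiat-Shamir challenge derivation with injected rejections.
# Hypothetical acceptance rate and hash layout; illustrates the idea
# only, not ALTEQ's actual scheme.
import hashlib, os

ACCEPT_NUM, ACCEPT_DEN = 3, 4  # acceptance probability 75% (100% = unchanged)

def challenge_accepted(digest: bytes) -> bool:
    # Deterministic predicate on the digest, so signer and verifier
    # agree on which otherwise-valid challenges are rejected.
    return int.from_bytes(digest[:4], "big") % ACCEPT_DEN < ACCEPT_NUM

def derive_challenge(commitment: bytes, message: bytes):
    # The signer retries with fresh salts until acceptance; a forger
    # must also land in the shrunken space of accepted challenges.
    while True:
        salt = os.urandom(16)
        digest = hashlib.sha3_256(salt + commitment + message).digest()
        if challenge_accepted(digest):
            return salt, digest  # the salt travels with the signature

salt, chal = derive_challenge(b"commitment-bytes", b"message")
print(chal.hex())
```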
Last updated:  2024-05-24
Succinct Homomorphic Secret Sharing
Damiano Abram, Lawrence Roy, and Peter Scholl
This work introduces homomorphic secret sharing (HSS) with succinct share size. In HSS, private inputs are shared between parties, who can then homomorphically evaluate a function on their shares, obtaining a share of the function output. In succinct HSS, a portion of the inputs can be distributed using shares whose size is sublinear in the number of such inputs. The parties can then locally evaluate a function $f$ on the shares, with the restriction that $f$ must be linear in the succinctly shared inputs. We construct succinct, two-party HSS for branching programs, based on either the decisional composite residuosity assumption, a DDH-like assumption in class groups, or learning with errors with a superpolynomial modulus-to-noise ratio. We then give several applications of succinct HSS, which were only previously known using fully homomorphic encryption, or stronger tools: - Succinct vector oblivious linear evaluation (VOLE): Two parties can obtain secret shares of a long, arbitrary vector $\vec x$, multiplied by a scalar $\Delta$, with communication sublinear in the size of the vector. - Batch, multi-party distributed point functions: A protocol for distributing a batch of secret, random point functions among $N$ parties, for any polynomial $N$, with communication sublinear in the number of DPFs. - Sublinear MPC for any number of parties: Two new constructions of MPC with sublinear communication complexity, with $N$ parties for any polynomial $N$: (1) For general layered Boolean circuits of size $s$, with communication $O(N s/\log\log s)$, and (2) For layered, sufficiently wide Boolean circuits, with communication $O(N s/\log s)$.
Last updated:  2024-11-18
How to Redact the Bitcoin Backbone Protocol
Mehmet Sabir Kiraz, Enrique Larraia, and Owen Vaughan
We explain how to extend the Bitcoin backbone model of Garay et al. (Eurocrypt, 2015) to accommodate redactable blockchains. Our extension captures fluid blockchain-based databases (with mutability requirements) and compliance with existing legislation, such as the GDPR right to be forgotten, or the need to erase offending data from nodes' databases that would otherwise provoke legal shutdowns. Our redactable backbone protocol retains the essential properties of blockchains. Leveraging zero-knowledge proofs, old data can be erased without requiring trusted third parties or heuristics about past chain validation. Our solution can be implemented on Bitcoin immediately without hard-forks, and it is scalable. It allows the redaction of data from UTXOs or unconfirmed transactions that have not yet flooded the network, while guaranteeing invariance of the Bitcoin state. Thus, offending data does not need to persist in the system, not even temporarily.
Last updated:  2025-04-02
Relations among new CCA security notions for approximate FHE
Chris Brzuska, Sébastien Canard, Caroline Fontaine, Duong Hieu Phan, David Pointcheval, Marc Renard, and Renaud Sirdey
In a recent Eurocrypt'24 paper, Manulis and Nguyen proposed a new CCA security notion, vCCA, and associated construction blueprints to leverage both CPA-secure and correct FHE beyond the CCA1 security barrier. However, because their approach is only valid under the correctness assumption, it leaves a large part of the FHE spectrum uncovered, as many FHE schemes used in practice turn out to be approximate and, as such, do not satisfy the correctness assumption. In this paper, we improve on their work by defining and investigating a variant of their security notion suitable for the more general case where approximate FHE schemes are included. As the passive security of approximate FHE schemes is more appropriately captured by CPAD rather than CPA security, we start from the former notion to define our new vCCAD security notion. Although we show that vCCA and vCCAD are equivalent when the correctness assumption holds, we establish that vCCAD security is strictly stronger than vCCA security in the general case. In doing so, we interestingly establish several new separation results between variants of CPAD security of increasing strength. This allows us to clarify the relationship between vCCA security and CPAD security, and to reveal that the security-notions landscape is much simpler for correct FHE than when approximate schemes are included --- in which case, for example, we establish that multiple-challenge security notions are strictly stronger than single-challenge ones for both CPAD and vCCAD security. Lastly, we also give concrete construction blueprints, showing how to leverage some of the blueprints proposed by Manulis and Nguyen to achieve vCCAD security. As a result, vCCAD security is the strongest CCA security notion so far known to be achievable by both correct and approximate FHE schemes.
Last updated:  2025-02-27
Traceable Secret Sharing Based on the Chinese Remainder Theorem
Charlotte Hoffmann
Traceable threshold secret sharing schemes, introduced by Goyal, Song and Srinivasan (CRYPTO'21), make it possible to provably trace leaked shares to the parties that leaked them. The authors give the first definition and construction of traceable secret sharing schemes. However, the size of the shares in their construction is quadratic in the size of the secret. Boneh, Partap and Rotem (CRYPTO'24) recently proposed a new definition of traceable secret sharing and the first practical constructions. In their definition, one considers a reconstruction box $R$ that contains $f$ leaked shares and, on input $t-f$ additional shares, outputs the secret $s$. A scheme is traceable if one can identify the leaked shares inside the box $R$ given only black-box access to $R$ and a private tracing key. Boneh, Partap and Rotem give constructions from Shamir's secret sharing and Blakley's secret sharing. The constructions are efficient, as the size of the secret shares is only twice the size of the secret. In this work we present the first traceable secret sharing scheme based on the Chinese remainder theorem. This was stated as an open problem by Boneh, Partap and Rotem, as it gives rise to traceable secret sharing with weighted threshold access structures. The scheme is based on Mignotte's secret sharing and increases the size of the shares of the standard Mignotte secret sharing scheme only by a factor of $2$. We also introduce a new definition of semi-public secret sharing that does not require a private tracing key and give a construction based on the Chinese remainder theorem.
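For background, a toy sketch of plain (non-traceable) Mignotte secret sharing, the CRT-based scheme the construction builds on; the moduli and threshold below are illustrative toy values, not parameters from the paper:

```python
# Toy Mignotte (t, n) secret sharing via the Chinese remainder theorem.
# Shares are s mod m_i for pairwise-coprime moduli; any t shares recover s.
# Toy parameters for illustration only.
from functools import reduce

MODULI = [101, 103, 107, 109, 113]  # pairwise coprime, increasing
T = 3                               # threshold
# Mignotte condition: prod of T smallest > secret > prod of T-1 largest
LOW, HIGH = 109 * 113, 101 * 103 * 107
assert LOW < HIGH

def share(secret):
    assert LOW < secret < HIGH
    return [(m, secret % m) for m in MODULI]

def reconstruct(shares):  # CRT on any >= T shares
    def crt(a, b):
        (r1, m1), (r2, m2) = a, b
        k = (r2 - r1) * pow(m1, -1, m2) % m2   # lift r1 to match r2 mod m2
        return ((r1 + m1 * k) % (m1 * m2), m1 * m2)
    r, _ = reduce(crt, [(r, m) for m, r in shares])
    return r

s = 23456
shs = share(s)
assert reconstruct(shs[:T]) == s      # any T shares suffice
print(reconstruct(shs[1:1 + T]))
```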
Last updated:  2024-05-24
The Perils of Limited Key Reuse: Adaptive and Parallel Mismatch Attacks with Post-processing Against Kyber
Qian Guo, Erik Mårtensson, and Adrian Åström
In this paper, we study the robustness of Kyber, the Learning With Errors (LWE)-based Key Encapsulation Mechanism (KEM) chosen for standardization by NIST, against key mismatch attacks. We demonstrate that Kyber's security levels can be compromised with a few mismatch queries by striking a balance between the parallelization level and the cost of lattice reduction for post-processing. This highlights the imperative need to strictly prohibit key reuse in CPA-secure Kyber. We further propose an adaptive method to enhance parallel mismatch attacks, initially proposed by Shao et al. at AsiaCCS 2024, thereby significantly reducing query complexity. This method combines the adaptive attack with post-processing via lattice reduction to retrieve the final secret key entries. Our method proves its efficacy by reducing query complexity by 14.6% for Kyber512 and 7.5% for Kyber768/Kyber1024. Furthermore, this approach has the potential to substantially improve multi-value Plaintext-Checking (PC) oracle-based side-channel attacks against the CCA-secure version of Kyber KEM.
Last updated:  2024-05-24
Reducing Overdefined Systems of Polynomial Equations Derived from Small Scale Variants of the AES via Data Mining Methods
Jana Berušková, Martin Jureček, and Olha Jurečková
This paper deals with reducing the secret key computation time of small scale variants of the AES cipher using algebraic cryptanalysis, which is accelerated by data mining methods. This work is based on the known plaintext attack and aims to speed up the calculation of the secret key by processing the polynomial equations extracted from plaintext-ciphertext pairs. Specifically, we propose to transform the overdefined system of polynomial equations over GF(2) into a new system so that the computation of the Gröbner basis using the F4 algorithm takes less time than in the case of the original system. The main idea is to group similar polynomials into clusters, and for each cluster, sum the two most similar polynomials, resulting in simpler polynomials. We compare different data mining techniques for finding similar polynomials, such as clustering or locality-sensitive hashing (LSH). Experimental results show that using the LSH technique, we get a system of equations for which the Gröbner basis can be computed fastest among the methods considered in this work. Experimental results also show that the time to calculate the Gröbner basis for the transformed system of equations is significantly reduced compared to calculating it from the original, non-transformed system. This paper demonstrates that reducing an overdefined system of equations reduces the computation time for finding a secret key.
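A minimal sketch of the summing step (GF(2) polynomials as sets of monomials, with exhaustive pairwise similarity standing in for the paper's clustering/LSH machinery):

```python
# Sketch: sum (XOR) the two most similar GF(2) polynomials to obtain a
# simpler one. Polynomials are frozensets of monomials; addition over
# GF(2) is symmetric difference, so similar polynomials cancel terms.
# Exhaustive pairing here stands in for the paper's clustering/LSH.
from itertools import combinations

def similarity(p, q):  # Jaccard similarity of monomial sets
    return len(p & q) / len(p | q)

def reduce_system(polys):
    i, j = max(combinations(range(len(polys)), 2),
               key=lambda ij: similarity(polys[ij[0]], polys[ij[1]]))
    summed = polys[i] ^ polys[j]  # XOR of coefficients = symmetric difference
    rest = [p for k, p in enumerate(polys) if k not in (i, j)]
    return rest + [polys[i], summed]  # keep one original plus the sum

system = [frozenset({"x1x2", "x2x3", "x1", "1"}),
          frozenset({"x1x2", "x2x3", "x3"}),
          frozenset({"x1x3", "x2"})]
for p in reduce_system(system):
    print(sorted(p))
```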
Last updated:  2024-05-24
Arma: Byzantine Fault Tolerant Consensus with Horizontal Scalability
Yacov Manevich, Hagar Meir, Kaoutar Elkhiyaoui, Yoav Tock, and May Buzaglo
Arma is a Byzantine Fault Tolerant (BFT) consensus system designed to achieve horizontal scalability across all hardware resources: network bandwidth, CPU, and disk I/O. As opposed to preceding BFT protocols, Arma separates the dissemination and validation of client transactions from the consensus process, restricting the latter to totally ordering only metadata of batches of transactions. This separation enables each party to distribute compute and storage resources for transaction validation, dissemination and disk I/O among multiple machines, resulting in horizontal scalability. Additionally, Arma ensures censorship resistance by imposing a maximum time limit on the inclusion of client transactions. We built and evaluated two Arma prototypes. The first is a standalone system handling over 200,000 transactions per second; the second is integrated into Hyperledger Fabric, speeding up its consensus by an order of magnitude.
Last updated:  2024-10-11
Consensus in the Presence of Overlapping Faults and Total Omission
Julian Loss, Kecheng Shi, and Gilad Stern
Understanding the fault tolerance of Byzantine Agreement protocols is an important question in distributed computing. While the setting of Byzantine faults has been thoroughly explored in the literature, the (arguably more realistic) omission fault setting is far less studied. In this paper, we revisit the recent work of Loss and Stern, who gave the first protocol in the mixed fault model tolerating $t$ Byzantine faults, $s$ send faults, and $r$ receive faults, when $2t+r+s<n$ and omission faults do not overlap. We observe that their protocol makes no guarantees when omission faults can overlap, i.e., when parties can simultaneously have send and receive faults. We give the first protocol that overcomes this limitation and tolerates the same number of potentially overlapping faults. We then study, for the first time, the total omission setting where all parties can become omission faulty. This setting is motivated by real-world scenarios where every party may experience connectivity issues from time to time, yet agreement should still hold for the parties who manage to output values. We show the first agreement protocol in this setting with parameters $s<n$ and $s+r=n$. On the other hand, we prove that there is no consensus protocol for the total omission setting which tolerates even a single overlapping omission fault, i.e., where $s+r=n+1$ and $s>2$, or a broadcast protocol for $s+r=n$ and $s>1$ even without overlapping faults.
Last updated:  2024-05-24
Resettable Statistical Zero-Knowledge for NP
Susumu Kiyoshima
Resettable statistical zero-knowledge [Garg--Ostrovsky--Visconti--Wadia, TCC 2012] is a strong privacy notion that guarantees statistical zero-knowledge even when the prover uses the same randomness in multiple proofs. In this paper, we show an equivalence of resettable statistical zero-knowledge arguments for $NP$ and witness encryption schemes for $NP$. - Positive result: For any $NP$ language $L$, a resettable statistical zero-knowledge argument for $L$ can be constructed from a witness encryption scheme for $L$ under the assumption of the existence of one-way functions. - Negative result: The existence of even resettable statistical witness-indistinguishable arguments for $NP$ implies the existence of witness encryption schemes for $NP$ under the assumption of the existence of one-way functions. The positive result is obtained by naturally extending existing techniques (and is likely to be already well-known among experts). The negative result is our main technical contribution. To explore workarounds for the negative result, we also consider resettable security in a model where the honest party's randomness is only reused with fixed inputs. We show that resettable statistically hiding commitment schemes are impossible even in this model.
Last updated:  2024-05-24
DiTRU: A Resurrection of NTRU over Dihedral Group
Ali Raya, Vikas Kumar, and Sugata Gangopadhyay
NTRU-like cryptosystems are among the most studied lattice-based post-quantum candidates. While most NTRU proposals have been introduced over a commutative ring of quotient polynomials, other rings can be used. Noncommutative algebra was endorsed long ago as a direction for building new variants of NTRU. The first attempt to construct a noncommutative variant was due to Hoffstein and Silverman, motivated by increased resistance to lattice attacks. Their scheme was built over the group ring of a dihedral group. However, its design differed from standard NTRU and was soon found vulnerable to algebraic attacks. In this work, we revive the group ring NTRU over the dihedral group as an instance of the GR-NTRU framework. Unlike many proposals of noncommutative variants in the literature, our work focuses on putting the scheme into practice. We address all the aspects that make our scheme implementable by proposing an efficient inversion algorithm over the new setting of the noncommutative ring, describing the decryption failure model, and analyzing the lattice associated with our instantiation. Finally, we discuss the best-known attacks against our scheme and provide an implementation targeting 128-bit, 192-bit, and 256-bit levels of security as proof of its practicality.
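For concreteness, a toy sketch of arithmetic in a dihedral group ring $\mathbb{Z}_q[D_N]$, the algebraic setting of group ring NTRU over dihedral groups (textbook convolution only, not the paper's optimized inversion algorithm; parameters are illustrative):

```python
# Sketch: multiplication in the group ring Z_q[D_N] for the dihedral
# group D_N = {r^a s^e : 0 <= a < N, e in {0,1}}, with relation
# s*r = r^(-1)*s. Ring elements are dicts mapping group elements (a, e)
# to coefficients mod q. Generic textbook arithmetic, toy parameters.
N, Q = 8, 2048

def group_mul(g, h):
    (a, e), (b, f) = g, h
    # (r^a s^e)(r^b s^f) = r^(a + (-1)^e * b) s^(e + f)
    return ((a + (-1) ** e * b) % N, (e + f) % 2)

def ring_mul(x, y):
    out = {}
    for g, cg in x.items():
        for h, ch in y.items():
            k = group_mul(g, h)
            out[k] = (out.get(k, 0) + cg * ch) % Q
    return {k: c for k, c in out.items() if c}

one = {(0, 0): 1}                   # identity of the group ring
x = {(1, 0): 3, (2, 1): 5}          # 3*r + 5*r^2*s
assert ring_mul(x, one) == x
print(ring_mul(x, x))               # noncommutative square
```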
Last updated:  2024-09-14
Analysis on Sliced Garbling via Algebraic Approach
Taechan Kim
Recent improvements to garbled circuits are mainly focused on reducing their size. The state-of-the-art construction of Rosulek and Roy~(Crypto~2021) requires $1.5\kappa$ bits for garbling AND gates in the free-XOR setting. This is below the previously proven lower bound $2\kappa$ in the linear garbling model of Zahur, Rosulek, and Evans~(Eurocrypt~2015). Recently, Ashur, Hazay, and Satish~(eprint 2024/389) proposed a scheme that requires $4/3\kappa + O(1)$ bits for garbling AND gates. Precisely, they extended the idea of \emph{slicing} introduced by Rosulek and Roy to garble 3-input gates of the form $g(u,v,w) := u(v+w)$. By setting $w = 0$, it can be used to garble AND gates with improved communication cost. However, in this paper, we observe that the scheme proposed by Ashur, Hazay, and Satish leaks information on the permute bits, thereby allowing the evaluator to reveal information on the private inputs. To be precise, we show that in their garbling scheme, the evaluator can compute the bits $\alpha$ and $\beta + \gamma$, where $\alpha$, $\beta$, and $\gamma$ are the private permute bits of the input labels $A$, $B$, and $C$, respectively.
Last updated:  2024-05-24
Can We Beat Three Halves Lower Bound?: (Im)Possibility of Reducing Communication Cost for Garbled Circuits
Chunghun Baek and Taechan Kim
Recent improvements to garbled circuits are mainly focused on reducing their size. The state-of-the-art construction of Rosulek and Roy (Crypto 2021) requires $1.5\kappa$ bits for garbling AND gates in the free-XOR setting. This is below the previously proven lower bound $2\kappa$ in the linear garbling model of Zahur, Rosulek, and Evans (Eurocrypt 2015). Whether their construction is optimal in a more inclusive model than the linear garbling model still remains open. This paper begins by providing a comprehensive model for a large class of practical garbling schemes and proves the lower bound for the size of the garbled AND gates in our model. We show that garbled AND gates require at least $1.5\kappa$ bits in our new model with the free-XOR setting. Remarkably, the construction by Rosulek and Roy is already optimal, despite the fact that our model possibly captures any potential extension of their construction.
Last updated:  2025-01-31
On Maximum Size Simultaneous Linear Approximations in Ascon and Keccak and Related Translation and Differential Properties
Nicolas T. Courtois, Frédéric Amiel, and Alexandre Bonnard de Fonvillars
In this paper we study the S-box known as Chi or $\chi$, initially proposed by Daemen in 1995 and very widely used ever since in Keccak, Ascon, and many others. This type of cipher is typically analyzed [in recent research] in terms of subspace trail attacks [TeDi19] and vector space invariants. An interesting question is then when different spaces are mapped to each other by translations by a constant. In this paper we relax this fundamental question and consider arbitrary sets of points and their translations. We generalize previous S-box partial linearization results on Keccak and Ascon from Eurocrypt 2017. We introduce new ways to linearize the Ascon S-box to the maximum possible extent. On this basis we show further remarkable properties and some surprising connections between [simultaneous] linear and [prominent] differential properties. We exhibit a new family of maximum-size, optimal approximation properties with 11 points, beyond the maximum size of any set in the DDT table. We prove a theorem which guarantees that this type of property is stable under arbitrary input-side translations, which holds for all quadratic permutations. All this will be placed in the context of previous work on classification of 5-bit quadratic permutations.
Last updated:  2024-05-23
Algebraic Structure of the Iterates of $\chi$
Björn Kriepke and Gohar Kyureghyan
We consider the map $\chi:\mathbb{F}_2^n\to\mathbb{F}_2^n$ for $n$ odd given by $y=\chi(x)$ with $y_i=x_i+x_{i+2}(1+x_{i+1})$, where the indices are computed modulo $n$. We suggest a generalization of the map $\chi$ which we call generalized $\chi$-maps. We show that these maps form an Abelian group which is isomorphic to the group of units in $\mathbb{F}_2[X]/(X^{(n+1)/2})$. Using this isomorphism we easily obtain closed-form expressions for iterates of $\chi$ and explain their properties.
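A direct Python transcription of the map (bit lists of odd length $n$, indices mod $n$) may help readers experiment with its iterates:

```python
# The chi map on F_2^n for odd n: y_i = x_i + x_{i+2} * (1 + x_{i+1}),
# indices mod n, transcribed directly from the definition above.
def chi(x):
    n = len(x)
    return [x[i] ^ (x[(i + 2) % n] & (1 ^ x[(i + 1) % n]))
            for i in range(n)]

def iterate(x, k):
    for _ in range(k):
        x = chi(x)
    return x

x = [1, 0, 1, 1, 0]          # n = 5 (odd), so chi is a permutation
print(chi(x), iterate(x, 3))
```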
Last updated:  2024-09-06
A Note on Zero-Knowledge for NP and One-Way Functions
Yanyi Liu, Noam Mazor, and Rafael Pass
We present a simple alternative exposition of the recent result of Hirahara and Nanashima (STOC'24) showing that one-way functions exist if (1) every language in NP has a zero-knowledge proof/argument and (2) ZKA contains non-trivial languages. Our presentation does not rely on meta-complexity and we hope it may be useful for didactic purposes. We also remark that the same result holds for (imperfect) iO for 3CNF, or Witness Encryption for NP.
Last updated:  2024-05-23
Symmetric Signcryption and E2EE Group Messaging in Keybase
Joseph Jaeger, Akshaya Kumar, and Igors Stepanovs
We introduce a new cryptographic primitive called symmetric signcryption, which differs from traditional signcryption because the sender and recipient share a secret key. We prove that a natural composition of symmetric encryption and signatures achieves strong notions of security against attackers that can learn and control many keys. We then identify that the core encryption algorithm of the Keybase encrypted messaging protocol can be modeled as a symmetric signcryption scheme. We prove the security of this algorithm, though our proof requires assuming non-standard, brittle security properties of the underlying primitives.
Last updated:  2024-10-09
Incompressible Functional Encryption
Rishab Goyal, Venkata Koppula, Mahesh Sreekumar Rajasree, and Aman Verma
Incompressible encryption (Dziembowski, Crypto'06; Guan, Wichs, Zhandry, Eurocrypt'22) protects against attackers that learn the entire decryption key but cannot store the full ciphertext. In incompressible encryption, the attacker must try to compress a ciphertext within a pre-specified memory bound $S$ before receiving the secret key. In this work, we generalize the notion of incompressibility to functional encryption. In incompressible functional encryption, the adversary can corrupt non-distinguishing keys at any point, but receives the distinguishing keys only after compressing the ciphertext to within $S$ bits. An important efficiency measure for incompressible encryption is the ciphertext rate (i.e., $\mathsf{rate} = \frac{|m|}{|\mathsf{ct}|}$). We give many new results for incompressible functional encryption for circuits, from the minimal assumption of (non-incompressible) functional encryption, with 1. $\mathsf{ct}$-rate-$\frac{1}{2}$ and short secret keys, 2. $\mathsf{ct}$-rate-$1$ and large secret keys. Along the way, we also give a new incompressible attribute-based encryption for circuits from standard assumptions, with $\mathsf{ct}$-rate-$\frac{1}{2}$ and short secret keys. Our results achieve optimal efficiency, as incompressible attribute-based/functional encryption with $\mathsf{ct}$-rate-$1$ as well as short secret keys has strong barriers for provable security from standard assumptions. Moreover, our assumptions are minimal, as incompressible attribute-based/functional encryption are strictly stronger than their non-incompressible counterparts.
Last updated:  2024-05-25
Nonadaptive One-Way to Hiding Implies Adaptive Quantum Reprogramming
Joseph Jaeger
An important proof technique in the random oracle model involves reprogramming it on hard-to-predict inputs and arguing that an attacker cannot detect that this occurred. In the quantum setting, a particularly challenging version of this considers adaptive reprogramming wherein the points to be reprogrammed (or output values they should be programmed to) are dependent on choices made by the adversary. Frameworks for analyzing adaptive reprogramming were given, e.g., by Unruh (CRYPTO 2014), Grilo-Hövelmanns-Hülsing-Majenz (ASIACRYPT 2021), and Pan-Zeng (PKC 2024). We show, counterintuitively, that these adaptive results follow directly from the non-adaptive one-way to hiding theorem of Ambainis-Hamburg-Unruh (CRYPTO 2019). These implications contradict beliefs (whether stated explicitly or implicitly) that some properties of the adaptive frameworks cannot be provided by the Ambainis-Hamburg-Unruh result.
Last updated:  2024-05-23
Weak Consistency mode in Key Transparency: OPTIKS
Esha Ghosh and Melissa Chase
The need for third-party auditors in privacy-preserving Key Transparency (KT) systems presents a deployment challenge. In this short note, we take a simple privacy-preserving KT system that provides strong security guarantees in the presence of an honest auditor (OPTIKS) and show how to add an auditor-free mode to it. The auditor-free mode offers slightly weaker security. We formalize this security property and prove that our proposed protocol satisfies our security definition.
Last updated:  2024-05-22
New Limits of Provable Security and Applications to ElGamal Encryption
Sven Schäge
We provide new results showing that ElGamal encryption cannot be proven CCA1-secure – a long-standing open problem in cryptography. Our result follows from a very broad, meta-reduction-based impossibility result on random self-reducible relations with efficiently re-randomizable witnesses. The techniques that we develop allow us, for the first time, to provide impossibility results for very weak security notions where the challenger outputs fresh challenge statements at the end of the security game. This can be used to finally tackle encryption-type definitions that have remained elusive in the past. We show that our results have broad applicability by casting several known cryptographic setups as instances of random self-reducible and re-randomizable relations. These setups include general semi-homomorphic PKE and the large class of certified homomorphic one-way bijections. As a result, we also obtain new impossibility results for the IND-CCA1 security of the PKEs of Paillier and Damgård–Jurik, and many one-more inversion assumptions like the one-more DLOG or the one-more RSA assumption.
Last updated:  2024-05-24
Detecting Rogue Decryption in (Threshold) Encryption via Self-Incriminating Proofs
James Hsin-yu Chiang, Bernardo David, Tore Kasper Frederiksen, Arup Mondal, and Esra Yeniaras
Keeping decrypting parties accountable in public key encryption is notoriously hard since the secret key owner can decrypt any arbitrary ciphertext. Threshold encryption aims to solve this issue by distributing the power to decrypt among a set of parties, who must interact via a decryption protocol. However, such parties can employ cryptographic tools such as Multiparty Computation (MPC) to decrypt arbitrary ciphertexts without being detected. We introduce the notion of (threshold) encryption with Self-Incriminating Proofs, where parties must produce a self-incriminating proof of decryption when decrypting every ciphertext. In the standard public key encryption case, the adversary could destroy these proofs, so we strengthen our notion to guarantee that the proofs are published when decryption succeeds. This creates a decryption audit trail, which is useful in scenarios where decryption power is held by a single trusted party (e.g., a Trusted Execution Environment) who must be kept accountable. In the threshold case, we ensure that at least one of the parties who execute the decryption protocol will learn a self-incriminating proof, even if they employ advanced tools such as MPC. The fact that a party learns the proof and may leak it at any moment functions as a deterrent for parties who do not wish to be identified as malicious decryptors (e.g., a commercial operator of a service based on threshold encryption). We investigate the (im)possibility and applications of our notions while providing matching constructions under appropriate assumptions. In the threshold case, we build on recent results on Individual Cryptography (CRYPTO 2023).
Last updated:  2024-05-22
Hide-and-Seek and the Non-Resignability of the BUFF Transform
Jelle Don, Serge Fehr, Yu-Hsuan Huang, Jyun-Jie Liao, and Patrick Struck
The BUFF transform, due to Cremers et al. (S&P'21), is a generic transformation for digital signature schemes, with the purpose of obtaining additional security guarantees beyond unforgeability: exclusive ownership, message-bound signatures, and non-resignability. Non-resignability (which essentially challenges an adversary to re-sign an unknown message for which it only obtains the signature) turned out to be a delicate matter, as recently Don et al. (CRYPTO'24) showed that the initial definition is essentially unachievable; in particular, it is not achieved by the BUFF transform. This led to the introduction of new, weakened versions of non-resignability, which are (potentially) achievable. In particular, it was shown that a salted variant of the BUFF transform does achieve some weakened version of non-resignability. However, the salting requires additional randomness and leads to slightly larger signatures. Whether the original BUFF transform also achieves some meaningful notion of non-resignability remained a natural open question. In this work, we answer this question in the affirmative. We show that the BUFF transform satisfies the (almost) strongest notions of non-resignability one can hope for, facing the known impossibility results. Our results cover both the statistical and the computational case, and both the classical and the quantum setting. At the core of our analysis lies a new security game for random oracles that we call Hide-and-Seek. While seemingly innocent at first glance, it turns out to be surprisingly challenging to rigorously analyze.
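For orientation, a schematic sketch of the (unsalted) BUFF transform's structure, signing a hash that binds the message to the public key; the placeholder "signature" below is an HMAC used only to make the sketch runnable, and the exact hash-input encoding is illustrative rather than taken from Cremers et al.:

```python
# Schematic (unsalted) BUFF transform: sign h = H(pk, m) instead of m,
# and include h in the signature. `toy_sign`/`toy_verify` are an HMAC
# stand-in for a real signature scheme, used only to make this runnable.
import hashlib, hmac

def buff_sign(sign, sk, pk: bytes, m: bytes):
    h = hashlib.sha3_256(pk + m).digest()  # binds message to public key
    return sign(sk, h), h

def buff_verify(verify, pk: bytes, m: bytes, sig) -> bool:
    sigma, h = sig
    if h != hashlib.sha3_256(pk + m).digest():
        return False
    return verify(pk, sigma, h)

sk = pk = b"toy-key"  # symmetric stand-in; real schemes use key pairs
toy_sign = lambda k, h: hmac.new(k, h, hashlib.sha3_256).digest()
toy_verify = lambda k, s, h: hmac.compare_digest(s, toy_sign(k, h))

sig = buff_sign(toy_sign, sk, pk, b"message")
print(buff_verify(toy_verify, pk, b"message", sig))
```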
Last updated:  2024-06-04
Stickel's Key Agreement Algebraic Variation
Daniel Nager
In this document we present a further development of the non-commutative algebra based key agreement due to E. Stickel, and a way to deal with the algebraic break due to V. Shpilrain.
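For background, a toy sketch of Stickel's original key agreement instantiated with matrices (illustrative parameters; matrix instantiations are precisely those affected by Shpilrain's linear-algebra break):

```python
# Toy Stickel key agreement over 2x2 matrices mod p. Public: two
# non-commuting matrices A, B. Alice sends U = A^n B^m, Bob sends
# V = A^r B^s; both derive K = A^(n+r) B^(m+s). Toy parameters only.
import random

P = 1009

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) % P
             for j in range(2)] for i in range(2)]

def mat_pow(X, e):  # square-and-multiply
    R = [[1, 0], [0, 1]]
    while e:
        if e & 1:
            R = mat_mul(R, X)
        X, e = mat_mul(X, X), e >> 1
    return R

A, B = [[1, 2], [3, 5]], [[2, 1], [1, 1]]
assert mat_mul(A, B) != mat_mul(B, A)          # must not commute

n, m = random.randrange(2, P), random.randrange(2, P)  # Alice's secrets
r, s = random.randrange(2, P), random.randrange(2, P)  # Bob's secrets
U = mat_mul(mat_pow(A, n), mat_pow(B, m))
V = mat_mul(mat_pow(A, r), mat_pow(B, s))
K_alice = mat_mul(mat_pow(A, n), mat_mul(V, mat_pow(B, m)))
K_bob = mat_mul(mat_pow(A, r), mat_mul(U, mat_pow(B, s)))
assert K_alice == K_bob                         # shared key agrees
print(K_alice)
```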
Last updated:  2024-06-28
Minimize the Randomness in Rasta-Like Designs: How Far Can We Go?
Lorenzo Grassi, Fukang Liu, Christian Rechberger, Fabian Schmid, Roman Walch, and Qingju Wang
The Rasta design strategy allows building low-round ciphers due to its efficient prevention of statistical attacks and algebraic attacks by randomizing the cipher, which makes it especially suitable for hybrid homomorphic encryption (HHE), also known as transciphering. Such randomization is obtained by pseudorandomly sampling new invertible matrices for each round of each new cipher evaluation. However, naively sampling a random invertible matrix for each round significantly impacts the plain evaluation runtime, though it does not impact the homomorphic evaluation cost. To address this issue, Dasta was proposed at ToSC 2020 to reduce the cost of generating the random matrices. In this work, we address this problem from a different perspective: How far can the randomness in Rasta-like designs be reduced in order to minimize the plain evaluation runtime without sacrificing security? To answer this question, we carefully study the main threats to Rasta-like ciphers and the role of random matrices in ensuring security. We apply our results to the recently proposed cipher $\text{PASTA}$, proposing a modified version called $\text{PASTA}_\text{v2}$ instantiated with one initial random matrix and fixed linear layers - obtained by combining two MDS matrices with the Kronecker product - for the other rounds. Compared with $\text{PASTA}$, the state-of-the-art cipher for BGV- and BFV-style HHE, our evaluation shows that $\text{PASTA}_\text{v2}$ is up to 100% faster in plain evaluation while having the same homomorphic runtime in the SEAL homomorphic encryption library, and up to 30% faster evaluation time in HElib.
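The Kronecker-product idea for fixed linear layers can be sketched generically (toy 2x2 matrices and modulus, not the actual MDS matrices or field of $\text{PASTA}_\text{v2}$):

```python
# Sketch: build a large fixed linear layer as the Kronecker product of
# two small MDS matrices, so an (a*b)-dimensional layer is specified by
# an a x a and a b x b matrix. Toy matrices and modulus for illustration.
P = 65537

def kronecker(A, B):
    a, b = len(A), len(B)
    return [[A[i // b][j // b] * B[i % b][j % b] % P
             for j in range(a * b)] for i in range(a * b)]

M1 = [[1, 1], [1, 2]]            # 2x2 MDS over F_P (all entries and det nonzero)
M2 = [[2, 1], [1, 1]]
L = kronecker(M1, M2)            # fixed 4x4 linear layer
state = [3, 1, 4, 1]
out = [sum(L[i][j] * state[j] for j in range(4)) % P for i in range(4)]
print(out)
```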
Last updated:  2024-05-22
Physical Ring Signature
Xavier Bultel
Ring signatures allow members of a group (called "ring") to sign a message anonymously within the group, which is chosen ad hoc at the time of signing (the members do not need to have interacted before). In this paper, we propose a physical version of ring signatures. Our signature is based on one-out-of-many signatures, a method used in many real cryptographic ring signatures. It consists of boxes containing coins locked with padlocks that can only be opened by a particular group member. To sign a message, a group member shakes the boxes of the other members of the group so that the coins are in a random state ("heads" or "tails", corresponding to bits $0$ and $1$), and opens their box to arrange the coins so that the exclusive "or" of the coins corresponds to the bits of the message they wish to sign. We present a prototype that can be used with coins, or with dice for messages encoded in larger (non-binary) alphabets. We suggest that this system can be used to explain ring signatures to the general public in a fun way. Finally, we propose a semi-formal analysis of the security of our signature based on real cryptographic security proofs.
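A digital toy simulation of the physical procedure, one coin per box per message bit:

```python
# Toy simulation of the physical mechanism for one message-bit column:
# each ring member's box holds one coin; the signer shakes the others'
# boxes (random bits) and sets their own coin so the XOR of all coins
# equals the message bit. Anyone can verify the XOR, and any member
# could have produced the same distribution, hence ring anonymity.
import random, functools, operator

def sign_bit(ring_size, signer_index, message_bit):
    coins = [random.randrange(2) for _ in range(ring_size)]  # shaken boxes
    others = functools.reduce(
        operator.xor, (c for i, c in enumerate(coins) if i != signer_index))
    coins[signer_index] = others ^ message_bit  # signer arranges own coin
    return coins

def verify_bit(coins, message_bit):
    return functools.reduce(operator.xor, coins) == message_bit

coins = sign_bit(ring_size=5, signer_index=2, message_bit=1)
print(coins, verify_bit(coins, 1))
```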
Last updated:  2024-12-01
Maliciously Secure Circuit Private Set Intersection via SPDZ-Compatible Oblivious PRF
Yaxi Yang, Xiaojian Liang, Xiangfu Song, Ye Dong, Linting Huang, Hongyu Ren, Changyu Dong, and Jianying Zhou
Circuit Private Set Intersection (Circuit-PSI) allows two parties to compute a function $f$ on items in the intersection of their input sets without revealing items in the intersection set. It is a well-known variant of PSI and has numerous practical applications. However, existing Circuit-PSI protocols only provide security against \textit{semi-honest} adversaries. A straightforward approach to constructing a maliciously secure Circuit-PSI is to extend a pure garbled-circuit-based PSI (NDSS'12 \cite{huang2012private}) to a maliciously secure Circuit-PSI, but it will not be concretely efficient. Another is converting state-of-the-art semi-honest Circuit-PSI protocols (EUROCRYPT'21 \cite{rindal2021vole}; PoPETS'22 \cite{chandran2022circuit}) to be secure in the malicious setting. However, this runs into \textit{the consistency issue} (EUROCRYPT'11 \cite{shelat2011two}), since the parties cannot guarantee that the inputs of the function $f$ stay unchanged from the previous step. This paper tackles the aforementioned issue by presenting the first maliciously secure Circuit-PSI protocol. Our key innovation, the Distributed Dual-key Oblivious Pseudorandom Function (DDOPRF), enables the oblivious evaluation of secret-shared inputs using dual keys within the SPDZ MPC framework. Notably, this construction seamlessly ensures fairness within the Circuit-PSI. Compared to the state-of-the-art semi-honest Circuit-PSI protocol (PoPETS'22), experimental results demonstrate that our malicious Circuit-PSI protocol not only reduces communication costs by around $5\times$ but also enhances efficiency, particularly for modest input sets ($\le 2^{14}$) in the case of the WAN setting with high latency and limited bandwidth.
Last updated:  2024-05-22
A Fault-Resistant NTT by Polynomial Evaluation and Interpolation
Sven Bauer, Fabrizio De Santis, Kristjane Koleci, and Anita Aghaie
In computer arithmetic operations, the Number Theoretic Transform (NTT) plays a significant role in the efficient implementation of cyclic and nega-cyclic convolutions, with applications to multiplying large integers and large-degree polynomials. Multiplying polynomials is a common operation in lattice-based cryptography. Hence, the NTT is a core component of several lattice-based cryptographic algorithms. Two well-known examples are the key encapsulation mechanism Kyber and the digital signature algorithm Dilithium. In this work, we introduce a novel and efficient method for safeguarding the NTT against fault attacks. This new countermeasure is based on polynomial evaluation and interpolation. We prove its error detection capability, calculate the required additional computational effort, and show how to concretely use it to secure the NTT in Kyber and Dilithium against fault injection attacks. Finally, we provide concrete implementation results of the proposed novel technique on a resource-constrained ARM Cortex-M4 microcontroller, e.g., the technique exhibits a 72% relative overhead when applied to Dilithium.
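To convey the flavor of an evaluation-based consistency check (a generic sketch with toy parameters, not the paper's concrete countermeasure): a cyclic NTT outputs the evaluations $f(\omega^j)$, so re-evaluating $f$ directly at a random power of $\omega$ and comparing detects computation faults at that position.

```python
# Sketch: spot-check a cyclic NTT by direct evaluation. The NTT of f
# gives f(w^j) for all j; recomputing f(w^j) via Horner at a random j
# and comparing catches faults at that output. Toy parameters only.
import random

Q, N = 257, 16                 # toy NTT-friendly prime: N divides Q - 1
W = pow(3, (Q - 1) // N, Q)    # primitive N-th root of unity mod Q

def ntt(f):                    # naive O(N^2) DFT over Z_Q
    return [sum(f[i] * pow(W, i * j, Q) for i in range(N)) % Q
            for j in range(N)]

def horner_eval(f, x):
    acc = 0
    for c in reversed(f):
        acc = (acc * x + c) % Q
    return acc

f = [random.randrange(Q) for _ in range(N)]
F = ntt(f)
j = random.randrange(N)                      # random spot check
assert F[j] == horner_eval(f, pow(W, j, Q))  # consistent when fault-free
F[j] = (F[j] + 1) % Q                        # inject a fault at position j
print(F[j] == horner_eval(f, pow(W, j, Q)))  # fault detected: False
```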
Last updated:  2024-05-22
A new attack against search-LWE using Diophantine approximations
Robin Frot and Daniel Zentai
In this paper, we present a new attack against search-LWE instances with a small secret key. The method consists of lifting the public key to $\mathbb Z$ and finding a good Diophantine approximation of the public key divided by the modulus $a$. This is done using lattice reduction algorithms. The lattice considered and the approximation quality needed are similar to those in known decision-LWE attacks for small keys. However, we do not require an in-depth analysis of the reduction algorithm (any reduction algorithm giving small enough vectors is enough for us), and our method solves the search problem directly, which is harder than the decision problem.
Last updated:  2024-05-22
Modelling Ciphers with Overdefined Systems of Quadratic Equations: Application to Friday, Vision, RAIN and Biscuit
Fukang Liu, Mohammad Mahzoun, and Willi Meier
It is well-known that a system of equations becomes easier to solve when it is overdefined. In this work, we study how to overdefine the system of equations describing the arithmetic-oriented (AO) ciphers Friday, Vision, and RAIN, as well as a special system of quadratic equations over $\mathbb F_{2^{\ell}}$ used in the post-quantum signature scheme Biscuit. Our method is inspired by Courtois-Pieprzyk's and Murphy-Robshaw's methods to model AES with overdefined systems of quadratic equations over $\mathbb F_2$ and $\mathbb F_{2^8}$, respectively. However, our method is more refined and much simpler than Murphy-Robshaw's, since it can take full advantage of the low-degree $\mathbb F_2$-linearized affine polynomials used in Friday and Vision, and the overdefined system of equations over $\mathbb F_{2^{\ell}}$ can be described in a clean way with our method. For RAIN, we instead consider quadratic Boolean equations rather than equations over large finite fields $\mathbb F_{2^{\ell}}$. Specifically, we demonstrate that the special structure of RAIN allows us to set up many more linearly independent quadratic Boolean equations than those obtained only with Courtois-Pieprzyk's method. Moreover, we further demonstrate that the underlying key-recovery problem in Biscuit (NIST PQC Round 1 Additional Signatures) can also be described by solving a much overdefined system of quadratic equations over $\mathbb F_{2^{\ell}}$. On the downside, the constructed systems of quadratic equations for these ciphers cannot be viewed as semi-regular, which makes it challenging to upper bound the complexity of the Gröbner basis attack. However, such a new modelling method can significantly improve the lower bound of the complexity of the Gröbner basis attacks on these ciphers, i.e., we view the complexity of solving a random system of quadratic equations of the same scale as the lower bound. How to better estimate the upper and lower bounds of the Gröbner basis attacks on these ciphers based on our modelling method is left as an open problem.
Last updated:  2024-05-22
SmartBean: Transparent, Concretely Efficient, Polynomial Commitment Scheme with Logarithmic Verification and Communication Costs that Runs on Any Group
Frank Y.C. Lu
We introduce a new, concretely efficient, transparent polynomial commitment scheme with logarithmic verification time and communication cost that can run on any group. Existing group-based polynomial commitment schemes must use less efficient groups, such as class groups of unknown order or pairing-based groups, to achieve transparency (no trusted setup), making them expensive to adopt in practice.  We offer the first group-based polynomial commitment scheme that can run on any group, such that it does not rely on expensive pairing operations or require class groups of unknown order to achieve transparency, while still providing logarithmic verifier time and communication cost.  The prover work is dominated by $4n\,\mathbb{G}$ multi-exponentiations, the verifier work is dominated by $4\log n\,\mathbb{G}$ exponentiations, and the communication cost is $4\log n\,\mathbb{G}$. Since our protocol can run on fast groups such as Curve25519, we can easily accelerate the multi-exponentiations with Pippenger's algorithm. The concrete performance of our work shows a significant improvement over the current state of the art in almost every aspect.
Last updated:  2024-05-22
Universal Blockchain Assets
Owen Vaughan
We present a novel protocol for issuing and transferring tokens across blockchains without the need for a trusted third party or cross-chain bridge. In our scheme, the blockchain is used for double-spend protection only, while the authorisation of token transfers is performed off-chain. Due to the universality of our approach, it works in almost all blockchain settings. It can be implemented immediately on UTXO blockchains such as Bitcoin without modification, and on account-based blockchains such as Ethereum by introducing a smart contract that mimics the properties of a UTXO. We provide a proof-of-concept implementation of an NFT that is issued on Bitcoin, transferred to Ethereum, and then transferred back to Bitcoin. Our new approach means that users no longer need to be locked into one blockchain when issuing and transferring tokens.
Last updated:  2024-05-24
Differential Cryptanalysis on Quantum Computers
Kyungbae Jang, Yujin Oh, and Hwajeong Seo
As quantum computing progresses, extensive research has been conducted to find quantum advantages in the field of cryptography. Combining quantum algorithms with classical cryptographic analysis methods, such as differential cryptanalysis and linear cryptanalysis, has the potential to reduce complexity. In this paper, we present a quantum differential finding circuit for differential cryptanalysis. In our quantum circuit, both the plaintext and the input difference are in a superposition state. While our method cannot achieve a direct speedup with quantum computing, it offers a different perspective by relying on quantum probability in a superposition state. For the quantum simulation, given the limited number of qubits, we simulate our quantum circuit by implementing the Toy-ASCON quantum circuit.
Last updated:  2024-12-15
Relating Code Equivalence to Other Isomorphism Problems
Huck Bennett and Kaung Myat Htay Win
We study the complexity of the Code Equivalence Problem on linear error-correcting codes by relating its variants to isomorphism problems on other discrete structures---graphs, lattices, and matroids. Our main results are a fine-grained reduction from the Graph Isomorphism Problem to the Linear Code Equivalence Problem over any field $\mathbb{F}$, and a reduction from the Linear Code Equivalence Problem over any field $\mathbb{F}_p$ of prime, polynomially bounded order $p$ to the Lattice Isomorphism Problem. Both of these reductions are simple and natural. We also give reductions between variants of the Code Equivalence Problem, and study the relationship between isomorphism problems on codes and linear matroids.
Last updated:  2024-05-21
Doubly-Efficient Batch Verification in Statistical Zero-Knowledge
Or Keret, Ron D. Rothblum, and Prashant Nalini Vasudevan
A sequence of recent works, concluding with Mu et al. (Eurocrypt 2024), has shown that every problem $\Pi$ admitting a non-interactive statistical zero-knowledge proof (NISZK) has an efficient zero-knowledge batch verification protocol. Namely, an NISZK protocol for proving that $x_1,\dots,x_k \in \Pi$ with communication that only scales poly-logarithmically with $k$. A caveat of this line of work is that the prover runs in exponential-time, whereas for NP problems it is natural to hope to obtain a doubly-efficient proof, that is, a prover that runs in polynomial-time given the $k$ NP witnesses. In this work we show that every problem in $NISZK \cap UP$ has a doubly-efficient interactive statistical zero-knowledge proof with communication $poly(n,\log(k))$ and $poly(\log(k),\log(n))$ rounds. The prover runs in time $poly(n,k)$ given access to the $k$ UP witnesses. Here $n$ denotes the length of each individual input, and UP is the subclass of NP relations in which YES instances have unique witnesses. This result yields doubly-efficient statistical zero-knowledge batch verification protocols for a variety of concrete and central cryptographic problems from the literature.
Last updated:  2024-05-21
Information-theoretic Multi-server Private Information Retrieval with Client Preprocessing
Jaspal Singh, Yu Wei, and Vassilis Zikas
A private information retrieval (PIR) protocol allows a client to fetch any entry from single or multiple servers who hold a public database (of size $n$) while ensuring no server learns any information about the client's query. Initial works on PIR focused on reducing the communication complexity of PIR schemes. However, standard PIR protocols are often impractical to use in applications involving large databases, due to their inherently large server-side computation complexity, which is at least linear in the database size. Hence, a line of research has focused on considering alternative PIR models that can achieve improved server complexity. The model of private information retrieval with client preprocessing has received a lot of interest beginning with the work of Corrigan-Gibbs and Kogan (Eurocrypt 2020). In this model, the client interacts with two servers in an offline phase and stores a local state, which it uses in the online phase to perform PIR queries. Constructions in this model achieve online client/server computation and bandwidth sublinear in the database size, at the cost of a one-time expensive offline phase. To date, all known constructions in this model are based on symmetric-key primitives or on stronger public-key assumptions like Decisional Diffie-Hellman (DDH) and Learning with Errors (LWE). This work initiates the study of unconditional PIR with client preprocessing, where we avoid using any cryptographic assumptions. We present a new PIR protocol for $2t$ servers (where $t \in [2,\log_2 n/2]$) with threshold 1, where client and server online computation is $\widetilde{O}(\sqrt{n})$ (the $\widetilde{O}(\cdot)$ notation hides $\textsf{poly}\log$ factors), matching the computation costs of other works based on cryptographic assumptions. The client storage and online communication complexity are $\widetilde{O}(n^{0.5+1/2t})$ and $\widetilde{O}(n^{1/2})$, respectively. Compared to previous works, our PIR with client preprocessing protocol also has a very concretely efficient client/server online computation phase, dominated by XOR operations, as opposed to cryptographic operations that are orders of magnitude slower. As a building block for our construction, we introduce a new information-theoretic primitive called \textit{privately multi-puncturable random set} (PPRS), which might be of independent interest. This new primitive can be viewed as a generalization of a privately puncturable pseudo-random set, which is the key cryptographic building block used in previous works on PIR with client preprocessing.
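For background, the classic information-theoretic two-server XOR scheme (threshold 1, linear communication, no preprocessing) fixes the ideas behind XOR-dominated PIR; the paper's protocol and its preprocessing machinery are substantially more involved:

```python
# Classic 2-server XOR PIR: the client sends a random subset S (as a
# bit vector) to server 0 and S with index i flipped to server 1; each
# server returns the XOR of its selected database bits, and the XOR of
# both answers is bit i. Neither server alone learns anything about i.
import random

def server_answer(db, selection):  # XOR of the selected database bits
    acc = 0
    for idx, selected in enumerate(selection):
        if selected:
            acc ^= db[idx]
    return acc

def pir_query(db, i):
    n = len(db)
    q0 = [random.randrange(2) for _ in range(n)]  # uniformly random set
    q1 = q0[:]
    q1[i] ^= 1                                    # flip membership of i
    return server_answer(db, q0) ^ server_answer(db, q1)

db = [random.randrange(2) for _ in range(64)]
i = 37
assert pir_query(db, i) == db[i]
print(db[i], pir_query(db, i))
```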
Last updated:  2024-10-05
Elliptic Curve Cryptography for the masses: Simple and fast finite field arithmetic
Michael Scott
Shaped prime moduli are often considered for use in elliptic curve and isogeny-based cryptography to allow for faster modular reduction. Here we focus on the most common choices for shaped primes that have been suggested, that is, pseudo-Mersenne, generalized Mersenne and Montgomery-friendly primes. We consider how best to exploit these shapes for maximum efficiency, and provide an open source tool to automatically generate, test and time working high-level language finite-field code. Next we consider the advantage to be gained from implementations that are written in assembly language and which exploit special instructions, SIMD hardware if present, and the particularities of the algorithm being implemented.
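As an example of why shaped primes speed up reduction: for a pseudo-Mersenne prime $p = 2^k - c$, the high bits fold back with a single multiply by $c$. A high-level sketch for $p = 2^{255} - 19$ (the word-level and SIMD concerns that are the paper's focus are abstracted away):

```python
# Sketch: pseudo-Mersenne reduction mod p = 2^255 - 19. Since
# 2^255 = 19 (mod p), split x = hi*2^255 + lo and fold: x -> hi*19 + lo.
# Two folds plus one conditional subtraction reduce a full product of
# two reduced values. High-level logic; real code works on machine words.
K, C = 255, 19
P = (1 << K) - C

def reduce_pm(x):
    for _ in range(2):                            # each fold shrinks x sharply
        x = (x >> K) * C + (x & ((1 << K) - 1))
    if x >= P:                                    # final conditional subtract
        x -= P
    return x

a, b = P - 5, P - 7
assert reduce_pm(a * b) == (a * b) % P
print(hex(reduce_pm(a * b)))
```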
Last updated:  2024-12-03
Ideal-to-isogeny algorithm using 2-dimensional isogenies and its application to SQIsign
Hiroshi Onuki and Kohei Nakagawa
The Deuring correspondence is a correspondence between supersingular elliptic curves and quaternion orders. Under this correspondence, an isogeny between elliptic curves corresponds to a quaternion ideal. This correspondence plays an important role in isogeny-based cryptography, and several algorithms to compute an isogeny corresponding to a quaternion ideal (ideal-to-isogeny algorithms) have been proposed. In particular, SQIsign is a signature scheme based on the Deuring correspondence and uses an ideal-to-isogeny algorithm. In this paper, we propose a novel ideal-to-isogeny algorithm using isogenies of dimension $2$. Our algorithm is based on Kani's reducibility theorem, which gives a connection between isogenies of dimension $1$ and $2$. By using a base-field characteristic $p$ of the form $2^f g - 1$ for a small odd integer $g$, our algorithm works using only $2$-isogenies and $(2, 2)$-isogenies, with operations in $\mathbb{F}_{p^2}$. We apply our algorithm to SQIsign and compare the efficiency of the new algorithm with the existing one. Our analysis shows that the key generation and the signing in our algorithm are at least twice as fast as those in the existing algorithm at the NIST security level 1. This advantage becomes more significant at higher security levels. In addition, our algorithm also improves the efficiency of the verification in SQIsign.
Last updated:  2024-05-25
Measure-Rewind-Extract: Tighter Proofs of One-Way to Hiding and CCA Security in the Quantum Random Oracle Model
Jiangxia Ge, Heming Liao, and Rui Xue
The One-Way to Hiding (O2H) theorem, first given by Unruh (J ACM 2015) and then restated by Ambainis et al. (CRYPTO 2019), is a crucial technique for solving the reprogramming problem in the quantum random oracle model (QROM). It provides an upper bound $d\cdot\sqrt{\epsilon}$ for the distinguisher's advantage, where $d$ is the query depth and $\epsilon$ denotes the advantage of a one-wayness attacker. Later, in order to obtain a tighter upper bound, Kuchta et al. (EUROCRYPT 2020) proposed the Measure-Rewind-Measure (MRM) technique and then proved the Measure-Rewind-Measure O2H (MRM-O2H) theorem, which provides the upper bound $d\cdot\epsilon$. They also proposed an open question: Can we combine their MRM technique with Ambainis et al.'s semi-classical oracle technique (CRYPTO 2019) or Zhandry's compressed oracle technique (CRYPTO 2019) to prove a new O2H theorem with an upper bound even tighter than $d\cdot\epsilon$? In this paper, we give an affirmative answer to the above question. We propose a new technique named Measure-Rewind-Extract (MRE) by combining the MRM technique with the semi-classical oracle technique. Using the MRE technique, we prove the Measure-Rewind-Extract O2H (MRE-O2H) theorem, which provides the upper bound $\sqrt{d}\cdot\epsilon$. As an important application of our MRE-O2H theorem, for the $FO^{\cancel{\bot}}$, $FO_m^{\cancel{\bot}}$, $FO^{\bot}$ and $FO_m^\bot$ proposed by Hofheinz et al. (TCC 2017), i.e., the key encapsulation mechanism (KEM) variants of the Fujisaki-Okamoto transformation, we prove the following results in the QROM: Their IND-CCA security can be reduced to the IND-CPA security of the underlying public key encryption (PKE) scheme without the square-root advantage loss. In particular, compared with the IND-CCA proof of $FO^{\cancel{\bot}}$ given by Kuchta et al. (EUROCRYPT 2020), ours removes the injectivity assumption and has a tighter security bound. Under the assumption that the underlying PKE scheme is unique randomness recoverable, we for the first time prove that their IND-CCA security can be reduced to the OW-CPA security of the underlying PKE scheme without the square-root advantage loss.
Last updated:  2024-11-29
Instance-Hiding Interactive Proofs
Changrui Mu and Prashant Nalini Vasudevan
In an Instance-Hiding Interactive Proof (IHIP) [Beaver et al. CRYPTO 90], an efficient verifier with a _private_ input x interacts with an unbounded prover to determine whether x is contained in a language L. In addition to completeness and soundness, the instance-hiding property requires that the prover should not learn anything about x in the course of the interaction. Such proof systems capture natural privacy properties, and may be seen as a generalization of the influential concept of Randomized Encodings [Ishai et al. FOCS 00, Applebaum et al. FOCS 04, Agrawal et al. ICALP 15], and as a counterpart to Zero-Knowledge proofs [Goldwasser et al. STOC 89]. We investigate the properties and power of such instance-hiding proofs, and show the following: 1. Any language with an IHIP is contained in NP/poly and coNP/poly. 2. If an average-case hard language has a constant-round IHIP, then infinitely-often One-Way Functions exist. 3. There is an oracle with respect to which there is a language that has an IHIP but not an SZK proof. 4. IHIPs are closed under composition with any efficiently computable function. We further study a stronger version of IHIP (that we call Simulatable IHIP) where the view of the honest prover can be efficiently simulated. For these, we obtain stronger versions of some of the above: 5. Any language with a Simulatable IHIP is contained in AM and coAM. 6. If a _worst-case_ hard language has a Simulatable IHIP, then One-Way Functions exist.
Last updated:  2024-10-22
Spec-o-Scope: Cache Probing at Cache Speed
Gal Horowitz, Eyal Ronen, and Yuval Yarom
Over the last two decades, microarchitectural side channels have been the focus of a large body of research on the development of new attack techniques, exploiting them to attack various classes of targets, and designing mitigations. One line of work focuses on increasing the speed of the attacks, achieving higher levels of temporal resolution that can allow attackers to learn finer-grained information. The most recent addition to this line of work is Prime+Scope [CCS '21], which only requires a single access to the L1 cache to confirm the absence of victim activity in a cache set. While significantly faster than prior attacks, Prime+Scope is still an order of magnitude slower than a cache access. In this work, we set out to close this gap. We draw on techniques from research into microarchitectural weird gates, software constructs that exploit transient execution to perform arbitrary computation on cache state. We design the Spec-o-Scope gate, a new weird gate that performs 10 cache probes in quick succession and forms the basis for our eponymous attack. Our Spec-o-Scope attack achieves an order of magnitude improvement in temporal resolution compared to the previous state-of-the-art Prime+Scope, reducing the measurement time from ~70 cycles to just 5 --- only one cycle more than an L1 cache access. We experimentally verify that our attack can detect timing differences at a 5-cycle resolution. Finally, using our Spec-o-Scope attack, we show the first microarchitectural side-channel attack on an unmodified AES S-box-based implementation, which uses generic CPU features and does not require manipulation of the operating system's scheduler.
Last updated:  2024-05-20
Byzantine Reliable Broadcast with One Trusted Monotonic Counter
Yackolley Amoussou-Guenou, Lionel Beltrando, Maurice Herlihy, and Maria Potop-Butucaru
Byzantine Reliable Broadcast is one of the most popular communication primitives in distributed systems. Byzantine reliable broadcast ensures that processes agree to deliver a message from an initiator even if some processes (possibly including the initiator) are Byzantine. In asynchronous settings, it has been known since the prominent work of Bracha [Bracha87] that Byzantine reliable broadcast can be implemented deterministically if $n \geq 3t+1$, where $t$ is an upper bound on the number of Byzantine processes. Here, we study Byzantine Reliable Broadcast when processes are equipped with trusted execution environments (TEEs), special software or hardware designed to prevent equivocation. Our contribution is twofold. First, we show that, despite common belief, when each process is equipped with a TEE, Bracha's algorithm still needs $n \geq 3t+1$. Second, we present a novel algorithm that uses a single TEE (at the initiator) and implements Byzantine Reliable Asynchronous Broadcast with $n \geq 2t+1$.
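For orientation, the following minimal sketch computes the standard quorum sizes in Bracha's reliable broadcast (these are the textbook thresholds, not the paper's new TEE-based algorithm):

# Standard thresholds in Bracha's reliable broadcast for n processes and at
# most t Byzantine faults; the protocol requires n >= 3t+1.
def bracha_thresholds(n, t):
    assert n >= 3 * t + 1, "Bracha's algorithm requires n >= 3t+1"
    echo_quorum = (n + t) // 2 + 1  # ECHOs needed before sending READY
    ready_amplify = t + 1           # READYs that trigger sending READY
    deliver_quorum = 2 * t + 1      # READYs needed to deliver the message
    return echo_quorum, ready_amplify, deliver_quorum

print(bracha_thresholds(4, 1))      # minimal resilient setting: n = 3t + 1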
Last updated:  2024-10-13
SQIPrime: A dimension 2 variant of SQISignHD with non-smooth challenge isogenies
Max Duparc and Tako Boris Fouotsa
We introduce SQIPrime, a post-quantum digital signature scheme based on the Deuring correspondence and Kani's Lemma. Compared to its predecessors SQISign and especially SQISignHD, SQIPrime further expands the use of high-dimensional isogenies, already in use in the verification of SQISignHD, to all its subroutines. In doing so, it no longer relies on smooth-degree isogenies (of dimension 1). Intriguingly, this includes the challenge isogeny, which is also of non-smooth degree but has an accessible kernel. The fact that the isogenies do not have rational kernels allows one to fit more rational power-$2$ torsion points, which are necessary when computing and representing the response isogeny. SQIPrime operates with prime numbers of the form $p = 2^\alpha f-1$. We describe two variants of SQIPrime. SQIPrime4D incorporates the novelties described above and uses dimension-4 isogenies to represent the response isogeny. The runtime of higher-dimensional isogeny computation is exponential in the dimension, hence the smaller the dimension the better for efficiency. The second variant, SQIPrime2D, solely uses dimension-2 isogenies. This is achieved by setting the degree of the secret isogeny to be equal to that of the challenge isogeny and further exploiting Kani's Lemma. SQIPrime2D is more efficient than SQIPrime4D and SQISignHD, at the cost of being comparatively less compact, though still very compact compared to non-isogeny-based post-quantum signatures.
Last updated:  2024-12-19
Reducing the Share Size of Weighted Threshold Secret Sharing Schemes via Chow Parameters Approximation
Oriol Farràs and Miquel Guiot
A secret sharing scheme is a cryptographic primitive that allows a dealer to share a secret among a set of parties, so that only authorized subsets of them can recover it. The access structure of the scheme is the family of authorized subsets. In a weighted threshold access structure, each party is assigned a weight according to its importance, and the authorized subsets are those in which the sum of their weights is at least the threshold value. For these access structures, the share size of the best known secret sharing schemes is either linear in the weights or quasipolynomial in the number of parties, which in general leads to long shares. In certain settings, a way to circumvent this efficiency problem is to approximate the access structure by another one that admits more efficient schemes. This work is dedicated to the open problem posed by this strategy: finding secret sharing schemes with a good tradeoff between the efficiency and the accuracy of the approximation. We present a method to approximate weighted threshold access structures by others that admit schemes with small shares. This method is based on the techniques for the approximation of the Chow parameters developed by De et al. [Journal of the ACM, 2014]. Our method provides secret sharing schemes with share size $n^{1+o(1)}$, where $n$ is the number of parties, and whose access structure is close to the original one. Namely, in this approximation the condition of being authorized or not is preserved for almost all subsets of parties. In addition, we apply the recent results on computational secret sharing schemes by Applebaum et al. [STOC, 2023] to construct computational secret sharing schemes whose share size is polylogarithmic in the number of parties.
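As a minimal illustration of the objects involved (illustrative weights, not a scheme), the following sketch evaluates a weighted threshold access structure:

# A weighted threshold access structure: a subset of parties is authorized
# iff the sum of its weights reaches the threshold.
def is_authorized(weights, subset, threshold):
    return sum(weights[p] for p in subset) >= threshold

weights = {"A": 4, "B": 2, "C": 1, "D": 1}         # illustrative weights
print(is_authorized(weights, {"A", "B"}, 5))       # True:  4 + 2 >= 5
print(is_authorized(weights, {"B", "C", "D"}, 5))  # False: 2 + 1 + 1 < 5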
Last updated:  2024-10-04
SQIsign2D-East: A New Signature Scheme Using 2-dimensional Isogenies
Kohei Nakagawa and Hiroshi Onuki
Isogeny-based cryptography comprises cryptographic schemes whose security is based on the hardness of a mathematical problem called the isogeny problem, and it is attracting attention as a candidate for post-quantum cryptography. A representative isogeny-based scheme is the signature scheme SQIsign, which was submitted to the NIST PQC standardization competition. SQIsign has attracted much attention because of its very short signature and key sizes among the candidates for NIST PQC standardization. Recently, many new schemes have been proposed that use high-dimensional isogenies. Among them, the signature scheme SQIsignHD has an even shorter signature size than SQIsign. However, it requires 4-dimensional isogeny computations for signature verification. In this paper, we propose a new signature scheme, SQIsign2D-East, which requires only 2-dimensional isogeny computations for verification, thus reducing the computational cost of verification. First, we generalize an algorithm called RandIsogImg, which computes a random isogeny of non-smooth degree. Then, using this generalized RandIsogImg, we construct the new signature scheme SQIsign2D-East.
Last updated:  2024-11-28
Sublinear-Round Broadcast without Trusted Setup
Andreea B. Alexandru, Julian Loss, Charalampos Papamanthou, Giorgos Tsimos, and Benedikt Wagner
Byzantine broadcast is one of the fundamental problems in distributed computing. Many of its practical applications, from multiparty computation to consensus mechanisms for blockchains, require increasingly weaker trust assumptions, as well as scalability for an ever-growing number of users $n$. This rules out existing solutions which run in a linear number of rounds in $n$ or rely on trusted setup requirements. In this paper, we propose the first sublinear-round and trustless Byzantine broadcast protocol for the dishonest majority setting. Unlike previous sublinear-round protocols, our protocol assumes neither the existence of a trusted dealer who honestly issues keys and correlated random strings to the parties nor random oracles. Instead, we present a solution whose setup is limited to an unstructured uniform reference string and a plain public key infrastructure (a.k.a. bulletin-board PKI). Our broadcast protocol builds on top of a \emph{moderated gradecast} protocol which parties can use to reach weak agreement on shared random strings. Using these strings, we can then run in an unbiased fashion a committee-based Byzantine protocol, similar to that of Chan et al. (PKC 2020), which terminates in a sublinear number of rounds. To this end, we propose a novel construction for committee election, which does not rely either on random oracles or on a trusted dealer, and uses NIZKs and time-lock puzzles. Our protocol is resilient against an adaptive adversary who corrupts any constant fraction of parties.
Last updated:  2024-05-23
Time-Based Cryptography From Weaker Assumptions: Randomness Beacons, Delay Functions and More
Damiano Abram, Lawrence Roy, and Mark Simkin
The assumption that certain computations inherently require some sequential time has established itself as a powerful tool for cryptography. It allows for security and liveness guarantees in distributed protocols that are impossible to achieve with classical hardness assumptions. Unfortunately, many constructions from the realm of time-based cryptography are based on new and poorly understood hardness assumptions, which tend not to stand the test of time (cf. Leurent et al. 2023, Peikert & Tang 2023). In this work, we make progress on several fronts. We formally define the concept of a delay function and present a construction thereof from minimal assumptions. We show that these functions, in combination with classical cryptographic objects that satisfy certain efficiency criteria, would allow for constructing delay encryption, which is otherwise only known to exist based on a new hardness assumption about isogenies. We formally define randomness beacons as they are used in the context of blockchains, and we show that (linearly homomorphic) time-lock puzzles allow for efficiently constructing them. Our work puts time-based cryptography on a firmer theoretical footing, provides new constructions from simpler assumptions, and opens new avenues for constructing delay encryption.
Last updated:  2024-05-20
The Ouroboros of ZK: Why Verifying the Verifier Unlocks Longer-Term ZK Innovation
Denis Firsov and Benjamin Livshits
Verifying the verifier in the context of zero-knowledge proofs is an essential part of ensuring the long-term integrity of the zero-knowledge ecosystem. This is vital for zero-knowledge rollups as well as other industrial applications of ZK. In addition to further minimizing the required trust and reducing the trusted computing base (TCB), having a verified verifier opens the door to decentralized proof generation by potentially untrusted parties. We outline a research program and justify the need for more work at the intersection of ZK and formal verification research.
Last updated:  2024-05-30
Bootstrapping Bits with CKKS
Youngjin Bae, Jung Hee Cheon, Jaehyung Kim, and Damien Stehlé
The Cheon-Kim-Kim-Song (CKKS) fully homomorphic encryption scheme is designed to efficiently perform computations on real numbers in an encrypted state. Recently, Drucker et al. [J. Cryptol.] proposed an efficient strategy for using CKKS in a black-box manner to perform computations on binary data. In this work, we introduce several CKKS bootstrapping algorithms designed specifically for ciphertexts encoding binary data. Crucially, the new CKKS bootstrapping algorithms make it possible to bootstrap ciphertexts containing the binary data in the most significant bits. First, this allows us to decrease the moduli used in bootstrapping, saving a larger share of the modulus budget for non-bootstrapping operations. In particular, we obtain full-slot bootstrapping in ring degree $2^{14}$ for the first time. Second, the ciphertext format is compatible with the one used in the DM/CGGI fully homomorphic encryption schemes. Interestingly, we may combine our CKKS bootstrapping algorithms for bits with the fast ring packing technique from Bae et al. [CRYPTO'23]. This leads to a new bootstrapping algorithm for DM/CGGI that outperforms the state-of-the-art approaches when the number of bootstraps to be performed simultaneously is in the low hundreds.
Last updated:  2025-02-17
Breaking Verifiable Delay Functions in the Random Oracle Model
Ziyi Guan, Artur Riazanov, and Weiqiang Yuan
This work resolves the open problem of whether verifiable delay functions (VDFs) can be constructed in the random oracle model. A VDF is a cryptographic primitive that requires a long time to compute (even with parallelization), but produces a unique output that is efficiently and publicly verifiable. We prove that VDFs do not exist in the random oracle model. This also rules out black-box constructions of VDFs from other cryptographic primitives, such as one-way functions, one-way permutations and collision-resistant hash functions. Prior to our work, Mahmoody, Smith and Wu (ICALP 2020) proved that \emph{perfectly unique} VDFs (a much stronger form of VDFs) do not exist in the random oracle model; on the other hand, Ephraim, Freitag, Komargodski, and Pass (Eurocrypt 2020) constructed VDFs in the random oracle model assuming the hardness of repeated squaring. Our result is optimal -- we bridge the current gap between previously known impossibility results and existing constructions. We also initiate the study of \emph{proof-of-work functions}, a new cryptographic primitive that shares similarities with both VDFs and proofs of work. We show that a stronger form of it does not exist in the random oracle model, leaving open the fascinating possibility of a random-oracle-based construction.
Last updated:  2025-02-14
Scalable Multi-Server Private Information Retrieval
Ashrujit Ghoshal, Baitian Li, Yaohua Ma, Chenxin Dai, and Elaine Shi
We revisit multi-server Private Information Retrieval (PIR), where the client interacts with $S$ non-colluding servers. Ideally, we want a *scalable* family of multi-server PIR schemes where all the performance metrics of the scheme decrease as $S$ increases. However, no prior work achieved scalability in any setting, under any hardness assumption. In this paper we construct new multi-server, information-theoretically secure *scalable* PIR schemes for three natural settings. First, we give a construction where all the performance metrics scale at an equal rate. Second, we give a scalable construction that minimizes the per-query bandwidth. Third, we give a scalable construction that minimizes the per-query online bottleneck cost (the maximum of the bandwidth and computation). For the first two settings, our constructions are *doubly efficient* with only a super-constant number of servers. In comparison, the best known prior works in the information-theoretic setting required super-logarithmically many servers to achieve the doubly efficient notion. Our techniques for achieving scalable PIR also enable us to advance the state of the art in the polynomial space setting. In this setting, we show how to improve the space consumption of prior works by a polynomial factor while preserving all other metrics. Further, we show a new balancing technique that allows us to further minimize the bandwidth per query by trading off the computation and server space, thus enabling a smoother tradeoff between the metrics and generalizing the design space.
Last updated:  2024-07-01
Decentralized Multi-Client Functional Encryption with Strong Security
Ky Nguyen, David Pointcheval, and Robert Schädlich
Decentralized Multi-Client Functional Encryption (DMCFE) extends basic functional encryption to multiple clients that do not trust each other. The clients can independently encrypt their plaintext-inputs, to be given for evaluation to the function embedded in the functional decryption key, which is defined by multiple parameter-inputs. They keep control over these functions, as they all have to contribute to the generation of the functional decryption keys. Tags can be used in the ciphertexts and the keys to specify which inputs can be combined together. Like any encryption scheme, DMCFE provides privacy of the plaintexts. But the functions associated with the functional decryption keys might be sensitive too (e.g., a model in machine learning). The function-hiding property has thus been introduced to additionally protect the function evaluated during the decryption process. In this paper, we provide new proof techniques to analyze a new concrete construction of function-hiding DMCFE for inner products, with strong security guarantees in the random oracle model: the adversary can adaptively query multiple challenge ciphertexts and multiple challenge keys, with unbounded repetitions of the same message tags in the ciphertext-queries and a fixed polynomially-large number of repetitions of the same key tags in the key-queries, allowing static corruption of the secret encryption keys. Previous constructions were proven secure in the selective setting only.
Last updated:  2025-03-11
Differential Analysis of Feistel Ciphers Incorporating Ajtai SIS Hash Function
Yu Morishima and Masahiro Kaminaga
This paper presents a framework for evaluating the differential cryptanalysis resistance of a Feistel cipher that uses the Ajtai SIS hash function as its S-box. We derive an upper bound on the maximum differential probability and analyze the S-box output bias using a generalized extreme value (GEV) model. Simulation results indicate that practical security is achieved with 16 rounds for a 32-bit block and six rounds for a 128-bit block.
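As a toy illustration of the kind of construction being analyzed (all parameters are placeholders, and the bit-truncation of the hash output is our own simplification), one Feistel round with an Ajtai-style SIS hash as round function might look as follows:

import numpy as np

rng = np.random.default_rng(0)
q, half = 257, 16                      # toy modulus and half-block size (bits)
A = rng.integers(0, q, size=(half, half))  # fixed random SIS matrix

def sis_round(x_bits):
    y = (A @ x_bits) % q               # Ajtai SIS hash: x -> A x mod q
    return y % 2                       # truncate to one bit per coordinate

def feistel_round(L, R):
    return R, (L + sis_round(R)) % 2   # XOR of bit vectors

L = rng.integers(0, 2, half)
R = rng.integers(0, 2, half)
for _ in range(16):                    # 16 rounds for a 32-bit block, as above
    L, R = feistel_round(L, R)
print(L, R)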
Last updated:  2024-10-04
Constant-Cost Batched Partial Decryption in Threshold Encryption
Sora Suegami, Shinsaku Ashizawa, and Kyohei Shibano
Threshold public key encryption schemes distribute secret keys among multiple parties, known as the committee, to reduce reliance on a single trusted entity. However, existing schemes face inefficiencies, as the committee must perform computation and communication for the decryption of each individual ciphertext. As the number of ciphertexts being decrypted per unit of time increases, this can limit the number of committee parties and their decentralization due to increased hardware requirements, heightening the risk of adversarial collusion. To address this, we introduce tag-based batched threshold encryption (TBTE), which ensures constant computational and communication costs per committee member, independent of the number of ciphertexts being decrypted in batch under distinct decryption policies. The TBTE scheme is constructed over bilinear groups in the random oracle model and secure in the algebraic group model, assuming the hardness of the $(q_1,q_2)$-discrete logarithm problem and the EAV-security of the symmetric-key encryption scheme. Evaluation of our implementation demonstrates constant data size, specifically 48 bytes received and 56 bytes sent, and constant execution time for each committee party during decryption, even for batch sizes up to $2^{20}$.
Last updated:  2025-03-17
Enabling Lattice-based Authentication Encrypted Search with Ciphertext Broadcast for Cloud Storage
Yibo Cao, Shiyuan Xu, Xiu-Bo Chen, Gang Xu, Siu-Ming Yiu, and Zongpeng Li
The development of cloud computing facilitates outsourced data sharing and storage, but also brings up several security issues. Public key authenticated encryption with keyword search (PAEKS) enables encrypted search over cloud data while resisting insider keyword guessing attacks (IKGAs). However, existing PAEKS schemes are limited to a single receiver, restricting their application prospects in cloud storage. In addition, quantum computing attacks and key leakage issues further threaten data security, which has attracted extensive attention from researchers. Designing an encrypted search scheme that resists all of the above-mentioned attacks therefore remains an open challenge. In this paper, we first propose BroSearch, a lattice-based authenticated encrypted search scheme with ciphertext broadcast. It utilizes lattice sampling algorithms to authenticate the keyword and offers searchability over broadcast ciphertexts while resisting IKGAs in a quantum setting. To get around key leakage issues, we then incorporate the minimal cover set technique and a lattice basis extension algorithm to construct FS-BroSearch, an enhanced version. Furthermore, we give a rigorous security analysis (IND-CKA and IND-IKGA) and a comprehensive performance evaluation of both schemes. Specifically, the time cost of BroSearch is only 0.61, 0.82, and 0.83 times that of prior art in terms of ciphertext calculation, trapdoor generation, and search procedures, respectively, which is practical and effective for cloud storage.
Last updated:  2025-01-17
SQIsign2D-West: The Fast, the Small, and the Safer
Andrea Basso, Luca De Feo, Pierrick Dartois, Antonin Leroux, Luciano Maino, Giacomo Pope, Damien Robert, and Benjamin Wesolowski
We introduce SQIsign2D-West, a variant of SQIsign using two-dimensional isogeny representations. SQIsignHD was the first variant of SQIsign to use higher-dimensional isogeny representations. Its eight-dimensional variant is geared towards provable security but is deemed impractical. Its four-dimensional variant is geared towards efficiency and has significantly faster signing times than SQIsign, but slower verification owing to the complexity of the four-dimensional representation. Its authors commented on the apparent difficulty of getting any improvement over SQIsign by using two-dimensional representations. In this work, we introduce new algorithmic tools that make two-dimensional representations a viable alternative. These lead to a signature scheme with sizes comparable to SQIsignHD, slightly slower signing than SQIsignHD but still much faster than SQIsign, and the fastest verification of any known variant of SQIsign. We achieve this without compromising on the security proof: the assumptions behind SQIsign2D-West are similar to those of the eight-dimensional variant of SQIsignHD. Additionally, like SQIsignHD, SQIsign2D-West favourably scales to high levels of security. Concretely, for NIST level I we achieve signing times of 80 ms and verification times of 4.5 ms, using optimised arithmetic based on intrinsics available on the Ice Lake architecture. For NIST level V, we achieve 470 ms for signing and 31 ms for verification.
Last updated:  2024-06-28
Watermarking Language Models for Many Adaptive Users
Aloni Cohen, Alexander Hoover, and Gabe Schoenbach
We study watermarking schemes for language models with provable guarantees. As we show, prior works offer no robustness guarantees against adaptive prompting: when a user queries a language model more than once, as even benign users do. And with just a single exception (Christ and Gunn, 2024), prior works are restricted to zero-bit watermarking: machine-generated text can be detected as such, but no additional information can be extracted from the watermark. Unfortunately, merely detecting AI-generated text may not prevent future abuses. We introduce multi-user watermarks, which allow tracing model-generated text to individual users or to groups of colluding users, even in the face of adaptive prompting. We construct multi-user watermarking schemes from undetectable, adaptively robust, zero-bit watermarking schemes (and prove that the undetectable zero-bit scheme of Christ, Gunn, and Zamir (2024) is adaptively robust). Importantly, our scheme provides both zero-bit and multi-user assurances at the same time. It detects shorter snippets just as well as the original scheme, and traces longer excerpts to individuals. The main technical component is a construction of message-embedding watermarks from zero-bit watermarks. Ours is the first generic reduction between watermarking schemes for language models. A challenge for such reductions is the lack of a unified abstraction for robustness --- that marked text is detectable even after edits. We introduce a new unifying abstraction called AEB-robustness. AEB-robustness provides that the watermark is detectable whenever the edited text "approximates enough blocks" of model-generated output.
Last updated:  2024-05-17
Admissible Parameters for the Crossbred Algorithm and Semi-regular Sequences over Finite Fields
John Baena, Daniel Cabarcas, Sharwan K. Tiwari, Javier Verbel, and Luis Villota
Multivariate public key cryptography (MPKC) is one of the most promising alternatives to build quantum-resistant signature schemes, as evidenced in NIST's call for additional post-quantum signature schemes. The main assumption in MPKC is the hardness of the Multivariate Quadratic (MQ) problem, which seeks a common root of a system of quadratic polynomials over a finite field. Although the Crossbred algorithm is among the most efficient algorithms to solve MQ over small fields, its complexity analysis stands on shaky ground. In particular, it is not clear for which parameters it works and under what assumptions. In this work, we provide a rigorous analysis of the Crossbred algorithm over any finite field. We provide a complete explanation of the series of admissible parameters proposed in previous literature and explicitly state the regularity assumptions required for its validity. Moreover, we show that the series does not tell the whole story, hence we propose an additional condition for Crossbred to work. Additionally, we define and characterize a notion of regularity for systems over a small field, which is one of the main building blocks in the series of admissible parameters.
Last updated:  2025-01-15
Formal Definition and Verification for Combined Random Fault and Random Probing Security
Sonia Belaid, Jakob Feldtkeller, Tim Güneysu, Anna Guinet, Jan Richter-Brockmann, Matthieu Rivain, Pascal Sasdrich, and Abdul Rahman Taleb
In our highly digitalized world, an adversary is not constrained to purely digital attacks but can monitor or influence the physical execution environment of a target computing device. Such side-channel or fault-injection analysis poses a significant threat to otherwise secure cryptographic implementations. Hence, it is important to consider additional adversarial capabilities when analyzing the security of cryptographic implementations besides the default black-box model. For side-channel analysis, this is done by providing the adversary with knowledge of some internal values, while for fault-injection analysis the capabilities of the adversaries include manipulation of some internal values. In this work, we extend probabilistic security models for physical attacks by introducing a general random probing model and a general random fault model to capture arbitrary leakage and fault distributions, as well as the combination of these models. Our aim is to enable a more accurate modeling of low-level physical effects. We then analyze important properties, such as the impact of adversarial knowledge on faults and compositions, and provide tool-based formal verification methods that allow the security assessment of design components. These methods are introduced as extensions of the previous tools VERICA and IronMask, and are implemented, evaluated, and compared.
Last updated:  2024-05-17
(Strong) aPAKE Revisited: Capturing Multi-User Security and Salting
Dennis Dayanikli and Anja Lehmann
Asymmetric Password-Authenticated Key Exchange (aPAKE) protocols, particularly Strong aPAKE (saPAKE), have enjoyed significant attention, both from academia and industry, with the well-known OPAQUE protocol currently undergoing standardization. In (s)aPAKE, a client and a server collaboratively establish a high-entropy key, relying on a previously exchanged password for authentication. A main feature is its resilience against offline and precomputation (for saPAKE) attacks. OPAQUE, as well as most other aPAKE protocols, have been designed and analyzed in a single-user setting, i.e., modelling that only a single user interacts with the server. By the composition framework of UC, security for the actual multi-user setting is then conjectured. As any real-world (s)aPAKE instantiation will need to cater to multiple users, this introduces a dangerous gap in which developers are tasked to extend the single-user protocol securely and in a UC-compliant manner. In this work, we extend the (s)aPAKE definition to directly model the multi-user setting, and explicitly capture the impact that a server compromise has across user accounts. We show that the currently standardized multi-user version of OPAQUE might not provide the expected security, as it is insecure against offline attacks as soon as the file for one user in the system is compromised. This is due to using shared state among different users, which violates the UC composition framework. However, we show that another change introduced in the standardization draft which also involves a shared state does not compromise security. When extending the aPAKE security in the multi-client setting, we notice that the widely used security definition captures significantly weaker security guarantees than what is offered by many protocols. Essentially, the aPAKE definition assumes that the server stores unsalted password-hashes, whereas several protocols explicitly use a salt to protect against precomputation attacks. We therefore propose a definitional framework that captures different salting approaches -- thus showing that the security gap between aPAKE and saPAKE can be smaller than expected.
Last updated:  2024-05-17
Efficient Second-Order Masked Software Implementations of Ascon in Theory and Practice
Barbara Gigerl, Florian Mendel, Martin Schläffer, and Robert Primas
In this paper, we present efficient protected software implementations of the authenticated cipher Ascon, the recently announced winner of the NIST standardization process for lightweight cryptography. Our implementations target theoretical and practical security against second-order power analysis attacks. First, we propose an efficient second-order extension of a previously presented first-order masking of the Keccak S-box that does not require online randomness. The extension itself is inspired by a previously presented second-order masking of an AND-XOR construction. We then discuss implementation tricks that further improve performance and reduce the chance of unintended combination of shares during the execution of masked software on microprocessors. This allows us to retain the theoretical protection orders of masking in practice with low performance overhead, which we also confirm via TVLA on ARM microprocessors. The formal correctness of our designs is additionally verified using Coco on the netlist of a RISC-V IBEX core. We benchmark our masked software designs on 32-bit ARM and RISC-V microprocessor platforms. On both platforms, we can perform Ascon-128 authenticated encryption with a throughput of about 300 or 550 cycles/byte when operating on 2 or 3 shares. When utilizing a leveled implementation technique, the performance of our masked implementations generally improves to about 90 cycles/byte. We publish our masked software implementations together with a generic software framework for evaluating performance and side-channel resistance of various masked cryptographic implementations.
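For readers unfamiliar with share-based masking, here is a minimal first-order domain-oriented masked AND, the kind of building block that second-order extensions such as the one above generalize (a generic sketch, not the paper's exact gadget):

import secrets

# Each secret bit x is split into shares x0, x1 with x = x0 XOR x1, so that
# no single share reveals x; r is fresh randomness masking the cross terms.
def masked_and(a0, a1, b0, b1):
    r = secrets.randbits(1)
    z0 = (a0 & b0) ^ ((a0 & b1) ^ r)   # cross-domain term freshly remasked
    z1 = (a1 & b1) ^ ((a1 & b0) ^ r)
    return z0, z1                      # z0 XOR z1 == (a0^a1) & (b0^b1)

a0, a1, b0, b1 = 1, 0, 1, 1            # shares of a = 1 and b = 0
z0, z1 = masked_and(a0, a1, b0, b1)
assert (z0 ^ z1) == ((a0 ^ a1) & (b0 ^ b1))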
Last updated:  2025-01-23
Adversary Resilient Learned Bloom Filters
Allison Bishop and Hayder Tirmazi
The Learned Bloom Filter is a recently proposed data structure that combines the Bloom Filter with a Learning Model while preserving the Bloom Filter's one-sided error guarantees. Creating an adversary-resilient construction of the Learned Bloom Filter with provable guarantees is an open problem. We define a strong adversarial model for the Learned Bloom Filter. Our adversarial model extends an existing adversarial model designed for the Classical (i.e. not ``Learned'') Bloom Filter in prior work and considers computationally bounded adversaries that run in probabilistic polynomial time (PPT). Using our model, we construct an adversary-resilient variant of the Learned Bloom Filter called the Downtown Bodega Filter. We show that if pseudo-random permutations exist, then an adversary-resilient Learned Bloom Filter may be constructed with $2\lambda$ extra bits of memory and at most one extra pseudo-random permutation in the critical path. We construct a hybrid adversarial model for the case where a fraction of the query workload is chosen by an adversary. We show realistic scenarios where using the Downtown Bodega Filter gives better performance guarantees compared to alternative approaches in this hybrid model.
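As background, a minimal (non-adversarial) Learned Bloom Filter sketch: a learned model pre-filters queries and a backup Bloom filter catches the model's false negatives, preserving one-sided error. The model, threshold tau, and sizes are hypothetical; the paper's Downtown Bodega Filter hardens this structure.

import hashlib

class BloomFilter:
    def __init__(self, m, k):
        self.m, self.k, self.bits = m, k, bytearray(m)
    def _idx(self, item, i):
        h = hashlib.sha256(f"{i}:{item}".encode()).digest()
        return int.from_bytes(h[:8], "big") % self.m
    def add(self, item):
        for i in range(self.k):
            self.bits[self._idx(item, i)] = 1
    def query(self, item):
        return all(self.bits[self._idx(item, i)] for i in range(self.k))

class LearnedBloomFilter:
    def __init__(self, model, tau, backup):
        self.model, self.tau, self.backup = model, tau, backup
    def insert(self, item):
        if self.model(item) < self.tau:    # model would miss this item:
            self.backup.add(item)          # store it in the backup filter
    def query(self, item):
        return self.model(item) >= self.tau or self.backup.query(item)

lbf = LearnedBloomFilter(model=lambda s: len(s) / 10, tau=0.5,
                         backup=BloomFilter(m=1024, k=3))
lbf.insert("key1")        # model score 0.4 < tau, so it goes to the backup
print(lbf.query("key1"))  # True; inserted items are never false negatives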
Last updated:  2025-01-27
Summation-based Private Segmented Membership Test from Threshold-Fully Homomorphic Encryption
Nirajan Koirala, Jonathan Takeshita, Jeremy Stevens, and Taeho Jung
In many real-world scenarios, there are cases where a client wishes to check if a data element they hold is included in a set segmented across a large number of data holders. To protect user privacy, the client's query and the data holders' sets should remain encrypted throughout the whole process. Prior work on Private Set Intersection (PSI), Multi-Party PSI (MPSI), Private Membership Test (PMT), and Oblivious RAM (ORAM) falls short in this scenario in many ways. It either requires data holders to possess the sets in plaintext, incurs prohibitively high latency when aggregating results from a large number of data holders, leaks information about the party holding the intersection element, or induces a high false-positive rate. This paper introduces the primitive of a Private Segmented Membership Test (PSMT). We give a basic construction of a protocol to solve PSMT using a threshold variant of approximate-arithmetic homomorphic encryption and show how to overcome existing challenges to construct a PSMT protocol without leaking information about the party holding the intersection element or incurring false positives for a large number of data holders, ensuring IND-CPA^D security. Our novel approach is superior to existing state-of-the-art approaches in scalability with regard to the number of supported data holders. This is enabled by a novel summation-based homomorphic membership check rather than a product-based one, as well as various novel ideas addressing technical challenges. Our PSMT protocol supports many more parties (up to 4096 in experiments) compared to prior related work that supports only around 100 parties efficiently. Our experimental evaluation shows that our method's aggregation of results from data holders can run in 92.5 s for 1024 data holders and a set size of $2^{25}$, and our method's overhead increases very slowly with an increasing number of senders. We also compare our PSMT protocol to other state-of-the-art PSI and MPSI protocols and discuss our improvements in usability with a better privacy model and a larger number of parties.
Last updated:  2024-08-06
More Embedded Curves for SNARK-Pairing-Friendly Curves
Aurore Guillevic
Embedded curves are elliptic curves defined over a prime field whose order (characteristic) is the prime subgroup order (the scalar field) of a pairing-friendly curve. Embedded curves have a large prime-order subgroup of cryptographic size but are not pairing-friendly themselves. Sanso and El Housni published families of embedded curves for BLS pairing-friendly curves. Their families are parameterized by polynomials, like families of pairing-friendly curves are. However, their work did not find embedded families for KSS pairing-friendly curves. In this note we show how the problem of finding families of embedded curves is related to the problem of finding optimal formulas for $\mathbb{G}_1$ subgroup membership testing on the pairing-friendly curve side. Then we apply Smith's technique and the criteria of Dai, Lin, Zhao, and Zhou (DLZZ) to obtain formulas for embedded curves with KSS, and outline a generic algorithm for solving this problem in all cases. We provide two families of embedded curves of prime order for KSS18 that can form a plain cycle, and give examples of cryptographic size. We also give families of even-order $j=1728$ embedded curves for KSS16 with examples. We also suggest alternative embedded curves for BLS that have a seed of much lower Hamming weight than Sanso et al.'s and a much higher 2-valuation for fast FFT. In particular we highlight BLS12 curves which have a prime-order embedded curve that forms a plain cycle (no pairing), and a second (plain) embedded curve in Montgomery form. A Brezing-Weng outer curve to obtain a pairing-friendly 2-chain is also possible, as in the BLS12-377-BW6-761 construction. All curves have $j$-invariant 0 and an endomorphism for faster arithmetic on the curve side.
Last updated:  2024-05-16
Simultaneous Haar Indistinguishability with Applications to Unclonable Cryptography
Prabhanjan Ananth, Fatih Kaleoglu, and Henry Yuen
Unclonable cryptography is concerned with leveraging the no-cloning principle to build cryptographic primitives that are otherwise impossible to achieve classically. Understanding the feasibility of unclonable encryption, one of the key unclonable primitives, satisfying indistinguishability security in the plain model has been a major open question in the area. So far, the existing constructions of unclonable encryption are either in the quantum random oracle model or are based on new conjectures. We present a new approach to unclonable encryption via a reduction to a novel question about nonlocal quantum state discrimination: how well can non-communicating -- but entangled -- players distinguish between different distributions over quantum states? We call this task simultaneous state indistinguishability. Our main technical result is showing that the players cannot distinguish between each player receiving independently-chosen Haar random states versus all players receiving the same Haar random state. We leverage this result to present the first construction of unclonable encryption satisfying indistinguishability security, with quantum decryption keys, in the plain model. We also show other implications to single-decryptor encryption and leakage-resilient secret sharing.
Last updated:  2025-02-18
Speeding Up Multi-Scalar Multiplications for Pairing-Based zkSNARKs
Xinxin Fan, Veronika Kuchta, Francesco Sica, and Lei Xu
Multi-scalar multiplication (MSM) is one of the core components of many zero-knowledge proof systems, and a primary performance bottleneck for proof generation in these schemes. One major strategy to accelerate MSM is utilizing precomputation. Several algorithms (e.g., Pippenger and BGMW) and their variants have been proposed in this direction. In this paper, we revisit the recent precomputation-based MSM calculation method proposed by Luo, Fu, and Gong at CHES 2023 and generalize their approach. In particular, we present a general construction of optimal buckets. This improvement leads to significant performance gains, which are verified by both theoretical analysis and experiments.
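To ground the discussion, here is the classic bucket method (Pippenger-style) that such precomputation techniques start from, written additively over plain integers as a stand-in for curve points (the window width c is illustrative):

# Compute sum(s_i * P_i) with c-bit windows and bucket accumulation.
def msm_bucket(scalars, points, c=4):
    result = 0
    windows = (max(scalars).bit_length() + c - 1) // c
    for w in reversed(range(windows)):
        for _ in range(c):                  # shift result by c bits
            result += result                # (doublings, in a real group)
        buckets = [0] * (1 << c)
        for s, P in zip(scalars, points):
            digit = (s >> (w * c)) & ((1 << c) - 1)
            buckets[digit] += P             # one group addition per point
        running, window_sum = 0, 0
        for b in reversed(buckets[1:]):     # sum_j j*buckets[j] via suffix sums
            running += b
            window_sum += running
        result += window_sum
    return result

scalars, points = [13, 7, 22], [3, 5, 11]
assert msm_bucket(scalars, points) == sum(s * p for s, p in zip(scalars, points))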
Last updated:  2024-05-16
Reducing the CRS Size in Registered ABE Systems
Rachit Garg, George Lu, Brent Waters, and David J. Wu
Attribute-based encryption (ABE) is a generalization of public-key encryption that enables fine-grained access control to encrypted data. In (ciphertext-policy) ABE, a central trusted authority issues decryption keys for attributes $x$ to users. In turn, ciphertexts are associated with a decryption policy $\mathcal{P}$. Decryption succeeds and recovers the encrypted message whenever $\mathcal{P}(x) = 1$. Recently, Hohenberger, Lu, Waters, and Wu (Eurocrypt 2023) introduced the notion of registered ABE, which is an ABE scheme without a trusted central authority. Instead, users generate their own public/secret keys (just like in public-key encryption) and then register their keys (and attributes) with a key curator. The key curator is a transparent and untrusted entity. Currently, the best pairing-based registered ABE schemes support monotone Boolean formulas and an a priori bounded number of users $L$. A major limitation of existing schemes is that they require a (structured) common reference string (CRS) of size $L^2 \cdot |\mathcal{U}|$ where $|\mathcal{U}|$ is the size of the attribute universe. In other words, the size of the CRS scales quadratically with the number of users and multiplicatively with the size of the attribute universe. The large CRS makes these schemes expensive in practice and limited to a small number of users and a small universe of attributes. In this work, we give two ways to reduce the CRS size in pairing-based registered ABE schemes. First, we introduce a combinatoric technique based on progression-free sets that enables registered ABE for the same class of policies but with a CRS whose size is sub-quadratic in the number of users. Asymptotically, we obtain a scheme where the CRS size is nearly linear in the number of users $L$ (i.e., $L^{1 + o(1)}$). If we take a more concrete-efficiency-oriented focus, we can instantiate our framework to obtain a construction with a CRS of size $L^{\log_2 3} \approx L^{1.6}$. For instance, in a scheme for 100,000 users, our approach reduces the CRS by a factor of over $115\times$ compared to previous approaches (and without incurring any overhead in encryption/decryption time). Our second approach for reducing the CRS size is to rely on a partitioning-based argument when arguing security of the registered ABE scheme. Previous approaches took a dual-system approach. Using a partitioning-based argument yields a registered ABE scheme where the size of the CRS is independent of the size of the attribute universe. The cost is the resulting scheme satisfies a weaker notion of static security. Our techniques for reducing the CRS size can be combined, and taken together, we obtain a pairing-based registered ABE scheme that supports monotone Boolean formulas with a CRS size of $L^{1 + o(1)}$. Notably, this is the first pairing-based registered ABE scheme that does not require imposing a bound on the size of the attribute universe during setup time. As an additional application, we also show how to apply our techniques based on progression-free sets to the batch argument (BARG) for $\mathsf{NP}$ scheme of Waters and Wu (Crypto 2022) to obtain a scheme with a nearly-linear CRS without needing to rely on non-black-box bootstrapping techniques.
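For intuition about the combinatorial tool: a classical progression-free set construction takes all integers whose base-3 expansion avoids the digit 2; such sets contain no 3-term arithmetic progression. The sketch below illustrates the notion only, not necessarily the paper's exact instantiation.

# Integers below n whose base-3 digits are all 0 or 1: a 3-AP-free set.
def progression_free(n):
    out = []
    for x in range(n):
        ok, y = True, x
        while y:
            if y % 3 == 2:
                ok = False
                break
            y //= 3
        if ok:
            out.append(x)
    return out

S = progression_free(100)
# Sanity check: no distinct a < b < c in S with a + c == 2 * b.
assert not any(a + c == 2 * b for a in S for b in S for c in S if a < b < c)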
Last updated:  2024-05-16
PERK: Compact Signature Scheme Based on a New Variant of the Permuted Kernel Problem
Slim Bettaieb, Loïc Bidoux, Victor Dyseryn, Andre Esser, Philippe Gaborit, Mukul Kulkarni, and Marco Palumbi
In this work we introduce PERK, a compact digital signature scheme based on the hardness of a new variant of the Permuted Kernel Problem (PKP). PERK achieves the smallest signature sizes of any PKP-based scheme for NIST category I security with 6 kB, while obtaining competitive signing and verification timings. PERK also compares well with the general state of the art. To substantiate those claims we provide an optimized constant-time AVX2 implementation, a detailed performance analysis, and different size-performance trade-offs. Technically, our scheme is based on a Zero-Knowledge Proof of Knowledge following the MPC-in-the-Head paradigm and employing the Fiat-Shamir transform. We provide comprehensive security proofs, ensuring EUF-CMA security for PERK in the random oracle model. The efficiency of PERK stems greatly from our particular choice of PKP variant, which allows for an application of the challenge-space amplification technique due to Bidoux-Gaborit (C2SI 2023). Our second main contribution is an in-depth study of the hardness of the introduced problem variant. First, we establish a link between the hardness of our problem variant and the hardness of standard PKP. Then, we initiate an in-depth study of the concrete complexity of solving our variant. We present a novel algorithm which outperforms previous approaches for certain parameter regimes. However, the proximity of our problem variant to the standard variant can be controlled via a specific parameter. This enables us to effectively safeguard against our new attack and potential future extensions by a choice of parameters that ensures only a slight variation from standard PKP.
Last updated:  2024-05-27
Scaling Lattice Sieves across Multiple Machines
Martin R. Albrecht and Joe Rowell
Lattice sieves are algorithms for finding short vectors in lattices. We present an implementation of two such sieves – known as “BGJ1” and “BDGL” in the literature – that scales across multiple servers (with varying success). This class of algorithms requires exponential memory which had put into question their ability to scale across sieving nodes. We discuss our architecture and optimisations and report experimental evidence of the efficiency of our approach.
Last updated:  2025-04-11
The Art of Bonsai: How Well-Shaped Trees Improve the Communication Cost of MLS
Céline Chevalier, Guirec Lebrun, Ange Martinelli, and Jérôme Plût
Messaging Layer Security (MLS) is a Secure Group Messaging protocol that uses for its handshake a binary tree – called a Ratchet Tree – in order to reach a logarithmic communication cost in the number of group members. This Ratchet Tree represents users as its leaves; therefore any change in the group membership results in adding or removing a leaf in the tree. MLS consequently implements what we call a tree evolution mechanism, consisting of a user add algorithm – determining where to insert a new leaf – and a tree expansion process – stating how to increase the size of the tree when no space is available for a new user. The tree evolution mechanism currently used by MLS is designed so that it naturally left-balances the Ratchet Tree. However, such a tree structure is often quite inefficient in terms of communication cost. Furthermore, one may wonder whether the binary Ratchet Tree has a degree optimized for the features of MLS. Therefore, we study in this paper how to improve the communication cost of the handshake in MLS – realized through an operation called a commit – by considering both the tree evolution mechanism and the tree degree used for the Ratchet Tree. To do so, we determine the tree structure that optimizes its communication cost, and we propose algorithms for the user add and tree expansion processes that allow the tree to remain close to that optimal structure, and thus to have a communication cost as close as possible to the optimum. We also determine the Ratchet Tree degree that is best suited to a given set of parameters induced by the encryption scheme used by MLS. This study shows that when using classical (i.e. pre-quantum) ciphersuites, a binary tree is indeed the most appropriate Ratchet Tree; nevertheless, with post-quantum algorithms, it generally becomes more interesting to use instead a ternary tree. Our improvements do not change the TreeKEM protocol and are easy to implement. With parameter sets corresponding to practical ciphersuites, they reduce TreeKEM's communication cost by 5 to 10%. In particular, the gain of 10% appears in the post-quantum setting – when both an optimized tree evolution mechanism and a ternary tree are necessary – which is precisely the context where any optimization of the protocol's communication cost is welcome, due to the large bandwidth of PQ encrypted communication.
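To see the shape of the trade-off, here is a deliberately crude commit-size model (our own simplification, not the paper's formula): shorter high-degree paths trade fewer per-level public keys against more sibling ciphertexts, and the relative sizes of keys and ciphertexts in the chosen ciphersuite decide which degree wins.

# Crude model: each of the levels(n, d) path nodes in a full d-ary tree with
# n leaves carries one fresh public key and (d - 1) ciphertexts. This ignores
# tree evolution, blank nodes and signatures, which the paper's full analysis
# accounts for; sizes are illustrative, in arbitrary units.
def levels(n, d):
    l, cap = 0, 1
    while cap < n:
        cap *= d
        l += 1
    return l

def commit_size(n, d, pk_size, ct_size):
    return levels(n, d) * (pk_size + (d - 1) * ct_size)

n = 1 << 16
print(commit_size(n, 2, 32, 32))  # binary tree:  16 * (32 + 32)     = 1024
print(commit_size(n, 3, 32, 32))  # ternary tree: 11 * (32 + 2 * 32) = 1056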
Last updated:  2024-09-19
FRAST: TFHE-friendly Cipher Based on Random S-boxes
Mingyu Cho, Woohyuk Chung, Jincheol Ha, Jooyoung Lee, Eun-Gyeol Oh, and Mincheol Son
A transciphering framework, also known as hybrid homomorphic encryption, is a practical method of combining a homomorphic encryption (HE) scheme with a symmetric cipher in the client-server model to reduce computational and communication overhead on the client side. As a server homomorphically evaluates a symmetric cipher in this framework, new design rationales are required for ``HE-friendly'' ciphers that take into account the specific properties of the HE schemes. In this paper, we propose a new TFHE-friendly cipher, dubbed $\mathsf{FRAST}$, with a TFHE-friendly round function based on a random S-box to minimize the number of rounds. The round function of $\mathsf{FRAST}$ can be efficiently evaluated in TFHE by a new optimization technique, dubbed double blind rotation. Combined with our new WoP-PBS method, the double blind rotation allows computing multiple S-box calls in the round function of $\mathsf{FRAST}$ at the cost of a single S-box call. In this way, $\mathsf{FRAST}$ enjoys $2.768$ (resp. $10.57$) times higher throughput compared to $\mathsf{Kreyvium}$ (resp. $\mathsf{Elisabeth}$) for TFHE keystream evaluation in the offline phase of the transciphering framework, at the cost of slightly larger communication overhead.
Last updated:  2025-04-16
An NVMe-based Secure Computing Platform with FPGA-based TFHE Accelerator
Yoshihiro Ohba, Tomoya Sanuki, Claude Gravel, Kentaro Mihara, Asuka Wakasugi, and Kenta Adachi
In this study, we introduce a new approach to secure computing by implementing a platform that utilizes a non-volatile memory express (NVMe)-based system with an FPGA-based Torus fully homomorphic encryption (TFHE) accelerator, solid state drive (SSD), and middleware on the host side. Our platform is the first to offer completely secure computing capabilities for TFHE by using an FPGA-based accelerator. We defined secure computing instructions to evaluate 14-bit to 14-bit functions using TFHE. Our middleware allows for the communication of ciphertexts, keys, and secure computing programs while invoking secure computing programs through NVMe commands with metadata. Our performance evaluation demonstrates that our secure computing platform outperforms CPU-based and GPU-based platforms by 15 to 120 times and 2.5 to 3 times, respectively, in gate bootstrapping execution time. Additionally, our platform consumes 7 to 12 times less energy during gate bootstrapping than CPU-based platforms, and 4.95 times less than a GPU-based platform. The performance of a machine learning application running on our platform shows that bootstrapping accounts for more than 80% of the ciphertext learning time.
Last updated:  2024-05-15
Improved Conditional Cube Attacks on Ascon AEADs in Nonce-Respecting Settings -- with a Break-Fix Strategy
Kai Hu
The best-known distinguisher on 7-round Ascon-128 and Ascon-128a AEAD uses a 60-dimensional cube where the nonce bits are set to be equal in the third and fourth rows of the Ascon state during initialization (Rohit et al. ToSC 2021/1). It was not known how to use this distinguisher to mount key-recovery attacks. In this paper, we investigate this problem using a new strategy called \textit{break-fix} for the conditional cube attack. The idea is to introduce slightly-modified cubes which increase the degrees of 7-round output bits to be more than 59 (break phase) and then find key conditions which can bring the degree back to 59 (fix phase). Using this idea, key-recovery attacks on 7-round Ascon-128, Ascon-128a and Ascon-80pq are proposed. The attacks have better time/memory complexities than the existing attacks, and in some cases improve the weak-key attacks as well.
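To illustrate the cube-summing principle underlying such attacks (a toy Boolean function, not Ascon): XOR-summing an output bit over all values of a k-bit cube cancels every monomial that misses some cube variable, leaving only the superpoly of the full cube monomial, whose value can reveal key bits.

from itertools import product

def cube_sum(f, cube_dim, key_bits):
    acc = 0
    for x in product([0, 1], repeat=cube_dim):
        acc ^= f(x, key_bits)
    return acc

# Toy output bit: the only monomial containing both cube variables x0*x1 has
# superpoly k0, so the sum over the 2-dimensional cube equals k0 exactly.
def f(x, k):
    return (x[0] & x[1] & k[0]) ^ (x[0] & k[1]) ^ x[1] ^ k[0]

for k in product([0, 1], repeat=2):
    assert cube_sum(f, 2, k) == k[0]   # the cube sum recovers key bit k0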
Last updated:  2024-07-06
Efficient Universally-Verifiable Electronic Voting with Everlasting Privacy
David Pointcheval
Universal verifiability is a must-have for electronic voting schemes. It is essential to ensure honest behavior of all the players during the whole process, together with eligibility. However, it should not endanger the privacy of the individual votes, which is another major requirement. Whereas the first property prevents attacks during the voting process, privacy of the votes should hold forever, which has been called everlasting privacy. A classical approach for universal verifiability is to add some proofs together with the encrypted votes, which requires publication of the latter, while eligibility needs a link between the votes and the voters: it definitely excludes long-term privacy. An alternative is the use of perfectly-hiding commitments, on which proofs are published, while ciphertexts are kept private for computing the tally. In this paper, we show how recent linearly-homomorphic signatures can be exploited for all the proofs, leading to very efficient procedures towards universal verifiability with both strong receipt-freeness and everlasting privacy. Privacy will indeed be unconditional, after the publication of the results and the proofs, whereas the soundness of the proofs holds in the algebraic group model and the random oracle model.
Last updated:  2024-05-15
A Deniability Analysis of Signal's Initial Handshake PQXDH
Rune Fiedler and Christian Janson
Many use messaging apps such as Signal to exercise their right to private communication. To cope with the advent of quantum computing, Signal employs a new initial handshake protocol called PQXDH for post-quantum confidentiality, yet keeps guarantees of authenticity and deniability classical. Compared to its predecessor X3DH, PQXDH includes a KEM encapsulation and a signature on the ephemeral key. In this work we show that PQXDH does not meet the same deniability guarantees as X3DH due to the signature on the ephemeral key. Our analysis relies on plaintext awareness of the KEM, which Signal's implementation of PQXDH does not provide. As for X3DH, both parties (initiator and responder) obtain different deniability guarantees due to the asymmetry of the protocol. For our analysis of PQXDH, we introduce a new model for deniability of key exchange that allows a more fine-grained analysis. Our deniability model picks up on the ideas of prior work and facilitates new combinations of deniability notions, such as deniability against malicious adversaries in the big brother model, i.e. where the distinguisher knows all secret keys. Our model may be of independent interest.
Last updated:  2025-02-19
Multi-Client Functional Encryption with Public Inputs and Strong Security
Ky Nguyen, Duong Hieu Phan, and David Pointcheval
Recent years have witnessed a significant development for functional encryption (FE) in the multi-user setting, particularly with multi-client functional encryption (MCFE). The challenge becomes more important when combined with access control, such as attribute-based encryption (ABE), which was covered neither syntactically by the public-key FE framework nor semantically by the secret-key MCFE framework. On the other hand, as for complex primitives, many works have studied the admissibility of adversaries to ensure that the security model encompasses all real threats of attacks. 1. At a conceptual level, by adding a public input to FE/MCFE, we cover many previous primitives, notably attribute-based function classes. Furthermore, with the strongest admissibility for the inner-product functionality, our framework is quite versatile, as it encrypts multiple sub-vectors, allows repetitions and corruptions, and eventually also encompasses public-key FE and classical ABE, bridging the $\mathit{private}$ setting of MCFE with the $\mathit{public}$ setting of FE and ABE. 2. Finally, we propose an MCFE with public inputs for the class of functions that combines inner products (on private inputs) and attribute-based access control (on public inputs) for LSSS policies. We achieve the first AB-MCFE for inner products with strong admissibility (from Nguyen et al., ACNS'23) and with adaptive security. In the end, our concrete MCFE leads to MIFE for inner products, public-key single-input inner-product FE with LSSS key-policy, and KP-ABE for LSSS, all with adaptive security. Previous AB-MCFE constructions are either restricted in terms of weaker admissibility (Nguyen et al., ASIACRYPT'22) or consider a slightly larger functionality of attribute-weighted sums but with only selective security (Agrawal et al., CRYPTO'23).
Last updated:  2024-05-15
BGJ15 Revisited: Sieving with Streamed Memory Access
Ziyu Zhao, Jintai Ding, and Bo-Yin Yang
The focus of this paper is to tackle the issue of memory access within sieving algorithms for lattice problems. We have conducted an in-depth analysis of an optimized BGJ sieve (Becker-Gama-Joux 2015), and our findings suggest that its inherent structure is significantly more memory-efficient compared to the asymptotically fastest BDGL sieve (Becker-Ducas-Gama-Laarhoven 2016). Specifically, it requires merely $2^{0.2075n + o(n)}$ streamed (non-random) main memory accesses for the execution of an $n$-dimensional sieve. We also provide evidence that the time complexity of this refined BGJ sieve could potentially be $2^{0.292n + o(n)}$, or at least remarkably close to it; indeed, it outperforms the BDGL sieve in all practically achievable dimensions. We hope that this study will contribute to the resolution of the ongoing debate regarding the measurement of RAM access overhead in large-scale, sieving-based lattice attacks. These claims are supported by our implementation: we provide a CPU-based implementation of the refined BGJ sieve, highly efficient in both time and memory, within an optimized sieving framework. This implementation yields approximately 40% savings in RAM usage and is at least $2^{4.5}$ times more efficient in terms of gate count compared to the previous 4-GPU implementation (Ducas-Stevens-Woerden 2021). Notably, we have solved the 183-dimensional SVP Darmstadt Challenge in 30 days using a 112-core server and approximately 0.87TB of RAM. Since the majority of previous sieving-based SVP computations relied on the HK3-sieve (Herold-Kirshanova 2017), this implementation offers further insight into the behavior of these asymptotically faster sieving algorithms when applied to large-scale problems. Moreover, our refined cost estimation of SVP based on this implementation suggests that some of the NIST PQC candidates, such as Falcon-512, are unlikely to achieve NIST's security requirements.
Last updated:  2024-05-14
Quantum Key-Revocable Dual-Regev Encryption, Revisited
Prabhanjan Ananth, Zihan Hu, and Zikuan Huang
Quantum information can be used to achieve novel cryptographic primitives that are impossible to achieve classically. A recent work by Ananth, Poremba, Vaikuntanathan (TCC 2023) focuses on equipping the dual-Regev encryption scheme, introduced by Gentry, Peikert, Vaikuntanathan (STOC 2008), with key revocation capabilities using quantum information. They further showed that the key-revocable dual-Regev scheme implies the existence of fully homomorphic encryption and pseudorandom functions, with both of them also equipped with key revocation capabilities. Unfortunately, they were only able to prove the security of their schemes based on new conjectures and left open the problem of basing the security of key-revocable dual-Regev encryption on well-studied assumptions. In this work, we resolve this open problem. Assuming polynomial hardness of learning with errors (over sub-exponential modulus), we show that key-revocable dual-Regev encryption is secure. As a consequence, for the first time, we achieve the following results: 1. Key-revocable public-key encryption and key-revocable fully-homomorphic encryption satisfying classical revocation security and based on polynomial hardness of learning with errors. Prior works either did not achieve classical revocation or were based on sub-exponential hardness of learning with errors. 2. Key-revocable pseudorandom functions satisfying classical revocation from the polynomial hardness of learning with errors. Prior works relied upon unproven conjectures.
Last updated:  2024-10-08
Mutable Batch Arguments and Applications
Rishab Goyal
We put forth a new concept of mutability for batch arguments (BARGs), called mutable batch arguments. Our goal is to re-envision how we think about and use BARGs. Traditionally, a BARG proof $\pi$ is an immutable encoding of $k$ $\mathbf{NP}$ witnesses $\omega_1, \ldots, \omega_{k}$. A mutable BARG system captures the notion of computations over BARGs, where each proof string $\pi$ is treated as a mutable encoding of the original witnesses. We also study strong privacy notions for mutable BARGs, with the goal of hiding all non-trivial information about the witnesses from a mutated proof. Such mutable BARGs are a natural fit for many privacy-sensitive applications. Our main contributions include introducing the concept of mutable BARGs, identifying non-trivial classes of feasible mutations, designing new constructions for mutable BARGs with varying capabilities satisfying mutation privacy from standard cryptographic assumptions, and enabling new applications to many advanced signatures (homomorphic/redactable/aggregate signatures). Our results improve the state of the art for many signature systems: e.g., we provide the first multi-key homomorphic signature with succinct signatures from standard assumptions, the first truly compact redactable signature where pre/post-redaction signatures are of fixed size, and the first locally verifiable multi-signer aggregate signature satisfying message privacy from standard assumptions.
Last updated:  2024-05-13
Secret Sharing with Certified Deletion
James Bartusek and Justin Raizes
Secret sharing allows a user to split a secret into many shares so that the secret can be recovered if, and only if, an authorized set of shares is collected. Although secret sharing typically does not require any computational hardness assumptions, its security does require that an adversary cannot collect an authorized set of shares. Over long periods of time where an adversary can benefit from multiple data breaches, this may become an unrealistic assumption. We initiate the systematic study of secret sharing with certified deletion in order to achieve security even against an adversary that eventually collects an authorized set of shares. In secret sharing with certified deletion, a (classical) secret $s$ is split into quantum shares that can be destroyed in a manner verifiable by the dealer. We put forth two natural definitions of security. No-signaling security roughly requires that if multiple non-communicating adversaries delete sufficiently many shares, then their combined view contains negligible information about $s$, even if the total set of corrupted parties forms an authorized set. Adaptive security requires privacy of $s$ against an adversary that can continuously and adaptively corrupt new shares and delete previously-corrupted shares, as long as the total set of corrupted shares minus deleted shares remains unauthorized. Next, we show that these security definitions are achievable: we show how to construct (i) a secret sharing scheme with no-signaling certified deletion for any monotone access structure, and (ii) a threshold secret sharing scheme with adaptive certified deletion. Our first construction uses Bartusek and Khurana's (CRYPTO 2023) 2-out-of-2 secret sharing scheme with certified deletion as a building block, while our second construction is built from scratch and requires several new technical ideas. For example, we significantly generalize the ``XOR extractor'' of Agarwal, Bartusek, Khurana, and Kumar (EUROCRYPT 2023) in order to obtain better seedless extraction from certain quantum sources of entropy, and show how polynomial interpolation can double as a high-rate randomness extractor in our context of threshold sharing with certified deletion.
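The quantum certified-deletion layer does not fit in a short sketch, but the classical skeleton it wraps, threshold sharing via polynomial interpolation, does; a minimal Python sketch over a prime field (illustrative only):

```python
# Classical Shamir t-out-of-n secret sharing over GF(P).
# This is only the classical skeleton; the paper's schemes additionally
# encode shares in quantum states so that deletion can be certified.
import secrets

P = 2**61 - 1   # Mersenne prime

def share(secret, t, n):
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):       # Horner evaluation
            acc = (acc * x + c) % P
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P    # Lagrange basis evaluated at x = 0
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 shares suffice
assert reconstruct(shares[2:]) == 123456789
```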
Last updated:  2024-05-13
Secure Multiparty Computation in the Presence of Covert Adaptive Adversaries
Isheeta Nargis and Anwar Hasan
We design a new MPC protocol for arithmetic circuits secure against erasure-free covert adaptive adversaries with deterrence 1/2. The new MPC protocol has the same asymptotic communication cost, number of PKE operations, and number of exponentiation operations as the most efficient MPC protocol for arithmetic circuits secure against covert static adversaries. That is, the new protocol upgrades security from covert static to covert adaptive almost for free. For MPC problems where the number of parties $n$ is much larger than the number of multiplication gates $M$, the new MPC protocol also asymptotically improves communication complexity over the most efficient MPC protocol for arithmetic circuits secure against erasure-free active adaptive adversaries.
Last updated:  2024-05-13
Proof of Stake and Activity: Rewarding On-Chain Activity Through Consensus
Aram Jivanyan and Karen Terjanian
We introduce a novel consensus protocol for blockchains, called Proof of Stake and Activity (PoSA), which augments traditional Proof of Stake methods by integrating a unique Proof of Activity system. PoSA offers a compelling economic model that promotes decentralization by rewarding validators based on both their staked capital and the business value they contribute to the chain. The protocol has already been implemented in a fully-fledged blockchain platform called Bahamut (www.bahamut.io), which boasts hundreds of thousands of active users.
Last updated:  2024-06-19
Proxying is Enough: Security of Proxying in TLS Oracles and AEAD Context Unforgeability
Zhongtang Luo, Yanxue Jia, Yaobin Shen, and Aniket Kate
TLS oracles allow a TLS client to offer selective data provenance to an external (oracle) node such that the oracle node is ensured that the data is indeed coming from a pre-defined TLS server. Typically, the client/user supplies their credentials to the server and reveals selective data using zero-knowledge proofs to demonstrate certain server-offered information to oracles while ensuring the secrecy of the rest of the TLS transcript. Conceptually, this is a standard three-party secure computation between the TLS server, the TLS client (prover), and the oracle (verifier) node; however, a key practical requirement for TLS oracles is that the data provenance process remain transparent to the TLS server. Recent TLS oracle protocols such as DECO enforce the communication pattern of server-client-verifier and utilize a novel three-party handshake process during TLS to ensure data integrity against potential tampering by the client. However, this approach introduces a significant performance penalty on the client/prover and the verifier. This raises the question of whether it is possible to reduce the overhead by putting the verifier (as a proxy) between the server and the client such that the correct TLS transcript is available to the verifier. This work offers both positive and negative answers to this oracle proxy question: We first formalize the oracle proxy notion that allows the verifier to directly proxy client-server TLS communication, without entering a three-party handshake or interfering with the connection in any way. We then show that for common TLS-based higher-level protocols such as HTTPS, data integrity to the verifier proxy is ensured by the variable padding built into the HTTP protocol semantics. On the other hand, if a TLS-based protocol comes without variable padding, we demonstrate that data integrity cannot be guaranteed. In this context, we then study the case where the TLS response is pre-determined and cannot be tampered with during the connection. We propose the concept of context unforgeability and show that it allows overcoming the impossibility. We further show that ChaCha20-Poly1305 satisfies the concept in the standard model while AES-GCM does not.
Last updated:  2024-06-11
Compact Encryption based on Module-NTRU problems
Shi Bai, Hansraj Jangir, Hao Lin, Tran Ngo, Weiqiang Wen, and Jinwei Zheng
The Module-NTRU problem, introduced by Cheon, Kim, Kim, Son (IACR ePrint 2019/1468), and Chuengsatiansup, Prest, Stehlé, Wallet, Xagawa (ASIACCS ’20), generalizes the versatile NTRU assumption. One of its main advantages lies in its ability to offer greater flexibility on parameters, such as the underlying ring dimension. In this work, we present several lattice-based encryption schemes, which are IND-CPA (or OW-CPA) secure in the standard model based on the Module-NTRU and Module-LWE problems. Leveraging the Fujisaki-Okamoto transformations, one can obtain IND-CCA secure key encapsulation schemes. Our first encryption scheme is based on the Module-NTRU assumption, which uses the determinant of the secret matrix over the underlying ring for the decryption. Our second scheme is analogous to the Module-LWE encryption scheme, but uses only a matrix as the public key, based on a vectorial variant of the Module-NTRU problem. In the end, we conduct a comprehensive analysis of known attacks and propose concrete parameters for the instantiations. In particular, our ciphertext size is about 614 (resp. 1228) bytes for NIST Level 1 (resp. Level 5) security with small decryption failure, placing it on par with the most recent schemes such as the one proposed by Zhang, Feng and Yan (ASIACRYPT ’23). We also present several competitive parameters for NIST Level 3, which has a ciphertext size of 921 bytes. Moreover, our schemes do not require specific codes for plaintext encoding and decoding.
Last updated:  2024-09-09
Toward Full $n$-bit Security and Nonce Misuse Resistance of Block Cipher-based MACs
Wonseok Choi, Jooyoung Lee, and Yeongmin Lee
In this paper, we study the security of MAC constructions among those classified by Chen et al. in ASIACRYPT '21. Precisely, $F^{\text{EDM}}_{B_2}$ (or $\mathsf{EWCDM}$ as named by Cogliati and Seurin in CRYPTO '16), $F^{\text{EDM}}_{B_3}$, $F^{\text{SoP}}_{B_2}$, and $F^{\text{SoP}}_{B_3}$ (all as named by Chen et al.) are proved to be fully secure up to $2^n$ MAC queries in the nonce-respecting setting, improving the previous bound of $\frac{3n}{4}$-bit security. In particular, $F^{\text{SoP}}_{B_2}$ and $F^{\text{SoP}}_{B_3}$ enjoy graceful degradation as the number of queries with repeated nonces grows (when the underlying universal hash function satisfies a certain property called multi-xor-collision resistance). To this end, we develop a new tool, namely an extension of Mirror theory, based on two independent permutations, to a wide range of $\xi_{\max}$, including systems with inequalities. We also present matching attacks on $F^{\text{EDM}}_{B_4}$ and $F^{\text{EDM}}_{B_5}$ using $O(2^{3n/4})$ MAC queries and $O(1)$ verification queries without using repeated nonces.
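For concreteness, $\mathsf{EWCDM}$ ($F^{\text{EDM}}_{B_2}$) computes the tag as $E_{K_2}(E_{K_1}(N) \oplus N \oplus H_{K_h}(M))$. A minimal Python sketch with PyCryptodome, standing in CMAC for the universal hash $H$ (the proofs assume an almost-xor-universal hash family, so this is illustrative only):

```python
# Sketch of EWCDM: tag = E_K2( E_K1(N) xor N xor H_Kh(M) ).
# CMAC stands in for the universal hash H; the security proofs assume
# an almost-xor-universal hash family, so this is illustrative only.
from Crypto.Cipher import AES
from Crypto.Hash import CMAC

def xor(*blocks):
    out = bytes(len(blocks[0]))
    for b in blocks:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

def ewcdm_tag(k1, k2, kh, nonce, msg):
    assert len(nonce) == 16                       # one n-bit block
    h = CMAC.new(kh, ciphermod=AES)
    h.update(msg)
    inner = xor(AES.new(k1, AES.MODE_ECB).encrypt(nonce), nonce, h.digest())
    return AES.new(k2, AES.MODE_ECB).encrypt(inner)

k1, k2, kh = bytes(16), bytes(15) + b"\x01", bytes(16)
print(ewcdm_tag(k1, k2, kh, b"\x00" * 16, b"message").hex())
```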
Last updated:  2024-05-27
New Solutions to Delsarte's Dual Linear Programs
André Chailloux and Thomas Debris-Alazard
Understanding the maximum size of a code with a given minimum distance is a major question in computer science and discrete mathematics. The most fruitful approach for finding asymptotic bounds on such codes is Delsarte's theory of association schemes. With this approach, Delsarte constructs a linear program such that its maximum value is an upper bound on the maximum size of a code with a given minimum distance. Bounding this value can be done by finding solutions to the corresponding dual linear program. Delsarte's theory is very general and extends well beyond binary codes. In this work, we provide universal bounds in the framework of association schemes that generalize the Elias-Bassalygo bound, which can be applied to any association scheme constructed from a distance function. These bounds are obtained by constructing new solutions to Delsarte's dual linear program. We instantiate these results and recover known bounds for $q$-ary codes and for constant-weight binary codes. Our other contribution is to recover, for essentially any $Q$-polynomial scheme, MRRW-type solutions to Delsarte's dual linear program which are inspired by the Laplacian approach of Friedman and Tillich instead of using the Christoffel-Darboux formulas. We show in particular how the second linear programming bound can be interpreted in this framework.
Last updated:  2024-05-13
Covert Adaptive Adversary Model: A New Adversary Model for Multiparty Computation
Isheeta Nargis and Anwar Hasan
In the covert adversary model, the corrupted parties can behave arbitrarily, like active adversaries, but any party that attempts to cheat is guaranteed to get caught by the honest parties with a minimum fixed probability, called the deterrence factor of the covert adversary model. Security-wise, the covert adversary model is stronger than the passive model and weaker than the active model. It is more realistic than the passive adversary model, and protocols for covert adversaries are significantly more efficient than protocols for active adversaries. The covert adversary model is defined only for static corruption, whereas the adaptive adversary model is more realistic than the static one. In this article, we define a new adversary model, the covert adaptive adversary model, by generalizing the covert adversary model to the more realistic setting of adaptive corruption. We prove security relations between the new covert adaptive adversary model and existing adversary models such as the passive adaptive, active adaptive, and covert static adversary models. We prove the sequential composition theorem for the new adversary model, which is necessary to allow modular design of protocols in this model.
Last updated:  2025-02-08
Relativized Succinct Arguments in the ROM Do Not Exist
Annalisa Barbara, Alessandro Chiesa, and Ziyi Guan
A relativized succinct argument in the random oracle model (ROM) is a succinct argument in the ROM that can prove/verify the correctness of computations that involve queries to the random oracle. We prove that relativized succinct arguments in the ROM do not exist. The impossibility holds even if the succinct argument is interactive, and even if soundness is computational (rather than statistical). This impossibility puts on a formal footing the commonly-held belief that succinct arguments require non-relativizing techniques. Moreover, our results stand in sharp contrast with other oracle models, for which a recent line of work has constructed relativized succinct non-interactive arguments (SNARGs). Indeed, relativized SNARGs are a powerful primitive that, e.g., can be used to obtain constructions of IVC (incrementally-verifiable computation) and PCD (proof-carrying data) based on falsifiable cryptographic assumptions. Our results rule out this approach for IVC and PCD in the ROM.
Last updated:  2024-05-12
Let Attackers Program Ideal Models: Modularity and Composability for Adaptive Compromise
Joseph Jaeger
We show that the adaptive compromise security definitions of Jaeger and Tyagi (Crypto '20) cannot be applied in several natural use-cases. These include proving multi-user security from single-user security, the security of the cascade PRF, and the security of schemes sharing the same ideal primitive. We provide new variants of the definitions and show that they resolve these issues with composition. Extending these definitions to the asymmetric setting, we establish the security of the modular KEM/DEM and Fujisaki-Okamoto approaches to public-key encryption in the full adaptive compromise setting. This allows instantiations which are more efficient and standard than prior constructions.
Last updated:  2024-05-12
Challenger: Blockchain-based Massively Multiplayer Online Game Architecture
Boris Chan Yip Hon, Bilel Zaghdoudi, Maria Potop-Butucaru, Sébastien Tixeuil, and Serge Fdida
We propose Challenger, a peer-to-peer blockchain-based middleware architecture for narrative games, and discuss its resilience to cheating attacks. Our architecture orchestrates nine services in a fully decentralized manner, where nodes are aware of neither the entire composition of the system nor its size. All these components work together to obtain (strong) resilience to cheaters. The main contribution of the paper is to provide, for the first time, an architecture for narrative games agnostic of a particular blockchain that brings together several distinct research areas, namely distributed ledgers, peer-to-peer networks, multiplayer online games, and resilience to attacks.
Last updated:  2024-05-12
Multi User Security of LightMAC and LightMAC_Plus
Nilanjan Datta, Shreya Dey, Avijit Dutta, and Devdutto Kanungo
In FSE'16, Luykx et al. proposed $\textsf{LightMAC}$, which provably achieves a query-length-independent PRF security bound. To be precise, the construction achieves security roughly in the order of $O(q^2/2^n)$, when instantiated with two independently keyed $n$-bit block ciphers, where $q$ is the total number of queries made by the adversary. Subsequently, in ASIACRYPT'17, Naito proposed a beyond-birthday-bound variant of the $\textsf{LightMAC}$ construction, dubbed $\textsf{LightMAC_Plus}$, that is built on three independently keyed $n$-bit block ciphers and achieves $2n/3$-bit PRF security. Security analyses of these two constructions have been conducted in the single-user setting, where the adversary has access to a single instance of the construction. In this paper, we investigate, for the first time, the security of the $\textsf{LightMAC}$ and $\textsf{LightMAC_Plus}$ constructions in the multi-user setting, where the adversary has access to more than one instance of the construction. In particular, we show that $\textsf{LightMAC}$ remains secure roughly up to $2^{n/2}$ construction queries and $2^k$ ideal-cipher queries, and $\textsf{LightMAC_Plus}$ maintains security up to approximately $2^{2n/3}$ construction queries and $2^{2k/3}$ ideal-cipher queries, both in the ideal-cipher model, where $n$ denotes the block size and $k$ denotes the key size of the block cipher.
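For reference, a simplified byte-oriented Python sketch of single-user $\textsf{LightMAC}$ (counter size $s = 4$ bytes, block size $n = 16$ bytes), following Luykx et al.'s description:

```python
# Byte-oriented sketch of LightMAC with two AES keys (PyCryptodome).
# Counter size S = 4 bytes, block size N = 16 bytes, so each message
# chunk carries N - S = 12 bytes; the final chunk is 10*-padded.
from Crypto.Cipher import AES

S, N = 4, 16

def lightmac(k1, k2, msg):
    e1 = AES.new(k1, AES.MODE_ECB)
    chunks = [msg[i:i + N - S] for i in range(0, len(msg), N - S)] or [b""]
    v = bytes(N)
    for i, chunk in enumerate(chunks[:-1], start=1):
        block = i.to_bytes(S, "big") + chunk          # counter || chunk
        v = bytes(a ^ b for a, b in zip(v, e1.encrypt(block)))
    last = (chunks[-1] + b"\x80").ljust(N, b"\x00")   # 10* padding
    v = bytes(a ^ b for a, b in zip(v, last))
    return AES.new(k2, AES.MODE_ECB).encrypt(v)

tag = lightmac(bytes(16), bytes([1]) * 16, b"attack at dawn")
print(tag.hex())
```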
Last updated:  2024-09-05
zkSNARKs in the ROM with Unconditional UC-Security
Alessandro Chiesa and Giacomo Fenzi
The universal composability (UC) framework is a “gold standard” for security in cryptography. UC-secure protocols achieve strong security guarantees against powerful adaptive adversaries, and retain these guarantees when used as part of larger protocols. Zero knowledge succinct non-interactive arguments of knowledge (zkSNARKs) are a popular cryptographic primitive that are often used within larger protocols deployed in dynamic environments, and so UC-security is a highly desirable, if not necessary, goal. In this paper we prove that there exist zkSNARKs in the random oracle model (ROM) that unconditionally achieve UC-security. Here, “unconditionally” means that security holds against adversaries that make a bounded number of queries to the random oracle, but are otherwise computationally unbounded. Prior work studying UC-security for zkSNARKs obtains transformations that rely on computational assumptions and, in many cases, lose most of the succinctness property of the zkSNARK. Moreover, these transformations make the resulting zkSNARK more expensive and complicated. In contrast, we prove that widely used zkSNARKs in the ROM are UC-secure without modifications. We prove that the Micali construction, which is the canonical construction of a zkSNARK, is UC-secure. Moreover, we prove that the BCS construction, which many zkSNARKs deployed in practice are based on, is UC-secure. Our results confirm the intuition that these natural zkSNARKs do not need to be augmented to achieve UC-security, and give confidence that their use in larger real-world systems is secure.
Last updated:  2025-03-04
$\mathsf{OPA}$: One-shot Private Aggregation with Single Client Interaction and its Applications to Federated Learning
Harish Karthikeyan and Antigoni Polychroniadou
Our work aims to minimize interaction in secure computation due to the high cost and challenges associated with communication rounds, particularly in scenarios with many clients. In this work, we revisit the problem of secure aggregation in the single-server setting, where a single evaluation server can securely aggregate client-held individual inputs. Our key contribution is the introduction of One-shot Private Aggregation ($\mathsf{OPA}$), where clients speak only once (or even choose not to speak) per aggregation evaluation. Since each client communicates only once per aggregation, this simplifies managing dropouts and dynamic participation, contrasting with multi-round protocols and aligning with plaintext secure aggregation, where clients interact only once. We construct $\mathsf{OPA}$ based on LWR, LWE, class groups, and DCR, and demonstrate applications to privacy-preserving Federated Learning (FL) where clients speak once. This is a sharp departure from prior multi-round FL protocols, whose study was initiated by Bonawitz et al. (CCS, 2017). Moreover, unlike the YOSO (You Only Speak Once) model for general secure computation, $\mathsf{OPA}$ eliminates complex committee selection protocols to achieve adaptive security. Beyond asymptotic improvements, $\mathsf{OPA}$ is practical, outperforming state-of-the-art solutions. We benchmark logistic regression classifiers for two datasets, and also build an MLP classifier trained on the MNIST, CIFAR-10, and CIFAR-100 datasets. We build two flavors of $\mathsf{OPA}$: (1) from (threshold) key-homomorphic PRFs and (2) from seed-homomorphic PRGs and secret sharing. Our threshold key-homomorphic PRF addresses shortcomings of the DDH- and LWR-based constructions of Boneh et al. (CRYPTO, 2013), making it a contribution of independent interest. We also present new threshold key-homomorphic PRFs based on class groups, DCR, or the LWR assumption.
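To fix intuition for flavor (2), the toy numpy sketch below masks each client's single message with a linear, and hence trivially seed-homomorphic, map $G(s) = As \bmod Q$; a linear map is of course not a secure PRG, and the secret-sharing step that delivers the sum of seeds to the server is elided:

```python
# Toy one-shot aggregation with a seed-homomorphic "PRG".
# G(s) = A @ s mod Q is linear, so G(s1) + G(s2) = G(s1 + s2):
# that homomorphism is the property this flavor of aggregation exploits.
# A linear map is NOT a secure PRG; this is intuition only, and the
# secret-sharing step that delivers sum(seeds) to the server is elided.
import numpy as np

rng = np.random.default_rng(0)
Q, DIM, SEED_DIM, CLIENTS = 65521, 8, 4, 5
A = rng.integers(0, Q, size=(DIM, SEED_DIM))          # public matrix

def G(seed):
    return (A @ seed) % Q

inputs = [rng.integers(0, 100, size=DIM) for _ in range(CLIENTS)]
seeds  = [rng.integers(0, Q, size=SEED_DIM) for _ in range(CLIENTS)]

# Each client speaks once: it uploads a single masked vector.
masked = [(x + G(s)) % Q for x, s in zip(inputs, seeds)]

# The server learns only sum(seeds) (via a committee, elided) and unmasks.
seed_sum = sum(seeds) % Q
aggregate = (sum(masked) - G(seed_sum)) % Q

assert np.array_equal(aggregate, sum(inputs) % Q)
print("aggregate:", aggregate)
```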
Last updated:  2024-09-10
Ultrametric integral cryptanalysis
Tim Beyne and Michiel Verbauwhede
A systematic method to analyze divisibility properties is proposed. In integral cryptanalysis, divisibility properties interpolate between bits that sum to zero (divisibility by two) and saturated bits (divisibility by $2^{n - 1}$ for $2^n$ inputs). From a theoretical point of view, we construct a new cryptanalytic technique that is a non-Archimedean multiplicative analogue of linear cryptanalysis. It lifts integral cryptanalysis to characteristic zero in the sense that, if all quantities are reduced modulo two, then one recovers the algebraic theory of integral cryptanalysis. The new technique leads to a theory of trails. We develop a tool based on off-the-shelf solvers that automates the analysis of these trails and use it to show that many integral distinguishers on PRESENT and SIMON are stronger than expected.
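A toy experiment conveys the quantity under study: over a structured set of inputs, sum each output bit as an integer and measure the 2-adic valuation of the sum; divisibility by 2 is the classical balanced property, and higher powers of 2 are what the characteristic-zero theory tracks. A sketch on a random 8-bit permutation (all parameters illustrative):

```python
# Toy integral/divisibility experiment: fix the high nibble of an 8-bit
# input, saturate the low nibble, and measure how large a power of 2
# divides the integer sum of each output bit. Divisibility by 2 is the
# classical "balanced" property; higher powers of 2 are what the
# ultrametric (characteristic-zero) theory tracks.
import random

random.seed(1)
perm = list(range(256))
random.shuffle(perm)                      # stand-in for a toy cipher

def v2(x):
    """2-adic valuation: the largest k with 2^k dividing x."""
    if x == 0:
        return float("inf")
    k = 0
    while x % 2 == 0:
        x //= 2
        k += 1
    return k

inputs = [0xA0 | low for low in range(16)]     # saturate the low nibble
for bit in range(8):
    s = sum((perm[x] >> bit) & 1 for x in inputs)
    print(f"output bit {bit}: sum = {s:2d}, v2 = {v2(s)}")
```

A real distinguisher compares these valuations against a cipher for which the theory of trails proves them larger than for a random permutation.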
Last updated:  2025-02-23
Real-world Universal zkSNARKs are non-malleable
Antonio Faonio, Dario Fiore, and Luigi Russo
Simulation extractability is a strong security notion of zkSNARKs that guarantees that an attacker who produces a valid proof must know the corresponding witness, even if the attacker had prior access to proofs generated by other users. Notably, simulation extractability implies that proofs are non-malleable and is of fundamental importance for applications of zkSNARKs in distributed systems. In this work, we study sufficient and necessary conditions for constructing simulation-extractable universal zkSNARKs via the popular design approach based on compiling polynomial interactive oracle proofs (PIOP). Our main result is the first security proof that popular universal zkSNARKs, such as PLONK and Marlin, as deployed in the real world, are simulation-extractable. Our result fills a gap left from previous work (Faonio et al. TCC’23, and Kohlweiss et al. TCC’23) which could only prove the simulation extractability of the “textbook” versions of these schemes and does not capture their optimized variants, with all the popular optimization tricks in place, that are eventually implemented and deployed in software libraries.
Last updated:  2024-05-13
Multivariate Blind Signatures Revisited
Ward Beullens
In 2017, Petzoldt, Szepieniec, and Mohamed proposed a blind signature scheme, based on multivariate cryptography. This construction has been expanded on by several other works. This short paper shows that their construction is susceptible to an efficient polynomial-time attack. The problem is that the authors implicitly assumed that for a random multivariate quadratic map $\mathcal{R}:\mathbb{F}_q^m \rightarrow \mathbb{F}_q^m$ and a collision-resistant hash function $H: \{0,1\}^* \rightarrow \mathbb{F}_q^m$, the function $\mathsf{Com}(m;\mathbf{r}) := H(m) - \mathcal{R}(\mathbf{r})$ is a binding commitment, which is not the case. There is a "folklore" algorithm that can be used to, given any pair of messages, efficiently produce a commitment that opens to both of them. We hope that by pointing out that multivariate quadratic maps are not binding, similar problems can be avoided in the future.
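The folklore algorithm is simple enough to sketch: for a quadratic map $\mathcal{R}$, the difference $\mathcal{R}(\mathbf{x}+\mathbf{d})-\mathcal{R}(\mathbf{x})$ is affine in $\mathbf{x}$ for any fixed $\mathbf{d}$, so finding $\mathbf{r}_1, \mathbf{r}_2$ with $\mathcal{R}(\mathbf{r}_1)-\mathcal{R}(\mathbf{r}_2) = H(m_1)-H(m_2)$ reduces to solving a linear system. A toy Python demonstration over a small field (the hash-to-vector and parameters are illustrative stand-ins):

```python
# Toy double-opening attack on Com(m; r) = H(m) - R(r) for a random
# quadratic map R. Since R(x + d) - R(x) is affine in x for fixed d,
# finding r1, r2 with R(r1) - R(r2) = H(m1) - H(m2) is linear algebra.
import hashlib, random

random.seed(0)
P, M = 101, 4                                     # toy field size / dimension
A = [[[random.randrange(P) for _ in range(M)] for _ in range(M)] for _ in range(M)]
b = [[random.randrange(P) for _ in range(M)] for _ in range(M)]

def R(x):                                         # R_k(x) = x^T A_k x + b_k . x
    return [(sum(x[i] * A[k][i][j] * x[j] for i in range(M) for j in range(M))
             + sum(b[k][i] * x[i] for i in range(M))) % P for k in range(M)]

def H(msg):                                       # illustrative hash-to-vector
    d = hashlib.sha256(msg).digest()
    return [d[i] % P for i in range(M)]

def solve_mod(L, t):                              # Gaussian elimination mod P
    L = [row[:] + [ti] for row, ti in zip(L, t)]
    for c in range(M):
        piv = next((r for r in range(c, M) if L[r][c]), None)
        if piv is None:
            return None                           # singular; caller retries d
        L[c], L[piv] = L[piv], L[c]
        inv = pow(L[c][c], -1, P)
        L[c] = [v * inv % P for v in L[c]]
        for r in range(M):
            if r != c and L[r][c]:
                L[r] = [(vr - L[r][c] * vc) % P for vr, vc in zip(L[r], L[c])]
    return [L[r][M] for r in range(M)]

t = [(h1 - h2) % P for h1, h2 in zip(H(b"pay 1 euro"), H(b"pay 1000000"))]
while True:
    d = [random.randrange(P) for _ in range(M)]
    # Row k of the linear part is (A_k + A_k^T) d; c_k is the constant term.
    L = [[sum((A[k][i][j] + A[k][j][i]) * d[j] for j in range(M)) % P
          for i in range(M)] for k in range(M)]
    c = [(sum(d[i] * A[k][i][j] * d[j] for i in range(M) for j in range(M))
          + sum(b[k][i] * d[i] for i in range(M))) % P for k in range(M)]
    x = solve_mod(L, [(ti - ci) % P for ti, ci in zip(t, c)])
    if x is not None:
        break
r2, r1 = x, [(xi + di) % P for xi, di in zip(x, d)]
com1 = [(h - v) % P for h, v in zip(H(b"pay 1 euro"), R(r1))]
com2 = [(h - v) % P for h, v in zip(H(b"pay 1000000"), R(r2))]
assert com1 == com2                               # one commitment, two openings
print("colliding commitment:", com1)
```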
Last updated:  2025-03-14
Client-Efficient Online-Offline Private Information Retrieval
Hoang-Dung Nguyen, Jorge Guajardo, and Thang Hoang
Private Information Retrieval (PIR) permits clients to query data entries from a public database hosted on untrusted servers while preserving client privacy. Traditional PIR models suffer from high computation and/or bandwidth overhead due to linear database processing for privacy. Recently, Online-Offline PIR (OO-PIR) has been proposed to improve PIR practicality by precomputing query-independent materials to accelerate online access. While state-of-the-art OO-PIR schemes (e.g., S&P’24, CRYPTO’23) successfully reduce online processing cost to sublinear levels, they still impose substantial bandwidth and storage burdens on the client, especially when operating on large databases. In this paper, we propose Pirex, a new two-server OO-PIR with semi-honest security that offers minimal client inbound bandwidth and storage cost while retaining sublinear processing efficiency. The Pirex design is simple, with most operations naturally low-cost and streamlined (e.g., XOR, PRF, modular arithmetic). We have fully implemented Pirex and evaluated its real-world performance using commodity hardware. Our results show that Pirex outperforms existing OO-PIR schemes by at least two orders of magnitude. With a 1 TB database, Pirex takes 55 ms to retrieve a 4 KB entry, compared with 9-30 s for the state of the art. For practical databases with billions of 4 KB entries, Pirex takes only 16 KB of inbound bandwidth, which is up to three orders of magnitude more efficient.
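Pirex itself is more involved, but the classical two-server XOR PIR baseline it descends from (Chor et al.) shows in a few lines why two non-colluding servers make information-theoretic retrieval cheap; a toy Python sketch (not Pirex):

```python
# Classical two-server XOR PIR (Chor-Goldreich-Kushilevitz-Sudan style):
# each server sees a uniformly random subset, so neither learns the index.
# This is only the classical baseline two-server PIR builds on, not Pirex.
import secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def answer(db, subset):
    """Server-side: XOR of all entries whose index is in the subset."""
    acc = bytes(len(db[0]))
    for i in subset:
        acc = xor_bytes(acc, db[i])
    return acc

n, entry_size = 64, 4
db = [secrets.token_bytes(entry_size) for _ in range(n)]
target = 42

# Client: random subset S to server 1, S symmetric-diff {target} to server 2.
s1 = {i for i in range(n) if secrets.randbits(1)}
s2 = s1 ^ {target}

# Entries in both subsets cancel; only db[target] survives the final XOR.
entry = xor_bytes(answer(db, s1), answer(db, s2))
assert entry == db[target]
print("retrieved entry", target, "=", entry.hex())
```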
Last updated:  2025-04-12
PAC-Private Algorithms
Mayuri Sridhar, Hanshen Xiao, and Srinivas Devadas
Provable privacy typically requires involved analysis and is often associated with unacceptable accuracy loss. While many empirical verification or approximation methods, such as Membership Inference Attacks (MIA) and Differential Privacy Auditing (DPA), have been proposed, these do not offer rigorous privacy guarantees. In this paper, we apply recently proposed Probably Approximately Correct (PAC) Privacy to give formal, mechanized, simulation-based proofs for a range of practical, black-box algorithms: K-Means, Support Vector Machines (SVM), Principal Component Analysis (PCA), and Random Forests. To provide these proofs, we present a new simulation algorithm that efficiently determines the anisotropic noise perturbation required for any given level of privacy. We provide a proof of correctness for this algorithm and demonstrate that anisotropic noise has substantive benefits over isotropic noise. Stable algorithms are easier to privatize, and we demonstrate the privacy amplification resulting from introducing regularization into these algorithms; meaningful privacy guarantees are obtained with small losses in accuracy. We propose new techniques to reduce instability in algorithmic output and to convert intractable geometric stability verification into efficient deterministic stability verification. Thorough experiments are included, and we validate our provable adversarial inference hardness against state-of-the-art empirical attacks.
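A cartoon of the anisotropic-versus-isotropic point, not the paper's calibrated simulation algorithm: estimate each output coordinate's instability under data resampling, then compare the total noise variance of per-coordinate scaling against scaling every coordinate by the worst case.

```python
# Cartoon of anisotropic vs. isotropic noise for a black-box mechanism.
# We estimate per-coordinate output spread under data resampling, then
# compare total added variance when noise is scaled per coordinate
# (anisotropic) versus by the worst coordinate everywhere (isotropic).
# This mimics the flavor of simulation-based calibration only; it is
# not the paper's actual PAC-Privacy algorithm.
import numpy as np

rng = np.random.default_rng(7)
data = rng.normal(0, 1, size=(2000, 5)) * np.array([0.1, 0.5, 1.0, 2.0, 5.0])

def mechanism(rows):
    return rows.mean(axis=0)          # toy black-box: column means

# Simulation phase: rerun the mechanism on resampled datasets.
outputs = np.stack([mechanism(data[rng.choice(len(data), 500)])
                    for _ in range(200)])
per_coord = outputs.std(axis=0)       # estimated per-coordinate instability

aniso_var = np.sum(per_coord ** 2)                 # scale noise per coordinate
iso_var = len(per_coord) * per_coord.max() ** 2    # worst case everywhere

print("per-coordinate spread:", np.round(per_coord, 4))
print(f"total noise variance: anisotropic {aniso_var:.4f} "
      f"vs isotropic {iso_var:.4f}")

noisy = mechanism(data) + rng.normal(0, per_coord)  # anisotropic release
print("noisy output:", np.round(noisy, 3))
```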