All papers (Page 13 of 24085 results)

Last updated:  2024-10-10
LeOPaRd: Towards Practical Post-Quantum Oblivious PRFs via Interactive Lattice Problems
Muhammed F. Esgin, Ron Steinfeld, Erkan Tairi, and Jie Xu
In this work, we introduce a more efficient post-quantum oblivious PRF (OPRF) design, called LeOPaRd. Our proposal is round-optimal and supports verifiability and partial obliviousness, all of which are important for practical applications. The main technical novelty of our work is a new method for computing samples of MLWE (module learning with errors) in a two-party setting. To do this, we introduce a new family of interactive lattice problems, called interactive MLWE and rounding with re-use (iMLWER-RU). We rigorously study the hardness of iMLWER-RU and reduce it (under some natural idealized setting) to a more standard MLWE-like problem where the adversary is additionally given access to a randomized MLWE PRF oracle. We believe iMLWER-RU can be of independent interest for other interactive protocols. LeOPaRd exploits this new iMLWER-RU assumption to realize a lattice-based OPRF design without relying on heavy machinery such as noise flooding and fully homomorphic encryption used in earlier works. LeOPaRd can feature around 136 KB total communication, compared to 300+ KB in earlier works. We also identify gaps in some existing constructions and models, and propose appropriate fixes.
Last updated:  2024-10-10
Related-Key Cryptanalysis of FUTURE
Amit Jana, Smita Das, Ayantika Chatterjee, and Debdeep Mukhopadhyay
In Africacrypt 2022, Gupta et al. introduced FUTURE, a 64-bit lightweight MDS-matrix-based SPN-like block cipher designed to encrypt data in a single clock cycle with minimal implementation cost, particularly when unrolled. While various attack models were discussed, the security of the cipher in the related-key setting was not addressed. In this work, we bridge this gap by conducting a security analysis of the cipher under related-key attacks using MILP (Mixed Integer Linear Programming)-based techniques. Our model enables a related-key distinguishing attack on 8 rounds of FUTURE, requiring $2^{64}$ plaintexts, $2^{63}$ XOR operations, and negligible memory. Additionally, we present a 10-round boomerang distinguisher with a probability of $2^{-45}$, leading to a distinguishing attack with $2^{46}$ plaintexts, $2^{46}$ XOR operations, and negligible memory. This result demonstrates a full break of the cipher’s 64-bit security in the related-key setting.
Last updated:  2024-10-10
Efficient Maliciously Secure Oblivious Exponentiations
Carsten Baum, Jens Berlips, Walther Chen, Ivan Damgård, Kevin M. Esvelt, Leonard Foner, Dana Gretton, Martin Kysel, Ronald L. Rivest, Lawrence Roy, Francesca Sage-Ling, Adi Shamir, Vinod Vaikuntanathan, Lynn Van Hauwe, Theia Vogel, Benjamin Weinstein-Raun, Daniel Wichs, Stephen Wooster, Andrew C. Yao, and Yu Yu
Oblivious Pseudorandom Functions (OPRFs) allow a client to evaluate a pseudorandom function (PRF) on her secret input based on a key that is held by a server. In the process, the client only learns the PRF output but not the key, while the server neither learns the input nor the output of the client. Arguably the most popular OPRF is due to Naor, Pinkas and Reingold (Eurocrypt 2009). It is based on an Oblivious Exponentiation by the server, with passive security under the Decisional Diffie-Hellman assumption. In this work, we strengthen the security guarantees of the NPR OPRF by protecting it against active attacks of the server. We have implemented our solution and report on the performance. Our main result is a new batch OPRF protocol which is secure against maliciously corrupted servers, but is essentially as efficient as the semi-honest solution. More precisely, the computation (and communication) overhead is a multiplicative factor $o(1)$ as the batch size increases. The obvious solution using zero-knowledge proofs would have a constant factor overhead at best, which can be too expensive for certain deployments. Our protocol relies on a novel version of the DDH problem, which we call the Oblivious Exponentiation Problem (OEP), and we give evidence for its hardness in the Generic Group model. We also present a variant of our maliciously secure protocol that does not rely on the OEP but nevertheless only has overhead $o(1)$ over the known semi-honest protocol. Moreover, we show that our techniques can also be used to efficiently protect threshold blind BLS signing and threshold ElGamal decryption against malicious attackers.
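The blinded-exponentiation idea underlying such DH-based OPRFs can be illustrated with a toy, semi-honest sketch: the client blinds a hashed input with a random exponent, the server raises it to its secret key, and the client strips the blinding. This is only the generic flow, not the paper's batch or maliciously secure protocol; the group, hash-to-group map and all parameters below are illustrative toy choices.

# Toy semi-honest blinded-exponentiation OPRF sketch (illustrative only).
# Assumptions: a small safe prime p = 2q + 1, hashing into the order-q
# subgroup by squaring; parameters are far too small for real security.
import hashlib
import secrets

p = 1019               # safe prime: p = 2*q + 1
q = (p - 1) // 2       # prime order of the subgroup of squares

def hash_to_group(x: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(x).digest(), "big") % p
    return pow(h, 2, p)            # squaring lands in the order-q subgroup

k = secrets.randbelow(q - 1) + 1   # server's secret PRF key

def client_blind(x: bytes):
    r = secrets.randbelow(q - 1) + 1
    a = pow(hash_to_group(x), r, p)    # blinded element sent to the server
    return r, a

def server_evaluate(a: int) -> int:
    return pow(a, k, p)                # server never sees x or the output

def client_unblind(b: int, r: int) -> int:
    r_inv = pow(r, -1, q)              # exponent inverse modulo the group order
    return pow(b, r_inv, p)            # equals H(x)^k

x = b"example input"
r, a = client_blind(x)
y = client_unblind(server_evaluate(a), r)
assert y == pow(hash_to_group(x), k, p)
print("OPRF output:", y)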
Last updated:  2024-10-10
On Wagner's k-Tree Algorithm Over Integers
Haoxing Lin and Prashant Nalini Vasudevan
The $k$-Tree algorithm [Wagner 02] is a non-trivial algorithm for the average-case $k$-SUM problem that has found widespread use in cryptanalysis. Its input consists of $k$ lists, each containing $n$ integers from a range of size $m$. Wagner's original heuristic analysis suggested that this algorithm succeeds with constant probability if $n \approx m^{1/(\log{k}+1)}$, and that in this case it runs in time $O(kn)$. Subsequent rigorous analysis of the algorithm [Lyubashevsky 05, Shallue 08, Joux-Kippen-Loss 24] has shown that it succeeds with high probability if the input list sizes are significantly larger than this. We present a broader rigorous analysis of the $k$-Tree algorithm, showing upper and lower bounds on its success probability and complexity for any size of the input lists. Our results confirm Wagner's heuristic conclusions, and also give meaningful bounds for a wide range of list sizes that are not covered by existing analyses. We present analytical bounds that are asymptotically tight, as well as an efficient algorithm that computes (provably correct) bounds for a wide range of concrete parameter settings. We also do the same for the $k$-Tree algorithm over $\mathbb{Z}_m$. Finally, we present experimental evaluation of the tightness of our results.
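For concreteness, a minimal sketch of the classical $k=4$ case (the $\mathbb{Z}_m$ flavor, with $m = 2^b$) is given below: the first level merges pairs of lists while cancelling the low-order bits of the sums, and the top level matches the remaining bits. The modulus and list sizes are toy values chosen to match the heuristic $n \approx m^{1/(\log k + 1)}$, and the sketch tracks only partial sums, not the indices that achieve them.

# Toy 4-list k-Tree (Wagner) sketch over Z_{2^b}: find x1+x2+x3+x4 = 0 (mod 2^b).
# Toy parameters: b = 24, list size n = 2^(b/3) = 256, as in the heuristic analysis.
import random
from collections import defaultdict

b = 24
m = 1 << b
ell = b // 3                      # bits cancelled at the first level
n = 1 << ell                      # heuristic list size m^(1/(log k + 1)) = 2^(b/3)

def merge(L1, L2, low_bits):
    """Keep sums x + y whose 'low_bits' least significant bits are zero."""
    mask = (1 << low_bits) - 1
    index = defaultdict(list)
    for y in L2:
        index[(-y) & mask].append(y)
    return [(x + y) % m for x in L1 for y in index[x & mask]]

random.seed(0)
lists = [[random.randrange(m) for _ in range(n)] for _ in range(4)]

L12 = merge(lists[0], lists[1], ell)          # partial sums with low ell bits zero
L34 = merge(lists[2], lists[3], ell)
solutions = merge(L12, L34, b)                # full sums that are 0 mod 2^b

print(f"level-1 list sizes: {len(L12)}, {len(L34)}; solutions found: {len(solutions)}")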
Last updated:  2024-11-05
Rhombus: Fast Homomorphic Matrix-Vector Multiplication for Secure Two-Party Inference
Jiaxing He, Kang Yang, Guofeng Tang, Zhangjie Huang, Li Lin, Changzheng Wei, Ying Yan, and Wei Wang
We present $\textit{Rhombus}$, a new secure matrix-vector multiplication (MVM) protocol in the semi-honest two-party setting, which is able to be seamlessly integrated into existing privacy-preserving machine learning (PPML) frameworks and serve as the basis of secure computation in linear layers. $\textit{Rhombus}$ adopts RLWE-based homomorphic encryption (HE) with coefficient encoding, which allows messages to be chosen from not only a field $\mathbb{F}_p$ but also a ring $\mathbb{Z}_{2^\ell}$, where the latter supports faster computation in non-linear layers. To achieve better efficiency, we develop an input-output packing technique that reduces the communication cost incurred by HE with coefficient encoding by about $21\times$, and propose a split-point picking technique that reduces the number of rotations to be sublinear in the matrix dimension. Compared to the recent protocol $\textit{HELiKs}$ by Balla and Koushanfar (CCS'23), our implementation demonstrates that $\textit{Rhombus}$ improves the whole performance of an MVM protocol by a factor of $7.4\times \sim 8\times$, and improves the end-to-end performance of secure two-party inference of ResNet50 by a factor of $4.6\times \sim 18\times$.
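The coefficient-encoding idea that such HE-based MVM protocols typically exploit can already be seen in the clear: if one operand's coefficients are reversed, a single coefficient of the polynomial product equals the inner product of the two vectors. The sketch below shows only this plaintext identity with toy dimensions, not Rhombus's packing or the encrypted protocol.

# Plaintext illustration of coefficient encoding for inner products:
# encode u as u(X) = sum u_i X^i and v reversed as v(X) = sum v_j X^(d-1-j);
# then coefficient d-1 of u(X)*v(X) is <u, v>.  Toy modulus and dimension.
import random

def poly_mul(a, b, q):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % q
    return out

q = 65537
d = 8
u = [random.randrange(q) for _ in range(d)]
v = [random.randrange(q) for _ in range(d)]

prod = poly_mul(u, list(reversed(v)), q)
inner = sum(ui * vi for ui, vi in zip(u, v)) % q
assert prod[d - 1] == inner
print("inner product recovered from coefficient", d - 1, ":", inner)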
Last updated:  2024-10-09
Secret Sharing with Snitching
Stefan Dziembowski, Sebastian Faust, Tomasz Lizurej, and Marcin Mielniczuk
We address the problem of detecting and punishing shareholder collusion in secret-sharing schemes. We do it in the recently proposed cryptographic model called individual cryptography (Dziembowski, Faust, and Lizurej, Crypto 2023), which assumes that there exist tasks that can be efficiently computed by a single machine but distributing this computation across multiple (mutually distrustful) devices is infeasible. Within this model, we introduce a novel primitive called secret sharing with snitching (SSS), in which each attempt to illegally reconstruct the shared secret $S$ results in a proof that can be used to prove such misbehavior (and, e.g., financially penalize the cheater on a blockchain). This holds in a very strong sense, even if the shareholders attempt not to reconstruct the entire secret~$S$ but only learn some partial information about it. Our notion also captures the attacks performed using multiparty computation protocols (MPCs), i.e., those where the malicious shareholders use MPCs to compute partial information on $S$. The main idea of SSS is that any illegal reconstruction can be proven and punished, which suffices to discourage illegal secret reconstruction. Hence, our SSS scheme effectively prevents shareholders' collusion. We provide a basic definition of threshold ($t$-out-of-$n$) SSS. We then show how to construct it for $t = n$, and later, we use this construction to build an SSS scheme for an arbitrary $t$. In order to prove the security of our construction, we introduce a generalization of the random oracle model (Bellare, Rogaway, CCS 1993), which allows modelling hash evaluations made inside MPC.
Last updated:  2024-10-09
Blaze: Fast SNARKs from Interleaved RAA Codes
Martijn Brehm, Binyi Chen, Ben Fisch, Nicolas Resch, Ron D. Rothblum, and Hadas Zeilberger
In this work we construct a new and highly efficient multilinear polynomial commitment scheme (MLPCS) over binary fields, which we call \emph{Blaze}. Polynomial commitment schemes allow a server to commit to a large polynomial and later decommit to its evaluations. Such schemes have emerged as a key component in recent efficient SNARK constructions. Blaze has an extremely efficient prover, both asymptotically and concretely. The commitment is dominated by $8n$ field additions (i.e., XORs) and one Merkle tree computation. The evaluation proof generation is dominated by $6n$ additions and $5n$ multiplications over the field. The verifier runs in time $O_\lambda(\log^2(n))$. Concretely, for sufficiently large message sizes, the prover is faster than all prior schemes except for Brakedown (Golovnev et al., Crypto 2023), but offers significantly smaller proofs than the latter. The scheme is obtained by combining two ingredients: 1. Building on the code-switching technique (Ron-Zewi and Rothblum, JACM 2024), we show how to compose any error-correcting code together with an interactive oracle proof of proximity (IOPP) underlying existing MLPCS constructions, into a new MLPCS. The new MLPCS inherits its proving time from the code's encoding time, and its verification complexity from the underlying MLPCS. The composition is distinctive in that it is done purely on the information-theoretic side. 2. We apply the above methodology using an extremely efficient error-correcting code known as the Repeat-Accumulate-Accumulate (RAA) code. We give new asymptotic and concrete bounds, which demonstrate that (for sufficiently large message sizes) this code has a better encoding time vs. distance tradeoff than previous linear-time encodable codes that were considered in the literature.
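A Repeat-Accumulate-Accumulate code admits a very short encoder: repeat each message bit, apply a permutation, take a running XOR (accumulate), and perform the permute-accumulate step once more. The sketch below uses arbitrary toy parameters (repetition factor and random permutations) purely to illustrate why encoding costs only a few XORs per codeword bit; it does not reproduce the concrete repetition and permutations analyzed in the paper.

# Toy Repeat-Accumulate-Accumulate (RAA) encoder over F_2.
# Assumptions: repetition factor 3 and uniformly random permutations (toy choices).
import random

def accumulate(bits):
    """Running XOR (prefix sums over F_2): out_i = bits_0 ^ ... ^ bits_i."""
    out, acc = [], 0
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def raa_encode(msg, perm1, perm2, rep=3):
    repeated = [b for b in msg for _ in range(rep)]      # repeat
    x = [repeated[i] for i in perm1]                     # permute
    x = accumulate(x)                                    # accumulate
    x = [x[i] for i in perm2]                            # permute
    return accumulate(x)                                 # accumulate

random.seed(1)
k, rep = 8, 3
perm1 = random.sample(range(k * rep), k * rep)
perm2 = random.sample(range(k * rep), k * rep)
msg = [random.randrange(2) for _ in range(k)]
codeword = raa_encode(msg, perm1, perm2, rep)
print("message :", msg)
print("codeword:", codeword)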
Last updated:  2024-10-09
Mild Asymmetric Message Franking: Illegal-Messages-Only and Retrospective Content Moderation
Zhengan Huang, Junzuo Lai, Gongxian Zeng, and Jian Weng
Many messaging platforms have integrated end-to-end (E2E) encryption into their services. This widespread adoption of E2E encryption has triggered a technical tension between user privacy and illegal content moderation. The existing solutions either support only one of unframeability and deniability, or they are prone to abuse (the moderator can perform content moderation for all messages, whether illegal or not), or they lack mechanisms for retrospective content moderation. To address the above issues, we introduce a new primitive called \emph{mild asymmetric message franking} (MAMF) to establish illegal-messages-only and retrospective content moderation for messaging systems, supporting unframeability and deniability simultaneously. We provide a framework to construct MAMF, leveraging two new building blocks, which might be of independent interest.
Last updated:  2024-10-09
Tighter Proofs for PKE-to-KEM Transformation in the Quantum Random Oracle Model
Jinrong Chen, Yi Wang, Rongmao Chen, Xinyi Huang, and Wei Peng
In this work, we provide new, tighter proofs for the $T_{RH}$-transformation by Jiang et al. (ASIACRYPT 2023), which converts OW-CPA secure PKEs into KEMs with IND-1CCA security, a variant of typical IND-CCA security where only a single decapsulation query is allowed. Such KEMs are efficient and have been shown sufficient for real-world applications by Huguenin-Dumittan and Vaudenay at EUROCRYPT 2022. We reprove Jiang et al.'s $T_{RH}$-transformation in both the random oracle model (ROM) and the quantum random oracle model (QROM), for the case where the underlying PKE is rigid deterministic. In both ROM and QROM models, our reductions achieve security loss factors of $\mathcal{O}(1)$, significantly improving Jiang et al.'s results which have security loss factors of $\mathcal{O}(q)$ in the ROM and $\mathcal{O}(q^2)$ in the QROM respectively. Notably, central to our tight QROM reduction is a new tool called ''reprogram-after-measure'', which overcomes the reduction loss posed by oracle reprogramming in QROM proofs. This technique may be of independent interest and useful for achieving tight QROM proofs for other post-quantum cryptographic schemes. We remark that our results also improve the reduction tightness of the $T_{H}$-transformation (which also converts PKEs to KEMs) by Huguenin-Dumittan and Vaudenay (EUROCRYPT 2022), as Jiang et al. provided a tight reduction from $T_H$-transformation to $T_{RH}$-transformation (ASIACRYPT 2023).
Last updated:  2024-10-09
NeutronNova: Folding everything that reduces to zero-check
Abhiram Kothapalli and Srinath Setty
We introduce NeutronNova, a new folding scheme for the zero-check relation: an instance-witness pair is in the zero-check relation if a corresponding multivariate polynomial evaluates to zero for all inputs over a suitable Boolean hypercube. The folding scheme is a two-round protocol, and it internally invokes a \emph{single} round of the sum-check protocol. The folding scheme is more efficient than prior state-of-the-art schemes and directly benefits from recent improvements to the sum-check prover. The prover’s work is the cost to commit to a witness and field operations in a single round of the sum-check protocol. So, if the witness contains "small" field elements, the prover only commits to "small" field elements. The verifier’s work is a constant number of group scalar multiplications, field operations, and hash computations. Moreover, the folding scheme generalizes to fold multiple instances at once and requires only $\log{n}$ rounds of the sum-check protocol, where $n$ is the number of instances folded. As a corollary, we provide a folding scheme for any relation R for which there is a reduction of knowledge (RoK) from $\mathcal{R}$ to one or more instance-witness pairs in the zero-check relation. Such RoKs appear implicitly in prior lookup arguments (e.g., Lasso) and high-performance zkVMs for RISC-V (e.g., Jolt). We formalize these RoKs for several relations including indexed lookups, grand products, and CCS (which generalizes Plonkish, AIR, and R1CS). These are simple and constant round RoKs that leverage interaction to perform randomized checks on committed witness elements. Instead of proving these resulting zero-check instances as is done in prior proof systems such as Jolt, NeutronNova provides the more efficient option of continual folding of zero-check instances into a single running instance.
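Since both the zero-check relation and the folding scheme revolve around the sum-check protocol, a toy transcript may help fix ideas. The sketch below runs the textbook sum-check for a multilinear polynomial given by its evaluation table over the Boolean hypercube, checking that its hypercube sum equals a claimed value (zero here); the zero-check relation itself (vanishing at every hypercube point) is reduced to such sum-checks via a random equality polynomial, which the toy omits. The field and table size are arbitrary toy choices; this is not NeutronNova's protocol.

# Textbook sum-check for a multilinear polynomial given by its evaluation
# table over {0,1}^ell, over a toy prime field.  Not the paper's protocol.
import random

P = 2**31 - 1   # toy prime modulus

def fold(table, r):
    """Fix the first variable to r: returns the table of f(r, x2, ..., x_ell)."""
    half = len(table) // 2
    return [((1 - r) * table[i] + r * table[half + i]) % P for i in range(half)]

def sumcheck(table, claimed_sum):
    ell = len(table).bit_length() - 1
    current = claimed_sum
    for _ in range(ell):
        half = len(table) // 2
        g0 = sum(table[:half]) % P          # g(0): sum over the half with x1 = 0
        g1 = sum(table[half:]) % P          # g(1): sum over the half with x1 = 1
        if (g0 + g1) % P != current:        # verifier's round check
            return False
        r = random.randrange(P)             # verifier's random challenge
        current = ((1 - r) * g0 + r * g1) % P   # g(r) for the degree-1 round polynomial
        table = fold(table, r)
    return table[0] % P == current          # final oracle query f(r1, ..., r_ell)

ell = 3
table = [random.randrange(P) for _ in range(2**ell - 1)]
table.append((-sum(table)) % P)             # force the hypercube sum to be zero
print("sum-check for claimed sum 0 accepted:", sumcheck(table, 0))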
Last updated:  2024-10-09
Nebula: Efficient read-write memory and switchboard circuits for folding schemes
Arasu Arun and Srinath Setty
Folding schemes enable prover-efficient incrementally verifiable computation (IVC), where a proof is generated step-by-step, resulting in a space-efficient prover that naturally supports continuations. These attributes make them a promising choice for proving long-running machine executions (popularly, "zkVMs"). A major problem is designing an efficient read-write memory. Another challenge is overheads incurred by unused machine instructions when incrementally proving a program execution step. Nebula addresses these with new techniques that can be paired with modern folding schemes. First, we introduce commitment-carrying IVC, where a proof carries an incremental commitment to the prover’s non-deterministic advice provided at different steps. Second, we show how this unlocks efficient read-write memory (which implies indexed lookups) with a cost-profile identical to that of non-recursive arguments. Third, we provide a new universal "switchboard" circuit construction that combines circuits of different instructions such that one can "turn off" uninvoked circuit elements and constraints, offering a new way to achieve pay-per-use prover costs. We implement a prototype of a Nebula-based zkVM for the Ethereum virtual machine (EVM). We find that Nebula’s techniques qualitatively provide a $30\times$ smaller constraint system to represent the EVM over standard memory-checking techniques, and lead to over $260\times$ faster proof generation for the standard ERC20 token transfer transaction.
Last updated:  2024-10-09
Predicting truncated multiple matrix congruential generators with unknown parameters
Changcun Wang and Zhaopeng Dai
Multiple matrix congruential generators are an important class of pseudorandom number generators. This paper studies the predictability of a class of truncated multiple matrix congruential generators with unknown parameters. Given a few truncated digits of high-order bits or low-order bits output by a multiple matrix congruential generator, we give a method based on lattice reduction to recover the parameters and the initial state of the generator.
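As background, a matrix congruential generator iterates a linear map on a state vector modulo $m$ and outputs only some high-order bits of each coordinate. The toy sketch below shows this truncation for a single-matrix, order-one recurrence; the "multiple" matrix generators studied in the paper are more general (several matrices / a higher-order recurrence), which the sketch does not reproduce, and all parameters are arbitrary toy choices.

# Toy truncated matrix congruential generator: s_{i+1} = A s_i (mod m),
# outputting only the high-order bits of each coordinate.  Single matrix,
# order-one recurrence; the paper's "multiple" generators are more general.
import random

m = 2**32
dim = 3
drop = 16                                  # number of low-order bits hidden

random.seed(2)
A = [[random.randrange(m) for _ in range(dim)] for _ in range(dim)]
state = [random.randrange(m) for _ in range(dim)]

def step(A, s):
    return [sum(A[i][j] * s[j] for j in range(dim)) % m for i in range(dim)]

outputs = []
for _ in range(5):
    state = step(A, state)
    outputs.append([x >> drop for x in state])   # truncated high-order bits

for row in outputs:
    print(row)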
Last updated:  2024-10-08
Boosting SNARKs and Rate-1 Barrier in Arguments of Knowledge
Jiaqi Cheng and Rishab Goyal
We design a generic compiler to boost any non-trivial succinct non-interactive argument of knowledge (SNARK) to full succinctness. Our results come in two flavors: For any constant $\epsilon > 0$, any SNARK with proof size $|\pi| < \frac{|\omega|}{\lambda^\epsilon} + \mathsf{poly}(\lambda, |x|)$ can be upgraded to a fully succinct SNARK, where all system parameters (such as proof/CRS sizes and setup/verifier run-times) grow as fixed polynomials in $\lambda$, independent of witness size. Under an additional assumption that the underlying SNARK has an \emph{efficient} knowledge extractor, we further improve our result to upgrade any non-trivial SNARK. For example, we show how to design fully succinct SNARKs from SNARKs with proofs of length $|\omega| - \Omega(\lambda)$, or $\frac{|\omega|}{1 + \epsilon} + \mathsf{poly}(\lambda, |x|)$, for any constant $\epsilon > 0$. Our result reduces the long-standing challenge of designing fully succinct SNARKs to designing \emph{arguments of knowledge that beat the trivial construction}. It also establishes optimality of rate-1 arguments of knowledge (such as NIZKs [Gentry-Groth-Ishai-Peikert-Sahai-Smith; JoC'15] and BARGs [Devadas-Goyal-Kalai-Vaikuntanathan, Paneth-Pass; FOCS'22]), and suggests any further improvement is tantamount to designing fully succinct SNARKs, thus requires bypassing established black-box barriers [Gentry-Wichs; STOC'11].
Last updated:  2024-10-08
Cryptography and Collective Power
Leah Namisa Rosenbloom
This paper extends the dialogue of "The Moral Character of Cryptographic Work" (Rogaway, 2015) and "Crypto for the People" (Kamara, 2020) by examining the relationship between cryptography and collective power. In particular, it considers cryptography in the context of grassroots organizing—a process by which marginalized people build collective power toward effecting systemic change—and illustrates the ways in which cryptography has both helped and hindered organizing efforts. Based on the synthesis of dozens of qualitative studies, scholarly critiques, and historical artifacts, this work introduces a paradigm shift for cryptographic protocol design—general principles and recommendations for building cryptography to address the lived needs and experiences of marginalized people. Finally, it calls for abolition cryptography: cryptographic theories and practices which dismantle harmful systems and replace them with systems that sustain human lives and livelihoods.
Last updated:  2025-04-20
Juggernaut: Efficient Crypto-Agnostic Byzantine Agreement
Daniel Collins, Yuval Efron, and Jovan Komatovic
It is well known that a trusted setup allows one to solve the Byzantine agreement problem in the presence of $t<n/2$ corruptions, bypassing the setup-free $t<n/3$ barrier. Alas, the overwhelming majority of protocols in the literature have the caveat that their security crucially hinges on the security of the cryptography and setup, to the point where if the cryptography is broken, even a single corrupted party can violate the security of the protocol. Thus these protocols provide higher corruption resilience ($n/2$ instead of $n/3$) for the price of increased assumptions. Is this trade-off necessary? We further the study of crypto-agnostic Byzantine agreement among $n$ parties that answers this question in the negative. Specifically, let $t_s$ and $t_i$ denote two parameters such that (1) $2t_i + t_s < n$, and (2) $t_i \leq t_s < n/2$. Crypto-agnostic Byzantine agreement ensures agreement among honest parties if (1) the adversary is computationally bounded and corrupts up to $t_s$ parties, or (2) the adversary is computationally unbounded and corrupts up to $t_i$ parties, and is moreover given all secrets of all parties established during the setup. We propose a compiler that transforms any pair of resilience-optimal Byzantine agreement protocols in the authenticated and information-theoretic setting into one that is crypto-agnostic. Our compiler has several attractive qualities, including using only $O(\lambda n^2)$ bits over the two underlying Byzantine agreement protocols, and preserving round and communication complexity in the authenticated setting. In particular, our results improve the state-of-the-art in bit complexity by at least two factors of $n$ and provide either early stopping (deterministic) or expected constant round complexity (randomized). We therefore provide fallback security for authenticated Byzantine agreement for free for $t_i \leq n/4$.
Last updated:  2025-04-06
Pacmann: Efficient Private Approximate Nearest Neighbor Search
Mingxun Zhou, Elaine Shi, and Giulia Fanti
We propose a new private Approximate Nearest Neighbor (ANN) search scheme named Pacmann that allows a client to perform ANN search in a vector database without revealing the query vector to the server. Unlike prior constructions that run encrypted search on the server side, Pacmann carefully offloads limited computation and storage to the client, no longer requiring computationally-intensive cryptographic techniques. Specifically, clients run a graph-based ANN search, where in each hop on the graph, the client privately retrieves local graph information from the server. To make this efficient, we combine two ideas: (1) we adapt a leading graph-based ANN search algorithm to be compatible with private information retrieval (PIR) for subgraph retrieval; (2) we use a recent class of PIR schemes that trade offline preprocessing for online computational efficiency. Pacmann achieves significantly better search quality than the state-of-the-art private ANN search schemes, showing up to 2.5$\times$ better search accuracy on real-world datasets than prior work and reaching 90\% quality of a state-of-the-art non-private ANN algorithm. Moreover on large datasets with up to 100 million vectors, Pacmann shows better scalability than prior private ANN schemes with up to $63\%$ reduction in computation time and $24\%$ reduction in overall latency.
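The client-side pattern the abstract describes (greedy hops on a proximity graph, fetching each node's neighbour list through a private lookup) can be sketched as follows. Here private_fetch is only a stand-in for a PIR query, and the graph, metric and starting point are toy choices, so this reflects the overall loop rather than Pacmann's actual algorithms.

# Toy client-side loop for graph-based ANN where neighbour lists are fetched
# "privately".  private_fetch is a placeholder for a PIR query; graph, metric
# and starting point are arbitrary toy choices.
import random

random.seed(3)
dim, n_points, degree = 4, 200, 8
points = [[random.random() for _ in range(dim)] for _ in range(n_points)]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Server-side toy proximity graph: each node linked to its nearest neighbours.
graph = [sorted(range(n_points), key=lambda j: dist2(points[i], points[j]))[1:degree + 1]
         for i in range(n_points)]

def private_fetch(node_id):
    """Placeholder for a PIR retrieval of one node's neighbour list and vectors."""
    return [(j, points[j]) for j in graph[node_id]]

def greedy_search(query, start, max_hops=30):
    current, best = start, dist2(points[start], query)
    for _ in range(max_hops):
        improved = False
        for j, vec in private_fetch(current):          # one private lookup per hop
            d = dist2(vec, query)
            if d < best:
                current, best, improved = j, d, True
        if not improved:
            break
    return current, best

query = [random.random() for _ in range(dim)]
result, d = greedy_search(query, start=0)
exact = min(range(n_points), key=lambda j: dist2(points[j], query))
print("greedy result:", result, "exact nearest:", exact)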
Last updated:  2025-03-24
Simplified PIR and CDS Protocols and Improved Linear Secret-Sharing Schemes
Bar Alon, Amos Beimel, and Or Lasri
We consider 3 related cryptographic primitives, private information retrieval (PIR) protocols, conditional disclosure of secrets (CDS) protocols, and secret-sharing schemes; these primitives have many applications in cryptography. We study these primitives requiring information-theoretic security. The complexity of the three primitives has been dramatically improved in the last few years and they are closely related, i.e., the 2-server PIR protocol of Dvir and Gopi (J. ACM 2016) was transformed to construct the CDS protocols of Liu, Vaikuntanathan, and Wee (CRYPTO 2017, Eurocrypt 2018) and these CDS protocols are the main ingredient in the construction of the best-known secret-sharing schemes. To date, the message size required in PIR and CDS protocols and the share size required in secret-sharing schemes are not understood and there are big gaps between their upper bounds and lower bounds. The goal of this paper is to try to better understand the upper bounds by simplifying current constructions and improving their complexity. We obtain the following two independent results: - We simplify, abstract, and generalize the 2-server PIR protocol of Dvir and Gopi (J. ACM 2016), the recent multi-server PIR protocol of Ghasemi, Kopparty, and Sudan (STOC 2025), and the 2-server and multi-server CDS protocols of Liu et al. (CRYPTO 2017, Eurocrypt 2018) and Beimel, Farràs, and Lasri (TCC 2023). In particular, we present one PIR protocol generalizing both the 2-server and multi-server PIR protocols. This is done by considering a new variant of matching vectors and by using a general share conversion. In addition to simplifying previous protocols, our 2-server protocols can use matching vectors over any $m$ that is the product of two distinct primes. Our construction does not improve the communication complexity of PIR and CDS protocols; however, construction of better matching vectors over $\textit{any}$ $m$ that is the product of two distinct primes will improve the communication complexity of 2-server PIR and CDS protocols. - In many applications of secret-sharing schemes it is important that the scheme is linear, e.g., by using the fact that parties can locally add shares of two secrets and obtain shares of the sum of the secrets. We provide a construction of linear secret-sharing schemes for $n$-party access structures with improved share size of $2^{0.7563n}$. Previously, the best share size for linear secret-sharing schemes was $2^{0.7576n}$ and it is known that for most $n$-party access structures the shares' size is at least $2^{0.5n}$. This result is achieved by a reduction to unbalanced CDS protocols (compared to balanced CDS protocols in previous constructions).
Last updated:  2024-10-08
On the security of the initial tropical Stickel protocol and its modification based on Linde-de la Puente matrices
Sulaiman Alhussaini and Sergeĭ Sergeev
Recently, a more efficient attack on the initial tropical Stickel protocol has been proposed, different from the previously known Kotov-Ushakov attack, yet equally guaranteed to succeed. Given that the Stickel protocol can be implemented in various ways, such as utilizing platforms beyond the tropical semiring or employing alternative commutative matrix ``classes'' instead of polynomials, we firstly explore the generalizability of this new attack across different implementations of the Stickel protocol. We then conduct a comprehensive security analysis of a tropical variant that successfully resists this new attack, namely the Stickel protocol based on Linde-de la Puente (LdlP) matrices. Additionally, we extend the concept of LdlP matrices beyond the tropical semiring, generalizing it to a broader class of semirings.
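For readers unfamiliar with the tropical setting, the sketch below spells out min-plus matrix arithmetic and a toy Stickel-style key exchange in it: both parties evaluate secret tropical polynomials in two public matrices, and the keys agree because polynomials in the same matrix commute. Matrix sizes, entry ranges and polynomial degrees are arbitrary toy choices, and this is only the plain tropical exchange, not the Linde-de la Puente variant studied in the paper.

# Toy tropical (min,+) Stickel-style key exchange.  All parameters are toy
# choices; this illustrates the algebra, not the Linde-de la Puente variant.
import random

INF = float("inf")

def trop_mul(A, B):
    """Tropical matrix product: C_ij = min_k (A_ik + B_kj)."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trop_add(A, B):                      # tropical "sum" = entrywise min
    return [[min(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def trop_scalar(c, A):                   # tropical scalar product = add c to all entries
    return [[c + a for a in row] for row in A]

def trop_identity(n):
    return [[0 if i == j else INF for j in range(n)] for i in range(n)]

def trop_poly(coeffs, A):
    """Evaluate p(A) = min_i (coeffs[i] + A^i) in the tropical semiring."""
    n = len(A)
    power, result = trop_identity(n), None
    for c in coeffs:
        term = trop_scalar(c, power)
        result = term if result is None else trop_add(result, term)
        power = trop_mul(power, A)
    return result

random.seed(4)
n = 3
A = [[random.randint(-10, 10) for _ in range(n)] for _ in range(n)]
B = [[random.randint(-10, 10) for _ in range(n)] for _ in range(n)]

# Alice's and Bob's secret tropical polynomials (degree 2, random coefficients).
p1, p2 = [random.randint(-5, 5) for _ in range(3)], [random.randint(-5, 5) for _ in range(3)]
q1, q2 = [random.randint(-5, 5) for _ in range(3)], [random.randint(-5, 5) for _ in range(3)]

U = trop_mul(trop_poly(p1, A), trop_poly(p2, B))     # Alice -> Bob
V = trop_mul(trop_poly(q1, A), trop_poly(q2, B))     # Bob -> Alice

K_alice = trop_mul(trop_mul(trop_poly(p1, A), V), trop_poly(p2, B))
K_bob   = trop_mul(trop_mul(trop_poly(q1, A), U), trop_poly(q2, B))
assert K_alice == K_bob                              # p(A), q(A) commute, as do p(B), q(B)
print("shared tropical key:", K_alice)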
Last updated:  2025-03-04
An Undetectable Watermark for Generative Image Models
Sam Gunn, Xuandong Zhao, and Dawn Song
We present the first undetectable watermarking scheme for generative image models. Undetectability ensures that no efficient adversary can distinguish between watermarked and un-watermarked images, even after making many adaptive queries. In particular, an undetectable watermark does not degrade image quality under any efficiently computable metric. Our scheme works by selecting the initial latents of a diffusion model using a pseudorandom error-correcting code (Christ and Gunn, 2024), a strategy which guarantees undetectability and robustness. We experimentally demonstrate that our watermarks are quality-preserving and robust using Stable Diffusion 2.1. Our experiments verify that, in contrast to every prior scheme we tested, our watermark does not degrade image quality. Our experiments also demonstrate robustness: existing watermark removal attacks fail to remove our watermark from images without significantly degrading the quality of the images. Finally, we find that we can robustly encode 512 bits in our watermark, and up to 2500 bits when the images are not subjected to watermark removal attacks. Our code is available at https://github.com/XuandongZhao/PRC-Watermark.
Last updated:  2025-02-24
Secret Sharing with Publicly Verifiable Deletion
Jonathan Katz and Ben Sela
Certified deletion, an inherently quantum capability, allows a party holding a quantum state to prove that they have deleted the information contained in that state. Bartusek and Raizes (Crypto 2024) recently studied certified deletion in the context of secret sharing schemes, and showed constructions with privately verifiable proofs of deletion that can be verified only by the dealer who generated the shares. We give two constructions of secret sharing schemes with publicly verifiable certified deletion. Our first construction is based on the post-quantum security of the LWE problem, and each share requires a number of qubits that is linear in the size of an underlying classical secret sharing scheme for the same set of authorized parties. Our second construction is based on a more general assumption—the existence of post-quantum one-way functions—but requires an asymptotically larger number of qubits relative to the share size of the underlying classical scheme.
Last updated:  2025-01-30
DeepFold: Efficient Multilinear Polynomial Commitment from Reed-Solomon Code and Its Application to Zero-knowledge Proofs
Yanpei Guo, Xuanming Liu, Kexi Huang, Wenjie Qu, Tianyang Tao, and Jiaheng Zhang
This work presents Deepfold, a novel multilinear polynomial commitment scheme (PCS) based on Reed-Solomon code that offers optimal prover time and a more concise proof size. For the first time, Deepfold adapts the FRI-based multilinear PCS to the list decoding radius setting, requiring significantly fewer query repetitions and thereby achieving a 3$\times$ reduction in proof size compared to Basefold (Crypto'24), while preserving its advantages in prover time. Compared with PolyFRIM (USENIX Security'24), Deepfold achieves a 2$\times$ improvement in prover time, verifier time, and proof size. Another contribution of this work is a batch evaluation scheme, which enables the FRI-based multilinear PCS to handle polynomials whose size is not a power of two more efficiently. Our scheme has broad applications in zk-SNARKs, since PCS is a key component in modern zk-SNARK constructions. For example, when replacing the PCS component of Virgo (S&P'20) with Deepfold, our scheme achieves a 2.5$\times$ faster prover time when proving the knowledge of a Merkle tree with 256 leaves, while maintaining the similar proof size. When replacing the PCS component of HyperPlonk (Eurocrypt'23) with Deepfold, our scheme has about 3.6$\times$ faster prover time. Additionally, when applying our arbitrary length input commitment to verifiable matrix multiplications for matrices of size 1200$\times$768 and 768$\times$2304, which are actual use cases in GPT-2 model, the performance showcases a 2.4$\times$ reduction in prover time compared to previous approaches.
Last updated:  2024-10-08
Bit-fixing Correlation Attacks on Goldreich's Pseudorandom Generators
Ximing Fu, Mo Li, Shihan Lyu, and Chuanyi Liu
We introduce a powerful attack, termed the bit-fixing correlation attack, on Goldreich's pseudorandom generators (PRGs), specifically focusing on those based on the $\mathsf{XOR}\text{-}\mathsf{THR}$ predicate. By exploiting the bit-fixing correlation property, we derive correlation equations with high bias by fixing certain bits. Utilizing two solvers to handle these high-bias correlation equations, we present inverse attacks on $\mathsf{XOR}\text{-}\mathsf{THR}$ based PRGs within the complexity class $\mathsf{NC}^{0}$. We efficiently attack the $\mathsf{XOR}\text{-}\mathsf{MAJ}$ challenges (STOC 2016), demonstrating that the $\mathsf{XOR}\text{-}\mathsf{MAJ}$ predicate fails to be $s$-pseudorandom with $n$-bit security even for a stretch factor $s = 1$, where $n$ is the size of the secret seed. For instance, a challenge of $n=42$ and $s = 1$ can be broken using approximately $2^{28}$ calls to Gaussian elimination. We extend our attack to an instance used in constructing silent Oblivious Transfer (OT) protocols (Eurocrypt 2024), with $n=256$. This attack can be realized with approximately $2^{29}$ calls to Gaussian elimination, and we have implemented this attack on a cluster of 32 CPU cores, successfully recovering the secret seed in 5.5 hours. Furthermore, we extend our results to general Piecewise Symmetric Predicates of the form $\mathsf{XOR}\text{-}\mathsf{X}$, showing that $\mathsf{XOR}\text{-}\mathsf{MAJ}$ is far from a well-designed predicate against bit-fixing correlation attacks. With marginal modification, our attack can also be adapted to the FiLIP cipher instantiated with $\mathsf{THR}$-related predicates, making it effective against most FiLIP instances. For example, the FiLIP cipher instantiated on $\mathsf{XOR} \text{-}\mathsf{THR}$ with key size $982$ can be broken using approximately $2^{51}$ calls to Gaussian elimination. Based on these findings, we show that the traditional security assumptions for Goldreich's PRGs---namely, (a) $\Omega(s)$-resilience and (b) algebraic immunity---are insufficient to guarantee pseudorandomness or one-wayness.
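To fix notation, the sketch below instantiates a toy Goldreich-style PRG with an $\mathsf{XOR}\text{-}\mathsf{THR}$ predicate: each output bit applies a fixed predicate (the XOR of a few seed bits, XORed with a threshold function of a few others) to a randomly chosen subset of seed positions. Seed size, locality and the threshold are toy values far below the challenge parameters discussed in the paper.

# Toy Goldreich-style PRG with an XOR-THR predicate.
# Each output bit is XOR(x_{i1},...,x_{ia}) ^ THR_t(x_{j1},...,x_{jb}) applied
# to a fixed random subset of seed positions.  All parameters are toy values.
import random

n = 32              # seed length
stretch = 2         # output length = stretch * n
a, b, t = 3, 4, 2   # XOR part size, THR part size, threshold

random.seed(5)
# Public "wiring": one subset of a + b distinct seed positions per output bit.
wiring = [random.sample(range(n), a + b) for _ in range(stretch * n)]

def predicate(bits):
    xor_part = 0
    for v in bits[:a]:
        xor_part ^= v
    thr_part = 1 if sum(bits[a:]) >= t else 0     # 1 iff at least t of the b bits are set
    return xor_part ^ thr_part

def goldreich_prg(seed):
    return [predicate([seed[i] for i in subset]) for subset in wiring]

seed = [random.randrange(2) for _ in range(n)]
output = goldreich_prg(seed)
print("".join(map(str, output)))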
Last updated:  2025-02-15
Stateful Communication with Malicious Parties
Chen-Da Liu-Zhang, Christopher Portmann, and Guilherme Rito
Cryptography's most common use is secure communication---e.g. Alice can use encryption to hide the contents of the messages she sends to Bob (confidentiality) and can use signatures to assure Bob she sent these messages (authenticity). While one typically considers stateless security guarantees---for example a channel that Alice can use to send messages securely to Bob---one can also consider stateful ones---e.g. an interactive conversation between Alice, Bob and their friends where participation is dynamic: new parties can join the conversation and existing ones can leave. A natural application of such stateful guarantees are messengers. We introduce a modular abstraction for stateful group communication, called Chat Sessions, which captures security guarantees that are achievable in fully asynchronous settings when one makes no party-honesty assumptions: anyone (including group members themselves) can be fully dishonest. Our abstraction is parameterized by (and enforces) a permissions policy that defines what operations parties have the right to perform in a given chat state. We show how to construct, use and extend Chat Sessions. Our construction is fully decentralized (in particular, it does not need a delivery service), does not incur additional interaction between chat participants (other than what is inherent from chat operations like sending a message) and liveness depends solely on messages being delivered. A key feature of Chat Sessions is modularity: we extend Chat Sessions to capture authenticity, confidentiality, anonymity and off-the-record, and show our construction provides these guarantees if the underlying communication channels do too. We complement this by proving Maurer et al.'s Multi-Designated Receiver Public Key Encryption scheme (Eurocrypt '22) constructs matching communication channels (i.e. with all these guarantees). We use Chat Sessions to construct UatChat: a simple and equally modular messaging application. Since UatChat preserves each of the guarantees mentioned above, this means we give the first fully Off-The-Record messaging application: parties can plausibly deny not only having sent any messages but even being aware of a chat's existence.
Last updated:  2024-10-08
DART: Distributed argument of knowledge for rough terrains
Steve Thakur
We describe a fully distributed KZG-based Snark instantiable with any pairing-friendly curve with a sufficiently large scalar field. In particular, the proof system is compatible with Cocks-Pinch or Brezing-Weng outer curves to the widely used curves such as secp256k1, Ed25519, BLS12-381 and BN254. This allows us to retain the fully parallelizable nature and the O(1) communication complexity of Pianist ([LXZ+23]) in conjunction with circumventing the huge overhead of non-native arithmetic for prominent use cases such as scalar multiplications and/or pairings for Bitcoin (secp256k1), Cosmos (Ed25519) and Ethereum PoS (BLS12-381) signatures. As in [LXZ+23], we use a bivariate KZG polynomial commitment scheme, which entails a universal updatable CRS linear in the circuit size. The proof size is constant, as are the verification time - dominated by three pairings - and the communication complexity between the Prover machines. With a 9-limb pairing-friendly outer curve to Ed25519, the proof size is 5 KB. With the same curve, the communication complexity for each worker node is 5 KB and that of the master node is 5 KB per machine. The effective Prover time for a circuit of size T ·M on M machines is O(T · log(T)+M · log(M)). The work of each Prover machine is dominated by the MSMs of length T in the group G1 and a single sum of univariate polynomial products computed via multimodular FFTs of size 2T. Likewise, the work of the master node is dominated by the MSMs of length M in the group G1 and a single sum of univariate polynomial products via multimodular FFTs of size 2M.
Last updated:  2024-10-13
MPC-in-the-Head Framework without Repetition and its Applications to the Lattice-based Cryptography
Weihao Bai, Long Chen, Qianwen Gao, and Zhenfeng Zhang
The MPC-in-the-Head framework has been proposed as a solution for Non-Interactive Zero-Knowledge Arguments of Knowledge (NIZKAoK) due to its efficient proof generation. However, most existing NIZKAoK constructions using this approach require multiple MPC evaluations to achieve negligible soundness error, resulting in proof size and time that are asymptotically at least λ times the size of the circuit of the NP relation. In this paper, we propose a novel method to eliminate the need for repeated MPC evaluations, resulting in a NIZKAoK protocol for any NP relation that we call Diet. The proof size and time of Diet are asymptotically only polylogarithmic with respect to the size of the circuit C of the NP relation but are independent of the security parameter λ. Hence, both the proof size and time can be significantly reduced. Moreover, Diet offers promising concrete efficiency for proving Learning With Errors (LWE) problems and its variants. Our solution provides significant advantages over other schemes in terms of both proof size and proof time, when considering both factors together. Specifically, Diet is a promising method for proving knowledge of secret keys for lattice-based key encapsulation mechanisms (KEMs) such as Frodo and Kyber, offering a practical solution to future post-quantum certificate management. For Kyber 512, our implementation achieves an online proof size of 83.65 kilobytes (KB) with a preprocessing overhead of 152.02KB. The implementation is highly efficient, with an online proof time of only 0.68 seconds and a preprocessing time of 0.81 seconds. Notably, our approach provides the first reported implementation of proving knowledge of secret keys for Kyber 512 using post-quantum primitives-based zero-knowledge proofs.
Last updated:  2025-02-05
Matching radar signals and fingerprints with MPC
Benjamin Hansen Mortensen, Mathias Karsrud Nordal, and Martin Strand
Vessels can be recognised by their navigation radar due to the characteristics of the emitted radar signal. This is particularly useful if one wants to build situational awareness without revealing one's own presence. Most countries maintain databases of radar fingerprints but will not readily share these due to national security regulations. Sharing of such information will generally require some form of information exchange agreement. However, all parties in a coalition benefit from correct identification. We use secure multiparty computation to match a radar signal measurement against secret databases and output plausible matches with their likelihoods. We also provide a demonstrator using MP-SPDZ.
Last updated:  2025-03-03
A Systematic Study of Sparse LWE
Aayush Jain, Huijia Lin, and Sagnik Saha
In this work, we introduce the sparse LWE assumption, an assumption that draws inspiration from both Learning with Errors (Regev JACM 10) and Sparse Learning Parity with Noise (Alekhnovich FOCS 02). Exactly like LWE, this assumption posits indistinguishability of $(\mathbf{A}, \mathbf{s}\mathbf{A}+\mathbf{e} \mod p)$ from $(\mathbf{A}, \mathbf{u})$ for a random $\mathbf{u}$ where the secret $\mathbf{s}$, and the error vector $\mathbf{e}$ is generated exactly as in LWE. However, the coefficient matrix $\mathbf{A}$ in sparse LPN is chosen randomly from $\mathbb{Z}^{n\times m}_{p}$ so that each column has Hamming weight exactly $k$ for some small $k$. We study the problem in the regime where $k$ is a constant or polylogarithmic. The primary motivation for proposing this assumption is efficiency. Compared to LWE, the samples can be computed and stored with roughly $O(n/k)$ factor improvement in efficiency. Our results can be summarized as: Foundations: We show several properties of sparse LWE samples, including: 1) The hardness of LWE/LPN with dimension $k$ implies the hardness of sparse LWE/LPN with sparsity $k$ and arbitrary dimension $n \ge k$. 2) When the number of samples $m=\Omega(n \log p)$, length of the shortest vector of a lattice spanned by rows of a random sparse matrix is large, close to that of a random dense matrix of the same dimension (up to a small constant factor). 3) Trapdoors with small polynomial norm exist for random sparse matrices with dimension $n \times m = O(n \log p)$. 4) Efficient algorithms for sampling such matrices together with trapdoors exist when the dimension is $n \times m = \widetilde{\mathcal{O}}(n^2)$. Cryptanalysis: We examine the suite of algorithms that have been used to break LWE and sparse LPN. While naively many of the attacks that apply to LWE do not exploit sparsity, we consider natural extensions that make use of sparsity. We propose a model to capture all these attacks. Using this model we suggest heuristics on how to identify concrete parameters. Our initial cryptanalysis suggests that concretely sparse LWE with a modest $k$ and slightly bigger dimension than LWE will satisfy similar level of security as LWE with similar parameters. Applications: We show that the hardness of sparse LWE implies very efficient homomorphic encryption schemes for low degree computations. We obtain the first secret key Linearly Homomorphic Encryption (LHE) schemes with slightly super-constant, or even constant, overhead, which further has applications to private information retrieval, private set intersection, etc. We also obtain secret key homomorphic encryption for arbitrary constant-degree polynomials with slightly super-constant, or constant, overhead. We stress that our results are preliminary. However, our results make a strong case for further investigation of sparse LWE.
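The assumption as stated can be sampled directly: every column of $\mathbf{A}$ receives exactly $k$ nonzero entries, the secret is uniform, and the error is small. The sketch below does exactly this with toy parameters; the centred-binomial error is an illustrative stand-in for whatever error distribution one would pair with LWE, not a recommendation from the paper.

# Toy sparse-LWE sample: A in Z_p^{n x m} with every column of Hamming weight
# exactly k, and b = s*A + e mod p.  Parameters and the centred-binomial error
# are illustrative stand-ins only.
import random

random.seed(6)
p, n, m, k = 3329, 64, 128, 4

def sparse_column(n, k, p):
    col = [0] * n
    for i in random.sample(range(n), k):        # exactly k nonzero positions
        col[i] = random.randrange(1, p)
    return col

columns = [sparse_column(n, k, p) for _ in range(m)]   # the columns of A
s = [random.randrange(p) for _ in range(n)]            # uniform secret
e = [sum(random.randrange(2) for _ in range(4)) -
     sum(random.randrange(2) for _ in range(4)) for _ in range(m)]   # small error

b = [(sum(s[i] * col[i] for i in range(n)) + e[j]) % p
     for j, col in enumerate(columns)]

print("first few samples (column weight", k, "):", b[:5])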
Last updated:  2024-10-08
A Note on ``Privacy-Preserving and Secure Cloud Computing: A Case of Large-Scale Nonlinear Programming''
Zhengjun Cao and Lihua Liu
We show that the outsourcing algorithm for the case of linear constraints [IEEE Trans. Cloud Comput., 2023, 11(1), 484-498] cannot preserve output privacy, because it relies only on a simple translation transformation. We also suggest a remedy: adopt a hybrid transformation that combines the usual translation transformation with a resizing transformation so as to protect output privacy.
Last updated:  2025-04-09
Fully Homomorphic Encryption for Cyclotomic Prime Moduli
Robin Geelen and Frederik Vercauteren
This paper presents a Generalized BFV (GBFV) fully homomorphic encryption scheme that encrypts plaintext spaces of the form $\mathbb{Z}[x]/(\Phi_m(x), t(x))$ with $\Phi_m(x)$ the $m$-th cyclotomic polynomial and $t(x)$ an arbitrary polynomial. GBFV encompasses both BFV where $t(x) = p$ is a constant, and the CLPX scheme (CT-RSA 2018) where $m = 2^k$ and $t(x) = x - b$ is a linear polynomial. The latter can encrypt a single huge integer modulo $\Phi_m(b)$, has much lower noise growth than BFV, but it is not known to be efficiently bootstrappable. We show that by a clever choice of $m$ and higher degree polynomial $t(x)$, our scheme combines the SIMD capabilities of BFV with the low noise growth of CLPX, whilst still being efficiently bootstrappable. Moreover, we present parameter families that natively accommodate packed plaintext spaces defined by a large cyclotomic prime, such as the Fermat prime $\Phi_2(2^{16}) = 2^{16} + 1$ and the Goldilocks prime $\Phi_6(2^{32}) = 2^{64} - 2^{32} + 1$. These primes are often used in homomorphic encryption applications and zero-knowledge proof systems. Due to the lower noise growth, GBFV can evaluate much deeper circuits compared to native BFV in the same ring dimension. As a result, we can evaluate either larger circuits or work with smaller ring dimensions. In particular, we can natively bootstrap GBFV at 128-bit security already at ring dimension $n = 2^{14}$, which was impossible before. We implemented the GBFV scheme on top of the SEAL library and achieve a latency of only 2 seconds to bootstrap a ciphertext encrypting up to 8192 elements modulo $2^{16} + 1$.
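The two plaintext primes quoted above are easy to verify from the cyclotomic polynomials $\Phi_2(x) = x+1$ and $\Phi_6(x) = x^2 - x + 1$; the short check below only reproduces arithmetic already stated in the abstract.

# Check the two cyclotomic primes quoted in the abstract:
# Phi_2(x) = x + 1 and Phi_6(x) = x^2 - x + 1.
def phi2(x):
    return x + 1

def phi6(x):
    return x * x - x + 1

assert phi2(2**16) == 2**16 + 1                      # Fermat prime 65537
assert phi6(2**32) == 2**64 - 2**32 + 1              # Goldilocks prime
print(phi2(2**16), phi6(2**32))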
Last updated:  2024-11-21
WHIR: Reed–Solomon Proximity Testing with Super-Fast Verification
Gal Arnon, Alessandro Chiesa, Giacomo Fenzi, and Eylon Yogev
We introduce WHIR, a new IOP of proximity that offers small query complexity and exceptionally fast verification time. The WHIR verifier typically runs in a few hundred microseconds, whereas other verifiers in the literature require several milliseconds (if not much more). This significantly improves the state of the art in verifier time for hash-based SNARGs (and beyond). Crucially, WHIR is an IOP of proximity for constrained Reed–Solomon codes, which can express a rich class of queries to multilinear polynomials and to univariate polynomials. In particular, WHIR serves as a direct replacement for protocols like FRI, STIR, BaseFold, and others. Leveraging the rich queries supported by WHIR and a new compiler for multilinear polynomial IOPs, we obtain a highly efficient SNARG for generalized R1CS. As a comparison point, our techniques also yield state-of-the-art constructions of hash-based (non-interactive) polynomial commitment schemes for both univariate and multivariate polynomials (since sumcheck queries naturally express polynomial evaluations). For example, if we use WHIR to construct a polynomial commitment scheme for degree $2^{22}$, with 100 bits of security, then the time to commit and open is 1.2 seconds, the sender communicates 63 KiB to the receiver, and the opening verification time is 360 microseconds.
Last updated:  2024-10-07
Quantum Money from Class Group Actions on Elliptic Curves
Hart Montgomery and Shahed Sharif
We construct a quantum money/quantum lightning scheme from class group actions on elliptic curves over $\mathbb{F}_{p}$. Our scheme, which is based on the invariant money construction of Liu-Montgomery-Zhandry (Eurocrypt '23), is simple to describe. We believe it to be the most instantiable and well-defined quantum money construction known so far. The security of our quantum lightning construction is exactly equivalent to the (conjectured) hardness of constructing two uniform superpositions over elliptic curves in an isogeny class which is acted on simply transitively by an exponentially large ideal class group. However, we needed to advance the state of the art of isogenies in order to achieve our scheme. In particular, we show: 1. An efficient (quantum) algorithm for sampling a uniform superposition over a cryptographically large isogeny class. 2. A method for specifying polynomially many generators for the class group so that polynomial-sized products yield an exponential-sized subset of the class group, modulo a seemingly very modest assumption. Achieving these results also requires us to advance the state of the art of the (pure) mathematics of elliptic curves, and we are optimistic that the mathematical tools we developed in this paper can be used to advance isogeny-based cryptography in other ways.
Last updated:  2024-10-07
Block Ciphers in Idealized Models: Automated Proofs and New Security Results
Miguel Ambrona, Pooya Farshim, and Patrick Harasser
We develop and implement AlgoROM, a tool to systematically analyze the security of a wide class of symmetric primitives in idealized models of computation. The schemes that we consider are those that can be expressed over an alphabet consisting of XOR and function symbols for hash functions, permutations, or block ciphers. We implement our framework in OCaml and apply it to a number of prominent constructions, which include the Luby–Rackoff (LR), key-alternating Feistel (KAF), and iterated Even–Mansour (EM) ciphers, as well as substitution-permutation networks (SPN). The security models we consider are (S)PRP, and strengthenings thereof under related-key (RK), key-dependent message (KD), and more generally key-correlated (KC) attacks. Using AlgoROM, we are able to reconfirm a number of classical and previously established security theorems, and in one case we identify a gap in a proof from the literature (Connolly et al., ToSC'19). However, most results that we prove with AlgoROM are new. In particular, we obtain new positive results for LR, KAF, EM, and SPN in the above models. Our results better reflect the configurations actually implemented in practice, as they use a single idealized primitive. In contrast to many existing tools, our automated proofs do not operate in symbolic models, but rather in the standard probabilistic model for cryptography.
Last updated:  2024-10-07
Efficient Pairing-Free Adaptable k-out-of-N Oblivious Transfer Protocols
Keykhosro Khosravani, Taraneh Eghlidos, and Mohammad reza Aref
Oblivious Transfer (OT) is one of the fundamental building blocks in cryptography that enables various privacy-preserving applications. Constructing efficient OT schemes has been an active research area. This paper presents three efficient two-round pairing-free k-out-of-N oblivious transfer protocols with standard security. Our constructions follow the minimal communication pattern: the receiver sends k messages to the sender, who responds with N+k messages, achieving the lowest data transmission among pairing-free k-out-of-N OT schemes. Furthermore, our protocols support adaptivity and also enable the sender to encrypt the N messages offline, independent of the receiver's variables, offering significant performance advantages in one-sender-multiple-receiver scenarios. We provide security proofs under the Computational Diffie-Hellman (CDH) and RSA assumptions, without relying on the Random Oracle Model. Our protocols combine minimal communication rounds, adaptivity, offline encryption capability, and provable security, making them well-suited for privacy-preserving applications requiring efficient oblivious transfer. Furthermore, the first two proposed schemes require only one operation, making them ideal for resource-constrained devices.
Last updated:  2025-02-20
Halving differential additions on Kummer lines
Damien Robert and Nicolas Sarkis
We study differential addition formulas on Kummer lines that factorize through a degree $2$ isogeny $\phi$. We call the resulting formulas half differential additions: from the knowledge of $\phi(P), \phi(Q)$ and $P-Q$, the half differential addition allows one to recover $P+Q$. We explain how Mumford's theta group theory allows one, in any model of Kummer lines, to find a basis of the half differential relations. This involves studying the dimension $2$ isogeny $(P, Q) \mapsto (P+Q, P-Q)$. We then use the half differential addition formulas to build a new type of Montgomery ladder, called the half-ladder, using a time-memory trade-off. On a Montgomery curve with full rational $2$-torsion, our half ladder first builds a succession of isogeny images $P_i=\phi_i(P_{i-1})$, which only depends on the base point $P$ and not the scalar $n$, for a pre-computation cost of $2S+1m_0$ per bit. Then we use half doublings and half differential additions to compute any scalar multiplication $n \cdot P$, for a cost of $4M+2S+1m_0$ per bit. The total cost is then $4 M + 4 S + 2m_0$, even when the base point $P$ is not normalized. By contrast, the usual Montgomery ladder costs $4M + 4S + 1m + 1m_0$ per bit, for a normalized point. In the appendix, we extend our approach to higher dimensional ladders in theta coordinates or twisted theta coordinates. In dimension $2$, after a pre-computation step which depends on the base point $P$, our half ladder only costs $7M + 4S+3m_0$, compared to $10M+9S+6m_0$ for the standard ladder.
Last updated:  2025-02-27
$\mathsf{Protoss}$ Protocol for Tight Optimal Symmetric Security
Emanuele Di Giandomenico, Yong Li, and Sven Schäge
We present $\mathsf{Protoss}$, a new balanced PAKE protocol with optimal communication efficiency. Messages are only 160 bits long and the computational complexity is lower than all previous approaches. Our protocol is proven secure in the random oracle model and features a security proof in a strong security model with multiple parties and multiple sessions, while allowing for generous attack queries including multiple $\mathsf{Test}$-queries. Moreover, the proof is in the practically relevant single-bit model (that is harder to achieve than the multiple-bit model) and tightly reduces to the Strong Square Diffie-Hellman assumption (SSQRDH). This allows for very efficient, theoretically-sound instantiations and tight compositions with symmetric primitives.
Last updated:  2025-02-24
Polynomial Time Cryptanalytic Extraction of Deep Neural Networks in the Hard-Label Setting
Nicholas Carlini, Jorge Chávez-Saab, Anna Hambitzer, Francisco Rodríguez-Henríquez, and Adi Shamir
Deep neural networks (DNNs) are valuable assets, yet their public accessibility raises security concerns about parameter extraction by malicious actors. Recent work by Carlini et al. (Crypto’20) and Canales-Martínez et al. (Eurocrypt’24) has drawn parallels between this issue and block cipher key extraction via chosen plaintext attacks. Leveraging differential cryptanalysis, they demonstrated that all the weights and biases of black-box ReLU-based DNNs could be inferred using a polynomial number of queries and computational time. However, their attacks relied on the availability of the exact numeric value of output logits, which allowed the calculation of their derivatives. To overcome this limitation, Chen et al. (Asiacrypt’24) tackled the more realistic hard-label scenario, where only the final classification label (e.g., "dog" or "car") is accessible to the attacker. They proposed an extraction method requiring a polynomial number of queries but an exponential execution time. In addition, their approach was applicable only to a restricted set of architectures, could deal only with binary classifiers, and was demonstrated only on tiny neural networks with up to four neurons split among up to two hidden layers. This paper introduces new techniques that, for the first time, achieve cryptanalytic extraction of DNN parameters in the most challenging hard-label setting, using both a polynomial number of queries and polynomial time. We validate our approach by extracting nearly one million parameters from a DNN trained on the CIFAR-10 dataset, comprising 832 neurons in four hidden layers. Our results reveal the surprising fact that all the weights of a ReLU-based DNN can be efficiently determined by analyzing only the geometric shape of its decision boundaries.
Last updated:  2024-10-07
Re-visiting Authorized Private Set Intersection: A New Privacy-Preserving Variant and Two Protocols
Francesca Falzon and Evangelia Anna Markatou
We revisit the problem of Authorized Private Set Intersection (APSI), which allows mutually untrusting parties to authorize their items using a trusted third-party judge before privately computing the intersection. We also initiate the study of Partial-APSI, a novel privacy-preserving generalization of APSI in which the client only reveals a subset of their items to a third-party semi-honest judge for authorization. Partial-APSI allows for partial verification of the set, preserving the privacy of the party whose items are being verified. Both APSI and Partial-APSI have a number of applications, including genome matching, ad conversion, and compliance with privacy policies such as the GDPR. We present two protocols based on bilinear pairings with linear communication. The first realizes the APSI functionality, is secure against a malicious client, and requires only one round of communication during the online phase. Our second protocol realizes the Partial-APSI functionality and is secure against a client that may maliciously inject elements into its input set, but who follows the protocol semi-honestly otherwise. We formally prove correctness and security of these protocols and provide an experimental evaluation to demonstrate their practicality. Our protocols can be efficiently run on commodity hardware. We also show that our protocols are massively parallelizable by running our experiments on a compute grid across 50 cores.
Last updated:  2024-10-07
Quantum Group Actions
Tomoyuki Morimae and Keita Xagawa
In quantum cryptography, there could be a new world, Microcrypt, where cryptography is possible but one-way functions (OWFs) do not exist. Although many fundamental primitives and useful applications have been found in Microcrypt, they lack ``OWFs-free'' concrete hardness assumptions on which they are based. In classical cryptography, many hardness assumptions on concrete mathematical problems have been introduced, such as the discrete logarithm (DL) problems or the decisional Diffie-Hellman (DDH) problems on concrete group structures related to finite fields or elliptic curves. They are then abstracted to generic hardness assumptions such as the DL and DDH assumptions over group actions. Finally, based on these generic assumptions, primitives and applications are constructed. The goal of the present paper is to introduce several abstracted generic hardness assumptions in Microcrypt, which could connect the concrete mathematical hardness assumptions with applications. Our assumptions are based on a quantum analogue of group actions. A group action is a tuple $(G,S,\star)$ of a group $G$, a set $S$, and an operation $\star:G\times S\to S$. We introduce a quantum analogue of group actions, which we call quantum group actions (QGAs), where $G$ is a set of unitary operators, $S$ is a set of states, and $\star$ is the application of a unitary on a state. By endowing QGAs with some reasonable hardness assumptions, we introduce a natural quantum analogue of the decisional Diffie-Hellman (DDH) assumption and pseudorandom group actions. Based on these assumptions, we construct classical-query pseudorandom function-like state generators (PRFSGs). PRFSGs are a quantum analogue of pseudorandom functions (PRFs), and have many applications such as IND-CPA SKE, EUF-CMA MAC, and private-key quantum money schemes. Because classical group actions are instantiated with many concrete mathematical hardness assumptions, our QGAs could also have some concrete (even OWFs-free) instantiations.
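To make the abstract group-action syntax above concrete, here is a minimal classical sketch written for this listing (not code from the paper; parameters and names are illustrative): $G = (\mathbb{Z}_q, +)$ acting on the order-$q$ subgroup of $\mathbb{Z}_p^*$, together with a DDH-style tuple over the action.

```python
# Toy classical instance of a group action (G, S, *): G = (Z_q, +) acts on the
# order-q subgroup S = <g> of Z_p^* via a * s = g^a * s mod p.  Parameters are
# illustrative only; nothing here is quantum or taken from the paper.
p, q, g = 2039, 1019, 4        # p = 2q + 1, g generates the subgroup of order q

def act(a, s):
    return pow(g, a, p) * s % p

s0 = pow(g, 7, p)              # an arbitrary set element

# Group-action axioms: identity and compatibility.
assert act(0, s0) == s0
a, b = 123, 456
assert act(a + b, s0) == act(a, act(b, s0))

# A DDH-style tuple over the action: (s0, a*s0, b*s0, (a+b)*s0); the decisional
# problem asks to distinguish the last entry from the action of a random element.
ddh_tuple = (s0, act(a, s0), act(b, s0), act(a + b, s0))
print(ddh_tuple)
```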
Last updated:  2025-03-01
Solving Multivariate Coppersmith Problems with Known Moduli
Keegan Ryan
We examine the problem of finding small solutions to systems of modular multivariate polynomials. While the case of univariate polynomials has been well understood since Coppersmith's original 1996 work, multivariate systems typically rely on carefully crafted shift polynomials and significant manual analysis of the resulting Coppersmith lattice. In this work, we develop several algorithms that make such hand-crafted strategies obsolete. We first use the theory of Gröbner bases to develop an algorithm that provably computes an optimal set of shift polynomials, and we use lattice theory to construct a lattice which provably contains all desired short vectors. While this strategy is usable in practice, the resulting lattice often has large rank. Next, we propose a heuristic strategy based on graph optimization algorithms that quickly identifies low-rank alternatives. Third, we develop a strategy which symbolically precomputes shift polynomials, and we use the theory of polytopes to polynomially bound the running time. Like Meers and Nowakowski's automated method, our precomputation strategy enables heuristically and automatically determining asymptotic bounds. We evaluate our new strategies on over a dozen previously studied Coppersmith problems. In all cases, our unified approach achieves the same recovery bounds in practice as prior work, even improving the practical bounds for three of the problems. In five problems, we find smaller and more efficient lattice constructions, and in three problems, we improve the existing asymptotic bounds. While our strategies are still heuristic, they are simple to describe, implement, and execute, and we hope that they drastically simplify the application of Coppersmith's method to systems of multivariate polynomials.
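For readers unfamiliar with the hand-crafted constructions the paper automates, the following hedged sketch builds the classical univariate Coppersmith/Howgrave-Graham coefficient matrix; the polynomial, modulus and parameters are toy values chosen for illustration, and LLL reduction on the rows (not shown) would produce a short combination that, if short enough, keeps the small modular root as a root over the integers.

```python
# Each row is the coefficient vector of a shift polynomial
#   g_{i,j}(xX) = (xX)^j * N^(m-i) * f(xX)^i,
# all of which vanish modulo N^m at any root of f modulo N.
from sympy import symbols, expand, Poly

x = symbols('x')

def coppersmith_matrix(f_expr, N, X, m):
    d = Poly(f_expr, x).degree()
    dim = d * (m + 1)                       # square, lower-triangular matrix
    rows = []
    for i in range(m + 1):
        for j in range(d):
            g = (x * X) ** j * N ** (m - i) * f_expr.subs(x, x * X) ** i
            coeffs = Poly(expand(g), x).all_coeffs()[::-1]   # constant term first
            rows.append([int(c) for c in coeffs] + [0] * (dim - len(coeffs)))
    return rows

N = 10007 * 10009                # toy modulus
f = x**2 + 3*x + 7               # monic polynomial whose small root mod N we would target
for row in coppersmith_matrix(f, N, X=2**10, m=2):
    print(row)
```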
Last updated:  2024-10-06
Verifiable Value Added Tax
Victor Sint Nicolaas and Sascha Jafari
Value Added Tax (VAT) is a cornerstone of government revenue systems worldwide, yet its self-reported nature has historically been vulnerable to fraud. While transaction-level reporting requirements may tackle fraud, they raise concerns regarding data security and overreliance on tax authorities as fully trusted intermediaries. To address these issues, we propose Verifiable VAT, a protocol that enables confidential and verifiable VAT reporting. Our system allows companies to confidentially report VAT as a homomorphic commitment in a centrally managed permissioned ledger, using zero-knowledge proofs to provide integrity guarantees. We demonstrate that the scheme strictly limits the amount of fraud possible due to misreporting. Additionally, we introduce a scheme so that companies can (dis)prove exchange of VAT with fraudulent companies. The proposed protocol is flexible with regard to real-world jurisdictions' requirements, and underscores the potential of cryptographic methods to enhance the integrity and confidentiality of tax systems.
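As a hedged illustration of the homomorphic-commitment idea (a generic Pedersen-style sketch, not the paper's construction; the toy group parameters and generators are assumptions made for this listing), per-invoice amounts can be committed individually while anyone can aggregate the commitments and later check an opened total.

```python
import secrets

p, q = 2039, 1019              # toy parameters: p = 2q + 1, commitments in the order-q subgroup
g, h = 4, 9                    # two subgroup generators (toy; a real setup derives h with unknown dlog)

def commit(amount, r=None):
    r = secrets.randbelow(q) if r is None else r
    return pow(g, amount, p) * pow(h, r, p) % p, r

def check_opening(com, amount, r):
    return com == pow(g, amount, p) * pow(h, r, p) % p

# Per-invoice VAT amounts reported as individual commitments.
amounts = [21, 9, 40]
coms, rands = zip(*(commit(a) for a in amounts))

# Additive homomorphism: the product of commitments opens to the sum of the amounts.
total_com = 1
for c in coms:
    total_com = total_com * c % p
assert check_opening(total_com, sum(amounts), sum(rands) % q)
```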
Last updated:  2024-10-24
Efficiently-Thresholdizable Batched Identity Based Encryption, with Applications
Amit Agarwal, Rex Fernando, and Benny Pinkas
We propose a new cryptographic primitive called "batched identity-based encryption" (Batched IBE) and its thresholdized version. The new primitive allows encrypting messages with specific identities and batch labels, where the latter can represent, for example, a block number on a blockchain. Given an arbitrary subset of identities for a particular batch, our primitive enables efficient issuance of a single decryption key that can be used to decrypt all ciphertexts having identities that are included in the subset while preserving the privacy of all ciphertexts having identities that are excluded from the subset. At the heart of our construction is a new technique that enables public aggregation (i.e. without knowledge of any secrets) of any subset of identities, into a succinct digest. This digest is used to derive, via a master secret key, a single succinct decryption key for all the identities that were digested in this batch. In a threshold system, where the master key is distributed as secret shares among multiple authorities, our method significantly reduces the communication (and in some cases, computation) overhead for the authorities. It achieves this by making their costs for key issuance independent of the batch size. We present a concrete instantiation of a Batched IBE scheme based on the KZG polynomial commitment scheme by Kate et al. (Asiacrypt'10) and a modified form of the BLS signature scheme by Boneh et al. (Asiacrypt'01). The construction is proven secure in the generic group model (GGM). In a blockchain setting, the new construction can be used for achieving mempool privacy by encrypting transactions to a block, opening only the transactions included in a given block and hiding the transactions that are not included in it. With the thresholdized version, multiple authorities (validators) can collaboratively manage the decryption process. Other possible applications include scalable support via blockchain for fairness of dishonest majority MPC, and conditional batched threshold decryption that can be used for implementing secure Dutch auctions and privacy preserving options trading.
Last updated:  2025-02-17
Scalable Two-Round $n$-out-of-$n$ and Multi-Signatures from Lattices in the Quantum Random Oracle Model
Qiqi Lai, Feng-Hao Liu, Yang Lu, Haiyang Xue, and Yong Yu
In this paper, we construct the first asymptotically efficient two-round $n$-out-of-$n$ and multi-signature schemes from lattices in the quantum random oracle model (QROM), using the Fiat-Shamir with Aborts (FSwA) paradigm. Our protocols can be viewed as the QROM~variants of the two-round protocols by Damgård et al. (JoC 2022). A notable feature of our protocol, compared to other counterparts in the classical random oracle model, is that each party performs an independent abort and still outputs a signature in exactly two rounds, making our schemes significantly more scalable. From a technical perspective, the simulation of the QROM~and the efficient reduction from breaking the underlying assumption to forging signatures are the essential challenges to achieving efficient QROM security in previous related works. To overcome the former, we adopt a quantum-accessible pseudorandom function (QPRF) to simulate the QROM. In particular, we show that there exists a QPRF~which can be programmed and inverted, even against a quantum adversary. For the latter challenge, we tweak and apply the online extractability by Unruh (Eurocrypt 2015).
Last updated:  2024-10-05
OML: Open, Monetizable, and Loyal AI
Zerui Cheng, Edoardo Contente, Ben Finch, Oleg Golev, Jonathan Hayase, Andrew Miller, Niusha Moshrefi, Anshul Nasery, Sandeep Nailwal, Sewoong Oh, Himanshu Tyagi, and Pramod Viswanath
Artificial Intelligence (AI) has steadily improved across a wide range of tasks, and a significant breakthrough towards general intelligence was achieved with the rise of generative deep models, which have garnered worldwide attention. However, the development and deployment of AI are almost entirely controlled by a few powerful organizations and individuals who are racing to create Artificial General Intelligence (AGI). These centralized entities make decisions with little public oversight, shaping the future of humanity, often with unforeseen consequences. In this paper, we propose OML, which stands for Open, Monetizable, and Loyal AI, an approach designed to democratize AI development and shift control away from these monopolistic actors. OML is realized through an interdisciplinary framework spanning AI, blockchain, and cryptography. We present several ideas for constructing OML systems using technologies such as Trusted Execution Environments (TEE), traditional cryptographic primitives like fully homomorphic encryption and functional encryption, obfuscation, and AI-native solutions rooted in the sample complexity and intrinsic hardness of AI tasks. A key innovation of our work is the introduction of a new scientific field: AI-native cryptography, which leverages cryptographic primitives tailored to AI applications. Unlike conventional cryptography, which focuses on discrete data and binary security guarantees, AI-native cryptography exploits the continuous nature of AI data representations and their low-dimensional manifolds, focusing on improving approximate performance. One core idea is to transform AI attack methods, such as data poisoning, into security tools. This novel approach serves as a foundation for OML 1.0, an implemented system that demonstrates the practical viability of AI-native cryptographic techniques. At the heart of OML 1.0 is the concept of model fingerprinting, a novel AI-native cryptographic primitive that helps protect the integrity and ownership of AI models. The spirit of OML is to establish a decentralized, open, and transparent platform for AI development, enabling the community to contribute, monetize, and take ownership of AI models. By decentralizing control and ensuring transparency through blockchain technology, OML prevents the concentration of power and provides accountability in AI development that has not been possible before. To the best of our knowledge, this paper is the first to: • Identify the monopolization and lack of transparency challenges in AI deployment today and formulate the challenge as OML (Open, Monetizable, Loyal). • Provide an interdisciplinary approach to solving the OML challenge, incorporating ideas from AI, blockchain, and cryptography. • Introduce and formally define the new scientific field of AI-native cryptography. • Develop novel AI-native cryptographic primitives and implement them in OML 1.0, analyzing their security and effectiveness. • Leverage blockchain technology to host OML solutions, ensuring transparency, decentralization, and alignment with the goals of democratized AI development. Through OML, we aim to provide a decentralized framework for AI development that prioritizes open collaboration, ownership rights, and transparency, ultimately fostering a more inclusive AI ecosystem.
Last updated:  2025-02-13
Bounded Collusion-Resistant Registered Functional Encryption for Circuits
Yijian Zhang, Jie Chen, Debiao He, and Yuqing Zhang
As an emerging primitive, Registered Functional Encryption (RFE) eliminates the key-escrow issue that threatens numerous works for functional encryption, by replacing the trusted authority with a transparent key curator and allowing each user to sample their decryption keys locally. In this work, we present a new black-box approach to construct RFE for all polynomial-sized circuits. It considers adaptive simulation-based security in the bounded collusion model (Gorbunov et al. - CRYPTO'12), where security can be ensured only if there are no more than $Q \ge 1$ corrupted users and $Q$ is fixed at the setup phase. Unlike earlier works, we do not employ impractical Indistinguishability Obfuscation (iO). Moreover, our approach can be extended to support unbounded users, which was previously only known from iO. Technically, our general compiler exploits garbled circuits and a novel variant of slotted Registered Broadcast Encryption (RBE), namely global slotted RBE. This primitive is similar to slotted RBE, but needs optimally compact public parameters and ciphertexts, so as to satisfy the efficiency requirement of the resulting RFE. We then present two concrete global slotted RBE schemes from pairings and lattices, respectively. With the proposed compiler, we hence obtain two bounded collusion-resistant RFE schemes. Here, the first scheme relies on the $k$-Lin assumption, while the second one supports unbounded users under the LWE and evasive LWE assumptions.
Last updated:  2025-02-20
Basefold in the List Decoding Regime
Ulrich Haböck
In this writeup we discuss the soundness of the Basefold multilinear polynomial commitment scheme [Zeilberger, Chen, Fisch 23] applied to Reed-Solomon codes, and run with proximity parameters up to the Johnson list decoding bound. Our security analysis relies on a generalization of the celebrated correlated agreement theorem from [Ben-Sasson, et al., 20] to linear subcodes of Reed-Solomon codes, which turns out to be a by-product of the Guruswami-Sudan list decoder analysis. We further highlight a non-linear variant of the subcode correlated agreement theorem, which is flexible enough to apply to Basefold-like protocols such as recent optimizations of FRI-Binius [Diamond, Posen 24], and which we believe is sufficient for proving the security of a recent multilinear version of STIR [Arnon, Chiesa, Fenzi, Yogev 24] in the list-decoding regime.
Last updated:  2024-10-05
Can KANs Do It? Toward Interpretable Deep Learning-based Side-channel Analysis
Kota Yoshida, Sengim Karayalcin, and Stjepan Picek
Recently, deep learning-based side-channel analysis (DLSCA) has emerged as a serious threat against cryptographic implementations. These methods can efficiently break implementations protected with various countermeasures while needing limited manual intervention. To effectively protect implementations, it is therefore crucial to be able to interpret \textbf{how} these models are defeating countermeasures. Several works have attempted to gain a better understanding of the mechanics of these models. However, a fine-grained description remains elusive. To help tackle this challenge, we propose using Kolmogorov-Arnold Networks (KANs). These neural networks were recently introduced and showed competitive performance to multilayer perceptrons (MLPs) on small-scale tasks while being easier to interpret. In this work, we show that KANs are well suited to SCA, performing similarly to MLPs across both simulated and real-world traces. Furthermore, we find specific strategies that the trained models learn for combining mask shares and are able to measure which points in the trace are relevant.
Last updated:  2025-02-10
The Supersingular Isogeny Path and Endomorphism Ring Problems: Unconditional Reductions
Maher Mamah
In this paper we study several computational problems related to current post-quantum cryptosystems based on isogenies between supersingular elliptic curves. In particular, we prove that the supersingular isogeny path and endomorphism ring problems are unconditionally equivalent under polynomial-time reductions. We show that access to a factoring oracle is sufficient to solve the Quaternion path problem of KLPT and prove that these problems are equivalent, whereas previous results either assumed heuristics or the generalised Riemann Hypothesis (GRH). Consequently, given Shor's quantum algorithm for factorisation, our results yield unconditional quantum polynomial-time reductions between the isogeny path and EndRing problems. Recently, these reductions have become foundational for the security of isogeny-based cryptography.
Last updated:  2024-11-01
Oracle Separation Between Quantum Commitments and Quantum One-wayness
John Bostanci, Boyang Chen, and Barak Nehoran
We show that there exists a unitary quantum oracle relative to which quantum commitments exist but no (efficiently verifiable) one-way state generators exist. Both have been widely considered candidates for replacing one-way functions as the minimal assumption for cryptography—the weakest cryptographic assumption implied by all of computational cryptography. Recent work has shown that commitments can be constructed from one-way state generators, but the other direction has remained open. Our results rule out any black-box construction, and thus settle this crucial open problem, suggesting that quantum commitments (as well as their equivalence class of EFI pairs, quantum oblivious transfer, and secure quantum multiparty computation) appear to be strictly the weakest among all known cryptographic primitives.
Last updated:  2025-02-05
A New World in the Depths of Microcrypt: Separating OWSGs and Quantum Money from QEFID
Amit Behera, Giulio Malavolta, Tomoyuki Morimae, Tamer Mour, and Takashi Yamakawa
While in classical cryptography, one-way functions (OWFs) are widely regarded as the “minimal assumption,” the situation in quantum cryptography is less clear. Recent works have put forward two concurrent candidates for the minimal assumption in quantum cryptography: One-way state generators (OWSGs), postulating the existence of a hard search problem with an efficient verification algorithm, and EFI pairs, postulating the existence of a hard distinguishing problem. Two recent papers [Khurana and Tomer STOC’24; Batra and Jain FOCS’24] showed that OWSGs imply EFI pairs, but the reverse direction remained open. In this work, we give strong evidence that the opposite direction does not hold: We show that there is a quantum unitary oracle relative to which EFI pairs exist, but OWSGs do not. In fact, we show a slightly stronger statement that holds also for EFI pairs that output classical bits (QEFID). As a consequence, we separate, via our oracle, QEFID and one-way puzzles from OWSGs and several other Microcrypt primitives, including efficiently verifiable one-way puzzles and unclonable state generators. In particular, this solves a problem left open in [Chung, Goldin, and Gray Crypto’24]. Using similar techniques, we also establish a fully black-box separation (which is slightly weaker than an oracle separation) between private-key quantum money schemes and QEFID pairs. One conceptual implication of our work is that the existence of an efficient verification algorithm may lead to qualitatively stronger primitives in quantum cryptography.
Last updated:  2025-02-14
Dynamic zk-SNARKs
Weijie Wang, Charalampos Papamanthou, Shravan Srinivasan, and Dimitrios Papadopoulos
In this work, we put forth the notion of dynamic zk-SNARKs. A dynamic zk-SNARK is a zk-SNARK that has an additional update algorithm. The update algorithm takes as input a valid source statement-witness pair $(x,w)\in R$ along with a verifying proof $\pi$, and a valid target statement-witness pair $(x',w')\in R$. It outputs a verifying proof $\pi'$ for $(x',w')$ in sublinear time (for $(x,w)$ and $(x',w')$ with small Hamming distance) potentially with the help of a data structure. To the best of our knowledge, none of the commonly-used zk-SNARKs are dynamic---a single update in $(x,w)$ can be handled only by recomputing the proof, which requires at least linear time. After formally defining dynamic zk-SNARKs, we present two constructions. The first one, Dynarec, is based on recursive zk-SNARKs, has $O(\log n)$ update time and is folklore, in the sense that it shares similarities (and limitations, such as a small number of compositions and heuristic security) with existing tree-based Incremental Verifiable Computation (IVC) schemes. Our second and central contribution is Dynaverse, a dynamic zk-SNARK based on a new dynamic permutation argument that we propose and whose security rests solely on KZG commitments. Dynaverse has $O(\sqrt{n}\log n)$ update time and proofs of $O(\log N)$ size. As a central application of dynamic zk-SNARKs, we build a compiler from any dynamic zk-SNARK to a non-trivial (i.e., sublinear) scheme for recursion-free IVC, allowing us for the first time to base non-trivial IVC security solely on KZG commitments, therefore removing any bound on the number of allowed iterations as well as any reliance on heuristic security. We also detail additional applications of dynamic zk-SNARKs such as dynamic state proofs and keyless authentication. Our preliminary evaluation shows that Dynaverse outperforms baseline PLONK proof recomputation by up to approximately 500x as well as the heuristically-secure and asymptotically-superior Dynarec by up to one order of magnitude.
Last updated:  2024-10-04
Fiat-Shamir in the Wild
Hieu Nguyen, Uyen Ho, and Alex Biryukov
The Fiat-Shamir transformation is a key technique for removing interactivity from cryptographic proof systems in real-world applications. In this work, we discuss five types of Fiat-Shamir-related protocol design errors and illustrate them with concrete examples mainly taken from real-life applications. We discuss countermeasures for such vulnerabilities.
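A classic example of a Fiat-Shamir design error, offered here as general background and not necessarily one of the five types analyzed in the paper, is the "weak" transform, in which the challenge hash omits the statement being proven. The sketch below (toy Schnorr parameters, helper names chosen for this listing) shows how a weak verifier accepts a proof for a statement the adversary derives only after seeing the challenge, while the strong variant rejects it.

```python
import hashlib, secrets

p, q, g = 2039, 1019, 4                       # toy group: p = 2q + 1, g of prime order q

def H(*vals):
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q or 1

def prove(x, y, strong=True):
    r = secrets.randbelow(q)
    t = pow(g, r, p)                          # commitment
    c = H(g, y, t) if strong else H(t)        # weak FS: the statement y is not hashed
    s = (r + c * x) % q
    return t, s

def verify(y, t, s, strong=True):
    c = H(g, y, t) if strong else H(t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

# Honest prover and verifier agree (strong transform).
x = secrets.randbelow(q); y = pow(g, x, p)
assert verify(y, *prove(x, y))

# Adaptive-statement forgery against the weak transform: fix t and s first,
# then solve for a statement y_forged that the weak verifier accepts.
t = pow(g, secrets.randbelow(q), p)
c = H(t)
s = secrets.randbelow(q)
y_forged = pow(pow(g, s, p) * pow(t, -1, p) % p, pow(c, -1, q), p)
assert verify(y_forged, t, s, strong=False)     # weak verifier accepts
print(verify(y_forged, t, s, strong=True))      # strong verifier rejects (w.h.p. for these toy parameters)
```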
Last updated:  2025-02-18
A Simple Framework for Secure Key Leasing
Fuyuki Kitagawa, Tomoyuki Morimae, and Takashi Yamakawa
Secure key leasing (a.k.a. key-revocable cryptography) enables us to lease a cryptographic key as a quantum state in such a way that the key can be later revoked in a verifiable manner. We propose a simple framework for constructing cryptographic primitives with secure key leasing via the certified deletion property of BB84 states. Based on our framework, we obtain the following schemes. - A public key encryption scheme with secure key leasing that has classical revocation based on any IND-CPA secure public key encryption scheme. Prior works rely on either quantum revocation or stronger assumptions such as the quantum hardness of the learning with errors (LWE) problem. - A pseudorandom function with secure key leasing that has classical revocation based on one-way functions. Prior works rely on stronger assumptions such as the quantum hardness of the LWE problem. - A digital signature scheme with secure key leasing that has classical revocation based on the quantum hardness of the short integer solution (SIS) problem. Our construction has static signing keys, i.e., the state of a signing key almost does not change before and after signing. Prior constructions either rely on non-static signing keys or indistinguishability obfuscation to achieve a stronger goal of copy-protection. In addition, all of our schemes remain secure even if a verification key for revocation is leaked after the adversary submits a valid certificate of deletion. To our knowledge, all prior constructions are totally broken in this setting. Moreover, in our view, our security proofs are much simpler than those for existing schemes.
Last updated:  2025-04-11
Optimized One-Dimensional SQIsign Verification on Intel and Cortex-M4
Marius A. Aardal, Gora Adj, Arwa Alblooshi, Diego F. Aranha, Isaac A. Canales-Martínez, Jorge Chavez-Saab, Décio Luiz Gazzoni Filho, Krijn Reijnders, and Francisco Rodríguez-Henríquez
SQIsign is a well-known post-quantum signature scheme due to its small combined signature and public-key size. However, SQIsign suffers from notably long signing times, and verification times are not short either. To improve this, recent research has explored both one-dimensional and two-dimensional variants of SQIsign, each with distinct characteristics. In particular, SQIsign2D's efficient signing and verification times have made it a focal point of recent research. However, the absence of an optimized one-dimensional verification implementation hampers a thorough comparison between these different variants. This work bridges this gap in the literature: we provide a state-of-the-art implementation of one-dimensional SQIsign verification, including novel optimizations. We report a record-breaking one-dimensional SQIsign verification time of 8.55 Mcycles on a Raptor Lake Intel processor, closely matching SQIsign2D on the same processor. For uncompressed signatures, the signature size doubles and we verify in only 5.6 Mcycles. Taking advantage of the inherent parallelism available in isogeny computations, we present 5-core variants that can go as low as 1.3 Mcycles. Furthermore, we present the first implementation that supports both 32-bit and 64-bit processors. It includes optimized assembly code for the Cortex-M4 and has been integrated with the pqm4 project. Our results motivate further research into one-dimensional SQIsign, as it boasts unique features among isogeny-based schemes.
Last updated:  2024-10-04
Fully Privacy-preserving Billing Models for Peer-to-Peer Electricity Trading Markets
Akash Madhusudan, Mustafa A. Mustafa, Hilder V.L. Pereira, and Erik Takke
Peer-to-peer energy trading markets enable users to exchange electricity, directly offering them increased financial benefits. However, discrepancies often arise between the electricity volumes committed to in trading auctions and the volumes actually consumed or injected. Solutions designed to address this issue often require access to sensitive information that should be kept private. This paper presents a novel, fully privacy-preserving billing protocol designed to protect users' sensitive consumption and production data in the context of billing protocols for energy trading. Leveraging advanced cryptographic techniques, including fully homomorphic encryption (FHE) and pseudorandom zero sharing (PRZS), our protocol ensures robust security and confidentiality while addressing the critical issue of managing discrepancies between promised and actual electricity volumes. The proposed protocol guarantees that users' sensitive information remains inaccessible to external parties, including the trading platform and billing server. By utilizing FHE, the protocol allows computations on encrypted data without compromising privacy, while PRZS ensures secure aggregation of individual discrepancies of each household. This combination of cryptographic primitives maintains data privacy and enhances billing accuracy, even when fluctuations in energy supply and demand occur. We analyze real-time consumption and production data from 100 households to experimentally validate the effectiveness and efficiency of our billing model. By implementing a flexible framework compatible with any billing method, we demonstrate that our protocol can accurately compute individual bills for 100 households in approximately 0.17 seconds.
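As background on one of the named building blocks, the sketch below illustrates pseudorandom zero sharing in its textbook pairwise-key form (an illustration written for this listing, not necessarily the paper's instantiation; the PRF choice and the modulus are assumptions): each household derives a local masking value, and the values across all households sum to zero without interaction.

```python
import hmac, hashlib, secrets

N_PARTIES, MOD = 5, 2**64

# One shared PRF key per unordered pair of parties (set up once, e.g. via key agreement).
pair_keys = {(i, j): secrets.token_bytes(32)
             for i in range(N_PARTIES) for j in range(i + 1, N_PARTIES)}

def prf(key, counter):
    mac = hmac.new(key, str(counter).encode(), hashlib.sha256).digest()
    return int.from_bytes(mac[:8], "big") % MOD

def zero_share(i, counter):
    r = 0
    for j in range(N_PARTIES):
        if j == i:
            continue
        k = pair_keys[(min(i, j), max(i, j))]
        v = prf(k, counter)
        r = (r + v) % MOD if j > i else (r - v) % MOD   # opposite signs cancel pairwise
    return r

# Non-interactive: every party computes its own share, and the shares sum to zero.
assert sum(zero_share(i, counter=0) for i in range(N_PARTIES)) % MOD == 0
```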
Last updated:  2024-10-07
FLUENT: A Tool for Efficient Mixed-Protocol Semi-Private Function Evaluation
Daniel Günther, Joachim Schmidt, Thomas Schneider, and Hossein Yalame
In modern business-to-customer interactions, handling private or confidential data is essential. Private Function Evaluation (PFE) protocols ensure the privacy of both the customers' input data and the business' function evaluated on it, which is often sensitive intellectual property (IP). However, fully hiding the function in PFE results in high performance overhead. Semi-Private Function Evaluation (SPFE) is a generalization of PFE that only partially hides the function, where specific non-critical components remain public. Our paper introduces a novel framework designed to make SPFE accessible to non-experts and practical for real-world deployments. To achieve this, we improve on previous SPFE solutions in two aspects. First, we enhance the developer experience by leveraging High-Level Synthesis (HLS), making our tool more user-friendly than previous SPFE frameworks. Second, we achieve a \(2 \times\) speedup compared to the previous state-of-the-art through more efficient underlying constructions and the usage of Lookup Tables (LUTs). We evaluate the performance of our framework in terms of communication and runtime efficiency. Our final implementation is available as an open-source project, aiming to bridge the gap between advanced cryptographic protocols and their practical application in industry scenarios.
Last updated:  2024-10-04
Revisiting Shuffle-Based Private Set Unions with Reduced Communication
Jiseung Kim, Hyung Tae Lee, and Yongha Son
A Private Set Union (PSU) allows two parties having sets $X$ and $Y$ to securely compute the union $X \cup Y$ while revealing no additional information. Recently, so-called shuffle-based PSU protocols have been proposed by Garimella et al. (PKC'21) and Jia et al. (USENIX'22). Except for a few base oblivious transfers, these proposals are based entirely on symmetric-key primitives and hence enjoy quite low computation costs. However, they share the drawback of a large communication cost of $O(\ell n\log n)$ with input set size $n$ and $\ell \ge O(\lambda + \log n)$, where $\lambda$ is a statistical security parameter. We propose two optimizations, one for each work, that reduce the communication cost while maintaining the strength in computation cost; the first one optimizes Garimella et al. to have $O(\ell n + n \log n)$, and the second one optimizes Jia et al. by reducing the concrete value of $\ell$ by $\log n$. Concretely, the first (second, resp.) optimization provides $3.3 - 3.9$x ($1.7 - 1.8$x, resp.) lower communication for input set sizes $n = 2^{16} - 2^{20}$. We demonstrate by comprehensive analysis and implementation that our optimizations lead to better PSU protocols, compared to the state-of-the-art proposal of Zhang et al. (USENIX'23) as well as previous shuffle-based PSUs. As a concrete amount of improvement, we see a $1.4-1.5$x speed-up for a $100$Mbps network, and a $1.8-2.2$x speed-up for a $10$Mbps network on input set sizes $n = 2^{16} - 2^{20}$.
Last updated:  2024-10-04
Mind the Composition of Toffoli Gates: Structural Algebraic Distinguishers of ARADI
Emanuele Bellini, Mohamed Rachidi, Raghvendra Rohit, and Sharwan K. Tiwari
This paper reveals a critical flaw in the design of ARADI, a recently proposed low-latency block cipher by NSA researchers -- Patricia Greene, Mark Motley, and Bryan Weeks. The weakness exploits the specific composition of Toffoli gates in the round function of ARADI's nonlinear layer, and it allows the extension of a given algebraic distinguisher to one extra round without any change in the data complexity. More precisely, we show that the cube-sum values, though depending on the secret key bits, are always equal in two of the state words. Such a structural property is difficult to obtain by the direct application of division property and has never been seen before in any state-of-the-art block cipher. We call this structural property \textit{weakly-composed-Toffoli gates}, and introduce a theoretical framework which can describe it in general terms. We present algebraic distinguishers that reach 8 out of 16 rounds of ARADI. Most notably, we show that these distinguishers have better data complexities than the division property-based distinguishers for the same number of rounds. We further investigate whether changing the linear layer or the order of composition of Toffoli gates could avoid this property. We give a negative answer and show that it is impossible to prevent this structural property unless the nonlinear layer is re-designed. As a side result, we provide a key-recovery attack on 10-round ARADI with $2^{124}$ data and $2^{177}$ time for a 256-bit key. Our work highlights the significance of security analysis during the cipher design phase, and shows that these strong structural distinguishers could have been avoided during this phase.
Last updated:  2024-10-03
Understanding Leakage in Searchable Encryption: a Quantitative Approach
Alexandra Boldyreva, Zichen Gui, and Bogdan Warinschi
Searchable encryption, or more generally, structured encryption, permits search over encrypted data. It is an important cryptographic tool for securing cloud storage. The standard security notion for structured encryption mandates that a protocol leaks nothing about the data or queries, except for some allowed leakage, defined by the leakage function. This is due to the fact that some leakage is unavoidable for efficient schemes. Unfortunately, it was shown by numerous works that even innocuous-looking leakage can often be exploited by attackers to undermine users' privacy and recover their queries and/or data, despite the structured encryption schemes being provably secure. Nevertheless, the standard security remains the go-to notion used to show the "security" of structured encryption schemes. While it is not likely that researchers will design practical structured encryption schemes with no leakage, it is not satisfactory that very few works study ways to assess leakage. This work proposes a novel framework to quantify leakage. Our methodology is inspired by the quantitative information flow, and we call our method $q$-leakage analysis. We show how $q$-leakage analysis is related to the standard security. We also demonstrate the usefulness of $q$-leakage analysis by analyzing the security of two existing schemes with complex leakage functions.
Last updated:  2024-10-03
Tightly Secure Threshold Signatures over Pairing-Free Groups
Renas Bacho and Benedikt Wagner
Threshold signatures have been drawing lots of attention in recent years. Of particular interest are threshold signatures that are proven secure under adaptive corruptions (NIST Call 2023). Sadly, existing constructions with provable adaptive security suffer from at least one of the following drawbacks: (i) strong idealizations such as the algebraic group model (AGM), (ii) an unnatural restriction on the corruption threshold being $t/2$ where $t$ is the signing threshold, or (iii) prohibitively large security loss under established assumptions. Notably, point (iii) has received little to no attention in the literature on this subject. In this work, we introduce Twinkle-T, a new threshold signature scheme which overcomes these limitations. Twinkle-T is the first scheme to have a fully tight security proof under up to $t$ adaptive corruptions without relying on the AGM. It also has a signing protocol consisting of only three rounds and thus matches the currently best threshold signature with full adaptive security Twinkle (Eurocrypt 2024) in the pairing-free discrete logarithm setting. We prove security from a standard non-interactive assumption, namely, the Decisional Diffie-Hellman (DDH) assumption.
Last updated:  2024-10-18
The module action for isogeny based cryptography
Damien Robert
We extend the usual ideal action on oriented elliptic curves to a (Hermitian) module action on oriented (polarised) abelian varieties. Oriented abelian varieties are naturally enriched in $R$-modules, and our module action comes from the canonical power object construction on categories enriched in a closed symmetric monoidal category. In particular our action is canonical and gives a fully fledged symmetric monoidal action. Furthermore, we give algorithms to compute this action in practice, generalising the usual algorithms in rank~$1$. The action allows us to unify in the same framework, on the one hand isogeny based cryptography based on ordinary or oriented elliptic curves, and on the other hand the one based on supersingular elliptic curves defined over $\mathbb{F}_{p^2}$. In particular, from our point of view, supersingular elliptic curves over $\mathbb{F}_p$ are given by a rank~$1$ module action, while (the Weil restriction) of those defined over $\mathbb{F}_{p^2}$ are given by a rank~$2$ module action. As a consequence, rank~$2$ module action inversion is at least as hard as the supersingular isogeny path problem. We thus propose to use Hermitian modules as an avatar of a cryptographic symmetric monoidal action framework. This generalizes the more standard cryptographic group action framework, and still allows for a NIKE (Non Interactive Key Exchange). The main advantage of our action is that, presumably, Kuperberg's algorithm does not apply. Compared to CSIDH, this allows for more compact keys and much better scaling properties. In practice, we propose the key exchange scheme $\otimes$-MIKE (Tensor Module Isogeny Key Exchange). Alice and Bob start from a supersingular elliptic curve $E_0/\mathbb{F}_p$ and both compute a $2^n$-isogeny over $\mathbb{F}_{p^2}$. They each send the $j$-invariant of their curve. Crucially, unlike SIDH, no torsion information at all is required. Their common secret, given by the module action, is then a dimension~$4$ principally polarised abelian variety. We obtain a very compact post-quantum NIKE: only 64B for NIST level~$1$ security.
Last updated:  2024-10-03
Private Laconic Oblivious Transfer with Preprocessing
Rishabh Bhadauria, Nico Döttling, Carmit Hazay, and Chuanwei Lin
Laconic cryptography studies two-message protocols that securely compute on large amounts of data with minimal communication cost. Laconic oblivious transfer (OT) is a central primitive where the receiver's input is a large database $\mathsf{DB}$ and the sender's inputs are two messages $m_0$, $m_1$ along with an index $i$, such that the receiver learns the message determined by the choice bit $\mathsf{DB}_i$. OT becomes even more useful for secure computation when considering its laconic variants, which offer succinctness and round optimality. However, existing constructions are not practically efficient because they rely on heavy cryptographic machinery and non-black-box techniques. In this work, we initiate the study of laconic OT correlations, where the model allows an offline phase to generate the correlations later used in a lightweight online phase. Our correlation is conceptually simple, captured by an inner product computation, and enables us to achieve a private laconic OT protocol where the sender's index $i$ is also hidden from the receiver. Our construction is the first private laconic OT with database-dependent preprocessing based solely on symmetric-key assumptions, achieving sublinear online computational complexity for the receiver. Furthermore, we enhance our construction with updatability and receiver privacy. Finally, we demonstrate the applications of private laconic OT to laconic function evaluation for RAM programs and laconic private set intersection with preprocessing.
Last updated:  2024-10-12
Breaking, Repairing and Enhancing XCBv2 into the Tweakable Enciphering Mode GEM
Amit Singh Bhati, Michiel Verbauwhede, and Elena Andreeva
Tweakable enciphering modes (TEMs) provide security in a variety of storage and space-critical applications like disk and file-based encryption, and packet-based communication protocols, among others. XCB-AES (known as XCBv2) is specified in the IEEE 1619.2 standard for encryption of sector-oriented storage media and it comes with a proof of security for block-aligned input messages. In this work, we demonstrate the $\textit{first}$ and most efficient plaintext recovery attack on XCBv2. We show that XCBv2 is $\textit{insecure}$ also for full-block messages by recovering the plaintext (all except the final block) using a minimal number of queries, namely $\textit{only}$ two. We demonstrate that our attack further applies to the HCI and MXCB TEMs, which follow a similar design approach to XCBv2. Following the responsible disclosure process, we communicated the attack details to IEEE and the authors of XCB-AES. The authors have confirmed the validity of our attack on 02/09/2024. Our next contribution is to strengthen the provable security of XCB-AES (claimed $n/3$ bits in queried blocks). We propose a new modular TEM called GEM which can be seen as a generalization of the Hash-CTR-Hash approach as used in XCB-style and HCTR-style TEMs. We are able to prove that GEM achieves full $n$-bit security using $\textit{only}$ an $n$-bit PRP/PRF. We also give two concrete GEM instantiations: $\mathsf{KohiNoor}$ and $\mathsf{DaryaiNoor}$, both of which are based on AES-128 and GHASH-256, and internally use variants of the CTR-based weak pseudorandom functions GCTR-3 and SoCTR, respectively. SoCTR uses AES-128 and GCTR-3 is based on $\mathsf{ButterKnife}$-256. Our security proofs show that both $\mathsf{KohiNoor}$ and $\mathsf{DaryaiNoor}$ provide full $n$-bit security. From an applications perspective, $\mathsf{DaryaiNoor}$ addresses the need for reusing classical components, while $\mathsf{KohiNoor}$ enhances performance by leveraging a more modern primitive based on the AES/Deoxys round function. Our implementations demonstrate competitive performance: for a typical 4KiB sector size, $\mathsf{KohiNoor}$'s performance is on par with AES$_6$-CTET+, while achieving higher standard security guarantees. $\mathsf{DaryaiNoor}$ is on par with AES-CTET+ performance-wise while also maintaining higher security with standard components. Our GEM instances triple the security margin of XCB-AES and double that of HCTR2 at the cost of a performance loss of only $12\%$ ($\mathsf{KohiNoor}$) and $68\%$ ($\mathsf{DaryaiNoor}$) for 4KiB messages.
Last updated:  2024-10-03
STARK-based Signatures from the RPO Permutation
Shahla Atapoor, Cyprien Delpech de Saint Guilhem, and Al Kindi
This work describes a digital signature scheme constructed from a zero-knowledge proof of knowledge of a pre-image of the Rescue Prime Optimized (RPO) permutation. The proof of knowledge is constructed with the DEEP-ALI interactive oracle proof combined with the Ben-Sasson--Chiesa--Spooner (BCS) transformation in the random oracle model. The EUF-CMA security of the resulting signature scheme is established from the UC-friendly security properties of the BCS transformation and the pre-image hardness of the RPO permutation. The implementation of the scheme computes signatures in 13 ms and verifies them in 1 ms on a single core when the BCS transform is implemented with the Blake3 hash function. (The multi-threaded implementation signs in 9.2 ms and also verifies in 1 ms.) These speeds are obtained with parameters achieving 122 bits of average-case security for \( 2^{122} \)-bounded adversaries with access to at most \( 2^{64} \) signatures.
Last updated:  2025-02-24
Revisiting Keyed-Verification Anonymous Credentials
Michele Orrù
Keyed-verification anonymous credentials (KVACs) have demonstrated their practicality through large-scale deployments in privacy-critical systems like Signal and Tor. Despite their widespread adoption, the theoretical framework underlying KVACs lacks the flexibility needed to support diverse applications, which in general require different security properties. For instance, rate-limiting credentials only need a weaker unforgeability notion (one-more unforgeability), yet the framework cannot easily accommodate this relaxation. Similarly, identity-based applications require stronger properties than unforgeability, specifically extractability for security proofs when adversaries can observe other users' credentials. In this work, we address these limitations, introducing new notions of extractability and one-more unforgeability. We improve two foundational works in the space: - The scheme by Chase et al. (CCS 2014), commonly referred to as CMZ or PS MAC, can be made statistically anonymous, and its issuance cost reduced from \(O(2n)\) to \(O(1)\). We update the proof of Chase et al. in the algebraic group model. - The scheme by Barki et al. (SAC 2016), known as BBDT or BBS MAC, can be issued more efficiently (one group element fewer). Finally, we note that for KVACs, designated-verifier proofs suffice since the verifier is known in advance. We introduce designated-verifier polynomial commitment schemes and instantiate a variant of the popular KZG commitment scheme without pairings. Any interactive oracle proof can be used in tandem with it, leading to designated-verifier fully-succinct zk-SNARKs without pairings for algebraic groups. Our model can improve the deployment of larger protocols relying on KVACs. We show this with some examples that benefit from our approach.
Last updated:  2025-02-23
SNARKs for Virtual Machines are Non-Malleable
Matteo Campanelli, Antonio Faonio, and Luigi Russo
Cryptographic proof systems have a plethora of applications: from building other cryptographic tools (e.g., malicious security for MPC protocols) to concrete settings such as private transactions or rollups. In several settings it is important for proof systems to be non-malleable: an adversary should not be able to modify a proof they have observed into another for a statement for which they do not know the witness. Proof systems that have been deployed in practice should arguably satisfy this notion: it is crucial in settings such as transaction systems and in order to securely compose proofs with other cryptographic protocols. As a consequence, results on non-malleability should keep up with designs of proofs being deployed. Recently, Arun et al. proposed $\mathsf{Jolt}$ (Eurocrypt 2024), arguably the first efficient proof system whose architecture is based on the lookup singularity approach (Barry Whitehat, 2022). This approach consists in representing a general computation as a series of table lookups. The final result is a SNARK for a Virtual Machine execution (or SNARK VM). Both SNARK VMs and lookup-singularity SNARKs are architectures with enormous potential and will probably be adopted more and more in the next years (and they already are). As of today, however, there is no literature regarding the non-malleability of SNARK VMs. The goal of this work is to fill this gap by providing both concrete non-malleability results and a set of technical tools for a more general study of SNARK VM security (as well as "modular" SNARKs in general). As a concrete result, we study the non-malleability of (an idealized version of) $\mathsf{Jolt}$ and its fundamental building block, the lookup argument $\mathsf{Lasso}$. While connecting our new result on the non-malleability of $\mathsf{Lasso}$ to that of $\mathsf{Jolt}$, we develop a set of tools that enable the composition of non-malleable SNARKs. We believe this toolbox to be valuable in its own right.
Last updated:  2025-01-17
MAYO Key Recovery by Fixing Vinegar Seeds
Sönke Jendral and Elena Dubrova
As the industry prepares for the transition to post-quantum secure public key cryptographic algorithms, vulnerability analysis of their implementations is gaining importance. A theoretically secure cryptographic algorithm should also be able to withstand the challenges of physical attacks in real-world environments. MAYO is a candidate in the ongoing first round of the NIST post-quantum standardization process for selecting additional digital signature schemes. This paper demonstrates three first-order single-execution fault injection attacks on a MAYO implementation in an ARM Cortex-M4 processor. By using voltage glitching to disrupt the computation of the vinegar seed during the signature generation, we enable the recovery of the secret key directly from the faulty signatures. Our experimental results show that the success rates of the fault attacks in a single execution are 36%, 82%, and 99%, respectively. They emphasize the importance of developing countermeasures against fault attacks prior to the widespread deployment of post-quantum algorithms like MAYO.
Last updated:  2025-03-04
Universally Composable SNARKs with Transparent Setup without Programmable Random Oracle
Christian Badertscher, Matteo Campanelli, Michele Ciampi, Luigi Russo, and Luisa Siniscalchi
Non-interactive zero-knowledge (NIZK) proofs enable a prover to convince a verifier of an NP statement's validity using a single message, without disclosing any additional information. These proofs are widely studied and deployed, especially in their succinct form, where proof length is sublinear in the size of the NP relation. However, efficient succinct NIZKs typically require an idealized setup, such as a common reference string, which complicates real-world deployment. A key challenge is developing NIZKs with simpler, more transparent setups. A promising approach is the random-oracle (RO) methodology, which idealizes hash functions as public random functions. It is commonly believed that UC NIZKs cannot be realized using a non-programmable global RO—the simplest incarnation of the RO as a form of setup—since existing techniques depend on the ability to program the oracle. We challenge this belief and present a methodology to build UC-secure NIZKs based solely on a global, non-programmable RO. By applying our framework we are able to construct a NIZK that achieves witness-succinct proofs of logarithmic size, breaking both the programmability barrier and polylogarithmic proof size limitations for UC-secure NIZKs with transparent setups. We further observe that among existing global RO formalizations put forth by Camenisch et al. (Eurocrypt 2018), our choice of setup is necessary to achieve this result. From the technical standpoint, our contributions span both modeling and construction. We leverage the shielded (super-poly) oracle model introduced by Broadnax et al. (Eurocrypt 2017) to define a UC NIZK functionality that can serve as a drop-in replacement for its standard variant—it preserves the usual soundness and zero-knowledge properties while ensuring its compositional guarantees remain intact. To instantiate this functionality under a non-programmable RO setup, we follow the framework of Ganesh et al. (Eurocrypt 2023) and provide new building blocks for it, around which lie some of our core technical contributions: a novel polynomial encoding technique and the leakage analysis of its companion polynomial commitment, based on Bulletproofs-style folding. We also provide a second construction, based on a recent work by Chiesa and Fenzi (TCC 2024), and show that it achieves a slightly weaker version of the NIZK functionality.
Last updated:  2025-02-15
Fully Succinct Arguments over the Integers from First Principles
Matteo Campanelli and Mathias Hall-Andersen
In this work we construct fully succinct arguments of knowledge for computations over the infinite ring $\mathbb{Z}$. We are motivated both by their practical applications—e.g. verifying cryptographic primitives based on RSA groups or Ring-LWE; field emulation and field "switching"; arbitrary-precision arithmetic—and by theoretical questions of techniques for constructing arguments over the integers in general. Unlike prior works constructing arguments for $\mathbb{Z}$ or $\mathbb{Z}_{2^k}$, we circumvent most of the complexities of arithmetizing or extracting over these rings directly. Instead, we introduce a general and arguably simpler theoretical framework for building succinct arguments over $\mathbb{Z}$, one which allows protocol designers to reuse existing SNARK techniques. This is possible thanks to our key technique—$\textit{fingerprinting}$, a form of arithmetic hashing—for "bootstrapping" protocols over the integers from existing systems over prime fields (e.g., multilinear-flavored ones, such as Spartan). The resulting protocol can then be compiled into a cryptographic argument via a novel kind of polynomial commitment allowing queries to a multivariate $\textit{integer}$ polynomial modulo an arbitrary prime $q$. We show how to instantiate our framework and obtain a concrete scheme, $\mathbb{Z}$artan. This is the first construction in the literature that is $\textit{fully}$ succinct over integer computation, i.e., with short proofs and fast verification even when the witness consists of very large integers.
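The fingerprinting idea can be illustrated with a toy example, written for this listing under the assumption that reduction modulo random primes is the intended flavor of arithmetic hashing: an integer relation is checked in a prime field after reducing modulo a random prime, and a false relation survives all checks only with small probability.

```python
# Probabilistic check of an integer relation a * b == c by reduction mod random primes.
from sympy import randprime

def fingerprint_check(a, b, c, trials=8):
    for _ in range(trials):
        q = randprime(2**62, 2**63)              # random 63-bit prime
        if (a % q) * (b % q) % q != c % q:
            return False                          # relation is definitely false over Z
    return True                                   # true over Z except with tiny probability

a = 2**400 + 7
b = 3**250 + 11
assert fingerprint_check(a, b, a * b)
assert not fingerprint_check(a, b, a * b + 1)
```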
Last updated:  2024-10-03
HHL for tensor-decomposable matrices
Cezary Pilaszewicz and Marian Margraf
We use the HHL algorithm to retrieve a quantum state holding the algebraic normal form (ANF) of a Boolean function. Unlike the standard HHL applications, we do not describe the cipher as an exponentially big system of equations. Rather, we perform a set of small matrix inversions which corresponds to the Boolean Möbius transform. This creates a superposition holding information about the ANF in the form: $\ket{\mathcal{A}_{f}} =\frac{1}{C} \sum_{I=0}^{2^n-1} c_I \ket{I}$, where $c_I$ is the coefficient of the ANF and $C$ is a scaling factor. The procedure has a time complexity of $\mathcal{O}(n)$ for a Boolean function with an $n$-bit input. We also propose two approaches for extracting some information about the ANF from such a state.
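For reference, the classical Boolean Möbius transform mentioned above can be computed with the standard butterfly algorithm; the sketch below is a classical companion written for this listing, not the paper's quantum procedure, and recovers the ANF coefficients $c_I$ from a truth table.

```python
def mobius_transform(truth_table):
    """Over GF(2): anf[I] becomes the coefficient of the monomial selected by bitmask I."""
    anf = list(truth_table)
    n = len(anf).bit_length() - 1
    for i in range(n):                     # one butterfly layer per variable
        for x in range(len(anf)):
            if x & (1 << i):
                anf[x] ^= anf[x ^ (1 << i)]
    return anf                             # the transform is an involution

# Example: f(x2, x1, x0) = x0*x1 XOR x2, truth table indexed by the bits (x2 x1 x0).
f = [(x & 1) * ((x >> 1) & 1) ^ ((x >> 2) & 1) for x in range(8)]
print(mobius_transform(f))  # nonzero exactly at masks 0b011 (x0*x1) and 0b100 (x2)
```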
Last updated:  2024-10-03
Bit t-SNI Secure Multiplication Gadget for Inner Product Masking
John Gaspoz and Siemen Dhooghe
Masking is a sound countermeasure to protect against differential power analysis. Since the work by Balasch et al. in ASIACRYPT 2012, inner product masking has been explored as an alternative to the well-known Boolean masking. In CARDIS 2017, Poussier et al. showed that inner product masking achieves higher-order security than Boolean masking, for the same sharing size, in the bit-probing model. Wang et al. in TCHES 2020 verified the inner product masking's security order amplification in practice and proposed new gadgets for inner product masking. Finally, Wu et al. in TCHES 2022 showed that this security amplification comes from the bit-probing model, but that Wang et al.'s gadgets are not higher-order bit-probing secure, reducing the computation's practical security. The authors concluded their work with the open question of providing an inner product multiplication gadget which maintains the masking's bit-probing security, and conjectured that such a gadget maintains the practical security order amplification of the masking during its computation. In this paper, we answer Wu et al.'s open problems positively. We are the first to present a multiplication gadget for inner product masking which is proven secure in the bit-level probing model using the t-Strong Non-Interference (SNI) property. Moreover, we provide practical evidence that the gadget indeed maintains the security amplification of its masking. This is done via an evaluation of an assembly implementation of the gadget on an ARM Cortex-M4 core. We used this implementation to take leakage measurements and show that no leakage occurs at orders below the gadget's bit-probing security level, in either univariate or multivariate analysis.
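For context, the sketch below illustrates only the inner product encoding itself over $\mathrm{GF}(2^8)$, not the multiplication gadget from the paper; the public vector $L$ and the field representation are illustrative assumptions made for this listing.

```python
import secrets

def gf_mul(a, b):                     # multiplication in GF(2^8) with the AES polynomial 0x11b
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11b
    return r

L = [1, 0x53, 0xca, 0x07]             # public vector, L_1 = 1 (values are illustrative)

def ip_encode(x):
    """Shares x_1..x_n with x = sum_i L_i * x_i in GF(2^8)."""
    shares = [secrets.randbelow(256) for _ in range(1, len(L))]
    x1 = x
    for Li, xi in zip(L[1:], shares):
        x1 ^= gf_mul(Li, xi)          # since L_1 = 1, x_1 = x + sum_{i>=2} L_i * x_i
    return [x1] + shares

def ip_decode(shares):
    x = 0
    for Li, xi in zip(L, shares):
        x ^= gf_mul(Li, xi)
    return x

secret = 0xA7
assert ip_decode(ip_encode(secret)) == secret
```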
Last updated:  2024-10-02
Fully Composable Homomorphic Encryption
Daniele Micciancio
The traditional definition of fully homomorphic encryption (FHE) is not composable, i.e., it does not guarantee that evaluating two (or more) homomorphic computations in a sequence produces correct results. We formally define and investigate a stronger notion of homomorphic encryption which we call "fully composable homomorphic encryption", or "composable FHE". The definition is both simple and powerful: it does not directly involve the evaluation of multiple functions, and yet it supports the arbitrary composition of homomorphic evaluations. On the technical side, we compare the new definition with other definitions proposed in the past, proving both implications and separations, and show how the "bootstrapping" technique of (Gentry, STOC 2009) can be formalized as a method to transform a (non-composable, circular secure) homomorphic encryption scheme into a fully composable one. We use this formalization of bootstrapping to formulate a number of conjectures and open problems.
Last updated:  2024-10-02
PoUDR: Proof of Unified Data Retrieval in Decentralized Storage Networks
Zonglun Li, Shuhao Zheng, Junliang Luo, Ziyue Xin, Dun Yuan, Shang Gao, Sichao Yang, Bin Xiao, and Xue Liu
Decentralized storage networks, including IPFS and Filecoin, have created a marketplace where individuals exchange storage space for profit. These networks employ protocols that reliably ensure data storage providers accurately store data without alterations, safeguarding the interests of storage purchasers. However, these protocols lack an effective and equitable payment mechanism for data retrieval, particularly when multiple data queriers are involved. This necessitates a protocol that ensures both data integrity and fair compensation for data providers. In decentralized storage, data is fragmented into small blocks and stored across multiple nodes, a process known as data swarming. Due to this property, traditional data exchange protocols are inadequate in terms of communication and economic efficiency. We propose the Proof of Unified Data Retrieval protocol (PoUDR). PoUDR incorporates ZK-SNARK to facilitate a fair data exchange protocol. PoUDR reduces the number of blockchain transactions for both single block and data swarming retrieval. The protocol requires only a single key-revealing transaction submitted by the provider to the blockchain for each data block. This architecture allows for further optimization of transaction numbers through a batched proof technique on the provider's side. This approach necessitates only $N_P$ transactions within a specific time frame when data consisting of $N_D$ blocks, provided by $N_P$ providers, is queried by $N_Q$ queriers. This work provides a comprehensive definition for Secure Swarming Data Exchange (SSDE), including security assumptions. Also it offers a detailed game-based security analysis for the PoUDR protocol. Moreover, the PoUDR protocol has been fully integrated into the Bitswap protocol (IPFS). Within this integration, our proposed Relaxed Groth16 algorithm addresses the significant technical challenge of generating zero-knowledge proofs, leading to substantial cost reductions for overall feasibility of secure data retrieval in decentralized storage networks.
Last updated:  2024-10-02
HEonGPU: a GPU-based Fully Homomorphic Encryption Library 1.0
Ali Şah Özcan and Erkay Savaş
HEonGPU is a high-performance library designed to optimize Fully Homomorphic Encryption (FHE) operations on Graphics Processing Units (GPUs). By leveraging the parallel processing capacity of GPUs, HEonGPU significantly reduces the computational overhead typically associated with FHE by executing complex operations concurrently. This allows for faster execution of homomorphic computations on encrypted data, enabling real-time applications in privacy-preserving machine learning and secure data processing. A key advantage of HEonGPU lies in its multi-stream architecture, which not only allows parallel processing of tasks to improve throughput but also eliminates the overhead of data transfers between the host device (i.e., CPU) and the GPU. By efficiently managing data within the GPU using multi-streams, HEonGPU minimizes the need for repeated memory transfers, further enhancing performance. HEonGPU’s GPU-optimized design makes it ideal for large-scale encrypted computations, providing users with reduced latency and higher performance across various FHE schemes.
Last updated:  2024-10-02
Robust AE With Committing Security
Viet Tung Hoang and Sanketh Menda
There has been recent interest in developing and standardizing Robust Authenticated Encryption (Robust AE) schemes. NIST, for example, is considering an Accordion mode (a wideblock tweakable blockcipher), with Robust AE as a primary application. On the other hand, recent attacks and applications suggest that encryption needs to be committing. Indeed, committing security is also a design consideration in the Accordion mode. Yet it is unclear how to build a Robust AE with committing security. In this work, we give a modular solution for this problem. We first show how to transform any wideblock tweakable blockcipher TE to a Robust AE scheme SE that commits just the key. The overhead is cheap, just a few finite-field multiplications and blockcipher calls. If one wants to commit the entire encryption context, one can simply hash the context to derive a 256-bit subkey, and use SE with that subkey. Using a 256-bit key for SE only means that SE has to rely on AES-256; it doesn't require TE to have a 256-bit key. Our approach frees the Accordion designs from consideration of committing security. Moreover, it yields significant savings for several key-committing applications that don't want to pay the inherent hashing cost of full committing security.
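As a loose illustration of the "hash the whole context into a 256-bit subkey" step, the sketch below uses SHA-256 and AES-256-GCM as stand-ins; the paper's scheme SE is built from a wideblock tweakable blockcipher, not from GCM, so this is only an assumption-laden approximation of the idea.

# Rough sketch of deriving a context-committing subkey by hashing the full
# encryption context, then encrypting under that subkey. AES-256-GCM is used
# only as a stand-in for the wideblock-based scheme SE; this is an illustrative
# assumption, not the paper's construction.
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

def commit_encrypt(key: bytes, nonce: bytes, ad: bytes, msg: bytes) -> bytes:
    # Hash the whole context (key, nonce, associated data) into a 256-bit subkey.
    subkey = hashlib.sha256(b"context-commit" + key + nonce + ad).digest()
    return AESGCM(subkey).encrypt(nonce, msg, ad)

def commit_decrypt(key: bytes, nonce: bytes, ad: bytes, ct: bytes) -> bytes:
    subkey = hashlib.sha256(b"context-commit" + key + nonce + ad).digest()
    return AESGCM(subkey).decrypt(nonce, ct, ad)

key, nonce, ad = os.urandom(32), os.urandom(12), b"header"
ct = commit_encrypt(key, nonce, ad, b"hello")
assert commit_decrypt(key, nonce, ad, ct) == b"hello"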
Last updated:  2024-10-02
Findex: A Concurrent and Database-Independent Searchable Encryption Scheme
Théophile Brézot and Chloé Hébant
State-of-the-art database implementations offer a wide range of functionalities and impressive performance while supporting highly concurrent loads. However, they all rely on the server knowing the content of the database, which raises issues when sensitive information is stored on a server that cannot be trusted. Encrypting documents before sending them to a remote server solves the confidentiality issue at the cost of losing the keyword search functionality. Cryptographic primitives such as Symmetric Searchable Encryption (SSE) schemes have been proposed to recover this functionality. However, no SSE construction properly defines correctness and successfully guarantees security in a concurrent setting. This paper attempts a first step in this direction by recommending linearizability as the standard notion of correctness for a concurrent SSE. We study the impact of concurrency on security and stress the need for finer-grained security models. Hence, we propose the log-security model that guarantees security against an adversary having access to persistency-related logs, fixing a blind spot in the snapshot model while capturing security in a concurrent setting. We also build the first concurrent SSE solution proven correct and secure in a concurrent setting, which can be implemented on top of any database. Our scheme proves to be fast thanks to optimal wait-free search operations and sequentially optimal, lock-free modifications, both of which execute in under one microsecond per binding, while incurring only a 13.3% storage expansion.
Last updated:  2024-10-02
Formal Security Analysis of the OpenID FAPI 2.0 Family of Protocols: Accompanying a Standardization Process
Pedram Hosseyni, Ralf Küsters, and Tim Würtele
FAPI 2.0 is a suite of Web protocols developed by the OpenID Foundation's FAPI Working Group (FAPI WG) for third-party data sharing and digital identity in high-risk environments. Even though the specifications are not completely finished, several important entities have started to adopt the FAPI 2.0 protocols, including Norway's national HelseID, Australia's Consumer Data Standards, as well as private companies like Authlete and Australia-based connectID; the predecessor FAPI 1.0 is in widespread use with millions of users. The FAPI WG asked us to accompany the standardization of the FAPI 2.0 protocols with a formal security analysis to proactively identify vulnerabilities before widespread deployment and to provide formal security guarantees for the standards. In this paper, we report on our analysis and findings. Our analysis is based on a detailed model of the Web infrastructure, the so-called Web Infrastructure Model (WIM), which we extend to be able to carry out our analysis of the FAPI 2.0 protocols including important extensions like FAPI-CIBA. Based on the (extended) WIM and formalizations of the security goals and attacker model laid out in the FAPI 2.0 specifications, we provide a formal model of the protocols and carry out a formal security analysis, revealing several attacks. We have worked with the FAPI WG to fix the protocols, resulting in several amendments to the specifications. With these changes in place, we have adjusted our protocol model and formally proved that the security properties hold true under the strong attacker model defined by the FAPI WG.
Last updated:  2024-10-02
Quantum Cryptography from Meta-Complexity
Taiga Hiroka and Tomoyuki Morimae
In classical cryptography, one-way functions (OWFs) are the minimal assumption, while recent active studies have demonstrated that OWFs are not necessarily the minimum assumption in quantum cryptography. Several new primitives have been introduced such as pseudorandom unitaries (PRUs), pseudorandom function-like state generators (PRFSGs), pseudorandom state generators (PRSGs), one-way state generators (OWSGs), one-way puzzles (OWPuzzs), and EFI pairs. They are believed to be weaker than OWFs, but they still imply many useful applications such as private-key quantum money schemes, secret-key encryption, message authentication codes, digital signatures, commitments, and multiparty computations. Now that the possibility of quantum cryptography without OWFs has opened up, the most important goal in the field is to provide them with concrete instantiations. For example, in classical cryptography, there are many instantiations of OWFs based on concrete hardness assumptions, such as the hardness of discrete logarithms or learning with errors. The study of generic primitives is justified by the existence of concrete instantiations. On the other hand, in quantum cryptography, all known constructions of those primitives are only from OWFs. We therefore have the following important open problem: Do they have instantiations based on some concrete hardness assumptions that will not imply OWFs? Ideally, the assumptions should be the ones that are studied in other contexts than cryptography. In this paper, we give a candidate answer to the question by showing that quantum-average-hardness of GapK problem implies the existence of OWPuzzs. A GapK problem is a promise problem to decide whether a given bit string has a small Kolmogorov complexity or not. Its quantum-average-hardness means that an instance is sampled from a quantum-polynomial-time-samplable distribution, and no quantum-polynomial-time algorithm can solve the problem with high probability. As far as we know, this is the first time that a ``Microcrypt'' primitive is constructed based on concrete hardness assumptions that do not seem to imply OWFs. Moreover, the assumptions are studied in other contexts than cryptography, especially in the field of meta-complexity. (Note: During the preparation of this manuscript, Khurana and Tomer \cite{cryptoeprint:2024/1490} uploaded a concurrent work.)
Last updated:  2024-10-02
Security Perceptions of Users in Stablecoins: Advantages and Risks within the Cryptocurrency Ecosystem
Maggie Yongqi Guan, Yaman Yu, Tanusree Sharma, Molly Zhuangtong Huang, Kaihua Qin, Yang Wang, and Kanye Ye Wang
Stablecoins, a type of cryptocurrency pegged to another asset to maintain a stable price, have become an important part of the cryptocurrency ecosystem. Prior studies have primarily focused on examining the security of stablecoins from technical and theoretical perspectives, with limited investigation into users’ risk perceptions and security behaviors in stablecoin practices. To address this research gap, we conducted a mixed-method study that included constructing a stablecoin interaction framework based on the literature, which informed the design of our interview protocol, semi-structured interviews (n=21), and Reddit data analysis (9,326 posts). We found that participants see stable value and regulatory compliance as key security advantages of stablecoins over other cryptocurrencies. However, participants also raised concerns about centralization risks in fiat-backed stablecoins, perceived challenges in crypto-backed stablecoins due to limited reliance on fully automated execution, and confusion regarding the complex mechanisms of algorithmic stablecoins. We proposed improving user education and optimizing mechanisms to address these concerns and promote the safer use of stablecoins.
Last updated:  2024-10-01
VOLE-in-the-head signatures from Subfield Bilinear Collisions
Janik Huth and Antoine Joux
In this paper, we introduce a new method to construct a signature scheme based on the subfield bilinear collision problem published at Crypto 2024. We use techniques based on vector oblivious linear evaluation (VOLE) to significantly improve the running time and signature size of the scheme compared to the MPC-in-the-head version.
Last updated:  2024-11-01
Cryptographic Characterization of Quantum Advantage
Tomoyuki Morimae, Yuki Shirakawa, and Takashi Yamakawa
Quantum computational advantage refers to the existence of computational tasks that are easy for quantum computing but hard for classical computing. Unconditionally showing quantum advantage is beyond our current understanding of complexity theory, and therefore some computational assumptions are needed. Which complexity assumption is necessary and sufficient for quantum advantage? In this paper, we show that inefficient-verifier proofs of quantumness (IV-PoQ) exist if and only if classically-secure one-way puzzles (OWPuzzs) exist. As far as we know, this is the first time that a complete cryptographic characterization of quantum advantage is obtained. IV-PoQ capture various types of quantum advantage previously studied, such as sampling-based and search-based quantum advantage. Previous work [Morimae and Yamakawa, Crypto 2024] showed that IV-PoQ can be constructed from OWFs, but a construction of IV-PoQ from weaker assumptions was left open. Our result solves the open problem. OWPuzzs are one of the most fundamental quantum cryptographic primitives implied by many quantum cryptographic primitives weaker than one-way functions (OWFs). The equivalence between IV-PoQ and classically-secure OWPuzzs therefore highlights that if there is no quantum advantage, then these fundamental primitives do not exist. The equivalence also means that quantum advantage is an example of the applications of OWPuzzs. Except for commitments, no application of OWPuzzs was known before. Our result shows that quantum advantage is another application of OWPuzzs, which solves the open question of [Chung, Goldin, and Gray, Crypto 2024]. Moreover, it is the first quantum-computation-classical-communication (QCCC) application of OWPuzzs.
Last updated:  2024-10-01
Relaxed Lattice-Based Programmable Hash Functions: New Efficient Adaptively Secure IBEs
Xingye Lu, Jingjing Fan, and Man Ho AU
In this paper, we introduce the notion of relaxed lattice-based programmable hash function (RPHF), which is a novel variant of lattice-based programmable hash functions (PHFs). Lattice-based PHFs, together with preimage trapdoor functions (TDFs), have been widely utilized (implicitly or explicitly) in the construction of adaptively secure identity-based encryption (IBE) schemes. The preimage length and the output length of the underlying PHF and TDF together determine the user secret key and ciphertext lengths of the IBE schemes. However, the current lattice-based PHF definition imposes the restriction that the preimage length of TDF in the IBE schemes cannot be too short, hindering the utilization of size-efficient NTRU TDF. To overcome this hurdle, RPHF relaxes the hash key distribution requirement in the definition of PHF from statistical indistinguishability to computational indistinguishability. This relaxation eliminates limitations on the preimage length of underlying TDFs in IBE, enabling the construction of IBEs from NTRU TDFs. We introduce two instantiations of RPHF: the first produces a hash output length of $2$ ring elements, with a hash key size linear to the input length, and the second yields an output length of $14$ ring elements, with a hash key size proportional to the square root of the input length. Building upon these RPHF instantiations, we propose two adaptively secure lattice-based IBE schemes with ciphertext lengths of $5$ and $17$ ring elements and user secret key lengths of $4$ and $16$ ring elements, respectively. The length of the IBE master public key is roughly equivalent to the size of the hash key of the underlying RPHF. In comparison to existing IBE constructions, our proposed schemes achieve a significant reduction (over an order of magnitude) in ciphertext and secret key sizes. Notably, state-of-the-art constructions from ideal lattices exhibit secret key and ciphertext sizes over 100 ring elements, making our proposed schemes highly efficient. Moreover, the master public key sizes of our IBEs remain comparable.
Last updated:  2024-10-01
More Efficient Lattice-based OLE from Circuit-private Linear HE with Polynomial Overhead
Leo de Castro, Duhyeong Kim, Miran Kim, Keewoo Lee, Seonhong Min, and Yongsoo Song
We present a new and efficient method to obtain circuit privacy for lattice-based linearly homomorphic encryptions (LHE). In particular, our method does not involve noise-flooding with exponentially large errors or iterative bootstrapping. As a direct result, we obtain a semi-honest oblivious linear evaluation (OLE) protocol with the same efficiency, reducing the communication cost of the prior state of the art by 50%. Consequently, the amortized time of our protocol improves on the prior work by 33% in a 100 Mbps network setting. Our semi-honest OLE is the first to achieve both concrete efficiency and asymptotic quasi-optimality. Together with an extension of the recent zero-knowledge proof of plaintext knowledge, our LHE yields actively-secure OLE with 2.7x lower communication than the prior work. When applied to Overdrive (Eurocrypt '18), an MPC preprocessing protocol, our method provides a 1.4x improvement in communication over the state of the art.
Last updated:  2025-02-28
BEAT-MEV: Epochless Approach to Batched Threshold Encryption for MEV Prevention
Jan Bormet, Sebastian Faust, Hussien Othman, and Ziyan Qu
In decentralized finance (DeFi), the public availability of pending transactions presents significant privacy concerns, enabling market manipulation through miner extractable value (MEV). MEV occurs when block proposers exploit the ability to reorder, omit, or include transactions, causing financial loss to users from frontrunning. Recent research has focused on encrypting pending transactions, hiding transaction data until block finalization. To this end, Choudhuri et al. (USENIX '24) introduce an elegant new primitive called Batched Threshold Encryption (BTE) where a batch of encrypted transactions is selected by a committee and only decrypted after block finalization. Crucially, BTE achieves low communication complexity during decryption and guarantees that all encrypted transactions outside the batch remain private. An important shortcoming of their construction is, however, that it progresses in epochs and requires a costly setup in MPC for each batch decryption. In this work, we introduce a novel BTE scheme addressing the limitations by eliminating the need for an expensive epoch setup while achieving practical encryption and decryption times. Additionally, we explore a previously ignored question of how users can coordinate their transactions, which is crucial for the functionality of the system. Along the way, we present several optimizations and trade-offs between communication and computational complexity that allows us to achieve practical performance on standard hardware ($<2$ ms for encryption and $<440$ ms for decrypting $512$ transactions). Finally, we prove our constructions secure in a model that captures practical attacks on MEV-prevention mechanisms.
Last updated:  2025-01-12
Bitwise Garbling Schemes --- A Model with $\frac{3}{2}\kappa$-bit Lower Bound of Ciphertexts
Fei Xu, Honggang Hu, and Changhong Xu
At Eurocrypt 2015, Zahur, Rosulek, and Evans proposed the model of Linear Garbling Schemes. Within this model, they proved a $2\kappa$-bit lower bound on ciphertexts for a broad class of garbling schemes. At Crypto 2021, Rosulek and Roy presented the innovative "three-halves" garbling scheme in which AND gates cost $1.5\kappa+5$ bits and XOR gates are free. A noteworthy aspect of their scheme is the slicing-and-dicing technique, which is applicable universally to all AND gates when garbling a Boolean circuit. Following this revelation, Rosulek and Roy presented several open problems. Our research primarily addresses one of them: ``Is $1.5\kappa$ bits optimal for garbled AND gates in a more inclusive model than Linear Garbling Schemes?'' In this paper, we propose the model of Bitwise Garbling Schemes. Our key finding is that $1.5\kappa$ bits is indeed optimal for arbitrary garbled AND gates in our model. Moreover, we prove the necessity of the free-XOR technique: if free-XOR is forbidden, we prove a $2\kappa$-bit lower bound. As an extension, we apply our idea to construct a model for fan-in 3 gates. Somewhat unexpectedly, we prove a $\frac{7}{4}\kappa$-bit lower bound. Unfortunately, the corresponding construction is not suitable for 3-input AND gates. This construction may be of independent interest.
Last updated:  2024-10-28
FLI: Folding Lookup Instances
Albert Garreta and Ignacio Manzur
We introduce two folding schemes for lookup instances: FLI and FLI+SOS. Both use a PIOP to check that a matrix has elementary basis vectors as rows, with FLI+SOS adding a twist based on Lasso’s SOS-decomposability. FLI takes two lookup instances $\{\mathbf{a}_1\}, \{\mathbf{a}_2\}\subseteq\mathbf{t}$, and expresses them as matrix equations $M_i\cdot\mathbf{t}^\mathsf{T}=\mathbf{a}_i^\mathsf{T}$ for $i=1,2$, where each matrix $M_i\in\mathbb{F}^{m\times N}$ has rows which are elementary basis vectors in $\mathbb{F}^N$. Matrices that satisfy this condition are said to be in $R_{\mathsf{elem}}$. Then, a folding scheme for $R_{\mathsf{elem}}$ into a relaxed relation is used, which combines the matrices $M_1, M_2$ as $M_1+\alpha M_2$ for a random $\alpha\in\mathbb{F}$. Finally, the lookup equations are combined as $(M_1+\alpha M_2)\cdot \mathbf{t}^{\mathsf{T}} = (\mathbf{a}_1+\alpha \mathbf{a}_2)^\mathsf{T}$. In FLI, only the property that a matrix is in $R_{\mathsf{elem}}$ is folded, and this makes the FLI folding step the cheapest among existing solutions. The price to pay is in the cost for proving accumulated instances. FLI+SOS builds upon FLI to enable folding of large SOS-decomposable tables. This is achieved through a variation of Lasso's approach to SOS-decomposability, which fits FLI naturally. For comparison, we describe (for the first time to our knowledge) straightforward variations of Protostar and Proofs for Deep Thought that also benefit from SOS-decomposability. We see that for many reasonable parameter choices, and especially those arising from lookup-based zkVMs, FLI+SOS can concretely be the cheapest folding solution.
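A minimal sketch of the folding step over a toy prime field: because every row of a lookup matrix is an elementary basis vector, the matrix can be stored as a list of indices, and two instances are folded by a random linear combination. The field size and instances below are hypothetical, and no PIOP or commitment is modeled.

# Toy sketch of the FLI folding step over a small prime field. Each lookup
# matrix M_i has elementary basis vectors as rows, so it is described simply by
# the list of looked-up indices. Field size and instances are illustrative only.
import secrets

P = 2**31 - 1                      # toy prime field (hypothetical choice)

def lookup_as_matrix(indices, width):
    """Row i of M is e_{indices[i]}; returned as dense rows for clarity."""
    return [[1 if j == idx else 0 for j in range(width)] for idx in indices]

def mat_vec(M, t):
    return [sum(row[j] * t[j] for j in range(len(t))) % P for row in M]

def fold(M1, M2, a1, a2, alpha):
    """Random-linear-combination folding: M = M1 + alpha*M2, a = a1 + alpha*a2."""
    M = [[(x + alpha * y) % P for x, y in zip(r1, r2)] for r1, r2 in zip(M1, M2)]
    a = [(x + alpha * y) % P for x, y in zip(a1, a2)]
    return M, a

t = [10, 20, 30, 40]               # the public lookup table
M1 = lookup_as_matrix([2, 0, 3], len(t))
M2 = lookup_as_matrix([1, 1, 2], len(t))
a1, a2 = mat_vec(M1, t), mat_vec(M2, t)   # a_i = M_i * t^T, i.e. the looked-up values

alpha = secrets.randbelow(P)
M, a = fold(M1, M2, a1, a2, alpha)
# The folded (relaxed) instance still satisfies the lookup equation:
assert mat_vec(M, t) == a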
Last updated:  2024-12-11
Folding Schemes with Privacy Preserving Selective Verification
Joan Boyar and Simon Erfurth
Folding schemes are an exciting new primitive, transforming the task of performing multiple zero-knowledge proofs of knowledge for a relation into performing just one zero-knowledge proof, for the same relation, and a number of cheap inclusion proofs. Recently, folding schemes have been used to amortize the cost associated with proving different statements to multiple distinct verifiers, which has various applications. We observe that for these uses, leaking information about the statements folded together can be problematic, yet this happens with previous constructions. Towards resolving this issue, we give a natural definition of privacy-preserving folding schemes and of the security they should offer. To construct privacy-preserving folding schemes, we first define statement hiders, a primitive which might be of independent interest. In a nutshell, a statement hider hides an instance of a relation as a new instance in the same relation. The new instance is in the relation if and only if the initial instance is. With this building block, we can utilize existing folding schemes to construct a privacy-preserving folding scheme, by first hiding each of the statements. Folding schemes allow verifying that a statement was folded into another statement, while statement hiders allow verifying that a statement was hidden as another statement.
Last updated:  2024-09-30
Challenges in Timed Cryptography: A Position Paper
Karim Eldefrawy, Benjamin Terner, and Moti Yung
Time-lock puzzles are unique cryptographic primitives that use computational complexity to keep information secret for some period of time, after which security expires. This topic, while over 25 years old, is still in a state where foundations are not well understood: for example, current analysis techniques of time-lock primitives provide no sound mechanism to build composed multi-party cryptographic protocols which use expiring security as a building block. Further, some analyses employ idealizations and simulators of unrealistic computational power in order to pass as acceptable, sound security arguments. Our goal with this short paper is to advocate for understanding which approaches may lead to sound modeling beyond idealization, and which approaches may, in fact, be hopeless at this task of sound modeling. We explain in this paper how existing attempts at this subtle problem lack either composability, a fully consistent analysis, or functionality. The subtle flaws in the existing frameworks reduce to an impossibility result by Mahmoody et al., who showed that time-lock puzzles with super-polynomial gaps (between committer and solver) cannot be constructed from random oracles alone (or any repetitive computation where the next state is completely random given the prior state); yet the analyses of algebraic puzzles today still treat the solving process as if each step is a generic or random oracle. We point out that if the generation process relies on a trapdoor function that cannot be treated as a random oracle (to allow efficient generation while avoiding this impossibility result), then, to be consistent, the analysis of the solving process should also not treat such a trapdoor function (and its intermediate states) as a random oracle. We also delineate additional issues with the proof techniques used for time-lock puzzles. Specifically, when a time-lock puzzle must retain privacy for some amount of time, the reduction should bound the running time of the simulator. A simulator that can ``simulate'' only when given so much time that, if the same time were given to an adversary, that adversary could solve the puzzle, is not a valid security argument. We survey the adherence of various attempts to this principle, as well as the properties that different attempts achieve toward composition.
Last updated:  2024-11-14
Schnorr Signatures are Tightly Secure in the ROM under a Non-interactive Assumption
Gavin Cho, Georg Fuchsbauer, and Adam O'Neill
We show that the widely-used Schnorr signature scheme meets existential unforgeability under chosen-message attack (EUF-CMA) in the random oracle model (ROM) if the circular discrete-logarithm (CDL) assumption, a new, non-interactive and falsifiable variant of the discrete-log (DL) problem we introduce, holds in the underlying group. Notably, our reduction is tight, meaning the constructed adversary against CDL has essentially the same running time and success probability as the assumed forger. This is crucial for justifying the size of the underlying group used in practice. To our knowledge, we are the first to exhibit such a reduction. Indeed, prior work required interactive and non-falsifiable assumptions (Bellare and Dai, INDOCRYPT 2020) or additional idealized models beyond the ROM like the algebraic group model (Fuchsbauer et al., EUROCRYPT 2020). We justify CDL by showing it holds in two carefully-chosen idealized models that idealize different aspects of it. Namely, we show that CDL is as hard as DL in these models.
Last updated:  2025-04-12
How to Recover the Full Plaintext of XCB
Peng Wang, Shuping Mao, Ruozhou Xu, Jiwu Jing, and Yuewu Wang
XCB, a tweakable enciphering mode, is part of IEEE Std. 1619.2 for shared storage media. We show, through three plaintext recovery attacks, that no version of XCB is secure. A key observation is that XCB behaves like an LRW1-type tweakable block cipher for single-block messages, which lacks CCA security. The first attack targets one-block XCB, using three queries to recover the plaintext. The second requires four queries and recovers all of the plaintext except one block. The last one requires seven queries to recover the \emph{full} plaintext. The first attack applies to any scheme that follows the XCB structure, whereas the latter two attacks work on all versions of XCB, exploiting the separable property of the underlying universal hash function. To address these flaws, we propose the XCB* structure, an improved version of XCB that adds only two XOR operations. We prove that XCB* is STPRP-secure when using AXU hash functions, SPRPs, and a secure random-IV-based stream cipher.
Last updated:  2024-09-28
Overpass Channels: Horizontally Scalable, Privacy-Enhanced, with Independent Verification, Fluid Liquidity, and Robust Censorship Proof, Payments
Brandon "Cryptskii" Ramsay
Overpass Channels presents a groundbreaking approach to blockchain scalability, offering a horizontally scalable, privacy-enhanced payment network with independent verification, fluid liquidity, and robust censorship resistance. This paper introduces a novel architecture that leverages zero-knowledge proofs, specifically zk-SNARKs, to ensure transaction validity and privacy while enabling unprecedented throughput and efficiency. By eliminating the need for traditional consensus mechanisms and miners, Overpass Channels achieves remarkable cost-effectiveness and energy efficiency. The system's design focuses on unilateral payment channels and off-chain transaction processing, allowing for high-speed, low-latency operations without compromising security or decentralization. This paper provides a comprehensive analysis of the Overpass Channels system, including its cryptographic foundations, scalability metrics, integration, and potential applications across various domains, from global payments to confidential voting systems and secure health record management.
Last updated:  2024-09-28
Evaluating Leakage Attacks Against Relational Encrypted Search
Patrick Ehrler, Abdelkarim Kati, Thomas Schneider, and Amos Treiber
Encrypted Search Algorithms (ESAs) are a technique to encrypt data while the user can still search over it. ESAs can protect privacy and ensure security of sensitive data stored on remote storage. Originally, ESAs were used in the context of documents that consist of keywords. The user encrypts the documents, sends them to a remote server and is still able to search for keywords, without exposing information about the plaintext. The idea of ESAs has also been applied to relational databases, where queries (similar to SQL statements) can be privately executed on an encrypted database. But just like traditional Keyword-ESA schemes, Relational-ESAs also have the drawback of exposing some information, called leakage. Leakage attacks have been proposed in the literature that use this information together with auxiliary information to learn details about the plaintext. However, these leakage attacks have overwhelmingly been designed for and applied to Keyword-ESAs and not Relational-ESAs. In this work, we review the suitability of major leakage attacks against ESAs in the relational setting by adapting them accordingly. We perform extensive re-evaluations of the attacks on various relational datasets with different properties. Our evaluations show that major attacks can work against Relational-ESAs in the known-data setting. However, the attack performance differs between datasets, exploited patterns, and attacks.
Last updated:  2024-09-27
Lower Bounds on the Overhead of Indistinguishability Obfuscation
Zhenjian Lu, Noam Mazor, Igor C. Oliveira, and Rafael Pass
We consider indistinguishability obfuscation (iO) for multi-output circuits $C:\{0,1\}^n\to\{0,1\}^n$ of size $s$, where $s$ is the number of AND/OR/NOT gates in $C$. Under the worst-case assumption that NP $\nsubseteq$ BPP, we establish that there is no efficient indistinguishability obfuscation scheme that outputs circuits of size $s + o(s/ \log s)$. In other words, to be secure, an efficient iO scheme must incur an $\Omega(s/ \log s)$ additive overhead in the size of the obfuscated circuit. The hardness assumption under which this negative result holds is minimal since an optimal iO scheme with no circuit size overhead exists if NP $\subseteq$ BPP. Expanding on this result, we also rule out iO for single-output database-aided circuits with an arbitrary polynomial overhead in circuit size. This strengthens an impossibility result by Goldwasser and Rothblum [GR07], which considered circuits with access to an exponential-length database that the obfuscator has oracle access to; in contrast, our impossibility result holds even w.r.t. polynomial-size databases and even w.r.t. obfuscators that may run in time polynomial in the size of the database (and thus may read the whole database). The proof of our main result builds on a connection between obfuscation and meta-complexity put forward by Mazor and Pass [MP24], and on the NP-hardness of circuit minimization for multi-output circuits established by Loff, Ilango, and Oliveira [ILO20], together with other techniques from cryptography and complexity theory.
Last updated:  2024-09-27
Functional Adaptor Signatures: Beyond All-or-Nothing Blockchain-based Payments
Nikhil Vanjani, Pratik Soni, and Sri AravindaKrishnan Thyagarajan
In scenarios where a seller holds sensitive data $x$, like employee / patient records or ecological data, and a buyer seeks to obtain an evaluation of specific function $f$ on this data, solutions in trustless digital environments like blockchain-based Web3 systems typically fall into two categories: (1) Smart contract-powered solutions and (2) cryptographic solutions leveraging tools such as adaptor signatures. The former approach offers atomic transactions where the buyer learns the function evaluation $f(x)$ (and not $x$ entirely) upon payment. However, this approach is often inefficient, costly, lacks privacy for the seller's data, and is incompatible with systems that do not support smart contracts with required functionalities. In contrast, the adaptor signature-based approach addresses all of the above issues but comes with an "all-or-nothing" guarantee, where the buyer fully extracts $x$ and does not support functional extraction of the sensitive data. In this work, we aim to bridge the gap between these approaches, developing a solution that enables fair functional sales of information while offering improved efficiency, privacy, and compatibility similar to adaptor signatures. Towards this, we propose functional adaptor signatures (FAS) a novel cryptographic primitive that achieves all the desired properties as listed above. Using FAS, the seller can publish an advertisement committing to $x$. The buyer can pre-sign the payment transaction w.r.t. a function $f$, and send it along with the transaction to the seller. The seller adapts the pre-signature into a valid (buyer's) signature and posts the payment and the adapted signature on the blockchain to get paid. Finally, using the pre-signature and the posted signature, the buyer efficiently extracts $f(x)$, and completes the sale. We formalize the security properties of FAS, among which is a new notion called witness privacy to capture seller's privacy, which ensures the buyer does not learn anything beyond $f(x)$. We present multiple variants of witness privacy, namely, witness hiding, witness indistinguishability, and zero-knowledge, to capture varying levels of leakage about $x$ beyond $f(x)$ to a malicious buyer. We introduce two efficient constructions of FAS supporting linear functions (like statistics/aggregates, kernels in machine learning, etc.), that satisfy the strongest notion of witness privacy. One construction is based on prime-order groups and compatible with Schnorr signatures for payments, and the other is based on lattices and compatible with a variant of Lyubashevsky's signature scheme. A central conceptual contribution of our work lies in revealing a surprising connection between functional encryption, a well-explored concept over the past decade, and adaptor signatures, a relatively new primitive in the cryptographic landscape. On a technical level, we avoid heavy cryptographic machinery and achieve improved efficiency, by making black-box use of building blocks like inner product functional encryption (IPFE) while relying on certain security-enhancing techniques for the IPFE in a non-black-box manner. We implement our FAS construction for Schnorr signatures and show that for reasonably sized seller witnesses, the different operations are quite efficient even for commodity hardware.
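For orientation only, here is a toy Schnorr adaptor signature, the all-or-nothing primitive that FAS generalizes, over a deliberately tiny group; the parameters, hash, and message encoding are illustrative assumptions, and the sketch implements neither FAS nor its witness-privacy guarantees.

# Toy Schnorr adaptor signature over a tiny prime-order group (p = 23, q = 11,
# g = 2 has order 11 mod 23). All parameters are deliberately insecure toy
# choices; this sketches the all-or-nothing primitive that FAS generalizes.
import hashlib
import secrets

p, q, g = 23, 11, 2

def H(R, m):
    return int.from_bytes(hashlib.sha256(f"{R}|{m}".encode()).digest(), "big") % q

sk = secrets.randbelow(q - 1) + 1          # signer's (buyer's) secret key
pk = pow(g, sk, p)
t = secrets.randbelow(q - 1) + 1           # seller's secret ("adaptor witness")
T = pow(g, t, p)                           # published adaptor statement

# Buyer pre-signs a payment transaction m with respect to T.
m = "pay 1 coin to seller"
k = secrets.randbelow(q - 1) + 1
R = pow(g, k, p)
e = H((R * T) % p, m)
s_pre = (k + e * sk) % q                   # pre-signature; not yet a valid signature

# Seller adapts the pre-signature using t and posts the full signature.
s = (s_pre + t) % q
R_full = (R * T) % p
assert pow(g, s, p) == (R_full * pow(pk, e, p)) % p   # standard Schnorr verification

# Buyer extracts the witness from the posted signature and the pre-signature.
assert (s - s_pre) % q == t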
Last updated:  2025-02-13
Mind the Faulty Keccak: A Practical Fault Injection Attack Scheme Apply to All Phases of ML-KEM and ML-DSA
Yuxuan Wang, Jintong Yu, Shipei Qu, Xiaolin Zhang, Xiaowei Li, Chi Zhang, and Dawu Gu
ML-KEM and ML-DSA are NIST-standardized lattice-based post-quantum cryptographic algorithms. In both algorithms, Keccak is the designated hash algorithm extensively used for deriving sensitive information, making it a valuable target for attackers. In the field of fault injection attacks, few works targeted Keccak, and they have not fully explored its impact on the security of ML-KEM and ML-DSA. Consequently, many attacks remain undiscovered. In this article, we first identify various fault vulnerabilities of KECCAK that determine the (partial) output by manipulating the control flow under a practical loop-abort model. Then, we systematically analyze the impact of a faulty Keccak output and propose six attacks against ML-KEM and five attacks against ML-DSA, including key recovery, signature forgery, and verification bypass. These attacks cover the key generation, encapsulation, decapsulation, signing, and verification phases, making our scheme the first to apply to all phases of ML-KEM and ML-DSA. The proposed attacks are validated on the C implementations of the PQClean library’s ML-KEM and ML-DSA running on embedded devices. Experiments show that the required loop-abort faults can be realized on ARM Cortex-M0+, M3, M4, and M33 microprocessors with low-cost electromagnetic fault injection settings, achieving a success rate of 89.5%. Once the fault injection is successful, all proposed attacks can succeed with a probability of 100%.
Last updated:  2025-03-20
The SMAesH dataset
Gaëtan Cassiers and Charles Momin
Datasets of side-channel leakage measurements are widely used in research to develop and benchmark side-channel attack and evaluation methodologies. Compared to using custom and/or one-off datasets, widely-used and publicly available datasets improve research reproducibility and comparability. Further, performing high-quality measurements requires specific equipment and skills, while also taking a significant amount of time. Therefore, using publicly available datasets lowers the barriers to entry into side-channel research. This paper introduces the SMAesH dataset. SMAesH is an optimized masked hardware implementation of the AES with a provably secure arbitrary-order masking scheme. The SMAesH dataset contains power traces of the first-order protected version of SMAesH acquired on two FPGAs from different generations, along with key, plaintext and masking randomness. Part of the dataset uses uniformly random keys and plaintexts to enable leakage profiling, while another part uses a fixed key (still with uniformly random plaintext) to enable attack validation or leakage assessment in a fixed-versus-random setting. We document the experimental setup used to acquire the dataset and also discuss particular methods employed to maximize the information content in the leakage traces, such as power supply selection, fine-grained trace alignment and resolution optimization. We perform an in-depth leakage analysis of the two targets. Finally, we briefly discuss the known attacks against the dataset.
Last updated:  2024-09-27
On the rough order assumption in imaginary quadratic number fields
Antonio Sanso
In this paper, we investigate the rough order assumption (\(RO_C\)) introduced by Braun, Damgård, and Orlandi at CRYPTO 23, which posits that class groups of imaginary quadratic fields with no small prime factors in their order are computationally indistinguishable from general class groups. We present a novel attack that challenges the validity of this assumption by leveraging properties of Mordell curves over the rational numbers. Specifically, we demonstrate that if the rank of the Mordell curve \( E_{-16D} \) is at least 2, it contradicts the rough order assumption. Our attack deterministically breaks the \(RO_C\) assumption for discriminants of a special form, assuming the parity conjecture holds and certain conditions are met. Additionally, for both special and generic cases, our results suggest that the presence of nontrivial 3-torsion elements in class groups can undermine the \(RO_C\) assumption. Although our findings are concrete for specific cases, the generic scenario relies on heuristic arguments related to the Birch and Swinnerton-Dyer (BSD) conjecture, a significant and widely believed conjecture in number theory. Attacks against 2-torsion elements in class groups are already well known, but our work introduces a distinct approach targeting 3-torsion elements. These attacks are fundamentally different in nature, and both have relatively straightforward countermeasures, though they do not generalize to higher torsions. While these results do not entirely invalidate the \(RO_C\) assumption, they highlight the need for further exploration of its underlying assumptions, especially in the context of specific torsion structures within class groups.
Last updated:  2025-01-16
Efficient theta-based algorithms for computing $(\ell, \ell)$-isogenies on Kummer surfaces for arbitrary odd $\ell$
Ryo Yoshizumi, Hiroshi Onuki, Ryo Ohashi, Momonari Kudo, and Koji Nuida
Isogeny-based cryptography is one of the candidates for post-quantum cryptography. Recently, many isogeny-based cryptosystems using isogenies between Kummer surfaces were proposed. Most of those cryptosystems use $(2,2)$-isogenies. However, to broaden the possibilities for cryptosystems, higher-degree isogenies, say $(\ell,\ell)$-isogenies for an odd $\ell$, are also crucial. For an odd $\ell$, Lubicz and Robert gave a formula to compute $(\ell)^g$-isogenies in general dimension $g$. In this paper, we propose explicit and efficient algorithms to compute $(\ell,\ell)$-isogenies between Kummer surfaces, based on the Lubicz-Robert formula. In particular, we propose two algorithms for computing the codomain of the isogeny and two algorithms for evaluating the image of a point under the isogeny. Then, we count the number of arithmetic operations required by each of our proposed algorithms and, for each $\ell$, determine the most efficient algorithm of each of the two types in terms of the number of arithmetic operations. As an application, using the most efficient one, we implemented the SIDH attack on B-SIDH in SageMath. In a setting that originally claimed 128-bit security, our implementation was able to recover the secret key in about 11 hours.
Last updated:  2024-09-26
Witness Semantic Security
Paul Lou, Nathan Manohar, and Amit Sahai
To date, the strongest notions of security achievable for two-round publicly-verifiable cryptographic proofs for $\mathsf{NP}$ are witness indistinguishability (Dwork-Naor 2000, Groth-Ostrovsky-Sahai 2006), witness hiding (Bitansky-Khurana-Paneth 2019, Kuykendall-Zhandry 2020), and super-polynomial simulation (Pass 2003, Khurana-Sahai 2017). On the other hand, zero-knowledge and even weak zero-knowledge (Dwork-Naor-Reingold-Stockmeyer 1999) are impossible in the two-round publicly-verifiable setting (Goldreich-Oren 1994). This leaves an enormous gap in our theoretical understanding of known achievable security and the impossibility results for two-round publicly-verifiable cryptographic proofs for $\mathsf{NP}$. Towards filling this gap, we propose a new and natural notion of security, called witness semantic security, that captures the natural and strong notion that an adversary should not be able to learn any partial information about the prover's witness beyond what it could learn given only the statement $x$. Not only does our notion of witness semantic security subsume both witness indistinguishability and witness hiding, but it also has an easily appreciable interpretation. Moreover, we show that assuming the subexponential hardness of LWE, there exists a two-round public-coin publicly-verifiable witness semantic secure argument. To our knowledge, this is the strongest form of security known for this setting. As a key application of our work, we show that non-interactive zero-knowledge (NIZK) arguments in the common reference string (CRS) model can additionally maintain witness semantic security even when the CRS is maliciously generated. Our work gives the first construction from (subexponential) standard assumptions that achieves a notion stronger than witness-indistinguishability against a malicious CRS authority. In order to achieve our results, we give the first construction of a ZAP from subexponential LWE that is adaptively sound. Additionally, we propose a notion of simulation using non-uniform advice about a malicious CRS, which we also believe will be of independent interest.
Last updated:  2025-03-24
A Note on the SNOVA Security
Lih-Chung Wang, Chun-Yen Chou, Jintai Ding, Yen-Liang Kuan, Jan Adriaan Leegwater, Ming-Siou Li, Bo-Shu Tseng, Po-En Tseng, and Chia-Chun Wang
SNOVA is one of the Round 1 submissions to NIST's standardization process for additional post-quantum signatures. SNOVA is a UOV variant that uses the noncommutative-ring technique to reduce the size of the public key. SNOVA's public key size and signature size are well-balanced and have good performance. Recently, Beullens proposed a forgery attack against SNOVA, pointing out that the parameters of SNOVA can be attacked. Beullens also argued that with some slight adjustments his attacks can be prevented. In this note, we explain Beullens' forgery attack and show that it can be invalidated by two different approaches. Finally, we show that these two approaches do not increase the sizes of the public keys or signatures, and that the current parameters satisfy NIST's security requirements.
Last updated:  2024-09-26
Practical Mempool Privacy via One-time Setup Batched Threshold Encryption
Arka Rai Choudhuri, Sanjam Garg, Guru-Vamsi Policharla, and Mingyuan Wang
An important consideration with the growth of the DeFi ecosystem is the protection of clients who submit transactions to the system. As it currently stands, the public visibility of these transactions in the memory pool (mempool) makes them susceptible to market manipulations such as frontrunning and backrunning. More broadly, for various reasons—ranging from avoiding market manipulation to including time-sensitive information in their transactions—clients may want the contents of their transactions to remain private until they are executed, i.e. they have *pending transaction privacy*. Therefore, *mempool privacy* is becoming an increasingly important feature as DeFi applications continue to spread. We construct the first *practical* mempool privacy scheme that uses a *one-time* DKG setup for $n$ decryption servers. Our scheme ensures the strong privacy requirement by not only hiding the transactions until they are decrypted but also guaranteeing privacy for transactions that were not selected in the epoch (*pending transaction privacy*). For each epoch (or block), clients can encrypt their transactions so that, once $B$ (encrypted) transactions are selected for the epoch, they can be decrypted by each decryption server while communicating only $O(1)$ information. Our result improves upon the best-known prior works, which either: (i) require an expensive initial setup involving a (special purpose) multiparty computation protocol executed by the $n$ decryption servers, along with an additional *per-epoch* setup; (ii) require each decryption server to communicate $O(B)$ information; or (iii) do not guarantee pending transaction privacy. We implement our scheme and find that transactions can be encrypted in approximately 8.5 ms, independent of committee size, and the communication required to decrypt an entire batch of transactions is 48 bytes per party, independent of the number of transactions. If deployed on Ethereum, which processes close to 500 transactions per block, it takes close to 3.2 s for each committee member to compute a partial decryption and 3.0 s to decrypt all transactions for a block in single-threaded mode. Compared to prior work, which had an expensive setup phase per epoch, we incur $<2\times$ overhead in the worst case. On some metrics such as partial decryptions size, we actually fare better.
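As background on the "one partial decryption per server, constant-size combination" pattern, the toy sketch below shows plain 2-of-3 threshold ElGamal decryption over a tiny group; it is not the paper's batched construction, and all parameters are illustrative.

# Toy (2-of-3) threshold ElGamal decryption over a tiny group (p = 23, q = 11,
# g = 2). Illustrates the "one partial decryption per server, combine with
# Lagrange coefficients" pattern only; it is not the paper's batched scheme.
import secrets

p, q, g = 23, 11, 2
t_required, n_servers = 2, 3

# Dealer (stand-in for the one-time DKG): Shamir-share the secret key s over Z_q.
s = secrets.randbelow(q - 1) + 1
coeffs = [s] + [secrets.randbelow(q) for _ in range(t_required - 1)]
shares = {i: sum(c * pow(i, j, q) for j, c in enumerate(coeffs)) % q
          for i in range(1, n_servers + 1)}
pk = pow(g, s, p)

# Encrypt a group element m (stand-in for an encrypted transaction).
m = 13
r = secrets.randbelow(q - 1) + 1
c1, c2 = pow(g, r, p), (m * pow(pk, r, p)) % p

# Each participating server publishes a single partial decryption c1^{s_i}.
subset = [1, 3]
partials = {i: pow(c1, shares[i], p) for i in subset}

def lagrange_at_zero(i, subset):
    num, den = 1, 1
    for j in subset:
        if j != i:
            num = (num * (-j)) % q
            den = (den * (i - j)) % q
    return (num * pow(den, -1, q)) % q

# Combine: prod_i d_i^{lambda_i} = c1^s, then unblind c2.
c1_s = 1
for i in subset:
    c1_s = (c1_s * pow(partials[i], lagrange_at_zero(i, subset), p)) % p
recovered = (c2 * pow(c1_s, -1, p)) % p
assert recovered == m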