Papers updated in the last 183 days (Page 11 of 1675 results)

Last updated:  2025-01-26
On symbolic computations over arbitrary commutative rings and cryptography with the temporal Jordan-Gauss graphs.
Vasyl Ustimenko
The paper is dedicated to Multivariate Cryptography over a general commutative ring K and to protocols of symbolic computation for the safe delivery of multivariate maps. We consider an iterative algorithm for generating multivariate maps of prescribed degree or density with a trapdoor accelerator, i.e., a piece of information that allows the preimage of the map to be computed in polynomial time. The concept of temporal Jordan-Gauss graphs is used for the obfuscation of known graph-based public keys and for the construction of new cryptosystems. We suggest using platforms of Noncommutative Cryptography, defined in terms of Multivariate Cryptography over K, for the conversion of Multivariate Public Keys into El Gamal-type cryptosystems. Some new platforms are introduced.
Last updated:  2025-01-26
DEEP Commitments and Their Applications
Alan Szepieniec
This note studies a method of committing to a polynomial in a way that allows executions of low degree tests such as FRI to be batched and even deferred. In particular, it achieves (unlimited-depth) aggregation for STARKs.
Last updated:  2025-01-26
Efficient 2PC for Constant Round Secure Equality Testing and Comparison
Tianpei Lu, Xin Kang, Bingsheng Zhang, Zhuo Ma, Xiaoyuan Zhang, Yang Liu, Kui Ren, and Chun Chen
Secure equality testing and comparison are two important primitives widely used in many secure computation scenarios, such as privacy-preserving machine learning, private set intersection, and secure data mining. This work proposes new constant-round two-party computation (2PC) protocols for secure equality testing and comparison. Our protocols are designed in the online/offline paradigm. For 32-bit inputs, the online communication costs of our equality testing protocol and secure comparison protocol are as low as 76 bits (1\% of ABY) and 384 bits (5\% of ABY), respectively. Our benchmarks show that (i) for 32-bit equality testing, our scheme performs $9\times$ faster than Guo \emph{et al.} (EUROCRYPT 2023) and $15\times$ faster than the garbled circuit (GC) with the half-gate optimization (CRYPTO 2015); (ii) for 32-bit secure comparison, our scheme performs $3\times$ faster than Guo \emph{et al.} (EUROCRYPT 2023) and $6\times$ faster than both Rathee \emph{et al.} (CCS 2020) and GC with the half-gate optimization.
Last updated:  2025-01-25
Tighter Security for Schnorr Identification and Signatures: A High-Moment Forking Lemma for $\Sigma$-Protocols
Lior Rotem and Gil Segev
The Schnorr identification and signature schemes have been amongst the most influential cryptographic protocols of the past three decades. Unfortunately, although the best-known attacks on these two schemes are via discrete-logarithm computation, the known approaches for basing their security on the hardness of the discrete logarithm problem encounter the ``square-root barrier''. In particular, in any group of order $p$ where Shoup's generic hardness result for the discrete logarithm problem is believed to hold (and is thus used for setting concrete security parameters), the best-known $t$-time attacks on the Schnorr identification and signature schemes have success probability $t^2/p$, whereas existing proofs of security only rule out attacks with success probabilities $(t^2/p)^{1/2}$ and $(q_{\mathcal{H}} \cdot t^2/p)^{1/2}$, respectively, where $q_{\mathcal{H}}$ denotes the number of random-oracle queries issued by the attacker. We establish tighter security guarantees for identification and signature schemes which result from $\Sigma$-protocols with special soundness based on the hardness of their underlying relation, and in particular for Schnorr's schemes based on the hardness of the discrete logarithm problem. We circumvent the square-root barrier by introducing a high-moment generalization of the classic forking lemma, relying on the assumption that the underlying relation is ``$d$-moment hard'': The success probability of any algorithm in the task of producing a witness for a random instance is dominated by the $d$-th moment of the algorithm's running time. In the concrete context of the discrete logarithm problem, Shoup's original proof already shows that the discrete logarithm problem is $2$-moment hard in the generic-group model, and thus our assumption can be viewed as a highly plausible strengthening of the discrete logarithm assumption in any group where no better-than-generic algorithms are currently known. Applying our high-moment forking lemma in this context shows that, assuming the $2$-moment hardness of the discrete logarithm problem, any $t$-time attacker breaks the security of the Schnorr identification and signature schemes with probabilities at most $(t^2/p)^{2/3}$ and $(q_{\mathcal{H}} \cdot t^2/p)^{2/3}$, respectively.
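To make the improvement concrete, here is a small back-of-the-envelope calculation (not from the paper; the group size and running time below are illustrative choices) comparing the best-known generic attack advantage $t^2/p$ against the prior square-root bound and the new two-thirds bound:

```python
import math

# Illustrative parameters (assumptions, not values from the paper):
# a 256-bit prime-order group and a 2^80-time attacker.
p = 2.0 ** 256   # group order
t = 2.0 ** 80    # attacker running time

adv_attack = t**2 / p            # best-known generic attack: t^2/p
adv_old = adv_attack ** (1 / 2)  # what prior reductions rule out
adv_new = adv_attack ** (2 / 3)  # high-moment forking lemma bound

print(f"generic attack advantage: 2^{math.log2(adv_attack):.0f}")  # 2^-96
print(f"prior provable bound:     2^{math.log2(adv_old):.0f}")     # 2^-48
print(f"new provable bound:       2^{math.log2(adv_new):.0f}")     # 2^-64
```

The closer a provable bound is to the generic attack advantage, the less the concrete group size must be padded to compensate for the loss in the reduction.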
Last updated:  2025-01-25
VECTIS: Efficient Batching Framework for Group-based CP-SNARKs
Byeongjun Jang, Gweonho Jeong, Hyuktae Kwon, Hyunok Oh, and Jihye Kim
Blockchain applications in finance and identity management increasingly require scalable and privacy-preserving solutions. Cryptographic commitments secure sensitive data on-chain, but verifying properties of these commitments efficiently remains challenging, particularly in large-scale scenarios. For multiple commitments, CP-SNARKs, a family of zk-SNARKs, enhance prover efficiency by shifting large-cost operations outside the circuit and verifying linkages between commitments, but incur verifier-side overhead due to linkage checks. Verification costs grow with the number of commitments, leading to inefficiencies in key size, proof size, and verification time. We propose $\textbf{VECTIS}$, an efficient batching framework for proving multiple commitments. Our approach aggregates multiple commitments into a single batched commitment, enabling the linking proof system to operate on the aggregated commitment instead of individual commitments, thereby significantly reducing the overall verification cost. Experimental results show meaningful efficiency gains. For $2^{16}$ commitments, $\textbf{VECTIS}$ reduces the verification time to $0.064$s, achieving over $30\times$ improvement compared to LegoSNARK’s $1.972$s. These results show $\textbf{VECTIS}$’s potential for enabling scalable and efficient privacy-preserving solutions in blockchain applications.
Last updated:  2025-01-25
One-shot Signatures and Applications to Hybrid Quantum/Classical Authentication
Ryan Amos, Marios Georgiou, Aggelos Kiayias, and Mark Zhandry
We define the notion of one-shot signatures, which are signatures where any secret key can be used to sign only a single message, and then self-destructs. While such signatures are of course impossible classically, we construct one-shot signatures using quantum no-cloning. In particular, we show that such signatures exist relative to a classical oracle, which we can then heuristically obfuscate using known indistinguishability obfuscation schemes. We show that one-shot signatures have numerous applications for hybrid quantum/classical cryptographic tasks, where all communication is required to be classical, but local quantum operations are allowed. Applications include one-time signature tokens, quantum money with classical communication, decentralized blockchain-less cryptocurrency, signature schemes with unclonable secret keys, non-interactive certifiable min-entropy, and more. We thus position one-shot signatures as a powerful new building block for novel quantum cryptographic protocols.
Last updated:  2025-01-24
Constrained Pseudorandom Functions for Inner-Product Predicates from Weaker Assumptions
Sacha Servan-Schreiber
In this paper, we provide a novel framework for constructing constrained pseudorandom functions (CPRFs) with inner-product constraint predicates, using ideas from subtractive secret sharing and related-key-attack security. Our framework can be instantiated using a random oracle or any suitable related-key-attack (RKA) secure pseudorandom function. This results in three new CPRF constructions: 1. an adaptively-secure construction in the random oracle model; 2. a selectively-secure construction under the DDH assumption; and 3. a selectively-secure construction with a polynomial domain under the assumption that one-way functions exist. All three instantiations are constraint-hiding and support inner-product predicates, leading to the first constructions of such expressive CPRFs under each corresponding assumption. Moreover, while the OWF-based construction is primarily of theoretical interest, the random oracle and DDH-based constructions are concretely efficient, which we show via an implementation.
Last updated:  2025-01-24
QuietOT: Lightweight Oblivious Transfer with a Public-Key Setup
Geoffroy Couteau, Lalita Devadas, Srinivas Devadas, Alexander Koch, and Sacha Servan-Schreiber
Oblivious Transfer (OT) is at the heart of secure computation and is a foundation for many applications in cryptography. Over two decades of work have led to extremely efficient protocols for evaluating OT instances in the preprocessing model, through a paradigm called OT extension. A few OT instances generated in an offline phase can be used to perform many OTs in an online phase efficiently, i.e., with very low communication and computational overheads. Specifically, traditional OT extension protocols use a small number of “base” OTs, generated using any black-box OT protocol, and convert them into many OT instances using only lightweight symmetric-key primitives. Recently, a new paradigm of OT with a *public-key setup* has emerged, which replaces the base OTs with a non-interactive setup: Using only the public key of the other party, two parties can efficiently compute a virtually unbounded number of OT instances on-the-fly. In this paper, we put forth a novel framework for OT extension with a public-key setup and concretely efficient instantiations. An implementation of our framework is 30-100 times faster when compared to the previous state-of-the-art public-key OT protocols, and remains competitive even when compared to OT protocols that *do not* offer a public-key setup. Additionally, our instantiations result in the first public-key schemes with plausible post-quantum security. In summary, this paper contributes: - QuietOT: A framework for OT extension with a public-key setup that uses fast, symmetric-key primitives to generate OT instances following a one-time public-key setup, and offering additional features such as precomputability. - A public-key setup for QuietOT from the RingLWE assumption, resulting in the first post-quantum construction of OT extension with a public-key setup. - An optimized, open-source implementation of our construction that can generate up to 1M OT extensions per second on commodity hardware. In contrast, the state-of-the-art public-key OT protocol is limited to approximately 20K OTs per second. - The first formal treatment of the security of OT with a public-key setup in a multi-party setting, which addresses several subtleties that were overlooked in prior work.
Last updated:  2025-01-24
Application of Mordell-Weil lattices with large kissing numbers to acceleration of multi-scalar multiplication on elliptic curves
Dmitrii Koshelev
This article aims to speed up (the precomputation stage of) multi-scalar multiplication (MSM) on ordinary elliptic curves of $j$-invariant $0$ with respect to specific ``independent'' (a.k.a. ``basis'') points. For this purpose, so-called Mordell--Weil lattices (up to rank $8$) with large kissing numbers (up to $240$) are employed. In a nutshell, the new approach consists in obtaining more efficiently a considerable number (up to $240$) of certain elementary linear combinations of the ``independent'' points. By scaling the point (re)generation process, it is thus possible to get a significant performance gain. As usual, the resulting curve points can be then regularly used in the main stage of an MSM algorithm to avoid repeating computations. Seemingly, this is the first usage of lattices with large kissing numbers in cryptography, while such lattices have already found numerous applications in other mathematical domains. Without exaggeration, MSM is a widespread primitive (often the unique bottleneck) in modern protocols of real-world elliptic curve cryptography. Moreover, the new (re)generation technique is prone to further improvements by considering Mordell--Weil lattices with even greater kissing numbers.
Last updated:  2025-01-24
A Horizontal Attack on the Codes and Restricted Objects Signature Scheme (CROSS)
Jonas Schupp and Georg Sigl
CROSS is a post-quantum secure digital signature scheme submitted to NIST’s Call for Additional Signatures which was recently selected for round 2. It features signature and key sizes in the range of SLH-DSA while providing a substantially faster signing operation. Within this work, we provide the first passive side-channel attack on the scheme. The attack recovers the secret key for all but one parameter set from a single power trace, requiring at most two power traces for the R-SDP(G) 1 Fast instance. To successfully mount the attack, we show how to recover the secret key from side-channel information gained from the syndrome computation in CROSS’ identification protocol. We furthermore show how the hypothesis space for the attack can be restricted using information from the published signature.
Last updated:  2025-01-24
A Compact and Parallel Swap-Based Shuffler Based on the Butterfly Network and its Complexity against Side-Channel Analysis
Jong-Yeon Park, Wonil Lee, Bo Gyeong Kang, Il-jong Song, Jaekeun Oh, and Kouichi Sakurai
A prominent countermeasure against side-channel attacks, the hiding countermeasure, typically involves shuffling operations using a permutation algorithm. Especially in the era of Post-Quantum Cryptography, the importance of the hiding countermeasure is emphasized due to computational characteristics like those of lattice- and code-based cryptography. In this context, swiftly and securely generating permutations has a critical impact on an algorithm’s security and efficiency. The Fisher-Yates shuffle is widely adopted because of its high security and ease of implementation, but its sequential nature limits it to $O(N)$ complexity. In response, we propose a time-area trade-off swap algorithm, FSS, based on the butterfly network, with only $\log(N)$ depth, $\log(N)$ work, and $O(1)$ operation time in parallel. We calculate the maximum gain that an attacker can achieve through butterfly operations with only $\log(N)$ depth from a side-channel analysis perspective. In particular, we show that it is possible to derive a generalized formula for the complexity of higher-order side-channel attacks for arbitrary input sizes through the fractal structure of the butterfly network. Furthermore, our research highlights the possibility of generating efficient and secure permutations using a minimal amount of randomness.
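For illustration, here is a minimal Python sketch of a one-pass butterfly-network shuffler, assuming the standard pairing of index $i$ with $i \oplus 2^k$ in layer $k$; it shows only the $\log(N)$-depth swap structure and is not the paper's FSS algorithm, whose batching and side-channel analysis are omitted. Note that a single butterfly pass cannot reach all $N!$ permutations, which is precisely why the paper bounds the attacker's maximum gain.

```python
import secrets

def butterfly_shuffle(arr):
    # One pass of a butterfly network on a power-of-two-length list:
    # log2(N) layers, each swapping disjoint pairs (i, i XOR stride)
    # with one fresh random bit per pair. The swaps within a layer are
    # independent, so each layer can run in O(1) parallel time.
    n = len(arr)
    assert n & (n - 1) == 0, "length must be a power of two"
    stride = 1
    while stride < n:
        for i in range(n):
            j = i ^ stride
            if i < j and secrets.randbits(1):
                arr[i], arr[j] = arr[j], arr[i]
        stride <<= 1
    return arr

print(butterfly_shuffle(list(range(8))))
```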
Last updated:  2025-01-24
NTRU+Sign: Compact NTRU-Based Signatures Using Bimodal Distributions
Joo Woo, Jonghyun Kim, Ga Hee Hong, Seungwoo Lee, Minkyu Kim, Hochang Lee, and Jong Hwan Park
We present a new lattice-based signature scheme, called ‘NTRU+Sign’, using the Fiat-Shamir with Aborts framework. The proposed scheme is designed based on a novel NTRU-based key structure that fits well with bimodal distributions, enabling efficiency improvements compared to its predecessor, BLISS. The novel NTRU-based key structure is characterized by: (1) effectively changing the modulus from 2q to q, in contrast to the existing use of 2q for bimodal distributions, and (2) drastically reducing the magnitude of the secret key, which directly leads to compact signature sizes. We provide two concrete parameter sets for NTRU+Sign, supporting 93-bit and 211-bit security levels. Using the technique from GALACTICS (which was proposed for the constant-time implementation of BLISS), our analysis shows that NTRU+Sign achieves a good balance between computational efficiency and signature compactness, with a constant-time implementation. For instance, at the NIST-3 security level, NTRU+Sign produces signatures that are significantly smaller than Dilithium and HAETAE, while providing faster verification speeds. These advantages position NTRU+Sign as a competitive and practical solution for real-world deployments.
Last updated:  2025-01-24
GCKSign: Simple and Efficient Signatures from Generalized Compact Knapsacks
Joo Woo, Kwangsu Lee, and Jong Hwan Park
In 2009, Lyubashevsky proposed a lattice-based signature scheme by applying the Fiat-Shamir transformation and proved its security under the generalized compact knapsack (GCK) problem. This scheme has a simple structure but large signature and key sizes, due to the requirements of its security reduction. Dilithium, which was submitted to the NIST Post-Quantum Cryptography standardization and selected as one of the final candidates, is an improvement of Lyubashevsky's signature scheme that decreases key and signature sizes by modifying the form of the public key and including additional steps in the key generation, signing, and verification algorithms. Thus, Dilithium has a more complex structure to implement compared to Lyubashevsky's scheme. To combine the strengths of both signature schemes, we modify Lyubashevsky's signature scheme and present a new security proof that removes this security requirement. As a result, we propose GCKSign, a simple and practical signature scheme based on the hardness of a new GCK assumption, called target-modified one-wayness of the GCK function. Compared to Dilithium at NIST security level III, the signature size of our scheme decreases by 40 percent, the sum of signature and public key sizes decreases by 25 percent, and the secret key size decreases by 90 percent. Furthermore, owing to the simplicity of our structure, the key generation, signing, and verification algorithms of our scheme run 2.4$\times$, 1.7$\times$, and 2.0$\times$ faster than those of Dilithium, respectively.
Last updated:  2025-01-24
Comprehensive Robustness Analysis of GCM, CCM, and OCB3
Akiko Inoue, Tetsu Iwata, and Kazuhiko Minematsu
Clarifying the robustness of authenticated encryption (AE) schemes, such as security under nonce misuse or Release of Unverified Plaintext (RUP), is critically important due to the extensive use of AEs in real-world applications. We present a comprehensive analysis of the robustness of well-known standards, namely GCM, CCM, and OCB3. Despite many existing studies, we uncovered several robustness properties for them that were not known in the literature. In particular, we show that both GCM and CCM maintain authenticity under RUP. Moreover, CCM keeps this feature even if a nonce is misused. Together with existing analysis, our work gives a complete picture of the robustness of these standards for the first time. Our results also imply several new robust AE schemes based on GCM and CCM.
Last updated:  2025-01-23
EQSIGN: Practical Digital Signatures from the Non-Abelian Hidden Subgroup Problem and Information Theoretic Equivocation
Samuel Lavery
We present a novel digital signature scheme grounded in non-commutative cryptography and implemented over a bilinear matrix group platform. At the core of our design is a unique equivocation function that obfuscates intermediate elements, effectively concealing outputs and minimizing observable information leakage. To the best of our knowledge, this is the first digital signature scheme to combine information-theoretic security with computational hardness, relying on a challenging instance of the Non-Abelian Hidden Subgroup Problem (NAHSP) and strengthened by practical guarantees. This dual-layered security approach ensures robustness against both classical and quantum adversaries while maintaining communication overheads competitive with RSA. Our work represents a significant advancement toward efficient, quantum-resilient digital signatures for real-world applications. This paper is an early pre-release intended to invite collaboration and feedback. The work is presented for research purposes only and is not intended for use in production systems.
Last updated:  2025-01-23
Discrete Gaussian sampling for BKZ-reduced bases
Amaury Pouly and Yixin Shen
Discrete Gaussian sampling on lattices is a fundamental problem in lattice-based cryptography. In this paper, we revisit the Markov chain Monte Carlo (MCMC)-based Metropolis-Hastings-Klein (MHK) algorithm proposed by Wang and Ling and study its complexity under the Geometric Series Assumption (GSA) when the given basis is BKZ-reduced. We give experimental evidence that the GSA is accurate in this context, and we give a very simple approximate formula for the complexity of the sampler that is accurate over a large range of parameters and easily computable. We apply our results to the dual attack on LWE of [Pouly and Shen 2024] and significantly improve the complexity estimates of the attack. Finally, we provide some results of independent interest on the Gaussian mass of random $q$-ary lattices.
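As background, the GSA states that the logarithms of the Gram-Schmidt norms of a reduced basis lie on a straight line. Below is a minimal sketch of the predicted profile, assuming the textbook slope $\alpha \approx \delta^{-2}$ for root-Hermite factor $\delta$ (all numbers are illustrative assumptions; the paper's refined complexity formula is not reproduced here):

```python
import math

def gsa_profile(n, log_vol, delta):
    # Predicted log2 Gram-Schmidt norms under the GSA: a straight line
    # whose slope is set by the root-Hermite factor delta and whose sum
    # equals the log2-volume of the lattice.
    log_alpha = -2.0 * math.log2(delta)             # slope, alpha ~ delta^-2
    log_b1 = log_vol / n - log_alpha * (n - 1) / 2  # forces sum == log_vol
    return [log_b1 + i * log_alpha for i in range(n)]

# Illustrative q-ary lattice: dimension 512, volume q^256 with q = 3329,
# and a strongly reduced basis with delta ~ 1.005 (assumed, not derived).
prof = gsa_profile(512, 256 * math.log2(3329), 1.005)
print(round(prof[0], 2), round(prof[-1], 2))  # first and last log2 norms
```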
Last updated:  2025-01-23
Better Codes for the HQC Cryptosystem
Cyrius Nugier and Jean-Christophe Deneuville
In the HQC cryptosystem, the length $n$ of the code determines several concrete parameters such as the bandwidth usage, the memory consumption, and the decoding efficiency. In this paper, we show that currently known methods to explicitly generate asymptotically good binary codes (especially ones with high relative distance) with efficient associated procedures cannot be used to improve $n$. We also show that concatenated codes are currently better suited and, by exhaustively searching over small codes, find a closer-to-optimal concatenated code for HQC, which improves upon currently used codes.
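For background on why concatenation is workable here, the parameters multiply (a textbook fact, not a construction from the paper): concatenating an outer $[N, K, D]$ code over $\mathbb{F}_{2^k}$ with an inner binary $[n, k, d]$ code gives a binary $[Nn, Kk, \geq Dd]$ code. A minimal sketch with illustrative sizes:

```python
def concatenated_params(outer, inner):
    # outer = (N, K, D) over F_{2^k}, inner = (n, k, d) binary, where the
    # inner dimension k matches the outer field's extension degree.
    # The distance entry of the result is only a lower bound.
    (N, K, D), (n, k, d) = outer, inner
    return (N * n, K * k, D * d)

# e.g. a Reed-Solomon [46, 16, 31] code over F_256 concatenated with a
# [16, 8, 5] binary inner code (illustrative sizes, not HQC's parameters):
print(concatenated_params((46, 16, 31), (16, 8, 5)))  # (736, 128, 155)
```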
Last updated:  2025-01-23
Verifiable Oblivious Pseudorandom Functions from Lattices: Practical-ish and Thresholdisable
Martin R. Albrecht and Kamil Doruk Gur
We revisit the lattice-based verifiable oblivious PRF construction from PKC'21 and remove or mitigate its three central sources of inefficiency. First, applying Rényi divergence arguments, we eliminate one superpolynomial factor from the ciphertext modulus \(q\), allowing us to reduce the overall bandwidth consumed by RLWE samples by about a factor of four. This necessitates introducing intermediate unpredictability notions to argue PRF security of the final output in the Random Oracle model. Second, we remove the reliance on the \(\mathsf{1D-SIS}\) assumption, which reduces another superpolynomial factor, albeit to a factor that is still superpolynomial. Third, by applying the state of the art in zero-knowledge proofs for lattice statements, we achieve a reduction in bandwidth of several orders of magnitude for this material. Finally, we give a \(t\)-out-of-\(n\) threshold variant of the VOPRF for constant \(t\) and with trusted setup, based on an \(n\)-out-of-\(n\) distributed variant of the VOPRF (and without trusted setup).
Last updated:  2025-01-23
How To Simulate It - A Tutorial on the Simulation Proof Technique
Yehuda Lindell
One of the most fundamental notions of cryptography is that of \emph{simulation}. It stands behind the concepts of semantic security, zero knowledge, and security for multiparty computation. However, writing a simulator and proving security via the use of simulation is a non-trivial task, and one that many newcomers to the field often find difficult. In this tutorial, we provide a guide to how to write simulators and prove security via the simulation paradigm. Although we have tried to make this tutorial as stand-alone as possible, we assume some familiarity with the notions of secure encryption, zero-knowledge, and secure computation.
Last updated:  2025-01-23
Masking Gaussian Elimination at Arbitrary Order, with Application to Multivariate- and Code-Based PQC
Quinten Norga, Suparna Kundu, Uttam Kumar Ojha, Anindya Ganguly, Angshuman Karmakar, and Ingrid Verbauwhede
Digital signature schemes based on multivariate- and code-based hard problems are promising alternatives to lattice-based signature schemes, due to their small signature sizes. Gaussian Elimination (GE) is a critical operation in the signing procedure of these schemes. In this paper, we provide a masking scheme for GE with back substitution to defend against first- and higher-order attacks. To the best of our knowledge, this work is the first to analyze and propose masking techniques for multivariate- or code-based digital signature algorithms. We propose a masked algorithm for transforming a system of linear equations into row-echelon form. This is realized by introducing techniques for efficiently making leading (pivot) elements one while avoiding costly conversions between Boolean and multiplicative masking at all orders. We also propose a technique for efficient masked back substitution, which eventually enables a secure unmasking of the public output. All novel gadgets are proven secure in the $t$-probing model. Additionally, we evaluate the overhead of our countermeasure for several post-quantum candidates and their different security levels at first-, second-, and third-order, including UOV, MAYO, SNOVA, QR-UOV, and MQ-Sign. Notably, the operational cost of first-, second-, and third-order masked GE is 2.3$\times$ higher, and the randomness cost is 1.2$\times$ higher, in MAYO compared to UOV for security levels III and V. In contrast, these costs are similar in UOV and MAYO for one version of level I. We also show detailed performance results for masked GE implementations for all three security versions of UOV on the Arm Cortex-M4 and compare them with unmasked results. Our masked implementation targeting UOV parameters has an overhead of factor 15.1$\times$, 15.2$\times$, and 15.4$\times$ compared to the unprotected implementation for NIST security levels I, III, and V.
Last updated:  2025-01-23
Onion Franking: Abuse Reports for Mix-Based Private Messaging
Matthew Gregoire, Margaret Pierce, and Saba Eskandarian
The fast-paced development and deployment of private messaging applications demands mechanisms to protect against the concomitant potential for abuse. While widely used end-to-end encrypted (E2EE) messaging systems have deployed mechanisms for users to verifiably report abusive messages without compromising the privacy of unreported messages, abuse reporting schemes for systems that additionally protect message metadata are still in their infancy. Existing solutions either focus on a relatively small portion of the design space or incur much higher communication and computation costs than their E2EE brethren. This paper introduces new abuse reporting mechanisms that work for any private messaging system based on onion encryption. This includes low-latency systems that employ heuristic or opportunistic mixing of user traffic, as well as schemes based on mixnets. Along the way, we show that design decisions and abstractions that are well-suited to the E2EE setting may actually impede security and performance improvements in the metadata-hiding setting. We also explore stronger threat models for abuse reporting and moderation not explored in prior work, showing where prior work falls short and how to strengthen both our scheme and others' -- including deployed E2EE messaging platforms -- to achieve higher levels of security. We implement a prototype of our scheme and find that it outperforms the best known solutions in this setting by well over an order of magnitude for each step of the message delivery and reporting process, with overheads almost matching those of message franking techniques used by E2EE encrypted messaging apps today.
Last updated:  2025-01-23
Rudraksh: A compact and lightweight post-quantum key-encapsulation mechanism
Suparna Kundu, Archisman Ghosh, Angshuman Karmakar, Shreyas Sen, and Ingrid Verbauwhede
Resource-constrained devices such as wireless sensors and Internet of Things (IoT) devices have become ubiquitous in our digital ecosystem. These devices generate and handle a major part of our digital data. However, due to the impending threat of quantum computers on our existing public-key cryptographic schemes and the limited resources available on IoT devices, it is important to design lightweight post-quantum cryptographic (PQC) schemes suitable for these devices. In this work, we explored the design space of learning with error-based PQC schemes to design a lightweight key-encapsulation mechanism (KEM) suitable for resource-constrained devices. We have done a scrupulous and extensive analysis and evaluation of different design elements, such as polynomial size, field modulus structure, reduction algorithm, and secret and error distribution of an LWE-based KEM. Our explorations led to the proposal of a lightweight PQC-KEM, Rudraksh, without compromising security. Our scheme provides security against chosen ciphertext attacks (CCA) with more than 100 bits of Core-SVP post-quantum security and belongs to the NIST-level-I security category (provide security at least as much as AES-128). We have also shown how ASCON can be used for lightweight pseudo-random number generation and hash function in the lattice-based KEMs instead of the widely used Keccak for lightweight design. Our FPGA results show that Rudraksh currently requires the least area among the PQC KEMs of similar security. Our implementation of Rudraksh provides a $\sim3\times$ improvement in terms of the area requirement compared to the state-of-the-art area-optimized implementation of Kyber, can operate at $63\%$-$76\%$ higher frequency with respect to high-throughput Kyber, and improves time-area-product $\sim2\times$ compared to the state-of-the-art compact implementation of Kyber published in HPEC 2022.
Last updated:  2025-01-23
Public-Key Quantum Money From Standard Assumptions (In The Generic Model)
Jake Doliskani
Our main result is a quantum polynomial-time reduction from the group action discrete logarithm problem (DLP) to a specific cloning problem. A consequence of this result is that the public-key quantum money scheme proposed by Zhandry (2024), based on abelian group actions, is secure in the generic group action model. Specifically, our result shows that breaking the quantum money scheme is equivalent, under quantum polynomial-time reductions, to solving the group action DLP. Two immediate implications of our results are: i) a separation between quantum money and quantum lightning. This separation arises because our reduction is non-uniform, and quantum lightning is not secure against non-uniform adversaries. ii) Cloning vs. preparing Fourier states. Our main theorem shows that the problem of cloning group action Fourier states is equivalent to the problem of preparing these states.
Last updated:  2025-01-23
Post-Quantum Stealth Address Protocols
Marija Mikić, Mihajlo Srbakoski, and Strahinja Praška
The Stealth Address Protocol (SAP) allows users to receive assets through stealth addresses that are unlinkable to their stealth meta-addresses. The most widely used SAP, Dual-Key SAP (DKSAP), and the most performant SAP, Elliptic Curve Pairing Dual-Key SAP (ECPDKSAP), are based on elliptic curve cryptography, which is vulnerable to quantum attacks. These protocols depend on the elliptic curve discrete logarithm problem, which could be efficiently solved on a sufficiently powerful quantum computer using Shor's algorithm. In this paper, three novel post-quantum SAPs based on lattice-based cryptography are presented: LWE SAP, Ring-LWE SAP and Module-LWE SAP. These protocols leverage the Learning With Errors (LWE) problem to ensure quantum-resistant privacy. Among them, Module-LWE SAP, which is based on the Kyber key encapsulation mechanism, achieves the best performance and outperforms ECPDKSAP by approximately 66.8% in the scan time of the ephemeral public key registry.
Last updated:  2025-01-23
Adversary Resilient Learned Bloom Filters
Allison Bishop and Hayder Tirmazi
The Learned Bloom Filter is a recently proposed data structure that combines the Bloom Filter with a learning model while preserving the Bloom Filter's one-sided error guarantees. Creating an adversary-resilient construction of the Learned Bloom Filter with provable guarantees is an open problem. We define a strong adversarial model for the Learned Bloom Filter. Our adversarial model extends an existing adversarial model from prior work, designed for the Classical (i.e. not ``Learned'') Bloom Filter, and considers computationally bounded adversaries that run in probabilistic polynomial time (PPT). Using our model, we construct an adversary-resilient variant of the Learned Bloom Filter called the Downtown Bodega Filter. We show that if pseudo-random permutations exist, then an adversary-resilient Learned Bloom Filter can be constructed with $2\lambda$ extra bits of memory and at most one extra pseudo-random permutation in the critical path. We construct a hybrid adversarial model for the case where a fraction of the query workload is chosen by an adversary. We show realistic scenarios where using the Downtown Bodega Filter gives better performance guarantees compared to alternative approaches in this hybrid model.
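For context, here is a minimal sketch of the baseline (unhardened) learned Bloom filter that such constructions start from, following the standard model-plus-backup design; the hash construction, model, threshold, and sizes are illustrative assumptions, and the Downtown Bodega Filter itself is not reproduced here.

```python
import hashlib

class BloomFilter:
    def __init__(self, m, k):
        self.m, self.k, self.bits = m, k, bytearray(m)
    def _idx(self, x, i):
        h = hashlib.sha256(f"{i}|{x}".encode()).digest()
        return int.from_bytes(h[:8], "big") % self.m
    def add(self, x):
        for i in range(self.k):
            self.bits[self._idx(x, i)] = 1
    def query(self, x):
        return all(self.bits[self._idx(x, i)] for i in range(self.k))

class LearnedBloomFilter:
    # A learned score decides membership directly; a backup Bloom filter
    # over the model's false negatives restores the one-sided error
    # guarantee (no false negatives, only false positives).
    def __init__(self, keys, score, tau, m, k):
        self.score, self.tau = score, tau
        self.backup = BloomFilter(m, k)
        for x in keys:
            if score(x) < tau:   # the model misses x: store it in backup
                self.backup.add(x)
    def query(self, x):
        return self.score(x) >= self.tau or self.backup.query(x)

# Toy example: the "model" only recognizes multiples of 4, so even keys
# that are 2 mod 4 land in the backup filter.
lbf = LearnedBloomFilter(keys=range(0, 100, 2),
                         score=lambda x: 0.9 if x % 4 == 0 else 0.1,
                         tau=0.5, m=1024, k=3)
print(lbf.query(42), lbf.query(7))  # True, False (no false negatives)
```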
Last updated:  2025-01-23
On the structure of the Schur squares of Twisted Generalized Reed-Solomon codes and application to cryptanalysis
Alain Couvreur, Rakhi Pratihar, Nihan Tanisali, and Ilaria Zappatore
Twisted generalized Reed-Solomon (TGRS) codes constitute an interesting family of evaluation codes, containing a large class of maximum distance separable (MDS) codes non-equivalent to generalized Reed-Solomon (GRS) ones. Moreover, the Schur squares of TGRS codes may be much larger than those of GRS codes of the same dimension. Exploiting these structural differences, in 2018, Beelen, Bossert, Puchinger and Rosenkilde proposed a subfamily of MDS twisted Reed-Solomon (TRS) codes over $\mathbb{F}_q$ with $\ell$ twists and $q \approx n^{2^{\ell}}$ for McEliece encryption, claiming their resistance to both the Sidelnikov-Shestakov attack and Schur-product-based attacks. In short, they claimed these codes to resist the classical key-recovery attacks on the McEliece encryption scheme instantiated with Reed-Solomon (RS) or GRS codes. In 2020, Lavauzelle and Renner presented an original attack on this system based on the computation of the subfield subcode of the public TRS code. In this paper, we show that the original claim on the resistance of TRS and TGRS codes to Schur-product-based attacks is wrong. We identify a broad class of codes, including TRS and TGRS ones, that is distinguishable from random by computing the Schur square of some shortening of the code. Then, we focus on the case of a single twist ({i.e.}, $\ell = 1$), which is the most efficient one in terms of decryption complexity, to derive an attack. The technique is similar to the distinguisher-based attacks on RS-code-based systems given by Couvreur, Gaborit, Gauthier-Umaña, Otmani and Tillich in 2014.
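To illustrate the kind of distinguisher at play, here is a toy computation of the Schur square dimension in its simplest form, for a plain Reed-Solomon code versus a random code (not for twisted codes or their shortenings, which is the paper's contribution); the field size and code parameters are illustrative.

```python
import random

P = 251  # a small prime field F_P (illustrative)

def rank_mod_p(rows, p=P):
    # Gauss-Jordan elimination over F_p.
    rows, rank = [r[:] for r in rows], 0
    for col in range(len(rows[0])):
        piv = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], -1, p)
        rows[rank] = [v * inv % p for v in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % p for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank

def schur_square_dim(gen):
    # dim(C*C): span of all coordinate-wise products of basis codewords.
    prods = [[a * b % P for a, b in zip(u, v)]
             for i, u in enumerate(gen) for v in gen[i:]]
    return rank_mod_p(prods)

n, k = 60, 8
alphas = random.sample(range(P), n)
rs = [[pow(a, j, P) for a in alphas] for j in range(k)]            # RS code
rnd = [[random.randrange(P) for _ in range(n)] for _ in range(k)]  # random
print(schur_square_dim(rs))   # 2k - 1 = 15 for Reed-Solomon
print(schur_square_dim(rnd))  # min(n, k(k+1)/2) = 36 w.h.p. for random
```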
Last updated:  2025-01-23
Verification-efficient Homomorphic Signatures for Verifiable Computation over Data Streams
Gaspard Anthoine, Daniele Cozzo, and Dario Fiore
Homomorphic signatures for NP (HSNP) allow proving that a signed value is the result of a non-deterministic computation on signed inputs. At CCS'22, Fiore and Tucker introduced HSNP, showed how to use them for verifying arbitrary computations on data streams, and proposed a generic HSNP construction obtained by efficiently combining zkSNARKs with linearly homomorphic signatures (LHS), namely those supporting linear functions. Their proposed LHS, however, suffered from a high verification cost. In this work, we propose an efficient LHS that significantly improves on previous work in terms of verification time. Using the modular approach of Fiore and Tucker, this yields a verifier-efficient HSNP. We show that the HSNP instantiated with our LHS is particularly suited to the case when the data is taken from consecutive samples, which captures important use cases including sliding window statistics such as variances, histograms and stock market predictions.
Last updated:  2025-01-23
A Formal Treatment of Homomorphic Encryption Based Outsourced Computation in the Universal Composability Framework
Wasilij Beskorovajnov, Sarai Eilebrecht, Yufan Jiang, and Jörn Mueller-Quade
The adoption of Homomorphic Encryption (HE) and Secure Function Evaluation (SFE) applications in the real world remains limited, even nearly 50 years after the introduction of HE. This is particularly unfortunate given the strong privacy and confidentiality guarantees these tools can offer to modern digital life. While attempting to incorporate a simple straw-man PSI protocol into a web service for matching individuals based on their profiles, we encountered several shortcomings in current outsourcing frameworks. Existing outsourced protocols either require clients to perform tasks beyond merely contributing their inputs or rely on a non-collusion assumption between a server and a client, which appears implausible in standard web service scenarios. To address these issues, we present, to the best of our knowledge, the first general construction for non-interactive outsourced computation based on black-box homomorphic encryption. This approach relies on a non-collusion assumption between two dedicated servers, which we consider more realistic in a web-service setting. Furthermore, we provide a proof of our construction within the Universal Composability (UC) framework, assuming semi-honest (i.e., passive) adversaries. Unlike general one-sided two-party SFE protocols, our construction additionally requires sender privacy. Specifically, the sender must contribute its inputs solely in encrypted form. This ensures stronger privacy guarantees and broadens the applicability of the protocol. Overall, the range of applications for our construction includes all one-sided two-party sender-private SFE protocols as well as server-based arithmetic computations on encrypted inputs. Finally, we demonstrate the practical applicability of our general outsourced computation framework by applying it to the specific use case of Outsourced Private Set Intersection (OPSI) in a real-world scenario, accompanied by a detailed evaluation of its efficiency.
Last updated:  2025-01-23
Subset sum, a new insight
Samir Bouftass
In this paper, we show that the subset sum problem consists in finding a solution over $\mathbb{N}_2$ of the equation $n = \mathbf{A}X \bullet U$, where $\mathbf{A}$ and $n$ are a given matrix and integer, respectively, and $U = [\,2^0 \; 2^1 \; \cdots \; 2^{d-2} \; 2^{d-1}\,]$. We show that it can be subdivided into two solvable subproblems.
Last updated:  2025-01-23
dCTIDH: Fast & Deterministic CTIDH
Fabio Campos, Andreas Hellenbrand, Michael Meyer, and Krijn Reijnders
This paper presents dCTIDH, a CSIDH implementation that combines two recent developments into a novel state-of-the-art deterministic implementation. We combine the approach of deterministic variants of CSIDH with the batching strategy of CTIDH, which shows that the full potential of this key space has not yet been explored. This high-level adjustment in itself leads to a significant speed-up. To achieve an effective deterministic evaluation in constant time, we introduce Wombats, a new approach to performing isogenies in batches, specifically tailored to the behavior required for deterministic CSIDH using CTIDH batching. Furthermore, we explore the two-dimensional space of optimal primes for dCTIDH, with regard to both the performance of dCTIDH in terms of finite-field operations per prime and the efficiency of finite-field operations, determined by the prime shape, in terms of cycles. This allows us to optimize both for choice of prime and scheme parameters simultaneously. Lastly, we implement and benchmark constant-time, deterministic dCTIDH. Our results show that dCTIDH not only outperforms state-of-the-art deterministic CSIDH, but even non-deterministic CTIDH: dCTIDH-2048 is faster than CTIDH-2048 by 17 percent, and is almost five times faster than dCSIDH-2048.
Last updated:  2025-01-22
Additive Randomized Encodings from Public Key Encryption
Nir Bitansky, Saroja Erabelli, and Rachit Garg
Introduced by Halevi, Ishai, Kushilevitz, and Rabin (CRYPTO 2023), additive randomized encodings (ARE) reduce the computation of a $k$-party function $f(x_1,\dots,x_k)$ to locally computing encodings $\hat x_i$ of each input $x_i$ and then adding them together over some Abelian group into an output encoding $\hat y = \sum \hat x_i$, which reveals nothing but the result. The appeal of ARE comes from the simplicity of the non-local computation, which involves only addition. This gives rise, for instance, to non-interactive secure function evaluation in the shuffle model, where messages from different parties are anonymously shuffled before reaching their destination. Halevi, Ishai, Kushilevitz, and Rabin constructed ARE based on Diffie-Hellman type assumptions in bilinear groups. We construct ARE assuming public-key encryption. The key insight behind our construction is that one-sided ARE, which only guarantee privacy for one of the parties, are relatively easy to construct, and yet can be lifted to full-fledged ARE. We also give a more efficient black-box construction from the CDH assumption.
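To pin down the syntax only, here is a degenerate sketch of the ARE interface for the special case $f = $ sum, where the identity encoding is already private because the sum is the output itself; realizing this interface for general $f$ is precisely the paper's cryptographic contribution and is not reproduced here. The group and inputs are illustrative.

```python
# ARE interface: Enc_i is local to party i, the encodings are added over
# an Abelian group (here: integers mod a prime, an assumed choice), and
# Dec recovers f(x_1, ..., x_k) from the sum alone.
M = 2**61 - 1  # group order (illustrative)

def encode(x_i):        # Enc_i: uses only party i's own input
    return x_i % M

def decode(y_hat):      # Dec: for f = sum, the output encoding is the output
    return y_hat

inputs = [3, 14, 15]
y_hat = sum(encode(x) for x in inputs) % M
print(decode(y_hat))    # 32 == f(inputs); nothing but the sum is revealed
```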
Last updated:  2025-01-22
Technology-Dependent Synthesis and Optimization of Circuits for Small S-boxes
Zihao Wei, Siwei Sun, Fengmei Liu, Lei Hu, and Zhiyu Zhang
Boolean formula minimization is a notoriously hard problem that is known to be $\varSigma_2^P$-complete. Circuit minimization, typically studied in the context of a much broader subject known as synthesis and optimization of circuits, introduces another layer of complexity, since ultimately those technology-independent representations (e.g., Boolean formulas and truth tables) have to be transformed into a netlist of cells of the target technology library. To manage this complexity, the industrial community typically separates the synthesis process into two steps: technology-independent optimization and technology mapping. In each step, this approach only tries to find a locally optimal solution and relies heavily on heuristics rather than a systematic search. However, for small S-boxes, a more systematic exploration of the design space is possible. Aiming at the global optimum, we propose a method which can synthesize a truth table for a small S-box directly into a netlist of the cells of a given technology library. Compared with existing technology-dependent synthesis tools like LIGHTER and PEIGEN, our method produces improved results for many S-boxes with respect to circuit area. In particular, by applying our method to the $\mathbb{F}_{2^4}$-inverter involved in the tower field implementation of the AES S-box, we obtain the currently known lightest implementation of the AES S-box. The search framework can be tweaked to take circuit delay into account. As a result, we find implementations for certain S-boxes with both latency and area improved.
Last updated:  2025-01-22
A practical distinguisher on the full Skyscraper permutation
Antoine Bak
Skyscraper is a cryptographic permutation published in TCHES 2025, optimized for use in proof systems such as PlonK. This primitive is based on a 10-round Feistel network combining $x^2$ monomials and lookup-based functions to achieve competitive plain performance and efficiency in proof systems supporting lookups. In terms of security, the $x^2$ monomials are supposed to provide security against statistical attacks, while the lookups are supposed to provide security against algebraic attacks. In this note, we show that this primitive has a much lower security margin than expected. Using a rebound attack, we find practical truncated differentials on the full permutation. As a corollary, we also find a practical collision attack on the compression function based on a 9-round Skyscraper permutation, which significantly reduces the security margin of the primitive. All of these attacks have been implemented and work in practice.
Last updated:  2025-01-22
Towards Optimally Small Smoothness Bounds for Cryptographic-Sized Twin Smooth Integers and their Isogeny-based Applications
Bruno Sterner
We give a new approach for finding large smooth twins. Twins whose sum is a prime are of interest in the parameter setup of certain isogeny-based cryptosystems such as SQIsign. The approach is to find two polynomials in $\mathbb{Q}[x]$ that split into a product of small-degree factors and differ by $1$, and then evaluate them at a particular smooth integer. This was first explored by Costello, Meyer and Naehrig at EUROCRYPT'21 using polynomials that split completely into linear factors, which were found using Diophantine number theory. The polynomials used in this work split into mostly linear factors, with the exception of a few quadratic factors. Some of these linear factors are repeated, so the overall smoothness probability is either better than or comparable to that of the prior polynomials. We use these polynomials to search for large smooth twins whose sum is prime. In particular, the smoothness bounds of the $384$- and $512$-bit twins that we find are significantly smaller than those found at EUROCRYPT'21.
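The simplest instance of this polynomial approach, a degenerate case that predates the paper (it is not the paper's new polynomials): $p(x) = x^2$ and $p(x) - 1 = (x-1)(x+1)$ differ by $1$ and split into small-degree factors, so whenever $x-1$, $x$, and $x+1$ are all $B$-smooth, $(x^2-1,\, x^2)$ is a pair of $B$-smooth twins. A naive search sketch:

```python
def is_smooth(n, B):
    # Trial division by every integer up to B (composite divisors are
    # harmless: their prime factors get removed first).
    d = 2
    while d <= B and n > 1:
        while n % d == 0:
            n //= d
        d += 1
    return n == 1

B = 50
hits = [x for x in range(2, 100_000)
        if all(is_smooth(v, B) for v in (x - 1, x, x + 1))]
x = hits[-1]             # largest hit in range
print(x * x - 1, x * x)  # a pair of 50-smooth twin integers
```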
Last updated:  2025-01-22
Beyond Optimal Fault-Tolerance
Andrew Lewis-Pye and Tim Roughgarden
One of the most basic properties of a consensus protocol is its fault-tolerance--the maximum fraction of faulty participants that the protocol can tolerate without losing fundamental guarantees such as safety and liveness. Because of its importance, the optimal fault-tolerance achievable by any protocol has been characterized in a wide range of settings. For example, for state machine replication (SMR) protocols operating in the partially synchronous setting, it is possible to simultaneously guarantee consistency against $\alpha$-bounded adversaries (i.e., adversaries that control less than an $\alpha$ fraction of the participants) and liveness against $\beta$-bounded adversaries if and only if $\alpha + 2\beta \leq 1$. This paper characterizes to what extent ``better-than-optimal'' fault-tolerance guarantees are possible for SMR protocols when the standard consistency requirement is relaxed to allow a bounded number $r$ of consistency violations, each potentially leading to the rollback of recently finalized transactions. We prove that bounding rollback is impossible without additional timing assumptions and investigate protocols that tolerate and recover from consistency violations whenever message delays around the time of an attack are bounded by a parameter $\Delta^*$ (which may be arbitrarily larger than the parameter $\Delta$ that bounds post-GST message delays in the partially synchronous model). Here, a protocol's fault-tolerance can be a non-constant function of $r$, and we prove, for each $r$, matching upper and lower bounds on the optimal ``recoverable fault-tolerance'' achievable by any SMR protocol. For example, for protocols that guarantee liveness against 1/3-bounded adversaries in the partially synchronous setting, a 5/9-bounded adversary can always cause one consistency violation but not two, and a 2/3-bounded adversary can always cause two consistency violations but not three. Our positive results are achieved through a generic ``recovery procedure'' that can be grafted on to any accountable SMR protocol and restores consistency following a violation while rolling back only transactions that were finalized in the previous $2\Delta^*$ timesteps.
Last updated:  2025-01-22
Shifting our knowledge of MQ-Sign security
Lars Ran and Monika Trimoska
Unbalanced Oil and Vinegar (UOV) is one of the oldest, simplest, and most studied ad-hoc multivariate signature schemes. UOV signature schemes are attractive because they have very small signatures and fast verification. On the downside, they have large public and secret keys. As a result, variations of the traditional UOV scheme are usually developed with the goal of reducing the key sizes. Seven variants of UOV were submitted to the additional call for digital signatures by NIST; prior to that, a variant named MQ-Sign was submitted to the (South) Korean post-quantum cryptography competition (KpqC). MQ-Sign is currently competing in the second round of KpqC with two variants. One of the variants corresponds to the classic description of UOV with certain implementation and parameter choices. In the other variant, called MQ-Sign-LR, a part of the central map is constructed from row shifts of a single matrix. This design makes for smaller secret keys, and in the case where the equivalent-keys optimization is used, it also leads to smaller public keys. However, we show in this work that the polynomial systems arising from an algebraic attack have a specific structure that can be exploited. Specifically, we are able to find preimages for $d$-periodic targets under the public map with a probability of $63\%$ for all security levels. The complexity of finding these preimages, as well as the fraction of $d$-periodic targets, increases with $d$ and hence provides a trade-off. We show that for all security levels one can choose $d=\frac{v}{2}$, for $v$ the number of vinegar variables, and reduce the security claim. Our experiments show practical running times for lower $d$, ranging from 6 seconds to 14 hours.
Last updated:  2025-01-22
Unveiling Privacy Risks in Quantum Optimization Services
Mateusz Leśniak, Michał Wroński, Ewa Syta, and Mirosław Kutyłowski
As cloud-based quantum computing services, such as those offered by D-Wave, become more popular for practical applications, privacy-preserving methods (such as obfuscation) are essential to address data security, privacy, and legal compliance concerns. Several efficient obfuscation methods for quantum optimization problems have been proposed that do not increase the time complexity of solving the obfuscated problem. These include {\em sign reversing}, {\em variable permutation}, and the combination of both methods, which is assumed to provide greater protection. Unfortunately, sign reversing has already been shown to be insecure. We present two attacks, on variable permutation and on the combined method, in which the deobfuscated problem can be recovered efficiently, particularly when given access to the obfuscated problem and its obfuscated solution, as a cloud-based quantum provider would have. Our attacks are set in the context of an optimization problem for cryptanalysis of the Trivium cipher family, but our approach generalizes to other similarly structured problems. Our attacks are efficient and practical. Deobfuscating an optimization problem with \( n \) variables obfuscated with the combined method has a complexity of $O(n^2)$, compared to the $O\left( n \cdot n! \cdot 2^n \right)$ complexity of a brute-force attack. We provide an implementation of our attack; using a commodity laptop, our optimized attack on the full Trivium cipher takes less than two minutes. We also present possible countermeasures to mitigate our attacks and bring attention to the need for further development in this area.
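For concreteness, here is a minimal sketch of the variable-permutation obfuscation that is being attacked (the attack itself is not reproduced); the QUBO objective value is preserved when problem and solution are permuted consistently, which is exactly the correlation a provider holding both can exploit.

```python
import random

def permute_qubo(Q, pi):
    # Variable i of the obfuscated problem is variable pi[i] of the
    # original, i.e. Q'[i][j] = Q[pi[i]][pi[j]].
    n = len(Q)
    return [[Q[pi[i]][pi[j]] for j in range(n)] for i in range(n)]

def value(Q, x):
    n = len(Q)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

n = 5
Q = [[random.randint(-3, 3) for _ in range(n)] for _ in range(n)]
x = [random.randint(0, 1) for _ in range(n)]   # a candidate solution
pi = random.sample(range(n), n)                # secret permutation
Qp, xp = permute_qubo(Q, pi), [x[pi[i]] for i in range(n)]
assert value(Q, x) == value(Qp, xp)            # objective is invariant
```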
Last updated:  2025-01-22
Meet-in-the-Middle Attack on Primitives with Binary Matrix Linear Layer
Qingliang Hou, Kuntong Li, Guoyan Zhang, Yanzhao Shen, Qidi You, and Xiaoyang Dong
Meet-in-the-middle (MitM) is a powerful approach for the cryptanalysis of symmetric primitives. In recent years, MitM has led to many improved records for key-recovery, preimage, and collision attacks with the help of automated tools. However, most of the previous work targets $\texttt{AES}$-like hashing, where the linear layer is an MDS matrix, and we observe that the corresponding automatic model for MDS matrices is not suitable for primitives using a binary matrix as their linear layer. In this paper, we propose the $\texttt{n-XOR}$ model to describe the $\texttt{XOR}$ operation with an arbitrary number of inputs, which can be applied to primitives with a binary matrix of arbitrary size. We then propose a check model to eliminate the possible inaccuracies caused by $\texttt{n-XOR}$; the check model is, however, limited by the input size (not greater than 4). Combining the two new models, we find a MitM key-recovery attack on 11-round $\texttt{Midori64}$. When the whitening keys are excluded, a MitM key-recovery attack can be mounted on 12-round $\texttt{Midori64}$. Compared with the previous best work, both of the above results have distinct advantages in terms of reduced memory and data complexity. Finally, we apply the $\texttt{n-XOR}$ model to the hashing modes of primitives with large binary matrices. The preimage attacks on weakened $\texttt{camellia}-{\tt MMO}$ (without $FL/FL^{-1}$ and whitening layers) and $\texttt{Aria}-{\tt DM}$ are both improved by one round.
Last updated:  2025-01-22
On Gaussian Sampling for $q$-ary Lattices and Linear Codes with Lee Weight
Maiara F. Bollauf, Maja Lie, and Cong Ling
We show that discrete Gaussian sampling for a $q$-ary lattice is equivalent to codeword sampling for a linear code over $\mathbb{Z}_q$ with the Lee weight. This insight allows us to derive the theta series of a $q$-ary lattice from the Lee weight distribution of the associated code. We design a novel Gaussian sampler for $q$-ary lattices assuming an oracle that computes the symmetrized weight enumerator of the associated code. We apply this sampler to well-known lattices, such as the $E_8$, Barnes-Wall, and Leech lattice, highlighting both its advantages and limitations, which depend on the underlying code properties. For certain root lattices, we show that the sampler is indeed efficient, forgoing the need to assume an oracle. We also discuss applications of our results in digital signature schemes and the Lattice Isomorphism Problem. In many cases, our sampler achieves a significant speed-up compared to state-of-the-art sampling algorithms in cryptographic applications.
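To make the code-lattice dictionary concrete, here is a brute-force sketch computing the Lee weight distribution of a toy $\mathbb{Z}_q$-linear code (the paper instead assumes an oracle for the symmetrized weight enumerator; the generator matrix below is an illustrative choice).

```python
from itertools import product

def lee_weight(v, q):
    # Each coordinate a in Z_q contributes min(a, q - a).
    return sum(min(a % q, (q - a) % q) for a in v)

def lee_weight_distribution(gen, q):
    # Enumerate all q^k codewords of the code spanned by `gen` over Z_q.
    # Brute force, so only for toy parameters.
    k, n = len(gen), len(gen[0])
    dist = {}
    for msg in product(range(q), repeat=k):
        cw = [sum(m * g[j] for m, g in zip(msg, gen)) % q for j in range(n)]
        w = lee_weight(cw, q)
        dist[w] = dist.get(w, 0) + 1
    return dict(sorted(dist.items()))

# Toy [4, 2] code over Z_5 (illustrative generator matrix):
print(lee_weight_distribution([[1, 0, 2, 3], [0, 1, 4, 1]], 5))
```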
Last updated:  2025-01-22
Zero-Knowledge Proofs of Quantumness
Duong Hieu Phan, Weiqiang Wen, Xingyu Yan, and Jinwei Zheng
With the rapid development of quantum computers, proofs of quantumness have recently become an interesting and intriguing research direction. However, in all current schemes for proofs of quantumness, quantum provers almost invariably face the risk of being maliciously exploited by classical verifiers. In fact, through malicious strategies in interaction with quantum provers, classical verifiers could solve some instances of hard problems that arise from the specific scheme in use. In other words, malicious verifiers can break some schemes (that quantum provers are not aware of) through interaction with quantum provers. All this is due to a lack of formalization that would prevent malicious verifiers from extracting useful information in proofs of quantumness. To address this issue, we formalize zero-knowledge proofs of quantumness. Intuitively, the zero-knowledge property necessitates that the information gained by the classical verifier from interactions with the quantum prover should not surpass what can be simulated using a simulated classical prover interacting with the same verifier. As a result, the new zero-knowledge notion can prevent any malicious verifier from exploiting quantum advantage. Interestingly, we find that classical zero-knowledge proofs are sufficient to compile some existing proofs of quantumness schemes into zero-knowledge proofs of quantumness schemes. For technical reasons, it appears to be more general to require the zero-knowledge proof on the verifier side instead of the prover side. Intuitively, this helps to regulate the verifier's behavior from malicious to honest-but-curious. As a result, both parties play not only one role in the proofs of quantumness but also the dual role in the classical zero-knowledge proof. Specifically, the two principal proofs of quantumness schemes, Shor's factoring-based scheme and the learning-with-errors-based scheme of [Brakerski et al., FOCS 2018], can be transformed into zero-knowledge proofs of quantumness by requiring an extractable non-interactive zero-knowledge argument on the verifier side. Notably, zero-knowledge proofs of quantumness can be viewed as an enhanced security notion for proofs of quantumness. To prevent malicious verifiers from exploiting the quantum device's capabilities or knowledge, it is advisable to transition existing proofs of quantumness schemes to this framework whenever feasible.
Last updated:  2025-01-22
Adaptive Hardcore Bit and Quantum Key Leasing over Classical Channel from LWE with Polynomial Modulus
Duong Hieu Phan, Weiqiang Wen, Xingyu Yan, and Jinwei Zheng
Quantum key leasing, also known as public key encryption with secure key leasing (PKE-SKL), allows a user to lease a (quantum) secret key to a server for decryption purposes, with the capability of revoking the key afterwards. In the pioneering work by Chardouvelis et al. (arXiv:2310.14328), a PKE-SKL scheme utilizing classical channels was successfully built upon the noisy trapdoor claw-free (NTCF) family. This approach, however, relies on the superpolynomial hardness of the learning with errors (LWE) problem, which could affect both the efficiency and the security of the scheme. In our work, we demonstrate that the reliance on superpolynomial hardness is unnecessary, and that LWE with polynomial-size modulus is sufficient to achieve the same goal. Our approach enhances both efficiency and security, thereby improving the practical feasibility of the scheme on near-term quantum devices. To accomplish this, we first construct a \textit{noticeable} NTCF (NNTCF) family with the adaptive hardcore bit property, based on LWE with polynomial-size modulus. To the best of our knowledge, this is the first demonstration of the adaptive hardcore bit property based on LWE with polynomial-size modulus, which may be of independent interest. Building on this foundation, we address additional challenges in prior work to construct the first PKE-SKL scheme satisfying the following properties: (\textit{i}) the entire protocol uses only classical communication, and can also be lifted to support homomorphism; (\textit{ii}) the security is based solely on the LWE assumption with polynomial-size modulus. As a demonstration of the versatility of our noticeable NTCF, we show that an efficient proof of quantumness protocol can be built upon it. Specifically, our protocol enables a classical verifier to test quantumness while relying exclusively on the LWE assumption with polynomial-size modulus.
Last updated:  2025-01-22
Non-Interactive Threshold BBS+ From Pseudorandom Correlations
Sebastian Faust, Carmit Hazay, David Kretzler, Leandro Rometsch, and Benjamin Schlosser
The BBS+ signature scheme is one of the most prominent solutions for realizing anonymous credentials. Its prominence is due to properties like selective disclosure and efficient protocols for creating and showing possession of credentials. Traditionally, a single credential issuer produces BBS+ signatures, which poses significant risks due to a single point of failure. In this work, we address this threat via a novel $t$-out-of-$n$ threshold BBS+ protocol. Our protocol supports an arbitrary security threshold $t \leq n$ and works in the so-called preprocessing setting. In this setting, we achieve non-interactive signing in the online phase and sublinear communication complexity in the number of signatures in the offline phase, which, as we show in this work, are important features from a practical point of view. As it stands today, none of the widely studied signature schemes, such as threshold ECDSA and threshold Schnorr, achieves both properties simultaneously. In this work, we make the observation that presignatures can be computed directly from pseudorandom correlations, which allows servers to create signature shares without additional cross-server communication. Both our offline and online protocols are actively secure in the Universal Composability model. Finally, we evaluate the concrete efficiency of our protocol, including an implementation of the online phase and the expansion algorithm of the pseudorandom correlation generator (PCG) used during the offline phase. The online protocol without network latency takes less than $14$ ms for $t \leq 30$ and credential sizes up to $10$. Further, our results indicate that the influence of $t$ on online signing is insignificant, $\leq 6\%$ for $t \leq 30$, and that the overhead of the thresholdization occurs almost exclusively in the offline phase. Our implementation of the PCG expansion shows that even for a committee size of $10$ servers, each server can expand a correlation of up to $2^{17}$ presignatures at less than $100$ ms per presignature.
Last updated:  2025-01-22
Available Attestation: Towards a Reorg-Resilient Solution for Ethereum Proof-of-Stake
Mingfei Zhang, Rujia Li, Xueqian Lu, and Sisi Duan
Ethereum transitioned from Proof-of-Work consensus to Proof-of-Stake (PoS) consensus in September 2022. While this upgrade brings significant improvements (e.g., lower energy costs and higher throughput), it also introduces new vulnerabilities. One notable example is the so-called malicious \textit{reorganization attack}. A malicious reorganization is an attack in which Byzantine faulty validators intentionally manipulate the canonical chain so that blocks produced by honest validators are discarded. By doing so, the faulty validators can gain benefits such as higher rewards and lower chain quality, or even pose a liveness threat to the system. In this work, we show that the majority of the known attacks on Ethereum PoS are some form of reorganization attack. In practice, most of these attacks can be launched even if the network is synchronous (there exists a known upper bound for message transmission and processing). Unlike existing studies that mitigate the attacks in an ad hoc way, we take a systematic approach and provide an elegant yet efficient solution to reorganization attacks. Our solution is provably secure in the sense that no reorganization attacks can be launched in a synchronous network. In a partially synchronous network, our approach achieves the conventional safety and liveness properties of the consensus protocol. Our evaluation results show that our solution is resilient to five types of reorganization attacks and also highly efficient.
Last updated:  2025-01-21
A Survey on Transciphering and Symmetric Ciphers for Homomorphic Encryption
Indranil Thakur, Angshuman Karmakar, Chaoyun Li, and Bart Preneel
Data privacy concerns are sharply rising in the current digital era, hyperdriven by cloud computing, big data analytics, and the Internet of Things. Homomorphic Encryption (HE) has emerged as an ideal technique for computing on encrypted data, but current schemes suffer from slow encryption speed and large ciphertext expansion. Practical implementation is hindered, especially when the client has limited bandwidth, memory, and computing power. In 2011, Naehrig et al. proposed transciphering, reducing the computational and communication overhead on the client side. This involves symmetric ciphers with minimized multiplicative complexity, referred to as HE-Friendly Ciphers (HEFCs). In this work, we present a detailed study of transciphering for HE by systematizing existing knowledge and crystallizing research challenges. In particular, we conduct a comprehensive study of state-of-the-art HEFC constructions. Our work highlights gaps, open problems, and directions for future research.
Last updated:  2025-01-21
PARScoin: A Privacy-preserving, Auditable, and Regulation-friendly Stablecoin
Amirreza Sarencheh, Aggelos Kiayias, and Markulf Kohlweiss
Stablecoins are digital assets designed to maintain a consistent value relative to a reference point, serving as a vital component in Blockchain and Decentralized Finance (DeFi) ecosystems. Typical implementations of stablecoins via smart contracts come with important downsides such as a questionable level of privacy, potentially high fees, and lack of scalability. We put forth a new design, PARScoin, for a Privacy-preserving, Auditable, and Regulation-friendly Stablecoin that mitigates these issues while enabling high performance, both in terms of speed of settlement and scalability. PARScoin enables privacy-preserving transactions where user identities, transaction values, and links to past transactions are hidden. Our system ensures auditability and regulation-friendliness without compromising privacy. Transfers do not need to be settled on blockchains, allowing for blockchain-independent fees. The system is blockchain-agnostic and interoperable with multiple blockchains via standard smart contract-based withdrawals. Our construction is distributed using threshold cryptography, avoiding single points of trust and failure, and is analyzed in the Universal Composability (UC) framework for secure integration with other systems. PARScoin supports self-custody, avoiding restrictions like address blacklisting or coin destruction, and demonstrates real-world efficiency based on our performance analysis.
Last updated:  2025-01-21
poqeth: Efficient, post-quantum signature verification on Ethereum
Ruslan Kysil, István András Seres, Péter Kutas, and Nándor Kelecsényi
This work explores the application and efficient deployment of (standardized) post-quantum (PQ) digital signature algorithms in the blockchain environment. Specifically, we implement and evaluate four PQ signatures in the Ethereum Virtual Machine: W-OTS$^{+}$, XMSS, SPHINCS+, and MAYO. We focus on optimizing the gas costs of the verification algorithms, as verification is the only signature-scheme algorithm executed on-chain and thus incurs financial costs (transaction fees) for the users. Hence, the verification algorithm is the signature schemes' main bottleneck for decentralized applications. We examine two methods to verify post-quantum digital signatures on-chain. Our practical performance evaluation shows that full on-chain verification is often prohibitively costly. Naysayer proofs (FC'24) allow a novel optimistic verification mode. We observe that the Naysayer verification mode is generally the cheapest, at the cost of additional trust assumptions. We release our implementation, called poqeth, as an open-source library.
Last updated:  2025-01-21
AGATE: Augmented Global Attested Trusted Execution in the Universal Composability framework
Lorenzo Martinico and Markulf Kohlweiss
A Trusted Execution Environment (TEE) is a security technology, implemented by CPU manufacturers, which guarantees integrity and confidentiality on a restricted execution environment to any remote verifier through attestation. TEEs are deployed on various consumer and commercial hardware platforms, and have been widely adopted as a component in the design of cryptographic protocols both theoretical and practical. Within the provable security community, the use of TEEs as a setup assumption has converged to a standard ideal definition in the Universal Composability setting ($G_{\mathsf{att}}$, defined by Pass et al., Eurocrypt '17). However, it is unclear whether any real TEE design can actually realise such a level of security, or whether the diverse capabilities of today's TEE implementations will in fact converge to a single standard. Therefore, it is necessary for cryptographers and protocol designers to specify what assumptions are necessary for the TEE they are using to support the correctness and security of their protocol. To this end, this paper provides a more careful treatment of trusted execution than the existing literature, focusing on the capabilities of enclaves and adversaries. Our goal is to provide meaningful patterns for comparing different classes of TEEs, particularly how a weaker TEE functionality can implement a stronger one given an appropriate mechanism to bridge the two. We introduce a new, "modular" definition of TEEs that captures a broad range of pre-existing functionalities defined in the literature while maintaining their high level of abstraction. While our goal is not directly to model implementations of specific commercial TEE providers, our modular definition provides a way to capture more meaningful and realistic hardware capabilities. We propose to characterise TEE capabilities along the following terms: the set of trusted features available to the enclave; the set of possible attacks on an enclave; and the content of attestation signatures. We then define various possible ideal modular $G_{\mathsf{att}}$ functionality instantiations that capture existing variants in the literature. Finally, we conclude the paper by constructing a protocol template to realise stronger $G_{\mathsf{att}}$ setups from weaker ones, and provide an example of removing an attack.
Last updated:  2025-01-21
Friendly primes for efficient modular arithmetic using the Polynomial Modular Number System
Fangan Yssouf Dosso, Nadia El Mrabet, Nicolas Méloni, François Palma, and Pascal Véron
The Polynomial Modular Number System (PMNS) is a non-positional number system designed for modular arithmetic. Its efficiency, both in software and hardware, has been demonstrated for integers commonly used in Elliptic Curve Cryptography. In recent papers, some authors introduce specific prime forms that are particularly well suited for PMNS arithmetic. In this work, we extend their results to a broader class of prime numbers. In practice, our approach yields performance that is competitive with, and in some cases superior to, Pseudo-Mersenne arithmetic. As a result, we expand the set of prime numbers that are well suited for modular arithmetic. Furthermore, we contribute a database of proof-of-concept elliptic curves constructed with those primes that satisfy the Brainpool standard requirements.
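For contrast with the paper's PMNS approach, the Pseudo-Mersenne baseline it competes against reduces modulo $p = 2^k - c$ by folding high bits, using $2^k \equiv c \pmod{p}$. A minimal sketch (parameters illustrative; the PMNS representation itself is polynomial-based and not shown here):

    def reduce_pseudo_mersenne(x, k, c):
        # Reduce x mod p = 2^k - c: write x = hi * 2^k + lo, then x == hi * c + lo (mod p).
        p = (1 << k) - c
        while x >> k:
            x = (x & ((1 << k) - 1)) + (x >> k) * c
        return x - p if x >= p else x

    # Example with the curve25519 prime p = 2^255 - 19:
    assert reduce_pseudo_mersenne(((1 << 255) - 19) * 7 + 5, 255, 19) == 5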
Last updated:  2025-01-21
Efficient Multi-party Private Set Union Resistant to Maximum Collusion Attacks
Qiang Liu and Joon-Woo Lee
Multi-party Private Set Union (MPSU) enables multiple participants to jointly compute the union of their private sets without leaking any additional information beyond the resulting union. Liu et al. (ASIACRYPT 2023) proposed the first scalable MPSU protocol fully based on symmetric key encryption (SKE), which designates one participant as the "leader" responsible for obtaining the final union. However, the protocol assumes that the leader does not collude with other participants, which weakens its practicality. In this work, we design a scalable MPSU protocol, $\Pi_\text{MPSU}^\text{one-leader}$, which tolerates maximum collusion. The protocol relies primarily on SKE supplemented with additive homomorphic encryption (AHE), with the designated leader obtaining the union result. Furthermore, to address the issue of fairness in scenarios where obtaining the result early provides an advantage, we extend $\Pi_\text{MPSU}^\text{one-leader}$ and propose a protocol that allows all participants to receive the union result simultaneously, called $\Pi_\text{MPSU}^\text{leaderless}$. We implement our proposed schemes and conduct a comprehensive comparison with the state-of-the-art solution (Gao et al., PETS 2024) that resists maximum collusion. The results demonstrate that, while our protocol only offers a moderate improvement in runtime, it significantly reduces communication cost by approximately $4$ to $5$ times.
Last updated:  2025-01-20
Learning With Quantization: A Ciphertext Efficient Lattice Problem with Tight Security Reduction from LWE
Shanxiang Lyu, Ling Liu, and Cong Ling
In this paper, we introduce Learning With Quantization (LWQ), a new problem related to the Learning With Errors (LWE) and Learning With Rounding (LWR) problems. LWQ provides a tight security reduction from LWE while enabling efficient ciphertext compression comparable to that of LWR. We adopt polar lattices to instantiate the quantizer of LWQ. Polar lattices are a specific instance of the classical Construction D, which utilizes a set of nested polar codes as component codes. Due to the polarization phenomenon of polar codes, the distribution of the quantization error converges to a discrete Gaussian. Moreover, the quantization algorithm can be executed in polynomial time. Our main result establishes a security reduction from LWE to LWQ, ensuring that the LWQ distribution remains computationally indistinguishable from the uniform distribution. The technical novelty lies in bypassing the noise merging principle often seen in the security reduction of LWR, instead employing a more efficient noise matching principle. We show that the compression rate is ultimately determined by the capacity of the ``LWE channel'', which can be adjusted flexibly. Additionally, we propose a high-information-rate encryption framework based on LWQ, demonstrating its advantage over constructions based on LWE and quantized LWE.
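To fix ideas, here is a plain-Python contrast between the three sample types (toy parameters; the scalar quantizer is only a stand-in, under which LWQ degenerates to LWR-style rounding; the paper's gains come from polar-lattice quantizers whose quantization error converges to a discrete Gaussian):

    import random

    q, p, n = 257, 16, 8                      # toy LWE modulus, compression modulus, dimension
    s = [random.randrange(q) for _ in range(n)]

    def inner(a):
        return sum(ai * si for ai, si in zip(a, s)) % q

    def lwe_sample(sigma=2.0):                # b = <a, s> + e  (random additive noise)
        a = [random.randrange(q) for _ in range(n)]
        return a, (inner(a) + round(random.gauss(0, sigma))) % q

    def lwr_sample():                         # b = round((p/q) * <a, s>)  (deterministic)
        a = [random.randrange(q) for _ in range(n)]
        return a, round(p * inner(a) / q) % p

    def lwq_sample(quantize):                 # b = Q(<a, s>) for a lattice quantizer Q
        a = [random.randrange(q) for _ in range(n)]
        return a, quantize(inner(a))

    scalar_q = lambda t: round(p * t / q) % p # scalar stand-in for the polar-lattice quantizer
    print(lwe_sample(), lwr_sample(), lwq_sample(scalar_q))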
Last updated:  2025-01-20
ICT: Insured Cryptocurrency Transactions
Aydin Abadi, Amirreza Sarencheh, Henry Skeoch, and Thomas Zacharias
Cryptocurrencies have emerged as a critical medium for digital financial transactions, driving widespread adoption while simultaneously exposing users to escalating fraud risks. The irreversible nature of cryptocurrency transactions, combined with the absence of consumer protection mechanisms, leaves users vulnerable to substantial financial losses and emotional distress. To address these vulnerabilities, we introduce Insured Cryptocurrency Transactions (ICT), a novel decentralized insurance framework designed to ensure financial recovery for honest users affected by fraudulent cryptocurrency transactions. We rigorously formalize the ICT framework, establishing strong security guarantees to protect against malicious adversaries. Furthermore, we present Insured Cryptocurrency Exchange (ICE), a concrete instantiation of ICT tailored for centralized cryptocurrency exchanges. ICE relies primarily on a standard smart contract and provides a robust mechanism to compensate users in cases of security breaches, insolvency, or fraudulent activities affecting the exchange. We have implemented ICE’s smart contract and evaluated its on-chain costs. The evaluation results demonstrate ICE’s low operational overhead. To our knowledge, ICT and ICE represent the first formal approaches to decentralized insurance frameworks in the cryptocurrency domain.
Last updated:  2025-01-20
An Extended Hierarchy of Security Notions for Threshold Signature Schemes and Automated Analysis of Protocols That Use Them
Cas Cremers, Aleksi Peltonen, and Mang Zhao
Despite decades of work on threshold signature schemes, there is still limited agreement on their desired properties and threat models. In this work we significantly extend and repair previous work to give a unified syntax for threshold signature schemes and a new hierarchy of security notions for them. Moreover, our new hierarchy allows us to develop an automated analysis approach for protocols that use threshold signatures, which can discover attacks on protocols that exploit the details of the security notion offered by the scheme in use; this can help choose the correct security notion (and a scheme that fulfills it) for a specific protocol. Unlike prior work, our syntax for threshold signatures covers both non-interactive and interactive signature schemes with any number of key generation and signing rounds, and our hierarchy of security notions additionally includes elements such as various types of corruption and malicious key generation. We show the applicability of our hierarchy by selecting representative threshold signature schemes from the literature, extracting their core security features, and categorizing them according to our hierarchy. As a side effect of our work, we show through a counterexample that a previous attempt at building a unified hierarchy of unforgeability notions does not meet its claimed ordering, and show how to repair it without further restricting the scope of the definitions. Based on our syntax and hierarchy, we develop the first systematic, automated analysis method for higher-level protocols that use threshold signatures. We use a symbolic analysis framework to abstractly model threshold signature schemes that meet security notions in our hierarchy, and implement this in the Tamarin prover. Given a higher-level protocol that uses threshold signatures, and a security notion from our hierarchy, our automated approach can find attacks on such protocols that exploit the subtle differences between elements of our hierarchy. Our approach can be used to formally analyze the security implications of implementing different threshold signature schemes in higher-level protocols.
Last updated:  2025-01-20
Artificial Results From Hardware Synthesis
Ahmed Alharbi and Charles Bouillaguet
In this paper, we revisit venerable lower-bounds on the $AT$ or $AT^2$ performance metric of hardware circuits. A series of works started in the late 1970s has established that if a hardware circuit of area $A$ computes a function $f : \{0, 1\}^n \rightarrow \{0, 1\}^m$ in $T$ clock cycles, then $AT^2$ is asymptotically larger than (a form of) the communication complexity of $f$. These lower-bounds ignore the active components of the circuit, such as the logic gates, and only take into account the area of the wiring. However, it seems to be common practice to report the performance characteristics of hardware designs after synthesis, namely after having ``compiled'' the design into the topological description of a hardware circuit made of standard cells. The area of the cells can be determined with certainty, whereas the area occupied by the wires cannot. This may lead to optimistic performance figures that may even violate the old lower-bounds. In this paper, we take the Möbius transform as a case study, following the work of Banik and Regazzoni in TCHES, 2024(2), who presented hardware designs that implement it. We first determine the communication complexity of the Möbius transform. Then, following the old methodology, we derive lower-bounds on the area (in $\mu m^2$) of any circuit that implements the operation, using several open Process Design Kits for ASIC production. For large enough instances, the wires provably occupy more area than the logic gates themselves. This invalidates previous theoretical claims about the performance of circuits implementing the Möbius transform. Fundamentally, the root cause of the contradiction between ``VLSI-era'' lower-bounds and current performance claims is that the lower-bounds apply to a geometric description of the circuit where the length of the wiring is known, while it is common to report performance results on the basis of hardware synthesis alone, where a topological description of the circuit has been obtained but the actual length of the wires is unknown.
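For readers unfamiliar with the case study: the Möbius transform maps a Boolean function's truth table to its algebraic normal form, and the standard butterfly network below (a sketch of the textbook algorithm, not of the hardware designs under discussion) is exactly the kind of circuit to which the wiring-area bounds apply.

    def mobius_transform(f):
        # In-place Mobius transform over GF(2): g[S] = XOR of f[T] over all subsets T of S.
        # f is a 0/1 list of length 2**n indexed by bitmask subsets of {0, ..., n-1}.
        n = len(f).bit_length() - 1
        for i in range(n):                    # one butterfly layer per variable
            bit = 1 << i
            for S in range(len(f)):
                if S & bit:
                    f[S] ^= f[S ^ bit]        # fold in the subset missing variable i
        return f

    # f(x0, x1) = 1 XOR x0, truth table indexed by mask (bit 0 = x0, bit 1 = x1):
    print(mobius_transform([1, 0, 1, 0]))     # -> [1, 1, 0, 0], the ANF coefficients of 1 + x0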
Last updated:  2025-01-20
Secret-Sharing Schemes for High Slices
Amos Beimel, Oriol Farràs, Or Lasri, and Oded Nir
In a secret-sharing scheme, a secret is shared among $n$ parties such that the secret can be recovered by authorized coalitions, while it should be kept hidden from unauthorized coalitions. In this work we study secret-sharing for $k$-slice access structures, in which coalitions of size $k$ are either authorized or not, larger coalitions are authorized, and smaller ones are unauthorized. Known schemes for these access structures had smaller shares for small $k$'s than for large ones; hence our focus is on "high" $(n-k)$-slices where $k$ is small. Our work is inspired by several motivations: 1) Obtaining efficient schemes (with perfect or computational security) for natural families of access structures; 2) Making progress in the search for better schemes for general access structures, which are often based on schemes for slice access structures; 3) Proving or disproving the conjecture by Csirmaz (J. Math. Cryptol., 2020) that an access structure and its dual can be realized by secret-sharing schemes with the same share size. The main results of this work are: - Perfect schemes for high slices. We present a scheme for $(n-k)$-slices with information-theoretic security and share size $kn\cdot 2^{\tilde{O}(\sqrt{k \log n})}$. Using a different scheme with slightly larger shares, we prove that the ratio between the optimal share size of $k$-slices and that of their dual $(n-k)$-slices is bounded by $n$. - Computational schemes for high slices. We present a scheme for $(n-k)$-slices with computational security and share size $O(k^2 \lambda \log n)$ based on the existence of one-way functions. Our scheme makes use of a non-standard viewpoint on Shamir secret-sharing that allows sharing many secrets with different thresholds at low cost. - Multislice access structures. $(a:b)$-multislices are access structures that behave similarly to slices, but are unconstrained on coalitions in a wider range of cardinalities between $a$ and $b$. We use our new schemes for high slices to realize multislices with the same share sizes that their duals have today. This solves an open question raised by Applebaum and Nir (Crypto, 2021), and allows realizing hypergraph access structures that are chosen uniformly at random under a natural set of distributions with share size $2^{0.491n+o(n)}$, compared to the previous result of $2^{0.5n+o(n)}$.
Last updated:  2025-01-20
Conditional Constant Function Problem and Its Quantum Solutions: Attacking Feistel Ciphers
Zhenqiang Li, Shuqin Fan, Fei Gao, Yonglin Hao, Xichao Hu, Linchun Wan, Hongwei Sun, and Qi Su
In this paper, we define the conditional constant function problem (CCFP) and, for a special case of CCFP, we propose a quantum algorithm for solving it efficiently. Such an algorithm enables us to newly evaluate the quantum security of Feistel block ciphers in the case where a quantum adversary only has the ability to make online queries in a classical manner, which is relatively realistic. Specifically, we significantly improve the chosen-plaintext key recovery attacks on two Feistel block cipher variants known as Feistel-KF and Feistel-FK. For Feistel-KF, we construct a 3-round distinguisher based on the special case of CCFP and propose key recovery attacks on $r>3$ rounds. For Feistel-FK, our CCFP-based distinguisher covers 4 rounds and the key recovery attacks are applicable for $r>4$ rounds. Utilizing our CCFP solving algorithm, we are able to reduce the classical memory complexity of our key recovery attacks from the previous exponential $O(2^{cn})$ to $O(1)$. The query complexity of our key recovery attacks on Feistel-KF is also significantly reduced from $O(2^{cn})$ to $O(1)$, where the $c$'s are constants. Our key recovery results achieve the best complexities currently known. They also indicate that quantum algorithms solving CCFP could be more promising than those solving the period finding problem.
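A plain-Python mock-up of the two Feistel variants, assuming the usual naming convention (KF injects the round key before the public round function, FK after it); the round function and the 16-bit branch width are illustrative only:

    MASK = 0xFFFF                             # toy 16-bit branches

    def F(x):                                 # public, keyless round function (illustrative)
        return ((x * 0x9E37) ^ (x >> 5)) & MASK

    def feistel_kf(l, r, round_keys):         # round value F(k_i XOR r): key before F
        for k in round_keys:
            l, r = r, l ^ F(k ^ r)
        return l, r

    def feistel_fk(l, r, round_keys):         # round value k_i XOR F(r): key after F
        for k in round_keys:
            l, r = r, l ^ k ^ F(r)
        return l, r

    print(feistel_kf(0x1234, 0x5678, [1, 2, 3]), feistel_fk(0x1234, 0x5678, [1, 2, 3]))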
Last updated:  2025-01-20
Enhancing Threshold Group Action Signature Schemes: Adaptive Security and Scalability Improvements
Michele Battagliola, Giacomo Borin, Giovanni Di Crescenzo, Alessio Meneghetti, and Edoardo Persichetti
Designing post-quantum digital signatures is a very active research area at present, with several protocols being developed based on a variety of mathematical assumptions. Many of these signature schemes can be used as a basis to define more advanced schemes, such as ring or threshold signatures, where multiple parties are involved in the signing process. Unfortunately, the majority of these protocols only consider a static adversary, which must declare which parties to corrupt at the beginning of the execution. However, a stronger security notion can be achieved, namely security against adaptive adversaries, which can corrupt parties at any time. In this paper we tackle the challenges of designing a post-quantum adaptively secure threshold signature scheme: starting from the GRASS signature scheme, which is only statically secure, we show that it is possible to turn it into an adaptively secure threshold signature that we call GRASS+. In particular, we introduce two variants of the classical GAIP problem and discuss their security. We prove that our protocol is adaptively secure in the Random Oracle Model if the adversary corrupts only $t/2$ parties. We are also able to prove that GRASS+ achieves full adaptive security, with a corruption threshold of $t$, in the Black Box Group Action Model with Random Oracle. Finally, we improve the performance of the scheme by exploiting a better secret sharing, inspired by the work of Desmedt, Di Crescenzo, and Burmester from ASIACRYPT’94.
Last updated:  2025-01-19
DSKE: Digital Signatures with Key Extraction
Zhipeng Wang, Orestis Alpos, Alireza Kavousi, Harry W. H. Wong, Sze Yiu Chau, Duc V. Le, and Christian Cachin
This work introduces DSKE, digital signatures with key extraction. In a DSKE scheme, the private key can be extracted if more than a threshold of signatures on different messages are ever created while, within the threshold, each signature continues to authenticate the signed message. We give a formal definition of DSKE, as well as two provably secure constructions, one from hash-based digital signatures and one from polynomial commitments. We demonstrate that DSKE is useful for various applications, such as spam prevention and deniability. First, we introduce the GroupForge signature scheme, leveraging DSKE constructions to achieve deniability in digital communication. GroupForge integrates DSKE with a Merkle tree and timestamps to produce a ``short-lived'' signature equipped with extractable sets, ensuring deniability under a fixed public key. We illustrate that GroupForge can serve as a viable alternative to Keyforge in the non-attributable email protocol of Specter, Park, and Green (USENIX Sec '21), thereby eliminating the need for continuous disclosure of outdated private keys. Second, we leverage the inherent extraction property of DSKE to develop a Rate-Limiting Nullifier (RLN) scheme. RLN efficiently identifies and expels spammers once they exceed a predetermined action threshold, thereby jeopardizing their private keys. Moreover, we implement both variants of the DSKE scheme to demonstrate their performance and show it is comparable to existing signature schemes. We also implement GroupForge from the polynomial commitment-based DSKE and illustrate the practicality of our proposed method.
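The extraction property can be pictured with a Shamir-style toy (a hypothetical instantiation for intuition, not the paper's hash-based or polynomial-commitment constructions): each signature leaks one evaluation of a degree-$t$ polynomial whose constant term is the private key, so any $t+1$ signatures on distinct messages interpolate the key.

    import hashlib, random

    P = 2**61 - 1                             # toy prime field
    t = 2                                     # up to t signatures stay safe

    sk = random.randrange(P)
    poly = [sk] + [random.randrange(P) for _ in range(t)]   # f(0) = sk, degree t

    def leak(msg):                            # per-signature leakage: (H(msg), f(H(msg)))
        x = int.from_bytes(hashlib.sha256(msg).digest(), "big") % P
        return x, sum(c * pow(x, i, P) for i, c in enumerate(poly)) % P

    def extract(points):                      # Lagrange interpolation at 0
        key = 0
        for j, (xj, yj) in enumerate(points):
            num = den = 1
            for m, (xm, _) in enumerate(points):
                if m != j:
                    num = num * (-xm) % P
                    den = den * (xj - xm) % P
            key = (key + yj * num * pow(den, -1, P)) % P
        return key

    sigs = [leak(b"msg%d" % i) for i in range(t + 1)]       # threshold exceeded
    assert extract(sigs) == sk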
Last updated:  2025-01-19
Integer Commitments, Old and New Tools
Iftach Haitner, Yehuda Lindell, and Nikolaos Makriyannis
This self-contained and detailed tutorial covers RSA-based integer commitments and related protocols. It also presents a new, highly efficient setup protocol for sampling commitment parameters.
Last updated:  2025-01-19
Attestation Proof of Association – provability that attestation keys are bound to the same hardware and person
Eric Verheul
We propose a wallet provider issued attestation called Wallet Trust Evidence (WTE) and three related specific instructions for the European Digital Identity (EUDI) Wallet cryptographic hardware, most notably the generation of a Proof of Association (PoA). These allow the EUDI Wallet to provide verifiable assurance to third parties (issuers, relying parties) that attestation private keys are not only bound to conformant cryptographic hardware but also that they are bound to the same such hardware. This allows the EUDI Wallet to meet eIDAS Level of Assurance ``high'' as well as to operate in a privacy friendly manner. The instructions specified in this document cater for convenient implementation in all envisioned EUDI Wallet architectures, including those based on a GlobalPlatform based Secure Element such as an eID-card or an embedded SIM (eSIM). By their simplicity, the three instructions also allow for convenient Common Criteria certification. This document is a further refinement and cryptographic concretization of the WTE/PoA logic specified in the wallet Architecture and Reference Framework (ARF), which is based on the EPIC-09 result developed in a cooperation between the NI-Scy consortium and the eIDAS expert group. However, the present draft document is meant for discussion only and is not approved by the NI-Scy consortium, the eIDAS expert group or the Dutch government. This paper concentrates on irrefutable PoAs but also indicates how refutable PoAs can be formed, providing plausible deniability, which can be beneficial in some use cases. As a side note, this paper introduces in an annex the construction of Self Generated Verifiable Pseudonyms (SGVPs). These allow a wallet/user to generate pseudonyms based on information agreed with a relying party, e.g. a URL, and to prove that these are correctly formed. Together with the Proof of Association, this allows cryptographically binding (disclosed parts of) attestations to these pseudonyms. This enables various use cases, such as an employee representing an organisation in a privacy friendly way using a chamber of commerce attestation cryptographically bound to a separate SGVP-pseudonym. Such functionality currently forms the privacy basis of the Dutch eRecognition scheme (eherkenning.nl).
Last updated:  2025-01-18
Hardware-Accelerated Encrypted Execution of General-Purpose Applications
Charles Gouert, Vinu Joseph, Steven Dalton, Cedric Augonnet, Michael Garland, and Nektarios Georgios Tsoutsos
Fully Homomorphic Encryption (FHE) is a cryptographic method that guarantees the privacy and security of user data during computation. FHE algorithms can perform unlimited arithmetic computations directly on encrypted data without decrypting it. Thus, even when processed by untrusted systems, confidential data is never exposed. In this work, we develop new techniques for accelerated encrypted execution and demonstrate the significant performance advantages of our approach. Our current focus is the Fully Homomorphic Encryption over the Torus (CGGI) scheme, a state-of-the-art method for evaluating arbitrary functions in the encrypted domain. CGGI represents a computation as a graph of homomorphic logic gates, and each individual bit of the plaintext is transformed into a polynomial in the encrypted domain. Arithmetic on such data becomes very expensive: operations on bits become operations on entire polynomials. Therefore, evaluating even relatively simple nonlinear functions with the CGGI cryptosystem, such as a sigmoid, can take thousands of seconds on a single CPU thread. Using our novel framework for end-to-end accelerated encrypted execution, called ArctyrEX, developers with no knowledge of complex FHE libraries can simply describe their computation as a C program, which is then evaluated 18x faster on average relative to the GPU-accelerated Concrete library for multiplication-intensive benchmarks.
Last updated:  2025-01-18
CISELeaks: Information Leakage Assessment of Cryptographic Instruction Set Extension Prototypes
Aruna Jayasena, Richard Bachmann, and Prabhat Mishra
Software-based cryptographic implementations provide flexibility but face performance limitations. In contrast, hardware-based cryptographic accelerators utilize application-specific customization to provide real-time security solutions. Cryptographic instruction-set extensions (CISE) combine the advantages of both hardware- and software-based solutions to provide higher performance combined with the flexibility of atomic-level cryptographic operations. While CISE is widely used to develop security solutions, side-channel analysis of CISE-based devices is in its infancy. Specifically, it is important to evaluate whether the power usage and electromagnetic emissions of CISE-based devices have any correlation with their internal operations, which an adversary can exploit to deduce cryptographic secrets. In this paper, we propose a test vector leakage assessment framework to evaluate pre-silicon prototypes at the early stages of the design life-cycle. Specifically, we first identify functional units with the potential for leaking information through power side-channel signatures and then evaluate them on system prototypes by generating the necessary firmware to maximize the side-channel signature. Our experimental results on two RISC-V based cryptographic extensions, RISCV-CRYPTO and XCRYPTO, demonstrate that seven out of eight prototype AES- and SHA-related functional units are vulnerable to leaking cryptographic secrets through their power side-channel signature, even in full system mode, with a statistical significance of $\alpha = 0.05$.
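A common core of such assessments is the non-specific fixed-versus-random Welch t-test; a minimal sketch of the statistic (the paper's pre-silicon framework and firmware generation are not modeled here):

    import math, statistics as st

    def welch_t(fixed, rand):
        # Welch's t-statistic at one trace sample point: a large |t| means the
        # fixed-input and random-input power distributions are distinguishable.
        m1, m2 = st.mean(fixed), st.mean(rand)
        v1, v2 = st.variance(fixed), st.variance(rand)
        return (m1 - m2) / math.sqrt(v1 / len(fixed) + v2 / len(rand))

    # Toy traces: a leaky sample point versus a quiet one.
    print(welch_t([5.0, 5.1, 5.2, 5.1], [4.0, 4.1, 3.9, 4.0]))   # large |t|: leaks
    print(welch_t([5.0, 5.1, 4.9, 5.0], [5.1, 4.9, 5.0, 5.0]))   # small |t|: quiet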
Last updated:  2025-01-17
On Multi-Key FuncCPA Secure Encryption Schemes
Eri Nakajima, Keisuke Hara, and Kyosuke Yamashita
The notion of funcCPA security for homomorphic encryption schemes was introduced by Akavia \textit{et~al.}\ (TCC 2022). While it aims to capture the bootstrapping technique in homomorphic encryption schemes, Dodis \textit{et~al.}\ (TCC 2023) pointed out that funcCPA security can also be applied to non-homomorphic public-key encryption schemes (PKE). As an example, they presented a use case for privacy-preserving outsourced computation without homomorphic computation. It should be noted that prior work on funcCPA security, including the use case presented by Dodis \textit{et~al.}, considered only the single-key setting. However, in recent years, multi-party collaboration in outsourced computation has garnered significant attention, making it desirable for funcCPA security to support the multi-key setting. Therefore, in this work, we introduce a new notion of security called Multi-Key funcCPA (MKfuncCPA) to address this need, and show that if a PKE scheme is KDM-secure, then it is also MKfuncCPA secure. Furthermore, we show that a similar argument applies to symmetric-key encryption.
Last updated:  2025-01-17
Decompose and conquer: ZVP attacks on GLV curves
Vojtěch Suchánek, Vladimír Sedláček, and Marek Sýs
While many side-channel attacks on elliptic curve cryptography can be avoided by coordinate randomization, this is not the case for the zero-value point (ZVP) attack. This attack can recover a prefix of a static ECDH key but requires solving an instance of the dependent coordinates problem (DCP), which is open in general. We design a new method for solving the DCP on GLV curves, including the Bitcoin secp256k1 curve, outperforming previous approaches. This leads to a new type of ZVP attack on multiscalar multiplication, recovering twice as many bits as the classical ZVP attack. We demonstrate a $63\%$ recovery of the private key for the interleaving algorithm for multiscalar multiplication. Finally, we analyze the largest database of curves and addition formulas, with over 14,000 combinations, and provide the first classification of their resistance against the ZVP attack.
Last updated:  2025-01-17
REED: Chiplet-Based Accelerator for Fully Homomorphic Encryption
Aikata Aikata, Ahmet Can Mert, Sunmin Kwon, Maxim Deryabin, and Sujoy Sinha Roy
Fully Homomorphic Encryption (FHE) enables privacy-preserving computation and has many applications. However, its practical implementation faces massive computation and memory overheads. To address this bottleneck, several Application-Specific Integrated Circuit (ASIC) FHE accelerators have been proposed. All these prior works put every component needed for FHE onto one chip (monolithic), hence offering high performance. However, they encounter common challenges associated with large-scale chip design, such as inflexibility, low yield, and high manufacturing costs. In this paper, we present the first-of-its-kind multi-chiplet-based FHE accelerator ‘REED’ for overcoming the limitations of prior monolithic designs. To utilize the advantages of multi-chiplet structures while matching the performance of larger monolithic systems, we propose and implement several novel strategies in the context of FHE. These include a scalable chiplet design approach, an effective framework for workload distribution, a custom inter-chiplet communication strategy, and advanced pipelined Number Theoretic Transform and automorphism designs to enhance performance. Our instruction-set and power simulation experiments with a pre-layout netlist indicate that the REED 2.5D microprocessor occupies 96.7 mm$^2$ of chip area and consumes 49.4 W average power in 7nm technology. It achieves a remarkable speedup of up to 2,991× compared to a CPU (24-core 2×Intel X5690) and offers 1.9× better performance, along with a 50% reduction in development costs, when compared to state-of-the-art ASIC FHE accelerators. Furthermore, our work presents the first instance of benchmarking encrypted deep neural network (DNN) training. Overall, the REED architecture offers a highly effective solution for accelerating FHE, thereby significantly advancing the practicality and deployability of FHE in real-world applications.
Last updated:  2025-01-17
SQIsign2D-West: The Fast, the Small, and the Safer
Andrea Basso, Luca De Feo, Pierrick Dartois, Antonin Leroux, Luciano Maino, Giacomo Pope, Damien Robert, and Benjamin Wesolowski
We introduce SQIsign2D-West, a variant of SQIsign using two-dimensional isogeny representations. SQIsignHD was the first variant of SQIsign to use higher-dimensional isogeny representations. Its eight-dimensional variant is geared towards provable security but is deemed impractical. Its four-dimensional variant is geared towards efficiency and has significantly faster signing times than SQIsign, but slower verification owing to the complexity of the four-dimensional representation. Its authors commented on the apparent difficulty of getting any improvement over SQIsign by using two-dimensional representations. In this work, we introduce new algorithmic tools that make two-dimensional representations a viable alternative. These lead to a signature scheme with sizes comparable to SQIsignHD, slightly slower signing than SQIsignHD but still much faster than SQIsign, and the fastest verification of any known variant of SQIsign. We achieve this without compromising on the security proof: the assumptions behind SQIsign2D-West are similar to those of the eight-dimensional variant of SQIsignHD. Additionally, like SQIsignHD, SQIsign2D-West favourably scales to high levels of security. Concretely, for NIST level I we achieve signing times of 80 ms and verifying times of 4.5 ms, using optimised arithmetic based on intrinsics available to the Ice Lake architecture. For NIST level V, we achieve 470 ms for signing and 31 ms for verifying.
Last updated:  2025-01-17
MAYO Key Recovery by Fixing Vinegar Seeds
Sönke Jendral and Elena Dubrova
As the industry prepares for the transition to post-quantum secure public key cryptographic algorithms, vulnerability analysis of their implementations is gaining importance. A theoretically secure cryptographic algorithm should also be able to withstand the challenges of physical attacks in real-world environments. MAYO is a candidate in the ongoing first round of the NIST post-quantum standardization process for selecting additional digital signature schemes. This paper demonstrates three first-order single-execution fault injection attacks on a MAYO implementation in an ARM Cortex-M4 processor. By using voltage glitching to disrupt the computation of the vinegar seed during the signature generation, we enable the recovery of the secret key directly from the faulty signatures. Our experimental results show that the success rates of the fault attacks in a single execution are 36%, 82%, and 99%, respectively. They emphasize the importance of developing countermeasures against fault attacks prior to the widespread deployment of post-quantum algorithms like MAYO.
Last updated:  2025-01-17
XBOOT: Free-XOR Gates for CKKS with Applications to Transciphering
Chao Niu, Zhicong Huang, Zhaomin Yang, Yi Chen, Liang Kong, Cheng Hong, and Tao Wei
The CKKS scheme is traditionally recognized for approximate homomorphic encryption of real numbers, but BLEACH (Drucker et al., JoC 2024) extends its capabilities to handle exact computations on binary or small integer numbers. Despite this advancement, BLEACH's approach of simulating XOR gates via $(a-b)^2$ incurs one multiplication per gate, which is computationally expensive in homomorphic encryption. To this end, we introduce XBOOT, a new framework built upon BLEACH's blueprint but allows for almost free evaluation of XOR gates. The core concept of XBOOT involves lazy reduction, where XOR operations are simulated with the less costly addition operation, $a+b$, leaving the management of potential overflows to later stages. We carefully handle the modulus chain and scale factors to ensure that the overflows would be conveniently rounded during the CKKS bootstrapping phase without extra cost. We use AES-CKKS transciphering as a benchmark to test the capability of XBOOT, and achieve a throughput exceeding one kilobyte per second, which represents a $2.5\times$ improvement over the state-of-the-art (Aharoni et al., HES 2023). Moreover, XBOOT enables the practical execution of tasks with extensive XOR operations that were previously challenging for CKKS. For example, we can do Rasta-CKKS transciphering at over two kilobytes per second, more than $10\times$ faster than the baseline without XBOOT.
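The gate-level idea in plaintext terms (a mock-up only; the real scheme manages scale factors and modulus levels inside CKKS): over bits, iterating $(a-b)^2$ computes XOR at one multiplication per gate, while the lazy variant just adds and recovers the parity once, at bootstrapping time.

    import random

    bits = [random.randint(0, 1) for _ in range(100)]

    # BLEACH-style: one squaring ("multiplication") per XOR gate.
    acc = 0
    for b in bits:
        acc = (acc - b) ** 2          # stays in {0, 1} after every gate

    # XBOOT-style: cheap additions only; the overflow is tolerated and the
    # mod-2 reduction is deferred to the bootstrapping step.
    lazy = sum(bits)
    assert lazy % 2 == acc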
Last updated:  2025-01-16
RSA Blind Signatures with Public Metadata
Ghous Amjad, Kevin Yeo, and Moti Yung
Anonymous tokens are, essentially, digital signature schemes that enable issuers to provide users with signatures without learning the user inputs or the final signatures. These primitives allow applications to propagate trust while simultaneously protecting the user identity. They have become a core component for improving the privacy of several real-world applications, including ad measurements, authorization protocols, spam detection, and VPNs. In certain applications, it is natural to associate signatures with specific public metadata, ensuring that trust is only propagated with respect to a certain set of users and scenarios. To address this, we study the notion of anonymous tokens with public metadata. We present a variant of RSA blind signatures with public metadata, a modification of a scheme by Abe and Fujisaki [9], in which issuers may only generate signatures that verify for a certain choice of public metadata. Our protocol exclusively uses standard cryptography with widely available implementations. We prove security from one-more RSA assumptions with multiple exponents that we introduce. Furthermore, we provide evidence that the concrete security bounds should be nearly identical to those of standard RSA blind signatures. We show that our protocol incurs minimal overhead over standard RSA blind signatures and report anonymous telemetry from a real-world deployment to showcase its scalability. Moreover, the protocol in this paper has been proposed as a technical specification in an IRTF internet draft [12].
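A toy walk-through of RSA blinding with a metadata-bound exponent (everything below, including the hash-based exponent derivation, is an illustrative assumption; the actual scheme's derivation, modulus sizes, and padding differ):

    import hashlib, random
    from math import gcd

    p_, q_ = 1000003, 1000033                 # toy primes; never use such sizes in practice
    N, phi = p_ * q_, (p_ - 1) * (q_ - 1)

    def metadata_exponent(md):                # hypothetical hash-to-exponent derivation
        ctr = 0
        while True:
            e = int.from_bytes(hashlib.sha256(md + bytes([ctr])).digest(), "big") % phi
            if e > 2 and gcd(e, phi) == 1:
                return e
            ctr += 1

    md = b"ads-campaign-42"
    e = metadata_exponent(md)                 # public exponent bound to this metadata
    d = pow(e, -1, phi)                       # issuer's matching signing exponent

    m = int.from_bytes(hashlib.sha256(b"token").digest(), "big") % N
    r = random.randrange(2, N)                # blinding factor (coprime to N w.h.p.)
    blinded = m * pow(r, e, N) % N            # user -> issuer
    s = pow(blinded, d, N) * pow(r, -1, N) % N  # issuer signs blindly; user unblinds
    assert pow(s, e, N) == m                  # verifies only under this metadata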
Last updated:  2025-01-16
PSMT: Private Segmented Membership Test for Distributed Record Linkage
Nirajan Koirala, Jonathan Takeshita, Jeremy Stevens, Sam Martin, and Taeho Jung
In various real-world situations, a client may need to verify whether specific data elements they possess are part of a set segmented among numerous data holders. To maintain user privacy, it is essential that both the client's data elements and the data holders' sets remain encrypted throughout the process. Existing approaches like Private Set Intersection (PSI), Multi-Party PSI (MPSI), Private Segmented Membership Test (PSMT), and Oblivious RAM (ORAM) face challenges in these contexts. They either require data holders to access the sets in plaintext, result in high latency when aggregating data from multiple holders, risk exposing the identity of the party with the matching element, cause a large communication overhead for multiple-element queries, or lead to high false positives. This work introduces the primitive of a Private Segmented Membership Test (PSMT) for clients with multiple query elements. We present a basic protocol for solving PSMT using a threshold variant of approximate-arithmetic homomorphic encryption, addressing the challenges of avoiding information leakage about the party with the intersection element, minimizing communication overhead for multiple query elements, and preventing false positives for a large number of data holders, ensuring IND-CPA$^D$ security. Our novel approach surpasses current state-of-the-art methods in scalability, supporting significantly more data holders. This is achieved through a novel summation-based homomorphic membership check rather than a product-based one, as well as various novel ideas addressing technical challenges. Our new PSMT protocol supports a large number of parties and query elements (up to 4096 parties and 512 queries in experiments) compared to previous methods. Our experimental evaluation shows that our method's aggregation of results from 1024 data holders with a set size of $2^{15}$ can run in 71.2 s and only requires an additional 1.2 s per query for processing multiple queries. We also compare our PSMT protocol to other state-of-the-art PSI and MPSI protocols and to our previous work, and discuss our improvements in usability with a better privacy model and a larger number of parties and queries.
Last updated:  2025-01-16
Provable Security Analysis of the Secure Remote Password Protocol
Dennis Dayanikli and Anja Lehmann
This paper analyses the Secure Remote Password Protocol (SRP) in the context of provable security. SRP is an asymmetric Password-Authenticated Key Exchange (aPAKE) protocol introduced in 1998. It allows a client to establish a shared cryptographic key with a server based on a password of potentially low entropy. Although the protocol was part of several standardization efforts, and is deployed in numerous commercial applications such as Apple Homekit, 1Password or Telegram, it still lacks a formal proof of security. This is mainly due to some of the protocol's design choices which were implemented to circumvent patent issues. Our paper gives the first security analysis of SRP in the universal composability (UC) framework. We show that SRP is UC-secure against passive eavesdropping attacks under the standard CDH assumption in the random oracle model. We then highlight a major protocol change designed to thwart active attacks and propose a new assumption -- the additive Simultaneous Diffie Hellman (aSDH) assumption -- under which we can guarantee security in the presence of an active attacker. Using this new assumption as well as the Gap CDH assumption, we prove security of the SRP protocol against active attacks. Our proof is in the "Angel-based UC framework", a relaxation of the UC framework which gives all parties access to an oracle with super-polynomial power. In our proof, we assume that all parties have access to a DDH oracle (limited to finite fields). We further discuss the plausibility of this assumption and which level of security can be shown without it.
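For reference, the SRP-6a message flow the analysis targets, in toy form (a tiny group for readability; real deployments use the RFC 5054 groups, and the padding/hashing details of the standard are omitted):

    import hashlib, random

    N, g = 23, 5                              # toy safe prime and generator

    def H(*args):
        digest = hashlib.sha256("|".join(map(str, args)).encode()).digest()
        return int.from_bytes(digest, "big") % N

    k = H(N, g)                               # SRP-6a multiplier parameter

    # Registration: the server stores (salt, v) and never sees the password again.
    salt, password = 1234, "hunter2"
    x = H(salt, password)
    v = pow(g, x, N)                          # password verifier

    # Login:
    a = random.randrange(1, N); A = pow(g, a, N)                 # client ephemeral
    b = random.randrange(1, N); B = (k * v + pow(g, b, N)) % N   # server ephemeral
    u = H(A, B)                               # scrambling parameter

    S_client = pow((B - k * pow(g, x, N)) % N, a + u * x, N)
    S_server = pow(A * pow(v, u, N) % N, b, N)
    assert S_client == S_server               # both sides derive the same key material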
Last updated:  2025-01-16
The HHE Land: Exploring the Landscape of Hybrid Homomorphic Encryption
Hossein Abdinasibfar, Camille Nuoskala, and Antonis Michalas
Hybrid Homomorphic Encryption (HHE) is considered a promising solution for key challenges that emerge when adopting Homomorphic Encryption (HE). It combines symmetric cryptography with HE schemes to mitigate issues such as communication and computation overhead for clients and storage overhead for servers. However, despite a decade of advancements, enhancing HHE usability, performance, and security for practical applications remains a significant challenge. This work contributes to the field by presenting a comprehensive analysis of prominent HHE schemes, focusing on their performance and security. We implemented three leading schemes--PASTA, HERA, and Rubato--using the Go programming language and evaluated their performance in a client-server setting. To promote open science and reproducibility, we have made our implementation publicly available on GitHub. Furthermore, we conducted an extensive study of applicable attacks on HHE schemes, categorizing them as algebraic-based, differential-based, linear-based, and LWE-based attacks. Our security analysis revealed that while most existing schemes meet theoretical security requirements, they remain vulnerable to practical attacks. These findings emphasize the need for improvements in practical security measures, such as defining standardized parameter sets and adopting techniques like noise addition to counter these attacks. This survey provides insights and guidance for researchers and practitioners to design and develop secure and efficient HHE systems, paving the way for broader real-world adoption.
Last updated:  2025-01-16
Smaller Sphincs$^{+}$
Scott Fluhrer and Quynh Dang
NIST published FIPS 205 based on the specification of Sphincs$^{+}$. A formula to determine the security strength of a given parameter set is given in the Sphincs$^{+}$ submission. Using that formula to derive the security degradation behavior under different increases in the number of signatures per signing key (called $2^{m}$ in this paper) is quite complex. The task becomes even more complex when we need to compare the security degradation characteristics of many parameter sets. In this paper, we provide a simpler formula to determine the security strength of a given parameter set at various numbers of signatures produced by one signing key. With this new formula, comparing parameter sets, especially their security degradation characteristics, becomes easy; this allowed us to search for the best parameter sets for users to consider adopting and for standards bodies to consider for standardization.
Last updated:  2025-01-16
GRASP: Accelerating Hash-based PQC Performance on GPU Parallel Architecture
Yijing Ning, Jiankuo Dong, Jingqiang Lin, Fangyu Zheng, Yu Fu, Zhenjiang Dong, and Fu Xiao
$SPHINCS^+$, one of the Post-Quantum Cryptography Digital Signature Algorithms (PQC-DSA) selected by NIST in the third round, features very short public and private key lengths but faces significant performance challenges compared to other post-quantum cryptographic schemes, limiting its suitability for real-world applications. To address these challenges, we propose the GPU-based paRallel Accelerated $SPHINCS^+$ (GRASP), which leverages GPU technology to enhance the efficiency of $SPHINCS^+$ signing and verification processes. We propose an adaptable parallelization strategy for $SPHINCS^+$, analyzing its signing and verification processes to identify critical sections for efficient parallel execution. Utilizing CUDA, we perform bottom-up optimizations, focusing on memory access patterns and hypertree computation, to enhance GPU resource utilization. These efforts, combined with kernel fusion technology, result in significant improvements in throughput and overall performance. Extensive experimentation demonstrates that our optimized CUDA implementation of $SPHINCS^+$ achieves superior performance. Specifically, our GRASP scheme delivers throughput improvements ranging from 1.37× to 3.45× compared to state-of-the-art GPU-based solutions and surpasses the NIST reference implementation by over three orders of magnitude, highlighting a significant performance advantage.
Last updated:  2025-01-16
On Composing Generic Voting Schemes for Improved Privacy
Oskar Goldhahn
Hybrid encryption provides a way for schemes to distribute trust among many computational assumptions, for instance by composing existing schemes. This is increasingly relevant as quantum computing advances, because it lets us get the best of both worlds: the privacy of post-quantum schemes and the reliability of the more battle-tested classical schemes. We show how to compose members of a very general class of voting schemes and prove that this composition preserves correctness and integrity and improves privacy compared to the constituent parts. We also show an example composition using a lattice-based decryption mixnet, where the improvement in privacy can indirectly lead to an improvement in integrity.
Last updated:  2025-01-16
Shielded CSV: Private and Efficient Client-Side Validation
Jonas Nick, Liam Eagen, and Robin Linus
Cryptocurrencies allow mutually distrusting users to transact monetary value over the internet without relying on a trusted third party. Bitcoin, the first cryptocurrency, achieved this through a novel protocol used to establish consensus about an ordered transaction history. This requires every transaction to be broadcast and verified by the network, incurring communication and computational costs. Furthermore, transactions are visible to all nodes of the network, eroding privacy, and are recorded permanently, contributing to increasing storage requirements over time. To limit resource usage of the network, Bitcoin currently supports an average of 11 transactions per second. Most cryptocurrencies today still operate in a substantially similar manner. Private cryptocurrencies like Zcash and Monero address the privacy issue by replacing transactions with proofs of transaction validity. However, this enhanced privacy comes at the cost of increased communication, storage, and computational requirements. Client-Side Validation (CSV) is a paradigm that addresses these issues by removing transaction validation from the blockchain consensus rules. This approach allows sending a coin along with a validity proof directly to its recipient, reducing communication, computation, and storage costs. CSV protocols deployed on Bitcoin today do not fully leverage the paradigm's potential, as they still necessitate the overhead of publishing ordinary Bitcoin transactions. Moreover, the size of their coin proofs is proportional to the coin's transaction history, and they provide limited privacy. A recent improvement is the Intmax2 CSV protocol, which writes significantly less data to the blockchain compared to a blockchain transaction and has succinct coin proofs. In this work, we introduce Shielded CSV, which improves upon state-of-the-art CSV protocols by providing the first construction that offers truly private transactions. It addresses the issues of traditional private cryptocurrency designs by requiring only 64 bytes of data per transaction, called a nullifier, to be written to the blockchain. Moreover, for each nullifier in the blockchain, Shielded CSV users only need to perform a single Schnorr signature verification, while non-users can simply ignore this data. The size and verification cost of coin proofs for Shielded CSV receivers is independent of the transaction history. Thus, one application of Shielded CSV is adding privacy to Bitcoin at a rate of 100 transactions per second, provided there is an adequate bridging mechanism to the blockchain. We specify Shielded CSV using the Proof Carrying Data (PCD) abstraction. We then discuss two implementation strategies that we believe to be practical, based on Folding Schemes and Recursive STARKs, respectively. Finally, we propose future extensions, demonstrating the power of the PCD abstraction and the extensibility of Shielded CSV. This highlights the significant potential for further improvements to the Shielded CSV framework and protocols built upon it.
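The per-nullifier work for a non-user node is a single Schnorr verification; in toy form (a tiny group and illustrative encoding; the deployed system would use a 64-byte $(R, s)$ encoding over a standard group):

    import hashlib, random

    p, q, g = 23, 11, 2                       # subgroup of prime order 11 modulo 23

    def H(*args):
        digest = hashlib.sha256("|".join(map(str, args)).encode()).digest()
        return int.from_bytes(digest, "big") % q

    x = random.randrange(1, q)                # sender's secret key
    y = pow(g, x, p)                          # public key

    def sign(msg):                            # "nullifier" payload: the pair (R, s)
        r = random.randrange(1, q)
        R = pow(g, r, p)
        return R, (r + H(R, msg) * x) % q

    def verify(msg, R, s):                    # the only per-nullifier check a node runs
        return pow(g, s, p) == R * pow(y, H(R, msg), p) % p

    R, s = sign("coin-transfer")
    assert verify("coin-transfer", R, s)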
Last updated:  2025-01-16
Constant latency and finality for dynamically available DAG
Hans Schmiedel, Runchao Han, Qiang Tang, Ron Steinfeld, and Jiangshan Yu
Directed Acyclic Graph (DAG) based protocols have shown great promise to improve the performance of blockchains. The CAP theorem shows that it is impossible to have a single system that achieves both liveness (known as dynamic availability) and safety under network partition. This paper explores two types of DAG-based protocols prioritizing liveness or safety, named structured dissemination and Graded Common Prefix (GCP), respectively. For the former, we introduce the first DAG-based protocol with constant expected latency, providing high throughput dynamic availability under the sleepy model. Its expected latency is $3\Delta$ and its throughput linearly scales with participation. We validate these expected performance improvements over existing constant-latency sleepy-model BFT by running prototypes of each protocol across multiple machines. The latter, GCP, is a primitive that provides safety under network partition, while being weaker than standard consensus. As a result, we are able to obtain a construction that runs in only $2$ communication steps, as opposed to the $4$ steps of existing low-latency partially synchronous BFT. In addition, GCP can easily avoid relying on a single leader's proposals, making it more resilient to crashes. We also validate these theoretical benefits of GCP experimentally. We leverage our findings to extend the Ebb-and-Flow framework, where two BFT sub-protocols allow different types of clients in the same system to prioritize either liveness or safety. Our extension integrates our two types of DAG-based protocols. This provides a hybrid DAG-based protocol with high throughput, dynamic availability, and finality under network partitions, without running a standard consensus protocol twice as required in existing work.
Last updated:  2025-01-16
Efficient Homomorphic Integer Computer from CKKS
Jaehyung Kim
As Fully Homomorphic Encryption (FHE) enables computation over encrypted data, it is natural to ask how efficiently it handles standard integer computations like $64$-bit arithmetic. It has long been believed that the CGGI/DM family or the BGV/BFV family are the best options, depending on the degree of parallelism. The Cheon-Kim-Kim-Song (CKKS) scheme, although being widely used in many applications like machine learning, was not considered a good option as it is more focused on computing real numbers rather than integers. Recently, Drucker et al. [J. Cryptol.] suggested using CKKS for discrete computations, by separating the error/noise from the discrete message. Since then, there have been several breakthroughs in the discrete variant of CKKS, including the CKKS-style functional bootstrapping by Bae et al. [Asiacrypt'24]. Notably, the CKKS-style functional bootstrapping can be regarded as a parallelization of CGGI/DM functional bootstrapping, and it is several orders of magnitude faster in terms of throughput. Based on the CKKS-style functional bootstrapping, Kim and Noh [ePrint, 2024/1638] designed an efficient homomorphic modular reduction for CKKS, leading to arithmetic modulo small integers. Although it is known that CKKS is efficient for handling small integers like $4$ or $8$ bits, it is still unclear whether its efficiency extends to larger integers like $32$ or $64$ bits. In this paper, we propose a novel method for homomorphic unsigned integer computations. We represent a large integer (e.g. $64$-bit) as a vector of smaller chunks (e.g. $4$-bit) and construct arithmetic operations relying on the CKKS-style functional bootstrapping. The proposed scheme supports many of the operations supported in TFHE-rs while outperforming it in terms of amortized running time. Notably, our homomorphic 64-bit multiplication takes $17.9$ms per slot, which is more than three orders of magnitude faster than TFHE-rs.
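As a plaintext illustration of the chunked representation (the chunk width and wrap-around convention below are merely examples; in the scheme itself the chunks live in CKKS slots, and the digit and carry extraction is performed by functional bootstrapping):

```python
# Toy plaintext model of the chunked integer representation described above.
# In the actual scheme the chunks live in CKKS slots, and the carry/digit
# extraction below is evaluated with CKKS-style functional bootstrapping.

CHUNK_BITS = 4          # chunk size (e.g. 4-bit digits)
BASE = 1 << CHUNK_BITS  # digit base, 16
NUM_CHUNKS = 16         # 16 * 4 = 64 bits

def to_chunks(x: int) -> list[int]:
    """Decompose a 64-bit unsigned integer into base-16 digits, little-endian."""
    return [(x >> (CHUNK_BITS * i)) & (BASE - 1) for i in range(NUM_CHUNKS)]

def from_chunks(chunks: list[int]) -> int:
    return sum(c << (CHUNK_BITS * i) for i, c in enumerate(chunks))

def add_chunks(a: list[int], b: list[int]) -> list[int]:
    """Schoolbook addition; the digit reduction and carry extraction are the
    lookup-table operations functional bootstrapping would perform under FHE."""
    out, carry = [], 0
    for x, y in zip(a, b):
        s = x + y + carry
        out.append(s % BASE)   # low digit  (one lookup table under FHE)
        carry = s // BASE      # carry bit  (another lookup table under FHE)
    return out                 # final carry dropped: wrap-around mod 2^64

x, y = 0xDEADBEEFDEADBEEF, 0x0123456789ABCDEF
assert from_chunks(add_chunks(to_chunks(x), to_chunks(y))) == (x + y) % 2**64
```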
Last updated:  2025-01-16
Efficient theta-based algorithms for computing $(\ell, \ell)$-isogenies on Kummer surfaces for arbitrary odd $\ell$
Ryo Yoshizumi, Hiroshi Onuki, Ryo Ohashi, Momonari Kudo, and Koji Nuida
Isogeny-based cryptography is one of the candidates for post-quantum cryptography. Recently, many isogeny-based cryptosystems using isogenies between Kummer surfaces were proposed. Most of those cryptosystems use $(2,2)$-isogenies. However, to broaden the range of possible cryptosystems, higher-degree isogenies, say $(\ell,\ell)$-isogenies for an odd $\ell$, are also crucial. For an odd $\ell$, Lubicz and Robert gave a formula to compute $(\ell)^g$-isogenies in general dimension $g$. In this paper, we propose explicit and efficient algorithms to compute $(\ell,\ell)$-isogenies between Kummer surfaces, based on the Lubicz-Robert formula. In particular, we propose two algorithms for computing the codomain of the isogeny and two algorithms for evaluating the image of a point under the isogeny. Then, we count the number of arithmetic operations required by each of our proposed algorithms and, for each $\ell$, determine the most efficient algorithm of each of the two types in terms of the number of arithmetic operations. As an application, using the most efficient one, we implemented the SIDH attack on B-SIDH in SageMath. In a setting that originally claimed 128-bit security, our implementation was able to recover the secret key in about 11 hours.
Last updated:  2025-01-16
Morgana: a laconic circuit builder
Lev Soukhanov and Yaroslav Rebenko
We construct a novel SNARK proof system, Morgana. The main property of our system is its small circuit keys, which are proportional in size to the description of the circuit, rather than to the number of constraints. Previously, a common approach to this problem was to first construct a universal circuit (colloquially known as a zk-VM), and then simulate an application circuit within it. However, this approach introduces significant overhead. Our system, on the other hand, results in a direct speedup compared to Spartan, the state-of-the-art SNARK for R1CS. Additionally, small circuit keys enable the construction of zk-VMs from our system through a novel approach: first, outputting a commitment to the circuit key, and second, executing our circuit argument for this circuit key.
Last updated:  2025-01-16
Quantum One-Wayness of the Single-Round Sponge with Invertible Permutations
Joseph Carolan and Alexander Poremba
Sponge hashing is a widely used class of cryptographic hash algorithms which underlies the current international hash function standard SHA-3. In a nutshell, a sponge function takes as input a bit-stream of any length and processes it via a simple iterative procedure: it repeatedly feeds each block of the input into a so-called block function, and then produces a digest by once again iterating the block function on the final output bits. While much is known about the post-quantum security of the sponge construction when the block function is modeled as a random function or one-way permutation, the case of invertible permutations, which more accurately models the construction underlying SHA-3, has so far remained a fundamental open problem. In this work, we make new progress towards overcoming this barrier and show several results. First, we prove the ``double-sided zero-search'' conjecture proposed by Unruh (ePrint 2021) and show that finding zero-pairs in a random $2n$-bit permutation requires $\Omega(2^{n/2})$ queries, and this is tight due to Grover's algorithm. At the core of our proof lies a novel ``symmetrization argument'' which uses insights from the theory of Young subgroups. Second, we consider more general variants of the double-sided search problem and show similar query lower bounds for them. As an application, we prove the quantum one-wayness of the single-round sponge with invertible permutations in the quantum random permutation model.
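For readers unfamiliar with the construction, a minimal sponge sketch follows (toy rate and capacity; note that the stand-in block function below is built from SHA-256 and is not invertible, which is precisely the gap between the random-function model and the invertible-permutation model this paper addresses):

```python
import hashlib

# Minimal sponge sketch: rate-r absorb/squeeze around a public block function.
# Toy parameters only; SHA-3 uses the 1600-bit Keccak-f permutation with,
# e.g., rate 1088 and capacity 512 bits.

RATE, CAPACITY = 8, 8             # bytes; state = rate part || capacity part
STATE = RATE + CAPACITY

def block_function(state: bytes) -> bytes:
    # Stand-in for the public block function (NOT Keccak-f). We fake it with
    # SHA-256 purely to make the sketch runnable; unlike the permutation
    # underlying SHA-3, this stand-in is not invertible.
    return hashlib.sha256(state).digest()[:STATE]

def sponge(msg: bytes, out_len: int) -> bytes:
    msg += b"\x01" + b"\x00" * (-(len(msg) + 1) % RATE)   # simple padding
    state = bytes(STATE)
    for i in range(0, len(msg), RATE):                    # absorb phase
        block = msg[i:i + RATE] + bytes(CAPACITY)
        state = block_function(bytes(a ^ b for a, b in zip(state, block)))
    out = b""
    while len(out) < out_len:                             # squeeze phase
        out += state[:RATE]
        state = block_function(state)
    return out[:out_len]

print(sponge(b"hello", 16).hex())
```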
Last updated:  2025-01-16
SoK: Trusted setups for powers-of-tau strings
Faxing Wang, Shaanan Cohney, and Joseph Bonneau
Many cryptographic protocols rely upon an initial \emph{trusted setup} to generate public parameters. While the concept is decades old, trusted setups have gained prominence with the advent of blockchain applications utilizing zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs), many of which rely on a ``powers-of-tau'' setup. Because such setups feature a dangerous trapdoor which undermines security if leaked, multiparty protocols are used to prevent the trapdoor from being known by any one party. Practical setups utilize an elaborate public ceremony to build confidence that the setup was not subverted. In this paper, we aim to systematize existing knowledge on trusted setups, drawing the distinction between setup \emph{protocols} and \emph{ceremonies}, and shed light on the different features of various approaches. We establish a taxonomy of protocols and evaluate real-world ceremonies based on their design principles, strengths, and weaknesses.
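To fix ideas, a toy powers-of-tau update looks as follows (a multiplicative group modulo a prime stands in for the pairing-friendly elliptic-curve groups used in practice; nothing here is a secure parameter choice):

```python
import secrets

# Toy powers-of-tau ceremony: each participant multiplies its own secret into
# the trapdoor tau without anyone ever learning tau itself.

p = 2**127 - 1            # a Mersenne prime; elements live in Z_p^*
q = p - 1                 # exponents can be reduced mod the group order
g = 3
DEGREE = 4                # the SRS supports polynomials up to this degree

def contribute(srs, s):
    # Raise the j-th element to s^j: after all participants, the j-th
    # element equals g^{tau^j} with tau the product of all contributions.
    return [pow(srs[j], pow(s, j, q), p) for j in range(len(srs))]

srs = [g] * (DEGREE + 1)  # start with tau = 1
toxic_waste = []          # kept here only to check the result; a real
for _ in range(3):        # ceremony destroys each secret after use
    s = secrets.randbelow(q - 2) + 2
    toxic_waste.append(s)
    srs = contribute(srs, s)

tau = 1
for s in toxic_waste:
    tau = tau * s % q
assert srs == [pow(g, pow(tau, j, q), p) for j in range(DEGREE + 1)]
```

The trapdoor stays hidden as long as at least one participant discards their secret, which is exactly why the surrounding ceremony, and not just the protocol, matters.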
Last updated:  2025-01-15
$\mathsf{Cougar}$: Cubic Root Verifier Inner Product Argument under Discrete Logarithm Assumption
Hyeonbum Lee, Seunghun Paik, Hyunjung Son, and Jae Hong Seo
An inner product argument (IPA) is a cryptographic primitive used to construct a zero-knowledge proof system, which is a notable privacy-enhancing technology. We propose a novel efficient IPA called $\mathsf{Cougar}$. $\mathsf{Cougar}$ features a cubic root verifier and logarithmic communication under the discrete logarithm (DL) assumption. At Asiacrypt 2022, Kim et al. proposed two square root verifier IPAs under the DL assumption. Our main objective is to overcome the limitation of square root complexity in the DL setting. To achieve this, we combine two distinct square root IPAs from Kim et al.: one with pairing ($\mathsf{Protocol3}$, later named $\mathsf{Leopard}$) and one without pairing ($\mathsf{Protocol4}$). To construct $\mathsf{Cougar}$, we first revisit $\mathsf{Protocol4}$ and reconstruct it to make it compatible with the proof system for the homomorphic commitment scheme. Next, we utilize $\mathsf{Protocol3}$ as the proof system for the reconstructed $\mathsf{Protocol4}$. Finally, to facilitate proving the relation between elliptic curve points appearing in $\mathsf{Protocol4}$, we introduce a novel $\mathsf{Plonkish}$-based proof system equipped with custom gates for mixed elliptic curve addition. We show that $\mathsf{Cougar}$ indeed satisfies all the claimed features, along with providing a soundness proof under the DL assumption. In addition, we implemented $\mathsf{Cougar}$ in Rust, demonstrating that the verification time of $\mathsf{Cougar}$ increases much more slowly as the length of the witness $N$ grows, compared to other IPAs under the DL assumption and transparent setup: BulletProofs and $\mathsf{Leopard}$. Concretely, $\mathsf{Cougar}$ takes 0.346s for verification in our setting when $N = 2^{20}$, which is a $50\times$ speed-up over BulletProofs.
Last updated:  2025-01-15
PunSearch: Enabling Puncturable Encrypted Search over Lattice for Cloud Storage Systems
Yibo Cao, Shiyuan Xu, Gang Xu, Xiu-Bo Chen, Tao Shang, Yuling Chen, and Zongpeng Li
Searchable encryption (SE) has been widely studied for cloud storage systems, allowing encrypted data search and retrieval. However, existing SE schemes cannot support fine-grained searchability revocation, making them impractical for real applications. Puncturable encryption (PE) [Oakland'15] can revoke the decryption ability of a data receiver for a specific message, which can potentially alleviate this issue. Moreover, the threat of quantum computing remains an important and realistic concern, potentially leading to data privacy leakage for cloud storage systems. Consequently, designing a post-quantum puncturable encrypted search scheme remains an open problem. In this paper, we propose PunSearch, the first puncturable encrypted search scheme over lattices for privacy-preserving outsourced data in cloud storage systems. PunSearch provides fine-grained searchability revocation while enjoying quantum safety. Different from existing PE schemes, we construct a novel trapdoor generation mechanism through evaluation algorithms and a lattice preimage sampling technique. We then design a search permission verification method to revoke the searchability for specific keywords. Furthermore, we formalize a new IND-Pun-CKA security model, and utilize it to analyze the security of PunSearch. Comprehensive performance evaluation indicates that the computational overheads of the Encrypt, Trapdoor, Search, and Puncture algorithms in PunSearch are just 0.06, 0.005, 0.05, and 0.31 times those of prior art, respectively, in the best cases. These results demonstrate that PunSearch is effective and secure for cloud storage systems.
Last updated:  2025-01-15
Foundations of Data Availability Sampling
Mathias Hall-Andersen, Mark Simkin, and Benedikt Wagner
Towards building more scalable blockchains, an approach known as data availability sampling (DAS) has emerged over the past few years. Even large blockchains like Ethereum are planning to eventually deploy DAS to improve their scalability. In a nutshell, DAS allows the participants of a network to ensure the full availability of some data without any one participant downloading it entirely. Despite the significant practical interest that DAS has received, there are currently no formal definitions for this primitive, no security notions, and no security proofs for any candidate constructions. For a cryptographic primitive that may end up being widely deployed in large real-world systems, this is a rather unsatisfactory state of affairs. In this work, we initiate a cryptographic study of data availability sampling. To this end, we define data availability sampling precisely as a clean cryptographic primitive. Then, we show how data availability sampling relates to erasure codes. We do so by defining a new type of commitment schemes which naturally generalizes vector commitments and polynomial commitments. Using our framework, we analyze existing constructions and prove them secure. In addition, we give new constructions which are based on weaker assumptions, computationally more efficient, and do not rely on a trusted setup, at the cost of slightly larger communication complexity. Finally, we evaluate the trade-offs of the different constructions.
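A back-of-the-envelope sketch of the sampling idea (a toy availability model only; the formal definitions in the paper also have to deal with adversarially malformed encodings, which is where the connection to erasure codes and commitments comes in):

```python
from random import randrange

# Data is erasure-coded at rate 1/2, so any k of the n = 2k coded chunks
# suffice to reconstruct it. An adversary who wants the data to be
# unavailable can publish at most k - 1 chunks, so each uniformly random
# sample finds its chunk missing with probability > 1/2.

k = 256
n = 2 * k
published = set(range(k - 1))        # worst case: one chunk short of recovery

def client_accepts(num_samples: int) -> bool:
    return all(randrange(n) in published for _ in range(num_samples))

trials = 100_000
fooled = sum(client_accepts(30) for _ in range(trials))
print(f"{fooled}/{trials} clients fooled")  # expect ~0: per-client prob < 2^-30
```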
Last updated:  2025-01-15
Formal Definition and Verification for Combined Random Fault and Random Probing Security
Sonia Belaid, Jakob Feldtkeller, Tim Güneysu, Anna Guinet, Jan Richter-Brockmann, Matthieu Rivain, Pascal Sasdrich, and Abdul Rahman Taleb
In our highly digitalized world, an adversary is not constrained to purely digital attacks but can monitor or influence the physical execution environment of a target computing device. Such side-channel or fault-injection analysis poses a significant threat to otherwise secure cryptographic implementations. Hence, it is important to consider additional adversarial capabilities when analyzing the security of cryptographic implementations besides the default black-box model. For side-channel analysis, this is done by providing the adversary with knowledge of some internal values, while for fault-injection analysis the capabilities of the adversaries include manipulation of some internal values. In this work, we extend probabilistic security models for physical attacks, by introducing a general random probing model and a general random fault model to capture arbitrary leakage and fault distributions, as well as the combination of these models. Our aim is to enable a more accurate modeling of low-level physical effects. We then analyze important properties, such as the impact of adversarial knowledge on faults and compositions, and provide tool-based formal verification methods that allow the security assessment of design components. These methods are introduced as extension of previous tools VERICA and IronMask which are implemented, evaluated and compared.
Last updated:  2025-01-15
How (not) to hash into class groups of imaginary quadratic fields?
István András Seres, Péter Burcsi, and Péter Kutas
Class groups of imaginary quadratic fields (class groups for short) have seen a resurgence in cryptography as transparent groups of unknown order. They are a prime candidate for being a trustless alternative to RSA groups because class groups do not need a (distributed) trusted setup to sample a cryptographically secure group of unknown order. Class groups have recently found many applications in verifiable secret sharing, secure multiparty computation, transparent polynomial commitments, and perhaps most importantly, in time-based cryptography, i.e., verifiable delay functions, (homomorphic) time-lock puzzles, timed commitments, etc. However, there are various roadblocks to making class groups widespread in practical cryptographic deployments. We initiate the rigorous study of hashing into class groups. Specifically, we want to sample a uniformly distributed group element in a class group such that nobody knows its discrete logarithm with respect to any public parameter. We point out several flawed algorithms in numerous publicly available class group libraries. We further illustrate the insecurity of these hash functions by showing concrete attacks against cryptographic protocols, i.e., verifiable delay functions, if they were deployed with one of those broken hash-to-class group functions. We propose two families of cryptographically secure hash functions into class groups. We implement these constructions and evaluate their performance. We release our implementation as an open-source library.
Last updated:  2025-01-15
Treating dishonest ciphertexts in post-quantum KEMs -- explicit vs. implicit rejection in the FO transform
Kathrin Hövelmanns and Mikhail Kudinov
We revisit a basic building block in the endeavor to migrate to post-quantum secure cryptography, Key Encapsulation Mechanisms (KEMs). KEMs enable the establishment of a shared secret key, using only public communication. When targeting chosen-ciphertext security against quantum attackers, the go-to method is to design a Public-Key Encryption (PKE) scheme and then apply a variant of the PKE-to-KEM conversion known as the Fujisaki-Okamoto (FO) transform, which we revisit in this work. Intuitively, FO ensures chosen-ciphertext security by rejecting dishonest ciphertexts. This comes in two flavors: the KEM could reject by returning an explicit failure symbol $\bot$ ('explicit' reject) or instead by returning a pseudo-random key ('implicit' reject). During the NIST post-quantum standardization process, designers chose implicit rejection, likely due to the availability of security proofs against quantum attackers. On the other hand, implicit rejection introduces complexity and can easily deteriorate into explicit rejection in practice. While it was proven that implicit rejection is not less secure than explicit rejection, the other direction was less clear. This is relevant because the available security proofs against quantum attackers still leave things to be desired. When envisioning future improvements, due to, e.g., advancements in quantum proof techniques, having to treat both variants separately creates unnecessary overhead. In this work, we thus re-evaluate the relationship between the two design approaches and address the so-far unexplored direction: in the classical random oracle model, we show that explicit rejection is not less secure than implicit rejection, up to a rare edge case. This, however, uses the observability of random oracle queries. To lift the proof into the quantum world, we make use of the extractable QROM (eQROM). As an alternative that works without the eQROM, we give an indirect proof that involves a new necessity statement on the involved PKE scheme.
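The two rejection flavors are easy to see in code. The sketch below wraps a degenerate toy PKE (a keyed one-time pad, standing in for a real post-quantum PKE) in an FO-style decapsulation; only the last line of decaps differs between the two variants:

```python
import hashlib, os

def H(*parts: bytes) -> bytes:
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

K = os.urandom(32)          # toy PKE: public and secret key coincide
S = os.urandom(32)          # rejection secret stored in the secret key

def toy_encrypt(m: bytes, coins: bytes) -> bytes:
    pad = H(b"pad", K, coins)
    return coins + bytes(a ^ b for a, b in zip(m, pad))

def toy_decrypt(ct: bytes) -> bytes:
    coins, body = ct[:32], ct[32:]
    pad = H(b"pad", K, coins)
    return bytes(a ^ b for a, b in zip(body, pad))

def encaps():
    m = os.urandom(32)
    ct = toy_encrypt(m, H(b"coins", m))        # derandomized encryption
    return ct, H(b"key", m, ct)

def decaps(ct: bytes, implicit: bool):
    m = toy_decrypt(ct)
    if toy_encrypt(m, H(b"coins", m)) == ct:   # FO re-encryption check
        return H(b"key", m, ct)
    return H(b"reject", S, ct) if implicit else None   # implicit vs explicit

ct, key = encaps()
assert decaps(ct, implicit=True) == key
bad = ct[:-1] + bytes([ct[-1] ^ 1])            # a dishonest ciphertext
assert decaps(bad, implicit=False) is None     # explicit: the symbol ⊥
assert decaps(bad, implicit=True) != key       # implicit: pseudorandom key
```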
Last updated:  2025-01-15
FOLEAGE: $\mathbb{F}_4$OLE-Based Multi-Party Computation for Boolean Circuits
Maxime Bombar, Dung Bui, Geoffroy Couteau, Alain Couvreur, Clément Ducros, and Sacha Servan-Schreiber
Secure Multi-party Computation (MPC) allows two or more parties to compute any public function over their privately-held inputs, without revealing any information beyond the result of the computation. Modern protocols for MPC generate a large amount of input-independent preprocessing material called multiplication triples, in an offline phase. This preprocessing can later be used by the parties to efficiently instantiate an input-dependent online phase computing the function. To date, the state-of-the-art secure multi-party computation protocols in the preprocessing model are tailored to secure computation of arithmetic circuits over large fields and require little communication in the preprocessing phase, typically $O(N \cdot m)$ to generate $m$ triples among $N$ parties. In contrast, when it comes to computing preprocessing for computations that are naturally represented as Boolean circuits, the state-of-the-art techniques have not evolved since the 1980s, and in particular, require every pair of parties to execute a large number of oblivious transfers before interacting to convert them to $N$-party triples, which induces an $\Omega(N^2 \cdot m)$ communication overhead. In this paper, we introduce FOLEAGE, which addresses this gap by introducing an efficient preprocessing protocol tailored to Boolean circuits. FOLEAGE exhibits excellent performance: It generates $m$ multiplication triples over $\mathbb{F}_2$ using only $N \cdot m + O(N^2 \cdot \log m)$ bits of communication for $N$ parties, and can concretely produce over 12 million triples per second in the 2-party setting on one core of a commodity machine. Our result builds upon an efficient Pseudorandom Correlation Generator (PCG) for multiplication triples over the field $\mathbb{F}_4$. Roughly speaking, a PCG enables parties to stretch a short seed into a large number of pseudorandom correlations non-interactively, which greatly improves the efficiency of the offline phase in MPC protocols. Our construction significantly outperforms the state-of-the-art, which we demonstrate via a prototype implementation. This is achieved by introducing a number of protocol-level, algorithmic-level, and implementation-level optimizations on the recent PCG construction of Bombar et al. (Crypto 2023) from the Quasi-Abelian Syndrome Decoding assumption.
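For context, the following sketch shows what a multiplication triple buys in the online phase, here for two parties over $\mathbb{F}_2$ (Beaver's classic trick; in FOLEAGE the offline triples would come from the PCG rather than being sampled in the clear as below):

```python
import secrets

P = 2  # Boolean circuits: arithmetic over F_2

def share(x):
    r = secrets.randbelow(P)
    return r, (x - r) % P

def reveal(s0, s1):
    return (s0 + s1) % P

# Offline phase: a random triple c = a*b, secret-shared between the parties.
a, b = secrets.randbelow(P), secrets.randbelow(P)
a0, a1 = share(a); b0, b1 = share(b); c0, c1 = share(a * b % P)

# Online phase: multiply secret-shared inputs x and y with two cheap openings.
x, y = 1, 1
x0, x1 = share(x); y0, y1 = share(y)

d = reveal((x0 - a0) % P, (x1 - a1) % P)   # d = x - a, safe to reveal
e = reveal((y0 - b0) % P, (y1 - b1) % P)   # e = y - b, safe to reveal

z0 = (d * e + d * b0 + e * a0 + c0) % P    # party 0's share (takes the d*e term)
z1 = (d * b1 + e * a1 + c1) % P            # party 1's share
assert reveal(z0, z1) == x * y % P         # since xy = de + db + ea + ab
```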
Last updated:  2025-01-15
KyberSlash: Exploiting secret-dependent division timings in Kyber implementations
Daniel J. Bernstein, Karthikeyan Bhargavan, Shivam Bhasin, Anupam Chattopadhyay, Tee Kiah Chia, Matthias J. Kannwischer, Franziskus Kiefer, Thales Paiva, Prasanna Ravi, and Goutam Tamvada
This paper presents KyberSlash1 and KyberSlash2, two timing vulnerabilities in several implementations (including the official reference code) of the Kyber Post-Quantum Key Encapsulation Mechanism, recently standardized as ML-KEM. We demonstrate the exploitability of both KyberSlash1 and KyberSlash2 on two popular platforms: the Raspberry Pi 2 (Arm Cortex-A7) and the Arm Cortex-M4 microprocessor. Kyber secret keys are reliably recovered within minutes for KyberSlash2 and a few hours for KyberSlash1. We responsibly disclosed these vulnerabilities to maintainers of various libraries and they have swiftly been patched. We present two approaches for detecting and avoiding similar vulnerabilities. First, we patch the dynamic analysis tool Valgrind to allow detection of variable-time instructions operating on secret data, and apply it to more than 1000 implementations of cryptographic primitives in SUPERCOP. We report multiple findings. Second, we propose a more rigorous approach to guarantee the absence of variable-time instructions in cryptographic software using formal methods.
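The class of bug and the standard remedy can be sketched as follows (Python stands in for the C code, so no timing is visible here; the point is the arithmetic identity that lets an implementation replace a secret-dependent division by a multiply-and-shift with public constants; the constants below are chosen for this sketch, not taken from any particular patch):

```python
# Compressing a secret coefficient t in [0, q) to one bit computes
# (((t << 1) + q//2) // q) & 1. On many CPUs a compiled division instruction
# runs in secret-dependent time; the fix is a division-free equivalent.

q = 3329                       # the Kyber/ML-KEM modulus
M, S = 5040, 24                # M = ceil(2^S / q); valid on the range below

def compress_bit_divide(t: int) -> int:      # variable-time as C on many CPUs
    return (((t << 1) + q // 2) // q) & 1

def compress_bit_ct(t: int) -> int:          # division-free replacement
    return ((((t << 1) + q // 2) * M) >> S) & 1

# Exhaustively check the identity over the whole input range.
assert all(compress_bit_divide(t) == compress_bit_ct(t) for t in range(q))
```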
Last updated:  2025-01-15
BlindPerm: Efficient MEV Mitigation with an Encrypted Mempool and Permutation
Alireza Kavousi, Duc V. Le, Philipp Jovanovic, and George Danezis
To mitigate the negative effects of the maximal extractable value (MEV), we propose techniques that utilize randomized permutation to shuffle the order of transactions in a committed block before execution. We argue that existing approaches based on encrypted mempools cannot provide sufficient mitigation, particularly against the block producer, and can be extended by permutation-based techniques to provide multi-layer protection. With a focus on PoS committee-based consensus, we then introduce BlindPerm, a framework enhancing an encrypted mempool with permutation, and present various optimizations. Notably, we propose a protocol where this enhancement comes at essentially no overhead by piggybacking on the encrypted mempool. Further, we demonstrate how to extend our mitigation technique to support PoW longest-chain consensus. Finally, we illustrate the effectiveness of our solutions on arbitrage and sandwich attacks, the two main types of MEV extraction, by running simulations using historical Ethereum data.
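The permutation step itself is simple to sketch (this shows only the deterministic post-commitment shuffle that everyone can recompute; deriving an unbiased seed that is unpredictable at commitment time is the part the framework actually designs):

```python
import hashlib, random

# Once the block (here, a list of transaction ids) is committed, a public
# random seed fixes a Fisher-Yates shuffle of the execution order, so the
# proposer cannot choose a profitable ordering in advance.

def shuffled_order(committed_txs: list[str], beacon: bytes) -> list[str]:
    seed = int.from_bytes(hashlib.sha256(beacon).digest(), "big")
    order = list(committed_txs)
    random.Random(seed).shuffle(order)   # deterministic given the seed
    return order

block = ["tx-a", "tx-b", "tx-c", "tx-d"]
print(shuffled_order(block, beacon=b"post-commitment randomness"))
```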
Last updated:  2025-01-14
CAPSS: A Framework for SNARK-Friendly Post-Quantum Signatures
Thibauld Feneuil and Matthieu Rivain
In this paper, we present a general framework for constructing SNARK-friendly post-quantum signature schemes based on minimal assumptions, specifically the security of an arithmetization-oriented family of permutations. The term "SNARK-friendly" here refers to the efficiency of the signature verification process in terms of SNARK constraints, such as R1CS or AIR constraints used in STARKs. Within the CAPSS framework, signature schemes are designed as proofs of knowledge of a secret preimage of a one-way function, where the one-way function is derived from the chosen permutation family. To obtain compact signatures with SNARK-friendly verification, our primary goal is to achieve a hash-based proof system that is efficient in both proof size and arithmetization of the verification process. To this end, we introduce SmallWood, a hash-based polynomial commitment and zero-knowledge argument scheme tailored for statements arising in this context. The SmallWood construction leverages techniques from Ligero, Brakedown, and Threshold-Computation-in-the-Head (TCitH) to achieve proof sizes that outperform the state of the art of hash-based zero-knowledge proof systems for witness sizes ranging from $2^5$ to $2^{16}$. From the SmallWood proof system and further optimizations for SNARK-friendliness, the CAPSS framework offers a generic transformation of any arithmetization-oriented permutation family into a SNARK-friendly post-quantum signature scheme. We provide concrete instances built on permutations such as Rescue-Prime, Poseidon, Griffin, and Anemoi. For the Anemoi family, achieving 128-bit security, our approach produces signatures of sizes ranging from 9 to 13.3 KB, with R1CS constraints between 19K and 29K. This represents a 4-6$\times$ reduction in signature size and a 5-8$\times$ reduction in R1CS constraints compared to Loquat (CRYPTO 2024), a SNARK-friendly post-quantum signature scheme based on the Legendre PRF.
Last updated:  2025-01-14
A Key-Recovery Attack on a Leaky Seasign Variant
Shai Levin
We present a key-recovery attack on a variant of the Seasign signature scheme presented by [Kim24], which attempts to avoid rejection sampling by presampling vectors $\mathbf{f}$ such that $\mathbf{f}-\mathbf{e}$ lies within an acceptable bound, where $\mathbf{e}$ is the secret key. We show that this choice leads to a bias in these vectors such that, in a small number of signatures, the secret key can either be completely recovered or its keyspace substantially reduced. In particular, on average, given $20$ signatures, with parameter set II of their paper, the attack reduces the private key to $128$ possibilities.
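A toy simulation conveys the flavor of the bias (the coordinate ranges, the acceptance bound, and the assumption that the mask $\mathbf{f}$ is sometimes revealed directly are simplifications for illustration, not the parameters or exact structure of [Kim24]):

```python
import random

# Toy model: each secret coordinate e is in {-1, 0, 1}. An honest signer
# samples the mask f uniformly from [-B, B] and rejects when f - e falls
# outside [-(B-1), B-1]. Presampling f conditioned on that bound instead
# shifts the support of f by e -- so whenever f itself is revealed, its
# empirical mean leaks the corresponding secret coordinate.

B = 10

def presampled_f(e: int) -> int:
    while True:
        f = random.randint(-B, B)
        if -(B - 1) <= f - e <= B - 1:
            return f

for e in (-1, 0, 1):
    revealed = [presampled_f(e) for _ in range(200_000)]
    print(e, sum(revealed) / len(revealed))   # empirical mean ~ e
```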
Last updated:  2025-01-14
On the Privacy of Sublinear-Communication Jaccard Index Estimation via Min-hash Sketching
Seung Geol Choi, Dana Dachman-Soled, Mingyu Liang, Linsheng Liu, and Arkady Yerukhimovich
The min-hash sketch is a well-known technique for low-communication approximation of the Jaccard index between two input sets. Moreover, there is a folklore belief that min-hash sketch based protocols protect the privacy of the inputs. In this paper, we investigate this folklore to quantify the privacy of the min-hash sketch. We begin our investigation by considering the privacy of min-hash in a centralized setting where the hash functions are chosen by the min-hash functionality and are unknown to the participants. We show that in this case the min-hash output satisfies the standard definition of differential privacy (DP) without any additional noise. This immediately yields a privacy-preserving sublinear-communication semi-honest 2-PC protocol based on FHE where the hash function is evaluated homomorphically. To improve the efficiency of this protocol, we next consider an implementation in the random oracle model. Here, the protocol participants jointly sample public prefixes for domain separation of the random oracle, and locally evaluate the resulting hash functions on their input sets. Unfortunately, we show that in this public hash function setting, the min-hash output is no longer DP. We therefore consider the notion of distributional differential privacy (DDP) introduced by Bassily et al.(FOCS 2013). We show that if the honest party's set has sufficiently high min-entropy then the output of the min-hash functionality achieves DDP, again without any added noise. This yields a more efficient semi-honest two-party protocol in the random oracle model, where parties first locally hash their input sets and then perform a 2PC for comparison. By proving that our protocols satisfy DP and DDP respectively, our results formally confirm and qualify the folklore belief that min-hash based protocols protect the privacy of their inputs.
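For reference, the plaintext primitive under study is standard $k$-hash min-hash sketching and the resulting Jaccard estimate (hash construction and parameters below are arbitrary choices for the sketch):

```python
import hashlib

# A min-hash sketch keeps, for each of k hash functions, the minimum hash
# value over the set; the fraction of agreeing coordinates between two
# sketches is an unbiased estimate of the Jaccard index.

def h(i: int, x: str) -> int:
    return int.from_bytes(hashlib.sha256(f"{i}|{x}".encode()).digest()[:8], "big")

def minhash(s: set[str], k: int = 256) -> list[int]:
    return [min(h(i, x) for x in s) for i in range(k)]

def jaccard_estimate(sk_a: list[int], sk_b: list[int]) -> float:
    return sum(a == b for a, b in zip(sk_a, sk_b)) / len(sk_a)

A = {f"item{i}" for i in range(100)}
B = {f"item{i}" for i in range(50, 150)}
print("true Jaccard:", len(A & B) / len(A | B))   # 50/150 = 0.333...
print("estimate:    ", jaccard_estimate(minhash(A), minhash(B)))
```

The paper's question is exactly how much such a sketch leaks about its input set, in the secret-hash versus public-hash settings.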
Last updated:  2025-01-14
Fair Signature Exchange
Hossein Hafezi, Aditi Partap, Sourav Das, and Joseph Bonneau
We introduce the concept of Fair Signature Exchange (FSE). FSE enables a client to obtain signatures on multiple messages in a fair manner: the client receives all signatures if and only if the signer receives an agreed-upon payment. We formalize security definitions for FSE and present a practical construction based on the Schnorr signature scheme, avoiding computationally expensive cryptographic primitives such as SNARKs. Our scheme imposes minimal overhead on the Schnorr signer and verifier, leaving the signature verification process unchanged from standard Schnorr signatures. Fairness is enforced using a blockchain as a trusted third party, while exchanging only a constant amount of information on-chain regardless of the number of signatures exchanged. We demonstrate how to construct a batch adaptor signature scheme using FSE, and our FSE construction based on Schnorr results in an efficient implementation of a batch Schnorr adaptor signature scheme for the discrete logarithm problem. We implemented our scheme to show that it has negligible overhead compared to standard Schnorr signatures. For instance, exchanging $2^{10}$ signatures on the Vesta curve takes approximately $80$ms for the signer and $300$ms for the verifier, with almost no overhead for the signer and $2\times$ overhead for the verifier compared to the original Schnorr protocol. Additionally, we propose an extension to blind signature exchange, where the signer does not learn the messages being signed. This is achieved through a natural adaptation of blinded Schnorr signatures.
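The adaptor-signature mechanism that FSE batches can be sketched over a toy Schnorr group (tiny parameters for readability; real deployments use elliptic-curve groups, and this shows only the single-signature mechanism, not the batched exchange):

```python
import hashlib, secrets

# A pre-signature is valid only relative to an adaptor point T = g^t;
# knowledge of t completes it into an ordinary Schnorr signature, and seeing
# both the pre-signature and the completed signature reveals t.

p, q, g = 2039, 1019, 4        # order-q subgroup of Z_p^*; g = 2^2 mod p

def H(*parts) -> int:
    data = "|".join(str(x) for x in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x = secrets.randbelow(q - 1) + 1; X = pow(g, x, p)   # signing key pair
t = secrets.randbelow(q - 1) + 1; T = pow(g, t, p)   # adaptor secret / point

# Signer: pre-signature tied to T
k = secrets.randbelow(q - 1) + 1
R = pow(g, k, p)
c = H(R * T % p, "msg")
s_pre = (k + c * x) % q

# Pre-verification: g^s_pre == R * X^c
assert pow(g, s_pre, p) == R * pow(X, c, p) % p

# Completing with t yields an ordinary Schnorr signature (R*T, s)
s = (s_pre + t) % q
assert pow(g, s, p) == (R * T % p) * pow(X, c, p) % p

# Anyone seeing both the pre-signature and the completed one learns t
assert (s - s_pre) % q == t
```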
Last updated:  2025-01-14
Skyscraper: Fast Hashing on Big Primes
Clémence Bouvier, Lorenzo Grassi, Dmitry Khovratovich, Katharina Koschatko, Christian Rechberger, Fabian Schmid, and Markus Schofnegger
Arithmetic hash functions defined over prime fields have been actively developed and used in verifiable computation (VC) protocols. Among those, elliptic-curve-based SNARKs require large (\(256\)-bit and higher) primes. Such hash functions are notably slow, losing a factor of up to \(1000\) compared to regular constructions like SHA-2/3. In this paper, we present the hash function $\textsf{Skyscraper}$, which is aimed at large prime fields and provides major improvements compared to $\texttt{Reinforced Concrete}$ and $\texttt{Monolith}$. First, the design is exactly the same for all large primes, which simplifies analysis and deployment. Secondly, it achieves a performance comparable to cryptographic hash standards by using low-degree non-invertible transformations and minimizing modulo reductions. Concretely, it hashes two \(256\)-bit prime field (BLS12-381 curve scalar field) elements in \(135\) nanoseconds, whereas SHA-256 needs \(42\) nanoseconds on the same machine. The low circuit complexity of $\textsf{Skyscraper}$, together with its high native speed, should allow a substantial reduction in many VC scenarios, particularly in recursive proofs.
Last updated:  2025-01-14
Founding Zero-Knowledge Proofs of Training on Optimum Vicinity
Gefei Tan, Adrià Gascón, Sarah Meiklejohn, Mariana Raykova, Xiao Wang, and Ning Luo
Zero-knowledge proofs of training (zkPoT) allow a party to prove that a model is trained correctly on a committed dataset without revealing any additional information about the model or the dataset. Existing zkPoT protocols prove the entire training process in zero knowledge; i.e., they prove that the final model was obtained in an iterative fashion starting from the training data and a random seed (and potentially other parameters) and applying the correct algorithm at each iteration. This approach inherently requires the prover to perform work linear in the number of iterations. In this paper, we take a different approach to proving the correctness of model training. Our approach is motivated by efficiency but also more urgently by the observation that the prover's ability to pick the random seed used for training introduces the potential for it to bias the model. In other words, if the input to the training algorithm is biased, the resulting model will be biased even if the prover correctly ran the training algorithm. Rather than prove the correctness of the training process, we thus directly prove the correctness of the trained model using a notion we call optimum vicinity, which bounds the distance between the trained model and the mathematically optimal model for models that can be viewed as the solution to a convex optimization problem. We show both theoretically and experimentally that this ensures the trained model behaves similarly to the optimal model, and show this is not true for existing approaches. We also demonstrate significant performance improvements as compared to the existing zkPoT paradigm: the statement proven in ZK in our protocol has a size independent of the number of training iterations, and our Boolean (respectively arithmetic) circuit size is up to $246\times$ (respectively $5\times$) smaller than that of a baseline zkPoT protocol that verifies the whole training process.
Last updated:  2025-01-14
Doubly Efficient Fuzzy Private Set Intersection for High-dimensional Data with Cosine Similarity
Hyunjung Son, Seunghun Paik, Yunki Kim, Sunpill Kim, Heewon Chung, and Jae Hong Seo
Fuzzy private set intersection (Fuzzy PSI) is a cryptographic protocol for privacy-preserving similarity matching, which is one of the essential operations in various real-world applications such as facial authentication, information retrieval, or recommendation systems. Despite recent advancements in fuzzy PSI protocols, a huge barrier still remains to deploying them in these applications. The main obstacle is the high dimensionality of the data, e.g., from 128 to 512; many existing methods, such as Garimella et al. (CRYPTO'23, CRYPTO'24) or van Baarsen et al. (EUROCRYPT'24), suffer from exponential overhead in communication and/or computation cost. In addition, the dominant similarity metric in these applications is cosine similarity, which disables several optimization tricks based on assumptions about the distribution of data, e.g., techniques by Gao et al. (ASIACRYPT'24). In this paper, we propose a novel fuzzy PSI protocol for cosine similarity, called FPHE, that overcomes these limitations at the same time. FPHE features linear complexity in both computation and communication with respect to the dimension of set elements, requiring only a much weaker assumption than prior works. Our basic strategy is to homomorphically compute cosine similarity and run an approximated comparison function, with a clever packing method for efficiency. In addition, we introduce a novel proof technique to harmonize the approximation error from the sign function with the noise flooding, proving the security of FPHE under the semi-honest model. Moreover, we show that our construction can be extended to support various functionalities, such as labeled or circuit fuzzy PSI. Through experiments, we show that FPHE can perform fuzzy PSI over 512-dimensional data in a few minutes, which was computationally infeasible for all previous proposals under the same assumption as ours.
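The plaintext functionality being computed under encryption is simply thresholded cosine similarity (the dimensions and threshold below are arbitrary examples; FPHE evaluates this homomorphically with an approximated comparison in place of the exact one):

```python
import math

# Cosine similarity between a query vector and each server vector, matched
# against a threshold -- the functionality FPHE evaluates under encryption.

def cosine(u: list[float], v: list[float]) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def fuzzy_matches(query, server_vectors, threshold=0.9):
    return [i for i, v in enumerate(server_vectors) if cosine(query, v) >= threshold]

q = [0.1, 0.9, 0.2]
db = [[0.1, 0.8, 0.3], [0.9, 0.1, 0.0]]
print(fuzzy_matches(q, db))   # -> [0]
```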
Last updated:  2025-01-13
More Efficient Lattice-Based Electronic Voting from NTRU
Patrick Hough, Caroline Sandsbråten, and Tjerand Silde
In recent years, there has been much focus on developing core cryptographic primitives based on lattice assumptions, driven by the NIST call for post-quantum key encapsulation and digital signature algorithms. However, more work must be conducted on efficient privacy-preserving protocols based on quantum-safe assumptions. Electronic voting is one such privacy-preserving protocol whose adoption is increasing across the democratic world. E-voting offers both a fast and convenient alternative to postal voting whilst further ensuring cryptographic privacy of votes and offering full verifiability of the process. Owing to the sensitivity of voting and its infrastructure challenges, it is crucial to ensure security against quantum computers is baked into e-voting solutions. We present an e-voting scheme from quantum-safe assumptions based on the hardness of the RLWE and NTRU lattice problems, providing concrete parameters and an efficient implementation. Our design achieves a factor $5.3 \times$ reduction in ciphertext size, $2.5 \times$ reduction in total communication cost, and $2 \times$ reduction in total computation time compared to the state-of-the-art lattice-based voting scheme by Aranha et al. (ACM CCS 2023). We argue that the efficiency of this scheme makes it suitable for real-world elections. Our scheme makes use of non-ternary NTRU secrets to achieve optimal parameters. In order to compute the security of our design, we extend the ternary-NTRU work of Ducas and van Woerden (ASIACRYPT 2021) by determining the concrete fatigue point (for general secrets) of NTRU to be $q = 0.0058 \cdot \sigma^2 \cdot d^{2.484}$ (above which parameters become overstretched) for modulus $q$, ring dimension $d$, and secrets drawn from a Gaussian of parameter $\sigma$. We consider this relation to be of independent interest and demonstrate its significance by improving the efficiency of the (partially) blind signature scheme by del Pino and Katsumata (CRYPTO 2022).
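The fatigue-point relation is directly usable as a calculator (the example inputs below are arbitrary, not parameters from the paper):

```python
# Concrete fatigue point of NTRU for general (non-ternary) secrets, as
# derived above: moduli q above this bound are in the overstretched regime.

def ntru_fatigue_q(sigma: float, d: int) -> float:
    return 0.0058 * sigma**2 * d**2.484

for d in (512, 1024, 2048):
    print(d, f"{ntru_fatigue_q(sigma=1.5, d=d):.3e}")
```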
Last updated:  2025-01-13
Proximity Gaps in Interleaved Codes
Benjamin E. Diamond and Angus Gruen
A linear error-correcting code exhibits proximity gaps if each affine line of words either consists entirely of words which are close to the code or else contains almost no such words. In this short note, we prove that for each linear code which exhibits proximity gaps within the unique decoding radius, that code's interleaved code also does. Combining our result with a recent argument of Angeris, Evans and Roh ('24), we extend those authors' sharpening of the tensor-based proximity gap of Diamond and Posen (Commun. Cryptol. '24) up to the unique decoding radius, at least in the Reed–Solomon setting.
Last updated:  2025-01-13
Extreme Algebraic Attacks
Pierrick Méaux and Qingju Wang
When designing filter functions in Linear Feedback Shift Register (LFSR) based stream ciphers, algebraic criteria of Boolean functions such as the Algebraic Immunity (AI) become key characteristics because they guarantee the security of ciphers against powerful algebraic attacks. In this article, we investigate a generalization of the algebraic attacks proposed by Courtois and Meier on filtered LFSRs twenty years ago. We consider how the standard algebraic attack can be generalized beyond filtered LFSRs to stream ciphers applying a Boolean filter function to an updated state. Depending on the updating process, we can use different sets of annihilators than the ones used in the standard algebraic attack; this leads to a generalization of the concept of algebraic immunity and to more efficient attacks. To illustrate these strategies, we focus on one of these generalizations and introduce a new notion called Extreme Algebraic Immunity (EAI). We perform a theoretical study of the EAI criterion and explore its relation to other algebraic criteria. We prove an upper bound on the EAI of an $n$-variable Boolean function and further show that the EAI can be lower bounded by the AI restricted to a subset, as defined by Carlet, Méaux and Rotella at FSE 2017. We also exhibit functions with EAI guaranteed to be lower than the AI; in particular, we highlight a pathological case of functions with optimal algebraic immunity and an EAI of only $n/4$. As applications, we determine the EAI of filter functions of some existing stream ciphers and discuss how extreme algebraic attacks using EAI could apply to some ciphers. Our generalized algebraic attack does not give a better complexity than Courtois and Meier's result on existing stream ciphers. However, we see this work as a study to avoid weaknesses in the construction of future stream cipher designs.
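For contrast with the EAI, the classical algebraic immunity of a small Boolean function can be computed by brute force, since an annihilator $g$ of $f$ is exactly a function vanishing on the support of $f$ (the sketch computes the standard AI only, not the EAI, whose admissible annihilator sets differ):

```python
from itertools import combinations, product

# AI(f) is the least degree d such that f or f + 1 has a nonzero annihilator
# g with deg(g) <= d. We search for nonzero low-degree g in the kernel of a
# GF(2) linear system with one equation per point of the relevant support.

def annihilator_exists(points, n, d):
    """Is there a nonzero g with deg(g) <= d vanishing on all given points?"""
    monomials = [m for k in range(d + 1) for m in combinations(range(n), k)]
    pivots, rank = {}, 0
    for x in points:
        row = 0
        for j, m in enumerate(monomials):
            if all(x[i] for i in m):       # monomial evaluates to 1 at x
                row |= 1 << j
        while row:                         # GF(2) Gaussian elimination
            b = row.bit_length() - 1
            if b in pivots:
                row ^= pivots[b]
            else:
                pivots[b], rank = row, rank + 1
                break
    return rank < len(monomials)           # nontrivial kernel exists

def algebraic_immunity(f, n):
    ones = [x for x in product((0, 1), repeat=n) if f(x)]
    zeros = [x for x in product((0, 1), repeat=n) if not f(x)]
    for d in range(n + 1):
        if annihilator_exists(ones, n, d) or annihilator_exists(zeros, n, d):
            return d                       # annihilator of f or of f + 1

maj = lambda x: int(sum(x) >= 2)           # majority on 3 variables
print(algebraic_immunity(maj, 3))          # optimal AI: ceil(3/2) = 2
```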