Papers updated in last 365 days (3020 results)

Last updated:  2025-04-19
Keyed-Verification Anonymous Credentials with Highly Efficient Partial Disclosure
Omid Mirzamohammadi, Jan Bobolz, Mahdi Sedaghat, Emad Heydari Beni, Aysajan Abidin, Dave Singelee, and Bart Preneel
An anonymous credential (AC) system with partial disclosure allows users to prove possession of a credential issued by an issuer while selectively disclosing a subset of their attributes to a verifier in a privacy-preserving manner. In keyed-verification AC (KVAC) systems, the issuer and verifier share a secret key. Existing KVAC schemes rely on computationally expensive zero-knowledge proofs during credential presentation, with the presentation size growing linearly with the number of attributes. In this work, we propose two highly efficient KVAC constructions that eliminate the need for zero-knowledge proofs during the credential presentation and achieve constant-size presentations. Our first construction adapts the approach of Fuchsbauer et al. (JoC'19), which achieved constant-size credential presentation in a publicly verifiable setting using their proposed structure-preserving signatures on equivalence classes (SPS-EQ) and set commitment schemes, to the KVAC setting. We introduce structure-preserving message authentication codes on equivalence classes (SP-MAC-EQ) and designated-verifier set commitments (DVSC), resulting in a KVAC system with constant-size credentials (2 group elements) and presentations (5 group elements). To avoid the bilinear groups and pairing operations required by SP-MAC-EQ, our second construction uses a homomorphic MAC with a simplified DVSC. While this sacrifices constant-size credentials ($n+2$ group elements, where $n$ is the number of attributes), it retains constant-size presentations (2 group elements) in a pairingless setting. We formally prove the security of both constructions and provide open-source implementation results demonstrating their practicality. We extensively benchmarked our KVAC protocols and, additionally, benchmarked the efficiency of our SP-MAC-EQ scheme against the original SPS-EQ scheme, showcasing significant performance improvements.
Last updated:  2025-04-19
Publicly Verifiable Generalized Secret Sharing Schemes and Their Applications
Liang Zhang, Dongliang Cai, Tao Liu, Haibin Kan, and Jiheng Zhang
Generalized secret sharing (GSS) enables flexible access control in distributed systems by allowing secrets to be shared across arbitrary monotone access structures. However, its adoption in transparent and trustless environments is hindered by its reliance on trusted participants and secure communication channels. This reliance restricts GSS's ability to provide flexible control in the presence of adversaries. In this paper, we propose publicly verifiable generalized secret sharing (PVGSS), a scheme that allows a dealer to distribute a secret using generalized access structures while enabling non-interactive public verifiability of participant honesty. The PVGSS scheme offers significant potential to advance the fields of blockchain and applied cryptography, such as attribute-based cryptosystems, fine-grained MPC, and on-chain secret escrow. Intuitively, we first build two GSS schemes by leveraging recursive Shamir secret sharing and a linear secret sharing scheme. Then, we encrypt the GSS shares and generate the corresponding non-interactive zero-knowledge proofs. Further, we innovatively adopt the GSS reconstruction algorithm to prove that all the encrypted shares are bound to the dealer's secret with only $O(n)$ verification complexity. To exemplify the practical applicability, we implement a decentralized exchange (DEX) protocol, where fairness and accountable arbitration are considered. Our benchmarks on the BN128 curve demonstrate the computational efficiency of PVGSS schemes, while Ethereum gas cost analysis confirms the viability of the DEX implementation.
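For intuition, the recursive-Shamir building block mentioned in the abstract can be sketched in a few lines: an access structure is a tree of threshold gates, and the dealer shares the secret down the tree, one Shamir instance per gate. The sketch below is a minimal illustration under assumed conventions (a toy prime field, a ("thr", t, children) tree encoding, one leaf per party); it omits the encryption, NIZK proofs, and public verifiability that constitute the paper's actual contribution.

```python
import secrets

P = 2**127 - 1   # toy prime field modulus (illustrative choice)

def shamir_share(secret, t, m):
    """Classic t-out-of-m Shamir sharing over GF(P)."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, m + 1)]

def lagrange_zero(points):
    """Interpolate f(0) from points (x_i, y_i) over GF(P)."""
    acc = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        acc = (acc + yi * num * pow(den, -1, P)) % P
    return acc

def share_tree(secret, node, out):
    """Recursively share `secret` down an access-structure tree."""
    if isinstance(node, str):                     # leaf: a party identifier
        out[node] = secret
        return
    _, t, children = node                         # ("thr", t, [children])
    for (_, sub), child in zip(shamir_share(secret, t, len(children)), children):
        share_tree(sub, child, out)

def reconstruct(node, shares):
    """Return the node's secret if the given party shares qualify, else None."""
    if isinstance(node, str):
        return shares.get(node)
    _, t, children = node
    pts = [(i, v) for i, c in enumerate(children, 1)
           if (v := reconstruct(c, shares)) is not None][:t]
    return lagrange_zero(pts) if len(pts) == t else None

# 2-of-3 over {A, B, (1-of-2 over {C, D})}: {A, B} or {A, C} qualify, {C, D} does not.
policy = ("thr", 2, ["A", "B", ("thr", 1, ["C", "D"])])
shares = {}
share_tree(42, policy, shares)
assert reconstruct(policy, {k: shares[k] for k in ("A", "C")}) == 42
assert reconstruct(policy, {k: shares[k] for k in ("C", "D")}) is None
```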
Last updated:  2025-04-18
Bitcoin-Enhanced Proof-of-Stake Security: Possibilities and Impossibilities
Ertem Nusret Tas, David Tse, Fangyu Gai, Sreeram Kannan, Mohammad Ali Maddah-Ali, and Fisher Yu
Bitcoin is the most secure blockchain in the world, supported by the immense hash power of its Proof-of-Work miners. Proof-of-Stake chains are energy-efficient and have fast finality, but face several security issues: susceptibility to non-slashable long-range safety attacks, low liveness resilience, and difficulty bootstrapping from low token valuation. We show that these security issues are inherent in any PoS chain without an external trusted source, and propose a new protocol, Babylon, where an off-the-shelf PoS protocol checkpoints onto Bitcoin to resolve these issues. An impossibility result justifies the optimality of Babylon. A use case of Babylon is to reduce the stake withdrawal delay: our experimental results show that this delay can be reduced from weeks in existing PoS chains to less than 5 hours using Babylon, at a transaction cost of less than 10K USD per annum for posting the checkpoints onto Bitcoin.
Last updated:  2025-04-18
Combining Outputs of a Random Permutation: New Constructions and Tight Security Bounds by Fourier Analysis
Itai Dinur
We consider constructions that combine outputs of a single permutation $\pi:\{0,1\}^n \rightarrow \{0,1\}^n$ using a public function. These are popular constructions for achieving security beyond the birthday bound when implementing a pseudorandom function using a block cipher (i.e., a pseudorandom permutation). One of the best-known constructions (denoted SXoP$[2,n]$) XORs the outputs of 2 domain-separated calls to $\pi$. Modeling $\pi$ as a uniformly chosen permutation, several previous works proved a tight information-theoretic indistinguishability bound for SXoP$[2,n]$ of about $q/2^{n}$, where $q$ is the number of queries. However, tight bounds are unknown for the generalized variant (denoted SXoP$[r,n]$) which XORs the outputs of $r \geq 2$ domain-separated calls to a uniform permutation. In this paper, we obtain two results. Our first result improves the known bounds for SXoP$[r,n]$ for all (constant) $r \geq 3$ (assuming $q \leq O(2^n/r)$ is not too large) in both the single-user and multi-user settings. In particular, for $r=3$, our bound is about $\sqrt{u}q_{\max}/2^{2.5n}$ (where $u$ is the number of users and $q_{\max}$ is the maximal number of queries per user), improving the best-known previous result by a factor of at least $2^n$. For odd $r$, our bounds are tight for $q > 2^{n/2}$, as they match known attacks. For even $r$, we prove that our single-user bounds are tight by providing matching attacks. Our second and main result is divided into two parts. First, we devise a family of constructions that output $n$ bits by efficiently combining outputs of 2 calls to a permutation on $\{0,1\}^n$, and achieve multi-user security of about $\sqrt{u} q_{\max}/2^{1.5n}$. Then, inspired by the CENC construction of Iwata [FSE'06], we further extend this family to output $2n$ bits by efficiently combining outputs of 3 calls to a permutation on $\{0,1\}^n$. The extended construction has similar multi-user security of $\sqrt{u} q_{\max}/2^{1.5n}$. The new single-user ($u=1$) bounds of $q/2^{1.5n}$ for both families should be contrasted with the previously best-known bounds of $q/2^n$, obtained by the comparable constructions of SXoP$[2,n]$ and CENC. All of our bounds are proved by Fourier analysis, extending the provable security toolkit in this domain in multiple ways.
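As a concrete reference point, a toy model of the SXoP$[r,n]$ construction is sketched below, with $\pi$ modeled as a random permutation table. The domain-separation encoding (the index $i$ in the top bits of the input) is an illustrative choice, not necessarily the paper's exact convention.

```python
import secrets

N_BITS = 8                                    # toy block size n
_table = list(range(2 ** N_BITS))
secrets.SystemRandom().shuffle(_table)        # model pi as a uniform random permutation

def pi(x):
    return _table[x]

def sxop(x, r=2):
    """XOR the outputs of r domain-separated calls to pi."""
    t = (r - 1).bit_length()                  # bits needed for the domain index
    assert 0 <= x < 2 ** (N_BITS - t)
    out = 0
    for i in range(r):
        out ^= pi((i << (N_BITS - t)) | x)    # domain-separated call pi(i || x)
    return out

print([sxop(x, r=2) for x in range(4)])
print([sxop(x, r=3) for x in range(4)])
```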
Last updated:  2025-04-18
Myco: Unlocking Polylogarithmic Accesses in Metadata-Private Messaging
Darya Kaviani, Deevashwer Rathee, Bhargav Annem, and Raluca Ada Popa
As billions of people rely on end-to-end encrypted messaging, the exposure of metadata, such as communication timing and participant relationships, continues to deanonymize users. Asynchronous metadata-hiding solutions with strong cryptographic guarantees have historically been bottlenecked by quadratic $O(N^2)$ server computation in the number of users $N$ due to reliance on private information retrieval (PIR). We present Myco, a metadata-private messaging system that preserves strong cryptographic guarantees while achieving $O(N \log^2 N)$ efficiency. To achieve this, we depart from PIR and instead introduce an oblivious data structure through which senders and receivers privately communicate. To unlink reads and writes, we instantiate Myco in an asymmetric two-server distributed-trust model where clients write messages to one server tasked with obliviously transmitting these messages to another server, from which clients read. Myco achieves throughput improvements of up to 302x over multi-server and 2,219x over single-server state-of-the-art systems based on PIR.
Last updated:  2025-04-18
SNARKs for Stateful Computations on Authenticated Data
Johannes Reinhart, Erik-Oliver Blass, and Bjoern Annighoefer
We present a new generalization of (zk-)SNARKs combining two additional features at the same time. Besides the verification of correct computation, our new SNARKs also allow, first, the verification of input data authenticity. Specifically, a verifier can confirm that the input to the computation originated from a trusted source. Second, our SNARKs support verification of stateful computations across multiple rounds, ensuring that the output of the current round correctly depends on the internal state of the previous round. Our focus is on concrete practicality, so we abstain from arithmetizing hash functions or signatures in our SNARKs. Rather, we modify the internals of an existing SNARK to extend its functionality. With our construction, prover runtime improves over the baseline by a factor of 90. Verification time increases by 36%, but remains below that of comparable approaches that do not arithmetize hash functions or signatures. Our SNARKs are specifically suited to applications in cyber-physical control systems. These are often safety critical and need to be protected against malicious actors as well as non-malicious but unintended system failures due to design errors or random faults in components. To demonstrate relevance and practicality, we implement and benchmark our new SNARKs in a sample real-world scenario with an aircraft flight control system.
Last updated:  2025-04-18
Breaking ECDSA with Two Affinely Related Nonces
Jamie Gilchrist, William J Buchanan, and Keir Finlow-Bates
The security of the Elliptic Curve Digital Signature Algorithm (ECDSA) depends on the uniqueness and secrecy of the nonce used in each signature. While it is well understood that reuse of the nonce $k$ across two distinct messages can leak the private key, we show that the private key can also be recovered when two distinct nonces satisfy an affine relationship of the form $k_2 = a \cdot k_1 + b$. Our method requires only two signatures (even over the same message) and relies purely on algebra, with no need for lattice reduction or brute-force search (provided the affine relationship is known). To our knowledge, this is the first closed-form derivation of the ECDSA private key from only two signatures over the same message, under a known affine relationship between nonces.
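The abstract's claim can be made concrete: eliminating $k_1$ from the two signing equations $s_1 = k_1^{-1}(h_1 + r_1 d)$ and $s_2 = k_2^{-1}(h_2 + r_2 d)$ with $k_2 = a k_1 + b$ gives $d = (a s_2 h_1 + b s_1 s_2 - s_1 h_2)(s_1 r_2 - a s_2 r_1)^{-1} \bmod n$. The sketch below checks this algebra over the secp256k1 group order with synthetic values satisfying the signing equations; it is our reconstruction from the abstract, and the paper's exact formula may be stated differently.

```python
import secrets

# Order of the secp256k1 group.
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def recover_key(r1, s1, h1, r2, s2, h2, a, b):
    # d = (a*s2*h1 + b*s1*s2 - s1*h2) / (s1*r2 - a*s2*r1)  (mod n)
    num = (a * s2 * h1 + b * s1 * s2 - s1 * h2) % n
    den = (s1 * r2 - a * s2 * r1) % n
    return num * pow(den, -1, n) % n   # fails only if den == 0 (negligible)

# Synthetic self-test: fabricate signatures satisfying s_i*k_i = h_i + r_i*d.
rand = lambda: secrets.randbelow(n - 1) + 1
d, k1, a, b = rand(), rand(), 3, 12345     # d is the "unknown" private key
k2 = (a * k1 + b) % n                      # known affine nonce relation
h1, h2 = rand(), rand()                    # message hashes
r1, r2 = rand(), rand()                    # stand-ins for x-coords of k_i*G
s1 = (h1 + r1 * d) * pow(k1, -1, n) % n
s2 = (h2 + r2 * d) * pow(k2, -1, n) % n
assert recover_key(r1, s1, h1, r2, s2, h2, a, b) == d
```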
Last updated:  2025-04-18
Breaking verifiability and vote privacy in CHVote
Véronique Cortier, Alexandre Debant, and Pierrick Gaudry
CHVote is one of the two main electronic voting systems developed in the context of political elections in Switzerland, where the regulation requires a specific setting and specific trust assumptions. We show that actually, CHVote fails to achieve vote secrecy and individual verifiability (here, recorded-as-intended), as soon as one of the online components is dishonest, contradicting the security claims of CHVote. In total, we found 9 attacks or variants against CHVote, 2 of them being based on a bug in the reference implementation. We confirmed our findings through a proof-of-concept implementation of our attacks.
Last updated:  2025-04-18
AsyRand: asynchronous distributed randomness beacon with reconfiguration
Liang Zhang, Tao Liu, Haibin Kan, and Jiheng Zhang
Distributed randomness beacon protocols, which generate publicly verifiable randomness at regular intervals, are crucial for a wide range of applications. The publicly verifiable secret sharing (PVSS) scheme is a promising cryptographic primitive for implementing beacon protocols, such as Hydrand (S\&P'20), SPURT (S\&P'22), OptRand (NDSS'23) and GRandLine (CCS'24). However, two key challenges remain unresolved: asynchrony and reconfiguration. In this paper, we introduce the AsyRand beacon protocol to address these challenges. First, we incorporate a producer-consumer model to decouple the production and consumption of PVSS commitments, which are managed using queue data structures. Then, we leverage reliable broadcast (RBC) for message dissemination in the producer process and introduce a t-validated asynchronous Byzantine agreement (t-VABA) protocol to consume PVSS commitments. Consequently, the producer and consumer processes can operate simultaneously and asynchronously, without the need for a global clock. Moreover, the producer-consumer model enables each party to detect potential faults in other parties by monitoring the queue states. When needed, honest parties can initiate a removal process for faulty parties via the t-VABA protocol. Also, a new party can leverage the RBC protocol to request to join without a system restart. As an independent contribution, we propose a novel PVSS scheme based on the Sigma protocol and the Fiat-Shamir heuristic. Regarding complexity, AsyRand achieves state-of-the-art performance with $O(n^2)$ communication complexity, $O(n)$ computation complexity, and $O(n)$ verification complexity. Experimental results highlight the high performance of AsyRand compared to related works.
Last updated:  2025-04-18
MatriGear: Accelerating Authenticated Matrix Triple Generation with Scalable Prime Fields via Optimized HE Packing
Hyunho Cha, Intak Hwang, Seonhong Min, Jinyeong Seo, and Yongsoo Song
The SPDZ protocol family is a popular choice for secure multi-party computation (MPC) in a dishonest majority setting with active adversaries. Over the past decade, a series of studies have focused on improving its offline phase, where special additive shares called authenticated triples are generated. However, to accommodate recent demands for matrix operations in secure machine learning and big integer arithmetic in distributed RSA key generation, updates to the offline phase are required. In this work, we propose a new protocol for the SPDZ offline phase, MatriGear, which improves upon the previous state-of-the-art construction, TopGear (Baum et al., SAC ’19), and its variant for matrix triples (Chen et al., Asiacrypt ’20). Our protocol aims to achieve a speedup in matrix triple generation and support for larger prime fields up to 4096 bits in size. To achieve this, we devise a variant of the BFV scheme and a new homomorphic matrix multiplication algorithm optimized for our purpose. As a result, our protocol achieves about 3.6x speedup for generating scalar triples in a 1024-bit prime field and about 34x speedup for generating 128x128 matrix triples. In addition, we reduce the size of evaluation keys from 27.4 GB to 0.22 GB and the communication cost for MAC key generation from 816 MB to 16.6 MB.
Last updated:  2025-04-18
Grafting: Decoupled Scale Factors and Modulus in RNS-CKKS
Jung Hee Cheon, Hyeongmin Choe, Minsik Kang, Jaehyung Kim, Seonghak Kim, Johannes Mono, and Taeyeong Noh
The CKKS Fully Homomorphic Encryption (FHE) scheme enables approximate arithmetic on encrypted complex numbers for a desired precision. Most implementations use RNS with carefully chosen parameters to balance precision, efficiency, and security. However, a key limitation in RNS-CKKS is the rigid coupling between the scale factor, which determines numerical precision, and the modulus, which ensures security. Since these parameters serve distinct roles—one governing arithmetic correctness and the other defining cryptographic structure—this dependency imposes design constraints, such as a lack of suitable NTT primes and limited precision flexibility, ultimately leading to inefficiencies. We propose Grafting, a novel approach to decouple scale factors from the modulus by introducing (universal) sprouts, reusable modulus factors that optimize word-sized packing while allowing flexible rescaling. With the universal property, sprouts allow rescaling by arbitrary bit-lengths and key-switching at any modulus bit-length without requiring additional key-switching keys. Decoupling the scale factor from the modulus in Grafting yields significant efficiency gains: (1) Optimized RNS packing by decomposing the modulus into machine word-sized components, accelerating computations and reducing the ciphertext and encryption/evaluation key sizes; and (2) A freely adjustable scale factor independent of the modulus, unifying the ring structure across applications and reducing modulus consumption through adaptive scalings. Our experiments demonstrate that Grafting improves performance across standard SHE/FHE parameter sets for ring dimensions $2^{14}$-$2^{16}$ by up to $1.83\times$ and $2.01\times$ for key-switching and multiplication, respectively, and up to $1.92\times$ for bootstrapping. Grafting also reduces public key and ciphertext sizes by up to $62\%$ without compression, maintaining the same number of public keys as before. As an application, we showcase the CKKS gate bootstrapping for bits (Bae et al.; Eurocrypt'24), achieving $1.89\times$ speed-up due to the reduced number of RNS factors. Finally, we revisit the homomorphic comparison (Cheon et al.; Asiacrypt'20), evaluating it with carefully chosen scale factors for each iteration, saving up to $204$ bits of modulus consumption (a $27\%$ reduction) in the standard parameter set, without precision loss.
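For readers unfamiliar with RNS packing, the word-sized decomposition that Grafting optimizes boils down to the Chinese Remainder Theorem: arithmetic modulo a wide $Q = \prod_i q_i$ splits into independent word-sized channels. A minimal illustration with toy moduli (real RNS-CKKS uses NTT-friendly primes near the machine word size):

```python
from math import prod

qs = [65521, 65519, 65497]            # pairwise-coprime toy moduli
Q = prod(qs)

def to_rns(x):
    return [x % q for q in qs]

def mul_rns(a, b):
    # Multiplication happens independently in each word-sized slot.
    return [(x * y) % q for x, y, q in zip(a, b, qs)]

def from_rns(residues):
    # CRT reconstruction: x = sum_i r_i * (Q/q_i) * ((Q/q_i)^-1 mod q_i) mod Q
    x = 0
    for r, q in zip(residues, qs):
        m = Q // q
        x = (x + r * m * pow(m, -1, q)) % Q
    return x

a, b = 123456789, 987654321
assert from_rns(mul_rns(to_rns(a), to_rns(b))) == (a * b) % Q
```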
Last updated:  2025-04-18
Reducing Honest Re-Encryption Attack to Chosen Ciphertext Attack
Haotian Yin, Jie Zhang, Wanxin Li, Yuji Dong, Eng Gee Lim, and Dominik Wojtczak
Proxy re-encryption (PRE) schemes allow a delegator to designate a proxy to re-encrypt its ciphertext into a ciphertext that the delegatee can decrypt, while the proxy itself learns nothing useful from the transformation. This decryption-power transfer is useful in applications such as encrypted email forwarding, key escrow, and publish/subscribe systems. The security notions for PRE are inherited from standard public key encryption (PKE) schemes, i.e., indistinguishability under chosen-plaintext attacks (CPA) and security under chosen-ciphertext attacks (CCA). A recently popular notion, indistinguishability under honest re-encryption attacks (HRA), was proposed by Cohen in 2019, indicating that CPA security is insufficient for PRE because some CPA-secure PRE schemes leak the secret key of the delegator. Many post-quantum secure PRE schemes have recently been designed under the HRA security model. However, HRA security differs from traditional CCA security, and there is no known reduction between them; existing results suggest they are incompatible. This paper aims to bridge those two security notions via reductions. In addition, we found that many existing HRA-secure schemes are vulnerable to collusion. We provide a generic transformation from a CPA-secure PRE to a collusion-resistant and CPA-secure PRE. This transformation also applies to HRA-secure PREs.
Last updated:  2025-04-18
Priv-PFL: A Privacy-Preserving and Efficient Personalized Federated Learning Approach
Alireza Aghabagherloo, Roozbeh Sarenche, Maryam Zarezadeh, Bart Preneel, and Stefan Köpsell
Federated Learning (FL) allows clients to engage in learning without revealing their raw data. However, traditional FL focuses on developing a single global model for all clients, limiting their ability to have personalized models tailored to their specific needs. Personalized FL (PFL) enables clients to obtain their customized models, either with or without a central party. Current PFL research includes mechanisms to detect poisoning attacks, in which a couple of malicious nodes try to manipulate training convergence by submitting misleading data. However, these detection approaches often overlook privacy concerns, as they require clients to share their models with all other clients. This paper extends BALANCE, a personalized poisoning detection mechanism based on client models and their expectations. Our method enhances both security and privacy by ensuring clients are not required to share their model data with other clients. By leveraging server-assisted PFL and Fully Homomorphic Encryption (FHE), we enable a central party to identify unpoisoned clients from the perspective of individual clients and train personalized models securely. Additionally, we introduce an efficient personalized client selection algorithm that prevents redundant checks and ensures the inheritance of unpoisoned clients.
Last updated:  2025-04-18
Two Party Secret Shared Joins
Srinivasan Raghuraman, Peter Rindal, and Harshal Shah
We present concrete techniques for adapting the protocols of Mohassel et al. (CCS 2020) and Badrinarayanan et al. (CCS 2022) for computing SQL-like query operations on secret shared database tables to the two party setting. The aforementioned protocols are presented in a generic setting with access to certain idealized functionalities, e.g., secret shared permutations. However, they only instantiate their protocols in the honest majority three party setting due to other settings being considered too inefficient. We show that this is no longer the case. In particular, the recent work of Peceny et al. (eprint 2024) gives a concretely efficient two party permutation protocol. Additionally, we give a new and highly efficient protocol for evaluating the strong PRF recently proposed by Alamati et al. (Crypto 2024). Building on these advancements, along with a variety of protocol improvements and significant cryptographic engineering, our open source implementation demonstrates concretely efficient two party SQL-like querying functionality on secret shared data. We focus on the two party setting with secret shared input and output tables. The first protocol $\Pi_\textsf{Join-OO}$ is designed for the setting where the join keys are unique, similar to Private Set Intersection (PSI) except that the inputs and output are secret shared. This protocol is constant round and has $O(n)$ running time. The second protocol $\Pi_\textsf{Join-OM}$ allows one of the tables to contain repeating join keys. Our instantiation achieves $O(n\log n)$ running time and $O(\log n)$ rounds of interaction.
Last updated:  2025-04-17
Hermes: Efficient and Secure Multi-Writer Encrypted Database
Tung Le and Thang Hoang
Searchable encryption (SE) enables privacy-preserving keyword search on encrypted data. Public-key SE (PKSE) supports multi-user searches but suffers from high search latency due to expensive public-key operations. Symmetric SE (SSE) offers sublinear search but is mainly limited to single-user settings. Recently, hybrid SE (HSE) has combined SSE and PKSE to achieve the best of both worlds, including multi-writer encrypted search functionalities, forward privacy, and sublinear search with respect to database size. Despite its advantages, HSE inherits critical security limitations, such as susceptibility to dictionary attacks, and still incurs significant overhead for search access control verification, requiring costly public-key operation invocations (i.e., pairing) across all authorized keywords. Additionally, its search access control component must be rebuilt periodically for forward privacy, imposing substantial writer overhead. In this paper, we propose Hermes, a new HSE scheme that addresses the aforementioned security issues in prior HSE designs while maintaining minimal search complexity and user efficiency at the same time. Hermes enables multi-writer encrypted search functionalities and offers forward privacy along with resilience to dictionary attacks. To achieve this, we develop a new identity-based encryption scheme with hidden identity and key-aggregate properties, which could be of independent interest. We also design novel partitioning and epoch encoding techniques in Hermes to minimize search complexity and offer low user overhead in maintaining forward privacy. We conducted intensive experiments to assess and compare the performance of Hermes and its counterpart on commodity hardware. Experimental results showed that Hermes performs search one to two orders of magnitude faster than the state-of-the-art HSE while offering stronger security guarantees to prevent dictionary and injection attacks.
Last updated:  2025-04-17
Optimizing Final Exponentiation for Pairing-Friendly Elliptic Curves with Odd Embedding Degrees Divisible by 3
Loubna Ghammam, Nadia El Mrabet, Walid Haddaji, and Leila Ben Abdelghani
In pairing-based cryptography, the final exponentiation with a large fixed exponent is crucial for ensuring unique outputs in both Tate and optimal ate pairings. While significant strides have been made in optimizing elliptic curves with even embedding degrees, progress remains limited for curves with odd embedding degrees, especially those divisible by $3$. This paper introduces novel techniques to optimize the computation of the final exponentiation for the optimal ate pairing on such curves. The first technique leverages the structure of certain existing seeds to enable the use of cyclotomic cubing and extends this concept to generate new seeds with similar characteristics. The second technique focuses on producing new sparse ternary representation seeds to utilize cyclotomic cubing as a replacement for squaring. These approaches result in performance improvements of up to $19.3\%$ in the computation of the final exponentiation for the optimal ate pairing on $BLS15$ and $BLS27$ curves.
Last updated:  2025-04-17
Fherret: Proof of FHE Correct-and-Honest Evaluation with Circuit Privacy from MPCitH
Janik Huth, Antoine Joux, and Giacomo Santato
The major Fully Homomorphic Encryption (FHE) schemes guarantee the privacy of the encrypted message only in the honest-but-curious setting, when the server follows the protocol without deviating. However, various attacks in the literature show that an actively malicious server can recover sensitive information by executing incorrect functions, tampering with ciphertexts, or observing the client’s reaction during decryption. Existing integrity solutions for FHE schemes either fail to guarantee circuit privacy, exposing the server's computations to the client, or introduce significant computational overhead on the prover by requiring proofs of FHE operations on ciphertexts. In this work, we present Fherret, a novel scheme leveraging the MPC-in-the-Head (MPCitH) paradigm to provide a proof of correct-and-honest homomorphic evaluation while preserving circuit privacy. This proof guarantees that the client can safely decrypt the ciphertext obtained from the server without being susceptible to reaction-based attacks, such as verification and decryption oracle attacks. Additionally, this proof guarantees that the server’s evaluation maintains correctness, thereby protecting the client from $\mathsf{IND}\text{-}\mathsf{CPA}^{\mathsf{D}}$-style attacks. Our solution achieves a prover overhead of $4\lambda$ homomorphic evaluations of random functions from the function space $\mathcal{F}$, while retaining a competitive verifier overhead of $2 \lambda$ homomorphic evaluations and a communication size proportional to $\sqrt{2\lambda}$ times the size of a function from $\mathcal{F}$. Furthermore, Fherret is inherently parallelizable, achieving a parallel computation overhead similar to a homomorphic evaluation of a random function from $\mathcal{F}$ for both the prover and the verifier.
Last updated:  2025-04-17
Projective Space Stern Decoding and Application to SDitH
Kevin Carrier, Valérian Hatey, and Jean-Pierre Tillich
We show that standard decoding algorithms for generic linear codes over a finite field can be sped up by a factor which is essentially the size of the finite field, by reducing the problem to a low-weight codeword problem and working in the relevant projective space. We apply this technique to SDitH and demonstrate that the parameters of the original submission fail to meet the security requirements set by NIST. However, the updated version, released after the discovery of this attack, is no longer challenged by our attack.
Last updated:  2025-04-17
Threshold (Fully) Homomorphic Encryption
Carl Bootland, Kelong Cong, Daniel Demmler, Tore Kasper Frederiksen, Benoit Libert, Jean-Baptiste Orfila, Dragos Rotaru, Nigel P. Smart, Titouan Tanguy, Samuel Tap, and Michael Walter
This document is a preliminary version of what is intended to be submitted to NIST by Zama as part of their threshold call. The document also serves as partial documentation of the protocols used in the Zama MPC system for threshold TFHE. However, note that the Zama software includes many optimizations built on top of the simple specifications given here. In particular the TFHE parameters given here are larger than those used by the Zama software. This is because the Zama TFHE library contains optimizations which are beyond the scope of this document. Thus the parameters given in this document are compatible with the description of TFHE given here, and take no account of the extra optimizations in the Zama software. Also note that we describe more protocols than those provided in the Zama software. In particular, this document describes BGV and BFV threshold implementations, as well as MPC-in-the-Head based proofs of correct encryption. We present mechanisms to perform robust threshold key generation and decryption for Fully Homomorphic Encryption schemes such as BGV, BFV and TFHE, in the case of a super honest majority, $t < n/3$ or $t < n/4$, in the presence of malicious adversaries. The main mechanism for threshold decryption follows the noise flooding principle, which we argue is sufficient for BGV and BFV. For TFHE a more subtle technique is needed to apply noise flooding, since TFHE parameters are small. To deal with all three FHE schemes, and obtain a unified framework for all such schemes, we are led to consider secret sharing over Galois Rings and not just finite fields. We consider two sets of threshold profiles, depending on whether $\binom{n}{t}$ is big or small. In the small case we obtain for all schemes an asynchronous protocol for robust threshold decryption, and we obtain a robust synchronous protocol for threshold key generation; both with $t < n/3$. For the large case we only support TFHE, and our protocols require an “offline phase” which requires synchronous networks and can “only” tolerate $t < n/4$. The threshold key generation operation, and the above mentioned offline phase, require access to a generic offline MPC functionality over arbitrary Galois Rings. This functionality is fully specified here. Finally, we present Zero-Knowledge proof techniques for proving the valid encryption of an FHE ciphertext. These proofs are important in a number of application contexts.
Last updated:  2025-04-17
Mind the Grammar: Side-Channel Analysis driven by Grammatical Evolution
Mattia Napoli, Alberto Leporati, Stjepan Picek, and Luca Mariot
Deep learning-based side-channel analysis is an extremely powerful option for profiling side-channel attacks. However, to perform well, one needs to select the neural network model and training time hyperparameters carefully. While many works investigated these aspects, random search could still be considered the current state-of-the-art. Unfortunately, random search has drawbacks, since the chances of finding a good architecture significantly drop when considering more complex targets. In this paper, we propose a novel neural architecture search approach for SCA based on grammatical evolution - SCAGE. We define a custom SCA grammar that allows us to find well-performing and potentially unconventional architectures. We conduct experiments on four datasets, considering both synchronized and desynchronized versions, as well as using feature intervals or raw traces. Our results show SCAGE to perform extremely well in all settings, outperforming random search and related works in most of the considered scenarios.
Last updated:  2025-04-17
SoK: PQC PAKEs - Cryptographic Primitives, Design and Security
Nouri Alnahawi, David Haas, Erik Mauß, and Alexander Wiesmaier
Password Authenticated Key Exchange (PAKE) establishes secure communication channels using relatively short, often human memorable, passwords for authentication. The currently standardized PAKEs, however, rely on classical asymmetric (public key) cryptography. Thus, these classical PAKEs may become insecure, should the expected quantum threat become a reality. Despite the growing interest in realizing quantum-safe PAKEs, they did not receive much attention from the ongoing Post-Quantum Cryptography (PQC) integration efforts. Thus, there is a significant gap in awareness compared to PQC primitives subject to the official governmental and institutional standardization processes. In this work, we provide a comprehensive overview of the existing PQC PAKEs focusing on their design rationales, authentication methods and asymmetric key agreement primitives. Further, we classify PQC PAKEs w.r.t. their properties and security assurances. Finally, we address PAKE designs that are still unexplored in the PQC realm and discuss the possibility of their adaptation. Thus, we offer a detailed reference for future work on PQC PAKEs.
Last updated:  2025-04-17
Accelerating Hash-Based Polynomial Commitment Schemes with Linear Prover Time
Florian Hirner, Florian Krieger, Constantin Piber, and Sujoy Sinha Roy
Zero-knowledge proofs (ZKPs) are cryptographic protocols that enable one party to prove the validity of a statement without revealing any information beyond its truth. A central building block in many ZKPs are polynomial commitment schemes (PCS) where constructions with \textit{linear-time provers} are especially attractive. Two such examples are Brakedown and its extension Orion which enable linear-time and quantum-resistant proving by leveraging linear-time encodable Spielman codes. However, these PCS operate over large datasets, creating significant computational bottlenecks. For example, committing to and proving a degree $2^{28}$ polynomial requires around 1.1 GB of data while taking 463 seconds on a high-end server CPU. This work addresses the performance bottleneck in Orion-like PCS by optimizing their most critical operations: Spielman encoding and Merkle commitments. These operations involve Gigabytes of data and suffer from random off-chip memory access patterns that drastically reduce off-chip bandwidth. We resolve this issue and introduce \textit{inverted expander graphs} to eliminate random writes and reduce off-chip memory accesses by over 50\%. Additionally, we propose an on-the-fly graph sampling method that avoids streaming large auxiliary data by generating expander graphs dynamically on-chip. We also provide a formal security proof for our proposed graph transformation. Beyond encoding, we accelerate Merkle Tree construction over large data sets through a scalable multi-pass SHA3 pipeline. Finally, we reutilize existing hardware components used in commitment to accelerate the so-called proximity and consistency checks during proof generation. Building upon these concepts, we present the first hardware architecture for PCS -- with linear prover time -- on a Xilinx Alveo U280 FPGA. In addition, we discuss the practical challenges of manually partitioning, placing, and routing our large-scale architecture to efficiently map it to the multi-SLR and HBM-equipped FPGA. The final implementation achieves a speedup of two orders of magnitude for full proof generation, covering commitment and proving steps. When combined with Virgo as an outer CP-SNARK protocol, our accelerator reduces end-to-end latency by up to 3.85$\times$ --- close to the theoretical maximum of 3.9$\times$.
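As a software reference for one of the accelerated primitives, a plain SHA3-based Merkle commitment over a data vector can be written as below; this is a generic binary-tree sketch, not the paper's hardware layout or padding convention.

```python
import hashlib

def sha3(data):
    return hashlib.sha3_256(data).digest()

def merkle_root(leaves):
    """Commit to a list of byte-string leaves via a binary SHA3 Merkle tree."""
    level = [sha3(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha3(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([i.to_bytes(4, "little") for i in range(1024)])
print(root.hex())
```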
Last updated:  2025-04-17
A Multi-Differential Approach to Enhance Related-Key Neural Distinguishers
Xue Yuan and Qichun Wang
At CRYPTO 2019, Gohr pioneered the integration of differential cryptanalysis with neural networks, demonstrating significant advantages over traditional distinguishers. Subsequently, at Inscrypt 2020, Su et al. proposed the concept of constructing polyhedral differential neural distinguishers by leveraging multiple effective input differences. More recently, at FSE 2024, Bellini et al. introduced a general-purpose tool for automating the training of single-key differential neural distinguishers for various block ciphers. Inspired by this body of work, we aim to extend automated search techniques to related-key differential neural distinguishers, enabling the discovery of effective input differences and key differences for such distinguishers. To achieve this, we employ a genetic optimization algorithm to identify effective differential combinations. To validate the efficacy of our method, we apply it to the Simeck and Simon cipher families, successfully identifying effective differential combinations for the three variants of Simeck and ten variants of Simon. Furthermore, inspired by the concept of polyhedral neural distinguishers, we adopt a novel data format that leverages multiple distinct input differences and key differences to construct positive and negative samples, providing the neural network with a richer set of features. Our approach not only identifies high-quality distinguishers for previously unexplored cipher variants but also achieves higher accuracy for related-key differential neural distinguishers compared to the state-of-the-art.
Last updated:  2025-04-17
Faster amortized bootstrapping using the incomplete NTT for free
Thales B. Paiva, Gabrielle De Micheli, Syed Mahbub Hafiz, Marcos A. Simplicio Jr., and Bahattin Yildiz
Amortized bootstrapping techniques have been proposed for FHEW/TFHE to efficiently refresh multiple ciphertexts simultaneously within a polynomial modulus. Although recent proposals have very efficient asymptotic complexity, reducing the amortized cost essentially to $\tilde{O}(1)$ FHE multiplications, the practicality of such algorithms still suffers from substantial overhead and high decryption failure rates (DFR). In this study, we improve upon one of the state-of-the-art amortized bootstrapping algorithms (Guimarães et al., ASIACRYPT 2023) for FHEW/TFHE-like schemes by introducing an alternative algorithmic strategy. Specifically, we combine Guimarães et al.'s strategy based on a two-part NTT with an incomplete Number Theoretic Transform (NTT) algorithm. As a result, we demonstrate a 2.12$\times$ speedup compared to the algorithm of Guimarães et al. and a $1.12\times$ improvement over the state-of-the-art (sequential) TFHE-rs while achieving a DFR close to $2^{-32}$ for 7-bit messages, although the DFR is higher for 8-bit messages. We also explore trade-offs between execution time and DFR, identifying parameter sets that improve execution time of Guimarães et al. by $1.41\times$, while simultaneously reducing the DFR by a factor of $2^{-22}$ for 8-bit messages.
Last updated:  2025-04-16
PAKE Combiners and Efficient Post-Quantum Instantiations
Julia Hesse and Michael Rosenberg
Much work has been done recently on developing password-authenticated key exchange (PAKE) mechanisms with post-quantum security. However, modern guidance recommends the use of hybrid schemes—schemes which rely on the combined hardness of a post-quantum assumption, e.g., Learning with Errors (LWE), and a more traditional assumption, e.g., decisional Diffie-Hellman. To date, there is no known hybrid PAKE construction, let alone a general method for achieving such. In this paper, we present two efficient PAKE combiners—algorithms that take two PAKEs satisfying mild assumptions, and output a third PAKE with combined security properties—and prove these combiners secure in the Universal Composability (UC) model. Our sequential combiner, instantiated with efficient existing PAKEs such as CPace (built on Diffie-Hellman-type assumptions) and CAKE (built on lattice assumptions), yields the first known hybrid PAKE.
Last updated:  2025-04-16
Mario: Multi-round Multiple-Aggregator Secure Aggregation with Robustness against Malicious Actors
Truong Son Nguyen, Tancrède Lepoint, and Ni Trieu
Federated Learning (FL) enables multiple clients to collaboratively train a machine learning model while keeping their data private, eliminating the need for data sharing. Two common approaches to secure aggregation (SA) in FL are the single-aggregator and multiple-aggregator models. This work focuses on improving the multiple-aggregator model. Existing multiple-aggregator protocols such as Prio (NSDI 2017), Prio+ (SCN 2022), and Elsa (S&P 2023) either offer robustness only in the presence of semi-honest servers or provide security without robustness and are limited to two aggregators. We introduce Mario, the first multiple-aggregator Secure Aggregation protocol that is both secure and robust in a malicious setting. Similar to prior work of Prio and Prio+, Mario provides secure aggregation in a setup of $n$ servers and $m$ clients. Unlike previous work, Mario removes the assumption of semi-honest servers, and provides a complete protocol with robustness under malicious clients and malicious servers. Our implementation shows that Mario is $3.40\times$ and $283.4\times$ faster than Elsa and Prio+, respectively.
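The general multiple-aggregator idea underlying protocols in this family can be illustrated with plain additive secret sharing: each client splits its input into one share per server, and the servers' local sums add up to the global sum. The sketch below shows only this semi-honest core, none of Mario's robustness or malicious-security machinery; the modulus and counts are illustrative.

```python
import secrets

M = 2**32                        # arithmetic modulus (illustrative)
N_SERVERS = 3
client_inputs = [17, 25, 8]      # e.g. quantized model updates

def share(x):
    """Split x into N_SERVERS additive shares modulo M."""
    parts = [secrets.randbelow(M) for _ in range(N_SERVERS - 1)]
    return parts + [(x - sum(parts)) % M]

# Each server accumulates only the shares it receives.
per_server = [0] * N_SERVERS
for x in client_inputs:
    for i, s in enumerate(share(x)):
        per_server[i] = (per_server[i] + s) % M

# The servers' totals reveal the aggregate, not individual inputs.
assert sum(per_server) % M == sum(client_inputs) % M
```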
Last updated:  2025-04-16
VRaaS: Verifiable Randomness as a Service on Blockchains
Jacob Gorman, Lucjan Hanzlik, Aniket Kate, Easwar Vivek Mangipudi, Pratyay Mukherjee, Pratik Sarkar, and Sri AravindaKrishnan Thyagarajan
Web3 applications, such as on-chain games, NFT minting, and leader elections, necessitate access to unbiased, unpredictable, and publicly verifiable randomness. Despite its broad use cases and huge demand, there is a notable absence of comprehensive treatments of on-chain verifiable randomness services. To bridge this, we offer an extensive formal analysis of on-chain verifiable randomness services. We present the first formalization of on-chain verifiable randomness in the blockchain setting by introducing the notion of Verifiable Randomness as a Service (VRaaS). We formally define VRaaS using an ideal functionality $\mathcal{F}_{\text{VRaaS}}$ in the Universal Composability model. Our definition not only captures the core features of randomness services, such as unbiasability, unpredictability, and public verifiability, but also accounts for many other crucial nuances pertaining to different entities involved, such as smart contracts. Within our framework we study a generic design of a Verifiable Random Function (VRF)-based randomness service - where the randomness requester provides an input on which the randomness is evaluated as VRF output. We show that it does satisfy our formal VRaaS definition. Furthermore, we show that the generic protocol captures many real-world randomness services like Chainlink VRF and Supra dVRF. Moreover, we investigate the minimalism of the framework. Towards that, first we show that the two transactions built into our framework are actually necessary for any randomness service to support the essential qualities. We also discover practical vulnerabilities in other designs such as the Algorand beacon, Pyth VRF, and Band VRF, captured within our framework.
Last updated:  2025-04-16
Efficient Foreign-Field Arithmetic in PLONK
Miguel Ambrona, Denis Firsov, and Inigo Querejeta-Azurmendi
PLONK is a prominent universal and updatable zk-SNARK for general circuit satisfiability, which allows a prover to produce a short certificate of the validity of a certain statement/computation. Its expressive model of computation and its highly efficient verifier complexity make PLONK a powerful tool for a wide range of blockchain applications. Supporting standard cryptographic primitives (such as ECDSA over SECP256k1) or advanced recursive predicates (e.g. incrementally verifiable computation) on a SNARK presents a significant challenge. It requires so-called foreign-field arithmetic (enforcing constraints over algebraic fields that differ from the SNARK native field) which was previously believed to incur an overhead of two or three orders of magnitude. We build on the techniques by Lubarov and Baylina and observe that, by considering tight bounds on their encoding of foreign-field multiplication, the number of PLONK constraints can be significantly reduced. We show that these techniques also extend to elliptic curve emulation, with an overhead of just one order of magnitude (with respect to its native counterpart). We validate soundness and completeness of our main results in EasyCrypt. Finally, we implement an open-source library with support for foreign-field arithmetic. Our experimental results showcase the generality of our techniques and confirm their suitability for real-world applications.
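The foreign-field multiplication encoding referred to above can be summarized as follows: rather than computing $a \cdot b \bmod p_f$ natively, the prover supplies witnesses $q, r$ with $a \cdot b = q \cdot p_f + r$ over the integers, and the circuit checks this identity modulo the native field and modulo a power of two, with range checks on the limbs. A numerical sketch of the soundness argument, with illustrative moduli and bounds:

```python
import secrets

p_f = 2**255 - 19            # emulated ("foreign") field modulus
p_n = 2**64 - 2**32 + 1      # native SNARK field (Goldilocks-sized, illustrative)
T = 2**512                   # chosen so p_n * T exceeds the range of the expression

a, b = secrets.randbelow(p_f), secrets.randbelow(p_f)
q, r = divmod(a * b, p_f)    # witness values supplied by the prover

expr = a * b - q * p_f - r
assert expr % p_n == 0       # enforced as a native-field constraint
assert expr % T == 0         # enforced limb-wise alongside range checks
# Given |expr| < p_n * T (guaranteed by the range checks on a, b, q, r),
# the two congruences force expr == 0, i.e. r = a*b mod p_f.
```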
Last updated:  2025-04-16
A Formal Security Analysis of Hyperledger AnonCreds
Ashley Fraser and Steve Schneider
In an anonymous credential system, users collect credentials from issuers, and can use their credentials to generate privacy-preserving identity proofs that can be shown to third-party verifiers. Since the introduction of anonymous credentials by Chaum in 1985, there have been promising advances with respect to system design, security analysis and real-world implementations of anonymous credential systems. In this paper, we examine Hyperledger AnonCreds, an anonymous credential system that was introduced in 2017 and is currently undergoing specification. Despite being implemented in deployment-ready identity system platforms, there is no formal security analysis of the Hyperledger AnonCreds protocol. We rectify this, presenting syntax and a security model for, and a first security analysis of, the Hyperledger AnonCreds protocol. In particular, we demonstrate that Hyperledger AnonCreds is correct, and satisfies notions of unforgeability and anonymity. We conclude with a discussion on the implications of our findings, highlighting the importance of rigorous specification efforts to support security evaluation of real-world cryptographic protocols.
Last updated:  2025-04-16
BitVM: Quasi-Turing Complete Computation on Bitcoin
Lukas Aumayr, Zeta Avarikioti, Robin Linus, Matteo Maffei, Andrea Pelosi, Christos Stefo, and Alexei Zamyatin
A long-standing question in the blockchain community is which class of computations are efficiently expressible in cryptocurrencies with limited scripting languages, such as Bitcoin Script. Such languages expose a reduced trusted computing base, thereby being less prone to hacks and vulnerabilities, but have long been believed to support only limited classes of payments. In this work, we confute this long-standing belief by showing for the first time that arbitrary computations can be encoded in today's Bitcoin Script without introducing any language modification or additional security assumptions, such as trusted hardware, trusted parties, or committees with an honest majority. We present BitVM, a two-party protocol that realizes a generic virtual machine by combining cryptographic primitives and economic incentives. We conduct a formal analysis of BitVM, characterizing its functionality, system assumptions, and security properties. We further demonstrate the practicality of our approach by implementing a prototype and performing an experimental evaluation: in the optimistic case (i.e., when parties agree), our protocol requires just three on-chain transactions, whereas in the pessimistic case, the number of transactions grows logarithmically with the size of the virtual machine. We exemplify the deployment potential of BitVM by building a Bitcoin-sidechain bridge application. This work not only solves a long-standing theoretical problem, but it also promises a strong practical impact, enabling the development of complex applications in Bitcoin.
Last updated:  2025-04-16
Accountable Liveness
Andrew Lewis-Pye, Joachim Neu, Tim Roughgarden, and Luca Zanolini
Safety and liveness are the two classical security properties of consensus protocols. Recent works have strengthened safety with accountability: should any safety violation occur, a sizable fraction of adversary nodes can be proven to be protocol violators. This paper studies to what extent analogous accountability guarantees are achievable for liveness. To reveal the full complexity of this question, we introduce an interpolation between the classical synchronous and partially-synchronous models that we call the $x$-partially-synchronous network model in which, intuitively, at most an $x$ fraction of the time steps in any sufficiently long interval are asynchronous (and, as with a partially-synchronous network, all time steps are synchronous following the passage of an unknown "global stabilization time"). We prove a precise characterization of the parameter regime in which accountable liveness is achievable: if and only if $x < 1/2$ and $f < n/2$, where $n$ denotes the number of nodes and $f$ the number of nodes controlled by an adversary. We further refine the problem statement and our analysis by parameterizing by the number of violating nodes identified following a liveness violation, and provide evidence that the guarantees achieved by our protocol are near-optimal (as a function of $x$ and $f$). Our results provide rigorous foundations for liveness-accountability heuristics such as the "inactivity leaks" employed in Ethereum.
Last updated:  2025-04-16
Fast amortized bootstrapping with small keys and polynomial noise overhead
Antonio Guimarães and Hilder V. L. Pereira
Most fully homomorphic encryption (FHE) schemes exploit a technique called single-instruction multiple-data (SIMD) to process several messages in parallel. However, they base their security on somewhat strong assumptions, such as the hardness of approximate lattice problems with superpolynomial approximation factor. On the other extreme of the spectrum, there are lightweight FHE schemes that have much faster bootstrapping but no SIMD capabilities. On the positive side, the security of these schemes is based on lattice problems with (low-degree) polynomial approximation factor only, which is a much weaker security assumption. Aiming at the best of both options, Micciancio and Sorrell (ICALP'18) proposed a new amortized bootstrapping that can process many messages at once, yielding sublinear time complexity per message, and allowing one to construct FHE based on lattice problems with polynomial approximation factor. Some subsequent works along this line achieve near-optimal asymptotic performance; nevertheless, concrete efficiency remains mostly an open problem. The only existing implementation to date (GPV23, Asiacrypt 2023) requires keys of up to a hundred gigabytes while only providing gains for relatively large messages. In this paper, we introduce a new method for amortized bootstrapping where the number of homomorphic operations required per message is $O(h)$ and the noise overhead is $O(\sqrt{h \lambda} \log \lambda)$, where $h$ is the Hamming weight of the LWE secret key and $\lambda$ is the security parameter. This allows us to use much smaller parameters and to obtain faster running time. Our method is based on a new efficient homomorphic evaluation of sparse polynomial multiplication. We bootstrap 2 to 8-bit messages in 1.1 ms to 26.5 ms, respectively. Compared to TFHE-rs, this represents a performance improvement of 3.9 to 41.5 times while requiring bootstrapping keys up to 50.4 times smaller.
Last updated:  2025-04-16
Onion Encryption Revisited: Relations Among Security Notions
Daichong Chao, Liehuang Zhu, Dawei Xu, Tong Wu, Chuan Zhang, and Fuchun Guo
This paper compares the relative strengths of prominent security notions for onion encryption within the Tor setting, specifically focusing on CircuitHiding (EUROCRYPT 2018, an anonymity flavor notion) and OnionAE (PETS 2018, a stateful authenticated encryption flavor notion). Although both are state-of-the-art, Tor-specific notions, they exhibit different definitional choices, along with variations in complexity and usability. Employing an indirect approach, we compare them against a set of onion layer-centric notions: IND\$-CPA, IPR/IPR$^+$, and INT-sfCTXT. Since the notion set that implies OnionAE does not imply CircuitHiding, and vice versa, we conclude that OnionAE and CircuitHiding are mutually separable. Therefore, neither notion fully expresses satisfactory security on its own. Importantly, IND\$-CPA, IPR$^+$ (a stronger variant of IPR), and INT-sfCTXT collectively and strictly imply OnionAE and CircuitHiding. Given their onion layer-centric and thus simpler nature, this provides a practical approach to simultaneously satisfying CircuitHiding and OnionAE. While the formal treatment of (general) public-key onion routing has been relatively well studied, formal treatment tailored to Tor remains insufficient; our work narrows this gap.
Last updated:  2025-04-16
DahLIAS: Discrete Logarithm-Based Interactive Aggregate Signatures
Jonas Nick, Tim Ruffing, and Yannick Seurin
An interactive aggregate signature scheme allows $n$ signers, each with their own secret/public key pair $(sk_i, pk_i)$ and message $m_i$, to jointly produce a short signature that simultaneously witnesses that $m_i$ has been signed under $pk_i$ for every $i \in \{1, \dots, n\}$. Despite the large potential for savings in terms of space and verification time, which constitute the two main bottlenecks for large blockchain systems such as Bitcoin, aggregate signatures have received much less attention than the other members of the multi-party signature family, namely multi-signatures such as $\mathsf{MuSig2}$ and threshold signatures such as $\mathsf{FROST}$. In this paper, we propose $\mathsf{DahLIAS}$, the first aggregate signature scheme with constant-size signatures—a signature has the same shape as a standard Schnorr signature—directly based on discrete logarithms in pairing-free groups. The signing protocol of $\mathsf{DahLIAS}$ consists of two rounds, the first of which can be preprocessed without the message, and verification (for a signature created by $n$ signers) is dominated by one multi-exponentiation of size $n+1$, which is asymptotically twice as fast as batch verification of $n$ individual Schnorr signatures. $\mathsf{DahLIAS}$ is designed with real-world applications in mind. Besides the aforementioned benefits of space savings and verification speedups, $\mathsf{DahLIAS}$ offers key tweaking, a technique commonly used in Bitcoin to derive keys in hierarchical deterministic wallets and to save space as well as enhance privacy on the blockchain. We prove $\mathsf{DahLIAS}$ secure in the concurrent setting with key tweaking under the (algebraic) one-more discrete logarithm assumption in the random oracle model.
Last updated:  2025-04-16
Let us walk on the 3-isogeny graph: efficient, fast, and simple
Jesús-Javier Chi-Domínguez, Eduardo Ochoa-Jimenez, and Ricardo-Neftalí Pontaza-Rodas
Constructing and implementing isogeny-based cryptographic primitives is an active research area. In particular, performing length-$n$ isogeny walks over quadratic field extensions of $\mathbb{F}_p$ plays an exciting role in some constructions, including hash functions, Verifiable Delay Functions, Key-Encapsulation Mechanisms, and generic proof systems for isogeny knowledge. Remarkably, many isogeny-based constructions, for efficiency, perform $2$-isogenies through square root calculations. This work analyzes the idea of using $3$-isogenies instead of $2$-isogenies, which replaces the requirement of calculating square roots with cube roots. Performing length-$m$ $3$-isogenies allows shorter isogeny walks than when employing length-$n$ $2$-isogenies, since a cube root calculation costs essentially the same as computing a square root, and we require $3^m \approx 2^n$ to provide the same security level. We propose an efficient mapping from arbitrary supersingular Montgomery curves defined over $\mathbb{F}_{p^2}$ to the $3$-isogeny curve model from Castryck, Decru, and Vercauteren (Asiacrypt 2020); a deterministic algorithm to compute all order-$3$ points on arbitrary supersingular Montgomery curves; and an efficient algorithm to compute length-$m$ $3$-isogeny chains. We improve the length-$m$ $3$-isogeny walks required by the KEM from Nakagawa and Onuki (CRYPTO 2024) by using our results and introducing more suitable parameter sets that are friendly to C-code implementations. In particular, our experiments illustrate savings of 26.41\%--35.60\% when calculating length-$m$ $3$-isogeny chains using our proposed parameters instead of those proposed by Nakagawa and Onuki (CRYPTO 2024). Finally, we enhance the key generation of $\mathsf{CTIDH}$-2048 by including radical $3$-isogeny chains over the base field $\mathbb{F}_p$, reducing the overhead of finding a $3$-torsion basis as required in some instantiations of the $\mathsf{CSIDH}$ protocol. Our experiments illustrate the advantage of radical $3$-isogenies in the key generation of $\mathsf{CTIDH}$-2048, with an improvement close to 4x faster than the original $\mathsf{dCTIDH}$.
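The walk-length trade-off stated above ($3^m \approx 2^n$) is easy to quantify: $m = \lceil n \log 2 / \log 3 \rceil \approx 0.631\,n$, so a 3-isogeny walk needs roughly 37% fewer steps at equal per-step cost. A quick check:

```python
import math

# A length-m 3-isogeny walk matches a length-n 2-isogeny walk when 3^m ~ 2^n.
for n in (128, 216, 256):                   # example 2-isogeny walk lengths
    m = math.ceil(n * math.log(2) / math.log(3))
    print(f"length-{n} 2-walk  ~  length-{m} 3-walk ({m/n:.2%} of the steps)")
```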
Last updated:  2025-04-16
Eva: Efficient Privacy-Preserving Proof of Authenticity for Lossily Encoded Videos
Chengru Zhang, Xiao Yang, David Oswald, Mark Ryan, and Philipp Jovanovic
With the increasing usage of fake videos in misinformation campaigns, proving the provenance of an edited video becomes critical, in particular, without revealing the original footage. We formalize the notion and security model of proofs of video authenticity and give the first cryptographic video authentication protocol Eva, which supports lossy codecs and arbitrary edits and is proven secure under well-established cryptographic assumptions. Compared to previous cryptographic methods for image authentication, Eva is not only capable of handling significantly larger amounts of data originating from the complex lossy video encoding but also achieves linear prover time, constant RAM usage, and constant proof size with respect to video size. These improvements have optimal theoretical complexity and are enabled by two new advancements: integrating lookup arguments with folding-based incrementally verifiable computation (IVC), and compressing IVC proofs efficiently; both may be of independent interest. For our implementation of Eva, we integrate these techniques with the Nova folding scheme, yielding a scheme we call Lova. As for concrete performance, we additionally utilize various optimizations such as tailored circuit design and GPU acceleration to make Eva highly practical: for a 2-minute HD (1280×720) video encoded in H.264 at 30 frames per second, Eva generates a 448 B proof in about 2.4 hours on consumer-grade hardware at 2.6 μs per pixel, surpassing state-of-the-art cryptographic image authentication schemes by more than an order of magnitude in terms of prover time and proof size.
Last updated:  2025-04-16
SASTA: Ambushing Hybrid Homomorphic Encryption Schemes with a Single Fault
Aikata Aikata, Ahaan Dabholkar, Dhiman Saha, and Sujoy Sinha Roy
Fully Homomorphic Encryption offers an effective solution for privacy-preserving computation, but its adoption is hindered by substantial computational and communication overheads. To address these, the Hybrid Homomorphic Encryption (HHE) protocol was developed, where the client encrypts data using a symmetric encryption scheme (SE), and the server homomorphically evaluates its decryption. Previous studies have demonstrated that the HHE protocol has no impact on the correctness of applications; however, in this work, we shift the focus to its security resilience when subjected to Differential Fault Analysis (DFA). While DFA has proven effective against standalone symmetric-key primitives, no DFA study has been proposed that exploits the HHE protocol as a whole. Furthermore, previous DFA approaches on SE rely on strong assumptions such as nonce reuse, which limits their applicability in real-world protocols or practical applications. In this work, we show that the structure of the HHE protocol itself exposes new avenues for fault exploitation. We introduce Sasta-DFA, which, to our knowledge, is the first DFA targeting the HHE protocol in its entirety. Our study demonstrates that an attacker can achieve complete key recovery with a single fault injection. A key feature of this attack is that it does not require nonce reuse, thus adhering to nonce-related specifications. We adapt the IND-CPA$^D$ threat model proposed by Li and Micciancio at Eurocrypt'21 for HHE in the context of fault attacks. We conduct the first DFA study on the emerging HHE-specific integer-based SE schemes: Rubato, Hera, Pasta, and Masta. Notably, our attack methodology is generalizable and applicable to a broader class of HHE-friendly SE schemes, including boolean schemes like Rasta and even the standard scheme AES. We also present the first experimental validation of fault analysis on these new HHE-enabling schemes. Our attack, mounted on an ATXmega128D4-AU microcontroller, successfully demonstrates full key recovery. Finally, we also extend Sasta-DFA to Authenticated Transciphering protocols under a weaker threat model that removes any functional dependency.
Last updated:  2025-04-16
Zero-Knowledge Protocol for Knowledge of Known Discrete Logarithms: Applications to Ring Confidential Transactions and Anonymous Zether
Li Lin, Tian Qiu, Xin Wang, Hailong Wang, Changzheng Wei, Ying Yan, Wei Wang, and Wenbiao Zhao
The security of a large fraction of zero-knowledge arguments of knowledge relies on the discrete logarithm (DL) assumption or the discrete logarithm relation assumption, as in Bulletproofs (S&P 18) and compressed $\Sigma$-protocols (CRYPTO 20). At the heart of these protocols is an interactive proof of knowledge between a prover and a verifier showing that a Pedersen vector commitment $P=h^{\rho}\cdot\textbf{g}^{\textbf{x}}$ to a vector $\textbf{x}$ satisfies multi-variate equations, where the DL relations among the vector of generators $\textbf{g}$ are unknown. However, in some circumstances, such as ring signatures, ring confidential transactions (RingCT) and $K$-out-of-$N$ proofs, the prover may know the DL relations among the generators, so the DL relation assumption no longer holds, which makes the soundness proofs of these protocols infeasible. This paper is concerned with a problem called knowledge of known discrete logarithms (KKDL) that appears but has not been clearly delineated in the literature. Namely, it asks to prove a set of multi-exponent equalities, starting from the fact that the prover may know the DL relations among the generators of these equalities. Our contributions are three-fold: (1) We propose a special honest-verifier zero-knowledge protocol for the problem. Using the Fiat-Shamir heuristic and the improved inner-product argument of Bulletproofs, the proof size of our protocol is logarithmic in the dimension of the vector. (2) As applications, our protocol can be utilized to securely construct logarithmic-size RingCT, fixing the issues of Omniring (CCS 19); ring signatures (with signature size $2\cdot \lceil \log_2(N) \rceil+10$ for ring size $N$); and $K$-out-of-$N$ proofs of knowledge (with proof size $2\cdot \lceil \log_2(N) \rceil+14$), achieving the most succinct proof sizes and improving on previous results. Meanwhile, we propose the first account-based multi-receiver privacy scheme that considers the sender's privacy with logarithmic proof size (to the best of our knowledge). (3) We describe an attack on RingCT-3.0 (FC 20) where an attacker can spend a coin of an arbitrary amount that never existed on the blockchain.
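For concreteness, here is a minimal sketch (toy parameters, ours) of the Pedersen vector commitment $P=h^{\rho}\cdot\textbf{g}^{\textbf{x}}$ at the heart of these protocols; the KKDL setting is precisely the case where the committer may know the discrete-log relations among the generators produced here.

    import secrets

    q, p = 1019, 2039                  # toy safe-prime pair: p = 2q + 1

    def rand_generator():
        """Random element of the order-q subgroup (a nonzero square mod p)."""
        while True:
            a = secrets.randbelow(p - 2) + 2
            g = a * a % p
            if g != 1:
                return g

    def commit(xs, rho, gs, h):
        """P = h^rho * prod_i g_i^{x_i} mod p."""
        P = pow(h, rho, p)
        for g, x in zip(gs, xs):
            P = P * pow(g, x, p) % p
        return P

    xs = [3, 1, 4, 1, 5]               # committed vector, entries in Z_q
    gs = [rand_generator() for _ in xs]
    h = rand_generator()
    rho = secrets.randbelow(q)         # blinding factor
    print("P =", commit(xs, rho, gs, h))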
Last updated:  2025-04-16
Neural network design options for RNG's verification
José Luis Crespo, Jaime Gutierrez, and Angel Valle
In this work, we explore neural network design options for discriminating Random Number Generators (RNGs), as a complement to existing statistical test suites, continuing a recent paper by the authors. Specifically, we consider variations in architecture and data preprocessing. We test their impact on the network's ability to discriminate sequences from a low-quality RNG versus a high-quality one—that is, to discriminate between "optimal" sequence sets and those from the generator under test. When the network fails to distinguish them, the test is passed. For this test to be useful, the network must have real discrimination capabilities. We review several network design possibilities, finding significant differences in the obtained results. The best option presented here is convolutional networks working on 5120-byte sequences.
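A hedged sketch of such a discriminator (the layer sizes and hyperparameters below are our illustrative choices, not the paper's): a 1-D convolutional network that maps a 5120-byte sequence to a two-class logit; if the trained network cannot beat chance at separating the generator under test from a high-quality reference, the test is passed.

    import torch
    import torch.nn as nn

    class RngDiscriminator(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 32, kernel_size=8, stride=2), nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=8, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            self.head = nn.Linear(64, 2)

        def forward(self, x):          # x: (batch, 5120) bytes scaled to [0, 1]
            z = self.features(x.unsqueeze(1)).squeeze(-1)
            return self.head(z)

    model = RngDiscriminator()
    batch = torch.randint(0, 256, (4, 5120)).float() / 255.0
    print(model(batch).shape)          # torch.Size([4, 2])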
Last updated:  2025-04-16
An NVMe-based Secure Computing Platform with FPGA-based TFHE Accelerator
Yoshihiro Ohba, Tomoya Sanuki, Claude Gravel, Kentaro Mihara, Asuka Wakasugi, and Kenta Adachi
In this study, we introduce a new approach to secure computing by implementing a platform that utilizes a non-volatile memory express (NVMe)-based system with an FPGA-based Torus fully homomorphic encryption (TFHE) accelerator, a solid state drive (SSD), and middleware on the host side. Our platform is the first to offer completely secure computing capabilities for TFHE by using an FPGA-based accelerator. We defined secure computing instructions to evaluate 14-bit to 14-bit functions using TFHE. Our middleware allows for the communication of ciphertexts, keys, and secure computing programs while invoking secure computing programs through NVMe commands with metadata. Our performance evaluation demonstrates that our secure computing platform outperforms CPU-based and GPU-based platforms by 15 to 120 times and by 2.5 to 3 times, respectively, in gate bootstrapping execution time. Additionally, our platform consumes 7 to 12 times less energy during gate bootstrapping than CPU-based platforms and 4.95 times less than a GPU-based platform. The performance of a machine learning application running on our platform shows that bootstrapping accounts for more than 80% of the ciphertext learning time.
Last updated:  2025-04-16
Trilithium: Efficient and Universally Composable Distributed ML-DSA Signing
Antonín Dufka, Semjon Kravtšenko, Peeter Laud, and Nikita Snetkov
In this paper, we present Trilithium: a protocol for distributed key generation and signing compliant with FIPS 204 (ML-DSA). Our protocol allows two parties, a "server" and a "phone", with the assistance of a correlated randomness provider (CRP), to produce a standard ML-DSA signature. We prove our protocol secure against a malicious server or phone in the universal composability (UC) model, introducing some novel techniques to argue the security of two-party secure computation protocols with active security against one party but only active privacy against the other. We provide an implementation of our protocol in Rust and benchmark it, showing the practicality of the protocol.
Last updated:  2025-04-16
Tree-based Quantum Carry-Save Adder
Hyunjun Kim, Sejin Lim, Kyungbae Jang, Siyi Wang, Anubhab Baksi, Anupam Chattopadhyay, and Hwajeong Seo
Quantum computing is regarded as one of the most significant upcoming advancements in computer science. Although fully operational quantum computers have yet to be realized, they are expected to solve specific problems that are difficult to solve using classical computers. Given the limitations of quantum computing resources, it is crucial to design compact quantum circuits for core operations, such as quantum arithmetic. In this paper, we focus on optimizing the circuit depth of quantum multi-operand addition, which is a fundamental component in quantum implementations (for example, of SHA-2). Building on the foundational quantum carry-save approach by Phil Gossett, we introduce a tree-based quantum carry-save adder. Our design integrates the Wallace and Dadda trees to optimize carry handling during multi-operand additions. To further reduce circuit depth, we utilize additional ancilla qubits for parallel operations and introduce an efficient technique for reusing these ancilla qubits. Our tree-based carry-save adder achieves the lowest circuit depth ($T$-depth) and provides an improvement of over 82% (up to 99%) in the qubit count–circuit depth product for multi-operand addition. Furthermore, we apply our method to multiplication, achieving the lowest circuit depth and an improvement of up to 87% in the qubit count–circuit depth product.
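For orientation, the classical carry-save idea the adder builds on can be sketched in a few lines (classical analogue only; the paper's qubit layout, ancilla reuse and $T$-depth optimizations are not modeled): a 3-to-2 compressor turns three operands into a sum word and a carry word with no carry propagation, a Wallace/Dadda-style tree repeats this until two words remain, and a single ordinary addition finishes.

    def compress_3_to_2(a: int, b: int, c: int) -> tuple[int, int]:
        sum_word = a ^ b ^ c                             # per-bit XOR, no carries
        carry_word = ((a & b) | (a & c) | (b & c)) << 1  # majority bits, shifted
        return sum_word, carry_word

    def carry_save_sum(operands: list[int]) -> int:
        ops = list(operands)
        while len(ops) > 2:                  # tree of 3->2 compressors
            a, b, c = ops.pop(), ops.pop(), ops.pop()
            ops.extend(compress_3_to_2(a, b, c))
        return sum(ops)                      # single final carry-propagating add

    vals = [13, 7, 25, 6, 91]
    assert carry_save_sum(vals) == sum(vals)
    print(carry_save_sum(vals))              # 142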
Last updated:  2025-04-16
Uncertainty Estimation in Neural Network-enabled Side-channel Analysis and Links to Explainability
Seyedmohammad Nouraniboosjin and Fatemeh Ganji
Side-channel analysis (SCA) has emerged as a critical field in securing hardware implementations against potential vulnerabilities. With the advent of artificial intelligence (AI), neural network-based approaches have proven to be among the most useful techniques for profiled SCA. Despite the success of NN-assisted SCA, a critical challenge remains, namely understanding predictive uncertainty. NNs are often uncertain in their predictions, leading to incorrect key guesses with high probability, which corresponds to a higher rank for the correct key. This uncertainty stems from multiple factors, including measurement errors, randomness in physical quantities, and variability in NN training. Understanding whether this uncertainty arises from inherent data characteristics or can be mitigated through better training is crucial. Additionally, if data uncertainty dominates, identifying the specific trace features responsible for misclassification becomes essential. We propose a novel approach to estimating uncertainty in NN-based SCA by leveraging Rényi entropy, which offers a generalized framework for capturing various forms of uncertainty. This metric allows us to quantify uncertainty in NN predictions and explain its impact on key recovery. We decompose uncertainty into epistemic (model-related) and aleatoric (data-related) components. Given the challenge of estimating probability distributions in high-dimensional spaces, we use matrix-based Rényi α-entropy and α-divergence to better approximate leakage distributions, addressing the limitations of KL divergence in SCA. We also explore the sources of uncertainty, e.g., resynchronization and randomized keys, as well as hyperparameters related to NN training. To identify which time instances (features in traces) contribute most to uncertainty, we integrate SHAP explanations with our framework, overcoming the limitations of conventional sensitivity analysis. Lastly, we show that predictive uncertainty strongly correlates with standard SCA metrics like rank, offering a complementary measure for evaluating attack complexity. Our theoretical findings are backed by extensive experiments on available datasets and NN models.
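As a concrete anchor for the entropy metric (following the usual kernel-matrix definition of matrix-based Rényi entropy; the paper's exact estimator and its SCA integration are not reproduced here): build a Gram matrix over traces, normalize it to unit trace, and compute $S_\alpha = \frac{1}{1-\alpha}\log_2 \sum_i \lambda_i^\alpha$ from its eigenvalues.

    import numpy as np

    def matrix_renyi_entropy(X: np.ndarray, alpha: float = 2.0, sigma: float = 1.0):
        # Pairwise RBF-kernel Gram matrix over the rows (traces) of X.
        sq = np.sum(X**2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
        K = np.exp(-d2 / (2 * sigma**2))
        A = K / np.trace(K)                      # normalize to unit trace
        lam = np.clip(np.linalg.eigvalsh(A), 0, None)
        return np.log2(np.sum(lam**alpha)) / (1 - alpha)

    rng = np.random.default_rng(0)
    traces = rng.normal(size=(64, 100))          # 64 toy "traces", 100 samples each
    print(matrix_renyi_entropy(traces, alpha=2.0))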
Last updated:  2025-04-15
Proofs of Useful Work from Arbitrary Matrix Multiplication
Ilan Komargodski, Itamar Schen, and Omri Weinstein
We revisit the longstanding open problem of implementing Nakamoto's proof-of-work (PoW) consensus based on a real-world computational task $T(x)$ (as opposed to artificial random hashing), in a truly permissionless setting where the miner itself chooses the input $x$. The challenge in designing such a Proof-of-Useful-Work (PoUW) protocol is using the native computation of $T(x)$ to produce a PoW certificate with prescribed hardness and with negligible computational overhead over the worst-case complexity of $T(\cdot)$ -- this ensures malicious miners cannot ``game the system'' by fooling the verifier into accepting with higher probability than honest miners (while using similar computational resources). Indeed, obtaining a PoUW with $O(1)$-factor overhead is trivial for any task $T$, but also useless. Our main result is a PoUW for the task of Matrix Multiplication $\mathsf{MatMul}(A,B)$ of arbitrary matrices with $1+o(1)$ multiplicative overhead compared to na\"ive $\mathsf{MatMul}$ (even in the presence of Fast Matrix Multiplication-style algorithms, which are currently impractical). We conjecture that our protocol has optimal security in the sense that a malicious prover cannot obtain any significant advantage over an honest prover. This conjecture is based on reducing the hardness of our protocol to the task of solving a batch of low-rank random linear equations, which is of independent interest. Since $\mathsf{MatMul}$s are the bottleneck of AI compute as well as countless industry-scale applications, this primitive suggests a concrete design for a new L1 base-layer protocol, which nearly eliminates the energy waste of Bitcoin mining -- allowing GPU consumers to reduce their AI training and inference costs by ``re-using'' it for blockchain consensus, in exchange for block rewards (2-for-1). This blockchain is currently under construction.
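Background sketch (a classical textbook tool, plainly not the paper's protocol): Freivalds' check illustrates the asymmetry any MatMul-based proof of useful work relies on, namely that verifying a product $C = AB$ costs $O(n^2)$ per trial while computing it naively costs $O(n^3)$.

    import numpy as np

    def freivalds(A, B, C, trials=20, rng=np.random.default_rng(0)):
        n = C.shape[1]
        for _ in range(trials):        # each trial catches a bad C w.p. >= 1/2
            r = rng.integers(0, 2, n)
            if not np.array_equal(A @ (B @ r), C @ r):
                return False
        return True

    n = 200
    A = np.arange(n * n).reshape(n, n) % 7
    B = np.eye(n, dtype=int)
    C = A @ B
    assert freivalds(A, B, C)
    C[0, 0] += 1                       # corrupt one entry
    assert not freivalds(A, B, C)      # rejected except w.p. ~2^-20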
Last updated:  2025-04-15
Post-quantum Cryptographic Analysis of SSH
Benjamin Benčina, Benjamin Dowling, Varun Maram, and Keita Xagawa
The Secure Shell (SSH) protocol is one of the first security protocols on the Internet to upgrade itself to resist attacks against future quantum computers, with the default adoption of the "quantum (otherwise, classically)" secure hybrid key exchange in OpenSSH from April 2022. However, there is a lack of a comprehensive security analysis of this quantum-resistant version of SSH in the literature: related works either focus on the hybrid key exchange in isolation and do not consider security of the overall protocol, or analyze the protocol in security models which are not appropriate for SSH, especially in the "post-quantum" setting. In this paper, we remedy the state of affairs by providing a thorough post-quantum cryptographic analysis of SSH. We follow a "top-down" approach wherein we first prove security of SSH in a more appropriate model, namely, our post-quantum extension of the so-called authenticated and confidential channel establishment (ACCE) protocol security model; our extension, which captures "harvest now, decrypt later" attacks, could be of independent interest. Then we establish the cryptographic properties of SSH's underlying primitives, as concretely instantiated in practice, based on our protocol-level ACCE security analysis: for example, we prove relevant cryptographic properties of "Streamlined NTRU Prime", a key encapsulation mechanism (KEM) which is used in recent versions of OpenSSH and TinySSH, in the quantum random oracle model, and address open problems related to its analysis in the literature. Notably, our ACCE security analysis of post-quantum SSH relies on the weaker notion of IND-CPA security of the ephemeral KEMs used in the hybrid key exchange. This is in contrast to prior works which rely on the stronger assumption of IND-CCA secure ephemeral KEMs. Hence we conclude the paper with a discussion on potentially replacing IND-CCA secure KEMs in current post-quantum implementations of SSH with simpler and faster IND-CPA secure counterparts, and also provide the corresponding benchmarks.
Last updated:  2025-04-15
Hedging Public-Key Encryption in the Real World
Alexandra Boldyreva, Christopher Patton, and Thomas Shrimpton
Hedged PKE schemes are designed to provide useful security when the per-message randomness fails to be uniform, say, due to faulty implementations or adversarial actions. A simple and elegant theoretical approach to building such schemes works like this: Synthesize fresh random bits by hashing all of the encryption inputs, and use the resulting hash output as randomness for an underlying PKE scheme. The idea actually goes back to the Fujisaki-Okamoto transform for turning CPA-secure encryption into CCA-secure encryption, and is also used to build deterministic PKE schemes. In practice, implementing this simple construction is surprisingly difficult, as the high- and mid-level APIs presented by the most commonly used crypto libraries (e.g. OpenSSL and forks thereof) do not permit one to specify the per-encryption randomness. Thus application developers are forced to piece together low-level functionalities and attend to any associated, security-critical algorithmic choices. Other approaches to hedged PKE present similar problems in practice. We reconsider the matter of building hedged PKE schemes, and the security notions they aim to achieve. We lift the current best-possible security notion for hedged PKE (IND-CDA) from the CPA setting to the CCA setting, and then show how to achieve it using primitives that are readily available from high-level APIs. We also propose a new security notion, MM-CCA, which generalizes traditional IND-CCA to admit imperfect randomness. Like IND-CCA, and unlike IND-CDA, our notion gives the adversary the public key. We show that MM-CCA is achieved by RSA-OAEP in the random-oracle model; this is significant in practice because RSA-OAEP is directly available from high-level APIs across all libraries we surveyed. We sort out relationships among the various notions, and also develop new results for existing hedged PKE constructions.
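The construction described above is easy to sketch end to end (a minimal toy, ours, assuming a hash modeled as a random oracle and a toy ElGamal-style PKE over a small group; parameters are illustrative only): derive the coins by hashing the public key, the message, and whatever per-message randomness is available, then run the underlying PKE deterministically on those coins.

    import hashlib, secrets

    q, p, g = 1019, 2039, 4                  # toy subgroup of order q, p = 2q + 1

    def keygen():
        x = secrets.randbelow(q - 1) + 1
        return x, pow(g, x, p)               # sk, pk

    def hedged_encrypt(pk, m, r: bytes):
        # Coins synthesized from ALL encryption inputs, then fed to the PKE.
        coins = hashlib.sha256(f"{pk}|{m}|".encode() + r).digest()
        k = int.from_bytes(coins, "big") % (q - 1) + 1
        return pow(g, k, p), m * pow(pk, k, p) % p   # toy ElGamal (c1, c2)

    def decrypt(x, c1, c2):
        return c2 * pow(pow(c1, x, p), -1, p) % p

    sk, pk = keygen()
    c1, c2 = hedged_encrypt(pk, 42, b"\x00" * 16)    # even constant r degrades gracefully
    assert decrypt(sk, c1, c2) == 42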
Last updated:  2025-04-15
On the Definition of Malicious Private Information Retrieval
Bar Alon and Amos Beimel
A multi-server private information retrieval (PIR) protocol allows a client to obtain an entry of its choice from a database, held by one or more servers, while hiding the identity of the entry from small enough coalitions of servers. In this paper, we study PIR protocols in which some of the servers are malicious and may not send messages according to the prescribed protocol. In previous papers, such protocols were defined by requiring that they are correct, private, and robust to malicious servers, i.e., by listing 3 properties that they should satisfy. However, 40 years of experience in studying secure multiparty protocols has taught us that defining the security of protocols by a list of required properties is problematic. In this paper, we rectify this situation and define the security of PIR protocols with malicious servers using the real vs. ideal paradigm. We study the relationship between the property-based definition of PIR protocols and the real vs. ideal definition, showing the following results: - We prove that if we require full security from PIR protocols, e.g., the client outputs the correct value of the database entry with high probability even if a minority of the servers are malicious, then the two definitions are equivalent. This implies that constructions of such protocols that were proven secure using the property-based definition are actually secure under the ``correct'' definition of security. - We show that if we require security-with-abort from PIR protocols (called PIR protocols with error-detection in previous papers), i.e., protocols in which the user either outputs the correct value or an abort symbol, then there are protocols that are secure under the property-based definition; however, they do not satisfy the real vs. ideal definition, that is, they can be attacked in a way that allows selective abort. This shows that the property-based definition of PIR protocols with security-with-abort is problematic. - We consider the compiler of Eriguchi et al. (TCC 22) that starts with a PIR protocol that is secure against semi-honest servers and constructs a PIR protocol with security-with-abort; this compiler implies the best-known PIR protocols with security-with-abort. We show that applying this compiler does not result in PIR protocols that are secure according to the real vs. ideal definition. However, we prove that a simple modification of this compiler results in PIR protocols that are secure according to the real vs. ideal definition.
Last updated:  2025-04-15
SUMAC: an Efficient Administrated-CGKA Using Multicast Key Agreement
Nicolas Bon, Céline Chevalier, Guirec Lebrun, and Ange Martinelli
Since the standardization of the Secure Group Messaging protocol Messaging Layer Security (MLS) [4], whose core subprotocol is a Continuous Group Key Agreement (CGKA) mechanism named TreeKEM, CGKAs have become the norm for group key exchange protocols. However, to alleviate the security issue originating from the fact that all users in a CGKA are able to carry out sensitive operations on the group membership, an augmented protocol called Administrated-CGKA (A-CGKA) was recently created [2]. An A-CGKA includes in the cryptographic protocol the management of the administration rights that restrict the set of privileged users, giving strong security guarantees for the group administration. The protocol designed in [2] is a plugin added to a regular (black-box) CGKA, which consequently adds complexity to the underlying CGKA and curtails its performance. Yet, leaving the fully decentralized paradigm of a CGKA offers the prospect of new, potentially more efficient protocol designs. We propose in this paper an A-CGKA called SUMAC, which offers strongly enhanced communication and storage performance compared to other A-CGKAs and even to TreeKEM. Our protocol is based on a novel design that modularly combines a regular CGKA used by the administrators of the group with a Tree-structured Multicast Key Agreement (TMKA) [9] – a centralized group key exchange mechanism administrated by a single group manager – between each administrator and all the standard users. The TMKA gives SUMAC an asymptotic communication cost logarithmic in the number of users, similarly to a CGKA. However, the concrete performance of our protocol is much better than the latter, especially in the post-quantum framework, due to the intensive use of secret-key cryptography, which offers a lighter bandwidth than the public-key encryption schemes of a CGKA. In practice, SUMAC improves the communication cost of TreeKEM by a factor of 1.4 to 2.4 for admin operations and a factor of 2 to 38 for user operations. Similarly, its storage cost divides that of TreeKEM by a factor of 1.3 to 23 for an administrator and of 3.9 to 1,070 for a standard user. Our analysis of SUMAC is provided along with a ready-to-use open-source Rust implementation that confirms the feasibility and performance of our protocol.
Last updated:  2025-04-15
Quantum Periodic Distinguisher Construction: Symbolization Method and Automated Tool
Qun Liu, Haoyang Wang, Jinliang Wang, Boyun Li, and Meiqin Wang
As one of the most famous quantum algorithms, Simon's algorithm enables the derivation of the period of a periodic function in polynomial time. However, the complexity of constructing periodic functions has hindered the widespread application of Simon's algorithm in symmetric-key cryptanalysis. Currently, aside from the exhaustive search-based testing method introduced by Canale et al. at CRYPTO 2022, there is no unified model for effectively searching for periodic distinguishers. Although Xiang et al. established a link between periodic functions and truncated differential theory at ToSC 2024, their approach lacks the ability to construct periods using unknown differentials and does not provide automated tools. This limitation underscores the inadequacy of existing methods in identifying periodic distinguishers for complex structures. In this paper, we address the challenge of advancing periodic distinguishers for symmetric-key ciphers. First, we propose a more generalized theory for constructing periodic distinguishers, addressing the limitations of Xiang et al.'s theory in handling unknown differences. We further extend our theory to probabilistic periodic distinguishers, thereby extending the separability property proposed by Hodžić et al. in 2020. As a result, our theory can cover a wider range of periodic distinguishers. Second, we introduce a novel symbolic representation to simplify the search for periodic distinguishers. Based upon this representation, we propose the first fully automated SMT-based search model, which efficiently addresses the challenges of manual searching in complex structures. Finally, we extend the model to SPN structures based on our new theory. Our model has broad applicability through significant advancements in analyzing generalized Feistel structures (GFSs) and SPN-based ciphers. As a general model, we have achieved new quantum distinguishers with the following round configurations: 10 rounds for GFS-4F, 10 rounds for LBlock, 10 rounds for TWINE, and 16 rounds for Skipjack-B, improving the previous best results by 2, 2, 2, and 3 rounds, respectively. In the domain of SPN-based ciphers, our model has enabled the identification of novel periodic distinguishers, including the first 9-round distinguisher for SKINNY and the first 12-round distinguisher for CRAFT. These achievements lay the foundation for quantum cryptanalysis of SPN-based ciphers using Simon's algorithm.
Last updated:  2025-04-15
Pirouette: Query Efficient Single-Server PIR
Jiayi Kang and Leonard Schild
Private information retrieval (PIR) allows a client to query a public database privately and serves as a key building block for privacy-enhancing applications. Minimizing query size is particularly important in many use cases, for example when clients operate on low-power or bandwidth-constrained devices. However, existing PIR protocols exhibit large query sizes: to query $2^{25}$ records, the smallest reported query size is 14.8KB, in Respire [Burton et al., CCS'24]. Respire is based on fully homomorphic encryption (FHE), where a common approach to lowering the client-to-server communication cost is transciphering. When combining the state-of-the-art transciphering [Bon et al., CHES'24] with Respire, the resulting protocol (referred to as T-Respire) has a 336B query size, while incurring a 16.2x higher server computation cost than Respire. Our work presents the Pirouette protocol, which achieves a query size of just 36B without transciphering. This represents a 9.3x reduction compared to T-Respire and a 420x reduction compared to Respire. For queries over $2^{25}$ records, the single-core server computation in Pirouette is only 2x slower than Respire and 8.1x faster than T-Respire, and the server computation is highly parallelizable. Furthermore, Pirouette requires no database-specific hint for clients and naturally extends to support queries over encrypted databases.
Last updated:  2025-04-15
Efficient SPA Countermeasures using Redundant Number Representation with Application to ML-KEM
Rishub Nagpal, Vedad Hadžić, Robert Primas, and Stefan Mangard
Simple power analysis (SPA) attacks and their extensions, profiled and soft-analytical side-channel attacks (SASCA), represent a significant threat to the security of cryptographic devices and remain among the most powerful classes of passive side-channel attacks. In this work, we analyze how numeric representations of secrets can affect the amount of exploitable information leakage available to the adversary. We present an analysis of how mutual information changes as a function of the integer ring size relative to the machine word size. Furthermore, we study the Redundant Number Representation (RNR) countermeasure and show that its application to ML-KEM can resist the most powerful SASCA attacks and provides a low-cost alternative to shuffling. We evaluate the performance of RNR-ML-KEM with both simulated and practical SASCA experiments on the ARM Cortex-M4, based on a worst-case attack methodology. We show that RNR-ML-KEM sufficiently renders these attacks ineffective. Finally, we evaluate the performance of the RNR-ML-KEM NTT and INTT and show that SPA security can be achieved with a 62.8% overhead for the NTT and 0% overhead for the INTT relative to the ARM Cortex-M4 reference implementation used.
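A sketch of the redundant-representation idea (our illustrative reading of RNR, not the paper's exact implementation): store each $\mathbb{Z}_q$ coefficient $x$ as $x + kq$ for a fresh random $k$ that still fits the machine word, so the stored word's Hamming-weight leakage no longer pins down $x$, while all arithmetic remains correct modulo $q$.

    import secrets

    Q = 3329                                  # ML-KEM modulus
    WORD = 2**32                              # assumed 32-bit machine word

    def rnr_encode(x: int) -> int:
        k = secrets.randbelow(WORD // Q)      # random redundancy multiplier
        return x + k * Q                      # still congruent to x mod Q

    def rnr_mul(a: int, b: int) -> int:
        return rnr_encode((a * b) % Q)        # re-randomize after each operation

    x, y = 1234, 2222
    xr, yr = rnr_encode(x), rnr_encode(y)
    assert xr % Q == x and yr % Q == y
    assert rnr_mul(xr, yr) % Q == (x * y) % Q
    print(f"{x} stored as {xr} (Hamming weight {bin(xr).count('1')})")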
Last updated:  2025-04-15
A Critical Analysis of Deployed Use Cases for Quantum Key Distribution and Comparison with Post-Quantum Cryptography
Nick Aquina, Bruno Cimoli, Soumya Das, Kathrin Hövelmanns, Fiona Johanna Weber, Chigo Okonkwo, Simon Rommel, Boris Škorić, Idelfonso Tafur Monroy, and Sebastian Verschoor
Quantum Key Distribution (QKD) is currently being discussed as a technology to safeguard communication in a future where quantum computers compromise traditional public-key cryptosystems. In this paper, we conduct a comprehensive security evaluation of QKD-based solutions, focusing on real-world use cases sourced from academic literature and industry reports. We analyze these use cases, assess their security and identify the possible advantages of deploying QKD-based solutions. We further compare QKD-based solutions with Post-Quantum Cryptography (PQC), the alternative approach to achieving security when quantum computers compromise traditional public-key cryptosystems, evaluating their respective suitability for each scenario. Based on this comparative analysis, we critically discuss and comment on which use cases QKD is suited for, considering factors such as implementation complexity, scalability, and long-term security. Our findings contribute to a better understanding of the role QKD could play in future cryptographic infrastructures and offer guidance to decision-makers considering the deployment of QKD.
Last updated:  2025-04-15
Ciphertext-Ciphertext Matrix Multiplication: Fast for Large Matrices
Jai Hyun Park
Matrix multiplication of two encrypted matrices (CC-MM) is a key challenge for privacy-preserving machine learning applications. As modern machine learning models focus on scalability, fast CC-MM on large datasets is increasingly in demand. In this work, we present a CC-MM algorithm for large matrices. The algorithm consists of plaintext matrix multiplications (PP-MM) and ciphertext matrix transpose algorithms (C-MT). We propose a fast C-MT algorithm, which is computationally inexpensive compared to PP-MM. By leveraging high-performance BLAS libraries to optimize PP-MM, we implement large-scale CC-MM with substantial performance improvements. Furthermore, we propose lightweight algorithms, significantly reducing the key size from $1\ 960$ MB to $1.57$ MB for CC-MM with comparable efficiency. In a single-thread implementation, the C-MT algorithm takes $0.76$ seconds to transpose a $2\ 048\times 2\ 048$ encrypted matrix. The CC-MM algorithm requires $85.2$ seconds to multiply two $4\ 096\times 4\ 096$ encrypted matrices. For large matrices, our algorithm outperforms the state-of-the-art CC-MM method from Jiang-Kim-Lauter-Song [CCS'18] by a factor of over $800$.
Last updated:  2025-04-15
Multi-Party Private Set Operations from Predicative Zero-Sharing
Minglang Dong, Yu Chen, Cong Zhang, Yujie Bai, and Yang Cao
Typical protocols in the multi-party private set operations (MPSO) setting enable $m > 2$ parties to perform certain secure computation on the intersection or union of their private sets, realizing a very limited range of MPSO functionalities. Most works in this field focus on just one or two specific functionalities, resulting in a large variety of isolated schemes and a lack of a unified framework in MPSO research. In this work, we present an MPSO framework, which allows $m$ parties, each holding a set, to securely compute any set formulas (arbitrary compositions of a finite number of binary set operations, including intersection, union and difference) on their private sets. Our framework is highly versatile and can be instantiated to accommodate a broad spectrum of MPSO functionalities. To the best of our knowledge, this is the first framework to achieve such a level of flexibility and generality in MPSO, without relying on generic secure multi-party computation (MPC) techniques. Our framework exhibits favorable theoretical and practical performance. With computation and communication complexity scaling linearly with the set size $n$, it achieves optimal complexity that is on par with the naive solution for widely used functionalities, such as multi-party private set intersection (MPSI), MPSI with cardinality output (MPSI-card), and MPSI with cardinality and sum (MPSI-card-sum), for the first time in the standard semi-honest model. Furthermore, the instantiations of our framework, which primarily rely on symmetric-key techniques, provide efficient protocols for MPSI, MPSI-card, MPSI-card-sum, and multi-party private set union (MPSU), with online performance that either surpasses or matches the state of the art in standard semi-honest model. At the technical core of our framework is a newly introduced primitive called predicative zero-sharing. This primitive captures the universality of a number of MPC protocols and is composable. We believe it may be of independent interest.
Last updated:  2025-04-15
Efficient Multi-Party Private Set Union Without Non-Collusion Assumptions
Minglang Dong, Cong Zhang, Yujie Bai, and Yu Chen
A multi-party private set union (MPSU) protocol enables $m$ $(m > 2)$ parties, each holding a set, to collectively compute the union of their sets without revealing any additional information to other parties. There are two main categories of MPSU protocols: The first category builds on public-key techniques, where existing works require a super-linear number of public-key operations, resulting in poor practical efficiency. The second category builds on oblivious transfer and symmetric-key techniques. The only work in this category, proposed by Liu and Gao (ASIACRYPT 2023), features the best concrete performance among all existing protocols, but still has super-linear computation and communication. Moreover, it does not achieve the standard semi-honest security, as it inherently relies on a non-collusion assumption, which is unlikely to hold in practice. Two significant open problems remain: no MPSU protocol achieves semi-honest security based on oblivious transfer and symmetric-key techniques, and no MPSU protocol achieves both linear computation and linear communication complexity. In this work, we resolve both of them. - We propose the first MPSU protocol based on oblivious transfer and symmetric-key techniques in the standard semi-honest model. This protocol's online performance is $3.9-10.0\times$ faster than that of Liu and Gao in the LAN setting. Concretely, our protocol requires only $4.4$ seconds in the online phase for $3$ parties with sets of $2^{20}$ items each. - We propose the first MPSU protocol achieving linear computation and linear communication complexity, based on public-key operations. This protocol has the best total performance in the WAN setting, due to a factor of $3.0-36.5\times$ improvement in overall communication compared to Liu and Gao. Concretely, on a slow network ($50$ Mbps), their protocol takes $6.6$ hours to run for $9$ parties with sets of $2^{18}$ items each, whereas ours takes only $47$ minutes. We implement our protocols and conduct extensive experiments to compare their performance with the state of the art. To the best of our knowledge, our code is the first correct and secure implementation of MPSU that reports on large-size experiments.
Last updated:  2025-04-15
A Note on P $\neq$ NP
Ping Wang
The question of whether the complexity class P equals NP is a major unsolved problem in theoretical computer science. The key to proving that P $\neq$ NP is to show that there is no efficient (polynomial-time) algorithm for a language in NP. For a language in NP, it is impossible to prove that it is not in P, because we can only claim that no better algorithm has been found so far, and there is no way to guarantee (or prove) that a more efficient algorithm does not exist. It is impossible to prove P $\neq$ NP rigorously, as any problem with size $n$ and solution space $2^n$ has certain structures and properties, and there is no way to prove that more efficient algorithms that take advantage of these structures or properties do not exist. In all attempts to prove P $\neq$ NP, all we can do is try to provide the best clues or "evidence" for P $\neq$ NP. In this paper, we introduce a new language, the Add/XNOR problem, which has the simplest structure and perfect randomness, obtained by extending the subset sum problem. We conjecture that square-root time complexity is required to solve the Add/XNOR problem, making it better evidence for P $\neq$ NP than the subset sum problem, since the latter admits algorithms that break the square-root complexity bound. Furthermore, by giving up commutativity and associativity, we design a magma equipped with a permutation and achieve Conjecture 2, obtaining what we believe is the first problem with exponential complexity that requires an exhaustive search to solve, by far the strongest evidence for P $\neq$ NP.
Last updated:  2025-04-15
Recovering S-Box Design Structures and Quantifying Distances between S-Boxes using Deep Learning
Donggeun Kwon, Deukjo Hong, Jaechul Sung, and Seokhie Hong
At ASIACRYPT’19, Bonnetain et al. demonstrated that an S-box can be distinguished from a permutation chosen uniformly at random by quantifying the distances between their behaviors. In this study, we extend this approach by proposing a deep learning-based method to quantify distances between two different S-boxes and evaluate similarities in their design structures. First, we introduce a deep learning-based framework that trains a neural network model to recover the design structure of a given S-box based on its cryptographic table. We then interpret the decision-making process of our trained model to analyze which coefficients in the table play significant roles in identifying S-box structures. Additionally, we investigate the inference results of our model across various scenarios to evaluate its generalization capabilities. Building upon these insights, we propose a novel approach to quantify distances between structurally different S-boxes. Our method effectively assesses structural similarities by embedding S-boxes using the deep learning model and measuring the distances between their embedding vectors. Furthermore, experimental results confirm that this approach is also applicable to structures that the model has never seen during training. Our findings demonstrate that deep learning can reveal the underlying structural similarities between S-boxes, highlighting its potential as a powerful tool for S-box reverse-engineering.
Last updated:  2025-04-15
ToFA: Towards Fault Analysis of GIFT and GIFT-like Ciphers Leveraging Truncated Impossible Differentials
Anup Kumar Kundu, Shibam Ghosh, Aikata Aikata, and Dhiman Saha
In this work, we introduce ToFA, the first fault attack (FA) strategy that attempts to leverage the classically well-known idea of impossible differential cryptanalysis to mount practically verifiable attacks on bit-oriented ciphers like GIFT and BAKSHEESH. The idea stems from the fact that truncated differential paths induced by fault injection in certain intermediate rounds of the ciphers lead to active S-boxes in subsequent rounds whose inputs admit specific truncated differences. This leads to a (multi-round) impossible differential distinguisher, which can be incrementally leveraged for key-guess elimination via partial decryption. The key-space reduction further exploits the multi-round impossibility, capitalizing on the relations due to the quotient-remainder (QR) groups of the GIFT and BAKSHEESH linear layers, which increases the filtering capability of the distinguisher. Moreover, the primary observations made in this work are independent of the actual S-box. Clock-glitch-based fault attacks were mounted on 8-bit implementations of GIFT-64/GIFT-128 using a ChipWhisperer Lite board on an 8-bit ATXmega128D4-AU microcontroller. Unique key recovery was achieved for GIFT-128 with 3 random byte faults, while for GIFT-64, the key space was reduced to $2^{32}$, the best achievable for GIFT-64 with a single-level fault, due to its key schedule. This work also reports the highest fault injection penetration for any variant of GIFT and BAKSHEESH. Finally, this work reiterates the role of classical cryptanalysis strategies in fault vulnerability assessment while leading to the most efficient fault attacks on GIFT.
Last updated:  2025-04-15
Impossible Differential Attack on SAND-128
Nobuyuki Sugio
The impossible differential attack is one of the major cryptanalytic methods for symmetric-key block ciphers. In this paper, we evaluate the security of SAND-128 against impossible differential attacks. SAND is an AND-RX-based lightweight block cipher proposed by Chen et al. in Designs, Codes and Cryptography 2022. There are two variants of SAND, namely SAND-64 and SAND-128, due to structural differences. In this paper, we search for impossible differential distinguishers of SAND-128 using Constraint Programming (CP) and reveal 14-round impossible differential distinguishers. The number of 14-round distinguishers is $2^{14} \times 7 = 114,688$. Furthermore, we demonstrate a key recovery attack on 21-round SAND-128. The attack requires $2^{124}$ data, $2^{127.2}$ encryptions, and $2^{122}$ bytes of memory. Although this result is currently the best attack on round-reduced SAND-128, it does not threaten the security of full SAND-128 against impossible differential attacks.
Last updated:  2025-04-15
Fuzzy Extractors are Practical: Cryptographic Strength Key Derivation from the Iris
Sohaib Ahmad, Sixia Chen, Luke Demarest, Benjamin Fuller, Caleb Manicke, Alexander Russell, and Amey Shukla
Despite decades of effort, a chasm has existed between the theory and practice of device-level biometric authentication. Deployed authentication algorithms rely on data that overtly leaks private information about the biometric; thus, systems rely on externalized security measures such as trusted execution environments. The authentication algorithms have no cryptographic guarantees. We close this chasm. We introduce a key derivation system with 105 bits of entropy and a 91% true accept rate (TAR) for the iris. Our system advances 1) the feature extraction from the iris and 2) the fuzzy extractor used to derive keys. The fuzzy extractor builds on sample-then-lock (Canetti et al., Journal of Cryptology 2021). We 1) introduce a new method of sampling that achieves a better TAR-versus-entropy tradeoff when features have different quality, 2) correct their security proof, showing that the minimum of the min-entropy of subsets is the relevant security measure, and 3) tighten their concrete analysis, nearly doubling security under reasonable assumptions. Our final feature extractor incorporates ideas from the new sampling method to produce features optimized for the sample-then-lock construction. The only statistical assumption needed to show security of our system is necessary: the accuracy of min-entropy estimation. Our quantitative level of security is well above prior work. Simhadri et al. (ISC, 2019) report $32$ bits on the iris, but they have a bug in their analysis (that we fix) which reduces their strength. Zhang et al.'s (ePrint 2021/1559) system achieves $45$ bits on the face but assumes independence between biometrics and the error-correcting code used, an assumption that cannot be easily verified. Other prior work assumes that the bits of biometrics are i.i.d., an assumption that is demonstrably false. (Or that all correlation is pairwise between features (Hine et al., TIFS 2023).) Irises used to evaluate TAR and security are class-disjoint from those used for training and collecting statistics (the open dataset regime).
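A toy sketch of sample-then-lock as used above (a simple hash-based digital locker stands in for the real construction; subset size, locker count, and the feature model are placeholder choices of ours): the key is locked under many random subsets of template bits, and a fresh reading that agrees with the template on any one subset unlocks it.

    import hashlib, secrets

    FEATS, SUBSET, LOCKERS, PAD = 1024, 24, 500, 16

    def locker(bits: bytes, key: bytes, nonce: bytes) -> bytes:
        pad = hashlib.sha256(nonce + bits).digest()          # 32-byte pad
        return bytes(a ^ b for a, b in zip(key + b"\x00" * PAD, pad))

    def enroll(template, key: bytes):
        lockers = []
        for _ in range(LOCKERS):
            idx = sorted(secrets.SystemRandom().sample(range(FEATS), SUBSET))
            bits = bytes(template[i] for i in idx)
            nonce = secrets.token_bytes(16)
            lockers.append((idx, nonce, locker(bits, key, nonce)))
        return lockers

    def reproduce(reading, lockers):
        for idx, nonce, ct in lockers:
            bits = bytes(reading[i] for i in idx)
            pad = hashlib.sha256(nonce + bits).digest()
            pt = bytes(a ^ b for a, b in zip(ct, pad))
            if pt.endswith(b"\x00" * PAD):   # zero padding = correct subset
                return pt[:-PAD]
        return None

    key = secrets.token_bytes(16)
    template = [secrets.randbelow(2) for _ in range(FEATS)]
    noisy = template.copy()
    for i in secrets.SystemRandom().sample(range(FEATS), 40):  # ~4% bit errors
        noisy[i] ^= 1
    print(reproduce(noisy, enroll(template, key)) == key)      # True w.h.p.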
Last updated:  2025-04-15
Precision For A Qubit Operation and Failure of Achieving Quantum Supremacy
Zhengjun Cao and Zhenfu Cao
M. Dyakonov [IEEE Spectrum, 2019] asked what precision is required for manipulating a quantum gate, and what role the notion of ENERGY plays in quantum computing theory. We answer these questions by investigating the Quantum Fourier Transform (QFT). We show that the precision for a qubit operation is no less than $4.72694\times 10^{-36}$, in accordance with the energy of a free particle. Quantum entanglement is indispensable to quantum supremacy. Present quantum computers have failed to instantiate quantum entanglement to the expected degree, owing to inexact qubit operations. Quantum transitions incurred during the evolution of the space-time continuum covertly change qubits and corrupt quantum supremacy.
Last updated:  2025-04-15
VerITAS: Verifying Image Transformations at Scale
Trisha Datta, Binyi Chen, and Dan Boneh
Verifying image provenance has become an important topic, especially in the realm of news media. To address this issue, the Coalition for Content Provenance and Authenticity (C2PA) developed a standard to verify image provenance that relies on digital signatures produced by cameras. However, photos are usually edited before being published, and a signature on an original photo cannot be verified given only the published edited image. In this work, we describe VerITAS, a system that uses zero-knowledge proofs (zk-SNARKs) to prove that only certain edits have been applied to a signed photo. While past work has created image editing proofs for photos, VerITAS is the first to do so for realistically large images (30 megapixels). Our key innovation enabling this leap is the design of a new proof system that enables proving knowledge of a valid signature on a large amount of witness data. We run experiments on realistically large images that are more than an order of magnitude larger than those tested in prior work. In the case of a computationally weak signer, such as a camera, we are able to generate a proof of valid edits for a 90 MB image in just over thirteen minutes, costing about \$0.54 on AWS per image. In the case of a more powerful signer, we are able to generate a proof of valid edits for a 90 MB image in just over three minutes, costing only \$0.13 on AWS per image. Either way, proof verification time is less than a second. Our techniques apply broadly whenever there is a need to prove that an efficient transformation was applied correctly to a large amount of signed private data.
Last updated:  2025-04-14
Revocable Encryption, Programs, and More: The Case of Multi-Copy Security
Prabhanjan Ananth, Saachi Mutreja, and Alexander Poremba
Fundamental principles of quantum mechanics have inspired many new research directions, particularly in quantum cryptography. One such principle is quantum no-cloning, which has led to the emerging field of revocable cryptography. Roughly speaking, in a revocable cryptographic primitive, a cryptographic object (such as a ciphertext or program) is represented as a quantum state in such a way that surrendering it effectively translates into losing the capability to use this cryptographic object. All of the revocable cryptographic systems studied so far have a major drawback: the recipient only receives one copy of the quantum state. Worse yet, the schemes become completely insecure if the recipient receives many identical copies of the same quantum state---a property that is clearly much more desirable in practice. While multi-copy security has been extensively studied for a number of other quantum cryptographic primitives, it has so far received little treatment in the context of unclonable primitives. Our work, for the first time, shows the feasibility of revocable primitives, such as revocable encryption and revocable programs, which satisfy multi-copy security in oracle models. This suggests that the stronger notion of multi-copy security is within reach in unclonable cryptography more generally, and could therefore lead to a new research direction in the field.
Last updated:  2025-04-14
The Learning Stabilizers with Noise problem
Alexander Poremba, Yihui Quek, and Peter Shor
Random classical codes have good error correcting properties, and yet they are notoriously hard to decode in practice. Despite many decades of extensive study, the fastest known algorithms still run in exponential time. The Learning Parity with Noise (LPN) problem, which can be seen as the task of decoding a random linear code in the presence of noise, has thus emerged as a prominent hardness assumption with numerous applications in both cryptography and learning theory. Is there a natural quantum analog of the LPN problem? In this work, we introduce the Learning Stabilizers with Noise (LSN) problem, the task of decoding a random stabilizer code in the presence of local depolarizing noise. We give both polynomial-time and exponential-time quantum algorithms for solving LSN in various depolarizing noise regimes, ranging from extremely low noise, to low constant noise rates, and even higher noise rates up to a threshold. Next, we provide concrete evidence that LSN is hard. First, we show that LSN includes LPN as a special case, which suggests that it is at least as hard as its classical counterpart. Second, we prove a worst-case to average-case reduction for variants of LSN. We then ask: what is the computational complexity of solving LSN? Because the task features quantum inputs, its complexity cannot be characterized by traditional complexity classes. Instead, we show that the LSN problem lies in a recently introduced (distributional and oracle) unitary synthesis class. Finally, we identify several applications of our LSN assumption, ranging from the construction of quantum bit commitment schemes to the computational limitations of learning from quantum data.
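Since LSN itself features quantum inputs, only its classical special case LPN is sketched here (toy dimensions, ours): decoding a random binary linear code under Bernoulli noise.

    import numpy as np

    rng = np.random.default_rng(1)
    n, m, tau = 32, 256, 0.125                # secret size, samples, noise rate

    s = rng.integers(0, 2, n)                 # secret
    A = rng.integers(0, 2, (m, n))            # random code / query matrix
    e = (rng.random(m) < tau).astype(int)     # Bernoulli(tau) noise
    b = (A @ s + e) % 2                       # LPN samples (A, b)

    # Without noise, Gaussian elimination recovers s in polynomial time; with
    # noise, the best known algorithms for large n take exponential time.
    print(f"{e.sum()} of {m} samples are noisy")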
Last updated:  2025-04-14
Tighter Provable Security for TreeKEM
Karen Azari and Andreas Ellison
The Messaging Layer Security (MLS) protocol, recently standardized in RFC 9420, aims to provide efficient asynchronous group key establishment with strong security guarantees. The main component of MLS, which is the source of its key efficiency and security properties, is a protocol called TreeKEM. Given that a major vision for the MLS protocol is for it to become the new standard for messaging applications like WhatsApp, Facebook Messenger, Signal, etc., it has the potential to be used by a huge number of users. Thus, it is important to better understand the security of MLS and hence also of TreeKEM. In previous work, TreeKEM was proven adaptively secure in the Random Oracle Model (ROM) with a polynomial loss in security by proving a result about the security of an arbitrary IND-CPA secure public-key encryption scheme in a public-key version of a security game called GSD. Unfortunately, the concrete security guarantees implied by this line of work are not sufficient for parameters used in practice. In this work, we prove a tighter bound for the security of TreeKEM when DHIES is used for public-key encryption; DHIES is currently the only standardized scheme in MLS. Our bound implies meaningful security guarantees even for small practical parameters. We follow the above approach and first introduce a modified version of the public-key GSD game better suited for analyzing TreeKEM. We then provide a simple and detailed proof of security for DHIES in this game in the ROM and achieve a smaller security loss compared to the previous best result. Finally, we state the result on the security of TreeKEM implied by this bound and give an interpretation of the result with protocol parameters used in practice.
Last updated:  2025-04-14
On the Security of Two IKKR-type Code-Based Cryptosystems
Kirill Vedenev
The paper analyzes the security of two recently proposed code-based cryptosystems that employ encryption of the form $y = m G_{\mathrm{pub}} + e E_{\mathrm{pub}}$: the Krouk-Kabatiansky-Tavernier (KKT) cryptosystem and the Lau-Ivanov-Ariffin-Chin-Yap (LIACY) cryptosystem. We demonstrate that the KKT cryptosystem can be reduced to a variant of the McEliece scheme, where a small set of columns in the public generator matrix is replaced with random ones. This reduction implies that the KKT cryptosystem is vulnerable to existing attacks on Wieschebrink's encryption scheme, particularly when Generalized Reed-Solomon (GRS) codes are used. In addition, we present a full key-recovery attack on the LIACY cryptosystem by exploiting its linear-algebraic structure and leveraging distinguishers of subcodes of GRS codes. Our findings reveal critical vulnerabilities in both systems, effectively compromising their security despite their novel designs.
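A shape-only sketch of that encryption map (random matrices over a small prime field; the hidden algebraic structure of $G_{\mathrm{pub}}$ and $E_{\mathrm{pub}}$ in KKT and LIACY, e.g. GRS codes, is deliberately not modeled):

    import numpy as np

    rng = np.random.default_rng(7)
    Fq, k, n, t = 257, 8, 16, 2               # field size, dimension, length, error weight

    G_pub = rng.integers(0, Fq, (k, n))       # public "generator" matrix
    E_pub = rng.integers(0, Fq, (n, n))       # public error-transform matrix

    m = rng.integers(0, Fq, k)                # message
    e = np.zeros(n, dtype=int)                # weight-t error vector
    e[rng.choice(n, t, replace=False)] = rng.integers(1, Fq, t)

    y = (m @ G_pub + e @ E_pub) % Fq          # ciphertext y = m*G_pub + e*E_pub
    print(y)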
Last updated:  2025-04-14
Assessing the Impact of a Variant of the Latest Dual Attack
Kevin Carrier, Charles Meyer-Hilfiger, Yixin Shen, and Jean-Pierre Tillich
The dual attacks on the Learning With Errors (LWE) problem are currently a subject of controversy. In particular, the results of [Matzov, 2022], which claim to significantly lower the security level of CRYSTALS-Kyber, a lattice-based cryptosystem currently being standardized by NIST, are not widely accepted. The analysis behind their attack depends on a series of assumptions that, in certain scenarios, have been shown to contradict established theorems or well-tested heuristics [Ducas, Pulles, CRYPTO 2023]. In this paper, we introduce a new dual lattice attack on LWE, drawing on ideas from coding theory. Our approach revisits the dual attack proposed by [Matzov, 2022], replacing modulus switching with an efficient decoding algorithm. This decoding is achieved by generalizing polar codes over $\mathbb{Z}_{q}$, and we confirm their strong distortion properties through benchmarks. This modification enables a reduction from small-LWE to plain-LWE, with a notable decrease in the secret dimension. Additionally, we replace the enumeration step of the attack by assuming the secret is zero for the portion being enumerated, iterating this assumption over various choices for the enumeration part. We analyze our attack without using the flawed independence assumptions of [Matzov, 2022] and fully back up our analysis with experimental evidence. Lastly, we assess the complexity of our attack on CRYSTALS-Kyber, showing that the security levels of Kyber-512/768/1024 are 3.5/11.9/12.3 bits below the NIST requirements (143/207/272 bits) in the same nearest-neighbor cost model as in the Kyber submission. All in all, the cost of our attack matches, and in some cases even slightly beats, the complexities originally claimed by the attack of [Matzov, 2022].
Last updated:  2025-04-14
Hybrid Fingerprinting for Effective Detection of Cloned Neural Networks
Can Aknesil, Elena Dubrova, Niklas Lindskog, Jakob Sternby, and Håkan Englund
As artificial intelligence plays an increasingly important role in decision-making within critical infrastructure, ensuring the authenticity and integrity of neural networks is crucial. This paper addresses the problem of detecting cloned neural networks. We present a method for identifying clones that employs a combination of metrics from both the information and physical domains: output predictions, probability score vectors, and power traces measured from the device running the neural network during inference. We compare the effectiveness of each metric individually, as well as in combination. Our results show that the effectiveness of both the information and the physical domain metrics is excellent for a clone that is a near replica of the target neural network. Furthermore, both the physical domain metric individually and the hybrid approach outperformed the information domain metrics at detecting clones whose weights were extracted with low accuracy. The presented method offers a practical solution for verifying neural network authenticity and integrity. It is particularly useful in scenarios where neural networks are at risk of model extraction attacks, such as in cloud-based machine learning services.
Last updated:  2025-04-14
Simpler and Faster Pairings from the Montgomery Ladder
Giacomo Pope, Krijn Reijnders, Damien Robert, Alessandro Sferlazza, and Benjamin Smith
We show that Montgomery ladders compute pairings as a by-product, and explain how a small adjustment to the ladder results in simple and efficient algorithms for the Weil and Tate pairing on elliptic curves using cubical arithmetic. We demonstrate the efficiency of the resulting cubical pairings in several applications from isogeny-based cryptography. Cubical pairings are simpler and more performant than pairings computed using Miller's algorithm: we get a speed-up of over 40% for use-cases in SQIsign, and a speed-up of about 7% for use-cases in CSIDH. While these results arise from a deep connection to biextensions and cubical arithmetic, in this article we keep things as concrete (and digestible) as possible. We provide a concise and complete introduction to cubical arithmetic as an appendix.
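For readers who want the ladder itself in front of them, here is a hedged x-only Montgomery ladder sketch over a toy field (curve and parameters are ours, not the paper's); the paper's contribution, extracting pairing values from this loop via cubical arithmetic, is not reproduced here.

    # Hedged x-only Montgomery ladder sketch: toy curve
    # y^2 = x^3 + A*x^2 + x over F_p, projective (X : Z) x-coordinates.
    p = 2**13 - 1
    A = 6
    A24 = (A + 2) * pow(4, -1, p) % p

    def xdbl(X, Z):                       # x([2]P) from x(P)
        t0, t1 = (X + Z) ** 2 % p, (X - Z) ** 2 % p
        t2 = (t0 - t1) % p
        return t0 * t1 % p, t2 * (t1 + A24 * t2) % p

    def xadd(XP, ZP, XQ, ZQ, Xm, Zm):     # x(P+Q) from x(P), x(Q), x(P-Q)
        t0 = (XP + ZP) * (XQ - ZQ) % p
        t1 = (XP - ZP) * (XQ + ZQ) % p
        return Zm * (t0 + t1) ** 2 % p, Xm * (t0 - t1) ** 2 % p

    def ladder(n, x):                     # projective x([n]P), n >= 1
        R0, R1 = (x, 1), xdbl(x, 1)
        for bit in bin(n)[3:]:            # bits after the leading one
            if bit == '0':
                R1 = xadd(*R0, *R1, x, 1)
                R0 = xdbl(*R0)
            else:
                R0 = xadd(*R0, *R1, x, 1)
                R1 = xdbl(*R1)
        return R0

    X, Z = ladder(137, 7)                 # x([137]P) for the point with x = 7
    print(X * pow(Z, -1, p) % p)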
Last updated:  2025-04-14
Switching the Top Slice of the Sandwich with Extra Filling Yields a Stronger Boomerang for NLFSR-based Block Ciphers
Amit Jana, Mostafizar Rahman, Prathamesh Ram, Dhiman Saha, and Goutam Paul
The Boomerang attack was one of the first attempts to visualize a cipher ($E$) as a composition of two sub-ciphers ($E_1\circ E_0$) to devise and exploit two high-probability (say $p,q$) shorter trails instead of relying on a single low-probability (say $s$) longer trail for differential cryptanalysis. The attack generally works whenever $p^2 \cdot q^2 > s$. However, it was later succeeded by the so-called ``sandwich attack'', which splits the cipher into three parts $E'_1\circ E_m \circ E'_0$, adding a middle layer ($E_m$) with distinguishing probability $p^2\cdot r\cdot q^2$. It is primarily the generalization of a body of research in this direction that investigates the so-called switching activity and captures the dependencies and potential incompatibilities of the layers that the middle layer separates. This work revisits the philosophy of the sandwich attack over multiple rounds for NLFSR-based block ciphers and introduces a new method to find high-probability boomerang distinguishers. The approach formalizes boomerang attacks using only the ladder and AND switches. The cipher is treated as $E = E_1 \circ E_m$, a specialized form of the sandwich attack which we call the ``open-sandwich attack''. The distinguishing probability for this attack configuration is $r \cdot q^2$. Using this approach, the study successfully identifies a deterministic boomerang distinguisher for the keyed permutation of the TinyJambu cipher over 320 rounds. Additionally, a 640-round boomerang with a probability of $2^{-22}$ is presented with a 95% success rate. In the related-key setting, we unveil full-round boomerangs with probabilities of $2^{-19}$, $2^{-18}$, and $2^{-12}$ for all three variants, demonstrating a 99% success rate. Similarly, for Katan-32, a more effective related-key boomerang spanning 140 rounds with a probability of $2^{-15}$ is uncovered with a 70% success rate. Further, in the single-key setting, an 84-round boomerang with probability $2^{-30}$ is found with a success rate of 60%. This research deepens the understanding of boomerang attacks, enhancing the toolkit for cryptanalysts to develop efficient and impactful attacks on NLFSR-based block ciphers.
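For concreteness, a hedged numeric comparison of the distinguisher probabilities quoted above (the numbers are ours, chosen only to illustrate the formulas, not taken from the paper):

    # Toy numbers comparing the distinguisher probabilities above.
    p, q, r, s = 2**-6, 2**-6, 2**-4, 2**-30
    print("single long trail    :", s)                 # 2^-30
    print("boomerang p^2*q^2    :", p**2 * q**2)       # 2^-24
    print("sandwich p^2*r*q^2   :", p**2 * r * q**2)   # 2^-28
    print("open-sandwich r*q^2  :", r * q**2)          # 2^-16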
Last updated:  2025-04-14
Reusable Fuzzy Extractors for Low-Entropy Distributions
Ran Canetti, Benjamin Fuller, Omer Paneth, Leonid Reyzin, and Adam Smith
Fuzzy extractors (Dodis et al., Eurocrypt 2004) convert repeated noisy readings of a secret into the same uniformly distributed key. To eliminate noise, they require an initial enrollment phase that takes the first noisy reading of the secret and produces a nonsecret helper string to be used in subsequent readings. Reusable fuzzy extractors (Boyen, CCS 2004) remain secure even when this initial enrollment phase is repeated multiple times with noisy versions of the same secret, producing multiple helper strings (for example, when a single person's biometric is enrolled with multiple unrelated organizations). We construct the first reusable fuzzy extractor that makes no assumptions about how multiple readings of the source are correlated (the only prior construction assumed a very specific, unrealistic class of correlations). The extractor works for binary strings with Hamming noise; it achieves computational security under assumptions on the security of hash functions or in the random oracle model. It is simple and efficient and tolerates near-linear error rates. Our reusable extractor is secure for source distributions of linear min-entropy rate. The construction is also secure for sources with much lower entropy rates--lower than those supported by prior (nonreusable) constructions--assuming that the distribution has some additional structure, namely, that random subsequences of the source have sufficient min-entropy. We show that such structural assumptions are necessary to support low entropy rates. We then explore further how different structural properties of a noisy source can be used to construct fuzzy extractors when the error rates are high, providing a computationally secure and an information-theoretically secure construction for large-alphabet sources.
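For intuition, here is a hedged toy sketch of the classic code-offset construction underlying fuzzy extractors; a repetition code and SHA-256 stand in for the real components, and this basic version is exactly the non-reusable baseline that reusable constructions improve on.

    # Hedged toy sketch of the code-offset construction (Dodis et al.
    # style): a 5x repetition code corrects noise in later readings.
    import hashlib, secrets

    REP = 5

    def enroll(w):                       # w: list of bits (first reading)
        k = [secrets.randbelow(2) for _ in range(len(w) // REP)]
        c = [bit for bit in k for _ in range(REP)]     # repetition encoding
        helper = [a ^ b for a, b in zip(w, c)]         # public helper string
        return helper, hashlib.sha256(bytes(k)).digest()

    def reproduce(w2, helper):           # w2: later, noisy reading
        c2 = [a ^ b for a, b in zip(w2, helper)]
        k = [int(sum(c2[i*REP:(i+1)*REP]) > REP // 2)
             for i in range(len(c2) // REP)]           # majority decode
        return hashlib.sha256(bytes(k)).digest()

    w = [1, 0, 1, 1, 0] * 4
    helper, key = enroll(w)
    w_noisy = list(w); w_noisy[2] ^= 1   # flip one bit of the reading
    assert reproduce(w_noisy, helper) == key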
Last updated:  2025-04-14
SQIAsignHD: SQIsignHD Adaptor Signature
Farzin Renan and Péter Kutas
Adaptor signatures can be viewed as a generalized form of standard digital signature schemes, linking message authentication to the disclosure of a secret value. As a recent cryptographic primitive, they have become essential for blockchain applications, including cryptocurrencies, by reducing on-chain costs, improving fungibility, and enabling off-chain payments in payment-channel networks, payment-channel hubs, and atomic swaps. However, existing adaptor signature constructions are vulnerable to quantum attacks due to Shor's algorithm. In this work, we introduce $\mathsf{SQIAsignHD}$, a new quantum-resistant adaptor signature scheme based on isogenies of supersingular elliptic curves, using SQIsignHD as the underlying signature scheme and exploiting the idea of artificial orientation on the supersingular isogeny Diffie-Hellman key exchange protocol, SIDH, to define the underlying hard relation. We furthermore provide a formal security proof for our proposed scheme.
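To fix ideas, the following is a hedged sketch of the generic adaptor-signature workflow, illustrated with Schnorr over a toy group for concreteness; the paper instantiates this template with isogenies (SQIsignHD), not Schnorr, and all parameters below are toys.

    # Hedged sketch of the adaptor-signature template with toy Schnorr:
    # pre-sign, adapt with the witness, extract the witness from the gap.
    import hashlib, secrets

    p, q, g = 2039, 1019, 4        # toy group: g generates the order-q subgroup

    def H(*parts):
        h = hashlib.sha256("|".join(map(str, parts)).encode()).digest()
        return int.from_bytes(h, "big") % q

    x = secrets.randbelow(q); X = pow(g, x, p)   # signing key pair
    t = secrets.randbelow(q); T = pow(g, t, p)   # (witness, statement) of the hard relation

    # Pre-sign: Schnorr, but with the nonce point shifted by T.
    r = secrets.randbelow(q); R = pow(g, r, p)
    c = H(R * T % p, X, "msg")
    s_pre = (r + c * x) % q        # pre-signature: not yet a valid signature

    # Adapt: knowledge of the witness t completes the signature, and
    # publishing s reveals t to the pre-signer (extraction).
    s = (s_pre + t) % q
    assert (s - s_pre) % q == t
    assert pow(g, s, p) == (R * T % p) * pow(X, c, p) % p   # verification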
Last updated:  2025-04-14
Aether: Approaching the Holy Grail in Asynchronous BFT
Xiaohai Dai, Chaozheng Ding, Julian Loss, and Ling Ren
State-of-the-art asynchronous Byzantine Fault Tolerance (BFT) protocols integrate a partially-synchronous optimistic path. The holy grail in this paradigm is to match the performance of a partially-synchronous protocol in favorable situations and match the performance of a purely asynchronous protocol in unfavorable situations. Several prior works have made progress toward this goal by matching the efficiency of a partially-synchronous protocol in favorable conditions. However, their performance compared to purely asynchronous protocols is reduced when network conditions are unfavorable. To address these shortcomings, a recent work, Abraxas (CCS'23), presents the first optimistic asynchronous BFT protocol that retains stable throughput in all situations. However, Abraxas still incurs very high worst-case latency in unfavorable situations because it is slow at detecting the failure of its optimistic path. Another recent work, ParBFT (CCS'23) guarantees good latency in all situations, but suffers from reduced throughput in unfavorable situations due to its use of extra Asynchronous Binary Agreement (ABA) instances. To approach our holy grail, we propose Aether, which delivers performance comparable to partially-synchronous protocols in favorable situations, and attains performance on par with purely asynchronous protocols in unfavorable situations—in both throughput and latency. Aether also runs the two paths simultaneously. It adopts two-chain HotStuff as the optimistic path, thus achieving high performance in favorable situations. As for the pessimistic path, we introduce a new primitive Dual-functional Byzantine Agreement (DBA), which packs the functionalities of biased ABA and Validated Asynchronous Byzantine Agreement (VABA). Aether runs DBA instances continuously as the pessimistic path. DBA’s ABA functionality quickly detects the optimistic path’s failure, ensuring Aether’s low latency in unfavorable situations. Meanwhile, the VABA functionality continuously produces blocks, maintaining Aether’s high throughput. Additionally, the biased property ensures that blocks committed via the optimistic path are respected by DBA instances, guaranteeing consistency across two paths. We conduct extensive experiments to demonstrate that Aether achieves high throughput and low latency in all situations.
Last updated:  2025-04-14
A Dilithium-like Multisignature in Fully Split Ring and Quantum Random Oracle Model
Shimin Pan, Tsz Hon Yuen, and Siu-Ming Yiu
Multisignature schemes are crucial for secure operations in digital wallets and escrow services within smart contract platforms, particularly in the emerging post-quantum era. Existing post-quantum multisignature constructions either do not address the stringent requirements of the Quantum Random Oracle Model (QROM) or fail to achieve practical efficiency due to suboptimal parameter choices. In this paper, we present a novel Dilithium-based multisignature scheme designed to be secure in the QROM and optimized for practical use. Our scheme operates over the polynomial ring $\mathbb{Z}_q[X]/(X^n+1)$ with $q \equiv 1 \pmod{2n}$, enabling full splitting of the ring and allowing for efficient polynomial arithmetic via the Number Theoretic Transform (NTT). This structure not only ensures post-quantum security but also bridges the gap between theoretical constructs and real-world implementation needs. We further propose a new hardness assumption, termed $\nu$-SelfTargetMSIS, extending SelfTargetMSIS (Eurocrypt 2018) to accommodate multiple challenge targets. We prove its security in the QROM and leverage it to construct a secure and efficient multisignature scheme. Our approach avoids the limitations of previous techniques, reduces security loss in the reduction, and results in a more compact and practical scheme suitable for deployment in post-quantum cryptographic systems.
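A hedged toy sketch of why the fully split ring matters: with $q \equiv 1 \pmod{2n}$, multiplication in $\mathbb{Z}_q[X]/(X^n+1)$ reduces to pointwise products after a negacyclic NTT. The parameters below are toys (Dilithium itself uses $n = 256$, $q = 8380417$), and this is the textbook transform, not the paper's code.

    # Negacyclic NTT multiplication in Z_q[X]/(X^n + 1), q = 1 (mod 2n).
    n, q = 8, 17                     # 17 = 1 (mod 16)

    psi = next(x for x in range(2, q)
               if pow(x, 2 * n, q) == 1 and pow(x, n, q) == q - 1)

    def ntt(a):                      # evaluate a(X) at the roots psi^(2i+1)
        return [sum(a[j] * pow(psi, (2 * i + 1) * j, q) for j in range(n)) % q
                for i in range(n)]

    def mul(a, b):                   # negacyclic product via pointwise mult.
        C = [x * y % q for x, y in zip(ntt(a), ntt(b))]
        n_inv = pow(n, -1, q)
        return [n_inv * sum(C[i] * pow(psi, -(2 * i + 1) * j, q)
                            for i in range(n)) % q for j in range(n)]

    # (1 + X) * (-1 + X) = -1 + X^2 in Z_17[X]/(X^8 + 1)
    assert mul([1, 1, 0, 0, 0, 0, 0, 0],
               [q - 1, 1, 0, 0, 0, 0, 0, 0]) == [q - 1, 0, 1, 0, 0, 0, 0, 0]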
Last updated:  2025-04-14
On the Average Random Probing Model
Julien Béguinot and Loïc Masure
Masking is one of the main countermeasures against side-channel analysis since it relies on provable security. In this context, “provable” means that a security bound can be exhibited for the masked implementation through a theoretical analysis in a given threat model. The main goal in this line of research is therefore to provide the tightest security bound, in the most realistic model, in the most generic way. Yet, all of these objectives cannot be reached together. That is why the masking literature has introduced a large spectrum of threat models and reductions between them, depending on the desired trade-off with respect to these three goals. In this paper, we focus on three threat models, namely the noisy-leakage model (realistic yet hard to work with), the random probing model (unrealistic yet easy to work with), and more particularly a third, intermediate model called average random probing. Average random probing was introduced by Dziembowski et al. at Eurocrypt 2015 in order to exhibit a tight reduction between the noisy-leakage and random probing models, recently proven by Brian et al. at Eurocrypt 2024. This milestone has strong practical consequences, since otherwise the reduction from the noisy-leakage model to the random probing model introduces a prohibitively high constant factor in the security bound, preventing security evaluators from using it in practice. However, we exhibit a gap between the average random probing definitions of Dziembowski et al. (denoted hereafter by DFS-ARP) and Brian et al. (simply denoted by ARP). Whereas any noisy leakage can be tightly reduced to DFS-ARP, we show in this paper that it cannot be tightly reduced to ARP unless extra assumptions are made, e.g., if the noisy leakage is deterministic. Our proof techniques do not involve more tools than the ones used so far in such reductions, namely basic probability facts and known properties of the total variation distance. As a consequence, the reduction from the noisy leakage to the random probing — without high constant factor — remains unproven. This stresses the need to clarify the practical relevance of analyzing the security of masking in the random probing model, since most of the current efforts towards improving the constructions and their security proofs in the random probing model might be hindered by a potentially unavoidable loss in the reduction from more realistic but currently less investigated leakage models.
Last updated:  2025-04-14
$\textsf{PAE}$: Towards More Efficient and BBB-secure AE From a Single Public Permutation
Arghya Bhattacharjee, Ritam Bhaumik, Avijit Dutta, and Eik List
Four recent trends have emerged in the evolution of authenticated encryption schemes: (1) regarding simplicity, the adoption of public permutations as primitives allows for sparing a key schedule and the need for storing round keys; (2) using sums of permutation inputs or outputs has been a well-studied means to achieve higher security beyond the birthday bound; (3) concerning robustness, schemes should provide graceful security degradation if a limited amount of nonces repeats during the lifetime of a key; and (4) Andreeva et al.'s ForkCipher approach can increase the efficiency of a scheme since it can use fewer rounds per output branch compared to full-round primitives. In this work, we improve on the state of the art by combining those aspects for efficient authenticated encryption. We propose $\textsf{PAE}$, an efficient nonce-based AE scheme that employs a public permutation and one call to an XOR-universal hash function. $\textsf{PAE}$ provides $2n/3$-bit security and high throughput by combining forked public-permutation-based variants of $\textsf{nEHtM}$ and an Encrypted Davies-Meyer. Thus, it can use a single, in part round-reduced, public permutation for most operations, spare a key schedule, and guarantee security beyond the birthday bound even under limited nonce reuse.
Last updated:  2025-04-14
HQC Beyond the BSC: Towards Error Structure-Aware Decoding
Marco Baldi, Sebastian Bitzer, Nicholas Lilla, and Paolo Santini
In Hamming Quasi-Cyclic (HQC), one of the finalists in the NIST competition for the standardization of post-quantum cryptography, decryption relies on decoding a noisy codeword through a public error-correcting code. The noise vector has a special form that depends on the secret key (a pair of sparse polynomials). However, the decoder, which is currently employed in HQC, is agnostic to the secret key, operating under the assumption that the error arises from a Binary Symmetric Channel (BSC). In this paper, we demonstrate that this special noise structure can instead be leveraged to develop more powerful decoding strategies. We first study the problem from a coding-theoretic perspective. The current code design, which admits a non-zero decryption failure rate, is close to optimal in the setting of a decoder that is agnostic to the error structure. We show that there are code-decoder pairs with a considerably shorter code length that can guarantee unique decoding by taking the error structure into account. This result is non-constructive, i.e., we do not provide an explicit code construction and it remains open whether efficient decoding is possible. Nevertheless, it highlights the potential that can be tapped by taking the error structure into account. We then argue that, in practice, the matter of decoding in HQC can be related to solving an instance of the noisy syndrome decoding problem, in which the parity-check matrix is constituted by the polynomials in the secret key. We show that, using decoders for Low-Density Parity-Check (LDPC) and Moderate-Density Parity-Check (MDPC) codes, one can significantly reduce the magnitude of the noise and, de facto, also the Decoding Failure Rate (DFR) of the HQC decoder. This preliminary study leaves some open questions and problems. While it shows that decoding in HQC can be improved, the modeling of the DFR gets more complicated: even for the basic decoder we propose in this paper, we have not been able to devise a reliable DFR model. This is likely due to the fact that the decoder structure resembles the iterative nature of LDPC/MDPC decoders, for which devising a reliable DFR estimation is a well-known difficult problem.
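As a hedged sketch of the LDPC/MDPC-style iteration alluded to (random toy matrix and a naive flipping threshold, not HQC's actual parameters), note that convergence is heuristic, which reflects exactly the DFR-modeling difficulty mentioned above:

    # Toy bit-flipping decoder: search for sparse e with H @ e = s (mod 2).
    import numpy as np

    def bit_flip(H, s, max_iters=10):
        e = np.zeros(H.shape[1], dtype=int)
        syndrome = s.copy()
        for _ in range(max_iters):
            if not syndrome.any():
                break
            upc = H.T @ syndrome          # unsatisfied checks per bit
            e ^= (upc >= upc.max()).astype(int)   # flip the worst bits
            syndrome = (s + H @ e) % 2
        return e, not syndrome.any()

    rng = np.random.default_rng(1)
    H = (rng.random((12, 24)) < 0.25).astype(int)   # sparse toy parity checks
    e_true = np.zeros(24, dtype=int); e_true[[3, 17]] = 1
    e_dec, ok = bit_flip(H, (H @ e_true) % 2)
    print("decoded:", ok, "- matches:", np.array_equal(e_dec, e_true))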
Last updated:  2025-04-14
NTWE: A Natural Combination of NTRU and LWE
Joel Gärtner
Lattice-based cryptosystems are some of the primary post-quantum secure alternatives to the asymmetric cryptography that is used today. These lattice-based cryptosystems typically rely on the hardness of some version of either the NTRU or the LWE problem. In this paper, we present the NTWE problem, a natural combination of the NTRU and LWE problems, and construct a new lattice-based cryptosystem based on the hardness of the NTWE problem. As with the NTRU and LWE problems, the NTWE problem naturally corresponds to a problem in a $q$-ary lattice. This allows the hardness of the NTWE problem to be estimated in the same way as it is estimated for the LWE and NTRU problems. We parametrize our cryptosystem from such a hardness estimate and the resulting scheme has performance that is competitive with that of typical lattice-based schemes. In some sense, our NTWE-based cryptosystem can be seen as a less structured and more compact version of a cryptosystem based on the module-NTRU problem. Thus, parameters for our cryptosystem can be selected with the flexibility of a module-LWE-based scheme, while other properties of our system are more similar to those in an NTRU-based system.
Last updated:  2025-04-14
PipeSwap: Forcing the Timely Release of a Secret for Atomic Cross-Chain Swaps
Peifang Ni, Anqi Tian, and Jing Xu
Atomic cross-chain swaps mitigate the interoperability challenges faced by current cryptocurrencies, thereby facilitating inter-currency exchange and trading between distrusting users. Although numerous atomic swap protocols utilizing Hash Timelock Contracts have been deployed and put into practice, they are substantially far from universality due to their inherent dependence on the rich scripting languages supported by the underlying blockchains. The recently proposed Universal Atomic Swaps protocol [IEEE S&P'22] represents a significant advancement in the field of scriptless cross-chain swaps by ingeniously delegating scripting functionalities to cryptographic locking mechanisms, particularly adaptor signatures and timed commitment schemes. However, we identify a new form of attack, termed the double-claiming attack, that leverages these scriptless functionalities to undermine atomicity with high probability. This attack is inherent to the designs adopted by existing scriptless cross-chain swap protocols as well as payment channel networks. We further quantify the severity of this attack based on real-world swap transactions processed by the most widely deployed decentralized exchange platforms, highlighting the critical challenges in designing universal atomic swaps. To address the double-claiming attack while ensuring both security and practical universality, we also present a cross-chain swap protocol called \textup{PipeSwap}. Specifically, PipeSwap protects frozen coins from being double-claimed by a newly designed paradigm of pipelined coin flow that utilizes the techniques of two-hop swap and two-hop refund. In addition to a comprehensive security analysis in the Universal Composability framework, we develop a proof-of-concept implementation of PipeSwap with Schnorr/ECDSA signatures, and conduct extensive experiments to evaluate the overhead. The experimental results show that PipeSwap can be performed in less than 1.7 seconds while maintaining less than 7 KB of communication overhead on commodity machines.
Last updated:  2025-04-14
Biextensions in pairing-based cryptography
Jianming Lin, Damien Robert, Chang-An Zhao, and Yuhao Zheng
Bilinear pairings constitute a cornerstone of public-key cryptography, where advancements in Tate pairings and their efficient variants have emerged as a critical research domain within cryptographic science. Currently, the computation of pairings can be effectively implemented through three distinct algorithmic approaches: Miller's algorithm, the elliptic net algorithm (as developed by Stange), and cubical-based algorithms (as proposed by Damien Robert). Biextensions are the geometric object underlying the arithmetic of pairings, and all three approaches can be seen as different ways to represent biextension elements. In this paper, we revisit the biextension geometric point of view for pairing computation and investigate in more detail the cubical representation for pairing-based cryptography. Utilizing the twisting isomorphism, we derive explicit formulas and algorithmic frameworks for the ate pairing and optimal ate pairing computations. Additionally, we present detailed formulas and introduce an optimized shared cubical ladder algorithm for super-optimal ate pairings. Through concrete computational analyses, we compare the performance of our cubical-based methods with Miller's algorithm on various well-known families of pairing-friendly elliptic curves. Our results demonstrate that the cubical-based algorithm outperforms Miller's algorithm in certain specific situations, establishing its potential as an alternative for pairing computation.
Last updated:  2025-04-14
Self-Orthogonal Minimal Codes From (Vectorial) p-ary Plateaued Functions
René Rodríguez Aldama, Enes Pasalic, Fengrong Zhang, and Yongzhuang Wei
In this article, we derive the weight distribution of linear codes stemming from a subclass of (vectorial) $p$-ary plateaued functions (for a prime $p$), which includes all the explicitly known examples of weakly and non-weakly regular plateaued functions. This construction of linear codes is referred to in the literature as the first generic construction. First, we partition the class of $p$-ary plateaued functions into three classes $\mathscr{C}_1, \mathscr{C}_2,$ and $\mathscr{C}_3$, according to the behavior of their dual function $f^*$. Using these classes, we refine the results presented in a series of articles \cite{Mesnager2017, MesOzSi,Pelen2020, RodPasZhaWei, WeiWangFu}. Namely, we derive the full weight distributions of codes stemming from all $s$-plateaued functions for $n+s$ odd (parametrized by the weight of the dual $wt(f^*)$), whereas for $n+s$ even, the weight distributions are derived from the class of $s$-plateaued functions in $\mathscr{C}_1$ parametrized using two parameters (including $wt(f^*)$ and a related parameter $Z_0$). Additionally, we provide more results on the different weight distributions of codes stemming from functions in subclasses of the three different classes. The exact derivation of such distributions is achieved by using some well-known equations over finite fields to count certain dual preimages. In order to improve the dimension of these codes, we then study the vectorial case, thus providing the weight distributions of a few codes associated to known vectorial plateaued functions and obtaining codes with parameters $[p^n-1,2n, p^n-p^{n-1} - {p}^{(n+s-2)/2}(p-1)]$. For the first time, we provide the full weight distributions of codes from (a subclass of) vectorial $p$-ary plateaued functions. This class includes all known explicit examples in the literature. The obtained codes are minimal and self-orthogonal virtually in all cases. Notably, we show that this is the best one can achieve---there are no $q$-ary self-dual minimal codes for any prime power $q$, except for the ternary tetracode and the binary repetition code.
Last updated:  2025-04-14
Revisiting the attacker's knowledge in inference attacks against Searchable Symmetric Encryption
Marc Damie, Jean-Benoist Leger, Florian Hahn, and Andreas Peter
Encrypted search schemes have been proposed to address growing privacy concerns. However, several leakage-abuse attacks have highlighted some security vulnerabilities. Recent attacks assumed an attacker's knowledge containing data "similar" to the indexed data. However, this vague assumption is barely discussed in the literature: how likely is it for an attacker to obtain "similar enough" data? Our paper provides novel statistical tools, usable on any attack in this setting, to analyze its sensitivity to data similarity. First, we introduce a mathematical model based on statistical estimators to analytically understand the attacker's knowledge and the notion of similarity. Second, we conceive statistical tools to model the influence of the similarity on the attack accuracy. We apply our tools to three existing attacks to answer questions such as: is similarity the only factor influencing the accuracy of a given attack? Third, we show that the enforcement of a maximum index size can make the "similar-data" assumption harder to satisfy. In particular, we propose a statistical method to estimate an appropriate maximum size for a given attack and dataset. For the best-known attack on the Enron dataset, a maximum index size of 200 guarantees (with high probability) that the attack accuracy stays below 5%.
Last updated:  2025-04-14
SoK: FHE-Friendly Symmetric Ciphers and Transciphering
Chao Niu, Benqiang Wei, Zhicong Huang, Zhaomin Yang, Cheng Hong, Meiqin Wang, and Tao Wei
Fully Homomorphic Encryption (FHE) enables computation on encrypted data without decryption, demonstrating significant potential for privacy-preserving applications. However, FHE faces several challenges, one of which is the significant plaintext-to-ciphertext expansion ratio, resulting in high communication overhead between client and server. The transciphering technique can effectively address this problem by first encrypting data with a space-efficient symmetric cipher, then converting symmetric ciphertext to FHE ciphertext without decryption. Numerous FHE-friendly symmetric ciphers and transciphering methods have been developed by researchers, each with unique advantages and limitations. These often require extensive knowledge of both symmetric cryptography and FHE to fully grasp, making comparison and selection among these schemes challenging. To address this, we conduct a comprehensive survey of over 20 FHE-friendly symmetric ciphers and transciphering methods, evaluating them based on criteria such as security level, efficiency, and compatibility. We have designed and executed experiments to benchmark the performance of the feasible combinations of symmetric ciphers and transciphering methods across various application scenarios. Our findings offer insights into achieving efficient transciphering tailored to different task contexts. Additionally, we make our example code available open-source, leveraging state-of-the-art FHE implementations.
Last updated:  2025-04-14
K-Linkable Ring Signatures and Applications in Generalized Voting
Wonseok Choi, Xiangyu Liu, Lirong Xia, and Vassilis Zikas
$\textit{Linkable ring signatures}$ (LRS) allow a user to sign anonymously on behalf of a ring, while maintaining linkability—two signatures from the same signer are publicly identified, i.e., linked. This linkability makes LRS suitable to prevent double-voting in classical, $\textit{plurality}$ voting protocols—each voter casts one vote and the candidate with the most votes wins the election. Several voting scenarios rely on (generalized) rules rather than plurality. For example, in $\textit{ranked voting}$, voters submit a ranking of the candidates, and the outcome is a function of these rankings. Such generalized voting rules are common in social choice theory, and have recently found their way into blockchain governance, e.g., for prioritizing (voting on) proposed (candidate) projects. However, unlike plurality voting, using LRS for voters to sign their votes (rankings) does not guarantee vote privacy as one can observe the rankings of each individual voter, which, depending on the scoring rule, is more information than what the outcome of the election offers. We introduce $k$-$\textit{linkable ring signatures}$ ($k$-LRS) as a primitive for simultaneously achieving anonymity and privacy in generalized voting. A $k$-LRS scheme has the following properties: ($k$-)$\textit{Anonymity}$: a user can sign anonymously (on behalf of the ring) up to $k$ times, so that even an unbounded adversary cannot link his signatures. ($k$-)$\textit{Linkability}$: If any signer signs more than $k$ times, all his signatures are publicly linked $\textit{(individual $k$-linkability)}$; and, any set of $c$ signers cannot generate more than $k\cdot c$ unlinked signatures $\textit{(collective $k$-linkability)}$. We provide two constructions of $k$-LRS: one is from the DDH, and the other is from SIS (hence post-quantum). Finally, we show how $k$-LRS can be applied to a broad range of voting rules, including $\textit{score voting}$, $\textit{multi-voting}$, and $\textit{Borda}$. Our protocols are non-interactive voting—each voter just posts a message on a bulletin board—which highlights the potential of $k$-LRS in blockchain-governance scenarios.
Last updated:  2025-04-13
A Unified Treatment of Anamorphic Encryption
Wonseok Choi, Daniel Collins, Xiangyu Liu, and Vassilis Zikas
Receiver anamorphic encryption (hereafter anamorphic encryption), introduced by Persiano et al. at Eurocrypt 2022, allows for a double message to be symmetrically hidden in a public-key encryption ciphertext via a pre-shared "double key". In anamorphic encryption, confidentiality must be preserved even if the adversary (or the "dictator") has access to all regular keys. It has been the subject of several works since its introduction that explore tweaks and extensions to the core primitive. However, this study has not been systematic, and so disparate security notions have been proposed whose relationships are not clear. Moreover, there are clear gaps in the literature, including in the treatment of chosen-ciphertext attacks. In this work, we conduct a systematic study of receiver anamorphic encryption. We unify existing security notions and propose several new ones, and prove implications and separations between them. Our main findings are as follows. First, we identify gaps in previous security notions against an anamorphic "sender", namely an adversary who is given the double key, and propose three new security notions to bridge these gaps. We also identify several gaps in the treatment of chosen-ciphertext attacks, a setting only very recently considered in anamorphic cryptography (Jaeger and Stracovsky, Asiacrypt 2024). Moreover, observing that no previous construction achieves all desirable security properties in this setting, we propose a suitable construction that does. Finally, we propose several security notions for "asymmetric" anamorphic encryption, and explore the case where the dictator and the anamorphic sender collude.
Last updated:  2025-04-13
Simple and General Counterexamples for Private-Coin Evasive LWE
Nico Döttling, Abhishek Jain, Giulio Malavolta, Surya Mathialagan, and Vinod Vaikuntanathan
We present a simple counterexample to all known variants of the private-coin evasive learning with errors (LWE) assumption. Unlike prior works, our counterexample is direct, it does not use heavy cryptographic machinery (such as obfuscation or witness encryption), and it applies to all variants of the assumption. Our counterexample can be seen as a "zeroizing" attack against evasive LWE, calling into question the soundness of the underlying design philosophy.
Last updated:  2025-04-13
Powerformer: Efficient and High-Accuracy Privacy-Preserving Language Model with Homomorphic Encryption
Dongjin Park, Eunsang Lee, and Joon-Woo Lee
We propose \textit{Powerformer}, an efficient homomorphic encryption (HE)-based privacy-preserving language model (PPLM) designed to reduce computation overhead while maintaining model performance. Powerformer incorporates three key techniques to optimize encrypted computations: 1) A novel distillation technique that replaces softmax and layer normalization (LN) with computationally efficient power and linear functions, ensuring no performance degradation while enabling seamless encrypted computation. 2) A pseudo-sign composite approximation method that accurately approximates GELU and tanh functions with minimal computational overhead. 3) A homomorphic matrix multiplication algorithm specifically optimized for Transformer models, enhancing efficiency in encrypted environments. By integrating these techniques, Powerformer based on the BERT-base model achieves a 45\% reduction in computation time compared to the state-of-the-art HE-based PPLM without any loss in accuracy.
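As a hedged illustration of the second technique's goal (our own least-squares fit with arbitrarily chosen degree and interval, not the paper's pseudo-sign composite method), a low-degree polynomial can stand in for GELU on a bounded range, which is what makes encrypted evaluation cheap, since polynomials need only additions and multiplications:

    # Fit a degree-9 polynomial to GELU on [-4, 4] and report the error.
    import numpy as np

    def gelu(x):
        return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi)
                                      * (x + 0.044715 * x ** 3)))

    xs = np.linspace(-4, 4, 2001)
    coeffs = np.polyfit(xs, gelu(xs), deg=9)   # least-squares approximation
    err = np.abs(np.polyval(coeffs, xs) - gelu(xs)).max()
    print("max |error| on [-4, 4]:", err)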
Last updated:  2025-04-13
(Interleaved) Extended Gabidulin Codes and Their Applications to RQC
Yongcheng Song, Rongmao Chen, Fangguo Zhang, Xinyi Huang, Jian Weng, and Huaxiong Wang
In this paper, we investigate Extended Gabidulin (EG) codes and Interleaved EG (IEG) codes, and enhance the Rank Quasi-Cyclic (RQC) encryption scheme. Our primary contribution is the development of a general decoding algorithm for (I)EG codes, for which we precisely characterize the DFR, bound the decoding capacity, and estimate the decoding complexity. As the core tool, we demonstrate that the Linear Reconstruction (LR) problem derived from the problem of decoding (I)EG codes can be solved probabilistically, enabling (I)EG codes to achieve arbitrarily small DFRs, decode up to the rank Gilbert-Varshamov bound (even close to the minimal distance), and decode with a Welch-Berlekamp-like algorithm. An interesting and important byproduct is the demonstration that decoding interleaved Gabidulin codes can be achieved deterministically by solving the LR problem. We finally apply EG codes to improve RQC (NIST PQC & Asiacrypt 2023). For 128-bit security, our optimized RQC reduces bandwidth by 69% and 34% compared to the original versions, respectively. The scheme also achieves at least a 50% improvement in efficiency and mitigates MM algebraic attacks (as discussed in Eurocrypt 2020, Asiacrypt 2020 & 2023), as EG codes facilitate schemes operating over smaller finite fields. Overall, our scheme outperforms the code-based schemes of the NIST PQC Round 4 submissions, such as HQC, BIKE, and Classic McEliece, in terms of bandwidth. A conservative parameter set still achieves competitive bandwidth.
Last updated:  2025-04-13
Eccfrog512ck2: An Enhanced 512-bit Weierstrass Elliptic Curve
Víctor Duarte Melo and William J Buchanan
Whilst many key exchange and digital signature methods use the NIST P-256 (secp256r1) and secp256k1 curves, there is often a demand for increased security. These curves provide 128-bit security; this can be increased to 256-bit security with NIST P-521, Curve448, and Brainpool-P512. This paper outlines a new curve, Eccfrog512ck2, which provides 256-bit security and enhanced performance over NIST P-521. Along with this, it has side-channel resistance and is designed to avoid weaknesses such as those related to the MOV attack. We show that Eccfrog512ck2 achieves a 61.5% speed-up on scalar multiplication and a 33.3% speed-up on point generation over the NIST P-521 curve.
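As a hedged sketch of the MOV-resistance check mentioned above: one verifies that the embedding degree $k$, the smallest $k$ with $p^k \equiv 1 \pmod{n}$ for field prime $p$ and group order $n$, is large, so the pairing-based transfer of the DLP into $\mathbb{F}_{p^k}$ is infeasible. The numbers below are toys; the real test would use Eccfrog512ck2's actual 512-bit parameters, which are not restated here.

    # Compute the embedding degree k (order of p modulo n).
    def embedding_degree(p, n, k_max=10**4):
        acc = 1
        for k in range(1, k_max + 1):
            acc = acc * p % n
            if acc == 1:
                return k                 # MOV maps the DLP into F_{p^k}
        return None                      # k > k_max: MOV transfer infeasible

    print(embedding_degree(101, 103))    # toy field prime and group order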
Last updated:  2025-04-13
Fast, private and regulated payments in asynchronous networks
Maxence Brugeres, Victor Languille, Petr Kuznetsov, and Hamza Zarfaoui
We propose a decentralized asset-transfer system that enjoys full privacy: no party can learn the details of a transaction, except for its issuer and its recipient. Furthermore, the recipient is only aware of the amount of the transaction. Our system does not rely on consensus or synchrony assumptions, and therefore, it is responsive, since it runs at the actual network speed. Under the hood, every transaction creates a consumable coin equipped with a non-interactive zero-knowledge proof (NIZK) that confirms that the issuer has sufficient funds without revealing any information about her identity, the recipient's identity, or the payment amount. Moreover, we equip our system with a regulatory enforcement mechanism that can be used to regulate transfer limits or restrict specific addresses from sending or receiving funds, while preserving the system's privacy guarantees. Finally, we report on Paxpay, our implementation of Fully Private Asset Transfer (FPAT) that uses the Gnark library for the NIZKs. In our benchmark, Paxpay exhibits better performance than earlier proposals that either ensure only partial privacy, require some kind of network synchrony or do not implement regulation features. Our system thus reconciles privacy, responsiveness, regulation enforcement and performance.
Last updated:  2025-04-13
Reusable Dynamic Multi-Party Homomorphic Encryption
Jung Hee Cheon, Hyeongmin Choe, Seunghong Kim, and Yongdong Yeo
Homomorphic Encryption (HE) is a promising primitive for evaluating arbitrary circuits while preserving the user's privacy. We investigate how to use HE in the multi-party setting, where data is encrypted under several distinct keys. One may use Multi-Key Homomorphic Encryption (MKHE) in this setting, but it has space/computation overhead of $\mathcal O(n)$ for $n$ users, which makes it impractical as $n$ grows large. Multi-Party Homomorphic Encryption (MPHE), the other homomorphic encryption primitive for the multi-party setting, has $\mathcal O(1)$ space/computation overhead, but it is limited in terms of ciphertext reusability and dynamicity: ciphertexts are encrypted for a fixed group of parties and cannot be reused for other purposes, and additional parties cannot join the computation dynamically. Contrary to MKHE, where the secret key owners engage only in the decryption phase, we consider a more relaxed situation where the secret key owners can communicate before the computation. In that case, we can reduce the size of a ciphertext and the evaluation complexity from $\mathcal O(n)$ to $\mathcal O(1)$, as in a single-key HE setting. We call this primitive {\em Reusable Dynamic Multi-Party Homomorphic Encryption}, which is more suitable for real-world scenarios. We show that 1) the procedures before the computation can be done in very few rounds of communication, 2) the evaluation/space complexities are independent of the number of users, and 3) the functionalities are as efficient as MKHE, through asymptotic analysis and implementation.
Last updated:  2025-04-13
Aurora: Leaderless State-Machine Replication with High Throughput
Hao Lu, Jian Liu, and Kui Ren
State-machine replication (SMR) allows a deterministic state machine to be replicated across a set of replicas and to handle clients' requests as a single machine. Most existing SMR protocols are leader-based, requiring a leader to order requests and coordinate the protocol. This design places a disproportionately high load on the leader, inevitably impairing scalability. If the leader fails, a complex and bug-prone fail-over protocol is needed to switch to a new leader. An adversary can also exploit the fail-over protocol to slow down the protocol. In this paper, we propose a crash-fault tolerant SMR named Aurora, with the following properties: • Leaderless: it does not require a leader and hence completely eliminates the fail-over protocol. • Scalable: it can scale up to 11 replicas. • Robust: it behaves well even under a poor network connection. We provide a full-fledged implementation of Aurora and systematically evaluate its performance. Our benchmark results show that Aurora achieves a throughput of around two million Transactions Per Second (TPS), up to 8.7X higher than the state-of-the-art leaderless SMR.
Last updated:  2025-04-13
Vector Commitment Design, Analysis, and Applications: A Survey
Vir Pathak, Sushmita Ruj, and Ron van der Meyden
Due to their widespread applications in decentralized and privacy preserving technologies, commitment schemes have become increasingly important cryptographic primitives. With a wide variety of applications, many new constructions have been proposed, each enjoying different features and security guarantees. In this paper, we systematize the designs, features, properties, and applications of vector commitments (VCs). We define vector, polynomial, and functional commitments and we discuss the relationships shared between these types of commitment schemes. We first provide an overview of the definitions of the commitment schemes we will consider, as well as their security notions and various properties they can have. We proceed to compare popular constructions, taking into account the properties each one enjoys, their proof/update information sizes, and their proof/commitment complexities. We also consider their effectiveness in various decentralized and privacy preserving applications. Finally, we conclude by discussing some potential directions for future work.
Last updated:  2025-04-12
How to Recover the Full Plaintext of XCB
Peng Wang, Shuping Mao, Ruozhou Xu, Jiwu Jing, and Yuewu Wang
XCB, a tweakable enciphering mode, is part of IEEE Std 1619.2 for shared storage media. We show, through three plaintext-recovery attacks, that all versions of XCB are insecure. A key observation is that XCB behaves like an LRW1-type tweakable block cipher for single-block messages, which lacks CCA security. The first attack targets one-block XCB, using three queries to recover the plaintext. The second one requires four queries to recover all but one block of the plaintext. The last one requires seven queries to recover the \emph{full} plaintext. The first attack applies to any scheme that follows the XCB structure, whereas the latter two attacks work on all versions of XCB, exploiting the separable property of the underlying universal hash function. To address these flaws, we propose the XCB* structure, an improved version of XCB that adds only two XOR operations. We prove that XCB* is STPRP-secure when using AXU hash functions, SPRPs, and a secure random-IV-based stream cipher.
Last updated:  2025-04-12
PAC-Private Algorithms
Mayuri Sridhar, Hanshen Xiao, and Srinivas Devadas
Provable privacy typically requires involved analysis and is often associated with unacceptable accuracy loss. While many empirical verification or approximation methods, such as Membership Inference Attacks (MIA) and Differential Privacy Auditing (DPA), have been proposed, these do not offer rigorous privacy guarantees. In this paper, we apply recently-proposed Probably Approximately Correct (PAC) Privacy to give formal, mechanized, simulation-based proofs for a range of practical, black-box algorithms: K-Means, Support Vector Machines (SVM), Principal Component Analysis (PCA) and Random Forests. To provide these proofs, we present a new simulation algorithm that efficiently determines anisotropic noise perturbation required for any given level of privacy. We provide a proof of correctness for this algorithm and demonstrate that anisotropic noise has substantive benefits over isotropic noise. Stable algorithms are easier to privatize, and we demonstrate privacy amplification resulting from introducing regularization in these algorithms; meaningful privacy guarantees are obtained with small losses in accuracy. We propose new techniques in order to reduce instability in algorithmic output and convert intractable geometric stability verification into efficient deterministic stability verification. Thorough experiments are included, and we validate our provable adversarial inference hardness against state-of-the-art empirical attacks.
Last updated:  2025-04-12
Adaptive Robustness of Hypergrid Johnson-Lindenstrauss
Andrej Bogdanov, Alon Rosen, Neekon Vafa, and Vinod Vaikuntanathan
Johnson and Lindenstrauss (Contemporary Mathematics, 1984) showed that for $n > m$, a scaled random projection $\mathbf{A}$ from $\mathbb{R}^n$ to $\mathbb{R}^m$ is an approximate isometry on any set $S$ of size at most exponential in $m$. If $S$ is larger, however, its points can contract arbitrarily under $\mathbf{A}$. In particular, the hypergrid $([-B, B] \cap \mathbb{Z})^n$ is expected to contain a point that is contracted by a factor of $\kappa_{\mathsf{stat}} = \Theta(B)^{-1/\alpha}$, where $\alpha = m/n$. We give evidence that finding such a point exhibits a statistical-computational gap precisely up to $\kappa_{\mathsf{comp}} = \widetilde{\Theta}(\sqrt{\alpha}/B)$. On the algorithmic side, we design an online algorithm achieving $\kappa_{\mathsf{comp}}$, inspired by a discrepancy minimization algorithm of Bansal and Spencer (Random Structures \& Algorithms, 2020). On the hardness side, we show evidence via a multiple overlap gap property (mOGP), which in particular captures online algorithms; and a reduction-based lower bound, which shows hardness under standard worst-case lattice assumptions. As a cryptographic application, we show that the rounded Johnson-Lindenstrauss embedding is a robust property-preserving hash function (Boyle, Lavigne and Vaikuntanathan, TCC 2019) on the hypergrid for the Euclidean metric in the computationally hard regime. Such hash functions compress data while preserving $\ell_2$ distances between inputs up to some distortion factor, with the guarantee that even knowing the hash function, no computationally bounded adversary can find any pair of points that violates the distortion bound.
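A hedged empirical sketch of the underlying phenomenon (toy dimensions, naive random search): randomly sampled hypergrid points are only mildly contracted by a scaled random projection, whereas the paper's point is that adaptive algorithms find far more contracted points, up to the stated computational threshold.

    # Search the hypergrid for a point contracted by a random projection.
    import numpy as np
    rng = np.random.default_rng(0)

    n, m, B = 64, 16, 3
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # scaled JL projection

    worst = 1.0
    for _ in range(20000):                         # naive random search
        x = rng.integers(-B, B + 1, size=n)
        if x.any():
            worst = min(worst, np.linalg.norm(A @ x) / np.linalg.norm(x))
    print("most contracted ratio found:", worst)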
Last updated:  2025-04-12
MProve-Nova: A Privacy-Preserving Proof of Reserves Protocol for Monero
Varun Thakore and Saravanan Vijayakumaran
A proof of reserves (PoR) protocol enables a cryptocurrency exchange to prove to its users that it owns a certain amount of coins, as a first step towards proving that it is solvent. We present the design, implementation, and security analysis of MProve-Nova, a PoR protocol for Monero that leverages the Nova recursive SNARK to achieve two firsts (without requiring any trusted setup). It is the first Monero PoR protocol that reveals only the number of outputs owned by an exchange; no other information about the outputs or their key images is revealed. It is also the first Monero PoR protocol where the proof size and proof verification time are constant, i.e. they are independent of the number of outputs on the Monero blockchain and the number of outputs owned by the exchange. To achieve constant verification times, MProve-Nova requires a pre-processing step which creates two Merkle trees from all the outputs and key images on the Monero blockchain. MProve-Nova consists of two Nova-based subprotocols, a reserves commitment generator (RCG) protocol used to compute a commitment to the total reserves owned by an exchange and a non-collusion (NC) protocol used to prove non-collusion between two exchanges. For the RCG protocol, we observed proof sizes of about 28 KB and verification times of 4.3 seconds. For the NC protocol, we observed proof sizes of about 24 KB and verification times of 0.2 seconds. Proving times for both protocols increase linearly with the number of outputs owned by the exchange but remain independent of the number of outputs on the Monero blockchain. On average, the RCG protocol required about 42 minutes per 1000 outputs and the NC protocol required about 5 minutes per 1000 outputs.
Last updated:  2025-04-12
Spilling-Cascade: an Optimal PKE Combiner for KEM Hybridization
Céline Chevalier, Guirec Lebrun, and Ange Martinelli
Hybrid Post-Quantum cryptography is a cautious approach that guards against the threat posed by the quantum computer through the simultaneous use of Post-Quantum (PQ) and classical (i.e., pre-quantum) cryptosystems, so that security is preserved should the post-quantum schemes in use prove insecure. Regarding the hybridization of Key Encapsulation Mechanisms (KEMs), most recent studies focus on safely combining the symmetric keys output by a parallel execution of classical and Post-Quantum KEMs. While this architecture is straightforward, it appears to lack bandwidth optimization. Hence, we propose a novel method for hybridizing several KEMs more effectively, by combining the underlying Public-Key Encryption schemes (PKEs) in an innovative variant of the cascade composition that we call “spilling-cascade”, before turning the hybrid PKE into a KEM with a FO transformation. We prove that this architecture constitutes a robust combiner for encryption schemes up to IND-CPA security, which makes it possible to eventually generate an IND-CCA-secure KEM. In terms of performance, our spilling-cascade scheme has a better communication cost than the commonly used parallel combination, with a bandwidth gain for its ciphertext that ranges from 2.8% to 13% compared to the latter, depending on the number and the characteristics of the PKEs that are combined. Moreover, we prove that for given PKEs to hybridize, the ciphertext communication cost of the spilling-cascade is optimal.
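For context, here is a hedged sketch of the baseline parallel combiner that the spilling-cascade improves upon. The "KEM" below is an opaque placeholder with the right shape only, not a real or secure KEM; the point is the key derivation over both shares and the additive ciphertext cost.

    # Baseline parallel KEM combiner: secure if *either* KEM is secure,
    # but the ciphertext is the concatenation of both ciphertexts.
    import hashlib, secrets

    def toy_kem_encaps(pk):
        k = secrets.token_bytes(32)                  # session key share
        ct = bytes(a ^ b for a, b in zip(k, pk))     # placeholder ciphertext
        return ct, k

    def hybrid_encaps(pk_classical, pk_pq):
        ct1, k1 = toy_kem_encaps(pk_classical)
        ct2, k2 = toy_kem_encaps(pk_pq)
        k = hashlib.sha256(k1 + k2 + ct1 + ct2).digest()
        return (ct1, ct2), k                         # cost: |ct1| + |ct2|

    pk1, pk2 = secrets.token_bytes(32), secrets.token_bytes(32)
    (ct1, ct2), k = hybrid_encaps(pk1, pk2)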
Last updated:  2025-04-12
HyperPianist: Pianist with Linear-Time Prover and Logarithmic Communication Cost
Chongrong Li, Pengfei Zhu, Yun Li, Cheng Hong, Wenjie Qu, and Jiaheng Zhang
Recent years have seen great improvements in zero-knowledge proofs (ZKPs). Among them, zero-knowledge SNARKs are notable for their compact and efficiently-verifiable proofs, but suffer from high prover costs. Wu et al. (Usenix Security 2018) proposed to distribute the proving task across multiple machines, and achieved significant improvements in proving time. However, existing distributed ZKP systems still have quasi-linear prover cost, and may incur a communication cost that is linear in circuit size. In this paper, we introduce $\mathsf{HyperPianist}$. Inspired by the state-of-the-art distributed ZKP system Pianist (Liu et al., S&P 2024) and the multivariate proof system HyperPlonk (Chen et al., EUROCRYPT 2023), we design a distributed multivariate polynomial interactive oracle proof (PIOP) system with a linear-time prover cost and logarithmic communication cost. Unlike Pianist, $\mathsf{HyperPianist}$ incurs no extra overhead in prover time or communication when applied to general (non-data-parallel) circuits. To instantiate the PIOP system, we adapt two additively-homomorphic multivariate polynomial commitment schemes, multivariate KZG (Papamanthou et al., TCC 2013) and Dory (Lee et al., TCC 2021), into the distributed setting, and get $\mathsf{HyperPianist^K}$ and $\mathsf{HyperPianist^D}$ respectively. Both systems have linear prover complexity and logarithmic communication cost; furthermore, $\mathsf{HyperPianist^D}$ requires no trusted setup. We also propose $\mathsf{HyperPianist+}$, incorporating an optimized lookup argument based on Lasso (Setty et al., EUROCRYPT 2024) with lower prover cost. Experiments demonstrate $\mathsf{HyperPianist^K}$ and $\mathsf{HyperPianist^D}$ achieve speedups of $63.1\times$ and $40.2\times$ over HyperPlonk with 32 distributed machines. Compared to Pianist, $\mathsf{HyperPianist^K}$ can be $2.9\times$ and $4.6\times$ as fast and $\mathsf{HyperPianist^D}$ can be $2.4\times$ and $3.8\times$ as fast, on vanilla gates and custom gates respectively. With layered circuits, $\mathsf{HyperPianist^K}$ is up to $5.9\times$ as fast on custom gates, and $\mathsf{HyperPianist^D}$ achieves a $4.7\times$ speedup.
Last updated:  2025-04-12
Malleable SNARKs and Their Applications
Suvradip Chakraborty, Dennis Hofheinz, Roman Langrehr, Jesper Buus Nielsen, Christoph Striecks, and Daniele Venturi
Succinct non-interactive arguments of knowledge (SNARKs) are variants of non-interactive zero-knowledge proofs (NIZKs) in which complex statements can be proven in a compact way. SNARKs have had tremendous impact in several areas of cryptography, including verifiable computing, blockchains, and anonymous communication. A recurring concept in many applications is that of recursive SNARKs, in which a proof references a previous proof to show an evolved statement. In this work, we investigate malleable SNARKs, a generalization of this concept of recursion. An adaptation of the existing concept of malleable NIZKs, malleable SNARKs allow one to modify SNARK proofs to show related statements, such that mauled proofs are indistinguishable from “properly generated” fresh proofs of the related statement. We show how to instantiate malleable SNARKs for universal languages and relations, and give a number of applications: the first post-quantum RCCA-secure rerandomizable and updatable encryption schemes, a generic construction of reverse firewalls, and an unlinkable (i.e., computation-hiding) targeted malleable homomorphic encryption scheme. Technically, our malleable SNARK construction relies on recursive proofs, but with a twist: in order to support the strong indistinguishability properties of mauled and fresh SNARK proofs, we need to allow an unbounded recursion depth. To still allow for a reasonable notion of extractability in this setting (and in particular to guarantee that extraction eventually finishes with a “proper” witness that does not refer to a previous SNARK proof), we rely on a new and generic computational primitive called adversarial one-way function (AOWF) that may be of independent interest. We give an AOWF candidate and prove it secure in the random oracle model.