All papers (23807 results)

Last updated:  2025-03-07
Improved Cryptanalysis of ChaCha: Beating PNBs with Bit Puncturing
Antonio Flórez-Gutiérrez and Yosuke Todo
ChaCha is a widely deployed stream cipher and one of the most important symmetric primitives. Due to this practical importance, many cryptanalytic results have been proposed. Until now, Probabilistic Neutral Bits (PNBs) have been the most successful: given a differential-linear distinguisher, PNBs form a key-recovery technique relying on an experimental backward correlation obtained through black-box analysis. A careful theoretical analysis exploiting the round function design may find a better attack and improve our understanding, but the complicated nature of the ARX structure makes such analysis difficult. We propose a theoretical methodology inspired by bit puncturing, which was recently proposed at Eurocrypt 2024. Our method has a theoretical foundation and is thus fundamentally different from PNBs, to which it is the first effective alternative. As a result, we significantly improve the attack complexities for 6-, 7-, and 7.5-round ChaCha. The 7-round attack is about $2^{40}$ times faster than the previous best. Furthermore, we propose the first 7.5-round attack with a non-negligible advantage over an exhaustive search.
Last updated:  2025-03-06
The Algebraic One-More MISIS Problem and Applications to Threshold Signatures
Chenzhi Zhu and Stefano Tessaro
This paper introduces a new one-more computational problem for lattice-based cryptography, which we refer to as the Algebraic One-More MISIS problem, or AOM-MISIS for short. It is a modification of the AOM-MLWE problem recently introduced by Espitau et al. (CRYPTO ’24) to prove security of new two-round threshold signatures. Our first main result establishes that the hardness of AOM-MISIS is implied by the hardness of MSIS and MLWE (with suitable parameters), both of which are standard assumptions for efficient lattice-based cryptography. We prove this result via a new generalization of a technique by Tessaro and Zhu (EUROCRYPT ’23) used to prove hardness of a one-more problem for linear hash functions assuming their collision resistance, for which no clear lattice analogue was known. Since the hardness of AOM-MISIS implies the hardness of AOM-MLWE, our result resolves the main open question from the work of Espitau et al., who only provided a similar result for AOM-MLWE restricted to selective adversaries, a class which does not cover the use for threshold signatures. Furthermore, we show that our novel formulation of AOM-MISIS offers a better interface to develop tighter security bounds for state-of-the-art two-round threshold signatures. We exemplify this by providing new proofs of security, assuming the hardness of MLWE and MSIS, for two threshold signatures, the one proposed in the same work by Espitau et al., as well as a recent construction by Chairattana-Apirom et al. (ASIACRYPT 2024). For the former scheme, we also show that it satisfies the strongest security notion (TS-UF-4) in the security hierarchy of Bellare et al. (CRYPTO ’22), as a result of independent interest.
Last updated:  2025-03-06
Constant-Time Code: The Pessimist Case
Thomas Pornin
This note discusses the problem of writing cryptographic implementations in software, free of timing-based side-channels, and the many ways in which that endeavour can fail in practice. It takes a pessimistic view: it highlights why such failures are expected to become more common, and how constant-time coding is, or will soon become, infeasible in all generality.
Last updated:  2025-03-06
Fine-Grained Verifier NIZK and Its Applications
Shuai Han, Shengli Liu, Xiangyu Liu, and Dawu Gu
In this paper, we propose a new type of non-interactive zero-knowledge (NIZK), called Fine-grained Verifier NIZK (FV-NIZK), which provides more flexible and more fine-grained verifiability of proofs than standard NIZK that supports public verifiability and designated-verifier NIZK (DV-NIZK) that supports private verifiability. FV-NIZK has two statistically (or computationally) equivalent verification approaches:
--- a master verification using the master secret key $msk$;
--- a fine-grained verification using a derived secret key $sk_d$, which is derived from $msk$ w.r.t. $d$ (which may stand for user identity, email address, vector, etc.).
We require unbounded simulation soundness (USS) of FV-NIZK to hold even if an adversary obtains derived secret keys $sk_d$ for $d$ of its choice, and define proof pseudorandomness, which stipulates the pseudorandomness of proofs for adversaries that are not given any secret key. We present two instantiations of FV-NIZK for linear subspace languages, based on the matrix decisional Diffie-Hellman (MDDH) assumption. One of the FV-NIZK instantiations is pairing-free and achieves almost tight USS and proof pseudorandomness. We also adapt the two instantiations to support unbounded fine-grained secret key delegations. We illustrate the usefulness of FV-NIZK by showing two applications, obtaining the following pairing-free schemes:
--- the first almost tightly multi-challenge CCA (mCCA)-secure inner-product functional encryption (IPFE) scheme without pairings;
--- the first public-key encryption (PKE) scheme that reconciles the inherent contradiction between public verifiability and anonymity.
We formalize such PKE as Fine-grained Verifiable PKE (FV-PKE), which derives a special key from the decryption secret key such that those who obtain the derived key can check the validity of ciphertexts but lose anonymity (CCA-security still holds for them), while those without the derived key cannot perform the validity check but retain anonymity. Our FV-PKE scheme achieves almost tight mCCA-security for adversaries who obtain the derived keys, and almost tight ciphertext pseudorandomness (thus anonymity) for those who do not get any derived key.
Last updated:  2025-03-06
MIDAS: an End-to-end CAD Framework for Automating Combinational Logic Locking
Akashdeep Saha, Siddhartha Chowdhury, Rajat Subhra Chakraborty, and Debdeep Mukhopadhyay
Logic locking has surfaced as a notable safeguard against diverse hazards that pose a risk to the integrated circuit (IC) supply chain. Existing literature on logic locking largely encompasses the art of proposing new constructions, on the one hand, and unearthing weaknesses in such algorithms on the other. Somehow, in this race of make and break, the emphasis on automating the adoption of such techniques on real-life circuits has been rather limited. For the first time, we present a generic end-to-end combinational logic locking CAD framework, MIDAS. This framework analyzes circuit netlists and generates locked netlists. Due to its generic circuit analysis, it bridges the gap, integrates diverse logic locking techniques, and offers scope for integrating potential future ones. MIDAS's efficacy has been verified through its application on ISCAS'85 and ISCAS'99 benchmark circuits, locked using six different schemes: EPIC, Anti-SAT, SFLL-HD, SFLL-fault, CAS-Lock, and LoPher. MIDAS minimizes the hardware overhead of the otherwise resource-intensive locking technique LoPher by extracting an influential portion of the circuit to lock and utilizing a simple fitness function. We also assess the overhead increase incurred by the aforementioned locking methods when the identification of influential nodes within the locked netlists is made harder. Finally, we evaluate MIDAS by selectively locking parts of a commercially-designed open-source RISC-V core.
Last updated:  2025-03-07
Black-Box (and Fast) Non-Malleable Zero Knowledge
Vincenzo Botta, Michele Ciampi, Emmanuela Orsini, Luisa Siniscalchi, and Ivan Visconti
Non-malleable zero-knowledge (NMZK), originally introduced in the seminal work of Dolev, Dwork, and Naor (STOC ’91), is a fundamental concept for modeling the security of proof systems against man-in-the-middle attacks. Recently, Kim, Liang, and Pandey (CRYPTO 2022) presented the first efficient constant-round NMZK argument system based solely on symmetric-key cryptography. Their construction relies on a non-black-box use of the involved cryptographic primitives and on multiple executions of Ligero (CCS 2017) that affect both the round complexity and the computational efficiency of their protocol. Their work left open the natural and important challenge of achieving NMZK using the underlying primitives only in a black-box fashion (regardless of the number of rounds and actual efficiency). In this paper, we solve the aforementioned open problem by presenting the first NMZK argument system based on the black-box use of cryptographic primitives. Our work is optimal in the use of primitives since we only need one-way functions, and asymptotically optimal in the number of rounds since we only require a constant number of rounds. Our argument system is non-malleable with respect to the strong "simulation-extractability" flavor of non-malleability. Furthermore, we also show that our construction can be efficiently instantiated in Minicrypt, significantly improving upon the work of Kim et al., both in terms of round complexity and computational efficiency.
Last updated:  2025-03-06
Commitment Schemes Based on Module-LIP
Hengyi Luo, Kaijie Jiang, Yanbin Pan, and Anyu Wang
Recently, Jiang et al. (EUROCRYPT 2025) proposed a universal framework for constructing commitment schemes using group actions, and instantiated it with the Lattice Isomorphism Problem (LIP). This paper constructs an instantiation of this framework based on module-LIP. More precisely, we first present a reduction from $\mathcal{O}_{\mathbb{L}}^2$-LIP to $\mathcal{O}_{\mathbb{L}}^2$-LAP. Then we develop a re-randomization algorithm based on the self-reduction framework of module-LIP (Ducas et al., ASIACRYPT 2022), adapting it to this framework to construct commitment schemes.
Last updated:  2025-03-06
Non-interactive Anonymous Tokens with Private Metadata Bit
Foteini Baldimtsi, Lucjan Hanzlik, Quan Nguyen, and Aayush Yadav
Anonymous tokens with private metadata bit (ATPM) have received increased interest as a method for anonymous client authentication while also embedding trust signals that are only readable by the authority who holds the issuance secret key and nobody else. A drawback of all existing ATPM constructions is that they require client-issuer interaction during the issuance process. In this work, we build the first non-interactive anonymous tokens (NIAT) with private metadata bit, inspired by the recent work of Hanzlik (Eurocrypt '23) on non-interactive blind signatures. We discuss how the non-interaction property during the issuance process allows for more efficient issuance protocols that avoid the need for online signing. We construct an efficient NIAT scheme based on Structure-preserving Signatures on Equivalence Classes (SPS-EQ) and experimentally evaluate its performance. We also present an extension to our NIAT construction that allows the identification of clients who attempt to double-spend (i.e., present the same token twice).
Last updated:  2025-03-06
Enhanced CKKS Bootstrapping with Generalized Polynomial Composites Approximation
Seonhong Min, Joon-woo Lee, and Yongsoo Song
Bootstrapping in approximate homomorphic encryption involves evaluating the modular reduction function. Traditional methods decompose the modular reduction function into three components: scaled cosine, double-angle formula, and inverse sine. While these approaches offer a strong trade-off between computational cost and level consumption, they lack flexibility in parameterization. In this work, we propose a new method to decompose the modular reduction function with improved parameterization, generalizing prior trigonometric approaches. Numerical experiments demonstrate that our method achieves near-optimal approximation errors. Additionally, we introduce a technique that integrates the rescaling operation into matrix operations during bootstrapping, further reducing computational overhead.
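For orientation, one standard way the three traditional components fit together (a common textbook presentation in our own notation; the paper's new decomposition generalizes it) is
\[
[t]_q \;=\; \frac{q}{2\pi}\,\arcsin\bigl(\sin(2\pi t/q)\bigr) \qquad \text{whenever } |[t]_q| < q/4,
\]
where $\sin(2\pi t/q)$ is obtained by evaluating a low-degree polynomial approximation of the scaled cosine $\cos\bigl(\tfrac{2\pi}{2^r}(\tfrac{t}{q}-\tfrac{1}{4})\bigr)$ and then applying the double-angle formula $\cos 2\theta = 2\cos^2\theta - 1$ a total of $r$ times; the inverse-sine polynomial supplies the final correction.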
Last updated:  2025-03-05
On Improved Cryptanalytic Results against ChaCha for Reduced Rounds ≥ 7
Nitin Kumar Sharma, Sabyasachi Dey, Santanu Sarkar, and Subhamoy Maitra
In this paper, we analyze the subtle issues of complexity estimates related to state-of-the-art cryptanalytic efforts on ChaCha. In this regard, we demonstrate that the currently best-known cryptanalytic result on $7$-round ChaCha with time $2^{189.7}$ and data $2^{102.63}$ [Xu et al., ToSC 2024] can be estimated as $2^{178.12}$ for time and $2^{101.09}$ for data complexity. We improve the best-known result for $7.25$ rounds by obtaining an improved set of Probabilistic Neutral Bits and considering our revised estimation. Our result with time complexity $2^{212.43}$ and data complexity $2^{100.56}$ improves the result of Xu et al., who achieved time and data complexities of $2^{223.9}$ and $2^{100.80}$, respectively. For both $7$ and $7.25$ rounds, we show an improvement of the order of $2^{11}$ in the time complexity. For $7.5$ rounds, we improve the result of Dey [IEEE-IT 2024], which reports time and data complexities of $2^{255.24}$ and $2^{32.64}$, respectively. By applying the formula of the same paper and incorporating additional PNBs, we obtain improved time and data complexities of $2^{253.23}$ and $2^{34.47}$, respectively. Thus, this paper describes the currently best-known cryptanalytic results against reduced-round ChaCha. Our results do not affect the security claims of the complete algorithm with 20 rounds. Also, we provide a rebuttal of the work by Wang et al. (ePrint) and analyze their claim about the error in the ``Divide-and-Conquer'' approach.
Last updated:  2025-03-05
BUFFing Threshold Signature Schemes
Marc Fischlin, Aikaterini Mitrokotsa, and Jenit Tomy
We explore advanced security notions for threshold signature schemes, focusing on Beyond UnForgeability Features (BUFF), introduced by Cremers et al. (S&P’21) in the non-threshold setting. The BUFF properties protect against attacks based on maliciously chosen keys, e.g., expropriating a message-signature pair under a new public key (called exclusive ownership). We first formalize these notions in the threshold setting and examine their relationships. Notably, unlike regular signature schemes, the hierarchy of variants of exclusive ownership notions only holds for threshold schemes if they are also robust. We then present a generic compiler that transforms any threshold signature scheme into one satisfying exclusive ownership and message-bound signature properties with minimal overhead. Furthermore, we modify the threshold BLS signature scheme to achieve these additional properties without increasing the signature size. Lastly, we identify specific structures in threshold signature schemes where BUFF properties can be naturally extended from the underlying standard signature scheme, and we analyze and prove the security properties for some of the existing threshold schemes.
Last updated:  2025-03-05
Exploring How to Authenticate Application Messages in MLS: More Efficient, Post-Quantum, and Anonymous Blocklistable
Keitaro Hashimoto, Shuichi Katsumata, and Guillermo Pascual-Perez
The Messaging Layer Security (MLS) protocol has recently been standardized by the IETF. MLS is a scalable secure group messaging protocol expected to run more efficiently than the Signal protocol at scale, while offering a similar level of strong security. Even though MLS has undergone extensive examination by researchers, the majority of the works have focused on confidentiality. In this work, we focus on the authenticity of the application messages exchanged in MLS. Currently, MLS authenticates every application message with an EdDSA signature; while manageable, this overhead is greatly amplified in the post-quantum setting, as the NIST-recommended Dilithium signature results in a 40x increase in size. We view this as an invitation to explore new authentication modes that can be used instead. We start by taking a systematic view on how application messages are authenticated in MLS and categorize authenticity into four different security notions. We then propose several authentication modes, offering a range of different efficiency and security profiles. For instance, in one of our modes, COSMOS++, we replace signatures with one-time tokens and a MAC tag, offering roughly a 75x saving in the post-quantum communication overhead. While this comes at the cost of weaker security compared to the authentication mode used by MLS, the lower communication overhead seems to make it a worthwhile trade-off.
Last updated:  2025-03-05
A Note on the Blindness of the Scheme from ePrint 2025/397
Lucjan Hanzlik
This note demonstrates that the blind signature scheme based on cryptographic group actions, as proposed in ePrint paper 2025/397, fails to ensure blindness. Specifically, we construct an adversary that achieves a $1/8$ advantage in the blindness experiment. The attack leverages selective abort techniques (also known as selective failure attacks), a well-known strategy in the MPC literature.
Last updated:  2025-03-05
Matchmaker: Fast Secure Inference across Deployment Scenarios
Neha Jawalkar, Nishanth Chandran, Divya Gupta, Rahul Sharma, and Arkaprava Basu
Secure Two-Party Computation (2PC) enables secure inference with cryptographic guarantees that protect the privacy of the model owner and client. However, it adds significant performance overhead. In this work, we make 2PC-based secure inference efficient while considering important deployment scenarios. We observe that the hitherto unconsidered latency of fetching keys from storage significantly impacts performance, as does network speed. We design a Linear Secret Sharing (LSS)-based system $LSS^M$ and a Function Secret Sharing (FSS)-based system $FSS^M$ for secure inference, optimized for small key size and communication, respectively. Notably, our highly-optimized and hardware-aware CPU-based $LSS^M$ outperforms prior GPU-based LSS systems by up to $50\times$. We then show that the best choice between $LSS^M$ and $FSS^M$ depends on the deployment scenario. In fact, under certain deployments, a combination of $LSS^M$ and $FSS^M$ can leverage heterogeneous processing across CPU and GPU. Such protocol-system co-design lets us outperform state-of-the-art secure inference systems by up to $21\times$ (geomean $3.25\times$).
Last updated:  2025-03-05
Multi-Client Attribute-Based Unbounded Inner Product Functional Encryption, and More
Subhranil Dutta, Aikaterini Mitrokotsa, Tapas Pal, and Jenit Tomy
This paper presents the concept of a multi-client functional encryption (MC-FE) scheme for attribute-based inner product functions (AB-IP), initially proposed by Abdalla et al. [ASIACRYPT’20], in an unbounded setting. In such a setting, the setup is independent of vector length constraints, allowing secret keys to support functions of arbitrary lengths, and clients can dynamically choose vector lengths during encryption. The functionality outputs the sum of inner products if vector lengths and indices meet a specific relation, and all clients’ attributes satisfy the key’s policy. We propose the following constructions based on the matrix decisional Diffie-Hellman assumption in a natural permissive setting of unboundedness:
– the first multi-client attribute-based unbounded IPFE (MC-AB-UIPFE) scheme secure in the standard model, overcoming previous limitations where clients could only encrypt fixed-length data;
– the first multi-input AB-UIPFE (MI-AB-UIPFE) in the public-key setting, improving upon prior bounded constructions under the same assumption;
– the first dynamic decentralized UIPFE (DD-UIPFE), enhancing the dynamism property of prior works.
Technically, we follow the blueprint of Agrawal et al. [CRYPTO’23] but begin with a new unbounded FE called extended slotted unbounded IPFE. We first construct a single-input AB-UIPFE in the standard model and then extend it to multi-input settings. In a nutshell, our work demonstrates the applicability of function-hiding security of IPFE in realizing variants of multi-input FE capable of encoding unbounded-length vectors both at the time of key generation and encryption.
Last updated:  2025-03-05
Private Computation on Common Fuzzy Records
Kyoohyung Han, Seongkwang Kim, and Yongha Son
Private computation on common records refers to analyzing data from two databases containing shared records without revealing personal information. As a basic requirement for private computation, the databases involved need to be aligned by a common identification system. However, such common identifiers are hard to come by in real-world scenarios, so multiple quasi-identifiers are used instead to identify common records. As some quasi-identifiers might be missing or contain typos, it is important to support a fuzzy-records setting. Identifying common records using quasi-identifiers requires manipulating highly sensitive information, which raises privacy concerns. This work studies the problem of enabling such data analysis on fuzzy records of quasi-identifiers. To this end, we propose ordered threshold-one (OTO) matching, which can be efficiently realized by circuit-based private set intersection (CPSI) protocols and some multiparty computation (MPC) techniques. Furthermore, we introduce generic encoding techniques from traditional matching rules to OTO matching. Finally, we achieve a secure and efficient private computation protocol that supports various matching rules which are already widely used. We also demonstrate the superiority of our proposal with experimental validation. First, we empirically check that our encoding to OTO matching has only a minor effect on accuracy for the benchmark datasets found in the fuzzy record matching literature. Second, we implement our protocol and achieve significantly faster performance at the cost of communication overhead compared to previous privacy-preserving record linkage (PPRL) protocols. In the case of 100K records per dataset, our work shows 147.58 MB communication cost, 10.71 s setup time, and 1.97 s online time, which is 7.78 times faster than the previous work (50.12 times faster when considering online time only).
Last updated:  2025-03-05
A Note on Obfuscation-based Attacks on Private-coin Evasive LWE
Tzu-Hsiang Huang, Wei-Hsiang Hung, and Shota Yamada
The evasive learning with errors (evasive LWE) assumption is a new assumption recently introduced by Wee (Eurocrypt 2022) and Tsabary (Crypto 2022) independently, as a significant strengthening of the standard LWE assumption. While the assumption is known to imply various strong primitives including witness encryption [Wee22, Tsabary22], the assumption in the most general case (i.e., the private-coin variant) is considered quite implausible due to the obfuscation-based attack mentioned in [Wee22]. This obfuscation-based attack was later formalized by Vaikuntanathan, Wee, and Wichs [VWW22]. In this note, we revisit their attack and show, by exhibiting a concrete counterexample, that the attack actually does not work. We then show that their attack can be made valid with some modifications. Along the way, we also improve the counterexample by making it provable. Specifically, our counterexample is valid assuming the (plain) LWE assumption and the existence of instance-hiding witness encryption, whereas their original counterexample depended on the heuristic assumption of the existence of an ideal obfuscation.
Last updated:  2025-03-05
Non-Interactive Verifiable Aggregation
Ojaswi Acharya, Suvasree Biswas, Weiqi Feng, Adam O'Neill, and Arkady Yerukhimovich
Consider a weak analyst that wishes to outsource data collection and computation of aggregate statistics over a potentially large population of (also weak) clients to a powerful server. For flexibility and efficiency, we consider public-key and non-interactive protocols, meaning the clients know the analyst's public key but do not share secrets, and each client sends at most one message. Furthermore, the final step should be silent, whereby the analyst simply downloads the (encrypted) result from the server when needed. To capture this setting, we define a new primitive we call Non-Interactive Verifiable Aggregation (NIVA). We require both privacy and robustness for a NIVA protocol to be deemed secure. Namely, our security notion for NIVA ensures that the clients' data remains private to both the server and the analyst, while also ensuring that malicious clients cannot skew the results by providing faulty data. We propose a secure NIVA protocol, which we call PEAR (for Private, Efficient, Accurate, Robust), which can validate inputs according to any NP validity rule. PEAR is based on a novel combination of functional encryption for inner-products (Abdalla et al., PKC 2015) and fully-linear probabilistically-checkable proofs (Boneh et al., Crypto 2019). We emphasize that PEAR is non-interactive, public-key, and makes black-box use of the underlying cryptographic primitives. Additionally, we devise substantial optimizations of PEAR for practically-relevant validity rules. Finally, we implement PEAR to show feasibility for such validity rules, conducting a thorough performance evaluation. In particular, we compare PEAR to two more straightforward or "off-the-shelf" NIVA protocols and show performance gains, demonstrating the merit of our new approach. The bottleneck in our protocol comes from the fact that we require the underlying IPFE scheme to be "unrestricted" over a large field. As more efficient such schemes are developed, they can immediately be plugged into PEAR for further gains.
Last updated:  2025-03-05
Samaritan: Linear-time Prover SNARK from New Multilinear Polynomial Commitments
Chaya Ganesh, Sikhar Patranabis, and Nitin Singh
We study linear-time prover SNARKs and make the following contributions: We provide a framework for transforming a univariate polynomial commitment scheme into a multilinear polynomial commitment scheme. Our transformation is generic, can be instantiated with any univariate scheme and improves on prior transformations like Gemini (EUROCRYPT 2022) and Virgo (S&P 2020) in all relevant parameters: proof size, verification complexity, and prover complexity. Instantiating the above framework with the KZG univariate polynomial commitment scheme, we get SamaritanPCS – the first multilinear polynomial commitment scheme with constant proof size and linear-time prover. SamaritanPCS is a drop-in replacement for the popular PST scheme, and improves upon PST in all relevant parameters. We construct LogSpartan – a new multilinear PIOP for R1CS based on recent techniques for lookup arguments. Compiling this PIOP using SamaritanPCS gives Samaritan – a SNARK in the universal and updatable SRS setting. Samaritan has linear-time prover, logarithmic verification and logarithmic proof size. Concretely, its proof size is one of the smallest among other known linear-time prover SNARKs without relying on concretely expensive proof recursion techniques. For an R1CS instance with 1 million constraints, Samaritan (over BLS12-381 curve) has a proof size of 6.7KB. We compare Samaritan with other linear-time prover SNARKs in the updatable setting. We asymptotically improve on the $\log^2 n$ proof size of Spartan. Unlike Libra (CRYPTO 2019), the argument size of Samaritan is independent of the circuit depth. Compared to Gemini (EUROCRYPT 2022), Samaritan achieves 3$\times$ smaller argument size at 1 million constraints. We match the argument size of HyperPlonk, which is the smallest linear-time SNARK for the Plonkish constraint system, while achieving slightly better verification complexity. We believe that our transformation and our techniques for applying lookups based on logarithmic derivatives to the multilinear setting are of wider interest.
Last updated:  2025-03-04
ProofFrog: A Tool For Verifying Game-Hopping Proofs
Ross Evans, Matthew McKague, and Douglas Stebila
Cryptographic proofs allow researchers to provide theoretical guarantees on the security that their constructions provide. A proof of security can completely eliminate a class of attacks by potential adversaries. Human fallibility, however, means that even a proof reviewed by experts may still hide flaws or outright errors. Proof assistants are software tools built for the purpose of formally verifying each step in a proof, and as such have the potential to prevent erroneous proofs from being published and insecure constructions from being implemented. Unfortunately, existing tooling for verifying cryptographic proofs has found limited adoption in the cryptographic community, in part due to concerns with ease of use. We present ProofFrog: a new tool for verifying cryptographic game-hopping proofs. ProofFrog is designed with the average cryptographer in mind, using an imperative syntax similar to C for specifying games and a syntax for proofs that closely models pen-and-paper arguments. As opposed to other proof assistant tools which largely operate by manipulating logical formulae, ProofFrog manipulates abstract syntax trees (ASTs) into a canonical form to establish indistinguishable or equivalent behaviour for pairs of games in a user-provided sequence. We also detail the domain-specific language developed for use with the ProofFrog proof engine, the exact transformations it applies to canonicalize ASTs, and case studies of verified proofs. A tool like ProofFrog that prioritizes ease of use can lower the barrier of entry to using computer-verified proofs and aid in catching insecure constructions before they are made public.
Last updated:  2025-03-04
Evaluation of Privacy-aware Support Vector Machine (SVM) Learning using Homomorphic Encryption
William J Buchanan and Hisham Ali
The requirement for privacy-aware machine learning increases as we continue to use PII (Personally Identifiable Information) within model training. To overcome these privacy issues, we can apply Fully Homomorphic Encryption (FHE) to encrypt data before it is fed into a machine learning model. This involves creating a homomorphic encryption key pair, where the associated public key is used to encrypt the input data and the private key decrypts the output. However, there is often a performance penalty when using homomorphic encryption, so this paper evaluates the performance overhead of using the SVM machine learning technique with the OpenFHE homomorphic encryption library. The implementation uses Python and the scikit-learn library. The experiments include a range of variables such as multiplication depth, scale size, first modulus size, security level, batch size, and ring dimension, along with two different SVM models, SVM-Poly and SVM-Linear. Overall, the results show that the two main parameters affecting performance are the ring dimension and the modulus size, and that SVM-Poly and SVM-Linear show similar performance levels.
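To illustrate why a trained SVM maps naturally onto homomorphic evaluation, note that the linear-SVM decision function is a single inner product plus a bias, i.e., exactly a multiply-accumulate circuit. The sketch below is plaintext Python with scikit-learn only; the CKKS encryption step is indicated in comments rather than with actual OpenFHE calls, and the dataset choice is illustrative:

```python
# Minimal plaintext sketch: the linear-SVM decision function is one inner
# product, i.e. the multiply-accumulate circuit that would be evaluated
# under CKKS in the privacy-aware setting described above.
from sklearn import datasets
from sklearn.svm import SVC

X, y = datasets.load_iris(return_X_y=True)
X, y = X[y < 2], y[y < 2]          # binary subproblem for a linear SVM

clf = SVC(kernel="linear").fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

# Under FHE, x would arrive encrypted; the server evaluates sum(w_i * x_i) + b
# homomorphically (w and b in the clear) and returns the encrypted score.
x = X[0]
score = sum(wi * xi for wi, xi in zip(w, x)) + b
print(score, clf.decision_function([x])[0])   # the two values agree
```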
Last updated:  2025-03-04
Trapdoor Hash Functions and PIR from Low-Noise LPN
Damiano Abram, Giulio Malavolta, and Lawrence Roy
Trapdoor hash functions (TDHs) are compressing hash functions with an additional trapdoor functionality: given an encoding key for a function $f$, a hash of $x$ together with a (small) input encoding allows one to recover $f(x)$. TDHs are a versatile tool and a useful building block for more complex cryptographic protocols. In this work, we propose the first TDH construction assuming the (quasi-polynomial) hardness of the LPN problem with noise rate $\epsilon = O(\log^{1+\beta} n / n)$ for $\beta>0$, i.e., in the so-called low-noise regime. The construction achieves a $2^{\Theta(\log^{1-\beta} \lambda)}$ compression factor. As an application, we obtain a private information retrieval (PIR) scheme with communication complexity $L / 2^{\Theta(\log^{1-\beta} L)}$, for a database of size $L$. This is the first PIR scheme with non-trivial communication complexity (asymptotically smaller than $L$) from any code-based assumption.
Last updated:  2025-03-04
On the Soundness of Algebraic Attacks against Code-based Assumptions
Miguel Cueto Noval, Simon-Philipp Merz, Patrick Stählin, and Akin Ünal
We study recent algebraic attacks (Briaud-Øygarden EC'23) on the Regular Syndrome Decoding (RSD) problem and the assumptions underlying the correctness of their attacks' complexity estimates. By relating these assumptions to interesting algebraic-combinatorial problems, we prove that they do not hold in full generality. However, we show that they are (asymptotically) true for most parameter sets, supporting the soundness of algebraic attacks on RSD. Further, we prove—without any heuristics or assumptions—that RSD can be broken in polynomial time whenever the number of error blocks times the square of the size of error blocks is larger than 2 times the square of the dimension of the code. Additionally, we use our methodology to attack a variant of the Learning With Errors problem where each error term lies in a fixed set of constant size. We prove that this problem can be broken in polynomial time, given a sufficient number of samples. This result improves on the seminal work by Arora and Ge (ICALP'11), as the attack's time complexity is independent of the LWE modulus.
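In symbols, with $h$ the number of error blocks, $b$ the block size, and $k$ the code dimension (our notation, chosen only to restate the condition quoted above), the unconditional result reads
\[
h \cdot b^2 > 2k^2 \;\Longrightarrow\; \text{RSD is solvable in polynomial time.}
\]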
Last updated:  2025-03-08
Deimos Cipher: A High-Entropy, Secure Encryption Algorithm with Strong Diffusion and Key Sensitivity
Mohsin Belam
Deimos Cipher is a symmetric encryption algorithm designed to achieve high entropy and strong diffusion while maintaining efficiency. It employs advanced cryptographic transformations to ensure robust security against modern cryptanalysis techniques. Entropy tests demonstrate its ability to generate highly randomized ciphertext, surpassing industry standards. Avalanche effect analysis confirms optimal diffusion, achieving an average bit change of 50.18% in large datasets. Key sensitivity tests reveal a 50.54% ciphertext difference for minimal key variations, ensuring strong resistance to differential cryptanalysis. With fast encryption and decryption speeds, Deimos Cipher offers a balanced approach between security and performance, making it suitable for secure communication and data protection. This paper presents the algorithm's design, security analysis, and benchmarking against established cryptographic standards.
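The avalanche and key-sensitivity percentages quoted above come from a standard measurement: flip a single input (or key) bit and count the fraction of output bits that change, averaging over many positions. A generic version of that test (Python, with SHA-256 as a stand-in transformation, since the Deimos Cipher itself is not reproduced here) might look like:

```python
import hashlib

def bit_diff_fraction(a: bytes, b: bytes) -> float:
    """Fraction of differing bits between two equal-length byte strings."""
    diff = sum(bin(x ^ y).count("1") for x, y in zip(a, b))
    return diff / (8 * len(a))

def avalanche(msg: bytes, bit: int) -> float:
    """Flip one message bit and measure the output bit-change fraction."""
    flipped = bytearray(msg)
    flipped[bit // 8] ^= 1 << (bit % 8)
    return bit_diff_fraction(hashlib.sha256(msg).digest(),
                             hashlib.sha256(bytes(flipped)).digest())

msg = b"example plaintext block!"
rates = [avalanche(msg, i) for i in range(8 * len(msg))]
print(sum(rates) / len(rates))  # ideally close to 0.5 (cf. 50.18% above)
```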
Last updated:  2025-03-04
Garblet: Multi-party Computation for Protecting Chiplet-based Systems
Mohammad Hashemi, Shahin Tajik, and Fatemeh Ganji
The introduction of shared computation architectures assembled from heterogeneous chiplets introduces new security threats. Due to the shared logical and physical resources, an untrusted chiplet can act maliciously to surreptitiously probe the data communication between chiplets or sense the computation shared between them. This paper presents Garblet, the first framework to leverage the flexibility offered by chiplet technology and Garbled Circuits (GC)-based MPC to enable efficient, secure computation even in the presence of potentially compromised chiplets. Our approach integrates a customized hardware Oblivious Transfer (OT) module and an optimized evaluator engine into chiplet-based platforms. This configuration distributes the tasks of garbling and evaluating circuits across two chiplets, reducing communication costs and enhancing computation speed. We implement this framework on an AMD/Xilinx UltraScale+ multi-chip module and demonstrate its effectiveness using benchmark functions. Additionally, we introduce a novel circuit decomposition technique that allows for parallel processing across multiple chiplets to further improve computational efficiency. Our results highlight the potential of chiplet systems for accelerating GC (e.g., garbling AES takes 0.0226 ms) in order to guarantee the security and privacy of the computation on chiplets.
Last updated:  2025-03-04
Multi-Authority Functional Encryption: Corrupt Authorities, Dynamic Collusion, Lower Bounds, and More
Rishab Goyal and Saikumar Yadugiri
Decentralization is a great enabler for adoption of modern cryptography in real-world systems. Widespread adoption of blockchains and secure multi-party computation protocols are perfect examples of the dramatic rise in deployment of decentralized cryptographic systems. Much of cryptographic research can be viewed as reducing (or eliminating) the dependence on trusted parties, while shielding from stronger adversarial threats. In this work, we study the problem of multi-authority functional encryption (MAFE), a popular decentralized generalization of functional encryption (FE). Our main contributions are:
1. We design MAFE for all poly-sized circuits, in the bounded collusion model, under the minimal assumption of PKE/OWFs. Prior to our work, this required either sub-exponentially secure obfuscation, or $\log n$-party key exchange, or Random Oracles and sub-exponentially secure PKE. We also extend our constructions to the dynamic collusion model under the minimal assumptions of IBE/OWFs. Unlike all prior works, our MAFE systems are truly dynamic and put no restrictions on the maximum number of authorities.
2. Under the hardness of the learning with errors (LWE) assumption, we design MAFE for all poly-sized circuits where we allow adversaries to adaptively corrupt local authorities. We allow an adversary to corrupt any $k$ out of $n$ local authorities as long as ${{n}\choose{k}} = \text{poly}(\lambda)$. Prior to this, such MAFE relied on sub-exponentially secure obfuscation. Additionally, we design a new MAFE compiler for boosting selective authority corruptions to non-adaptive authority corruptions.
3. We prove a tight implication from MAFE to (VBB/indistinguishability) obfuscation. We show that MAFE implies obfuscation only if the number of attribute bits (jointly) controlled by all corrupt local authorities is $\omega(\log \lambda)$. This proves optimality of our second result for a wide range of parameters.
4. Finally, we propose a new MAFE system that we refer to as multi-authority attribute-based functional encryption (MA-ABFE). We view it as an approach to get the best of both worlds (fully collusion-resistant MA-ABE, and bounded-collusion-resistant MAFE). By combining our results with prior MA-ABE results, we obtain MA-ABFE for $\mathsf{NC}^1 \circ \mathsf{P}/\mathsf{Poly}$ from standard pairing-based assumptions, and for $\mathsf{DNF} \circ \mathsf{P}/\mathsf{Poly}$ from LWE, both in the Random Oracle Model. We also describe a simple construction of MA-ABE for general predicates from witness encryption, and combining with known results, we also get MA-ABFE for $\mathsf{P}/\mathsf{Poly} \circ \mathsf{P}/\mathsf{Poly}$ from evasive LWE.
Last updated:  2025-03-04
Security of the Ascon Authenticated Encryption Mode in the Presence of Quantum Adversaries
Nathalie Lang, Stefan Lucks, Bart Mennink, and Suprita Talnikar
We examine the post-quantum security of the Ascon authenticated encryption (AE) mode. In spite of comprehensive research on Ascon's classical security, the potential impact of quantum adversaries on Ascon has not yet been explored much. We investigate the generic security of the Ascon AE mode in the setting where the adversary owns a quantum computer to improve its attack, while the adversarial encryption or decryption queries are still classical. In this so-called Q1 model, Ascon achieves security up to approximately $\min\{2^{c/3},2^{k/2}\}$ evaluations, where $c$ is the capacity, $k$ the key size, and the adversary is block-wise adaptive but restricted to one forgery attempt. Our technique is based on applying the semi-classical one-way to hiding (O2H) lemma, and on tailoring the puncture set to the Ascon mode. Additionally, we discuss different parameter choices for Ascon and compare our results to generic quantum attacks, such as Grover-based key search and state recovery.
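For a sense of scale, instantiating the bound with the classical Ascon-128 parameters (capacity $c = 256$ bits, key size $k = 128$ bits; the paper also discusses other parameter choices) gives
\[
\min\{2^{c/3},\, 2^{k/2}\} \;=\; \min\{2^{85.3},\, 2^{64}\} \;=\; 2^{64}.
\]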
Last updated:  2025-03-04
TreeKEM: A Modular Machine-Checked Symbolic Security Analysis of Group Key Agreement in Messaging Layer Security
Théophile Wallez, Jonathan Protzenko, and Karthikeyan Bhargavan
The Messaging Layer Security (MLS) protocol standard proposes a novel tree-based protocol that enables efficient end-to-end encrypted messaging over large groups with thousands of members. Its functionality can be divided into three components: TreeSync for authenticating and synchronizing group state, TreeKEM for the core group key agreement, and TreeDEM for group message encryption. While previous works have analyzed the security of abstract models of TreeKEM, they do not account for the precise low-level details of the protocol standard. This work presents the first machine-checked security proof for TreeKEM. Our proof is in the symbolic Dolev-Yao model and applies to a bit-level precise, executable, interoperable specification of the protocol. Furthermore, our security theorem for TreeKEM composes naturally with a previous result for TreeSync to provide a strong modular security guarantee for the published MLS standard.
Last updated:  2025-03-04
Low Communication Threshold FHE from Standard (Module-)LWE
Hiroki Okada and Tsuyoshi Takagi
Threshold fully homomorphic encryption (ThFHE) is an extension of FHE that can be applied to multiparty computation (MPC) with low round complexity. Recently, Passelègue and Stehlé (Asiacrypt 2024) presented a simulation-secure ThFHE scheme with polynomially small decryption shares from the “yet another” learning with errors (LWE) assumption, in which the norm of the secret key is leaked to the adversary. While “yet another” LWE reduces from standard LWE, its module variant, “yet another” module-LWE (MLWE), lacks a known reduction from standard MLWE. Because of this, extending their scheme to an MLWE-based construction was left as an open question. In this paper, we address this open problem: we propose a simulation-secure ThFHE scheme with polynomially small decryption shares whose security is (directly) reduced from standard LWE/MLWE. Our core technique, which we call “noise padding”, eliminates the need for “yet another” assumptions: we distribute shares of a small error and use them to adjust the distribution of decryption noise so that no information about the secret key is leaked. As side benefits of our construction, our ThFHE efficiently realizes arbitrary $T$-out-of-$N$ threshold decryption via simple Shamir secret sharing instead of $\{0,1\}$-linear secret sharing. Furthermore, the sizes of keys, ciphertexts, and decryption shares in our scheme are constant w.r.t. the number of parties $N$; we achieve compactness w.r.t. $N$.
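The $T$-out-of-$N$ decryption mentioned above rests on plain Shamir secret sharing; as a reminder of the structure being invoked, here is textbook Shamir over a prime field (a generic sketch, not the paper's actual share format; the modulus is an arbitrary Mersenne prime chosen for illustration):

```python
import random

P = 2**127 - 1  # a Mersenne prime; stands in for the scheme's field modulus

def share(secret: int, t: int, n: int):
    """Shamir t-out-of-n sharing: random degree-(t-1) polynomial f, f(0)=secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 from any t shares."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

s = random.randrange(P)
assert reconstruct(share(s, t=3, n=5)[:3]) == s  # any 3 of 5 shares suffice
```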
Last updated:  2025-03-04
Hybrid Obfuscated Key Exchange and KEMs
Felix Günther, Michael Rosenberg, Douglas Stebila, and Shannon Veitch
Hiding the metadata in Internet protocols serves to protect user privacy, dissuade traffic analysis, and prevent network ossification. Fully encrypted protocols require even the initial key exchange to be obfuscated: a passive observer should be unable to distinguish a protocol execution from an exchange of random bitstrings. Deployed obfuscated key exchanges such as Tor's pluggable transport protocol obfs4 are Diffie–Hellman-based, and rely on the Elligator encoding for obfuscation. Recently, Günther, Stebila, and Veitch (CCS '24) proposed a post-quantum variant pq-obfs, using a novel building block called obfuscated key encapsulation mechanisms (OKEMs): KEMs whose public keys and ciphertexts look like random bitstrings. For transitioning real-world protocols, pure post-quantum security is not enough. Many are taking a hybrid approach, combining traditional and post-quantum schemes to hedge against security failures in either component. While hybrid KEMs are already widely deployed (e.g., in TLS 1.3), existing hybridization techniques fail to provide hybrid obfuscation guarantees for OKEMs. Further, even if a hybrid OKEM existed, the pq-obfs protocol would still not achieve hybrid obfuscation. In this work, we address these challenges by presenting the first OKEM combiner that achieves hybrid IND-CCA security with hybrid ciphertext obfuscation guarantees, and using this to build Drivel, a modification of pq-obfs that is compatible with hybrid OKEMs. Our OKEM combiner allows for a variety of practical instantiations, e.g., combining obfuscated versions of DHKEM and ML-KEM. We additionally provide techniques to achieve unconditional public key obfuscation for LWE-based OKEMs, and explore broader applications of hybrid OKEMs, including a construction of the first hybrid password-authenticated key exchange (PAKE) protocol secure against adaptive corruptions in the UC model.
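For intuition, the usual shape of a (non-obfuscated) hybrid KEM combiner runs both component KEMs and feeds the shared secrets and ciphertexts through a KDF, so that IND-CCA security survives a failure of either component. The sketch below shows only this generic shape with placeholder KEMs; the paper's contribution is a combiner that additionally preserves ciphertext obfuscation, which this sketch does not capture:

```python
import hashlib, os

def placeholder_encap():
    """Stands in for Encap() of one component KEM (e.g., DHKEM or ML-KEM)."""
    ss, ct = os.urandom(32), os.urandom(32)
    return ss, ct

def hybrid_encap():
    ss1, ct1 = placeholder_encap()   # traditional component
    ss2, ct2 = placeholder_encap()   # post-quantum component
    # KDF over both secrets and both ciphertexts: the combined key stays
    # secure as long as at least one of the two components is secure.
    ss = hashlib.sha256(ss1 + ss2 + ct1 + ct2).digest()
    return ss, (ct1, ct2)

ss, ct = hybrid_encap()
print(ss.hex())
```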
Last updated:  2025-03-03
Delegatable ABE with Full Security from Witness Encryption
Rishab Goyal and Saikumar Yadugiri
Delegatable Attribute-Based Encryption (DABE) is a well-known generalization of ABE, proposed to mirror organizational hierarchies. In this work, we design a fully-secure DABE scheme from witness encryption and other simple assumptions. Our construction does not rely on Random Oracles, and we provide a black-box reduction to polynomial hardness of underlying assumptions. To the best of our knowledge, this is the first DABE construction (beyond hierarchical identity-based encryption) that achieves full security without relying on complexity leveraging. Our DABE supports an unbounded number of key delegations, and the secret key size grows just linearly with each key delegation operation.
Last updated:  2025-03-03
AsyRand: fast asynchronous distributed randomness beacon with reconfiguration
Liang Zhang, Tao Liu, Zhanrong Ou, Haibin Kan, and Jiheng Zhang
Distributed randomness beacon protocols, which generate publicly verifiable randomness at regular intervals, are crucial for a wide range of applications. The publicly verifiable secret sharing (PVSS) scheme is a promising cryptographic primitive for implementing beacon protocols, such as Hydrand (S\&P '20) and SPURT (S\&P '22). However, two key challenges for practical deployment remain unresolved: asynchrony and reconfiguration. In this paper, we introduce the $AsyRand$ beacon protocol to address these challenges. In brief, $AsyRand$ leverages Bracha Reliable Broadcast (BRB) or BRB-like protocols for message dissemination and incorporates a producer-consumer model to decouple the production and consumption of PVSS commitments. In the producer-consumer model, PVSS commitments are produced and consumed using a queue data structure. Specifically, the producer process is responsible for generating new PVSS commitments and reaching consensus on them within the queue, while the consumer process continuously consumes the commitments to recover PVSS secrets and generate new beacon values. This separation allows the producer and consumer processes to operate simultaneously and asynchronously, without the need for a global clock. Moreover, the producer-consumer model enables each party to detect potential faults in other parties by monitoring the queue length. If necessary, parties in $AsyRand$ can initiate a removal process for faulty parties. BRB is also employed to facilitate the addition of new parties without requiring a system restart. In summary, $AsyRand$ supports reconfiguration, enhancing both the protocol's usability and reliability. Additionally, we propose a novel PVSS scheme based on the $\Sigma$ protocol, which is of independent interest. Regarding complexity, $AsyRand$ achieves state-of-the-art performance with $O(n^2)$ communication complexity, $O(n)$ computation complexity, and $O(n)$ verification complexity.
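The producer-consumer decoupling described above can be pictured with an ordinary bounded queue: one process appends agreed-upon commitments while another drains them to derive beacon values, and the queue length doubles as a fault signal. The toy sketch below uses plain Python threads with placeholder strings in place of the actual BRB/PVSS machinery:

```python
import queue, threading, time

commitments = queue.Queue(maxsize=16)  # queue length serves as a fault signal

def producer():
    """Stands in for: generate a PVSS commitment and reach consensus on it."""
    for epoch in range(8):
        commitments.put(f"pvss-commitment-{epoch}")
        time.sleep(0.01)  # no global clock: producer runs at its own pace

def consumer():
    """Stands in for: recover the PVSS secret and emit a new beacon value."""
    for _ in range(8):
        c = commitments.get()
        print(f"beacon value derived from {c}")
        commitments.task_done()

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads: t.start()
for t in threads: t.join()
```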
Last updated:  2025-03-03
Withdrawable signatures in Fiat-Shamir with aborts constructions
Ramses Fernandez
This article presents an extension of the work performed by Liu, Baek and Susilo on withdrawable signatures to the Fiat-Shamir with aborts paradigm. We introduce an abstract construction, and provide security proofs for this proposal. As an instantiation, we provide a concrete construction for a withdrawable signature scheme based on Dilithium.
Last updated:  2025-03-03
SNARKs for Stateful Computations on Authenticated Data
Johannes Reinhart, Erik-Oliver Blass, and Bjoern Annighoefer
We present a new generalization of (zk-)SNARKs combining two additional features at the same time. Besides the verification of correct computation, our new SNARKs also allow, first, the verification of input data authenticity. Specifically, a verifier can confirm that the input to the computation originated from a trusted source. Second, our SNARKs support verification of stateful computations across multiple rounds, ensuring that the output of the current round correctly depends on the internal state of the previous round. Our SNARKs are specifically suited to applications in cyber-physical control systems, where computations are periodically carried out and need to be checked immediately. Our focus is on concrete practicality, so we abstain from arithmetizing hash functions or signatures in our SNARKs. Rather, we modify the internals of an existing SNARK to extend its functionality. Additionally, we present new optimizations to reduce proof size, prover time, and verification time in our setting. With our construction, prover runtime improves over the baseline by a factor of 89. Verification time is 70% lower for computations on authenticated data and 33% lower for stateful computations. To demonstrate relevance and practicality, we implement and benchmark our new SNARKs in a sample real-world scenario with a (simple) quadcopter flight control system.
Last updated:  2025-03-03
Periodic Table of Cryptanalysis: Geometric Approach with Different Bases
Kai Hu, Chi Zhang, Chengcheng Chang, Jiashu Zhang, Meiqin Wang, and Thomas Peyrin
In the past three decades, we have witnessed the creation of various cryptanalytic attacks. However, relatively little research has been done on their potential underlying connections. The geometric approach, developed by Beyne in 2021, shows that a cipher can be viewed as a linear operation when we treat its input and output as points in an induced free vector space. By performing a change of basis for the input and output spaces, one can obtain various transition matrices. Linear, differential, and (ultrametric) integral attacks have been well reinterpreted by Beyne's theory in a unified way. Thus far, the geometric approach always uses the same basis for the input and output spaces. We observe here that this restriction is unnecessary and that allowing different bases makes the geometric approach more flexible and able to interpret/predict more attack types. Given some set of bases for the input and output spaces, a family of basis-based attacks is defined by combining them, and all attacks in this family can be studied in a unified automatic search method. We revisit three kinds of bases from previous geometric approach papers and extend them to four extra ones by introducing new rules when generating new bases. With the final seven bases, we can obtain $7^{2d}$ different basis-based attacks in the $d$-th order spaces, where the order is defined as the number of messages used in one sample during the attack. We then provide four examples of applications of this new framework. First, we show that by choosing a better pair of bases, Beyne and Verbauwhede's ultrametric integral cryptanalysis can be interpreted as a single element of a transition matrix rather than as a linear combination of elements. This unifies ultrametric integral cryptanalysis with the previous linear and quasi-differential attacks. Second, we revisit the multiple-of-$n$ property with our refined geometric approach and exhibit new multiple-of-$n$ distinguishers that can reach more rounds of the SKINNY-64 cipher than the state-of-the-art. Third, we study the multiple-of-$n$ property for the first-order case, which is similar to the subspace trail but where the divisibility property is considered. This leads to a new distinguisher for 11-round-reduced SKINNY-64. Finally, we give a closed formula for differential-linear approximations without any assumptions, even confirming that the two differential-linear approximations of SIMECK-32 and SIMECK-48 found by Hadipour et al. are deterministic independently of concrete key values. We emphasize that all these applications were not possible before.
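The change-of-basis idea can be made concrete at toy scale: conjugating the transition matrix of a small S-box by the $\pm 1$ character basis yields its (scaled) correlation matrix, and choosing different bases on the input and output sides, as proposed above, simply replaces one of the two conjugating matrices. A minimal numpy illustration with a toy 2-bit permutation of our own choosing:

```python
import numpy as np

sbox = [0, 3, 1, 2]                       # a toy 2-bit permutation
T = np.zeros((4, 4))                      # transition matrix: T[f(x), x] = 1
for x, y in enumerate(sbox):
    T[y, x] = 1

# Character (Fourier) basis over F_2^2: F[u, x] = (-1)^{u.x}
F = np.array([[(-1) ** bin(u & x).count("1") for x in range(4)]
              for u in range(4)])

C = F @ T @ F.T / 4                       # correlation matrix = F T F^{-1}
print(C)                                  # entries are the linear correlations
```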
Last updated:  2025-03-03
Related-Key Differential and Boomerang Cryptanalysis in the Fixed-Key Model
Chengcheng Chang, Kai Hu, Muzhou Li, and Meiqin Wang
Differential cryptanalysis, along with its variants such as boomerang attacks, is widely used to evaluate the security of block ciphers. These cryptanalytic techniques often rely on assumptions like the hypothesis of stochastic equivalence and the Markov cipher assumption. Recently, more attention has been paid to verifying whether differential characteristics (DCs) meet these assumptions, finding both positive and negative results. A part of these efforts includes automatic search methods for both value and difference propagation (e.g., Liu et al. CRYPTO 2020, Nageler et al. ToSC 2025/1), structural constraints analysis (e.g., Tan and Peyrin, ToSC 2022/4), and the quasidifferential framework (Beyne and Rijmen, CRYPTO 2022). Nevertheless, less attention has been paid to related-key DCs and boomerang distinguishers, where the same assumptions are used. To the best of our knowledge, only some related-tweakey DCs of SKINNY were checked, thanks to its linear word-based key schedule, and no similar work has been done for boomerang distinguishers. The verification of related-key DCs and boomerang distinguishers is as important as that of DCs, as they often hold the longest attack records for block ciphers. This paper focuses on investigating the validity of DCs in the related-key setting and boomerang distinguishers in both single- and related-key scenarios. For this purpose, we generalize Beyne and Rijmen's quasidifferential techniques to related-key DCs and boomerang attacks. First, to verify related-key DCs, the related-key quasi-DC is proposed. Similar to the relationship between the quasi-DC and the DC, the exact probability of a related-key DC is equal to the sum of all corresponding related-key quasi-DCs' correlations. Since the related-key quasi-DCs involve the key information, we can determine the probability of the target related-key DC in different key subspaces. We find both positive and negative results. For example, we verify the 18-round related-key DC used in the best attack on GIFT-64, whose probability is $2^{-58}$, finding that for a $2^{-5} + 2^{-8}$ fraction of the $2^{128}$ keys this related-key DC has a higher probability of around $2^{-50}$, while it is impossible for the remaining keys. Second, we identify proper bases to describe boomerang distinguishers with the geometric approach. A quasi-BCT is constructed to consider the value influence in the boomerang connectivity table (BCT). For the DC parts, the quasi-biDDT is used. Connecting the quasi-BCT and quasi-biDDT, we can verify the probability of a boomerang distinguisher with quasi-boomerang characteristics. This also allows us to analyze the probability of the boomerang in different key spaces. For a 17-round boomerang distinguisher of SKINNY-64-128 whose probability is $2^{-50}$, we find that the probability can be $2^{-44}$ for half of the keys, and impossible for the other half.
Last updated:  2025-03-03
PEGASIS: Practical Effective Class Group Action using 4-Dimensional Isogenies
Pierrick Dartois, Jonathan Komada Eriksen, Tako Boris Fouotsa, Arthur Herlédan Le Merdy, Riccardo Invernizzi, Damien Robert, Ryan Rueger, Frederik Vercauteren, and Benjamin Wesolowski
In this paper, we present the first practical algorithm to compute an effective group action of the class group of any imaginary quadratic order $\mathcal{O}$ on a set of supersingular elliptic curves primitively oriented by $\mathcal{O}$. Effective means that we can act with any element of the class group directly, and are not restricted to acting by products of ideals of small norm, as for instance in CSIDH. Such restricted effective group actions often hamper cryptographic constructions, e.g. in signature or MPC protocols. Our algorithm is a refinement of the Clapoti approach by Page and Robert, and uses $4$-dimensional isogenies. As such, it runs in polynomial time, does not require the computation of the structure of the class group, nor expensive lattice reductions, and our refinements allows it to be instantiated with the orientation given by the Frobenius endomorphism. This makes the algorithm practical even at security levels as high as CSIDH-4096. Our implementation in SageMath takes 1.5s to compute a group action at the CSIDH-512 security level, 21s at CSIDH-2048 level and around 2 minutes at the CSIDH-4096 level. This marks the first instantiation of an effective cryptographic group action at such high security levels. For comparison, the recent KLaPoTi approach requires around 200s at the CSIDH-512 level in SageMath and 2.5s in Rust.
Last updated:  2025-03-03
Re-Randomize and Extract: A Novel Commitment Construction Framework Based on Group Actions
Kaijie Jiang, Anyu Wang, Hengyi Luo, Guoxiao Liu, Tang Gang, Yanbin Pan, and Xiaoyun Wang
Cryptographic group actions have attracted growing attention as a useful tool for constructing cryptographic schemes. Among their applications, commitment schemes are particularly interesting as fundamental primitives, playing a crucial role in protocols such as zero-knowledge proofs, multi-party computation, and more. In this paper, we introduce a novel framework to construct commitment schemes based on cryptographic group actions. Specifically, we propose two key techniques for general group actions: re-randomization and randomness extraction. Roughly speaking, a re-randomization algorithm introduces randomness within an orbit for any input element, while a randomness extractor maps this randomness to uniformity over the message space. We demonstrate that these techniques can significantly facilitate the construction of commitment schemes, providing a flexible framework for constructing either perfectly hiding or perfectly binding commitments, depending on the type of extractor involved. Moreover, we extend our framework to support the construction of commitments with additional desirable properties beyond hiding and binding, such as dual-mode commitments and enhanced linkable commitments. These extensions are achieved by further adapting the extractor to satisfy trapdoor or homomorphic properties. Finally, we instantiate all our proposed commitment schemes using lattices, specifically leveraging the lattice isomorphism problem (LIP) and the lattice automorphism problem (LAP) as underlying cryptographic assumptions. To the best of our knowledge, this is the first commitment scheme construction based on LIP/LAP. Additionally, we use LIP to provide a repair and improvement to the tensor isomorphism-based non-interactive commitment scheme proposed by D'Alconzo, Flamini, and Gangemi (ASIACRYPT 2023), which was recently shown to be insecure by an attack from Gilchrist, Marco, Petit, and Tang (CRYPTO 2024).
Last updated:  2025-03-03
Computational Quantum Anamorphic Encryption and Anamorphic Secret Sharing
Sayantan Ganguly and Shion Samadder Chaudhury
The concept of anamorphic encryption, first formally introduced by Persiano et al. in their influential 2022 paper titled ``Anamorphic Encryption: Private Communication Against a Dictator,'' enables embedding covert messages within ciphertexts. A key requirement is that an anamorphic ciphertext, which carries a covert message in addition to the original one, be indistinguishable from an original ciphertext. The original notion was defined for public-key cryptosystems. First, we present a quantum analogue of the classical anamorphic encryption definition based on public-key encryption. We also introduce a definition of quantum anamorphic encryption that relies on symmetric-key encryption. Furthermore, we provide a detailed generalized construction of quantum anamorphic symmetric-key encryption: given any two quantum density matrices of possibly different dimensions, we construct a single quantum density matrix, the quantum anamorphic ciphertext, containing ciphertexts of both. Subsequently, we introduce a definition of computational anamorphic secret-sharing and extend the work of \c{C}akan et al. on computational quantum secret-sharing to computational quantum anamorphic secret-sharing, specifically addressing scenarios with multiple messages, multiple keys, and a single share function. The proposed secret-sharing scheme is secure against quantum adversaries.
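To give a flavour of the kind of object involved, here is a minimal numpy sketch of one natural way to pack two density matrices of different dimensions into a single density matrix, via a weighted direct sum; this only illustrates the packing step, not the paper's anamorphic construction, which must additionally make the result indistinguishable from an ordinary ciphertext.

```python
import numpy as np

def weighted_direct_sum(rho1, rho2, p=0.5):
    """Pack density matrices rho1 (d1 x d1) and rho2 (d2 x d2) of different
    dimensions into one (d1+d2) x (d1+d2) density matrix via a weighted
    direct sum p*rho1 (+) (1-p)*rho2. Trace 1 and positivity are preserved."""
    d1, d2 = rho1.shape[0], rho2.shape[0]
    rho = np.zeros((d1 + d2, d1 + d2), dtype=complex)
    rho[:d1, :d1] = p * rho1
    rho[d1:, d1:] = (1 - p) * rho2
    return rho

# Example: a qubit state and a qutrit state combined into a 5-dimensional state.
rho_qubit = np.array([[0.5, 0.5], [0.5, 0.5]])   # |+><+|
rho_qutrit = np.eye(3) / 3                       # maximally mixed qutrit
rho = weighted_direct_sum(rho_qubit, rho_qutrit)
assert np.isclose(np.trace(rho).real, 1.0)
```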
Last updated:  2025-03-03
Tight Adaptive Simulation Security for Identity-based Inner-Product FE in the (Quantum) Random Oracle Model
Tenma Edamura and Atsushi Takayasu
Abdalla et al. (ASIACRYPT 2020) introduced a notion of identity-based inner-product functional encryption (IBIPFE) that combines identity-based encryption and inner-product functional encryption (IPFE). Thus far, several pairing-based and lattice-based IBIPFE schemes have been proposed. However, there are two open problems. First, there are no known IBIPFE schemes that satisfy adaptive simulation-based security. Second, known IBIPFE schemes that satisfy adaptive indistinguishability-based security or selective simulation-based security do not have tight reductions. In this paper, we propose lattice-based and pairing-based IBIPFE schemes that satisfy tight adaptive simulation-based security. First, we propose a generic transformation from an indistinguishability-based secure $(L + 1)$-dimensional (IB)IPFE scheme to a simulation-based secure $L$-dimensional (IB)IPFE scheme. The proposed transformation improves Agrawal et al.'s transformation for plain IPFE (PKC 2020), which requires an indistinguishability-based secure $2L$-dimensional scheme. Then, we construct a lattice-based IBIPFE scheme that satisfies tight adaptive indistinguishability-based security under the LWE assumption in the quantum random oracle model. We apply the proposed transformation and obtain the first lattice-based IBIPFE scheme that satisfies adaptive simulation-based security. Finally, we construct a pairing-based IBIPFE scheme that satisfies tight adaptive simulation-based security under the DBDH assumption in the random oracle model. The pairing-based scheme does not use the proposed transformation, in order to achieve the best efficiency.
Last updated:  2025-03-06
Blind Signatures from Cryptographic Group Actions
Dung Hoang Duong, Xuan Thanh Khuc, Youming Qiao, Willy Susilo, and Chuanqi Zhang
We provide a generic construction of blind signatures from cryptographic group actions following the framework of the blind signature CSIOtter introduced by Katsumata et al. (CRYPTO'23) in the context of isogeny (commutative group action). We adapt and modify that framework to make it work even for non-commutative group actions. As a result, we obtain a blind signature from abstract group actions that is proven secure in the random oracle model. We also propose an instantiation based on a variant of linear code equivalence, interpreted as a symmetric group action.
Last updated:  2025-03-03
Trail-Estimator: An Automated Verifier for Differential Trails in Block Ciphers
Thomas Peyrin, Quan Quan Tan, Hongyi Zhang, and Chunning Zhou
Differential cryptanalysis is a powerful technique for attacking block ciphers, wherein the Markov cipher assumption and stochastic hypothesis are commonly employed to simplify the search and probability estimation of differential trails. However, these assumptions often neglect inherent algebraic constraints, potentially resulting in invalid trails and inaccurate probability estimates. Some studies identified violations of these assumptions and explored how they impose constraints on key material, but they have not yet fully captured all relevant ones. This study proposes Trail-Estimator, an automated verifier for differential trails on block ciphers, consisting of two parts: a constraint detector Cons-Collector and a solving tool Cons-Solver. We first establish the fundamental principles that will allow us to systematically identify all constraint subsets within a differential trail, upon which Cons-Collector is built. Then, Cons-Solver utilizes specialized preprocessing techniques to efficiently solve the detected constraint subsets, thereby determining the key space and providing a comprehensive probability distribution of differential trails. To validate its effectiveness, Trail-Estimator is applied to verify 14 differential trails for the SKINNY, LBLOCK, and TWINE block ciphers. Experimental results show that Trail-Estimator consistently identifies previously undetected constraints for SKINNY and discovers constraints for the first time for LBLOCK and TWINE. Notably, it is the first tool to discover long nonlinear constraints extending beyond five rounds in these ciphers. Furthermore, Trail-Estimator's accuracy is validated by experiments showing its predictions closely match the real probability distribution of short-round differential trails.
Last updated:  2025-03-03
Provably Secure Approximate Computation Protocols from CKKS
Intak Hwang, Yisol Hwang, Miran Kim, Dongwon Lee, and Yongsoo Song
Secure multi-party computation (MPC) enables collaborative, privacy-preserving computation over private inputs. Advances in homomorphic encryption (HE), particularly the CKKS scheme, have made secure computation practical, making it well-suited for real-world applications involving approximate computations. However, the inherent approximation errors in CKKS present significant challenges in developing MPC protocols. This paper investigates the problem of secure approximate MPC from CKKS. We first analyze CKKS-based protocols in the two-party setting. When only one party holds a private input and the other party acts as an evaluator, a simple protocol with the noise smudging technique on the encryptor's side achieves security in the standard manner. When both parties have private inputs, we demonstrate that the protocol incorporating independent errors from each party achieves a relaxed standard security notion, referred to as liberal security. Nevertheless, such a protocol fails to satisfy the standard security definition. To address this limitation, we propose a novel protocol that employs a distributed sampling approach to generate smudging noise in a secure manner, which satisfies the standard security definition. Finally, we extend the two-party protocols to the multi-party setting. Since the existing threshold CKKS-based MPC protocol only satisfies liberal security, we present a novel multi-party protocol achieving standard security by applying multi-party distributed sampling of a smudging error. For all the proposed protocols, we formally define the functionalities and provide rigorous security analysis within the simulation-based security framework. To the best of our knowledge, this is the first work to explicitly define the functionality of CKKS-based approximate MPC and achieve formal security guarantees.
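The noise-smudging technique referenced above is standard and easy to illustrate: fresh noise that is much larger than the (secret-dependent) CKKS evaluation error statistically hides that error. A toy numpy sketch with made-up magnitudes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy magnitudes: the CKKS evaluation error is small and secret-dependent,
# while the smudging noise is chosen much larger (in practice, by a
# super-polynomial factor) so that the error's distribution is hidden.
eval_error = rng.normal(0, 2**-30, size=8)        # secret-dependent error (toy)
smudging = rng.uniform(-2**-10, 2**-10, size=8)   # fresh, large smudging noise

published = eval_error + smudging
# The published noise is dominated by the smudging term; the statistical
# distance between (eval_error + smudging) and (smudging alone) scales
# roughly with the ratio of the two magnitudes, here about 2^-20.
print(np.max(np.abs(eval_error)) / np.max(np.abs(smudging)))
```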
Last updated:  2025-03-02
Reducing the Number of Qubits in Solving LWE
Barbara Jiabao Benedikt
At Crypto 2021, May presented an algorithm solving the ternary Learning-With-Error problem, where the solution is a ternary vector $s\in\{0,\pm 1\}^{n}$ with a known number of $(+1)$ and $(-1)$ entries. This attack significantly improved the time complexity of previously known algorithms from $\mathcal{S}^{0.5}$ to $\mathcal{S}^{0.25}$, where $\mathcal{S}$ is the size of the key space. To this end, May exploited the fact that using more representations, i.e., allowing ternary interim results with additional $(+1)$ and $(-1)$ entries, reduces the overall time complexity. Later, van Hoof et al. (PQCrypto 2021) combined May's algorithm with quantum walks into a new attack that runs in time $\mathcal{S}^{0.19}$. However, this quantum attack requires an exponential amount of qubits. This work investigates whether the ternary LWE problem can also be solved using only $\mathcal{O}(n)$ qubits. To this end, we look closely into Dicke states, which are equal superpositions over all binary vectors with a fixed Hamming weight. Generalizing Dicke states to ternary vectors makes these states applicable to the ternary LWE problem. Bärtschi and Eidenbenz (FCT 2019) proposed a quantum circuit to prepare binary Dicke states deterministically in linear time $\mathcal{O}(n)$. Their procedure benefits from the inductive structure of Dicke states, i.e., that a Dicke state of a particular dimension can be built from Dicke states of lower dimensions. Our work proves that this inductive structure is also present in generalized Dicke states with an underlying set other than $\{0,1\}^{n}$. Utilizing this structure, we introduce a new algorithm that deterministically prepares generalized Dicke states in linear time, for which we also provide an implementation in Qiskit. Finally, we apply our generalized Dicke states to the ternary LWE problem, and construct an algorithm that requires $\mathcal{O}(n)$ qubits and classical memory space up to $\mathcal{S}^{0.22}$. We achieve $\mathcal{S}^{0.379}$ as the best obtainable time complexity.
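To make the generalized Dicke states concrete, the following sketch (an illustration; the paper's implementation is a Qiskit circuit, which this is not) writes down the classical state-vector description of an equal superposition over all ternary vectors with prescribed numbers of $(+1)$ and $(-1)$ entries:

```python
from itertools import permutations
from math import sqrt

def generalized_dicke_amplitudes(n, num_plus, num_minus):
    """State vector (as a dict basis_string -> amplitude) of the equal
    superposition over all vectors in {0,+1,-1}^n with exactly num_plus
    (+1)-entries and num_minus (-1)-entries."""
    base = ['+'] * num_plus + ['-'] * num_minus + ['0'] * (n - num_plus - num_minus)
    supports = set(permutations(base))        # all distinct arrangements
    amp = 1 / sqrt(len(supports))             # equal superposition
    return {''.join(v): amp for v in supports}

state = generalized_dicke_amplitudes(4, num_plus=1, num_minus=1)
print(len(state))   # 12 basis vectors, each with amplitude 1/sqrt(12)
```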
Last updated:  2025-03-02
An Efficient Quantum Oblivious Transfer Protocol
Sushmita Sarkar, Vikas Srivastava, Tapaswini Mohanty, Sumit Kumar Debnath, and Sihem Mesnager
Oblivious Transfer (OT) is a significant two-party privacy-preserving cryptographic primitive. OT involves a sender holding several pieces of information and a receiver holding a choice bit. The choice bit represents the piece of information that the receiver wants to obtain as the output of OT. At the end of the protocol, the sender remains oblivious to the choice bit, and the receiver remains oblivious to the contents of the information that was not chosen. OT has applications ranging from secure multi-party computation and privacy-preserving protocols to cryptographic protocols for secure communication. Most classical OT protocols are based on number-theoretic assumptions that are not quantum-secure, and existing quantum OT protocols are neither efficient nor practical. Herein, we present the design and analysis of a simple yet efficient quantum OT protocol, namely qOT. qOT is designed using the asymmetric key distribution proposed by Gao et al. [18] as a building block. The designed qOT requires only single photons as a source of quantum states, and the measurements of the states are computed using single-particle projective measurements. These features make qOT efficient and practical. Our proposed design is secure against quantum attacks. Moreover, qOT also provides long-term security.
Last updated:  2025-03-02
Blockchain-based Secure D2D localisation with adaptive precision
Gewu Bu, Bilel Zaghdoudi, Maria Potop-Butucaru, and Serge Fdida
In this paper we propose a secure best-effort methodology for providing localisation information to devices in a heterogeneous network where devices do not have access to GPS-like technology or heavy cryptographic infrastructure. Each device computes its localisation with the highest possible accuracy based solely on the data provided by its neighboring anchors. The security of the localisation is guaranteed by registering the localisation information on a distributed ledger via smart contracts. We prove the security of our solution under the adaptive chosen message attack model. We furthermore evaluate the effectiveness of our solution by measuring the average location registration time, failed requests, and total execution time, using Hyperledger Besu with QBFT consensus as the DLT case study.
Last updated:  2025-03-01
Monotone-Policy BARGs and More from BARGs and Quadratic Residuosity
Shafik Nassar, Brent Waters, and David J. Wu
A tuple of NP statements $(x_1, \ldots, x_k)$ satisfies a monotone policy $P \colon \{0,1\}^k \to \{0,1\}$ if $P(b_1,\ldots,b_k)=1$, where $b_i = 1$ if and only if $x_i$ is in the NP language. A monotone-policy batch argument (monotone-policy BARG) for NP is a natural extension of regular batch arguments (BARGs) that allows a prover to prove that $x_1, \ldots, x_k$ satisfy a monotone policy $P$ with a proof of size $\mathsf{poly}(\lambda, |\mathcal{R}|, \log k)$, where $|\mathcal{R}|$ is the size of the Boolean circuit computing the NP relation $\mathcal{R}$. Previously, Brakerski, Brodsky, Kalai, Lombardi, and Paneth (CRYPTO 2023) and Nassar, Waters, and Wu (TCC 2024) showed how to construct monotone-policy BARGs from (somewhere-extractable) BARGs for NP together with a leveled homomorphic encryption scheme (Brakerski et al.) or an additively homomorphic encryption scheme over a sufficiently-large group (Nassar et al.). In this work, we improve upon both works by showing that BARGs together with additively homomorphic encryption over any group suffice (e.g., over $\mathbb{Z}_2$). For instance, we can instantiate the additively homomorphic encryption with the classic Goldwasser-Micali encryption scheme based on the quadratic residuosity (QR) assumption. Then, by appealing to existing compilers, we also obtain a monotone-policy aggregate signature scheme from any somewhere-extractable BARG and the QR assumption.
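For readers unfamiliar with it, Goldwasser-Micali encrypts single bits and is additively homomorphic over $\mathbb{Z}_2$: multiplying two ciphertexts yields an encryption of the XOR of the underlying bits. A toy sketch with deliberately insecure parameters:

```python
import secrets
from math import gcd

# Toy Goldwasser-Micali over tiny, insecure primes (illustration only).
p, q = 499, 547
n = p * q

def legendre(a, p):
    return pow(a, (p - 1) // 2, p)   # 1 if QR mod p, p-1 if non-residue

# Public key: n and a quadratic non-residue x modulo both p and q.
x = next(a for a in range(2, n)
         if legendre(a, p) == p - 1 and legendre(a, q) == q - 1)

def encrypt(bit):
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            break
    return (r * r * pow(x, bit, n)) % n   # ciphertext is a QR iff bit == 0

def decrypt(c):
    return 0 if legendre(c % p, p) == 1 else 1

# XOR-homomorphism: Enc(b1) * Enc(b2) decrypts to b1 ^ b2.
b1, b2 = 1, 1
assert decrypt(encrypt(b1) * encrypt(b2) % n) == b1 ^ b2
```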
Last updated:  2025-03-01
Lattice-Based Post-Quantum iO from Circular Security with Random Opening Assumption (Part II: zeroizing attacks against private-coin evasive LWE assumptions)
Yao-Ching Hsieh, Aayush Jain, and Huijia Lin
Indistinguishability obfuscation (iO) stands out as a powerful cryptographic primitive but remains notoriously difficult to realize under simple-to-state, post-quantum assumptions. Recent works have proposed lattice-inspired iO constructions backed by new “LWE-with-hints” assumptions, which posit that certain distributions of LWE samples retain security despite auxiliary information. However, subsequent cryptanalysis has revealed structural vulnerabilities in these assumptions, leaving us without any post-quantum iO candidates supported by simple, unbroken assumptions. Motivated by these proposals, we introduce the \emph{Circular Security with Random Opening} (CRO) assumption—a new LWE-with-hint assumption that addresses structural weaknesses from prior assumptions, and based on our systematic examination, does not appear vulnerable to known cryptanalytic techniques. In CRO, the hints are random ``openings'' of zero-encryptions under the Gentry--Sahai--Waters (GSW) homomorphic encryption scheme. Crucially, these zero-encryptions are efficiently derived from the original LWE samples via a special, carefully designed procedure, ensuring that the openings are marginally random. Moreover, the openings do not induce any natural leakage on the LWE noises. These two features---marginally random hints and the absence of (natural) noise leakage---rule out important classes of attacks that had undermined all previous LWE-with-hint assumptions for iO. Therefore, our new lattice-based assumption for iO provides a qualitatively different target for cryptanalysis compared to existing assumptions. To build iO under this less-structured CRO assumption, we develop several new technical ideas. In particular, we devise an oblivious LWE sampling procedure, which succinctly encodes random LWE secrets and smudging noises, and uses a tailor-made homomorphic evaluation procedure to generate secure LWE samples. Crucially, all non-LWE components in this sampler, including the secrets and noises of the generated samples, are independently and randomly distributed, avoiding attacks on non-LWE components. In the second part of this work, we investigate recent constructions of obfuscation for pseudorandom functionalities. We show that the same cryptanalytic techniques used to break previous LWE-with-hints assumptions for iO (Hopkins-Jain-Lin, CRYPTO 2021) can be adapted to construct counterexamples against the private-coin evasive LWE assumptions underlying these pseudorandom obfuscation schemes. Unlike prior counterexamples for private-coin evasive LWE assumptions, our new counterexamples take the form of zeroizing attacks, contradicting the common belief that evasive-LWE assumptions circumvent zeroizing attacks by restricting to ``evasive'' or pseudorandom functionalities.
Last updated:  2025-03-01
An ETSI GS QKD compliant TLS implementation
Thomas Prévost, Bruno Martin, and Olivier Alibart
This paper presents our implementation of the Quantum Key Distribution standard ETSI GS QKD 014 v1.1.1, which required a modification of the Rustls library. We modified the TLS protocol while maintaining backward compatibility on both the client and server side. We thus wish to contribute to the effort to generalize the use of Quantum Key Distribution on the Internet. Finally, we used this library for a video conference call encrypted by QKD.
Last updated:  2025-02-28
Fair Exchange for Decentralized Autonomous Organizations via Threshold Adaptor Signatures
Ruben Baecker, Paul Gerhart, Jonathan Katz, and Dominique Schröder
A Decentralized Autonomous Organization (DAO) enables multiple parties to collectively manage digital assets in a blockchain setting. We focus on achieving fair exchange between DAOs using a cryptographic mechanism that operates with minimal blockchain assumptions and, crucially, does not rely on smart contracts. Specifically, we consider a setting where a DAO consisting of $n_\mathsf{S}$ sellers holding shares of a witness $w$ interacts with a DAO comprising $n_\mathsf{B}$ buyers holding shares of a signing key $sk$; the goal is for the sellers to exchange $w$ for a signature under $sk$ transferring a predetermined amount of funds. Fairness is required to hold both between DAOs (i.e., ensuring that each DAO receives its asset if and only if the other does) as well as within each DAO (i.e., ensuring that all members of a DAO receive their asset if and only if every other member does). We formalize these fairness properties and present an efficient protocol for DAO-based fair exchange under standard cryptographic assumptions. Our protocol leverages certified witness encryption and threshold adaptor signatures, two primitives of independent interest that we introduce and show how to construct efficiently.
Last updated:  2025-02-28
Generic Composition: From Classical to Quantum Security
Nathalie Lang, Jannis Leuther, and Stefan Lucks
Authenticated encryption (AE) provides both authenticity and privacy. Starting with Bellare and Namprempre's work in 2000, the Encrypt-then-MAC composition of an encryption scheme for privacy and a MAC for authenticity has become a well-studied and common approach. This work investigates the security of the Encrypt-then-MAC composition in a quantum setting, which means that adversarial queries as well as the responses to those queries may be in superposition. We demonstrate that the Encrypt-then-MAC composition of a chosen-plaintext (IND-qCPA) secure symmetric encryption scheme SE and a plus-one unforgeable MAC fails to achieve chosen-ciphertext (IND-qCCA) security. On the other hand, we show that it suffices to choose a quantum pseudorandom function (qPRF) as the MAC. Namely, the Encrypt-then-MAC composition of SE and a qPRF is IND-qCCA secure. The same holds for the Encrypt-and-MAC composition of SE and a qPRF.
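For reference, the (classical) composition under analysis looks as follows. This standard-library sketch uses a toy SHA-256 counter-mode keystream as a stand-in for the encryption scheme SE and HMAC as a stand-in for the MAC/qPRF; it does not, of course, model superposition queries:

```python
import hmac, hashlib, secrets

def stream(key, nonce, length):
    """Toy keystream from SHA-256 in counter mode (stand-in for SE)."""
    out = b''
    for ctr in range((length + 31) // 32):
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, 'big')).digest()
    return out[:length]

def encrypt_then_mac(enc_key, mac_key, msg):
    nonce = secrets.token_bytes(16)
    ct = nonce + bytes(m ^ k for m, k in zip(msg, stream(enc_key, nonce, len(msg))))
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()   # MAC over the ciphertext
    return ct, tag

def decrypt(enc_key, mac_key, ct, tag):
    if not hmac.compare_digest(tag, hmac.new(mac_key, ct, hashlib.sha256).digest()):
        raise ValueError("invalid tag")                    # reject before decrypting
    nonce, body = ct[:16], ct[16:]
    return bytes(c ^ k for c, k in zip(body, stream(enc_key, nonce, len(body))))

ek, mk = secrets.token_bytes(32), secrets.token_bytes(32)
ct, tag = encrypt_then_mac(ek, mk, b"hello quantum world")
assert decrypt(ek, mk, ct, tag) == b"hello quantum world"
```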
Last updated:  2025-02-28
How Small Can S-boxes Be
Chenhao Jia, Tingting Cui, Qing Ling, Yan He, Kai Hu, Yu Sun, and Meiqin Wang
S-boxes are the most popular nonlinear building blocks used in symmetric-key primitives. Both the cryptographic properties and the implementation cost of an S-box are crucial for a good cipher design, especially for lightweight ones. This paper aims to determine the exact minimum area of optimal 4-bit S-boxes (whose differential uniformity and linearity are both 4) under a given standard cell library. Firstly, we evaluate upper and lower bounds on the minimum area of S-boxes by proposing a Prim-like greedy algorithm and utilizing properties of balanced Boolean functions to construct bijective S-boxes. Secondly, we propose an SAT-aided automatic search tool that can simultaneously consider multiple cryptographic properties, such as differential uniformity, linearity, and algebraic degree, and implementation costs, such as area and gate-depth complexity. Thirdly, thanks to our tool, we manage to find the exact minimum area for different types of 4-bit S-boxes. The measurements in this paper use the gate equivalent (GE) as the standard unit under the UMC 180 nm library, and all 2/3/4-input logic gates are taken into consideration. Our results show that the minimum area of an optimal 4-bit S-box is 11 GE with depth 3. If we do not use 4-input gates, this minimum area increases to 12 GE and the depth to 4, which is the same if we only use 2-input gates. If we further require that the S-boxes have no fixed points, the minimum area increases slightly to 12.33 GE while keeping the same depth. Interestingly, the same results are obtained for non-optimal 4-bit bijective S-boxes as long as their differential uniformity $\mathcal{U}(S) < 16$ and linearity $\mathcal{L}(S) < 8$ (i.e., there are no non-trivial linear structures), if only 2-input and 3-input gates are used; the minimum area reduces to 9 GE if 4-input gates are involved. More strictly, if we require the algebraic degree of all coordinate functions of optimal S-boxes to be 3, the minimum area is 14 GE with fixed points allowed and 14.33 GE without fixed points, and the depth increases sharply to 8. Besides determining the exact minimum area, our tool is also useful for searching for better implementations of existing S-boxes. As a result, we find an implementation of Keccak's 5-bit S-box with 17 GE. By contrast, the designers' original circuit has an area of 23.33 GE, while the optimized result by Lu et al. achieves an area of 17.66 GE. We also find the first optimized implementation of SKINNY's 8-bit S-box, with 26.67 GE.
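Both properties defining an "optimal" 4-bit S-box can be checked by brute force in a few lines. The sketch below (ours) follows the convention of measuring linearity as the maximal deviation of a linear-approximation count from 8, and uses the PRESENT S-box, a well-known optimal S-box, as a test vector:

```python
PRESENT = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]

def parity(v):
    return bin(v).count('1') & 1

def differential_uniformity(s):
    """Max number of x with S(x ^ a) ^ S(x) == b, over a != 0."""
    return max(
        max(sum(1 for x in range(16) if s[x ^ a] ^ s[x] == b) for b in range(16))
        for a in range(1, 16))

def linearity(s):
    """Max deviation |#{x : a.x = b.S(x)} - 8| over masks a, b with b != 0."""
    return max(
        abs(sum(1 for x in range(16) if parity(a & x) == parity(b & s[x])) - 8)
        for a in range(16) for b in range(1, 16))

print(differential_uniformity(PRESENT), linearity(PRESENT))  # 4 4
```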
Last updated:  2025-03-08
MERCURY: A multilinear Polynomial Commitment Scheme with constant proof size and no prover FFTs
Liam Eagen and Ariel Gabizon
We construct a pairing-based polynomial commitment scheme for multilinear polynomials of size $n$ where constructing an opening proof requires $O(n)$ field operations and $2n+O(\sqrt n)$ scalar multiplications. Moreover, the opening proof consists of a constant number of field elements. This is a significant improvement over previous works, which require either $O(n\log n)$ field operations or an opening proof of size $O(\log n)$. The main technical component is a new method of verifiably folding a witness via univariate polynomial division. As opposed to previous methods, the proof size and prover time remain constant *regardless of the folding factor*.
Last updated:  2025-02-28
Optimizing Final Exponentiation for Pairing-Friendly Elliptic Curves with Odd Embedding Degrees Divisible by 3
Walid Haddaji, Loubna Ghammam, Nadia El Mrabet, and Leila Ben Abdelghani
In pairing-based cryptography, final exponentiation with a large fixed exponent is crucial for ensuring unique outputs in Tate and optimal Ate pairings. While optimizations for elliptic curves with even embedding degrees have been well-explored, progress for curves with odd embedding degrees, particularly those divisible by $3$, has been more limited. This paper presents new optimization techniques for computing the final exponentiation of the optimal Ate pairing on these curves. The first exploits the fact that some existing seeds have a form enabling cyclotomic cubing and extends this to generate new seeds with the same form. The second is to generate new seeds with sparse ternary representations, replacing squaring with cyclotomic cubing. The first technique improves efficiency by $1.7\%$ and $1.5\%$ compared to the square and multiply (\textbf{SM}) method for existing seeds at $192$-bit and $256$-bit security levels, respectively. For newly generated seeds, it achieves efficiency gains of $3.6\%$ at $128$-bit, $5\%$ at $192$-bit, and $8.5\%$ at $256$-bit security levels. The second technique improves efficiency by $3.3\%$ at $128$-bit, $19.5\%$ at $192$-bit, and $4.3\%$ at $256$-bit security levels.
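The second technique is, at its core, a cube-and-multiply exponentiation: with a sparse signed-ternary exponent, almost every step is a (cheap) cyclotomic cubing, and only the rare non-zero digits cost a multiplication or a division. A generic sketch over the integers modulo a prime, where ordinary cubing and modular inversion stand in for cyclotomic cubing and conjugation:

```python
def signed_ternary(e):
    """Signed base-3 digits of e (least significant first), digits in {-1, 0, 1}."""
    digits = []
    while e:
        r = e % 3
        if r == 2:
            r = -1
            e += 1
        digits.append(r)
        e //= 3
    return digits

def cube_and_multiply(x, e, mod):
    """x^e mod `mod` via cubing; inversion stands in for cheap conjugation."""
    acc = 1
    for d in reversed(signed_ternary(e)):
        acc = pow(acc, 3, mod)               # cyclotomic cubing in the real setting
        if d == 1:
            acc = acc * x % mod
        elif d == -1:
            acc = acc * pow(x, -1, mod) % mod
    return acc

assert cube_and_multiply(7, 1234567, 10**9 + 7) == pow(7, 1234567, 10**9 + 7)
```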
Last updated:  2025-02-28
Pencil: A Domain-Extended PRF with Full $n$-bit Security \\ for Strengthening GCM and More
Ritam Bhaumik and Jean Paul Degabriele
We consider the problem of constructing efficient pseudorandom functions with Beyond-Birthday-Bound (BBB) security from blockciphers. More specifically, we are interested in variable-output-length pseudorandom functions (PRF) whose domain is twice that of the underlying blockcipher. We present two such constructions, $\textsf{Pencil}$ and $\sharp\textsf{Pencil}$, which provide weak PRF and full PRF security, respectively, where both achieve full $n$-bit security. While several recent works have focused on constructing BBB PRFs from blockciphers, much less attention has been given to weak PRF constructions, which can potentially be constructed more efficiently and still serve as a useful primitive. Another understudied problem in this domain is that of extending the domain of a BBB PRF, which turns out to be rather challenging. Besides being of theoretical interest in itself, this is also a very practical problem. Often, the input to the BBB PRF is a nonce, but random nonces are much easier to handle in practice as they do not require maintaining state---which can be very cumbersome in distributed systems and encrypted cloud storage. Accordingly, in order to maintain a BBB security bound, one requires random nonces between $1.5n$ and $2n$ bits long and corresponding BBB (weak) PRF constructions that admit matching input sizes. NIST has recently announced a pre-draft call for comments to standardise AEAD schemes that can encrypt larger amounts of data and admit larger nonces. The call lists two approaches. The first is to define an analogue of GCM using a 256-bit blockcipher, and the second is based on a recent proposal by Gueron to extend GCM with a key derivation function (KDF) called DNDK to increase its security. In essence, DNDK is a BBB-secure expanding weak pseudorandom function with a domain size of 192 bits that is realised from AES. Our work makes relevant contributions to this domain in two important ways. Firstly, an immediate consequence of our work is that one can construct a GCM analogue with BBB security from $\sharp\textsf{Pencil}$, without resorting to a 256-bit blockcipher. Our second contribution is that $\sharp\textsf{Pencil}$ can be used as a KDF in combination with GCM in an analogous manner to DNDK-GCM. However, $\sharp\textsf{Pencil}$ being a full PRF, as opposed to DNDK which is only a weak PRF, allows one to prove the KDF-GCM composition secure as an AEAD scheme. Finally, when contrasting $\textsf{Pencil}$ and DNDK as weak PRFs with comparable parameters, our construction requires only half the blockcipher calls.
Last updated:  2025-02-28
On the Security and Privacy of CKKS-based Homomorphic Evaluation Protocols
Intak Hwang, Seonhong Min, Jinyeong Seo, and Yongsoo Song
CKKS is a homomorphic encryption (HE) scheme that supports arithmetic over complex numbers in an approximate manner. Despite its utility in privacy-preserving machine learning (PPML) protocols, formally defining the security of CKKS-based protocols is challenging due to its approximate nature. To be precise, in a sender-receiver model, where the receiver holds input ciphertexts and the sender evaluates its private circuit, it is difficult to define the sender's privacy in terms of indistinguishability, whereas the receiver's privacy is easily achieved through the semantic security of CKKS. In this paper, we present a new definition for CKKS-based protocols, called Differentially Private Homomorphic Evaluation (DPHE) protocols, along with a general method to achieve it. In our definition, we relax the sender's privacy condition from indistinguishability to a differential privacy notion. We focus on the fact that the main security concern for PPML protocols is differential privacy of the evaluation results, rather than the simulatability of the evaluation. We prove that if the ideal functionality satisfies differential privacy and a protocol satisfies DPHE, then the output of the protocol also satisfies differential privacy. Next, we provide a general compiler that transforms a plain CKKS-based protocol into a DPHE one. We achieve this by combining the Laplace mechanism and a zero-knowledge argument of knowledge (ZKAoK) for CKKS. This approach allows us to achieve the sender's privacy with moderate noise, whereas the previous indistinguishability-based approach requires exponentially large overhead. Finally, we provide a concrete instantiation of a ZKAoK for CKKS in the form of a PIOP. To prove the well-formedness of CKKS ciphertexts and public keys, we devise new proof techniques that use homomorphic evaluation during verification. We also provide an implementation to demonstrate the practicality of our ZKAoK for CKKS by compiling the PIOPs using the HSS polynomial commitment scheme (Crypto'24).
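The Laplace mechanism used in the compiler is the textbook differential-privacy tool: to release $f(x)$ with $\epsilon$-DP, add Laplace noise with scale equal to the sensitivity of $f$ divided by $\epsilon$. A minimal numpy sketch:

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=np.random.default_rng()):
    """Release `value` with epsilon-DP by adding Laplace(sensitivity/epsilon) noise."""
    return value + rng.laplace(scale=sensitivity / epsilon)

# Example: a counting query (sensitivity 1) released with epsilon = 0.5.
print(laplace_mechanism(42, sensitivity=1.0, epsilon=0.5))
```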
Last updated:  2025-02-27
Faster FHEW Bootstrapping with Adaptive Key Update
Qi Zhang, Mingqiang Wang, and Xiaopeng Cheng
Lee et al. proposed a new bootstrapping algorithm based on homomorphic automorphism, which merges the empty sets of ciphertexts by adjusting the window size. This algorithm supports arbitrary secret key distributions with no additional runtime costs while using small evaluation keys. However, our implementation reveals that once the window size exceeds a certain threshold, the time required for bootstrapping remains relatively constant. This observation prompts the question of how to further reduce the running time. To address this challenge, we introduce a new trick called Adaptive Key Update (AKU). With AKU and automorphism techniques, we propose a new bootstrapping algorithm for Gaussian secret keys that requires only $n$ external products and no switching for blind rotation. Building on this, we employ window size optimization and key switching techniques to further improve the algorithm. The improved algorithm provides a useful trade-off between key storage and computational efficiency, depending on the choice of the window size. Compared to the current fastest FHEW bootstrapping method for Gaussian secret keys (the LLWW+ method proposed by Li et al.), our AKU-based algorithm reduces the number of key-switching operations by 76% and decreases the running time of bootstrapping by 20.7%. At this point, the practical runtime for bootstrapping is approximately equal to that of performing only $n$ external products.
Last updated:  2025-02-27
A New Generalized Attack on RSA-like Cryptosystems
Michel Seck, Oumar Niang, and Djiby Sow
Rivest, Shamir, and Adleman published the RSA cryptosystem in 1978, and it has been widely used over the last four decades. The security of RSA is based on the difficulty of factoring large integers $N = pq$, where $p$ and $q$ are prime numbers. The public exponent $e$ and the private exponent $d$ are related by the equation $ed - k(p-1)(q-1) = 1$. Recently, Cotan and Te{\c{s}}eleanu (NordSec 2023) introduced a variant of RSA, where the public exponent $e$ and the private exponent $d$ satisfy the equation $ed - k(p^n-1)(q^n-1) = 1$ for some positive integer $n$. In this paper, we study the general equation $eu - (p^n - 1)(q^n - 1)v = w$ with positive integers $u$ and $v$, and $w\in \mathbb{Z}$. We show that, given the public parameters $N$ and $e$, one can recover $u$ and $v$ and factor the modulus $N$ in polynomial time, under specific conditions on $u$, $v$, and $w$, by combining continued fractions with Coppersmith's algorithm, which relies on lattice reduction techniques. Furthermore, we show that if the private exponent $d$ in an RSA-like cryptosystem is either small or too large, then $N$ can be factored in polynomial time. This attack applies to the standard RSA cryptosystem.
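The continued-fraction half of such attacks follows the classic Wiener template: for suitable parameter sizes, the unknown ratio appears among the convergents of a fraction computable from the public key. A generic sketch of the convergent enumeration (the Coppersmith step is omitted), using toy values from a textbook Wiener-attack example:

```python
def convergents(num, den):
    """Yield continued-fraction convergents (numerator, denominator) of num/den."""
    h0, h1 = 0, 1   # numerator recurrence   h_k = q_k*h_{k-1} + h_{k-2}
    k0, k1 = 1, 0   # denominator recurrence k_k = q_k*k_{k-1} + k_{k-2}
    a, b = num, den
    while b:
        q, r = divmod(a, b)
        h0, h1 = h1, q * h1 + h0
        k0, k1 = k1, q * k1 + k0
        yield h1, k1
        a, b = b, r

# Wiener-style use: when the unknowns are small enough, their ratio appears
# among the convergents of e/N (here e/N^n for the generalized equation);
# each candidate is then tested, e.g. via a derived phi-candidate or the
# Coppersmith step. Toy textbook values with d = 5:
e, N = 17993, 90581
for v, u in convergents(e, N):
    print(v, u)   # the secret ratio k/d = 1/5 shows up as the second convergent
```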
Last updated:  2025-02-27
A Complete Security Proof of SQIsign
Marius A. Aardal, Andrea Basso, Luca De Feo, Sikhar Patranabis, and Benjamin Wesolowski
SQIsign is the leading digital signature scheme based on isogenies. Despite the many improvements that have appeared in the literature, all of its recent variants lack a complete security proof. In this work, we provide the first full security proof of SQIsign, as submitted to the second round of NIST's on-ramp track for digital signatures. To do so, we introduce a new framework, which we call Fiat-Shamir with hints, that captures all those protocols where the simulator needs additional information to simulate a transcript. Using this framework, we show that SQIsign is EUF-CMA secure in the ROM, assuming the hardness of the One Endomorphism problem with hints, or the hardness of the Full Endomorphism Ring problem with hints together with a hint indistinguishability assumption; all assumptions, unlike previous ones in the literature, are non-interactive. Along the way, we prove several intermediate results that may be of independent interest.
Last updated:  2025-02-27
Side-Channel and Fault Injection Attacks on VOLEitH Signature Schemes: A Case Study of Masked FAEST
Sönke Jendral and Elena Dubrova
Ongoing efforts to transition to post-quantum secure public-key cryptosystems have created the need for algorithms with a variety of performance characteristics and security assumptions. Among the candidates in NIST's post-quantum standardisation process for additional digital signatures is FAEST, a Vector Oblivious Linear Evaluation in-the-Head (VOLEitH)-based scheme, whose security relies on the one-wayness of the Advanced Encryption Standard (AES). The VOLEitH paradigm enables competitive performance and signature sizes under conservative security assumptions. However, since it was introduced recently, in 2023, its resistance to physical attacks has not yet been analysed. In this paper, we present the first security analysis of VOLEitH-based signature schemes in the context of side-channel and fault injection attacks. We demonstrate four practical attacks on a masked implementation of FAEST on ARM Cortex-M4 capable of recovering the full secret key with high probability (greater than 0.87) from a single signature. These attacks exploit vulnerabilities of components specific to VOLEitH schemes and FAEST, such as the all-but-one vector commitments, VOLE generation, and AES proof generation. Finally, we propose countermeasures to mitigate these attacks and enhance the physical security of VOLEitH-based signature schemes.
Last updated:  2025-02-27
HiAE: A High-Throughput Authenticated Encryption Algorithm for Cross-Platform Efficiency
Han Chen, Tao Huang, Phuong Pham, and Shuang Wu
This paper addresses the critical challenges in designing cryptographic algorithms that achieve both high performance and cross-platform efficiency on ARM and x86 architectures, catering to the demanding requirements of next-generation communication systems, such as 6G and GPU/NPU interconnections. We propose HiAE, a high-throughput authenticated encryption algorithm optimized for performance exceeding 100 Gbps and designed to meet the stringent security requirements of future communication networks. HiAE leverages the stream cipher structure, integrating the AES round function for non-linear diffusion. Our design achieves exceptional efficiency, with benchmark results from software implementations across various platforms showing over 180 Gbps on ARM devices in AEAD mode, making it the fastest AEAD solution on ARM chips.
Last updated:  2025-02-27
Another Look at the Quantum Security of the Vectorization Problem with Shifted Inputs
Paul Frixons, Valerie Gilchrist, Péter Kutas, Simon-Philipp Merz, and Christophe Petit
Cryptographic group actions provide simple post-quantum generalizations to many cryptographic protocols based on the discrete logarithm problem (DLP). However, many advanced group action-based protocols do not solely rely on the core group action problem (the so-called vectorization problem), but also on variants of this problem, to either improve efficiency or enable new functionalities. In particular, the security of the CSI-SharK threshold signature protocol relies on the Vectorization Problem with Shifted Inputs where (in DLP formalism) the adversary not only receives $g$ and $g^x$, but also $g^{xc}$ for multiple known values of $c$. A natural open question is then whether the extra data provided to the adversary in this variant allows for more efficient attacks. In this paper, we revisit the concrete quantum security of this problem. We start from a quantum multiple hidden shift algorithm of Childs and van Dam, which, to the best of our knowledge, had never been applied in cryptography before. We specify algorithms for its subroutines and we provide concrete complexity estimates for both these subroutines and the overall algorithm. We then apply our analysis to the CSI-SharK protocol. In prior analyses based on Kuperberg's algorithms, group action evaluations contributed to a significant part of the overall T-gate cost. For CSI-SharK's suggested parameters, our new approach requires significantly fewer calls to the group action evaluation subroutine, leading to significant T-gate complexity improvements overall. We also show that the quantum security of the protocol decreases when the number of public keys increases, and quantify this degradation. Beyond its direct application to the CSI-SharK protocol, our work more generally questions the quantum security of vectorization problem variants, and it introduces the Childs-van Dam algorithm as a new quantum cryptanalysis tool.
Last updated:  2025-02-27
Evasive LWE: Attacks, Variants & Obfustopia
Shweta Agrawal, Anuja Modi, Anshu Yadav, and Shota Yamada
Evasive LWE (Wee, Eurocrypt 2022 and Tsabary, Crypto 2022) is a recently introduced, popular lattice assumption which has been used to tackle long-standing problems in lattice-based cryptography. In this work, we develop new counter-examples against Evasive LWE, in both the private and public-coin regime, propose counter-measures that define safety zones, and finally explore modifications to construct full compact FE/iO.
Attacks: Our attacks are summarized as follows.
- The recent work by Hsieh, Lin and Luo [HLL23] constructed the first ABE for unbounded depth circuits by relying on the (public coin) ''circular'' evasive LWE assumption, which incorporates circularity into the Evasive LWE assumption. We provide a new attack against this assumption by exhibiting a sampler such that the pre-condition is true but the post-condition is false.
- We demonstrate a counter-example against public-coin evasive LWE which exploits the freedom to choose the error distributions in the pre- and post-conditions. Our attack crucially relies on the error in the pre-condition being larger than the error in the post-condition.
- The recent work by Agrawal, Kumari and Yamada [AKY24a] constructed the first functional encryption scheme for pseudorandom functionalities ($\mathsf{prFE}$) and extended this to obfuscation for pseudorandom functionalities ($\mathsf{prIO}$) [AKY24c] by relying on private-coin evasive LWE. We provide a new attack against the stated assumption.
- The recent work by Branco et al. [BDJ+24] (concurrent to [AKY24c]) provides a construction of obfuscation for pseudorandom functionalities by relying on private-coin evasive LWE. By adapting the counter-example against [AKY24a], we provide an attack against this assumption.
- Branco et al. [BDJ+24] showed that there exist contrived, somehow ''self-referential'', classes of pseudorandom functionalities for which pseudorandom obfuscation cannot exist. We develop an analogous result in the setting of pseudorandom functional encryption.
While Evasive LWE was developed specifically to avoid zeroizing attacks as discussed above, our attacks show that in some (contrived) settings, the adversary may nevertheless obtain terms in the zeroizing regime.
Counter-measures: Guided by the lessons distilled from the above attacks, we develop counter-measures to prevent them. Our interpretation of the above attacks is that Evasive LWE, as defined, is too general -- we suggest restrictions to identify safe zones for the assumption, using which the broken applications can be recovered.
Variants to give full FE and iO: Finally, we show that certain modifications of Evasive LWE, which respect the counter-measures developed above, yield full compact FE in the standard model. We caution that the main goal of presenting these candidates is as targets for cryptanalysis, to further our understanding of this regime of assumptions.
Last updated:  2025-03-04
Simple and General Counterexamples for Private-Coin Evasive LWE
Nico Döttling, Abhishek Jain, Giulio Malavolta, Surya Mathialagan, and Vinod Vaikuntanathan
We present a simple counterexample to all known variants of the private-coin evasive learning with errors (LWE) assumption. Unlike prior works, our counterexample is direct, it does not use heavy cryptographic machinery (such as obfuscation or witness encryption), and it applies to all variants of the assumption. Our counterexample can be seen as a "zeroizing" attack against evasive LWE, calling into question the soundness of the underlying design philosophy.
Last updated:  2025-02-26
Split Prover Zero-Knowledge SNARKs
Sanjam Garg, Aarushi Goel, Dimitris Kolonelos, Sina Shiehian, and Rohit Sinha
We initiate the study of {\em split prover zkSNARKs}, which allow Alice to offload part of the zkSNARK computation to her assistant, Bob. In scenarios like online transactions (e.g., zCash), a significant portion of the witness (e.g., membership proofs of input coins) is often available to the prover (Alice) before the transaction begins. This setup offers Alice an opportunity to initiate the proof computation early, even before the entire witness is available. The remaining computation can then be delegated to Bob, who can complete it once the final witness (e.g., the transaction amount) is known. To prevent Bob from generating proofs independently (e.g., initiating unauthorized transactions), it is essential that the data provided to him for the second phase of computation does not reveal the witness used in the first phase. Additionally, the verifier of the zkSNARK should be unable to determine whether the proof was generated solely by Alice or through this two-step process. To achieve this efficiently, we require this two-phase proof generation to only use cryptography in a black-box manner. We propose a split prover zkSNARK based on the Groth16 zkSNARK [Groth, EUROCRYPT 2016], meeting all these requirements. Our solution is also \emph{asymptotically tight}, meaning it achieves the optimal second-phase proof generation time for Groth16. Importantly, our split prover zkSNARK preserves the verification algorithm of the original Groth16 zkSNARK, enabling seamless integration into existing deployments of Groth16.
Last updated:  2025-02-26
KLPT²: Algebraic Pathfinding in Dimension Two and Applications
Wouter Castryck, Thomas Decru, Péter Kutas, Abel Laval, Christophe Petit, and Yan Bo Ti
Following Ibukiyama, Katsura and Oort, all principally polarized superspecial abelian surfaces over $\overline{\mathbb{F}}_p$ can be represented by a certain type of $2 \times 2$ matrix $g$, having entries in the quaternion algebra $B_{p,\infty}$. We present a heuristic polynomial-time algorithm which, upon input of two such matrices $g_1, g_2$, finds a "connecting matrix" representing a polarized isogeny of smooth degree between the corresponding surfaces. Our algorithm should be thought of as a two-dimensional analog of the KLPT algorithm from 2014 due to Kohel, Lauter, Petit and Tignol for finding a connecting ideal of smooth norm between two given maximal orders in $B_{p,\infty}$. The KLPT algorithm has proven to be a versatile tool in isogeny-based cryptography, and our analog has similar applications; we discuss two of them in detail. First, we show that it yields a polynomial-time solution to a two-dimensional analog of the so-called constructive Deuring correspondence: given a matrix $g$ representing a superspecial principally polarized abelian surface, realize the latter as the Jacobian of a genus-$2$ curve (or, exceptionally, as the product of two elliptic curves if it concerns a product polarization). Second, we show that, modulo a plausible assumption, Charles-Goren-Lauter style hash functions from superspecial principally polarized abelian surfaces require a trusted set-up. Concretely, if the matrix $g$ associated with the starting surface is known then collisions can be produced in polynomial time. We deem it plausible that all currently known methods for generating a starting surface indeed reveal the corresponding matrix. As an auxiliary tool, we present an explicit table for converting $(2,2)$-isogenies into the corresponding connecting matrix, a step for which a previous method by Chu required super-polynomial (but sub-exponential) time.
Last updated:  2025-02-26
Functional Oblivious Transfer with Applications in Privacy-Preserving Machine Learning
Aydin Abadi and Mohammad Naseri
Oblivious Transfer (OT) is a fundamental cryptographic primitive introduced nearly four decades ago. OT allows a receiver to select and learn $t$ out of $n$ private messages held by a sender. It ensures that the sender does not learn which specific messages the receiver has chosen, while the receiver gains no information about the remaining $n − t$ messages. In this work, we introduce the notion of functional OT (FOT), for the first time. FOT adds a layer of security to the conventional OT by ensuring that the receiver only learns a function of the selected messages rather than the $t$ individual messages themselves. We propose several protocols that realize this concept. In particular, we propose concrete instantiations of FOT when the function to be executed on the selected message is mean, mode, addition, or multiplication. The schemes are efficient and unconditionally secure. We also propose a non-trivial protocol that supports arbitrary functions on the selected messages mainly using fully homomorphic encryption (FHE) and oblivious linear function evaluation, where the number of FHE invocations is constant $O(1)$ with respect to $n$. Our asymptotic and concrete cost analyses demonstrate the efficiency of our unconditionally secure FOT protocols. FOT can enhance the security of privacy-preserving machine learning, particularly in (i) K-Nearest Neighbors schemes and (ii) client selection in Federated Learning (FL).
Last updated:  2025-02-26
Simple Public Key Anamorphic Encryption and Signature using Multi-Message Extensions
Shalini Banerjee, Tapas Pal, Andy Rupp, and Daniel Slamanig
Anamorphic encryption (AE) considers secure communication in the presence of a powerful surveillant (typically called a ''dictator'') who only allows certain cryptographic primitives and knows all the secret keys in a system. The basic idea is that there is a second (anamorphic) mode of encryption that allows transmitting an anamorphic message under a double key to a receiver that can decrypt this message using the double key. From the point of view of the dictator, the encryption keys as well as the ciphertexts in the regular and anamorphic modes are indistinguishable. The most recent works in this field consider public key anamorphic encryption (PKAE), i.e., the sender of an anamorphic message requires an encryption double key (or no key at all) and the receiver requires a decryption double key. Known constructions, however, either work only for schemes that are mostly of theoretical interest or come with conceptual limitations. In this paper we ask whether we can design such PKAE schemes without these limitations that are closer to PKE schemes used in practice. In fact, such schemes are more likely to be allowed by a cognizant dictator. Moreover, we initiate the study of identity-based anamorphic encryption (IBAE), as the IBE setting seems to be a natural choice for a dictator. For both PKAE and IBAE, we show how well-known IND-CPA and IND-CCA secure primitives can be extended by an anamorphic encryption channel. In contrast to previous work, we additionally consider CCA (rather than just CPA) security notions for the anamorphic channel and also build upon CPA (rather than just CCA) secure PKE. Finally, we ask whether it is possible to port the recent concept of anamorphic signatures, which considers constructing symmetric anamorphic channels in case only signature schemes are allowed by the dictator, to the asymmetric setting, which we denote by public-key anamorphic signatures (PKAS). Here too, we consider security beyond IND-CPA for the anamorphic channel.
Last updated:  2025-02-26
Higher Residuosity Attacks on Small RSA Subgroup Decision Problems
Xiaopeng Zhao, Zhenfu Cao, Xiaolei Dong, and Zhusen Liu
Secure two-party comparison, known as Yao's millionaires' problem, has been a fundamental challenge in privacy-preserving computation. It enables two parties to compare their inputs without revealing the exact values of those inputs or relying on any trusted third party. One elegant approach to secure computation is based on homomorphic encryption. Recently, building on this approach, Carlton et al. (CT-RSA 2018) and Bourse et al. (CT-RSA 2020) presented novel solutions for the problem of secure integer comparison. These protocols have demonstrated significantly improved performance compared to the well-known and frequently used DGK protocol (ACISP 2007 and Int. J. Appl. Cryptogr. 1(4),323–324, 2009). In this paper, we introduce a class of higher residuosity attacks, which can be regarded as an extension of the classical quadratic residuosity attack on the decisional Diffie-Hellman problem. We demonstrate that the small RSA subgroup decision problems, upon which both the CEK and BST protocols are based, are not difficult to solve when the prime base $p_0$ is small (e.g., \( p_0 < 100 \)). Under these conditions, the protocols achieve optimal overall performance. Furthermore, we offer recommendations for precluding such attacks, including one approach that does not adversely affect performance. We hope that these attacks can be applied to analyze other number-theoretic hardness assumptions.
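The classical quadratic residuosity attack being generalized here is easy to demonstrate: in the full group $\mathbb{Z}_p^*$, the Legendre symbol of $g^x$ leaks the parity of $x$ whenever $g$ is a non-residue, so the symbol of the DDH value $g^{xy}$ is predictable from the public values. A toy sketch:

```python
import secrets

# Quadratic-residuosity attack on DDH in the full group Z_p^* (toy parameters).
p = 2**127 - 1   # a Mersenne prime; far too small for real use
g = 3            # a quadratic non-residue mod this p, so Legendre(g^x) = (-1)^x

def legendre(a):
    return pow(a, (p - 1) // 2, p)   # Euler's criterion: 1 if QR, p-1 if not

def ddh_distinguisher(X, Y, Z):
    """Return True if (X, Y, Z) is consistent with Z = g^(xy)."""
    x_odd = legendre(X) != 1
    y_odd = legendre(Y) != 1
    return (legendre(Z) != 1) == (x_odd and y_odd)   # xy is odd iff both are odd

x, y = secrets.randbelow(p - 1), secrets.randbelow(p - 1)
assert ddh_distinguisher(pow(g, x, p), pow(g, y, p), pow(g, x * y, p))
# A real tuple always passes; a random Z passes only about half the time.
```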
Last updated:  2025-02-26
Polynomial Secret Sharing Schemes and Algebraic Matroids
Amos Beimel, Oriol Farràs, and Adriana Moya
In a secret sharing scheme with polynomial sharing, the secret is an element of a finite field, and the shares are obtained by evaluating polynomials on the secret and some random field elements, i.e., for every party there is a set of polynomials that computes the share of the party. These schemes generalize the linear ones, adding more expressivity and giving room for more efficient schemes. To identify the access structures for which this efficiency gain is relevant, we need a systematic method to identify the access structure of polynomial schemes; i.e., to identify which sets can reconstruct the secret in the scheme. As a first step, we study ideal polynomial secret sharing schemes where there is a single polynomial for each party. Ideal schemes have optimal share size because the size of each share is the size of the secret. Our goal is to generalize results of linear secret sharing schemes, i.e., schemes in which the shares are computed by applying linear mappings and the linear dependency of these mappings determines their access structures. To achieve this goal, we study the connection between the algebraic dependency of the sharing polynomials and the access structure of the polynomial scheme. Our first result shows that if the degree of the sharing polynomials is not too big compared to the size of the field, then the algebraic dependence of the sharing polynomials determines the access structure of the scheme. This contributes to the characterization of ideal polynomial schemes and establishes a new connection between families of ideal schemes and algebraic matroids. Conversely, we ask the question: If we associate a polynomial with each party and the dealer, can we use these polynomials to realize the access structure determined by the algebraic dependency of the polynomials? Our second result shows that these access structures admit statistical schemes with small shares. Finally, we extend this result to the general case where each party may have more than one polynomial.
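The prototypical ideal scheme with polynomial sharing is Shamir's, where the share of each party is a single polynomial evaluated on the secret and random field elements. A minimal sketch over a prime field, for orientation:

```python
import secrets

P = 2**61 - 1   # prime field GF(P)

def share(secret, n, t):
    """Shamir (t-of-n): shares are evaluations of a degree-(t-1) polynomial
    with constant term `secret` -- one polynomial per party, and each share
    is a single field element, so the scheme is ideal."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 over GF(P)."""
    s = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        s = (s + yi * num * pow(den, -1, P)) % P
    return s

shares = share(123456789, n=5, t=3)
assert reconstruct(shares[:3]) == 123456789
```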
Last updated:  2025-02-26
Partial Lattice Trapdoors: How to Split Lattice Trapdoors, Literally
Martin R. Albrecht, Russell W. F. Lai, Oleksandra Lapiha, and Ivy K. Y. Woo
Lattice trapdoor algorithms allow us to sample hard random lattices together with their trapdoors, given which short lattice vectors can be sampled efficiently. This enables a wide range of advanced cryptographic primitives. In this work, we ask: can we distribute lattice trapdoor algorithms non-interactively? We study a natural approach to sharing lattice trapdoors: splitting them into partial trapdoors for different lower-rank sublattices which allow the local sampling of short sublattice vectors. Given sufficiently many short sublattice vectors, these can then be combined to yield short vectors in the original lattice. Moreover, this process can be repeated an unbounded polynomial number of times without needing a party holding a full trapdoor to intervene. We further define one-wayness and indistinguishability properties for partial trapdoors. We establish that such objects exist that have non-trivial performance under standard assumptions. Specifically, we prove these properties for a simple construction from the κ-SIS and κ-LWE assumptions, which were previously shown to be implied by the plain SIS and LWE assumptions, respectively. The security proofs extend naturally to the ring or module settings under the respective analogues of these assumptions, which have been conjectured to admit similar reductions. Our partial trapdoors achieve non-trivial efficiency, with relevant parameters sublinear in the number of shareholders. Our construction is algebraic, without resorting to generic tools such as multiparty computation or fully homomorphic encryption. Consequently, a wide range of lattice-trapdoor-based primitives can be thresholdised non-interactively by simply substituting the trapdoor preimage sampling procedure with our partial analogue.
Last updated:  2025-02-26
Enabling Microarchitectural Agility: Taking ML-KEM & ML-DSA from Cortex-M4 to M7 with SLOTHY
Amin Abdulrahman, Matthias J. Kannwischer, and Thing-Han Lim
Highly-optimized assembly is commonly used to achieve the best performance for popular cryptographic schemes such as the newly standardized ML-KEM and ML-DSA. The majority of implementations today rely on hand-optimized assembly for the core building blocks to achieve both security and performance. However, recent work by Abdulrahman et al. takes a new approach, writing a readable base assembly implementation first and leaving the bulk of the optimization work to a tool named SLOTHY based on constraint programming. SLOTHY performs instruction scheduling, register allocation, and software pipelining simultaneously using constraints modeling the architectural and microarchitectural details of the target platform. In this work, we extend SLOTHY and investigate how it can be used to migrate already highly hand-optimized assembly to a different microarchitecture, while maximizing performance. As a case study, we optimize state-of-the-art Arm Cortex-M4 implementations of ML-KEM and ML-DSA for the Arm Cortex-M7. Our results suggest that this approach is promising: For the number-theoretic transform (NTT) – the core building block of both ML-DSA and ML-KEM – we achieve speed-ups of $1.97\times$ and $1.69\times$, respectively. For Keccak – the permutation used by SHA-3 and SHAKE and also used extensively in ML-DSA and ML-KEM – we achieve speed-ups of 30% compared to the M4 code and 5% compared to hand-optimized M7 code. For many other building blocks, we achieve similarly significant speed-ups of up to $2.35\times$. Overall, this results in 11 to 33% faster code for the cryptosystems as a whole.
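Since the NTT is named as the core building block, a naive Python reference (toy parameters $q=17$, $n=8$; not the ML-KEM/ML-DSA parameters, and none of the instruction scheduling SLOTHY optimizes) may help fix what the optimized assembly computes: a Fourier transform over a finite field, turning cyclic convolution into pointwise products.

    Q, N, W = 17, 8, 2  # toy modulus, length, and primitive N-th root of unity mod Q

    def ntt(a, w=W):
        # naive O(N^2) transform: A[k] = sum_j a[j] * w^(j*k) mod Q
        return [sum(x * pow(w, j * k, Q) for j, x in enumerate(a)) % Q
                for k in range(N)]

    def intt(A):
        w_inv, n_inv = pow(W, -1, Q), pow(N, -1, Q)
        return [n_inv * s % Q for s in ntt(A, w_inv)]

    a = [1, 2, 3, 4, 0, 0, 0, 0]
    b = [5, 6, 7, 0, 0, 0, 0, 0]
    c = intt([x * y % Q for x, y in zip(ntt(a), ntt(b))])
    ref = [sum(a[j] * b[(k - j) % N] for j in range(N)) % Q for k in range(N)]
    assert c == ref  # NTT-domain pointwise product = cyclic convolution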
Last updated:  2025-02-26
Lattice-Based Updatable Public-Key Encryption for Group Messaging
Joël Alwen, Georg Fuchsbauer, Marta Mularczyk, and Doreen Riepel
Updatable Public-Key Encryption (UPKE) augments the security of PKE with Forward Secrecy properties. While requiring more coordination between parties, UPKE enables much more efficient constructions than full-fledged Forward-Secret PKE. Alwen, Fuchsbauer and Mularczyk (AFM, Eurocrypt’24) presented the strongest security notion to date. It is the first to meet the needs of UPKE’s most important applications: Secure Group Messaging and Continuous Group Key Agreement. The authors provide a very efficient construction meeting their notion with classic security based on the Computational Diffie-Hellman (CDH) assumption in the Random Oracle Model (ROM). In this work we present the first post-quantum secure UPKE construction meeting (a slight relaxation of) the AFM security notion. Based on the Module LWE assumption, our construction is practically efficient. Moreover, public key sizes are about $1/2$ and ciphertext sizes around $2/3$ of those of the state-of-the-art lattice-based UPKE scheme in the ROM by Abou Haidar, Passelègue and Stehlé, even though that scheme was only shown to satisfy a significantly weaker security notion. As the AFM proof relies on the random self-reducibility of CDH, which has no analogue for lattices, we develop a new proof technique for strong UPKE, identifying the core properties required from the underlying (lattice-based) encryption scheme.
Last updated:  2025-02-26
Traitor Tracing in Multi-sender Setting ($\textsf{TMCFE}$: Traceable Multi-client Functional Encryption)
Xuan Thanh Do, Dang Truong Mac, Ky Nguyen, Duong Hieu Phan, and Quoc-Huy Vu
Traitor tracing is a traditional cryptographic primitive designed for scenarios with multiple legitimate receivers. When the plaintext - that is, the output of decryption - is leaked and more than one legitimate receiver exists, it becomes imperative to identify the source of the leakage, a need that has motivated the development of traitor tracing techniques. Recent advances in standard encryption have enabled decryption outcomes to be defined in a fine-grained manner through the introduction of Functional Encryption (FE). Constructing FE schemes is intriguing, and achieving the tracing property adds an additional layer of complexity. Traitor tracing techniques have been actively developed for more than three decades, yet they have always remained within the same framework - a single sender responsible for encrypting all the data. However, fine-grained decryption is particularly useful when data originates from multiple sources, allowing for joint computation on personal data. This leads to the concept of multi-client functional encryption (MCFE), where multiple concurrent senders independently encrypt their data while agreeing on the decryption of a specific function (e.g., a statistical measure) computed on the aggregated data, without revealing any additional information. In the era of cloud computing and big data, privacy-preserving joint computation is crucial, and tracing the source of any breach by dishonest participants becomes essential. Thus, in this paper we take the first step toward addressing the tracing problem in the general context of joint computation with multiple senders. Our contributions are twofold: - $\textbf{Conceptually:}$ We propose the first tracing model in the context of multi-sender encryption, namely $\textit{Traceable Multi-Client Functional Encryption}$ ($\textsf{TMCFE}$), which allows a pirate to extract secret information from both receivers and senders. Our model supports strong and naturally admissible decoders, removing artificial restrictions on the pirate decoder and thus addressing the shortcomings of existing traceable functional encryption schemes designed for the single-sender setting. - $\textbf{Technically:}$ To achieve our conceptual objective, we build upon the recently introduced notion of strong admissibility for MCFE. Our main technical contribution is a generic compiler that transforms a large class of MCFE schemes with weak admissibility into schemes with strong admissibility. This compiler not only helps overcome existing challenges but may also be of general interest within the functional encryption domain. Finally, we present a concrete lattice-based scheme $\textsf{TMCFE}$ for inner-product functionalities that achieves post-quantum security under standard assumptions.
Last updated:  2025-02-26
The Security of Hash-and-Sign with Retry against Superposition Attacks
Haruhisa Kosuge and Keita Xagawa
When considering security against quantum adversaries, traditional existential unforgeability (EUF-CMA security) remains important, but it is also desirable to consider security against adversaries making quantum queries to the signing oracle: plus-one security (PO security) and blind unforgeability (BU security), proposed by Boneh and Zhandry (Crypto 2013) and Alagic et al. (EUROCRYPT 2020), respectively. Hash-and-sign is one of the most common paradigms for constructing EUF-CMA-secure signature schemes in the quantum random oracle model, employing a trapdoor function and a hash function. It is known that its derandomized version is PO- and BU-secure. A variant of hash-and-sign, known as hash-and-sign with retry (HSwR), formulated by Kosuge and Xagawa (PKC 2024), is widespread since it allows for weakening the security assumptions on the trapdoor function. Unfortunately, it has not been known whether HSwR can achieve PO and BU security even with derandomization. In this paper, we apply derandomization with bounded loops to HSwR. We demonstrate that HSwR can achieve PO and BU security through this approach. Since derandomization with bounded loops offers advantages in some implementations, our results support its wider adoption, including in NIST PQC candidates.
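A hedged sketch (our illustration, not the paper's specification) of the derandomized bounded-loop idea: each retry derives its randomness deterministically from the secret key, message, and a counter, and signing aborts after a fixed number of attempts. Here preimage_sampler is a hypothetical stand-in for the trapdoor preimage sampling, which may reject and trigger a retry.

    import hashlib, hmac

    MAX_RETRIES = 64  # illustrative bound on the loop

    def sign(sk, preimage_sampler, msg):
        for ctr in range(MAX_RETRIES):
            # derandomization: per-attempt salt from a PRF keyed by the secret key
            salt = hmac.new(sk, msg + ctr.to_bytes(4, "big"), hashlib.sha256).digest()
            digest = hashlib.sha256(salt + msg).digest()
            sig = preimage_sampler(digest, salt)  # may fail, triggering a retry
            if sig is not None:
                return salt, sig
        raise RuntimeError("bounded loop exhausted")

    # toy run with a stand-in sampler that accepts roughly 1/4 of digests
    sampler = lambda digest, salt: digest if digest[0] < 64 else None
    print(sign(b"secret key", sampler, b"hello"))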
Last updated:  2025-02-26
Adaptively Secure Fully Homomorphic Message Authentication Code with Pre-processable Verification
Jeongsu Kim and Aaram Yun
There has been remarkable progress in fully homomorphic encryption, ever since Gentry's first scheme. In contrast, fully homomorphic authentication primitives have received relatively little attention, despite the existence of some previous constructions. While there exist various schemes with different functionalities for fully homomorphic encryption, there are only a few options for fully homomorphic authentication. Moreover, there are even fewer options when considering two of the most important properties: adaptive security and pre-processable verification. To our knowledge, except for some concurrent works, achieving both properties requires the use of a nested construction, which involves homomorphically authenticating a homomorphic authentication tag of a message, making the scheme costly and complicated. In this work, we propose a dedicated scheme for (leveled) fully homomorphic message authentication code that is adaptively secure and has pre-processable verification. Leveraging the secrecy of the primitive, we demonstrate that a slight modification of a selectively secure (leveled) fully homomorphic signature scheme yields an adaptively secure (leveled) fully homomorphic message authentication code with pre-processable verification. Additionally, we introduce a novel notion and generic transform to enhance the security of a homomorphic message authentication code, which also exploits the secrecy of the primitive.
Last updated:  2025-02-26
Predicate Encryption from Lattices: Enhanced Compactness and Refined Functionality
Yuejun Wang, Baocang Wang, Qiqi Lai, and Huaxiong Wang
In this work, we explore the field of lattice-based Predicate Encryption (PE), with a focus on enhancing compactness and refining functionality. First, we present a bounded-collusion predicate encryption scheme that is more compact than previous constructions, significantly reducing both the per-unit expansion and fixed overhead, while maintaining an optimal linear blow-up proportional to $Q$. Next, we propose a Predicate Inner Product Functional Encryption (P-IPFE) scheme based on our constructed predicate encryption scheme. P-IPFE preserves the attribute-hiding property while enabling decryption to reveal only the inner product between the key and message vectors, rather than the entire message as in traditional PE. Our P-IPFE scheme also achieves bounded collusion resistance while inheriting the linear compactness optimized in the underlying PE scheme. Additionally, it supports any polynomial-sized and bounded-depth circuits, thereby extending beyond the inner-product predicate class in prior works. Furthermore, all the proposed schemes achieve selective fully attribute-hiding security in the simulation-based model, and can therefore attain semi-adaptive security by adopting existing upgrading techniques.
Last updated:  2025-02-25
Vanishing Short Integer Solution, Revisited: Reductions, Trapdoors, Homomorphic Signatures for Low-Degree Polynomials
Kalle Jyrkinen and Russell W. F. Lai
The vanishing short integer solution (vSIS) assumption [Cini-Lai-Malavolta, Crypto'23], in its simplest form, asserts the hardness of finding a polynomial with short coefficients which vanishes at a given random point. While vSIS has proven to be useful in applications such as succinct arguments, not much is known about its theoretical hardness. Furthermore, without the ability to generate a hard instance together with a trapdoor, the applicability of vSIS is significantly limited. We revisit the vSIS assumption focusing on the univariate single-point constant-degree setting, which can be seen as a generalisation of the (search) NTRU problem. In such a setting, we show that the vSIS problem is as hard as finding the shortest vector in certain ideal lattices. We also show how to generate a random vSIS instance together with a trapdoor, under the (decision) NTRU assumption. Interestingly, a vSIS trapdoor allows one to sample polynomials with short coefficients that evaluate to any given value at the public point. By exploiting the multiplicativity of the polynomial ring, we use vSIS trapdoors to build a new homomorphic signature scheme for low-degree polynomials.
Last updated:  2025-02-25
A Note on Zero-Knowledge Simulator of the CROSS Identification Protocol
Shai Levin
We point out a flaw in the zero-knowledge property of the CROSS identification protocol, $\textsf{CROSS-ID}$, which allows a distinguisher to tell real and simulated transcripts apart given access to the witness. Moreover, we show that the real and simulated transcripts are not statistically indistinguishable, and therefore the protocol can only satisfy weak computational (rather than strong, statistical or perfect) honest-verifier zero-knowledge. This issue is still present in version 2.0, updated on January 31, 2025, which resolves the security losses attained via the attacks of [BLP+25].
Last updated:  2025-02-25
The Complexity of Memory Checking with Covert Security
Elette Boyle, Ilan Komargodski, and Neekon Vafa
A memory checker is an algorithmic tool used to certify the integrity of a database maintained on a remote, unreliable, computationally bounded server. Concretely, it allows a user to issue instructions to the server and after every instruction, obtain either the correct value or a failure (but not an incorrect answer) with high probability. A recent result due to Boyle, Komargodski, and Vafa (BKV, STOC '24) showed a tradeoff between the size of the local storage and the number of queries the memory checker makes to the server upon every logical instruction. Specifically, they show that every non-trivial memory checker construction with inverse-polynomial soundness and local storage at most $n^{1 - \epsilon}$ must make $\Omega(\log n/ \log \log n)$ queries, and this is tight up to constant factors given known constructions. However, an intriguing question is whether natural relaxations of the security guarantee could allow for more efficient constructions. We consider and adapt the notion of covert security to the memory checking context, wherein the adversary can effectively cheat while taking the risk of being caught with constant probability. Notably, BKV's lower bound does not apply in this setting. We close this gap and prove that $\Omega(\log n/ \log \log n)$ overhead is unavoidable even in the covert security setting. Our lower bound applies to any memory checker construction, including ones that use randomness and adaptivity and ones that rely on cryptographic assumptions and/or the random oracle model, as long as they satisfy a natural "read-only reads" property. This property requires a memory checker not to modify contents of the database or local storage in the execution of a logical read instruction.
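To fix ideas, a minimal sketch (ours, not from the paper) of the classical Merkle-tree memory checker that the $\Omega(\log n/\log\log n)$ bound speaks to: the user keeps only the root locally, each logical read costs a path of server queries, and a bad answer yields failure rather than an incorrect value.

    import hashlib

    def H(*parts):
        return hashlib.sha256(b"".join(parts)).digest()

    def build(leaves):
        # returns the root plus all tree levels (the server's storage)
        levels = [[H(v) for v in leaves]]
        while len(levels[-1]) > 1:
            lvl = levels[-1]
            levels.append([H(lvl[i], lvl[i + 1]) for i in range(0, len(lvl), 2)])
        return levels[-1][0], levels

    def verify_read(root, index, value, siblings):
        # siblings: hashes along the leaf-to-root path, served by the untrusted server
        node = H(value)
        for sib in siblings:
            node = H(node, sib) if index % 2 == 0 else H(sib, node)
            index //= 2
        return node == root  # mismatch => report failure, never a wrong value

    root, levels = build([b"a", b"b", b"c", b"d"])
    assert verify_read(root, 0, b"a", [levels[0][1], levels[1][1]])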
Last updated:  2025-02-25
Random Number Generation from Pulsars
Hayder Tirmazi
Pulsars exhibit signals with precise inter-arrival times that are on the order of milliseconds to seconds depending on the individual pulsar. There is subtle variation in the timing of pulsar signals, primarily due to the presence of gravitational waves, intrinsic variance in the period of the pulsar, and errors in the realization of Terrestrial Time (TT). Traditionally, these variations are dismissed as noise in high-precision timing experiments. In this paper, we show that these variations serve as a natural entropy source for the creation of Random Number Generators (RNGs). We also explore the effects of using randomness extractors to increase the entropy of random bits extracted from pulsar timing data. To evaluate the quality of the pulsar RNG, we model its entropy as a $k$-source and use well-known cryptographic results to show its closeness to a theoretically ideal uniformly random source. To remain consistent with prior work, we also show that the pulsar RNG passes well-known statistical tests such as the NIST test suite.
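An illustrative sketch (not the paper's pipeline) of the basic idea: keep only the low-order fluctuations of the inter-arrival times, which carry the entropy, and condition them with a hash standing in for a randomness extractor; the timing values below are made up.

    import hashlib

    def extract_bits(arrival_times_ns):
        # the absolute period carries no entropy; keep only timing residuals
        residuals = [t2 - t1 for t1, t2 in zip(arrival_times_ns, arrival_times_ns[1:])]
        raw = b"".join((r & 0xFFFF).to_bytes(2, "big") for r in residuals)
        return hashlib.sha256(raw).digest()  # hash as a stand-in extractor

    ticks = [0, 1_337_001, 2_674_005, 4_011_002, 5_348_009]  # fabricated inputs
    print(extract_bits(ticks).hex())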
Last updated:  2025-02-25
Lattice-based Proof-Friendly Signatures from Vanishing Short Integer Solutions
Adrien Dubois, Michael Klooß, Russell W. F. Lai, and Ivy K. Y. Woo
Efficient anonymous credentials are typically constructed by combining proof-friendly signature schemes with compatible zero-knowledge proof systems. Inspired by pairing-based proof-friendly signatures such as Boneh- Boyen (BB) and Boneh-Boyen-Shacham (BBS), we propose a wide family of lattice-based proof-friendly signatures based on variants of the vanishing short integer solution (vSIS) assumption [Cini-Lai-Malavolta, Crypto'23]. In particular, we obtain natural lattice-based adaptions of BB and BBS which, similar to their pairing-based counterparts, admit nice algebraic properties. [Bootle-Lyubashevsky-Nguyen-Sorniotti, Crypto'23] (BLNS) recently proposed a framework for constructing lattice-based proof-friendly signatures and anonymous credentials, based on another new lattice assumption called $\mathsf{ISIS}_f$ parametrised by a fixed function $f$, with focus on $f$ being the binary decomposition. We introduce a generalised $\mathsf{ISIS}_f$ framework, called $\mathsf{GenISIS}_f$, with a keyed and probabilistic function $f$. For example, picking $f_b(\mu) = 1/(b-\mu)$ with key $b$ for short ring element $\mu$ leads to algebraic and thus proof-friendly signatures. To better gauge the robustness and proof-friendliness of $\mathsf{(Gen)}\mathsf{ISIS}_f$, we consider what happens when the inputs to $f$ are chosen selectively (or even adaptively) by the adversary, and the behaviour under relaxed norm checks. While bit decomposition quickly becomes insecure, our proposed function families seem robust.
Last updated:  2025-02-25
Commit-and-Prove System for Vectors and Applications to Threshold Signing
Anja Lehmann and Cavit Özbay
Multi-signatures allow combining several individual signatures into a compact one and verifying it against a short aggregated key. Compared to threshold signatures, multi-signatures enjoy non-interactive key generation but give up the threshold setting. Recent works by Das et al. (CCS'23) and Garg et al. (S&P'24) show how multi-signatures can be turned into schemes that enable efficient verification when an ad hoc threshold -- determined only at verification -- is satisfied. This keeps the simple key generation of multi-signatures while supporting flexible threshold settings in the signing process later on. Both works use the same idea of combining BLS multi-signatures with inner-product proofs over committed keys. Das et al. give a somewhat generic proof from both building blocks, which we show to be flawed, whereas Garg et al. give a direct proof for the combined construction in the algebraic group model. In this work, we identify the common blueprint used in both works and abstract the proof-based approach through the building block of a commit-and-prove system for vectors (CP). We formally define a flexible set of security properties for the CP system and show how it can be securely combined with a multi-signature to yield a signature with ad hoc thresholds. Our scheme also lifts the threshold signatures into the multiverse setting recently introduced by Baird et al. (S&P'23), which allows signers to re-use their long-term keys across several groups. The challenge in the generic construction is to express -- and realize -- the combination of homomorphic proofs and commitments (needed to realize flexible thresholds over fixed group keys) and their simulation extractability (needed in the threshold signature security proof). We finally show that a CP instantiation closely following the ideas of Das et al. can be proven secure, but requires a new flexible-base DL-assumption to do so.
Last updated:  2025-02-25
Delayed-Input Multi-Party Computation
Michele Ciampi, Jure Sternad, and Yu Xia
In this work, we consider the setting where the process of securely evaluating a multi-party functionality is divided into two phases: offline (or preprocessing) and online. The offline phase is independent of the parties’ inputs, whereas the online phase does require knowledge of the inputs. We consider the problem of minimizing the rounds of communication required in the online phase and propose a round-preserving compiler that can turn a large class of multi-party computation (MPC) protocols into protocols in which only the last two rounds are input-dependent. In particular, our compiler applies to all existing round-optimal MPC protocols. All our results assume no setup and are proven in the dishonest majority setting with black-box simulation. As part of our contribution, we propose a new definition we call Multi-Party Computation with Adaptive-Input Selection, which allows the distinguisher to craft the inputs the honest parties should use during the online phase, adaptively based on the offline phase. This new definition is needed to argue that not only are the messages of the offline phase input-independent but also that security holds even in the stronger (and realistic) adversarial setting where the inputs may depend on some of the offline-phase protocol messages. We argue that this is the definition that any protocol should satisfy to be securely used while preprocessing part of the rounds. We are the first to study this definition in a setting where there is no setup, and the majority of the parties can be corrupted. Prior definitions have been presented in the Universal Composability framework, which is unfortunately not well suited for our setting (i.e., no setup and dishonest majority). As a corollary, we obtain the first four-round (which is optimal) MPC protocol, where the first two rounds can be preprocessed, and its security holds against adaptive-input selection.
Last updated:  2025-02-25
Stronger Security for Threshold Blind Signatures
Anja Lehmann, Phillip Nazarian, and Cavit Özbay
Blind signatures allow a user to obtain a signature from an issuer in a privacy-preserving way: the issuer neither learns the signed message, nor can link the signature to its issuance. The threshold version of blind signatures further splits the secret key among n issuers, and requires the user to obtain at least t ≤ n signature shares in order to derive the final signature. Security should then hold as long as at most t − 1 issuers are corrupt. Security for blind signatures is expressed through the notion of one-more unforgeability and demands that an adversary must not be able to produce more signatures than what is considered trivial after its interactions with the honest issuer(s). While one-more unforgeability is well understood for the single-issuer setting, the situation is much less clear in the threshold case: due to the blind issuance, counting which interactions can yield a trivial signature is a challenging task. Existing works bypass that challenge by using simplified models that do not fully capture the expectations of the threshold setting. In this work, we study the security of threshold blind signatures, and propose a framework of one-more unforgeability notions where the adversary can corrupt c < t issuers. Our model is generic enough to capture both interactive and non-interactive protocols, and it provides a set of natural properties with increasingly stronger guarantees, giving the issuers gradually more control over how their shares can be combined. As a point of comparison, we reconsider the existing threshold blind signature models and show that their security guarantees are weaker and less clearly comprehensible than they seem. We then re-assess the security of existing threshold blind signature schemes – BLS-based and Snowblind – in our framework, and show how to lift them to provide stronger security.
Last updated:  2025-02-25
Efficient NIZK Arguments with Straight-Line Simulation and Extraction
Michele Ciampi and Ivan Visconti
Non-interactive zero-knowledge (NIZK) arguments allow a prover to convince a verifier about the truthfulness of an NP-statement by sending just one message, without disclosing any additional information. In several practical scenarios, the Fiat-Shamir transform is used to convert an efficient constant-round public-coin honest-verifier zero-knowledge proof system into an efficient NIZK argument system. This approach is provably secure in the random oracle model, but it crucially requires the programmability of the random oracle, and extraction works through rewinding. The works of Lindell [TCC 2015] and Ciampi et al. [TCC 2016] proposed efficient NIZK arguments with non-programmable random oracles along with a programmable common reference string. In this work we show an efficient NIZK argument with straight-line simulation and extraction that relies on features that alone are insufficient to construct NIZK arguments (regardless of efficiency). More specifically, we consider the notion of quasi-polynomial time simulation proposed by Pass in [EUROCRYPT 2003] and combine it with simulation and extraction with non-programmable random oracles, thus obtaining a NIZK argument of knowledge where neither the zero-knowledge simulator nor the argument-of-knowledge extractor needs to program the random oracle. Still, both the simulator and the extractor are straight-line. Our construction uses as a building block a modification of Fischlin’s transform [CRYPTO 2005] and combines it with the concept of dense puzzles introduced by Baldimtsi et al. [ASIACRYPT 2016]. We also argue that our NIZK argument system inherits the efficiency features of Fischlin’s transform, which represents the main advantage of Fischlin’s protocol over existing schemes.
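For contrast with the straight-line techniques above, a toy Python sketch (ours; an illustrative group choice, not a secure instantiation) of a Fiat-Shamir-compiled Schnorr argument: the challenge is a hash of the transcript, so the zero-knowledge simulator must program that hash and the knowledge extractor must rewind for two responses.

    import hashlib, secrets

    p = 2**127 - 1  # a Mersenne prime; toy group Z_p^* (illustrative only)
    g = 3
    n = p - 1       # exponent modulus

    def H(*parts):  # the "random oracle" challenge
        h = hashlib.sha256(b"".join(x.to_bytes(16, "big") for x in parts))
        return int.from_bytes(h.digest(), "big") % n

    def prove(x, y):
        r = secrets.randbelow(n)
        t = pow(g, r, p)             # commitment
        c = H(g, y, t)               # Fiat-Shamir replaces the verifier
        return t, (r + c * x) % n    # response

    def verify(y, t, s):
        return pow(g, s, p) == t * pow(y, H(g, y, t), p) % p

    x = secrets.randbelow(n); y = pow(g, x, p)
    assert verify(y, *prove(x, y))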
Last updated:  2025-02-25
Thorough Power Analysis on Falcon Gaussian Samplers and Practical Countermeasure
Xiuhan Lin, Shiduo Zhang, Yang Yu, Weijia Wang, Qidi You, Ximing Xu, and Xiaoyun Wang
Falcon is one of the post-quantum signature schemes selected by NIST for standardization. With the deployment underway, its implementation security is of great importance. In this work, we focus on the side-channel security of Falcon and our contributions are threefold. First, by exploiting the symplecticity of NTRU and a recent decoding technique, we dramatically improve the key recovery using power leakages within Falcon Gaussian samplers. Compared to the state of the art (Zhang, Lin, Yu and Wang, EUROCRYPT 2023), the number of traces required by our attack for a full key recovery is reduced by at least 85%. Secondly, we present a complete power analysis for two exposed power leakages within Falcon’s integer Gaussian sampler. We identify new sources of these leakages, which have not been identified by previous works, and conduct detailed security evaluations within the reference implementation of Falcon on ChipWhisperer. Thirdly, we propose effective and easy-to-implement countermeasures against both leakages to protect the whole of Falcon’s integer Gaussian sampler. Configured with our countermeasures, we provide security evaluations on ChipWhisperer and report the performance of the protected implementation. Experimental results highlight that our countermeasures admit a practical trade-off between efficiency and side-channel security.
Last updated:  2025-02-25
Bootstrapping with RMFE for Fully Homomorphic Encryption
Khin Mi Mi Aung, Enhui Lim, Jun Jie Sim, Benjamin Hong Meng Tan, and Huaxiong Wang
There is a heavy preference towards instantiating BGV and BFV homomorphic encryption schemes where the cyclotomic order $m$ is a power of two, as this admits highly efficient fast Fourier transformations. Field Instruction Multiple Data (FIMD) was introduced to increase packing capacity in the case of small primes and improve amortised performance, using reverse multiplication-friendly embeddings (RMFEs) to encode more data into each SIMD slot. However, FIMD currently does not admit bootstrapping. In this work, we achieve bootstrapping for RMFE-packed ciphertexts with low capacity loss. We first adapt the digit extraction algorithm to work over RMFE-packed ciphertexts, by applying the recode map after every evaluation of the lifting polynomial. This allows us to follow the blueprint of thin bootstrapping, performing digit extraction on a single ciphertext. To achieve the low capacity loss, we introduce correction maps to the Halevi-Shoup digit extraction algorithm, to remove all but the final recode of RMFE digit extraction. We implement several workflows for bootstrapping RMFE-packed ciphertexts in HElib, and benchmark them against thin bootstrapping for $m=32768$. Our experiments show that the basic strategy of recoding multiple times in digit extraction yields better data packing, but results in very low remaining capacity and latencies of up to hundreds of seconds. On the other hand, using correction maps gives up to $6$ additional levels of multiplicative depth and brings latencies often below $10$ seconds, at the cost of lower packing capacity.
Last updated:  2025-02-25
Efficient Distributed Randomness Generation from Minimal Assumptions where PArties Speak Sequentially Once
Chen-Da Liu-Zhang, Elisaweta Masserova, João Ribeiro, Pratik Soni, and Sri AravindaKrishnan Thyagarajan
We study efficient public randomness generation protocols in the PASSO (PArties Speak Sequentially Once) model for multi-party computation (MPC). PASSO is a variation of traditional MPC where $n$ parties are executed in sequence and each party ``speaks'' only once, broadcasting and sending secret messages only to parties further down the line. Prior results in this setting include information-theoretic protocols in which the computational complexity scales exponentially with the number of corruptions $t$ (CRYPTO 2022), as well as more efficient computationally-secure protocols either assuming a trusted setup phase or DDH (FC 2024). Moreover, these works only consider security against static adversaries. In this work, we focus on computational security against adaptive adversaries and from minimal assumptions, and improve on the works mentioned above in several ways: - Assuming the existence of non-interactive perfectly binding commitments, we design protocols with $n=3t+1$ or $n=4t$ parties that are efficient and secure whenever $t$ is small compared to the security parameter $\lambda$ (e.g., $t$ is constant). This improves the resiliency of all previous protocols, even those requiring a trusted setup. It also shows that $n=4$ parties are necessary and sufficient for $t=1$ corruptions in the computational setting, while $n=5$ parties are required for information-theoretic security. - Under the same assumption, we design protocols with $n=4t+2$ or $n=5t+2$ parties (depending on the adversarial network model) which are efficient whenever $t=\text{poly}(\lambda)$. This improves on the existing DDH-based protocol both in terms of resiliency and the underlying assumptions. - We design efficient protocols with $n=5t+3$ or $n=6t+3$ parties (depending on the adversarial network model) assuming the existence of one-way functions. We complement these results by studying lower bounds for randomness generation protocols in the computational setting.
Last updated:  2025-02-25
Juicebox Protocol: Distributed Storage and Recovery of Secrets Using Simple PIN Authentication
Nora Trapp and Diego Ongaro
Existing secret management techniques demand users memorize complex passwords, store convoluted recovery phrases, or place their trust in a specific service or hardware provider. We have designed a novel protocol that combines existing cryptographic techniques to eliminate these complications and reduce user complexity to recalling a short PIN. Our protocol specifically focuses on a distributed approach to secret storage that leverages Oblivious Pseudorandom Functions (OPRFs) and a Secret-Sharing Scheme (SSS) combined with self-destructing secrets to minimize the trust placed in any singular server. Additionally, our approach allows for servers distributed across organizations, eliminating the need to trust a singular service operator. We have built an open-source implementation of the client and server sides of this new protocol, the latter of which has variants for running on commodity hardware and secure hardware.
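A sketch of the blinded-exponentiation core of a DH-style OPRF (our illustration over a toy group, not Juicebox's exact instantiation; deployed OPRFs hash to an elliptic-curve group so the discrete log of the hashed point is unknown, which this toy map does not provide):

    import hashlib, secrets
    from math import gcd

    p = 2**127 - 1; g = 3; n = p - 1  # toy group; real schemes use elliptic curves

    def hash_to_group(pin: bytes) -> int:
        # simplified stand-in for a proper hash-to-group map
        return pow(g, int.from_bytes(hashlib.sha256(pin).digest(), "big") % n, p)

    pin = b"1234"
    r = secrets.randbelow(n)
    while gcd(r, n) != 1:             # blinding factor must be invertible mod n
        r = secrets.randbelow(n)
    blinded = pow(hash_to_group(pin), r, p)   # the server never sees the PIN

    k = secrets.randbelow(n - 1) + 1          # server's OPRF key
    evaluated = pow(blinded, k, p)            # server's single evaluation

    result = pow(evaluated, pow(r, -1, n), p) # client unblinds
    assert result == pow(hash_to_group(pin), k, p)  # depends only on (pin, k)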
Last updated:  2025-02-25
Helix: Scalable Multi-Party Machine Learning Inference against Malicious Adversaries
Yansong Zhang, Xiaojun Chen, Qinghui Zhang, Ye Dong, and Xudong Chen
With the growing emphasis on data privacy, secure multi-party computation has garnered significant attention for its strong security guarantees in developing privacy-preserving machine learning (PPML) schemes. However, only a few works address scenarios with a large number of participants. The state of the art by Liu et al. (LXY24, USENIX Security'24) first achieves a practical PPML protocol for up to 63 parties but is constrained to semi-honest security. Although naive extensions to the malicious setting are possible, they would introduce significant overhead. In this paper, we propose Helix, a scalable framework for maliciously secure PPML in the honest majority setting, aiming to enhance both the scalability and practicality of maliciously secure protocols. In particular, we report a privacy leakage issue in LXY24 during prefix OR operations and introduce a round-optimized alternative based on a single-round vectorized three-layer multiplication protocol. Additionally, by exploiting reusability properties within the computation process, we propose lightweight compression protocols that substantially improve the efficiency of multiplication verification. We also develop a batch check protocol to reduce the computational complexity of revealing operations in the malicious setting. For 63-party neural network inference, compared to the semi-honest LXY24, Helix is only 1.9$\times$ (1.1$\times$) slower in the online phase and 1.2$\times$ (1.1$\times$) slower in preprocessing under LAN (WAN) in the best case.
Last updated:  2025-02-25
Homomorphic Encryption for Large Integers from Nested Residue Number Systems
Dan Boneh and Jaehyung Kim
Existing fully homomorphic encryption (FHE) schemes primarily support a plaintext space defined over a relatively small prime. However, in some important applications of FHE one needs arithmetic over a large prescribed prime. In this paper we construct a new FHE system that is specifically designed for this purpose. Our system composes three layers of residue systems to enable much better performance than was previously possible. Our experiments show that for arithmetic modulo a 256-bit integer, when compared to the TFHE-rs implementation of 256-bit arithmetic, our new system achieves a factor of a thousand better multiplication throughput and a factor of ten better latency. Moreover, for a 2048-bit prime modulus we achieve far better performance than was previously possible.
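The arithmetic idea being layered here is the residue number system: represent a big integer by its residues modulo pairwise-coprime primes, operate componentwise, and reconstruct with the Chinese remainder theorem. A plain-Python sketch of that skeleton (no encryption, illustrative moduli):

    from math import prod

    MODULI = [2**13 - 1, 2**31 - 1, 2**61 - 1]  # pairwise coprime (Mersenne primes)
    M = prod(MODULI)

    def to_rns(x):
        return [x % m for m in MODULI]

    def crt(residues):
        # standard CRT reconstruction from the residue representation
        x = 0
        for r, m in zip(residues, MODULI):
            Mi = M // m
            x = (x + r * Mi * pow(Mi, -1, m)) % M
        return x

    a, b = 123456789123456789, 987654321
    prod_rns = [(x * y) % m for x, y, m in zip(to_rns(a), to_rns(b), MODULI)]
    assert crt(prod_rns) == (a * b) % M  # componentwise ops track big-int arithmetic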
Last updated:  2025-02-25
Publicly Verifiable Threshold Proxy Re-encryption and Its Application in Data Rights Confirmation
Tao Liu, Liang Zhang, Haibin Kan, and Jiheng Zhang
Proxy re-encryption (PRE) has been regarded as an effective cryptographic primitive in data sharing systems with distributed proxies. However, no prior work considers the honesty of data owners, which is critical in the age of big data. In this paper, we fill the gap by introducing a new proxy re-encryption scheme, called publicly verifiable threshold PRE (PVTPRE). Briefly speaking, we innovatively apply a slightly modified publicly verifiable secret sharing (PVSS) scheme to distribute the re-encryption keys to multiple proxies. Consequently, we achieve public verifiability for data owners non-interactively. Then, the correctness of data users in decryption and public verifiability of proxies in re-encryption are guaranteed seamlessly through execution of the PVSS reconstruction algorithms. We further prove that PVTPRE satisfies IND-CPA security. Besides, we put forward a privacy-preserving data rights confirmation framework by providing clear principles for data ownership and usage, based on the PVTPRE scheme and blockchain. Blockchain plays the role of data bank and smart contract engine, providing reliable storage and verification for the whole framework. To our knowledge, we are the first to systematically investigate data rights confirmation considering privacy as well as public verifiability, addressing the growing need for robust mechanisms to protect data rights and ensure transparency. Finally, we conduct comprehensive experiments to illustrate its correctness, feasibility and effectiveness. The experimental results show that our PVTPRE outperforms other PREs in many aspects.
Last updated:  2025-03-07
Publicly Verifiable Generalized Secret Sharing and Its Application in Building Decentralized Exchange
Liang Zhang, Dongliang Cai, Tao Liu, Haibin Kan, Jiheng Zhang, Haibin Zhang, and Sisi Duan
Generalized secret sharing (GSS), which can offer more flexibility by accommodating diverse access structures and conditions, has been under-explored in distributed computing over the past decades. To address the gaps, we propose the publicly verifiable generalized secret sharing (PVGSS) scheme, enhancing the applicability of GSS in transparent systems. Public verifiability is a crucial property to gain trustworthiness for decentralized systems like blockchain. We begin by introducing two GSS constructions, one based on Shamir's secret sharing and the other on the linear secret sharing scheme (LSSS). Next, we present PVGSS schemes that combine GSS with non-interactive zero-knowledge (NIZK) proofs. Further, we construct a decentralized exchange (DEX) based on the PVGSS scheme, where any user can participate in exchanges and engage in arbitrage. Specifically, users can fairly swap ERC-20 tokens with passive watchers, who earn profits by providing arbitration services. The critical property of "fairness" required by the DEX is ensured through a sophisticated access structure, supported by the PVGSS scheme. We provide a comprehensive evaluation of the performance of the PVGSS schemes and the monetary costs for users in the DEX. The results demonstrate the feasibility and practicality of this approach in real-world applications.
Last updated:  2025-02-24
Tight Multi-challenge Security Reductions for Key Encapsulation Mechanisms
Lewis Glabush, Kathrin Hövelmanns, and Douglas Stebila
A key encapsulation mechanism (KEM) allows two parties to establish a shared secret key using only public communication. For post-quantum KEMs, the most widespread approach is to design a passively secure public-key encryption (PKE) scheme and then apply the Fujisaki–Okamoto (FO) transform that turns any such PKE scheme into an IND-CCA secure KEM. While the base security requirement for KEMs is typically IND-CCA security, adversaries in practice can sometimes observe and attack many public keys and/or ciphertexts, which is referred to as multi-challenge security. FO does not necessarily guarantee multi-challenge security: for example, FrodoKEM, a Round 3 alternate in NIST’s post-quantum project, used FO to achieve IND-CCA security, but was subsequently shown to be vulnerable to attackers that can target multiple ciphertexts. To avert this multi-ciphertext attack, the FrodoKEM team added a salt to the encapsulation procedure and proved that this does not degrade (single-ciphertext) IND-CCA security. The formal analysis of whether this indeed averts multi-ciphertext attacks, however, was left open, which we address in this work. Firstly, we formalize FrodoKEM's approach as a new variant of the FO transform, called the salted FO transform. Secondly, we give tight reductions from multi-challenge security of the resulting KEM to multi-challenge security of the underlying public key encryption scheme, in both the random oracle model (ROM) and the quantum-accessible ROM (QROM). Together these results justify the multi-ciphertext security of the salted FrodoKEM scheme, and can also be used generically by other schemes requiring multi-ciphertext security.
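A schematic sketch of the salted transform as we read it (not FrodoKEM's exact specification): encapsulation samples a fresh salt alongside the message, derives the encryption coins and session key from both, and decapsulation re-encrypts to check the ciphertext. Here pke_encrypt and pke_decrypt are hypothetical stand-ins for the underlying PKE.

    import hashlib, secrets

    def G(*parts):  # random oracle for coin and key derivation
        return hashlib.sha256(b"\x01".join(parts)).digest()

    def encaps(pk, pke_encrypt):
        m = secrets.token_bytes(32)
        salt = secrets.token_bytes(32)            # the per-ciphertext salt
        coins = G(b"coins", pk, m, salt)          # derandomized encryption coins
        ct = pke_encrypt(pk, m, coins)
        return (ct, salt), G(b"key", m, salt, ct)

    def decaps(sk, pk, pke_encrypt, pke_decrypt, ct, salt):
        m = pke_decrypt(sk, ct)
        # FO-style re-encryption check: reject dishonestly formed ciphertexts
        if pke_encrypt(pk, m, G(b"coins", pk, m, salt)) != ct:
            return None  # real schemes typically return an implicit-rejection key
        return G(b"key", m, salt, ct)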
Last updated:  2025-02-24
Traceable Threshold Encryption without Trusted Dealer
Jan Bormet, Jonas Hofmann, and Hussien Othman
The fundamental assumption in $t$-out-of-$n$ threshold encryption is that the adversary can only corrupt less than $t$ parties. Unfortunately, it may be unfounded in practical scenarios where shareholders could be incentivized to collude. Boneh, Partap, and Rotem (Crypto'24) recently addressed the setting where $t$ or more shareholders work together to decrypt illegally. Inspired by the well-established notion of traitor tracing in broadcast encryption, they added a traceability mechanism that guarantees identifying at least one of the colluders. They provide several constructions that enable traceability, all of which require a trusted dealer to distribute the secret shares. While the trusted dealer can be replaced with a DKG for conventional threshold encryption, it is unclear how to do so without compromising traceability. As thresholdizing is meant to mitigate a single point of failure, a natural question that remains is: Can we construct an efficient traceable threshold encryption scheme that does not rely on a trusted party to distribute the secret shares? In this paper, we achieve two dealerless traceable threshold encryption constructions with different merits by extending the PLBE primitive of Boneh et al. (Eurocrypt'06) and combining it with the silent setup threshold encryption construction of Garg et al. (Crypto'24). Our first construction achieves an amortized ciphertext of size $O(1)$ (for $O(n)$ ciphertexts). Our second construction achieves constant ciphertext size even in the worst case but requires a less efficient preprocessing phase as a tradeoff. Both our constructions enjoy a constant secret key size and do not require any interaction between the parties. An additional restriction in the constructions of Boneh et al. is that they can only guarantee to find at least one colluder, leaving techniques to identify more traitors as an open problem. In this paper, we take a first step towards solving this question by formalizing a technique and applying it to our first construction. Namely, our first construction enables tracing $t$ traitors.
Last updated:  2025-02-24
CCA-Secure Traceable Threshold (ID-based) Encryption and Application
Rishiraj Bhattacharyya, Jan Bormet, Sebastian Faust, Pratyay Mukherjee, and Hussien Othman
A recent work by Boneh, Partap, and Rotem [Crypto'24] introduced the concept of traceable threshold encryption, whereby if $t$ or more parties collude to construct a decryption box that performs decryptions, then at least one colluder's identity can be traced by making a few black-box queries to the box. This has important applications, e.g., in blockchain mempool privacy, where collusion yields high financial gain through MEVs without any consequence - the possibility of tracing discourages collusion. Nevertheless, their definitions leave room for exploitation as they only achieve CPA security and do not consider inconsistency in decryption via different participating sets. This paper proposes stronger definitions of traceable threshold encryption, which support CCA security and consistency. Our main approach considers identity-based variants of traceable encryption (which we also define). It converts that to a CCA-secure construction, adapting two generic transformations, first using a one-time signature and then a fingerprinting code. We put forward two efficient instantiations of our identity-based scheme with different merits: our first construction is based on Boneh-Franklin IBE [Crypto'01] and has constant-size ciphertexts but quadratic-size public keys - this is proven secure based on XDH and BDDH. Our second construction is based on Boneh-Boyen IBE [Eurocrypt'04]. It supports both constant-size ciphertexts and constant-size public keys - this is proven secure based on a variant of the uber assumption over bilinear pairings. Our concrete analysis shows that the first construction's ciphertext is much (~6x) smaller than the second construction's. Finally, we extend the definitions to support consistency and achieve it by adjoining an efficient, non-interactive proof of correct encryption.
Last updated:  2025-02-24
Hollow LWE: A New Spin, Unbounded Updatable Encryption from LWE and PCE
Martin R. Albrecht, Benjamin Benčina, and Russell W. F. Lai
Updatable public-key encryption (UPKE) allows anyone to update a public key while simultaneously producing an update token, given which the secret key holder could consistently update the secret key. Furthermore, ciphertexts encrypted under the old public key remain secure even if the updated secret key is leaked -- a property much desired in secure messaging. All existing lattice-based constructions of UPKE update keys by a noisy linear shift. As the noise accumulates, these schemes either require super-polynomial-size moduli or an a priori bounded number of updates to maintain decryption correctness. Inspired by recent works on cryptography based on the lattice isomorphism problem, we propose an alternative way to update keys in lattice-based UPKE. Instead of shifting, we rotate them. As rotations do not induce norm growth, our construction supports an unbounded number of updates with a polynomial-size modulus. The security of our scheme is based on the LWE assumption over hollow matrices -- matrices which generate linear codes with non-trivial hull -- and the hardness of permutation code equivalence. Along the way, we also show that LWE over hollow matrices is as hard as LWE over uniform matrices, and that a leftover hash lemma holds for hollow matrices.
Last updated:  2025-02-24
Key-Homomorphic Computations for RAM: Fully Succinct Randomised Encodings and More
Damiano Abram, Giulio Malavolta, and Lawrence Roy
We propose a new method to construct a public-key encryption scheme, where one can homomorphically transform a ciphertext encrypted under a key $\mathbf{x}$ into a ciphertext under $(P, P(\mathbf{x}))$, for any polynomial-time RAM program $P: \mathbf{x} \mapsto \mathbf{y}$ with runtime $T$ and memory $L$. Combined with other lattice techniques, this allows us to construct: 1) Succinct-randomised encodings from RAM programs with encoder complexity $(|\mathbf{x}| + |\mathbf{y}|)\cdot \text{poly}(\log T, \log L)$ and rate-1 encodings. 2) Laconic function evaluation for RAM programs, with encoder runtime bounded by $(|\mathbf{x}| + |\mathbf{y}|)\cdot\text{poly}(\log T, \log L)$ and rate-1 encodings. 3) Key-policy attribute-based encryption for RAM programs, with ciphertexts of size $O(T)$. The same scheme can be converted to the registered setting, obtaining linear CRS size in the number of parties. All of our schemes rely on the hardness of the \emph{decomposed learning with errors} (LWE) problem, along with other standard computational assumptions on lattices. The decomposed LWE problem can be interpreted as postulating the circular-security of a natural lattice-based public-key encryption scheme. To gain confidence in the assumption, we show that it is implied by the hardness of the succinct LWE problem of Wee (CRYPTO'24).
Last updated:  2025-02-24
CT-LLVM: Automatic Large-Scale Constant-Time Analysis
Zhiyuan Zhang and Gilles Barthe
Constant-time (CT) is a popular programming discipline to protect cryptographic libraries against micro-architectural timing attacks. One appeal of the CT discipline lies in its conceptual simplicity: a program is CT iff it has no secret-dependent data-flow, control-flow or variable-timing operation. Thanks to its simplicity, the CT discipline is supported by dozens of analysis tools. However, a recent user study demonstrates that these tools are seldom used due to poor usability and maintainability (Jancar et al. IEEE SP 2022). In this paper, we introduce CT-LLVM, a CT analysis tool designed for usability, maintainability and automatic large-scale analysis. Concretely, CT-LLVM is packaged as an LLVM plugin and is built as a thin layer on top of two standard LLVM analyses: def-use and alias analysis. Besides confirming known CT violations, we demonstrate the usability and scalability of CT-LLVM by automatically analyzing nine cryptographic libraries. On average, CT-LLVM can automatically and soundly analyze 36% of the functions in these libraries, proving that 61% of them are CT. In addition, the large-scale automatic analysis also reveals new vulnerabilities in these libraries. In the end, we demonstrate that CT-LLVM helps systematically mitigate compiler-introduced CT violations, which has been a long-standing issue in CT analysis.
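The discipline being checked, illustrated at the dataflow level (Python gives no real timing guarantees; in practice the pattern below is written in C or assembly): no branch, memory index, or variable-time operation may depend on a secret, so secret-dependent selection is done with masks.

    def select_leaky(secret_bit, a, b):
        # secret-dependent control flow: the violation CT analyzers flag
        return a if secret_bit else b

    def select_ct(secret_bit, a, b):
        # branch-free selection on 32-bit values via masking
        mask = -secret_bit & 0xFFFFFFFF          # all ones if bit is 1, else zero
        return (a & mask) | (b & ~mask & 0xFFFFFFFF)

    assert select_ct(1, 0xDEAD, 0xBEEF) == 0xDEAD
    assert select_ct(0, 0xDEAD, 0xBEEF) == 0xBEEF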