Papers updated in last 365 days (2959 results)

Last updated:  2025-04-10
A Systematic Analysis of Pseudorandom Generation for Masked Cryptographic Implementation
Rei Ueno, Naofumi Homma, and Kazuhiko Minematsu
This study analyzes and investigates pseudorandom generation (PRG) in the context of masked cryptographic implementation. Although masking and PRGs have each been studied for a long time, little of the literature examines how the randomness in a masked implementation should be generated. The lack of analysis of mask-bit generators makes the practical security of masked cryptographic implementations unclear, and practitioners (e.g., designers, implementers, and evaluators) may be confused about how to realize them. This study provides a systematic analysis by developing three new adversarial models, which correspond to the practical scenarios of leakage assessment, quantitative evaluation of side-channel security (e.g., success rate and key lifetime), and practical deployment. We present the properties required for each scenario through a stage-by-stage analysis. In particular, we support a long-held belief/folklore with a proof: for the output of a PRG used for masking, cryptographic security is sufficient but not necessary; only statistical uniformity (precisely, high-dimensional uniform equidistribution) is required. In addition, we thoroughly investigate the side-channel security of PRGs in the wild in the masking context. We conclude this study with some recommendations for practitioners and a proposal of a leakage-resilient method with comparable performance.
Last updated:  2025-04-10
Hash-Based Multi-Signatures for Post-Quantum Ethereum
Justin Drake, Dmitry Khovratovich, Mikhail Kudinov, and Benedikt Wagner
With the threat posed by quantum computers on the horizon, systems like Ethereum must transition to cryptographic primitives resistant to quantum attacks. One of the most critical of these primitives is the non-interactive multi-signature scheme used in Ethereum's proof-of-stake consensus, currently implemented with BLS signatures. This primitive enables validators to independently sign blocks, with their signatures then publicly aggregated into a compact aggregate signature. In this work, we introduce a family of hash-based signature schemes as post-quantum alternatives to BLS. We consider the folklore method of aggregating signatures via (hash-based) succinct arguments, and our work is focused on instantiating the underlying signature scheme. The proposed schemes are variants of the XMSS signature scheme, analyzed within a novel and unified framework. While being generic, this framework is designed to minimize security loss, facilitating efficient parameter selection. A key feature of our work is the avoidance of random oracles in the security proof. Instead, we define explicit standard model requirements for the underlying hash functions. This eliminates the paradox of simultaneously treating hash functions as random oracles and as explicit circuits for aggregation. Furthermore, this provides cryptanalysts with clearly defined targets for evaluating the security of hash functions. Finally, we provide recommendations for practical instantiations of hash functions and concrete parameter settings, supported by known and novel heuristic bounds on the standard model properties.
Last updated:  2025-04-10
Perfect 2-Party Computation from Additive Somewhat Homomorphic Encryption
Jonathan Trostle
Two-party computation has been an active area of research since Yao's breakthrough results on garbled circuits. We present secret-key, somewhat additively homomorphic schemes where the client has perfect privacy (the server can be computationally unbounded). Our basic scheme is somewhat additively homomorphic, and we extend it to support multiplication. The server handles circuit multiplication gates by returning the multiplicands to the client, which updates the decryption key so that the original ciphertext vector includes the encrypted multiplication-gate outputs. We give a 2-party protocol that also incorporates server inputs, where the client has perfect privacy. Server privacy holds against a computationally bounded adversary since it depends on the hardness of the subset sum problem. Correctness of the server in the malicious model can be verified by a third party, where both client and server privacy are protected from the verifier.
Last updated:  2025-04-09
ColliderVM: Stateful Computation on Bitcoin without Fraud Proofs
Victor I. Kolobov, Avihu M. Levy, and Moni Naor
Bitcoin script cannot easily access and store state information onchain without an upgrade such as BIP-347 (OP_CAT); this makes performing general (stateful) computation on Bitcoin impossible to do directly. Despite this limitation, several approaches have been proposed to bypass it, with BitVM being the closest to production. BitVM enables fraud-proof-based computation on Bitcoin, relying on a $1$-out-of-$n$ honesty assumption. This left open the question of whether it is possible to achieve computation under the same honesty assumption without requiring onlookers to ensure validity through fraud proofs. In this note, we answer this question affirmatively by introducing ColliderVM, a new approach for performing computation on Bitcoin today. Crucially, this approach eliminates some capital-inefficiency concerns stemming from reliance on fraud proofs. For our construction, a key point is to replace the Lamport or Winternitz signature-based storage component in contemporary protocols with a hash-collision-based commitment. Our techniques are inspired by ColliderScript, but are more efficient, reducing the number of hash evaluations required by a factor of at least $10\,000$. With it, we estimate that the Bitcoin script length for STARK proof verification becomes practical, allowing STARKs to be used alongside the pairing-based proof systems common in applications today.
Last updated:  2025-04-09
Legacy Encryption Downgrade Attacks against LibrePGP and CMS
Falko Strenzke and Johannes Roth
This work describes vulnerabilities in the specification of AEAD modes and Key Wrap in two cryptographic message formats. Firstly, this applies to AEAD packets as introduced in the novel LibrePGP specification that is implemented by the widely used GnuPG application. Secondly, we describe vulnerabilities in the AES-based AEAD schemes as well as the Key Wrap Algorithm specified in the Cryptographic Message Syntax (CMS). These new attacks exploit the possibility of downgrading AEAD or AES Key Wrap ciphertexts to valid legacy CFB- or CBC-encrypted related ciphertexts, and require that the attacker learns the content of the legacy decryption result. This can happen in two principal ways: either the human recipient returns the decryption output to the attacker as a quote, or a programmatic decryption oracle in the receiving system reveals information about the plaintext. The attacks enable the decryption of low-entropy plaintext blocks in AEAD ciphertexts and, in the case of LibrePGP, also the manipulation of existing AEAD ciphertexts. For AES Key Wrap in CMS, full key decryption is possible. Some of the attacks require multiple successful oracle queries. The attacks thus demonstrate that CCA2 security is not achieved by LibrePGP and CMS AEAD or Key Wrap encryption in the presence of a legacy cipher-mode decryption oracle.
Last updated:  2025-04-09
Quarantined-TreeKEM: a Continuous Group Key Agreement for MLS, Secure in Presence of Inactive Users
Céline Chevalier, Guirec Lebrun, Ange Martinelli, and Abdul Rahman Taleb
The recently standardized secure group messaging protocol Messaging Layer Security (MLS) is designed to ensure asynchronous communications within large groups, with an almost-optimal communication cost and the same security level as point-to-point secure messaging protocols such as Signal. In particular, the core sub-protocol of MLS, a Continuous Group Key Agreement (CGKA) called TreeKEM, must generate a common group key that respects the fundamental security properties of post-compromise security and forward secrecy, which mitigate the effects of user corruption over time. Most research on CGKAs has focused on how to improve these two security properties. However, post-compromise security and forward secrecy require the active participation of, respectively, all compromised users and all users within the group. Inactive users – who remain offline for long periods – no longer update their encryption keys and therefore represent a vulnerability for the entire group. This issue has already been identified in the MLS standard, but no solution, other than expelling these inactive users after some disconnection time, has been found. We propose here a CGKA protocol based on TreeKEM and fully compatible with the MLS standard, which implements a quarantine mechanism for inactive users in order to mitigate the risk they induce during their inactivity period and before they are removed from the group. This mechanism updates the inactive users' encryption keys on their behalf and secures these keys with a secret sharing scheme. If some of the inactive users eventually reconnect, their quarantine stops and they are able to recover all the messages that were exchanged during their offline period. Our Quarantined-TreeKEM protocol thus increases the security of the original TreeKEM, with a very limited – and sometimes negative – communication overhead.
Last updated:  2025-04-09
Fully Homomorphic Encryption for Cyclotomic Prime Moduli
Robin Geelen and Frederik Vercauteren
This paper presents a Generalized BFV (GBFV) fully homomorphic encryption scheme that encrypts plaintext spaces of the form $\mathbb{Z}[x]/(\Phi_m(x), t(x))$ with $\Phi_m(x)$ the $m$-th cyclotomic polynomial and $t(x)$ an arbitrary polynomial. GBFV encompasses both BFV where $t(x) = p$ is a constant, and the CLPX scheme (CT-RSA 2018) where $m = 2^k$ and $t(x) = x - b$ is a linear polynomial. The latter can encrypt a single huge integer modulo $\Phi_m(b)$, has much lower noise growth than BFV, but it is not known to be efficiently bootstrappable. We show that by a clever choice of $m$ and higher degree polynomial $t(x)$, our scheme combines the SIMD capabilities of BFV with the low noise growth of CLPX, whilst still being efficiently bootstrappable. Moreover, we present parameter families that natively accommodate packed plaintext spaces defined by a large cyclotomic prime, such as the Fermat prime $\Phi_2(2^{16}) = 2^{16} + 1$ and the Goldilocks prime $\Phi_6(2^{32}) = 2^{64} - 2^{32} + 1$. These primes are often used in homomorphic encryption applications and zero-knowledge proof systems. Due to the lower noise growth, GBFV can evaluate much deeper circuits compared to native BFV in the same ring dimension. As a result, we can evaluate either larger circuits or work with smaller ring dimensions. In particular, we can natively bootstrap GBFV at 128-bit security already at ring dimension $n = 2^{14}$, which was impossible before. We implemented the GBFV scheme on top of the SEAL library and achieve a latency of only 2 seconds to bootstrap a ciphertext encrypting up to 8192 elements modulo $2^{16} + 1$.
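As a quick sanity check (our illustration, not code from the paper), the two cyclotomic primes named above can be reproduced by evaluating the corresponding cyclotomic polynomials and testing primality with a deterministic Miller-Rabin test, which is exact for all inputs below roughly 3.3 * 10**24 with the fixed base set used here:

```python
def is_prime(n, bases=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)):
    """Deterministic Miller-Rabin for n < 3.3 * 10**24 with these bases."""
    if n < 2:
        return False
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in bases:
        if a % n == 0:
            continue  # base coincides with n; skip (n is prime in that case)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses compositeness
    return True

phi2 = lambda x: x + 1          # second cyclotomic polynomial
phi6 = lambda x: x * x - x + 1  # sixth cyclotomic polynomial

fermat = phi2(2**16)       # 2^16 + 1 = 65537, the Fermat prime
goldilocks = phi6(2**32)   # 2^64 - 2^32 + 1, the Goldilocks prime
```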
Last updated:  2025-04-09
Revisiting Oblivious Top-$k$ Selection with Applications to Secure $k$-NN Classification
Kelong Cong, Robin Geelen, Jiayi Kang, and Jeongeun Park
An oblivious Top-$k$ algorithm selects the $k$ smallest elements from $d$ elements while ensuring that the sequence of operations and memory accesses does not depend on the input. In 1969, Alekseev proposed an oblivious Top-$k$ algorithm with complexity $O(d \log^2{k})$, which was later improved by Yao in 1980 for small $k \ll \sqrt{d}$. In this paper, we revisit the literature on oblivious Top-$k$ and propose another improvement of Alekseev's method that outperforms both for large $k = \Omega(\sqrt{d})$. Our construction is equivalent to applying a new truncation technique to Batcher's odd-even sorting algorithm. In addition, we propose a combined network that takes advantage of both Yao's technique and ours, achieving the best concrete performance, in terms of the number of comparators, for any $k$. To demonstrate the efficiency of our combined Top-$k$ network, we implement a secure non-interactive $k$-nearest-neighbors classifier using homomorphic encryption as an application. Compared with the work of Zuber and Sirdey (PoPETS 2021), where oblivious Top-$k$ was realized with complexity $O(d^2)$, our experimental results show a speedup of up to 47 times (not accounting for differences in CPU) for $d = 1000$.
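The truncation technique itself is the paper's contribution and is not reproduced here, but the baseline it modifies, Batcher's odd-even merge sort, is a data-oblivious network: every comparator position is fixed in advance, independent of the data. A minimal sketch for power-of-two input sizes:

```python
def compare_exchange(a, i, j):
    # Data-independent compare-and-swap: always touches positions i and j.
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]

def oddeven_merge(a, lo, n, r):
    # Merge the two sorted halves of a[lo:lo+n] with comparators at stride r.
    step = r * 2
    if step < n:
        oddeven_merge(a, lo, n, step)
        oddeven_merge(a, lo + r, n, step)
        for i in range(lo + r, lo + n - r, step):
            compare_exchange(a, i, i + r)
    else:
        compare_exchange(a, lo, lo + r)

def oddeven_merge_sort(a, lo=0, n=None):
    # Batcher's odd-even merge sort; n must be a power of two.
    if n is None:
        n = len(a)
    if n > 1:
        m = n // 2
        oddeven_merge_sort(a, lo, m)
        oddeven_merge_sort(a, lo + m, m)
        oddeven_merge(a, lo, n, 1)
```

The paper's observation is that when only the $k$ smallest outputs are needed, part of this fixed comparator schedule can be truncated away.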
Last updated:  2025-04-09
SLAMP-FSS: Two-Party Multi-Point Function Secret Sharing from Simple Linear Algebra
Erki Külaots, Toomas Krips, Hendrik Eerikson, and Pille Pullonen-Raudvere
Multi-point function secret sharing (FSS) is a building block for pseudorandom correlation generators used in the novel silent correlation generation methods for various secure multiparty computation applications. However, the main construction used so far is the naive approach of combining several single-point functions. In this paper, we propose an efficient and natural generalization of the point-function FSS scheme of Boyle et al. (2016) [BGI16], using a tree structure, a pseudorandom generator, and systems of linear equations. Our schemes are more efficient in the evaluation phase than other previously proposed multi-point FSS schemes, while also being more flexible and comparable in other efficiency parameters. Our setup phase is similar in cost to the previous best versions, while the full evaluation, which is by far the costliest step, is more efficient.
Last updated:  2025-04-09
An Efficient SNARK for Field-Programmable and RAM Circuits
Jehyuk Jang and Jamie Judd
The advancement of succinct non-interactive arguments of knowledge (SNARKs) with constant proof size has significantly enhanced the efficiency and privacy of verifiable computation. Verifiable computation finds applications in distributed computing networks, particularly in scenarios where nodes cannot generally be trusted, such as blockchains. However, fully harnessing the efficiency of a SNARK becomes challenging when the computing targets in the network change frequently, as SNARK verification can involve some untrusted preprocessing of the target, which is expected to be reproduced by other nodes. This problem can be addressed with two approaches: one relieves the reproduction overhead by reducing the dimensionality of the preprocessing data; the other utilizes verifiable machine computation, which eliminates the dependency on preprocessing at the cost of increased SNARK proving and verification overhead. In this paper, we propose a new SNARK with constant proof size applicable to both approaches. The proposed SNARK combines the efficiency of the Groth16 protocol, which lacks universality for new problems, with the universality of the PlonK protocol, which has significantly larger preprocessing data. Consequently, we demonstrate that our proposed SNARK maintains both efficiency and universality while significantly reducing the dimensionality of the preprocessing data. Furthermore, our SNARK can be seamlessly applied to verifiable machine computation, requiring a proof size about four to ten times smaller than related works.
Last updated:  2025-04-09
Cauchyproofs: Batch-Updatable Vector Commitment with Easy Aggregation and Application to Stateless Blockchains
Zhongtang Luo, Yanxue Jia, Alejandra Victoria Ospina Gracia, and Aniket Kate
Stateless blockchain designs have emerged to address the challenge of growing blockchain size using succinct global states. Previous works have developed vector commitments that support proof updates and aggregation to be used as such states. However, maintaining proofs for multiple users still demands significant computational resources, particularly to update proofs with every transaction. This paper introduces Cauchyproofs, a batch-updatable vector commitment that enables proof-serving nodes to efficiently update proofs in quasi-linear time relative to the number of users and transactions, utilizing an optimized KZG scheme to achieve complexity $O\left(\left(\left|\vec{\alpha}\right| + \left|\vec{\beta}\right|\right) \log^2 \left(\left|\vec{\alpha}\right| + \left|\vec{\beta}\right|\right)\right)$ for $|\vec{\alpha}|$ users and $|\vec{\beta}|$ transactions, compared to the previous $O\left(\left|\vec{\alpha}\right|\cdot\left|\vec{\beta}\right|\right)$ approaches. This advancement reduces the computational burden on proof-serving nodes, allowing efficient proof maintenance across large user groups. We demonstrate that our approach is approximately eight times faster than the naive approach at Ethereum-level transaction throughput when batch updates are performed hourly. Additionally, we present a novel matrix representation of KZG proofs utilizing Cauchy matrices, enabling faster all-proof computations with fewer elliptic-curve operations. Finally, we propose an algorithm for history proof queries, supporting retrospective proof generation with high efficiency. Our contributions substantially enhance the scalability and practicality of proof-serving nodes in stateless blockchain frameworks.
Last updated:  2025-04-09
Security in the Presence of Key Reuse: Context-Separable Interfaces and their Applications
Christopher Patton and Thomas Shrimpton
Key separation is often difficult to enforce in practice. While key reuse can be catastrophic for security, we know of a number of cryptographic schemes for which it is provably safe. But existing formal models, such as the notions of joint security (Haber-Pinkas, CCS ’01) and agility (Acar et al., EUROCRYPT ’10), do not address the full range of key-reuse attacks—in particular, those that break the abstraction of the scheme, or exploit protocol interactions at a higher level of abstraction. This work attends to these vectors by focusing on two key elements: the game that codifies the scheme under attack, as well as its intended adversarial model; and the underlying interface that exposes secret key operations for use by the game. Our main security experiment considers the implications of using an interface (in practice, the API of a software library or a hardware platform such as TPM) to realize the scheme specified by the game when the interface is shared with other unspecified, insecure, or even malicious applications. After building up a definitional framework, we apply it to the analysis of two real-world schemes: the EdDSA signature algorithm and the Noise protocol framework. Both provide some degree of context separability, a design pattern for interfaces and their applications that aids in the deployment of secure protocols.
Last updated:  2025-04-08
The many faces of Schnorr: a toolkit for the modular design of threshold Schnorr signatures
Victor Shoup
Recently, a number of highly optimized threshold signing protocols for Schnorr signatures have been proposed. While these proposals contain important new techniques, some of them present and analyze these techniques in very specific contexts, making it less than obvious how these techniques can be adapted to other contexts, or combined with one another. The main goal of this paper is to abstract out and extend in various ways some of these techniques, building a toolbox of techniques that can be easily combined in different ways and in different contexts. To this end, we present security results for various "enhanced" modes of attack on the Schnorr signature scheme in the non-distributed setting, and we demonstrate how to reduce the security in the distributed threshold setting to these enhanced modes of attack in the non-distributed setting. This results in a very modular approach to protocol design and analysis, which can be used to easily design new threshold Schnorr protocols that enjoy better security and/or performance properties than existing ones.
Last updated:  2025-04-08
Binding Security of Implicitly-Rejecting KEMs and Application to BIKE and HQC
Juliane Krämer, Patrick Struck, and Maximiliane Weishäupl
In this work, we continue the analysis of the binding properties of implicitly-rejecting key-encapsulation mechanisms (KEMs) obtained via the Fujisaki-Okamoto (FO) transform. These binding properties, in earlier literature known under the term robustness, thwart attacks that can arise when using KEMs in complex protocols. Recently, Cremers et al. (CCS'24) introduced a framework for binding notions, encompassing previously existing but also new ones. While implicitly-rejecting FO-KEMs have been analyzed with respect to multiple of these notions, there are still several gaps. We complete the picture by providing positive and negative results for the remaining notions. Further, we show how to apply our results to the code-based KEMs BIKE and HQC, which were round-4 candidates in NIST's PQC standardization process. Through this, we close a second gap as our results complete the analysis of the binding notions for the NIST round-4 KEMs. Finally, we give a modified version of the FO transform that achieves all binding notions.
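For readers unfamiliar with the mechanism under study: an implicitly-rejecting FO-KEM never outputs a decryption error; on a malformed ciphertext, decapsulation instead returns a pseudorandom key derived from a secret rejection seed. The following toy sketch illustrates only this control flow (the inner "PKE" is a hash-based placeholder where the public key doubles as the secret key, and is not a secure scheme; the names G, K, J merely follow common FO notation):

```python
import hashlib
import os

def H(tag, *parts):
    """Domain-separated SHA-256 hash."""
    h = hashlib.sha256(tag)
    for p in parts:
        h.update(p)
    return h.digest()

# Toy stand-in for the inner deterministic PKE (NOT secure).
def enc(pk, m, r):
    return r + bytes(x ^ y for x, y in zip(m, H(b"pad", pk, r)))

def dec(sk, c):
    r, body = c[:32], c[32:]
    return bytes(x ^ y for x, y in zip(body, H(b"pad", sk, r)))

def keygen():
    sk = os.urandom(32)
    return sk, sk, os.urandom(32)   # pk, sk, implicit-rejection seed s

def encaps(pk):
    m = os.urandom(32)
    r = H(b"G", m, pk)              # FO derandomization of the encryption
    c = enc(pk, m, r)
    return c, H(b"K", m, c)

def decaps(pk, sk, s, c):
    m = dec(sk, c)
    if enc(pk, m, H(b"G", m, pk)) == c:   # re-encryption check
        return H(b"K", m, c)
    return H(b"J", s, c)  # implicit rejection: pseudorandom key, no error signal
```

Binding questions of the kind studied above ask, roughly, whether a ciphertext or key produced this way can be tied to more than one key pair or plaintext.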
Last updated:  2025-04-08
Side-Channel and Fault Injection Attacks on VOLEitH Signature Schemes: A Case Study of Masked FAEST
Sönke Jendral and Elena Dubrova
Ongoing efforts to transition to post-quantum public-key cryptosystems have created the need for algorithms with a variety of performance characteristics and security assumptions. Among the candidates in NIST's post-quantum standardisation process for additional digital signatures is FAEST, a Vector Oblivious Linear Evaluation in-the-Head (VOLEitH)-based scheme, whose security relies on the one-wayness of the Advanced Encryption Standard (AES). The VOLEitH paradigm enables competitive performance and signature sizes under conservative security assumptions. However, since it was introduced recently, in 2023, its resistance to physical attacks has not yet been analysed. In this paper, we present the first security analysis of VOLEitH-based signature schemes in the context of side-channel and fault injection attacks. We demonstrate four practical attacks on a masked implementation of FAEST on an ARM Cortex-M4, capable of recovering the full secret key with high probability (greater than 0.87) from a single signature. These attacks exploit vulnerabilities of components specific to VOLEitH schemes and FAEST, such as the parallel all-but-one vector commitments, the VOLE generation, and the AES proof generation. Finally, we propose countermeasures to mitigate these attacks and enhance the physical security of VOLEitH-based signature schemes.
Last updated:  2025-04-08
More NTRU+Sign Signatures from Cyclotomic Trinomials
Ga Hee Hong, Joo Woo, Jonghyun Kim, Minkyu Kim, Hochang Lee, and Jong Hwan Park
Recently, $\mathsf{NTRU}$+$\mathsf{Sign}$ was proposed as a new compact signature scheme following the `Fiat-Shamir with Aborts' (FSwA) framework. Its compactness is mainly based on a novel NTRU-based key structure that fits well with bimodal distributions in the FSwA framework. However, despite its compactness, $\mathsf{NTRU}$+$\mathsf{Sign}$ fails to provide a diverse set of parameters that can meet desired security levels. This limitation stems from its reliance on the ring $\mathbb{Z}_q[x]/\langle x^n+1 \rangle$, where $n$ is restricted to powers of two, limiting the flexibility in selecting appropriate security levels. To overcome this limitation, we propose a revised version of $\mathsf{NTRU}$+$\mathsf{Sign}$ by adopting the ring $\mathbb{Z}_q[x]/\langle x^n-x^{n/2}+1\rangle$ from cyclotomic trinomials, where $n=2^{i}3^{j}$ for some positive integers $i$ and $j$. Our parameterization offers three distinct security levels: approximately $120$, $190$, and $260$ bits, while preserving the compactness achieved in $\mathbb{Z}_q[x]/\langle x^n+1 \rangle$. We implement these re-parameterized $\mathsf{NTRU}$+$\mathsf{Sign}$ schemes, showing that the performance of $\mathsf{NTRU}$+$\mathsf{Sign}$ over cyclotomic trinomials remains comparable to previous lattice-based signature schemes such as $\mathsf{Dilithium}$ and $\mathsf{HAETAE}$.
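For such $n$, the trinomial $x^n - x^{n/2} + 1$ is the $3n$-th cyclotomic polynomial $\Phi_{3n}(x)$ (e.g., $\Phi_{36}(x) = x^{12} - x^6 + 1$), which is why these rings are called cyclotomic trinomial rings. A small sanity check (our illustration, not the authors' code) recomputes $\Phi_{3n}$ from the standard recursion $\Phi_m(x) = (x^m - 1) / \prod_{d \mid m,\, d < m} \Phi_d(x)$ and compares it against the trinomial:

```python
def poly_divmod(num, den):
    """Divide num by a monic den over the integers.
    Polynomials are coefficient lists, lowest degree first."""
    r = list(num)
    q = [0] * max(1, len(r) - len(den) + 1)
    while len(r) >= len(den):
        c = r[-1]
        off = len(r) - len(den)
        if c:
            q[off] = c
            for i, d in enumerate(den):
                r[off + i] -= c * d
        r.pop()
    return q, r

def cyclotomic(m):
    """m-th cyclotomic polynomial: (x^m - 1) divided by all Phi_d, d | m, d < m."""
    num = [-1] + [0] * (m - 1) + [1]          # x^m - 1
    for d in range(1, m):
        if m % d == 0:
            num, rem = poly_divmod(num, cyclotomic(d))
            assert all(c == 0 for c in rem)   # the division is always exact
    return num

def trinomial(n):
    """x^n - x^(n/2) + 1 as a coefficient list (n even)."""
    t = [0] * (n + 1)
    t[0], t[n // 2], t[n] = 1, -1, 1
    return t
```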
Last updated:  2025-04-08
Lattice-based Cryptography: A survey on the security of the lattice-based NIST finalists
Koen de Boer and Wessel van Woerden
This survey, mostly written in the years 2022-2023, is meant to be as short a description as possible of the current state-of-the-art lattice attacks on lattice-based cryptosystems, without losing the essence of the matter. The main focus is the security of the NIST finalists and alternatives that are based on lattices, namely CRYSTALS-Kyber, CRYSTALS-Dilithium, and Falcon. Instead of going through these cryptosystems case by case, this survey considers attacks on the underlying hardness assumptions: in the case of the mentioned lattice-based schemes, these are (variants of) LWE (Learning With Errors) and NTRU.
Last updated:  2025-04-08
SoK: Self-Generated Nudes over Private Chats: How Can Technology Contribute to a Safer Sexting?
Joel Samper and Bernardo Ferreira
More and more people take advantage of mobile apps to strike up relationships and casual contacts. This sometimes results in the sharing of self-generated nudes. While this opens a way for sexual exploration, it also raises concerns. In this paper, we review existing technology-assisted permissive proposals/features that provide security, privacy, or accountability benefits when sharing nudes online. To do so, we performed a systematic literature review, combing through 10,026 search results and cross-references, and we identified real-world solutions by surveying OS features and 52 dating, messaging, and social network apps. We systematized knowledge by defining a sexting threat model, deriving a taxonomy of the proposals/features, discussing some of their shortcomings, organizing privacy-related concepts, and providing takeaways with some directions for future research and development. Our study found a very diverse ecosystem of academic proposals and app features, showing that safer sexting goes far beyond nude detection. None of the techniques represents the ultimate solution for all threats, but each contributes to safer sexting in a different way.
Last updated:  2025-04-08
Inner Product Masked Integral Distinguishers and Integral Sets over Large Finite Fields (Full Version)
Weizhe Wang, Deng Tang, and Haoyang Wang
The security and performance of many advanced protocols heavily rely on the underlying symmetric-key primitives. These primitives, known as arithmetization-oriented (\texttt{AO}) ciphers, focus on arithmetic metrics over large finite fields. Most \texttt{AO} ciphers are vulnerable to algebraic attacks, especially integral attacks. In this work, we explore integral attacks over large finite fields. By combining integral distinguishers with inner product masks, we propose inner product masked (IPM) integral distinguishers and establish a system of equations concerning the inner product mask. Additionally, we provide theoretical lower bounds on the complexity of IPM integral distinguishers in certain cases. For practical applications, we introduce IPM integral sets, which effectively characterize the integral property of sets over the finite field $\mathbb{F}_{p^n}$. Besides the IPM sets based on additive subgroups and multiplicative subgroups, we present a method to obtain sets with improved integral properties by merging existing sets. Exploring different classes of IPM integral sets can help us find lower-complexity integral distinguishers. Combining with monomial detection techniques, we propose a framework for searching for integral distinguishers based on the IPM integral set. Our framework is compatible with various monomial detection techniques, including general monomial prediction proposed by Cui et al. at ASIACRYPT 2022 and coefficient grouping invented by Liu et al. at EUROCRYPT 2023. Previous integral distinguishers over $\mathbb{F}_{2^n}$ were predominantly based on additive subgroups. With IPM integral sets based on multiplicative subgroups and merged sets, we successfully obtain IPM integral distinguishers with lower complexity for \textsf{MiMC}, \textsf{CIMINION}, and \textsf{Chaghri}. We believe that our work will provide new insights into integral attacks over large finite fields.
Last updated:  2025-04-08
Malicious Security for SCALES: Outsourced Computation with Ephemeral Servers
Anasuya Acharya, Carmit Hazay, Vladimir Kolesnikov, and Manoj Prabhakaran
The SCALES (Small Clients And Larger Ephemeral Servers) model is a recently proposed model for MPC (Acharya et al., TCC 2022). While the SCALES model offers several attractive features for practical large-scale MPC, the result of Acharya et al. offered only semi-honest secure protocols in this model. We present a new efficient SCALES protocol secure against malicious adversaries, for general Boolean circuits. We start with the base construction of Acharya et al. and design and use a suite of carefully defined building blocks that may be of independent interest. The resulting protocol is UC-secure without honest majority, with a CRS and bulletin board as setups, and allows publicly identifying deviations from correct execution.
Last updated:  2025-04-08
SCALES: MPC with Small Clients and Larger Ephemeral Servers
Anasuya Acharya, Carmit Hazay, Vladimir Kolesnikov, and Manoj Prabhakaran
The recently proposed YOSO model is a groundbreaking approach to MPC, executable on a public blockchain, circumventing adaptive player corruption by hiding the corruption targets until they are worthless. Players are selected unpredictably from a large pool to perform MPC subtasks, in which each selected player sends a single message (and reveals their identity). While YOSO MPC has attractive asymptotic complexity, unfortunately, it is concretely prohibitively expensive due to the cost of its building blocks. We propose a modification to the YOSO model that preserves resilience to adaptive server corruption, but allows for much more efficient protocols. In SCALES (Small Clients And Larger Ephemeral Servers) only the servers facilitating the MPC computation are ephemeral (unpredictably selected and ``speak once''). Input providers (clients) publish problem instances and collect the output, but do not otherwise participate in computation. SCALES offers attractive features, and improves over YOSO protocols in outsourcing MPC to a large pool of servers under adaptive corruption. We build SCALES from rerandomizable garbling schemes, which is a contribution of independent interest, with additional applications.
Last updated:  2025-04-08
Periodic Table of Cryptanalysis: Geometric Approach with Different Bases
Kai Hu, Chi Zhang, Chengcheng Chang, Jiashu Zhang, Meiqin Wang, and Thomas Peyrin
In the past three decades, we have witnessed the creation of various cryptanalytic attacks. However, relatively little research has been done on their potential underlying connections. The geometric approach, developed by Beyne in 2021, shows that a cipher can be viewed as a linear operation when we treat its input and output as points in an induced \textit{free vector space}. By performing a change of basis for the input and output spaces, one can obtain various transition matrices. Linear, differential, and (ultrametric) integral attacks have been well reinterpreted by Beyne's theory in a unified way. Thus far, the geometric approach has always used the same basis for the input and output spaces. We observe here that this restriction is unnecessary, and allowing different bases makes the geometric approach more flexible and able to interpret/predict more attack types. Given some set of bases for the input and output spaces, a family of basis-based attacks is defined by combining them, and all attacks in this family can be studied in a unified automatic search method. We revisit three kinds of bases from previous geometric approach papers and extend them to four extra ones by introducing new rules when generating new bases. With the final seven bases, we can obtain $7^{2d}$ different basis-based attacks in the $d$-th order spaces, where the \textit{order} is defined as the number of messages used in one sample during the attack. We then provide four examples of applications of this new framework. First, we show that by choosing a better pair of bases, Beyne and Verbauwhede's ultrametric integral cryptanalysis can be interpreted as a single element of a transition matrix rather than as a linear combination of elements. This unifies the ultrametric integral cryptanalysis with the previous linear and quasi-differential attacks. 
Second, we revisit the multiple-of-$n$ property with our refined geometric approach and exhibit new multiple-of-$n$ distinguishers that reach more rounds of the SKINNY-64 cipher than the state-of-the-art. Third, we study the multiple-of-$n$ property in the first-order case, which is similar to the subspace trail except that it is the divisibility property that is considered. This leads to a new distinguisher for 11-round-reduced SKINNY-64. Finally, we give a closed formula for differential-linear approximations without any assumptions, even confirming that the two differential-linear approximations of Simeck-32 and Simeck-48 found by Hadipour \textit{et al.} are deterministic independently of concrete key values. We emphasize that none of these applications was possible before.
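To see concretely what a change of basis does in this setting, here is a toy sketch of our own (not taken from the paper): over the free vector space, a small S-box becomes a 0/1 transition matrix, and conjugating it by the Walsh-Hadamard basis yields the correlation matrix familiar from linear cryptanalysis. The S-box and parameters below are purely illustrative.

```python
import numpy as np

n = 2                      # toy 2-bit S-box (hypothetical example)
sbox = [0, 3, 1, 2]        # a permutation of {0, 1, 2, 3}

# Transition matrix over the free vector space: T[y][x] = 1 iff sbox[x] = y.
T = np.zeros((2**n, 2**n), dtype=int)
for x, y in enumerate(sbox):
    T[y][x] = 1

# Walsh-Hadamard basis change: H[u][x] = (-1)^(u.x), the parity of u AND x.
H = np.array([[(-1)**bin(u & x).count("1") for x in range(2**n)]
              for u in range(2**n)])

# Conjugating T by this basis gives the correlation matrix of linear
# cryptanalysis: C[v][u] is the correlation of the approximation u -> v.
C = (H @ T @ H) / 2**n
```

Here C[0][0] is always 1 (the trivial approximation), and since the S-box is a permutation, C is orthogonal; choosing other bases for the input and output spaces yields the differential or integral transition matrices the abstract refers to.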
Last updated:  2025-04-07
A Unified Treatment of Anamorphic Encryption
Wonseok Choi, Daniel Collins, Xiangyu Liu, and Vassilis Zikas
Receiver anamorphic encryption (hereafter anamorphic encryption), introduced by Persiano et al. at Eurocrypt 2022, allows for a double message to be symmetrically hidden in a public-key encryption ciphertext via a pre-shared "double key". In anamorphic encryption, confidentiality must be preserved even if the adversary (or the "dictator") has access to all regular keys. It has been the subject of several works since its introduction that explore tweaks and extensions to the core primitive. However, this study has not been systematic, and so disparate security notions have been proposed, for which their relationships are not clear. Moreover, there are clear gaps in the literature, including in the treatment of chosen-ciphertext attacks. In this work, we conduct a systematic study of receiver anamorphic encryption. We unify existing security notions and propose several new ones, and prove implications and separations between them. Our main findings are as follows. First, we identify gaps in previous security notions against an anamorphic "sender", namely an adversary who is given the double key, and propose three new security notions to bridge these gaps. We also identify several gaps in the treatment of chosen-ciphertext attacks, a setting only very recently considered in anamorphic cryptography (Jaeger and Stracovsky, Asiacrypt 2024). Moreover, observing that no previous construction achieves all desirable security properties in this setting, we propose a suitable construction that does. Finally, we propose several security notions for "asymmetric" anamorphic encryption, and explore the case here where the dictator and the anamorphic sender collude.
Last updated:  2025-04-07
An Optimized Instantiation of Post-Quantum MQTT protocol on 8-bit AVR Sensor Nodes
YoungBeom Kim and Seog Chung Seo
Since the selection of the National Institute of Standards and Technology (NIST) Post-Quantum Cryptography (PQC) standardization algorithms, research on integrating PQC into security protocols such as TLS/SSL, IPSec, and DNSSEC has been actively pursued. However, PQC migration for Internet of Things (IoT) communication protocols remains largely unexplored. Embedded devices in IoT environments have limited computational power and memory, making it crucial to optimize PQC algorithms for efficient computation and minimal memory usage when deploying them on low-spec IoT devices. In this paper, we introduce KEM-MQTT, a lightweight and efficient Key Encapsulation Mechanism (KEM) for the Message Queuing Telemetry Transport (MQTT) protocol, widely used in IoT environments. Our approach applies the NIST KEM algorithm CRYSTALS-Kyber (Kyber) while leveraging MQTT’s characteristics and sensor node constraints. To enhance efficiency, we address certificate verification issues and adopt KEMTLS to eliminate the need for post-quantum digital signature algorithms (PQC-DSA) in mutual authentication. As a result, KEM-MQTT retains its lightweight properties while maintaining the security guarantees of TLS 1.3. We identify inefficiencies in existing Kyber implementations on 8-bit AVR microcontrollers (MCUs), which are highly resource-constrained. To address this, we propose novel implementation techniques that optimize Kyber for AVR, focusing on high-speed execution, reduced memory consumption, and secure implementation, including Signed LookUp-Table (LUT) Reduction. Our optimized Kyber achieves performance gains of 81%, 75%, and 85% in the KeyGen, Encaps, and Decaps processes, respectively, compared to the reference implementation. With approximately 3 KB of stack usage, our Kyber implementation surpasses all state-of-the-art Elliptic Curve Diffie-Hellman (ECDH) implementations. 
Finally, in KEM-MQTT using Kyber-512, an 8-bit AVR device completes the handshake preparation process in 4.32 seconds, excluding the physical transmission and reception times.
Last updated:  2025-04-06
Pacmann: Efficient Private Approximate Nearest Neighbor Search
Mingxun Zhou, Elaine Shi, and Giulia Fanti
We propose a new private Approximate Nearest Neighbor (ANN) search scheme named Pacmann that allows a client to perform ANN search in a vector database without revealing the query vector to the server. Unlike prior constructions that run encrypted search on the server side, Pacmann carefully offloads limited computation and storage to the client, no longer requiring computationally-intensive cryptographic techniques. Specifically, clients run a graph-based ANN search, where in each hop on the graph, the client privately retrieves local graph information from the server. To make this efficient, we combine two ideas: (1) we adapt a leading graph-based ANN search algorithm to be compatible with private information retrieval (PIR) for subgraph retrieval; (2) we use a recent class of PIR schemes that trade offline preprocessing for online computational efficiency. Pacmann achieves significantly better search quality than the state-of-the-art private ANN search schemes, showing up to 2.5$\times$ better search accuracy on real-world datasets than prior work and reaching 90\% quality of a state-of-the-art non-private ANN algorithm. Moreover, on large datasets with up to 100 million vectors, Pacmann shows better scalability than prior private ANN schemes, with up to $63\%$ reduction in computation time and $24\%$ reduction in overall latency.
Last updated:  2025-04-06
Revisiting the attacker's knowledge in inference attacks against Searchable Symmetric Encryption
Marc Damie, Jean-Benoist Leger, Florian Hahn, and Andreas Peter
Encrypted search schemes have been proposed to address growing privacy concerns. However, several leakage-abuse attacks have highlighted some security vulnerabilities. Recent attacks assumed an attacker's knowledge containing data "similar" to the indexed data. However, this vague assumption is barely discussed in the literature: how likely is it for an attacker to obtain "similar enough" data? Our paper provides novel statistical tools usable on any attack in this setting to analyze its sensitivity to data similarity. First, we introduce a mathematical model based on statistical estimators to analytically understand the attackers' knowledge and the notion of similarity. Second, we conceive statistical tools to model the influence of the similarity on the attack accuracy. We apply our tools on three existing attacks to answer questions such as: is similarity the only factor influencing accuracy of a given attack? Third, we show that the enforcement of a maximum index size can make the "similar-data" assumption harder to satisfy. In particular, we propose a statistical method to estimate an appropriate maximum size for a given attack and dataset. For the best known attack on the Enron dataset, a maximum index size of 200 guarantees (with high probability) that the attack accuracy remains below 5%.
Last updated:  2025-04-06
Shuffle Shamir Secret Shares Uniformly with Linear Online Communication
Jiacheng Gao, Yuan Zhang, and Sheng Zhong
In this paper, we revisit the shuffle protocol for Shamir secret sharing. Upon examining previous works, we observe that existing constructions either produce a non-uniform shuffle or require large communication and round complexity, e.g., exponential in the number of parties. We propose two shuffle protocols, both of which shuffle uniformly within $O(\frac{k + l}{\log k}n^2m\log m)$ communication for shuffling rows of an $m\times l$ matrix shared among $n$ parties, where $k\leq m$ is a parameter balancing communication and computation. Our first construction is more concretely efficient, while our second construction requires only $O(nml)$ online communication within $O(n)$ rounds. In terms of overall communication and online communication, both shuffle protocols outperform current optimal shuffle protocols for Shamir secret sharing. At the core of our constructions is a novel permutation-sharing technique, which can be used to permute arbitrarily many vectors by computing matrix-vector products. Once shared, applying a permutation becomes much cheaper, which results in a faster online phase. Letting each party share one secret uniform permutation in the offline phase and applying them sequentially in the online phase, we obtain our first shuffle protocol. To further optimize online complexity and simplify the trade-off, we adopt the shuffle correlation proposed by Gao et al. and obtain the second shuffle protocol with $O(nml)$ online communication and $O(n^2ml)$ online computation. This brings an additional benefit that the online complexity is now independent of offline complexity, which reduces parameter optimization to optimizing offline efficiency. Our constructions require only the most basic primitives of the Shamir secret sharing scheme, and work for an arbitrary field $\mathbb{F}$ of size larger than $n$. 
As shuffle is a frequent operation in algorithm design, we expect our protocols to accelerate many other primitives in the context of Shamir secret sharing MPC, such as sorting, oblivious data structures, etc.
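The permutation-sharing idea can be previewed in the clear (a minimal sketch of our own, with illustrative parameters and no actual secret sharing): once a permutation is written as a 0/1 matrix, applying it to any number of vectors reduces to matrix-vector products, which is exactly the operation that becomes cheap once the matrix is shared.

```python
import numpy as np

rng = np.random.default_rng(0)

m = 6
perm = rng.permutation(m)          # the secret permutation (here in the clear)

# Permutation matrix P: (P @ v)[i] = v[perm[i]], so P @ v reorders v.
P = np.zeros((m, m), dtype=int)
P[np.arange(m), perm] = 1          # row i selects entry perm[i]

# Once P is (secret-)shared, permuting arbitrarily many vectors costs
# one matrix-vector product each -- arithmetic the MPC handles natively.
v = rng.integers(0, 100, size=m)
w = P @ v
```

Composing such matrices sequentially, one per party, yields a shuffle that is uniform as long as at least one party's permutation is uniform and hidden.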
Last updated:  2025-04-06
Multiparty Shuffle: Linear Online Phase is Almost for Free
Jiacheng Gao, Yuan Zhang, and Sheng Zhong
Shuffle is a frequently used operation in secure multiparty computations, with applications including joint data analysis, anonymous communication systems, secure multiparty sorting, etc. Despite a series of ingenious works, the online (i.e. data-dependent) complexity of malicious secure $n$-party shuffle protocol remains $\Omega(n^2m)$ for shuffling a data array of length $m$. This potentially slows down applications and MPC primitives built upon MPC shuffle. In this paper, we study the online complexities of MPC shuffle protocols. We observe that most existing works follow a ``permute-in-turn'' paradigm, where the MPC shuffle protocol consists of $n$ sequential calls to a more basic MPC permutation protocol. We hence raise the following question: given only black-box access to an arbitrary MPC framework and permutation protocol, can we build an MPC shuffle whose online complexities are independent of the underlying permutation protocol? We answer this question affirmatively, offering a generic transformation from semi-honest/malicious MPC permutation protocols to MPC shuffle protocols with semi-honest/malicious security and only $O(nm)$ online communication and computation. The linear online phase is obtained almost for free via the transformation, in the sense that, in terms of overall complexities, the generated protocol equals the protocol generated by the naive permute-in-turn paradigm. Notably, instantiating our construction with additive/Shamir secret sharing and the corresponding optimal permutation protocol, we obtain the first malicious secure shuffle protocols with linear online complexities for additive/Shamir secret sharing, respectively. These results are to be compared with previous optimal online communication complexities of $O(Bn^2m)$ and $O(n^2m\log m)$ for malicious secure shuffle, for additive and Shamir secret sharing, respectively. 
We provide formal security proofs for both semi-honest and malicious secure transformations, showing that our malicious secure construction achieves universally composable security. Experimental results indicate that our construction significantly improves online performance while maintaining a moderate increase in offline overhead. Given that shuffle is a frequently used primitive in secure multiparty computation, we anticipate that our constructions will accelerate many real-world MPC applications.
Last updated:  2025-04-05
Efficient SNARKs for Boolean Circuits via Sumcheck over Tower Fields
Tianyi Liu and Yupeng Zhang
In this paper, we present efficient SNARKs for Boolean circuits, achieving significant improvements in the prover efficiency. The core of our technique is a novel tower sumcheck protocol and a tower zero-check protocol tailored for tower fields, which enable this efficiency boost. When instantiated with Wiedemann's binary tower fields with the base field of $GF(2)$ and the top-level field $GF(2^{2^\ell})$, assuming the quadratic complexity of multiplications \(O(2^{2\ell})\) in the top-level field with $2^\ell$ bits, the prover time of our sumcheck protocol is \(O(2^{1.5\ell}N)\). It is faster than the standard sumcheck protocol over the large field with the complexity of \(O(2^{2\ell}N)\). To achieve a reasonable security level, $2^\ell$ is usually set to $128$. Leveraging this advancement, we improve the efficiency of IOP protocols over the binary or small characteristic fields for Plonkish, CCS, and GKR-based constraint systems. Moreover, to further improve the prover efficiency of the SNARKs, we introduce a basis-switching mechanism that efficiently transforms polynomial evaluations on the base-field polynomial to evaluations on the tower-field polynomial. With the basis-switching, we are able to compile the binary-field IOPs to SNARKs using large-field polynomial commitment schemes (PCS) that batch the witness over the base field. The size of the large-field PCS is only $\frac{1}{2^\ell}$ of the size of the witness over the base field. Combining the IOP and the PCS, the overall prover time of our SNARKs for Boolean circuits is significantly faster than the naive approach of encoding Boolean values in a large field.
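For context on the base protocol being accelerated, here is a minimal sumcheck for a multilinear polynomial (an illustrative sketch of our own over Z_p with a stand-in prime, not the paper's tower-field variant): the prover's work per round is exactly the large-field arithmetic that the tower construction replaces with cheaper base-field operations.

```python
import random

p = 2**31 - 1   # Mersenne prime as a stand-in field (not a tower field)

def sumcheck(evals):
    """Prover+verifier for the sum of a multilinear polynomial over {0,1}^k,
    given its 2^k evaluation table. Returns the verified claimed sum."""
    k = len(evals).bit_length() - 1
    total = claim = sum(evals) % p
    table = list(evals)
    for _ in range(k):
        half = len(table) // 2
        g0, g1 = sum(table[:half]) % p, sum(table[half:]) % p  # round poly at 0, 1
        assert (g0 + g1) % p == claim          # verifier's round consistency check
        r = random.randrange(p)                # verifier's random challenge
        claim = ((1 - r) * g0 + r * g1) % p    # g(r) becomes the next claim
        # Fold the table: f(r, rest) = (1-r)*f(0, rest) + r*f(1, rest).
        table = [((1 - r) * a + r * b) % p
                 for a, b in zip(table[:half], table[half:])]
    assert table[0] == claim                   # final check at the random point
    return total

evals = [random.randrange(p) for _ in range(8)]   # k = 3 variables
assert sumcheck(evals) == sum(evals) % p
```

Since the polynomial is multilinear, each round polynomial has degree one and is determined by its values at 0 and 1, which keeps both the proof and the folding step small.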
Last updated:  2025-04-05
Quantum Analysis of AES
Kyungbae Jang, Anubhab Baksi, Hyunji Kim, Gyeongju Song, Hwajeong Seo, and Anupam Chattopadhyay
Quantum computing is considered one of the next big leaps in computational science. While a fully functional quantum computer is still in the future, there is an ever-growing need to evaluate the security of the symmetric key ciphers against a potent quantum adversary. Keeping this in mind, our work explores the key recovery attack using the Grover's search on the three variants of AES (-128, -192, -256). We develop a pool of 26 implementations per AES variant (thus totaling 78), by taking the state-of-the-art advancements in the relevant fields into account. In a nutshell, we present the least Toffoli depth and full depth implementations of AES, thereby improving upon Zou et al.'s Asiacrypt'20 paper by more than 97 percent for each variant of AES. We show that the qubit count - Toffoli depth product is reduced from theirs by more than 87 percent. Furthermore, we analyze Jaques et al.'s Eurocrypt'20 implementations in detail, fix the bugs (arising from a problem with the quantum computing tool used, not from their coding), and report corrected benchmarks. To the best of our knowledge, our work improves upon all the previous works (including the Asiacrypt'22 paper by Huang and Sun, the Asiacrypt'23 paper by Liu et al. and the Asiacrypt'24 paper by Shi and Feng) in terms of various quantum circuit complexity metrics (Toffoli depth, full depth, Toffoli/full depth - qubit count product, full depth - gate count product, etc.). Also, our bug-fixing of Jaques et al.'s Eurocrypt'20 implementations seems to improve upon the authors' own bug-fixing, thanks to our architecture consideration. Equipped with the basic AES implementations, we further investigate the prospect of the Grover's search. We propose four new implementations of the S-box, one new implementation of the MixColumn; as well as five new architectures (one is motivated by the architecture of Jaques et al. in Eurocrypt'20, and the remaining four are entirely our innovation). 
Under the MAXDEPTH constraint (specified by NIST), the circuit depth metrics (Toffoli depth, T-depth and full depth) become crucial factors and parallelization often becomes necessary. We provide the least depth implementation in this respect, offering the best performance in terms of circuit complexity metrics (such as the depth-squared - qubit count product and the depth - gate count product). Thus, to our knowledge, we estimate the currently best-known quantum attack complexities for AES-128 ($2^{156.2630}$), AES-192 ($2^{221.5801}$) and AES-256 ($2^{286.0731}$); these provide the NIST-specified quantum security levels 1, 3 and 5 respectively. Additionally, we achieve the least Toffoli depth - qubit count product for AES-128 ($121920$, improving from $130720$ by Shi and Feng in Asiacrypt'24), AES-192 ($161664$, improving from $188880$ by Liu et al. in Asiacrypt'23) and AES-256 ($206528$, improving from $248024$ by Liu et al. in Asiacrypt'23) so far.
Last updated:  2025-04-05
New Techniques for Preimage Sampling: Improved NIZKs and More from LWE
Brent Waters, Hoeteck Wee, and David J. Wu
Recent constructions of vector commitments and non-interactive zero-knowledge (NIZK) proofs from LWE implicitly solve the following shifted multi-preimage sampling problem: given matrices $\mathbf{A}_1, \ldots, \mathbf{A}_\ell \in \mathbb{Z}_q^{n \times m}$ and targets $\mathbf{t}_1, \ldots, \mathbf{t}_\ell \in \mathbb{Z}_q^n$, sample a shift $\mathbf{c} \in \mathbb{Z}_q^n$ and short preimages $\boldsymbol{\pi}_1, \ldots, \boldsymbol{\pi}_\ell \in \mathbb{Z}_q^m$ such that $\mathbf{A}_i \boldsymbol{\pi}_i = \mathbf{t}_i + \mathbf{c}$ for all $i \in [\ell]$. In this work, we introduce a new technique for sampling $\mathbf{A}_1, \ldots, \mathbf{A}_\ell$ together with a succinct public trapdoor for solving the multi-preimage sampling problem with respect to $\mathbf{A}_1, \ldots, \mathbf{A}_\ell$. This enables the following applications: * We provide a dual-mode instantiation of the hidden-bits model (and by correspondence, a dual-mode NIZK proof for $\mathsf{NP}$) with (1) a linear-size common reference string (CRS); (2) a transparent setup in hiding mode (which yields statistical NIZK arguments); and (3) hardness from LWE with a polynomial modulus-to-noise ratio. This improves upon the work of Waters (STOC 2024) which required a quadratic-size structured reference string (in both modes) and LWE with a super-polynomial modulus-to-noise ratio. * We give a statistically-hiding vector commitment with transparent setup and polylogarithmic-size CRS, commitments, and openings from SIS. This simultaneously improves upon the vector commitment schemes of de Castro and Peikert (EUROCRYPT 2023) as well as Wee and Wu (EUROCRYPT 2023). At a conceptual level, our work provides a unified view of recent lattice-based vector commitments and hidden-bits model NIZKs through the lens of the shifted multi-preimage sampling problem.
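To make the shifted multi-preimage relation concrete, the following toy sketch (our own, with illustrative parameters far below cryptographic size) exhibits and checks a satisfying tuple; it is not the succinct-trapdoor sampler constructed in the paper, since here the targets are derived from the witness rather than the other way around.

```python
import numpy as np

rng = np.random.default_rng(1)
q, n, m, ell = 97, 4, 8, 3        # toy parameters (hypothetical, not secure)

A = [rng.integers(0, q, size=(n, m)) for _ in range(ell)]   # matrices A_1..A_ell

# A valid witness: short preimages pi_i and a shift c satisfying
# A_i @ pi_i = t_i + c (mod q) for every i.
pi = [rng.integers(-1, 2, size=m) for _ in range(ell)]      # "short": entries in {-1,0,1}
c = rng.integers(0, q, size=n)

# Derive targets consistent with (c, pi) -- the trapdoor sampler in the paper
# instead goes the hard direction, from given targets to (c, pi).
t = [(A[i] @ pi[i] - c) % q for i in range(ell)]

for i in range(ell):
    assert ((A[i] @ pi[i]) % q == (t[i] + c) % q).all()
```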
Last updated:  2025-04-05
Cryptanalysis of a Type of White-Box Implementations of the SM4 Block Cipher
Jiqiang Lu, Jingyu Li, Zexuan Chen, and Yanan Li
The SM4 block cipher is a Chinese national standard and an ISO international standard. Since white-box cryptography has many real-life applications nowadays, a few white-box implementations of SM4 have been proposed, among which one type of construction dominates: it uses a linear or affine diagonal block encoding to protect the original three 32-bit branches entering a round function, and uses its inverse as the input encoding to the S-box layer. In this paper, we analyse the security of this type of construction against Lepoint et al.'s collision-based attack method. Our experiments under a small fraction of (encodings, round key) combinations show that the rank of the concerned linear system is much less than the number of the involved unknowns, meaning that these white-box SM4 implementations should resist Lepoint et al.'s method. However, we leave it as an open problem whether there exist encodings for which the rank of the corresponding linear system is only slightly less than the number of the involved unknowns; in that scenario, Lepoint et al.'s method may be used to recover a round key in the case of linear encodings, and to remove most white-box operations until mainly some Boolean masks remain in the case of affine encodings.
Last updated:  2025-04-04
On the Tight Security of the Double Ratchet
Daniel Collins, Doreen Riepel, and Si An Oliver Tran
The Signal Protocol is a two-party secure messaging protocol used in applications such as Signal, WhatsApp, Google Messages and Facebook Messenger and is used by billions daily. It consists of two core components, one of which is the Double Ratchet protocol that has been the subject of a line of work that aims to understand and formalise exactly what security it provides. Existing models capture strong guarantees including resilience to state exposure in both forward security (protecting past secrets) and post-compromise security (restoring security), adaptive state corruptions, message injections and out-of-order message delivery. Due to this complexity, prior work has failed to provide security guarantees that do not degrade in the number of interactions, even in the single-session setting. Given the ubiquity of the Double Ratchet in practice, we explore tight security bounds for the Double Ratchet in the multi-session setting. To this end, we revisit the modelling of Alwen, Coretti and Dodis (EUROCRYPT 2019) who decompose the protocol into modular, abstract components, notably continuous key agreement (CKA) and forward-secure AEAD (FS-AEAD). To enable a tight security proof, we propose a CKA security model that provides one-way security under key checking attacks. We show that multi-session security of the Double Ratchet can be tightly reduced to the multi-session security of CKA and FS-AEAD, capturing the same strong security guarantees as Alwen et al. Our result improves upon the bounds of Alwen et al. in the random oracle model. Even so, we are unable to provide a completely tight proof for the Double Ratchet based on standard Diffie-Hellman assumptions, and we conjecture it is not possible. We thus go a step further and analyse CKA based on key encapsulation mechanisms (KEMs). In contrast to previous works, our new analysis allows for tight constructions based on the DDH and post-quantum assumptions.
Last updated:  2025-04-04
Cloning Games, Black Holes and Cryptography
Alexander Poremba, Seyoon Ragavan, and Vinod Vaikuntanathan
Quantum no-cloning is one of the most fundamental properties of quantum information. In this work, we introduce a new toolkit for analyzing cloning games; these games capture more quantitative versions of no-cloning and are central to unclonable cryptography. Previous works rely on the framework laid out by Tomamichel, Fehr, Kaniewski and Wehner to analyze both the $n$-qubit BB84 game and the subspace coset game. Their constructions and analysis face the following inherent limitations: - The existing bounds on the values of these games are at least $2^{-0.25n}$; on the other hand, the trivial adversarial strategy wins with probability $2^{-n}$. Not only that, the BB84 game does in fact admit a highly nontrivial winning strategy. This raises the natural question: are there cloning games which admit no non-trivial winning strategies? - The existing constructions are not multi-copy secure; the BB84 game is not even $2 \mapsto 3$ secure, and the subspace coset game is not $t \mapsto t+1$ secure for a polynomially large $t$. Moreover, we provide evidence that the existing technical tools do not suffice to prove multi-copy security of even completely different constructions. This raises the natural question: can we design new cloning games that achieve multi-copy security, possibly by developing a new analytic toolkit? Inspired by the literature on pseudorandom states, we study a new cloning game based on binary phase states and show that it is $t$-copy secure when $t=o(n/\log n)$. Moreover, for constant $t$, we obtain the first asymptotically optimal bounds of $O(2^{-n})$. To accomplish this, we introduce a new analytic toolkit based on binary subtypes and combine this with novel bounds on the operator norms of block-wise tensor products of matrices. We also show a worst-case to average-case reduction for a large class of cloning games, which allows us to show the same quantitative results for Haar cloning games. 
These technical ingredients together enable two new applications which have previously been out of reach: - In the area of black-hole physics, our cloning games reveal that, in an idealized model of a black hole which features Haar random (or pseudorandom) scrambling dynamics, the information from infalling qubits can only be recovered from either the interior or the exterior of the black hole---but never from both places at the same time. To demonstrate this result, it turns out to be crucial to prove an asymptotically optimal bound and to overcome the first limitation above. - In the area of quantum cryptography, our worst-case to average-case reduction helps us construct succinct unclonable encryption schemes from the existence of pseudorandom unitaries, thereby, for the first time, bridging the gap between ``MicroCrypt'' and unclonable cryptography. We also propose and provide evidence for the security of multi-copy unclonable encryption schemes, which requires us to overcome the second limitation above.
Last updated:  2025-04-04
Private SCT Auditing, Revisited
Lena Heimberger, Christopher Patton, and Bas Westerbaan
In order for a client to securely connect to a server on the web, the client must trust certificate authorities (CAs) only to issue certificates to the legitimate operator of the server. If a certificate is misissued, it is possible for an attacker to impersonate the server to the client. The goal of Certificate Transparency (CT) is to log every certificate issued in a manner that allows anyone to audit the logs for misissuance. A client can even audit a CT log itself, but this would leak sensitive browsing data to the log operator. As a result, client-side audits are rare in practice. In this work, we revisit private CT auditing from a real-world perspective. Our study is motivated by recent changes to the CT ecosystem and advancements in Private Information Retrieval (PIR). First, we find that checking for inclusion of Signed Certificate Timestamps (SCTs) in a log — the audit performed by clients — is now possible with PIR in under a second and under 100 KB of communication with minor adjustments to the protocol that have been proposed previously. Our results also show how to scale audits by using existing batching techniques and the algebraic structure of the PIR protocols, in particular to obtain certificate hashes included in the log. Since PIR protocols are more performant with smaller databases, we also suggest a number of strategies to lower the size of the SCT database for audits. Our key observation is that the web will likely transition to a new model for certificate issuance. While this transition is primarily motivated by the need to adapt the PKI to larger, post-quantum signature schemes, it also removes the need for SCT audits in most cases. We present the first estimates of how this transition may impact SCT auditing, based on data gathered from public CT logs. We find that large scale deployment of the new issuance model may reduce the number of SCT audits needed by a factor of 1,000, making PIR-based auditing practical to deploy.
Last updated:  2025-04-04
A Methodology to Achieve Provable Side-Channel Security in Real-World Implementations
Sonia Belaïd, Gaëtan Cassiers, Camille Mutschler, Matthieu Rivain, Thomas Roche, François-Xavier Standaert, and Abdul Rahman Taleb
A wide range of countermeasures have been proposed to defend against side-channel attacks, with masking being one of the most effective and commonly used techniques. While theoretical models provide formal security proofs, these often rely on assumptions—sometimes implicit—that can be difficult to assess in practice. As a result, the design of secure masked implementations frequently combines proven theoretical arguments with heuristic and empirical validation. Despite the significant body of work, the literature still lacks a cohesive and well-defined framework for translating theoretical security guarantees into practical implementations on physical devices. Specifically, there remains a gap in connecting provable results from abstract models to quantitative security guarantees at the implementation level. In this Systematization of Knowledge (SoK), we aim to provide a comprehensive methodology to transform abstract cryptographic algorithms into physically secure implementations against side-channel attacks on microcontrollers. We introduce new tools to adapt the ideal noisy leakage model to practical, real-world scenarios, and we integrate state-of-the-art techniques to build secure implementations based on this model. Our work systematizes the design objectives necessary for achieving high security levels in embedded devices and identifies the remaining challenges in concretely applying security reductions. By bridging the gap between theory and practice, we seek to provide a foundation for future research that can develop implementations with proven security against side-channel attacks, based on well-understood leakage assumptions.
Last updated:  2025-04-04
Leveled Homomorphic Encryption Schemes for Homomorphic Encryption Standard
Shuhong Gao and Kyle Yates
Homomorphic encryption allows for computations on encrypted data without exposing the underlying plaintext, enabling secure and private data processing in various applications such as cloud computing and machine learning. This paper presents a comprehensive mathematical foundation for three prominent homomorphic encryption schemes: Brakerski-Gentry-Vaikuntanathan (BGV), Brakerski-Fan-Vercauteren (BFV), and Cheon-Kim-Kim-Song (CKKS), all based on the Ring Learning with Errors (RLWE) problem. We align our discussion with the functionalities proposed in the recent homomorphic encryption standard, providing detailed algorithms and correctness proofs for each scheme. Additionally, we propose improvements to the current schemes focusing on noise management and optimization of public key encryption and leveled homomorphic computation. Our modifications ensure that the noise bound remains within a fixed function for all levels of computation, guaranteeing correct decryption and maintaining efficiency comparable to existing methods. The proposed enhancements reduce ciphertext expansion and storage requirements, making these schemes more practical for real-world applications.
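As a scalar caricature of the noise budget being managed (our own sketch with toy parameters; the actual BGV/BFV/CKKS schemes work over polynomial rings, not integers): a plaintext is scaled by delta = q // t, noise grows with each homomorphic operation, and decryption remains correct only while the accumulated noise stays below delta / 2.

```python
import random

q, t = 2**20, 16                  # toy ciphertext and plaintext moduli
delta = q // t                    # plaintext scaling factor
s = random.randrange(q)           # secret key

def enc(m, noise=3):
    """Encrypt m in Z_t as an LWE-style pair (a, a*s + delta*m + e)."""
    a = random.randrange(q)
    e = random.randrange(-noise, noise + 1)
    return (a, (a * s + delta * m + e) % q)

def dec(ct):
    """Recover m by rounding away the noise; correct while |e| < delta/2."""
    a, b = ct
    return round(((b - a * s) % q) / delta) % t

def add(ct1, ct2):
    """Homomorphic addition: the noise terms add up too."""
    return ((ct1[0] + ct2[0]) % q, (ct1[1] + ct2[1]) % q)

c1, c2 = enc(3), enc(5)
assert dec(add(c1, c2)) == 8      # noise of at most 6 is well below delta/2
```

The noise-management results in the paper serve exactly this purpose at scale: keeping the analogous bound under control across all levels of leveled homomorphic computation.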
Last updated:  2025-04-04
SIGNITC: Supersingular Isogeny Graph Non-Interactive Timed Commitments
Knud Ahrens
Non-Interactive Timed Commitment schemes (NITC) allow opening any commitment after a specified delay $t_{\mathrm{fd}}$. This is useful for sealed-bid auctions and as a primitive for more complex protocols. We present the first NITC without repeated squaring or theoretical black-box algorithms like NIZK proofs or one-way functions. It has fast verification, an almost arbitrary delay, and satisfies IND-CCA hiding and perfect binding. Our protocol is based on isogenies between supersingular elliptic curves, making it presumably quantum secure, and all algorithms have been implemented as part of SQISign or other well-known isogeny-based cryptosystems. Additionally, it needs no trusted setup and can use known primes for SIKE or SQISign.
Last updated:  2025-04-04
Post-Quantum Threshold Ring Signature Applications from VOLE-in-the-Head
James Hsin-Yu Chiang, Ivan Damgård, William R. Duro, Sunniva Engan, Sebastian Kolby, and Peter Scholl
We propose efficient, post-quantum threshold ring signatures constructed from one-wayness of AES encryption and the VOLE-in-the-Head zero-knowledge proof system. Our scheme scales efficiently to large rings and extends the linkable ring signatures paradigm. We define and construct key-binding deterministic tags for signature linkability, that also enable succinct aggregation with approximate lower bound arguments of knowledge; this allows us to achieve succinct aggregation of our signatures without SNARKs. Finally, we extend our threshold ring signatures to realize post-quantum anonymous ledger transactions in the spirit of Monero. Our constructions assume symmetric key primitives only. Whilst it is common to build post-quantum signatures from the one-wayness property of AES and a post-quantum NIZK scheme, we extend this paradigm to define and construct novel security properties from AES that are useful for advanced signature applications. We introduce key-binding and pseudorandomness of AES to establish linkability and anonymity of our threshold ring signatures from deterministic tags, and similarly establish binding and hiding properties of block ciphers modeled as ideal permutations to build commitments from AES, a crucial building block for our proposed post-quantum anonymous ledger scheme.
Last updated:  2025-04-04
Pencil: A Domain-Extended PRF with Full $n$-bit Security for Strengthening GCM and More
Ritam Bhaumik and Jean Paul Degabriele
We consider the problem of constructing efficient pseudorandom functions with Beyond-Birthday-Bound (BBB) security from blockciphers. More specifically, we are interested in variable-output-length pseudorandom functions (PRF) whose domain is twice that of the underlying blockcipher. We present two such constructions, $\textsf{Pencil}$ and $\sharp\textsf{Pencil}$, which provide weak PRF and full PRF security, respectively, where both achieve full $n$-bit security. While several recent works have focused on constructing BBB PRFs from blockciphers, much less attention has been given to weak PRF constructions, which can potentially be constructed more efficiently and still serve as a useful primitive. Another understudied problem in this domain is that of extending the domain of a BBB PRF, which turns out to be rather challenging. Besides being of theoretical interest in itself, this is also a very practical problem. Often, the input to the BBB PRF is a nonce, but random nonces are much easier to handle in practice as they do not require maintaining state---which can be very cumbersome in distributed systems and encrypted cloud storage. Accordingly, in order to maintain a BBB security bound, one requires random nonces between $1.5n$ and $2n$ bits long and corresponding BBB (weak) PRF constructions that admit matching input sizes. NIST has recently announced a pre-draft call for comments to standardise AEAD schemes that can encrypt larger amounts of data and admit larger nonces. The call lists two approaches. The first is to define an analogue of GCM using a 256-bit blockcipher, and the second is based on a recent proposal by Gueron to extend GCM with a key derivation function (KDF) called DNDK to increase its security. In essence, DNDK is a BBB-secure expanding weak pseudorandom function with a domain size of 192 bits that is realised from AES. Our work makes relevant contributions to this domain in two important ways.
Firstly, an immediate consequence of our work is that one can construct a GCM analogue with BBB security from $\sharp\textsf{Pencil}$, without resorting to a 256-bit blockcipher. Our second contribution is that $\sharp\textsf{Pencil}$ can be used as a KDF in combination with GCM in an analogous manner to DNDK-GCM. However, since $\sharp\textsf{Pencil}$ is a full PRF, as opposed to DNDK, which is only a weak PRF, one can prove the KDF-GCM composition secure as an AEAD scheme. Finally, when contrasting $\textsf{Pencil}$ and DNDK as weak PRFs with comparable parameters, our construction requires only half the blockcipher calls.
Last updated:  2025-04-04
A Better Kyber Butterfly for FPGAs
Jonas Bertels, Quinten Norga, and Ingrid Verbauwhede
Kyber was selected by NIST as a Post-Quantum Cryptography Key Encapsulation Mechanism standard. This means that the industry now needs to transition to and adopt these new standards. One of the most demanding operations in Kyber is the modular arithmetic, making it a suitable target for optimization. This work offers a novel modular reduction design with the lowest area on Xilinx FPGA platforms. This novel design, through K-reduction and LUT-based reduction, utilizes 49 LUTs and 1 DSP, as opposed to Xing and Li's 2021 CHES design requiring 90 LUTs and 1 DSP for one modular multiplication. Our design is the smallest modular multiplier reported to date.
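The arithmetic identity behind K-reduction can be sketched in plain Python (a software illustration only, with function names of our choosing; the work's actual contribution is the LUT/DSP mapping of this step on FPGAs, which this sketch does not capture):

```python
Q = 3329  # Kyber modulus: 13 * 2**8 + 1
K = 13

def kred(c):
    """K-reduction: split c = c_hi * 2^8 + c_lo and return K*c_lo - c_hi.

    Since 13 * 2**8 == -1 (mod 3329), the result is congruent to 13*c
    (mod 3329) but has much smaller magnitude than a full product, so it
    can be finished off with a cheap final reduction step.
    """
    c_lo = c & 0xFF
    c_hi = c >> 8  # arithmetic shift, so this also works for negative c
    return K * c_lo - c_hi
```

Note that the output is congruent to $13c$, not $c$; in NTT-based implementations this constant factor is typically absorbed into precomputed constants such as the twiddle factors.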
Last updated:  2025-04-04
An Efficient and Secure Boolean Function Evaluation Protocol
Sushmita Sarkar, Vikas Srivastava, Tapaswini Mohanty, Nibedita Kundu, and Sumit Kumar Debnath
Boolean functions play an important role in designing and analyzing many cryptographic systems, such as block ciphers, stream ciphers, and hash functions, due to their unique cryptographic properties such as nonlinearity, correlation immunity, and algebraic properties. The secure evaluation of Boolean functions, or Secure Boolean Evaluation (SBE), is an important area of research. SBE allows parties to jointly compute Boolean functions without exposing their private inputs. SBE finds applications in privacy-preserving protocols and secure multi-party computations. In this manuscript, we present an efficient and generic two-party protocol (namely BooleanEval) for the secure evaluation of Boolean functions by utilizing a 1-out-of-2 Oblivious Transfer (OT) as a building block. BooleanEval only employs hash evaluations and XOR operations as its core computational steps, thus making it lightweight and fast. Unlike other lightweight state-of-the-art designs of SBE, BooleanEval avoids the use of additional cryptographic primitives, such as commitment schemes, to reduce the computational overhead.
Last updated:  2025-04-04
Engorgio: An Arbitrary-Precision Unbounded-Size Hybrid Encrypted Database via Quantized Fully Homomorphic Encryption
Song Bian, Haowen Pan, Jiaqi Hu, Zhou Zhang, Yunhao Fu, Jiafeng Hua, Yi Chen, Bo Zhang, Yier Jin, Jin Dong, and Zhenyu Guan
This work proposes an encrypted hybrid database framework that combines vectorized data search and relational data query over quantized fully homomorphic encryption (FHE). We observe that, due to the lack of efficient encrypted data ordering capabilities, most existing encrypted database (EDB) frameworks do not support hybrid queries involving both vectorized and relational data. To further enrich query expressiveness while retaining evaluation efficiency, we propose Engorgio, a hybrid EDB framework based on quantized data ordering techniques over FHE. Specifically, we design a new quantized data encoding scheme along with a set of novel comparison and permutation algorithms to accurately generate and apply orders between large-precision data items. Furthermore, we optimize specific query types, including full table scan, batched query, and Top-k query to enhance the practical performance of the proposed framework. In the experiment, we show that, compared to the state-of-the-art EDB frameworks, Engorgio is up to 28x--854x faster in homomorphic comparison, 65x--687x faster in homomorphic sorting and 15x--1,640x faster over a variety of end-to-end relational, vectorized, and hybrid SQL benchmarks. Using Engorgio, the amortized runtime for executing a relational and hybrid query on a 48-core processor is under 3 and 75 seconds, respectively, over a 10K-row hybrid database.
Last updated:  2025-04-04
A Quantum Approach For Reducing Communications in Classical Secure Computations with Long Outputs
Jiayu Zhang
How could quantum cryptography help us achieve what is not achievable in classical cryptography? In this work we study the classical cryptographic problem where two parties would like to perform secure computations with long outputs. As a basic primitive and example, we first consider the following problem, which we call secure function sampling with long outputs: suppose $f:\{0,1\}^n\rightarrow \{0,1\}^m$ is a public, efficient classical function, where $m$ is big; Alice would like to sample $x$ from its domain and send $f(x)$ to Bob; what Bob learns should be no more than $f(x)$, even if he behaves maliciously. Classical cryptography, like FHE and succinct arguments [Gen09,Kil92,HW15], allows us to achieve this task within communication complexity $O(n+m)$; could we achieve this task with communication complexity independent of $m$? In this work, we first design a quantum cryptographic protocol that achieves secure function sampling with approximate security, within $O(n)$ communication (omitting the dependency on the security parameter and error tolerance). We also prove the classical impossibility using techniques in [HW15], which means that our protocol indeed achieves a type of quantum advantage. Building on the secure function sampling protocol, we further construct protocols for general secure two-party computations [Yao86,GB01] with approximate security, with communication complexity depending only on the input length and the targeted security. In terms of the assumptions, we construct protocols for these problems assuming only the existence of collapsing hash functions [Unr16]; what's more, we also construct a classical-channel protocol for these problems additionally assuming the existence of noisy trapdoor claw-free functions [BCMVV,BKVV].
Last updated:  2025-04-04
$\mathsf{emGraph}$: Efficient Multiparty Secure Graph Computation
Siddharth Kapoor, Nishat Koti, Varsha Bhat Kukkala, Arpita Patra, and Bhavish Raj Gopal
Secure graph computation enables computing on graphs while hiding the graph topology as well as the associated node/edge data. This facilitates collaborative analysis among multiple data owners, who may only hold a private partial view of the global graph. Several works address this problem using the technique of secure multiparty computation (MPC) in the presence of 2 or 3 parties. However, when moving to the multiparty setting, as required for collaborative analysis among multiple data owners, these solutions are no longer scalable. This remains true with respect to the state-of-the-art framework of $\mathsf{Graphiti}$ (Koti et al., CCS 2024) as well. Specifically, $\mathsf{Graphiti}$ incurs a round complexity linear in the number of parties or data owners. This is due to its reliance on a secure shuffle protocol, constituting a bottleneck in the multiparty setting. Additionally, $\mathsf{Graphiti}$ has a prohibitively expensive initialisation phase due to its reliance on secure sort, with a round complexity dependent on both the graph size and the number of parties. We propose $\mathsf{emGraph}$, a generic framework for secure graph computation in the multiparty setting that eliminates the need for shuffle and instead relies on a weaker primitive known as $\mathsf{Permute+Share}$. Further, $\mathsf{emGraph}$ is designed to have a lightweight initialisation that eliminates the need for sorting, making its round complexity independent of the graph size and number of parties. Unlike any of the prior works, achieving a round complexity independent of the number of parties is what makes $\mathsf{emGraph}$ scalable. Finally, we implement and benchmark the performance of $\mathsf{emGraph}$ for the application of PageRank computation and showcase its efficiency and scalability improvements over $\mathsf{Graphiti}$. Concretely, we witness improvements of up to $80\times$ in runtime in comparison to the state-of-the-art framework $\mathsf{Graphiti}$.
Further, we observe that $\mathsf{emGraph}$ takes under a minute to perform 10 iterations of PageRank computation on a graph of size $10^6$ that is distributed among $25$ parties/data owners, making it highly practical for secure graph computation in the multiparty setting.
Last updated:  2025-04-04
The Black-Box Simulation Barrier Persists in a Fully Quantum World
Nai-Hui Chia, Kai-Min Chung, Xiao Liang, and Jiahui Liu
Zero-Knowledge (ZK) protocols have been a subject of intensive study due to their fundamental importance and versatility in modern cryptography. However, the inherently different nature of quantum information significantly alters the landscape, necessitating a re-examination of ZK designs. A crucial aspect of ZK protocols is their round complexity, intricately linked to $\textit{simulation}$, which forms the foundation of their formal definition and security proofs. In the $\textit{post-quantum}$ setting, where honest parties and their communication channels are all classical but the adversaries could be quantum, Chia, Chung, Liu, and Yamakawa [FOCS'21] demonstrated the non-existence of constant-round $\textit{black-box-simulatable}$ ZK arguments (BBZK) for $\mathbf{NP}$ unless $\mathbf{NP} \subseteq \mathbf{BQP}$. However, this problem remains widely open in the full-fledged quantum future that will eventually arrive, where all parties (including the honest ones) and their communication are naturally quantum. Indeed, this problem is of interest to the broader theory of quantum computing. It has been an important theme to investigate how quantum power fundamentally alters traditional computational tasks, such as the $\textit{unconditional}$ security of Quantum Key Distribution and the incorporation of Oblivious Transfers in MiniQCrypt. Moreover, quantum communication has led to round compression for commitments and interactive arguments. Along this line, the above problem is of great significance in understanding whether quantum computing could also change the nature of ZK protocols in some fundamental manner. We resolve this problem by proving that only languages in $\mathbf{BQP}$ admit constant-round $\textit{fully-quantum}$ BBZK. This result holds significant implications. Firstly, it illuminates the nature of quantum zero-knowledge and provides valuable insights for designing future protocols in the quantum realm.
Secondly, it relates ZK round complexity with the intriguing problem of $\mathbf{BQP}$ vs $\mathbf{QMA}$, which is out of the reach of previous analogue impossibility results in the classical or post-quantum setting. Lastly, it justifies the need for the $\textit{non-black-box}$ simulation techniques or the relaxed security notions employed in existing constant-round fully-quantum BBZK protocols.
Last updated:  2025-04-03
Proving CPU Executions in Small Space
Vineet Nair, Justin Thaler, and Michael Zhu
zkVMs are SNARKs for verifying CPU execution. They allow an untrusted prover to show that it correctly ran a specified program on a witness, where the program is given as bytecode conforming to an instruction set architecture like RISC-V. Existing zkVMs still struggle with high prover resource costs, notably large runtime and memory usage. We show how to implement Jolt—an advanced, sum-check-based zkVM—with a significantly reduced memory footprint, without relying on SNARK recursion, and with only modest runtime overhead (potentially well below a factor of two). We discuss benefits of this approach compared to prevailing recursive techniques.
Last updated:  2025-04-03
Silent Circuit Relinearisation: Sublinear-Size (Boolean and Arithmetic) Garbled Circuits from DCR
Pierre Meyer, Claudio Orlandi, Lawrence Roy, and Peter Scholl
We introduce a general template for building garbled circuits with low communication, under the decisional composite residuosity (DCR) assumption. For the case of layered Boolean circuits, we can garble a circuit of size $s$ with communication proportional to $O(s/\log\log s)$ bits, plus an additive factor that is polynomial in the security parameter. For layered arithmetic circuits with $B$-bounded integer computation, we obtain a similar result: the garbled arithmetic circuit has size $O(s/\log\log s) \cdot (\lambda + \log B)$ bits, where $\lambda$ is the security parameter. These are the first constructions of general-purpose, garbled circuits with sublinear size, without relying on heavy tools like indistinguishability obfuscation or attribute-based and fully homomorphic encryption. To achieve these results, our main technical tool is a new construction of a form of homomorphic secret sharing where some of the inputs are semi-private, that is, known to one of the evaluating parties. Through a new relinearisation technique that allows performing arbitrary additions and multiplications on semi-private shares, we build such an HSS scheme that supports evaluating any function of the form $C(x) \cdot C'(y)$, where $C$ is any polynomially-sized circuit applied to the semi-private input $y$, and $C'$ is a restricted-multiplication (or, NC1) circuit applied to the private input $x$. This significantly broadens the expressiveness of prior known HSS constructions.
Last updated:  2025-04-03
Clubcards for the WebPKI: smaller certificate revocation tests in theory and practice
John M. Schanck
CRLite is a low-bandwidth, low-latency, privacy-preserving mechanism for distributing certificate revocation data. A CRLite aggregator periodically encodes revocation data into a compact static hash set, or membership test, which can be downloaded by clients and queried privately. We present a novel data structure for membership tests, which we call a clubcard, and we evaluate the encoding efficiency of clubcards using data from Mozilla's CRLite infrastructure. As of November 2024, the WebPKI contains over 900 million valid certificates and over 8 million revoked certificates. We describe an instantiation of CRLite that encodes the revocation status of these certificates in a 6.7 MB package. This is $54\%$ smaller than the original instantiation of CRLite presented at the 2017 IEEE Symposium on Security and Privacy, and it is $21\%$ smaller than the lower bound claimed in that work. A sequence of clubcards can encode a dynamic dataset like the WebPKI revocation set. Again using data from late 2024, we find that clubcards encoding 6 hour delta updates to the WebPKI can be compressed to 26.8 kB on average---a size that makes CRLite truly practical. We have extended Mozilla's CRLite infrastructure so that it can generate clubcards, and we have added client-side support for this system to Firefox. We report on some performance aspects of our implementation, which is currently the default revocation checking mechanism in Firefox Nightly, and we propose strategies for further reducing the bandwidth requirements of CRLite.
Last updated:  2025-04-03
Random Oracle Combiners: Merkle-Damgård Style
Yevgeniy Dodis, Eli Goldin, and Peter Hall
A Random Oracle Combiner (ROC), introduced by Dodis et al. (CRYPTO ’22), takes two hash functions $h_1, h_2$ from $m$ bits to $n$ bits and outputs a new hash function $C$ from $m'$ to $n'$ bits. This function $C$ is guaranteed to be indifferentiable from a fresh random oracle as long as one of $h_1$ and $h_2$ (say, $h_1$) is a random oracle, while the other $h_2$ can “arbitrarily depend” on $h_1$. The work of Dodis et al. also built the first length-preserving ROC, where $n' = n$. Unfortunately, despite this feasibility result, this construction has several deficiencies. From the practical perspective, it could not be directly applied to existing Merkle-Damgård-based hash functions, such as SHA2 or SHA3. From the theoretical perspective, it required $h_1$ and $h_2$ to have input length $m > 3\lambda$, where $\lambda$ is the security parameter. To overcome these limitations, Dodis et al. conjectured — and left as the main open question — that the following (salted) construction is a length-preserving ROC: $C^{h_1,h_2}_{\mathcal{Z}_1,\mathcal{Z}_2}(M) = h_1^*(M, \mathcal{Z}_1) \oplus h_2^*(M,\mathcal{Z}_2),$ where $\mathcal{Z}_1, \mathcal{Z}_2$ are random salts of appropriate length, and $f^*$ denotes the Merkle-Damgård extension of a given compression function $f$. As our main result, we resolve this conjecture in the affirmative. For practical use, this makes the resulting combiner applicable to existing, Merkle-Damgård-based hash functions. On the theory side, it shows the existence of ROCs only requiring optimal input length $m = \lambda + O(1)$.
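The conjectured construction is simple enough to sketch in Python. The toy below uses SHA-256-derived stand-ins for the two compression functions (an assumption for illustration only; `md_star`, `f1`, and `f2` are hypothetical names, and a real instantiation would use the compression functions of actual Merkle-Damgård hashes such as SHA2):

```python
import hashlib

def md_star(f, blocks, iv=b"\x00" * 32):
    """Merkle-Damgard extension f*: fold the compression function over the blocks."""
    h = iv
    for block in blocks:
        h = f(h, block)
    return h

# Toy stand-ins for two independent 32-byte compression functions.
def f1(h, block):
    return hashlib.sha256(b"\x01" + h + block).digest()

def f2(h, block):
    return hashlib.sha256(b"\x02" + h + block).digest()

def combiner(msg_blocks, Z1, Z2):
    """C(M) = f1*(M, Z1) XOR f2*(M, Z2): each salt is absorbed as a final block."""
    a = md_star(f1, msg_blocks + [Z1])
    b = md_star(f2, msg_blocks + [Z2])
    return bytes(x ^ y for x, y in zip(a, b))
```

The salts $\mathcal{Z}_1, \mathcal{Z}_2$ are fixed once at setup; note how the combiner output keeps the $n$-bit length of the underlying chaining value, matching the length-preserving claim.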
Last updated:  2025-04-03
On some non-linear recurrences over finite fields linked to isogeny graphs
Juan Jesús León and Vicente Muñoz
This paper presents new results that establish connections between isogeny graphs and nonlinear recurrences over finite fields. Specifically, we prove several theorems that link these two areas, offering deeper insights into the structure of isogeny graphs and their relationship with nonlinear recurrence sequences. We further provide two related conjectures which may be worthy of further research. These findings contribute to a better understanding of the endomorphism ring of a curve, advancing progress toward the resolution of the Endomorphism Ring Problem, which aims to provide a computational characterization of the endomorphism ring of a supersingular elliptic curve.
Last updated:  2025-04-03
Analytic and Simulation Results of a Gaussian Physically Unclonable Constant Based on Resistance Dispersion
Riccardo Bernardini
Physically Unclonable Constants (PUCs) are a special type of Physically Unclonable Function (PUF), and they can be used to embed secret bit-strings in chips. Most PUCs are an array of cells where each cell is a digital circuit that evolves spontaneously toward one of two states, the chosen state being a function of random manufacturing process variations. In this paper we propose an Analog Physically Unclonable Constant (APUC) whose output is an analog value to be transformed into digital form by a digitizer circuit. The rationale behind this proposal is that an APUC cell has the potential of providing more than one bit, reducing the required footprint. Preliminary theoretical analysis and simulation results are presented. The proposed APUC has interesting performance (e.g., it can provide up to 5 bits per cell) that warrants further investigation.
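The idea of extracting several bits from one analog cell can be simulated in a few lines of Python (all parameters here, the nominal resistance, spread, and quantization window, are illustrative choices of ours, not values from the paper):

```python
import random

def apuc_cell_bits(mu=1000.0, sigma=50.0, n_bits=5, span=4.0):
    """Simulate one APUC cell: a Gaussian-distributed resistance, digitized.

    The resistance r is fixed once by process variation (modeled here as a
    single Gaussian draw). The digitizer uniformly quantizes a +/- span*sigma
    window around the nominal value into 2**n_bits levels.
    """
    r = random.gauss(mu, sigma)
    lo, hi = mu - span * sigma, mu + span * sigma
    r = min(max(r, lo), hi)  # clamp rare outliers into the window
    level = int((r - lo) / (hi - lo) * (2 ** n_bits - 1) + 0.5)
    return format(level, f"0{n_bits}b")  # n_bits-character bit-string
```

One draw yields an `n_bits`-long bit-string per cell, versus one bit for a classical two-state digital cell, which is the footprint saving the abstract refers to.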
Last updated:  2025-04-03
TockOwl: Asynchronous Consensus with Fault and Network Adaptability
Minghang Li, Qianhong Wu, Zhipeng Wang, Bo Qin, Bohang Wei, Hang Ruan, Shihong Xiong, and Zhenyang Ding
BFT protocols usually suffer a waterfall-like degradation in performance in the face of crash faults. Those protocols that avoid sudden performance degradation under crash faults achieve this at the expense of increased communication and round complexity in fault-free scenarios. In a nutshell, existing protocols lack the adaptability needed to perform optimally under varying conditions. We propose TockOwl, the first asynchronous consensus protocol with fault adaptability. TockOwl features quadratic communication and constant round complexity, allowing it to remain efficient in fault-free scenarios. TockOwl also possesses crash robustness, enabling it to maintain stable performance when facing crash faults. These properties collectively ensure the fault adaptability of TockOwl. Furthermore, we propose TockOwl+, which has network adaptability. TockOwl+ incorporates both fast and slow tracks and employs hedging delays, allowing it to achieve low latency comparable to partially synchronous protocols without waiting for timeouts in asynchronous environments. Compared to the latest dual-track protocols, the slow track of TockOwl+ is simpler, implying shorter latency in fully asynchronous environments.
Last updated:  2025-04-03
An attack on ML-DSA using an implicit hint
Paco Azevedo-Oliveira, Jordan Beraud, and Louis Goubin
The security of ML-DSA, like most signature schemes, is partially based on the fact that the nonce used to generate the signature is unknown to any attacker. In this work, we exhibit a lattice-based attack that is possible if the nonces share implicit or explicit information. From a collection of signatures whose nonces share certain coefficients, it is indeed possible to build a collection of non-full-rank lattices. Intersecting them, we show how to create a low-rank lattice that contains one of the polynomials of the secret key, which in turn can be recovered using lattice reduction techniques. There are several interpretations of this result: firstly, it can be seen as a generalization of a fault-based attack on BLISS presented at SAC'16 by Thomas Espitau et al. Alternatively, it can be understood as a side-channel attack on ML-DSA, in the case where an attacker is able to recover only one of the coefficients of the nonce used during the generation of the signature. For ML-DSA-II, we show that $4 \times 160$ signatures and a few hours of computation are sufficient to recover the secret key on a desktop computer. Lastly, our result shows that simple countermeasures, such as permuting the generation of the nonce coefficients, are not sufficient.
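Why nonce secrecy is critical can be seen from a toy Schnorr-style identity (tiny parameters chosen purely for illustration; the attack in the abstract is a lattice attack exploiting only partial nonce information, which this sketch does not reproduce):

```python
# Toy discrete-log-style setting, illustration only.
q = 101                 # group order (a small prime)
x = 57                  # secret key
k = 33                  # nonce -- must remain secret
c = 91                  # challenge (a hash output in a real scheme)
s = (k + c * x) % q     # signature response, analogous to z = y + c*s in ML-DSA

# If an attacker learns the nonce k, the key follows from one modular division:
x_recovered = ((s - k) * pow(c, -1, q)) % q
```

A fully known nonce gives the key immediately; the paper's point is that even partial, implicitly shared nonce information across many signatures already suffices once lattice reduction is brought to bear.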
Last updated:  2025-04-03
Honorific Security: Efficient Two-Party Computation with Offloaded Arbitration and Public Verifiability
Tianxiang Dai, Yufan Jiang, Yong Li, Jörn Müller-Quade, and Andy Rupp
Secure two-party computation (2PC) allows two distrustful parties to jointly compute some function while keeping their private secrets unrevealed. Adversaries are often categorized as semi-honest or malicious, depending on whether they follow the protocol specifications or not. While a semi-honestly secure protocol is fast, it strongly assumes that all participants follow the protocol; a maliciously secure protocol often requires heavy verification steps and a preprocessing phase to prohibit any cheating. Covert security, first introduced by Aumann and Lindell [10], looks into the "middle ground" between semi-honest security and malicious security, such that active adversaries who cheat are caught with a predefined probability. However, it remains an open question how to properly determine such a probability before protocol execution, and in current constructions misbehavior must be detected by the other honest participants via cut-and-choose. To achieve public verifiability and simultaneously outsource the verification steps, [12] presented publicly auditable security, enabling an external auditor to verify the correctness of the result. Essentially, it additionally assumes a bulletin-board functionality that keeps track of the broadcast messages for the auditor. Moreover, the auditor cannot identify the cheater, but only points out that the result is incorrect. The (robust) accountability family [40, 62, 76] achieves both guaranteed output delivery and public verifiability, but relies on heavy offline and online constructions with zero-knowledge proofs. In this paper, we propose a new security notion called honorific security, where an external arbiter can find the cheater with overwhelming probability under malicious corruption. With honorific security, we do not prevent a corrupted party from cheating during the online stage, but enable the honest party to detect and publicly punish the cheater later on.
We show that a maliciously secure garbled circuit (GC) [83] protocol can thus be constructed with only slightly more overhead than a passively secure protocol. Our construction performs up to 2.37 times and 13.30 times as fast as the state-of-the-art protocols with covert and malicious security, respectively.
Last updated:  2025-04-03
Laconic Cryptography with Preprocessing
Rishabh Bhadauria, Nico Döttling, Carmit Hazay, and Chuanwei Lin
Laconic cryptography focuses on designing two-message protocols that allow secure computation on large datasets while minimizing communication costs. While laconic cryptography protocols achieve asymptotically optimal communication complexity for many tasks, their concrete efficiency is prohibitively expensive due to the heavy use of public-key techniques or the non-black-box use of cryptographic primitives. In this work, we initiate the study of "laconic cryptography with preprocessing", introducing a model that includes an offline phase to generate database-dependent correlations, which are then used in a lightweight online phase. These correlations are conceptually simple, relying on linear-algebraic techniques. This enables us to develop a protocol for private laconic vector oblivious linear evaluation (plvOLE). In such a protocol, the receiver holds a large database $\mathsf{DB}$, and the sender has two messages $v$ and $w$, along with an index $i$. The receiver learns the value $v \cdot \mathsf{DB}_i + w$ without revealing other information. Our protocol, which draws from ideas developed in the context of private information retrieval with preprocessing, serves as the backbone for two applications of interest: laconic private set intersection (lPSI) for large universes and laconic function evaluation for RAM-programs (RAM-LFE). Based on our plvOLE protocol, we provide efficient instantiations of these two primitives in the preprocessing model.
Last updated:  2025-04-03
A Method for Securely Comparing Integers using Binary Trees
Anselme Tueno, Jonas Janneck, and David Boehm
In this paper, we propose a new protocol for secure integer comparison between two parties, each holding a private integer. The goal of the computation is to compare both integers securely and reveal to the parties a single bit telling which integer is larger. Nothing more should be revealed. To achieve a low communication overhead, this can be done using homomorphic encryption (HE). Our protocol relies on binary decision trees, a special case of branching programs, and can be implemented using HE. We assume a client-server setting where each party holds one of the integers; the client also holds the private key of a homomorphic encryption scheme, and the evaluation is done by the server. In this setting, our protocol outperforms the original DGK protocol of Damgård et al. and reduces the running time by at least 45%. In the case where both inputs are encrypted, our scheme reduces the running time of a variant of DGK by 63%.
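The comparison logic that such a bit-level decision tree encodes can be sketched in the clear (a hypothetical helper assuming equal-length, MSB-first bit lists; in the protocol itself the server evaluates this tree over homomorphically encrypted bits rather than on plaintext):

```python
def greater_than(x_bits, y_bits):
    """Return True iff x > y, for equal-length MSB-first bit lists.

    This is the plaintext logic of a bit-comparison decision tree:
    walk from the most significant bit; the first differing bit decides
    the outcome, and equal inputs fall through to False.
    """
    for xb, yb in zip(x_bits, y_bits):
        if xb != yb:
            return xb > yb
    return False  # x == y
```

Each internal node of the tree branches on one bit position, so the tree has depth equal to the bit length of the inputs, which is what makes a homomorphic evaluation of it tractable.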
Last updated:  2025-04-03
Deny Whatever You Want: Dual-Deniable Public-Key Encryption
Zhiyuan An and Fangguo Zhang
We introduce an enhanced requirement of deniable public key encryption that we call dual-deniability. It asks that a sender who is coerced should be able to produce fake randomness, which can explain the target ciphertext as the encryption of any alternative message under any valid key she/he desires to deny. Compared with the original notion of deniability (Canetti et al. in CRYPTO ’97, hereafter named message-deniability), this notion further provides a shield for the anonymity of the receiver against coercion attacks. We first give a formal definition of dual-deniability, along with its weak-mode variant. For conceptual understanding, we then show dual-deniability implies semantic security and anonymity against CPA, separates full robustness, and even contradicts key-less or mixed robustness, while being (constructively) implied by key-deniability and full robustness with a minor assumption for bits encryption. As for the availability of dual-deniability, our main scheme is a generic approach from ciphertext-simulatable PKE, where we devise a subtle multi-encryption schema to hide the true message within random masking ciphertexts in the plan-ahead setting. Further, we leverage the weak model to present a more efficient scheme having negligible detection probability and constant ciphertext size. Besides, we revisit the notable scheme (Sahai and Waters in STOC ’14) and show it is inherently dual-deniable. Finally, we extend the Boneh-Katz transform to capture CCA security, deriving dual-deniable and CCA-secure PKE from any selectively dual-deniable IBE in the multi-TA setting. Overall, our work establishes the feasibility of anonymous messaging against coercion attacks.
Last updated:  2025-04-03
Pseudorandom Multi-Input Functional Encryption and Applications
Shweta Agrawal, Simran Kumari, and Shota Yamada
We construct the first multi-input functional encryption (MIFE) and indistinguishability obfuscation (iO) schemes for pseudorandom functionalities, where the output of the functionality is pseudorandom for every input seen by the adversary. Our MIFE scheme relies on LWE and evasive LWE (Wee, Eurocrypt 2022 and Tsabary, Crypto 2022) for constant arity functions, and a strengthening of evasive LWE for polynomial arity. Thus, we obtain the first MIFE and iO schemes for a nontrivial functionality from conjectured post-quantum assumptions. Along the way, we identify subtle issues in the proof of witness encryption from evasive LWE by prior work and believe that a similar strengthening of evasive LWE should also be required for their proof, for the same reasons as ours. We demonstrate the power of our new tools via the following applications: 1. Multi Input Predicate Encryption for Constant Arity. Assuming evasive LWE and LWE, we construct a multi-input predicate encryption scheme (MIPE) for P, supporting constant arity. The only prior work to support MIPE for P with constant arity by Agrawal et al. (Crypto, 2023) relies on a strengthening of Tensor LWE in addition to LWE and evasive LWE. 2. Multi Input Predicate Encryption for Polynomial Arity. Assuming a stronger variant of evasive LWE and LWE, we construct MIPE for P for polynomial arity. MIPE for polynomial arity supporting P was not known before, to the best of our knowledge. 3. Two Party ID Based Key Exchange. Assuming a stronger variant of evasive LWE and LWE, along with Decision Bilinear Diffie-Hellman, we provide the first two-party ID based Non-Interactive Key Exchange (ID-NIKE) scheme in the standard model. This leads to the first ID-NIKE in the standard model without using multilinear maps or indistinguishability obfuscation. 4. Instantiating the Random Oracle. 
We use our pseudorandom iO to instantiate the random oracle in several applications that previously used iO (Hohenberger, Sahai and Waters, Eurocrypt 2014) such as full-domain hash signature based on trapdoor permutations and more. Our tools of MIFE and iO for pseudorandom functionalities appear quite powerful and yield extremely simple constructions when used in applications. We believe they provide a new pathway for basing “extreme” cryptography, which has so far required full fledged iO, on the presumably weaker evasive LWE in the post quantum regime.
Last updated:  2025-04-03
Compact Pseudorandom Functional Encryption from Evasive LWE
Shweta Agrawal, Simran Kumari, and Shota Yamada
We provide the first construction of compact Functional Encryption (FE) for pseudorandom functionalities from the evasive LWE and LWE assumptions. Intuitively, a pseudorandom functionality means that the output of the circuit is indistinguishable from uniform for every input seen by the adversary. This yields the first compact FE for a nontrivial class of functions which does not rely on pairings. We demonstrate the power of our new tool by using it to achieve optimal parameters for both key-policy and ciphertext-policy Attribute Based Encryption (ABE) schemes for circuits of unbounded depth, from just the LWE and evasive LWE assumptions. This improves prior work along the twin axes of assumptions and performance. In more detail, this allows us to: (i) replace the assumption of circular evasive LWE used in the work of Hsieh, Lin and Luo (FOCS 2023) by plain evasive LWE, (ii) remove the need for the circular tensor LWE assumption in the work of Agrawal, Kumari and Yamada (CRYPTO, 2024), (iii) improve parameters obtained by both aforementioned works to achieve asymptotic optimality. Previously, optimal parameters for ABE schemes were only achieved using compact FE for P (Jain, Lin and Luo, Eurocrypt 2023) – we show that compact FE for a much weaker class (albeit with incomparable security) suffices. Thus we obtain the first optimal ABE schemes for unbounded depth circuits which can be conjectured post-quantum secure. Along the way, we define and construct a new primitive which we term laconic pseudorandom obfuscation from the same assumptions – this may be of independent interest.
Last updated:  2025-04-02
PAC-Private Algorithms
Mayuri Sridhar, Hanshen Xiao, and Srinivas Devadas
Provable privacy typically requires involved analysis and is often associated with unacceptable accuracy loss. While many empirical verification or approximation methods, such as Membership Inference Attacks (MIA) and Differential Privacy Auditing (DPA), have been proposed, these do not offer rigorous privacy guarantees. In this paper, we apply recently-proposed Probably Approximately Correct (PAC) Privacy to give formal, mechanized, simulation-based proofs for a range of practical, black-box algorithms: K-Means, Support Vector Machines (SVM), Principal Component Analysis (PCA) and Random Forests. To provide these proofs, we present a new simulation algorithm that efficiently determines anisotropic noise perturbation required for any given level of privacy. We provide a proof of correctness for this algorithm and demonstrate that anisotropic noise has substantive benefits over isotropic noise. Stable algorithms are easier to privatize, and we demonstrate privacy amplification resulting from introducing regularization in these algorithms; meaningful privacy guarantees are obtained with small losses in accuracy. We propose new techniques in order to reduce instability in algorithmic output and convert intractable geometric stability verification into efficient deterministic stability verification. Thorough experiments are included, and we validate our provable adversarial inference hardness against state-of-the-art empirical attacks.
Last updated:  2025-04-02
On the success rate of simple side-channel attacks against masking with unlimited attack traces
Aymeric Hiltenbrand, Julien Eynard, and Romain Poussier
Side-channel attacks following a classical differential power analysis (DPA) style are well understood, along with the effect the masking countermeasure has on them. However, simple attacks (SPA), where the target variable does not vary with a known value such as the plaintext, are less studied. In this paper, we investigate how the masking countermeasure affects the success rate of simple attacks. To this end, we provide theoretical, simulated, and practical experiments. Interestingly, we will see that masking can allow us to asymptotically recover more information on the secret than in the case of an unprotected implementation, depending on the masking type. We will see that this is true for masking encodings that add non-linearity with respect to the leakages, such as arithmetic masking, while it is not for Boolean masking. We believe this context provides interesting results, as the arithmetic encoding is provably less informative on average than the Boolean one.
Last updated:  2025-04-02
Mobile Byzantine Agreement in a Trusted World
Bo Pan and Maria Potop Butucaru
In this paper, we address the Byzantine Agreement problem in synchronous systems where Byzantine agents can move from process to process, corrupting their host. We focus on three representative models: \emph{Garay's}, \emph{Bonnet's} and \emph{Buhrman's} models. In \emph{Garay's model}, when a process has been left by the Byzantine agent, it is in the \emph{cured} state, it is aware of its condition, and thus can remain silent for a round to prevent the dissemination of wrong information. In \emph{Bonnet's model}, a cured process may send messages (based on a state corrupted by the malicious agent), but it will behave correctly in the way it sends those messages: i.e., it sends messages according to the algorithm. In \emph{Buhrman's model}, Byzantine agents move together with the message. It has been shown that in order to solve Byzantine Agreement in \emph{Garay's model} at least $4t+1$ processors are needed, for \emph{Bonnet's model} at least $5t+1$ processors are needed, while for \emph{Buhrman's model} at least $3t+1$ processors are needed. In this paper we aim to increase the tolerance to mobile Byzantines by integrating a trusted counter abstraction into the above models. This abstraction prevents nodes from equivocating. In the new models we prove that at least $3t+1$, respectively $4t+1$, and $2t+1$ processors are needed to tolerate $t$ mobile Byzantine agents. Furthermore, we propose novel Mobile Byzantine Agreement algorithms that match these new lower bounds for \emph{Garay's}, \emph{Bonnet's} and \emph{Buhrman's} models.
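The thresholds stated above can be summarized in a small lookup table (a restatement of the abstract's bounds; the function and key names are ours).

```python
# Minimum number of processors needed to tolerate t mobile Byzantine
# agents, per model, without and with the trusted counter abstraction,
# as stated in the abstract.
BOUNDS = {
    "garay":   {"plain": lambda t: 4*t + 1, "trusted": lambda t: 3*t + 1},
    "bonnet":  {"plain": lambda t: 5*t + 1, "trusted": lambda t: 4*t + 1},
    "buhrman": {"plain": lambda t: 3*t + 1, "trusted": lambda t: 2*t + 1},
}

def min_processors(model: str, t: int, trusted: bool = False) -> int:
    """Lower bound on the number of processors for the given model."""
    return BOUNDS[model]["trusted" if trusted else "plain"](t)
```

In every model the trusted counter abstraction lowers the requirement by exactly $t$ processors.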
Last updated:  2025-04-02
Lattice-Based Sanitizable Signature Schemes: Chameleon Hash Functions and More
Sebastian Clermont, Samed Düzlü, Christian Janson, Laurens Porzenheim, and Patrick Struck
Sanitizable Signature Schemes (SSS) enable a designated party, the sanitizer, to modify predefined parts of a signed message without invalidating the signature, making them useful for applications like pseudonymization and redaction. Since their introduction by Ateniese et al. (ESORICS'05), several classical SSS constructions have been proposed, but none have been instantiated from quantum-resistant assumptions. In this work, we develop the first quantum-secure sanitizable signature schemes based on lattice assumptions. Our primary focus is on SSS constructions that rely on chameleon hash functions (CHFs), a key component for enabling the controlled modification of messages. While lattice-based CHFs exist, they do not meet the required security guarantees for SSS, becoming insecure under adversarial access to an adapt oracle. To address this, we construct a novel lattice-based CHF that achieves collision resistance even in such settings, called full collision resistance. However, our CHF lacks the uniqueness property, a limitation we show to be inherent in lattice-based CHFs. As a result, our SSS constructions initially fall short of achieving the critical security property of accountability. To overcome this, we apply a transformation based on verifiable ring signatures (VRS), for which we present the first lattice-based instantiation. Additionally, we provide a comprehensive analysis of existing classical SSS constructions, explore their potential for post-quantum instantiations, and present new attacks on previously assumed secure SSS schemes. Our work closes the gap in constructing quantum-secure SSS and lays the groundwork for further research into advanced cryptographic primitives based on lattice assumptions.
Last updated:  2025-04-02
Voting with coercion resistance and everlasting privacy using linkable ring signatures
Panagiotis Grontas, Aris Pagourtzis, and Marianna Spyrakou
We propose an e-voting protocol based on a novel linkable ring signature scheme with unconditional anonymity. In our system, all voters create private credentials and register their public counterparts. To vote, they create a ring (anonymity set) consisting of public credentials together with a proof of knowledge of their secret credential via our signature. Its unconditional anonymity prevents an attacker, no matter how powerful, from deducing the identity of the voter, thus attaining everlasting privacy. Additionally, our protocol provides coercion resistance in the JCJ framework; when an adversary tries to coerce a voter, the attack can be evaded by creating a signature with a fake but indistinguishable credential. During a moment of privacy, they will cast their real vote. Our scheme also provides verifiability and ballot secrecy.
Last updated:  2025-04-02
Relations among new CCA security notions for approximate FHE
Chris Brzuska, Sébastien Canard, Caroline Fontaine, Duong Hieu Phan, David Pointcheval, Marc Renard, and Renaud Sirdey
In a recent Eurocrypt'24 paper, Manulis and Nguyen have proposed a new CCA security notion, vCCA, and associated construction blueprints to leverage both CPA-secure and correct FHE beyond the CCA1 security barrier. However, because their approach is only valid under the correctness assumption, it leaves a large part of the FHE spectrum uncovered as many FHE schemes used in practice turn out to be approximate and, as such, do not satisfy the correctness assumption. In this paper, we improve their work by defining and investigating a variant of their security notion which is suitable for a more general case where approximate FHE are included. As the passive security of approximate FHE schemes is more appropriately captured by CPAD rather than CPA security, we start from the former notion to define our new vCCAD security notion. Although we show that vCCA and vCCAD are equivalent when the correctness assumption holds, we establish that vCCAD security is strictly stronger than vCCA security in the general case. In doing so, we interestingly establish several new separation results between variants of CPAD security of increasing strength. This allows us to clarify the relationship between vCCA security and CPAD security, and to reveal that the security notions landscape is much simpler for correct FHE than when approximate ones are included --- in which case, for example, we establish that multiple-challenge security notions are strictly stronger than single-challenge ones for both CPAD and vCCAD security. Lastly, we also give concrete construction blueprints, showing how to leverage some of the blueprints proposed by Manulis and Nguyen to achieve vCCAD security. As a result, vCCAD security is the strongest CCA security notion so far known to be achievable by both correct and approximate FHE schemes.
Last updated:  2025-04-02
PHOENIX: Crypto-Agile Hardware Sharing for ML-KEM and HQC
Antonio Ras, Antoine Loiseau, Mikaël Carmona, Simon Pontié, Guénaël Renault, Benjamin Smith, and Emanuele Valea
The transition to quantum-safe public-key cryptography has begun: for key agreement, NIST has standardized ML-KEM and selected HQC for future standardization. The relative immaturity of these schemes encourages crypto-agile implementations, to facilitate easy transitions between them. Intelligent crypto-agility requires efficient sharing strategies to compute operations from different cryptosystems using the same resources. This is particularly challenging for cryptosystems with distinct mathematical foundations, like lattice-based ML-KEM and code-based HQC. We introduce PHOENIX, the first crypto-agile hardware coprocessor for lattice- and code-based cryptosystems--specifically, ML-KEM and HQC, at all three NIST security levels--with an effective agile sharing strategy. PHOENIX accelerates polynomial multiplication, which is the main operation in both cryptosystems, and the current bottleneck of HQC. To maximise sharing, we replace HQC's Karatsuba-based polynomial multiplication with the Frobenius Additive FFT (FAFFT), which is similar on an abstract level to ML-KEM's Number Theoretic Transform (NTT). We show that the FAFFT already brings substantial performance improvements in software. In hardware, our sharing strategy for the FAFFT and NTT is based on a new SuperButterfly unit that seamlessly switches between these two FFT variants over completely different rings. This is, to our knowledge, the first FAFFT hardware accelerator of any kind. We have integrated PHOENIX in a real System-on-Chip FPGA scenario, where our performance measurements show that efficient crypto-agility for lattice- and code-based KEMs can be achieved with low overhead.
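Both the NTT and the FAFFT are FFT-style transforms that reduce polynomial multiplication to pointwise products. As a generic point of reference (our own sketch, not the paper's SuperButterfly, and with toy parameters rather than ML-KEM's $q = 3329$), a naive number-theoretic transform and its inverse:

```python
def ntt(a, root, q):
    """Naive NTT: evaluate the polynomial a at successive powers of a
    primitive n-th root of unity modulo the prime q."""
    n = len(a)
    return [sum(a[j] * pow(root, i * j, q) for j in range(n)) % q
            for i in range(n)]

def intt(A, root, q):
    """Inverse NTT: transform with the inverse root, then scale by 1/n."""
    n = len(A)
    inv_n = pow(n, q - 2, q)
    inv_root = pow(root, q - 2, q)
    return [x * inv_n % q for x in ntt(A, inv_root, q)]

q, n = 257, 8
# 3 generates (Z/257Z)^*, so 3^((q-1)/8) is a primitive 8th root of unity.
root = pow(3, (q - 1) // 8, q)
a = [1, 2, 3, 4, 0, 0, 0, 0]
assert intt(ntt(a, root, q), root, q) == a
```

Fast variants decompose this sum into butterfly stages; the paper's contribution is a shared butterfly unit serving both the NTT and the FAFFT over their respective rings.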
Last updated:  2025-04-02
Improved Round-by-round Soundness IOPs via Reed-Muller Codes
Dor Minzer and Kai Zhe Zheng
We give an IOPP (interactive oracle proof of proximity) for trivariate Reed-Muller codes that achieves the best known query complexity in some range of security parameters. Specifically, for degree $d$ and security parameter $\lambda\leq \frac{\log^2 d}{\log\log d}$ , our IOPP has $2^{-\lambda}$ round-by-round soundness, $O(\lambda)$ queries, $O(\log\log d)$ rounds and $O(d)$ length. This improves upon the FRI [Ben-Sasson, Bentov, Horesh, Riabzev, ICALP 2018] and the STIR [Arnon, Chiesa, Fenzi, Yogev, Crypto 2024] IOPPs for Reed-Solomon codes, that have larger query and round complexity standing at $O(\lambda \log d)$ and $O(\log d+\lambda\log\log d)$ respectively. We use our IOPP to give an IOP for the NP-complete language Rank-1-Constraint-Satisfaction with the same parameters. Our construction is based on the line versus point test in the low-soundness regime. Compared to the axis parallel test (which is used in all prior works), the general affine lines test has improved soundness, which is the main source of our improved soundness. Using this test involves several complications, most significantly that projection to affine lines does not preserve individual degrees, and we show how to overcome these difficulties. En route, we extend some existing machinery to more general settings. Specifically, we give proximity generators for Reed-Muller codes, show a more systematic way of handling "side conditions" in IOP constructions, and generalize the compiling procedure of [Arnon, Chiesa, Fenzi, Yogev, Crypto 2024] to general codes.
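The line-versus-point idea can be illustrated in plaintext: restricting a trivariate polynomial of total degree at most $d$ to an affine line yields a univariate polynomial of degree at most $d$, so $d+1$ samples on the line determine its value everywhere, and consistency can be checked at a fresh point. A toy sketch with parameters of our own choosing:

```python
p, d = 97, 3  # toy field size and total degree bound

def f(x, y, z):
    # A fixed trivariate polynomial of total degree <= 3 over F_p.
    return (x**3 + 2*x*y*z + 5*z*z + y + 7) % p

def lagrange_eval(pts, t):
    """Evaluate at t the unique degree <= len(pts)-1 polynomial
    through the given (x, y) points, over F_p."""
    total = 0
    for i, (xi, yi) in enumerate(pts):
        num, den = yi, 1
        for j, (xj, _) in enumerate(pts):
            if i != j:
                num = num * (t - xj) % p
                den = den * (xi - xj) % p
        total = (total + num * pow(den, p - 2, p)) % p
    return total

# An affine line t -> a + t*b through F_p^3.
a, b = (1, 2, 3), (4, 5, 6)
line = lambda t: tuple((ai + t * bi) % p for ai, bi in zip(a, b))

# The restriction of f to the line has degree <= d, so d+1 samples
# determine it; the check below verifies consistency at a fresh point.
samples = [(t, f(*line(t))) for t in range(d + 1)]
assert lagrange_eval(samples, 10) == f(*line(10))
```

The construction above uses general affine lines rather than axis-parallel ones, which is the source of its improved soundness.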
Last updated:  2025-04-02
Insecurity of One Decentralized Attribute-based Signature Scheme for Social Co-governance
Zhengjun Cao and Lihua Liu
We show that the attribute-based signature scheme [Information Sciences, 654(2024), 119839] is insecure, because an adversary can generate valid signatures for any message even though he cannot access the signer's secret key. The four components of signature $\{\delta_1, \delta_2, \delta_3, \delta_4\}$ are not tightly bound to the target message $M$ and the signer's public key. The dependency between the signer's public key and secret key is not properly used to construct any intractable problem. The inherent flaw results in that the adversary can find an efficient signing algorithm functionally equivalent to the valid signing algorithm.
Last updated:  2025-04-02
Nominal State-Separating Proofs
Markus Krabbe Larsen and Carsten Schürmann
State-separating proofs are a powerful tool to structure cryptographic arguments so that they are amenable to mechanization, as has been shown through implementations such as SSProve. However, the treatment of separation for heaps has never been satisfactorily addressed. In this work, we present the first comprehensive treatment of nominal state separation in state-separating proofs using nominal sets. We provide a Coq library, called Nominal-SSProve, that builds on nominal state separation, supporting mechanized proofs that appear more concise and arguably more elegant.
Last updated:  2025-04-02
Endomorphisms for Faster Cryptography on Elliptic Curves of Moderate CM Discriminants
Dimitri Koshelev and Antonio Sanso
This article generalizes the widely-used GLV decomposition for (multi-)scalar multiplication to a much broader range of elliptic curves with moderate CM discriminant \( D < 0 \). Previously, it was commonly believed that this technique could only be applied efficiently for small values of \( D \) (e.g., up to \( 100 \)). In practice, curves with \( j \)-invariant \( 0 \) are most frequently employed, as they have the smallest possible \( D = -3 \). However, $j = 0$ curves are either too suspicious for conservative government regulators (e.g., Russian ones, which prefer $D = -619$) or unavailable under extra restrictions imposed in a series of cryptographic settings. The article thus contributes to the decade-long development of numerous curves with moderate \( D \) in the context of zk-SNARKs. Such curves are typically derived from others, which limits the ability to generate them while controlling the magnitude of \( D \).
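For background, the core of any GLV-style method is writing a scalar $k$ as $k \equiv k_1 + k_2\lambda \pmod{n}$ with $k_1, k_2$ of size roughly $\sqrt{n}$, where $\lambda$ is the eigenvalue of the endomorphism. A toy sketch of this decomposition via Babai rounding, with parameters of our own choosing (a real implementation would obtain the short lattice basis via half-GCD rather than brute force):

```python
from math import isqrt

n = 103                               # toy group order, n ≡ 1 (mod 3)
# lam plays the role of the endomorphism eigenvalue (here a primitive
# cube root of unity mod n, as on j = 0 curves).
lam = next(l for l in range(2, n) if (l*l + l + 1) % n == 0)

# Short basis of the lattice {(x, y) : x + y*lam ≡ 0 (mod n)},
# found by brute force for this toy size.
vecs = sorted(
    ((x, y) for x in range(-30, 31) for y in range(-30, 31)
     if (x, y) != (0, 0) and (x + y*lam) % n == 0),
    key=lambda v: v[0]**2 + v[1]**2)
(a1, b1) = vecs[0]
(a2, b2) = next(v for v in vecs if a1*v[1] - b1*v[0] != 0)
det = a1*b2 - a2*b1

def decompose(k):
    """Babai rounding: write k ≡ k1 + k2*lam (mod n) with short k1, k2."""
    c1 = round(k * b2 / det)
    c2 = round(-k * b1 / det)
    return k - c1*a1 - c2*a2, -c1*b1 - c2*b2

k1, k2 = decompose(77)
assert (k1 + k2*lam - 77) % n == 0
assert max(abs(k1), abs(k2)) <= 2 * isqrt(n)
```

With such a decomposition, $[k]P$ is computed as $[k_1]P + [k_2]\psi(P)$ using half-length scalars, which is the speedup the article extends to moderate discriminants.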
Last updated:  2025-04-02
Highway to Hull: An Algorithm for Solving the General Matrix Code Equivalence Problem
Alain Couvreur and Christophe Levrat
The matrix code equivalence problem consists, given two matrix spaces $\mathcal{C},\mathcal{D}\subset \mathbb{F}_q^{m\times n}$ of dimension $k$, in finding invertible matrices $P\in\textrm{GL}_m(\mathbb{F}_q)$ and $Q\in\textrm{GL}_n(\mathbb{F}_q)$ such that $\mathcal{D} =P\mathcal{C} Q^{-1}$. Recent signature schemes such as MEDS and ALTEQ relate their security to the hardness of this problem. Narayanan et al. recently published an algorithm solving this problem in the case $k = n =m$ in $\widetilde{\mathcal{O}}(q^{\frac k 2})$ operations. We present a different algorithm which solves the problem in the general case. Our approach consists in reducing the problem to the matrix code conjugacy problem, i.e. the case $P=Q$. For the latter problem, similarly to the permutation code equivalence problem in Hamming metric, a natural invariant based on the \emph{Hull} of the code can be used. Next, the equivalence of codes can be deduced using a usual list collision argument. For $k=m=n$, our algorithm achieves the same complexity as in the aforementioned reference. However, it extends to a much broader range of parameters.
Last updated:  2025-04-02
Partial Key Exposure Attacks on UOV and Its Variants
Yuki Seto, Hiroki Furue, and Atsushi Takayasu
In CRYPTO 2022, Esser et al. proposed a partial key exposure attack on several post-quantum cryptographic schemes, including Rainbow, a variant of UOV. The task of the attack is to recover a full secret key from partial information, such as a secret key with symmetric/asymmetric bit errors. One of the techniques Esser et al. developed is a partial enumeration that combines the standard algorithms for solving the MQ problem with enumeration. Although an efficient attack on Rainbow was proposed, UOV and its variants have continued to receive much attention, since UOV and three of its variants, i.e., MAYO, QR-UOV and SNOVA, were selected as Round 2 candidates of NIST's additional call for digital signature scheme proposals. In this paper, we analyze partial key exposure attacks on UOV, MAYO, and QR-UOV. Although our proposed attacks use partial enumeration, we refine the enumeration strategy. We employ two enumeration strategies and analyze the complexity of the proposed attacks. We then find a structural difference between UOV and its variants with respect to resistance against partial enumeration. Specifically, partial enumeration is effective if the number of vinegar variables is smaller than the number of equations and the order of the finite field is small. As a result, the proposed attack is most effective on MAYO. While our attacks on UOV and QR-UOV are effective only when the symmetric error probabilities are 0.11 and 0.05, respectively, the attack on MAYO is effective even when the probability is close to 0.5.
Last updated:  2025-04-01
Bayesian Leakage Analysis: A Framework for Analyzing Leakage in Cryptography
Zachary Espiritu, Seny Kamara, and Tarik Moataz
We introduce a framework based on Bayesian statistical inference for analyzing leakage in cryptography and its vulnerability to inference attacks. Our framework naturally integrates auxiliary information, defines a notion of adversarial advantage, and provides information-theoretic measures that capture the security of leakage patterns against both full and functional recovery attacks. We present two main theorems that bound the advantage of powerful inference techniques: the maximum a posteriori (MAP) estimate, the maximum likelihood estimate (MLE), and the MAP test. Specifically, we show that the advantage of these methods is exponentially bounded by new entropy measures that capture the susceptibility of leakage patterns to inference. To demonstrate the applicability of our framework, we design and implement an automated leakage attack engine, Bayle, which leverages a novel inference algorithm that efficiently computes MAP estimates for a large class of i.i.d. leakage models. These models include query equality leakage, the combination of query equality and volume leakage, and leakage patterns arising from naive conjunctions.
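For intuition, a MAP estimate simply maximizes prior times likelihood over the hypothesis space. A minimal sketch with a toy volume-leakage model of our own (keywords, volumes, and priors are invented for illustration):

```python
def map_estimate(prior, likelihood, observation):
    """Maximum a posteriori estimate: argmax over hypotheses h of
    prior(h) * Pr[observation | h]."""
    return max(prior, key=lambda h: prior[h] * likelihood(observation, h))

# Toy volume leakage: each keyword has a known result volume and the
# adversary observes the volume of the response.
volumes = {"alice": 10, "bob": 10, "carol": 7}
prior = {"alice": 0.5, "bob": 0.3, "carol": 0.2}
likelihood = lambda obs, h: 1.0 if volumes[h] == obs else 0.0

# Volume 10 is consistent with "alice" and "bob"; the prior breaks the tie.
assert map_estimate(prior, likelihood, 10) == "alice"
assert map_estimate(prior, likelihood, 7) == "carol"
```

The framework's theorems bound how much better than the prior such an estimator can do, in terms of entropy measures of the leakage pattern.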
Last updated:  2025-04-01
Oblivious Immutable Memory
Ananya Appan and David Heath
An oblivious RAM (ORAM) compiler is a cryptographic tool that transforms a program $P$ running in time $n$ into an equivalent program $\tilde P$, with the property that the sequence of memory addresses read from/written to by $\tilde P$ reveal nothing about $\tilde P$'s data (Goldreich and Ostrovsky, JACM'96). An efficient ORAM compiler $C$ should achieve some combination of the following: - Low bandwidth blow-up: $\tilde P$ should read/write a similar amount of data as does P. - Low latency: $\tilde P$ should incur a similar number of roundtrips to the memory as does P. - Low space complexity: $\tilde P$ should run in as few words of local memory as possible. It is well known that for a generic compiler (i.e. one that works for any RAM program $P$), certain combinations of efficiencies are impossible. Any generic ORAM compiler must incur $\Omega(\log n)$ bandwidth blow-up, and any ORAM compiler with no latency blow-up must incur either $\Omega(\sqrt n)$ bandwidth blow-up and/or local space. Moreover, while a $O(\log n)$ bandwidth blow-up compiler is known, it requires the assumption that one-way functions exist and incurs enormous constant factors. To circumvent the above problems and improve efficiency of particular ORAM programs, we develop a compiler for a specific class of programs. Let $P$ be a program that interacts with an immutable memory. Namely, $P$ may write values to memory, then read them back, but it cannot change values that were already written. Using only information-theoretic techniques, we compile any such $P$ into an oblivious form $\tilde P$ with a combination of efficiencies that no generic ORAM compiler can achieve: - $\tilde P$ incurs $\Theta(\log n)$ amortized bandwidth blow-up. - $\tilde P$ incurs $O(1)$ amortized latency blow-up. - $\tilde P$ runs in $O(\lambda)$ words of local space ($\tilde P$ incurs an error with probability $2^{-\Omega(\lambda)}$). 
We show that this, for instance, implies that any pure functional program can be compiled with the same asymptotics. Our work builds on and is compatible with prior work (Appan et al., CCS'24) that showed similar results for pointer machine programs that manipulate objects with constant in-degree (i.e., the program may only maintain a constant number of pointers to any one memory cell; our immutable memory approach does not have this limitation). By combining techniques, we can consider programs that interact with a mixed memory that allows each memory cell to be updated until it is frozen, after which it becomes immutable, allowing further reads to be compiled with the above asymptotics, even when in-degree is high. Many useful algorithms/data structures can be naturally implemented as mixed memory programs, including suffix trees (powerful data structures used in computational biology) and deterministic finite automata (DFAs).
Last updated:  2025-04-01
The Quantum Decoherence Model: Everlasting Composable Secure Computation and More
Nico Döttling, Alexander Koch, Sven Maier, Jeremias Mechler, Anne Müller, Jörn Müller-Quade, and Marcel Tiepelt
Quantum cryptography allows to achieve security goals which are unobtainable using classical cryptography alone: it offers the promise of everlasting privacy. That is, an adversary trying to attack a protocol must succeed during the run of the protocol. After the protocol has terminated, security holds unconditionally. In this work, we initiate the study of a new model which we call the quantum decoherence model (QDM). In a nutshell, this model captures adversaries that are computationally bounded during the run of a protocol (and some time after), but become computationally unbounded long after the protocol terminates. Importantly, once the adversary becomes computationally unbounded, he can only remember a bounded number of qubits from before the computational bound was lifted. We provide a variant of the Universal Composability framework which captures the new notion of quantum decoherence and augment it with quantum random oracles. As our main contribution, we construct a non-interactive commitment scheme achieving unconditional and statistical security against malicious senders and everlasting security against malicious receivers under our new security notion. Such commitments imply general secure multiparty computation with everlasting security. Moreover, we show that our core technique can be applied to a broader spectrum of problems. We show that it gives rise to everlasting public key encryption and OT in the QDM. Finally, we also consider the weaker notion of incompressible encryption in the setting of quantum decoherence, and show that post-quantum IND-CPA secure public key encryption is sufficient to realize this notion without resorting to random oracles. At the technical core of our constructions is a new, conceptually simple yet powerful reverse entropic uncertainty relation.
Last updated:  2025-04-01
DSM: Decentralized State Machine - The Missing Trust Layer of the Internet
Brandon Ramsay
The modern internet relies heavily on centralized trust systems controlled by corporations, governments, and intermediaries to manage authentication, identity, and value transfer. These models introduce fundamental vulnerabilities, including censorship, fraud, and systemic insecurity. The Decentralized State Machine (DSM) addresses these issues by introducing a mathematically enforced trust layer that eliminates the need for consensus mechanisms, third-party validators, and centralized infrastructure. DSM enables quantum-resistant, deterministic state transitions for digital identity and value exchange—offering immediate finality, offline capability, and tamper-proof forward-only state progression. DSM replaces traditional blockchain execution models with deterministic, pre-committed state transitions, enabling secure, multi-path workflows without requiring Turing-completeness or global consensus. The protocol architecture is based on a straight hash chain with sparse indexing and Sparse Merkle Trees (SMTs), ensuring efficient verification, scalability, and privacy. A bilateral isolation model supports asynchronous, offline operation with built-in consistency guarantees. DSM introduces a sustainable, gas-free economic model based on cryptographic subscription commitments. This paper outlines the architecture, cryptographic foundations, and security guarantees of DSM, and demonstrates how it achieves verifiable, trustless interaction between peers—both online and offline. By decoupling security from consensus and enabling self-validating state transitions, DSM offers a practical and scalable alternative to conventional internet trust models.
Last updated:  2025-04-01
Structured Encryption for Indirect Addressing
Ruth Ng, Alexander Hoover, David Cash, and Eileen Ee
The Structured Encryption (StE) framework can be used to capture the encryption and querying of complex data structures on an honest-but-curious server. In this work, we introduce a new type of StE called indirectly addressed multimap encryption (IA-MME). We propose two IA-MME schemes: the "layered multimaps" approach, which extends and generalizes the existing "multimap chaining" approach, and a novel technique called the "single multimap" approach, which has comparable efficiency and strictly better security. We demonstrate that our formalisms simplify and modularize StE solutions for real-world use cases in searchable encryption and SQL databases, and provide simulations demonstrating that our IA-MME constructions lead to tangible efficiency and security gains on realistic data. As a part of our techniques, we identify and correct a technical error in prior constructions — providing greater insight into issues that can arise when composing StE schemes.
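In plaintext terms, indirect addressing means one multimap's values are pointers into another. A toy sketch of our own (labels and data invented for illustration; an IA-MME scheme performs the same two lookups over encrypted multimaps):

```python
# Outer multimap resolves a label to pointers; the inner multimap
# resolves each pointer to its values.
outer = {"dept:crypto": ["emp:1", "emp:2"]}
inner = {"emp:1": ["alice@x", "zurich"], "emp:2": ["bob@x", "paris"]}

def query(label):
    """Follow one level of indirection through the two multimaps."""
    return [v for ptr in outer.get(label, []) for v in inner.get(ptr, [])]

assert query("dept:crypto") == ["alice@x", "zurich", "bob@x", "paris"]
```

The design question the paper addresses is how to encrypt this two-level structure without the first lookup's leakage compounding with the second's.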
Last updated:  2025-04-01
Counter Galois Onion (CGO) for Tor: Fast Non-Malleable Onion Encryption
Jean Paul Degabriele, Alessandro Melloni, Jean-Pierre Münch, and Martijn Stam
In 2012, the Tor project expressed the need to upgrade Tor's onion encryption scheme to protect against tagging attacks and thereby strengthen its end-to-end integrity protection. Tor proposal 261, where each encryption layer is processed by a strongly secure, yet relatively expensive tweakable wide-block cipher, is the only concrete candidate replacement to be backed by formal, yet partial, security proofs (Degabriele and Stam, EUROCRYPT 2018, and Rogaway and Zhang, PoPETS 2018). We propose an alternative onion encryption scheme, called Counter Galois Onion (CGO), that follows a minimalistic, modular design and includes several improvements over proposal 261. CGO's underlying primitive is an updatable tweakable split-domain cipher accompanied by a new security notion that augments the recently introduced rugged pseudorandom permutation (Degabriele and Karadžić, CRYPTO 2022). Thus, we relax the security compared to a tweakable wide-block cipher, allowing for more efficient designs. We suggest a concrete instantiation for the updatable tweakable split-domain cipher and report on our experiments comparing the performance of CGO with Tor's existing onion encryption scheme.
Last updated:  2025-04-01
On the Adaptive Security of Free-XOR-based Garbling Schemes in the Plain Model
Anasuya Acharya, Karen Azari, and Chethan Kamath
A Garbling Scheme is a fundamental cryptographic primitive, with numerous theoretical and practical applications. Since its inception by Yao (FOCS'82, '86), optimizing the communication and computation complexities of securely garbling circuits has been an area of active research. One such optimization, and perhaps the most fundamental, is the `Free-XOR' technique (Kolesnikov and Schneider, ICALP'08), which allows XOR gates in a garbled function to require no representation, and therefore no communication. Since then, several works have designed and analysed the security of schemes that adopt the Free-XOR optimisation. In particular: (1) Applebaum (JoC'16) proved that this can be securely instantiated assuming symmetric-key encryption satisfying a notion called RK-KDM security; and (2) Zahur, Rosulek and Evans (Eurocrypt'15) proposed the so-called `Half Gates' scheme, and proved that it can be instantiated assuming hash functions satisfying a notion called CCR security. Although both schemes have been proven selectively secure, prior work leaves it open to analyze whether they satisfy a stronger security notion -- adaptive security -- in the plain model. In this work, we formally show that the selective security of these two schemes cannot be lifted to adaptive security under the same assumptions. To establish these barriers, we adopt techniques from the work of Kamath et al. (Crypto'21), who proved similar negative results for Yao's garbling. We use that as a starting point and introduce new techniques tailored towards addressing Free-XOR-based schemes.
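As a quick illustration of why the Free-XOR optimisation saves communication, here is a hypothetical Python toy (random byte strings stand in for wire labels; this sketches only the correctness of the trick, not a secure garbling scheme):

```python
import secrets

LABEL_BYTES = 16

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Free-XOR invariant: every wire's two labels differ by one global offset
# delta, i.e. label_for_1 = label_for_0 XOR delta.
delta = secrets.token_bytes(LABEL_BYTES)

def new_wire():
    l0 = secrets.token_bytes(LABEL_BYTES)
    return l0, xor(l0, delta)  # (label for 0, label for 1)

a0, a1 = new_wire()
b0, b1 = new_wire()

# Garbler: the XOR gate's output labels are derived for free -- no garbled
# table needs to be transmitted for this gate.
c0 = xor(a0, b0)
c1 = xor(c0, delta)

# Evaluator holding the labels for a=1, b=1 simply XORs them:
assert xor(a1, b1) == c0  # 1 XOR 1 = 0
assert xor(a1, b0) == c1  # 1 XOR 0 = 1
```

Since a1 ⊕ b1 = (a0 ⊕ delta) ⊕ (b0 ⊕ delta) = a0 ⊕ b0, the two delta terms cancel, which is exactly why the gate costs nothing to represent.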
Last updated:  2025-04-01
Accelerating Isogeny Walks for VDF Evaluation
David Jacquemin, Anisha Mukherjee, Ahmet Can Mert, and Sujoy Sinha Roy
VDFs are characterized by sequential function evaluation but immediate output verification. In order to ensure secure use of VDFs in real-world applications, it is important to determine the fastest possible implementation. Taking the point of view of an attacker (say, with unbounded resources), this paper aims to accelerate the isogeny-based VDF proposed by De Feo-Masson-Petit-Sanso in 2019. It is the first work to implement a hardware accelerator for the evaluation step of an isogeny VDF. To meet our goal, we use redundant representations of integers and introduce a new lookup-table-based algorithm for modular reduction. We also provide a survey of elliptic curve arithmetic to arrive at the most cost-effective curve computations, together with an in-depth cost analysis of the different base-degree isogenies and methods for isogeny evaluation. The evaluation step of a VDF is defined to be sequential, which means that there is limited scope for parallelism. Nevertheless, taking this constraint into account, our proposed design targets the highest level of parallelism possible at the architectural level of an isogeny VDF implementation. We provide a technology-independent metric to model the delay of isogeny evaluation, which a VDF developer can use to derive secure parameters. ASIC synthesis results in 28 nm are used as a baseline to estimate VDF parameters.
Last updated:  2025-04-01
Violating Clauser-Horne-Shimony-Holt Inequality Represents Nothing
Zhengjun Cao, Zhenfu Cao, and Lihua Liu
The Clauser-Horne-Shimony-Holt (CHSH) inequality is of great importance to quantum entanglement and quantum computers. We present a purely mathematical argument for the famous inequality. The essential assumptions in the new argument are totally independent of any physical interpretation, including the hidden variable interpretation for EPR thought experiment and the Copenhagen interpretation for quantum mechanics. The findings are helpful to comprehend the inequality from a new point of view.
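For background, the standard local-hidden-variable derivation of the CHSH bound (the textbook argument, not the new argument of this paper) fits in a few lines:

```latex
S = E(a,b) + E(a,b') + E(a',b) - E(a',b').
% For deterministic outcomes A(a), A(a'), B(b), B(b') \in \{-1,+1\}:
A(a)B(b) + A(a)B(b') + A(a')B(b) - A(a')B(b')
  = A(a)\bigl(B(b)+B(b')\bigr) + A(a')\bigl(B(b)-B(b')\bigr) = \pm 2,
```

since one of $B(b) \pm B(b')$ equals $0$ and the other equals $\pm 2$. Averaging over any hidden-variable distribution then gives $|S| \le 2$, whereas quantum mechanics permits values up to $2\sqrt{2}$.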
Last updated:  2025-04-01
Pre-Constructed Publicly Verifiable Secret Sharing and Applications
Karim Baghery, Noah Knapen, Georgio Nicolas, and Mahdi Rahimi
Conventional Publicly Verifiable Secret Sharing (PVSS) protocols allow a dealer to share a secret among $n$ parties without interaction, ensuring that any $t + 1$ parties (where $t+1 \le n$) can recover the secret, while anyone can publicly verify the validity of both the individual shares and the reconstructed secret. PVSS schemes are shown to be a key tool in a wide range of practical applications. In this paper, we introduce Pre-constructed PVSS (PPVSS), an extension of standard PVSS schemes, highlighting its enhanced utility and efficiency in various protocols. Unlike standard PVSS, PPVSS requires the dealer to publish a commitment or encryption of the main secret and incorporates a novel secret reconstruction method. We show that these refinements make PPVSS more practical and versatile than conventional PVSS schemes. To build a PPVSS scheme, we first point out that the well-known PVSS scheme by Schoenmakers (CRYPTO'99) and its pairing-based variant presented by Heidarvand and Villar (SAC'08) can be seen as special cases of PPVSS, where the dealer also publishes a commitment to the main secret. However, these protocols are not practical for many applications due to efficiency limitations and are less flexible compared to a standard PPVSS scheme. To address this, we propose a general strategy for transforming a Shamir-based PVSS scheme into a PPVSS scheme. Using this strategy, we construct two practical PPVSS schemes in both the Random Oracle (RO) and plain models, grounded in state-of-the-art PVSS designs. Leveraging the new RO-based PPVSS scheme, we revisit some applications and present more efficient variants. Notably, we propose a new universally verifiable e-voting protocol that improves on the alternative scheme by Schoenmakers (CRYPTO'99), reducing the verification complexity with $m$ voters from $O(n^2m)$ to $O(nm)$ exponentiations--a previously unattainable goal with standard PVSS schemes. 
Our implementation results demonstrate that both our proposed PPVSS schemes and the new universally verifiable e-voting protocol significantly outperform existing alternatives in terms of efficiency.
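The Shamir-based sharing that these PVSS/PPVSS constructions build on can be sketched in a few lines of Python (verifiability, encryption of shares, and the PPVSS commitment to the secret are all omitted; the prime and parameters below are purely illustrative):

```python
import secrets

# Minimal Shamir (t, n) secret sharing over a prime field -- the algebraic
# core underlying PVSS/PPVSS. Any t+1 of the n shares recover the secret.
P = 2**127 - 1  # Mersenne prime used as an illustrative field modulus

def share(secret: int, t: int, n: int):
    # Random degree-t polynomial f with f(0) = secret; share i is (i, f(i)).
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 from any t+1 shares.
    s = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        s = (s + yi * num * pow(den, -1, P)) % P
    return s

shares = share(secret=123456789, t=2, n=5)
assert reconstruct(shares[:3]) == 123456789  # any t+1 = 3 shares suffice
```

A PVSS scheme layers public verifiability on top of this: shares are published encrypted under the parties' keys together with proofs that they are consistent evaluations of one polynomial.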
Last updated:  2025-04-01
Defeating AutoLock: From Simulation to Real-World Cache-Timing Exploits against TrustZone
Quentin Forcioli, Sumanta Chaudhuri, and Jean-Luc Danger
In this article, we present for the first time a cross-core Prime+Probe attack on ARM TrustZone that bypasses the AutoLock mechanism. We introduce our simulation-driven methodology based on gem5 for vulnerability analysis. We demonstrate its utility in reverse engineering a SoC platform in order to study its microarchitectural behavior (caches, etc.) inside a simulator, in spite of hardware protection. We present a novel vulnerability analysis technique that takes into account the cache set occupancy of the targeted victim executable. This proves to be essential in identifying information leakage in the presence of AutoLock. The tool also identifies the cache lines leaking the maximum amount of information. A cross-core Prime+Probe attack is then mounted on these max-leakage cache lines, both in simulation for fine-tuning and on real hardware. We validate our analysis and attack method on OP-TEE, an open-source trusted execution environment running on a RockPi4 board based on the RK3399 SoC. More specifically, we target the RSA subroutine in the MbedTLS library used inside OP-TEE. Despite the presence of AutoLock and multiplier obfuscation, and working cross-core, we are able to retrieve 30% of the key bits, which can later be used in branch-and-prune methods to recover the full key.
Last updated:  2025-04-01
DFS: Delegation-friendly zkSNARK and Private Delegation of Provers
Yuncong Hu, Pratyush Mishra, Xiao Wang, Jie Xie, Kang Yang, Yu Yu, and Yuwen Zhang
Zero-Knowledge Succinct Non-interactive Arguments of Knowledge (zkSNARKs) lead to proofs that can be succinctly verified but require huge computational resources to generate. Prior systems outsource proof generation either through public delegation, which reveals the witness to the third party, or, more preferably, private delegation that keeps the witness hidden using multiparty computation (MPC). However, current private delegation schemes struggle with scalability and efficiency due to MPC inefficiencies, poor resource utilization, and suboptimal design of zkSNARK protocols. In this paper, we introduce DFS, a new zkSNARK that is delegation-friendly for both public and private scenarios. Prior work focused on optimizing the MPC protocols for existing zkSNARKs, while DFS uses co-design between MPC and zkSNARK so that the protocol is efficient for both distributed computing and MPC. In particular, DFS achieves linear prover time and logarithmic verification cost in the non-delegated setting. For private delegation, DFS introduces a scheme with zero communication overhead in MPC and achieves malicious security for free, which results in logarithmic overall communication, while prior work required linear communication. Our evaluation shows that DFS is as efficient as state-of-the-art zkSNARKs in public delegation; when used for private delegation, it scales better than previous work. In particular, for $2^{24}$ constraints, the total communication of DFS is less than $500$KB, while prior work incurs $300$GB, which is linear in the circuit size. Additionally, we identify and address a security flaw in prior work, EOS (USENIX'23).
Last updated:  2025-04-01
Quantum Attacks on Sum of Even-Mansour Construction Utilizing Online Classical Queries
Zhenqiang Li, Shuqin Fan, Fei Gao, Yonglin Hao, Hongwei Sun, Xichao Hu, and Dandan Li
The Sum of Even-Mansour (SoEM) construction, proposed by Chen et al. at Crypto 2019, has become the basis for designing some symmetric schemes, such as the nonce-based MAC scheme $\text{nEHtM}_{p}$ and the nonce-based encryption scheme $\text{CENCPP}^{\ast}$. In this paper, we make the first attempt to study the quantum security of SoEM under the Q1 model where the targeted encryption oracle can only respond to classical queries rather than quantum ones. Firstly, we propose a quantum key recovery attack on SoEM21 with a time complexity of $\tilde{O}(2^{n/3})$ along with $O(2^{n/3})$ online classical queries. Compared with the current best classical result which requires $O(2^{2n/3})$, our method offers a quadratic time speedup while maintaining the same number of queries. The time complexity of our attack is less than that observed for quantum exhaustive search by a factor of $2^{n/6}$. We further propose classical and quantum key recovery attacks on the generalized SoEMs1 construction (consisting of $s\geq 2$ independent public permutations), revealing that the application of quantum algorithms can provide a quadratic acceleration over the pure classical methods. Our results also imply that the quantum security of SoEM21 cannot be strengthened merely by increasing the number of permutations.
Last updated:  2025-04-01
A Place for Everyone vs Everyone in its Place: Measuring and Attacking the Ethereum Global Network
Chenyu Li, Ren Zhang, and Xiaorui Gong
The Ethereum Global Network (EGN) is the peer-to-peer (P2P) network underlying Ethereum and thousands of subsequent blockchain services. Deviating from traditional single-service P2P networks, EGN's multi-service architecture has gained widespread acceptance for supposedly improving node discovery efficiency and security. This paper challenges this belief by critically examining EGN's design and its purported benefits. Our analysis reveals significant shortcomings in EGN's node discovery process. EGN nodes struggle to connect with peers offering the desired service: over three-quarters of connection attempts reach nodes of other services. In an extreme case, one node spent an average of $45\,908$ connection attempts to find each neighbor. Moreover, this blended architecture compromises EGN's security. The network demonstrates high susceptibility to DHT pollution and partition attacks. Even with only $300$ malicious nodes in EGN, an attacker can isolate thousands of nodes, significantly hindering recovery. In contrast, such a small number of malicious nodes has minimal impact on every single-service P2P network. We propose solutions to improve EGN's node discovery efficiency and strengthen its resilience against attacks.
Last updated:  2025-04-01
Lifeboats on the Titanic Cryptography
Gideon Samid
The Titanic was the ship that "could not sink," yet fortunately its designers installed lifeboats (not enough) despite having no logical grounding for this waste of space and material. It was out of respect for unforeseen surprises. The NIST post-quantum ciphers represent the best and the brightest in world crypto intelligence. They are certified as good for their purpose. And likely so; alas, not surely so. If we could find a crypto equivalent for the Titanic lifeboats, should we not load them up for our journey? Indeed, pattern-devoid cryptography is the crypto equivalent of the lifeboats that mitigated the Titanic disaster. Pattern-Devoid Cryptography (PDC) may be deemed inelegant, inconvenient, and bloated, but it will hold up against quantum computers more powerful than expected, and more importantly, it will hold up against adversarial mathematical talent greater than expected. That is why we should put up with its negatives and install it, just in case the Titanic story repeats itself in cyberspace. This article elaborates on this proposition.
Last updated:  2025-04-01
Heuristic Algorithm for Solving Restricted SVP and its Applications
Geng Wang, Wenwen Xia, and Dawu Gu
In lattice-based cryptography, many attacks are performed by finding a short enough vector on a specific lattice. However, it is possible that length is not the only restriction on the vector to be found. A typical example is SVP with infinity norm: since most SVP solving algorithms only aim to find a short vector under the Euclidean norm, the infinity norm is in fact another restriction on the vector. In the literature, such problems are usually solved by performing exhaustive search on a list of short vectors generated from lattice sieving. However, the sieving list might either be too large or too small to pass the additional restriction, which makes the solving algorithm inefficient in some cases. Our contribution in this work is as follows: (1) We formally define a new lattice hard problem called restricted SVP, and show that it can be used to generalize many lattice hard problems, including SVP with non-Euclidean norm and Kannan's embedding on approximate CVP. (2) We extend the dimension-for-free technique and the enumerate-then-slice technique to approximate SVP, where the goal is to output a list of short vectors of a certain size. (3) We give a heuristic algorithm for solving restricted SVP, and design a hardness estimator for this algorithm, which can be used to estimate the concrete hardness of signature forgery in Dilithium and other lattice-based signatures. Using this estimator, we present a concrete security analysis for Dilithium against signature forgery under the gate-count model for the first time. Our estimation matches well with the security estimation from the core-SVP model in the Dilithium documentation, and we also combine our estimator with the rescaling technique to generate a tighter estimation.
Last updated:  2025-04-01
Protecting Computations Against Continuous Bounded-Communication Leakage
Yuval Ishai and Yifan Song
We consider the question of protecting a general computation device, modeled by a stateful Boolean circuit, against leakage of partial information about its internal wires. Goyal et al. (FOCS 2016) obtained a solution for the case of bounded-communication leakage, where the wires are partitioned into two parts and the leakage can be any function computed using $t$ bits of communication between the parts. However, this solution suffers from two major limitations: (1) it only applies to a one-shot (stateless) computation, mapping an encoded input to an encoded output, and (2) the leakage-resilient circuit consumes fresh random bits, whose number scales linearly with the circuit complexity of the computed function. In this work, we eliminate the first limitation and make progress on the second. Concretely: - We present the first construction of stateful circuits that offer information-theoretic protection against continuous bounded-communication leakage. As an application, we extend a two-party ``malware-resilient'' protocol of Goyal et al. to the continuous-leakage case. - For simple types of bounded-communication leakage, which leak $t$ parities or $t$ disjunctions of circuit wires or their negations, we obtain a deterministic variant that does not require any fresh randomness beyond the randomness in the initial state. Here we get computational security based on a subexponentially secure one-way function. This is the first deterministic leakage-resilient circuit construction for any nontrivial class of global leakage.
Last updated:  2025-03-31
Adaptive Distributional Security: A Framework for Input-Adaptive Cryptography
Cruz Barnum and David Heath
It is often desirable to break cryptographic primitives into two components: an input-independent offline component, and a cheap online component used when inputs arrive. Security of such online/offline primitives must be proved in the input-adaptive setting: the adversary chooses its input adaptively, based on what it sees in the offline-phase. Proving security in the input-adaptive setting can be difficult, particularly when one wishes to achieve simulation security and avoid idealized objects like a random oracle (RO). This work proposes a framework for reasoning about input-adaptive primitives: adaptive distributional security (ADS). Roughly, an ADS primitive provides security when it is used with inputs drawn from one of two distributions that are themselves hard to distinguish. ADS is useful as a framework for the following reasons: - An ADS definition can often circumvent impossibility results imposed on the corresponding simulation-based definition. This allows us to decrease the online-cost of primitives, albeit by using a weaker notion of security. - With care, one can typically upgrade an ADS-secure object into a simulation-secure object (by increasing cost in the online-phase). - ADS is robust, in the sense that (1) it enables a form of composition and (2) interesting ADS primitives are highly interconnected in terms of which objects imply which other objects. - Many useful ADS-secure objects are plausibly secure from straightforward symmetric-key cryptography. We start by defining the notion of an ADS encryption (ADE) scheme. A notion of input-adaptive encryption can be easily achieved from RO, and the ADE definition can be understood as capturing the concrete property provided by RO that is sufficient to achieve input-adaptivity. 
From there, we use ADE to achieve ADS variants of garbled circuits and oblivious transfer, to achieve simulation-secure garbled circuits, oblivious transfer, and two-party computation, and prove interconnectedness of these primitives. In sum, this results in a family of objects with extremely cheap online-cost.
Last updated:  2025-03-31
Adaptively-Secure Big-Key Identity-Based Encryption
Jeffrey Champion, Brent Waters, and David J. Wu
Key-exfiltration attacks on cryptographic keys are a significant threat to computer security. One proposed defense against such attacks is big-key cryptography which seeks to make cryptographic secrets so large that it is infeasible for an adversary to exfiltrate the key (without being detected). However, this also introduces an inconvenience to the user who must now store the large key on all of their different devices. The work of Döttling, Garg, Sekar and Wang (TCC 2022) introduces an elegant solution to this problem in the form of big-key identity-based encryption (IBE). Here, there is a large master secret key, but very short identity keys. The user can now store the large master secret key as her long-term key, and can provision each of her devices with short ephemeral identity keys (say, corresponding to the current date). In this way, the long-term secret key is protected by conventional big-key cryptography, while the user only needs to distribute short ephemeral keys to their different devices. Döttling et al. introduce and construct big-key IBE from standard pairing-based assumptions. However, their scheme only satisfies selective security where the adversary has to declare its challenge set of identities at the beginning of the security game. The more natural notion of security is adaptive security where the user can adaptively choose which identities it wants to challenge after seeing the public parameters (and part of the master secret key). In this work, we give the first adaptively-secure construction of big-key IBE from standard cryptographic assumptions. Our first construction relies on indistinguishability obfuscation (and one-way functions), while our second construction relies on witness encryption for NP together with standard pairing-based assumptions (i.e., the SXDH assumption). To prove adaptive security, we show how to implement the classic dual-system methodology with indistinguishability obfuscation as well as witness encryption.
Last updated:  2025-03-31
The Singularity Random Number Generator: Bridging Determinism and Unpredictability to Redefine Randomness, Secure Systems, and Adaptive Intelligence
S. P. Prahlad
The Singularity Random Number Generator (SRNG) represents a groundbreaking advancement in the generation of random numbers by integrating two key properties - computational irreducibility and seed independence - into a deterministic algorithm. Unlike conventional pseudorandom number generators (PRNGs), whose randomness is intrinsically linked to seed quality or chaotic sensitivity, SRNG transforms even low-entropy seeds into complex, unpredictable outputs. SRNG demonstrates that high-quality randomness can emerge independently of seed entropy or size. This paper explores how SRNG not only challenges classical paradigms of predictability in deterministic systems but also offers transformative applications in cryptography, artificial intelligence (AI), and interdisciplinary research. Furthermore, by drawing parallels with cognitive variability research - such as insights from the Forbes article “Why A ‘Productively Distracted’ Brain Is A Superpower” - we discuss how the emergent unpredictability of SRNG may contribute to enhanced adaptive learning and decision-making processes in AI systems. Ultimately, SRNG is presented as a model that bridges the realms of science and mystery, inviting a new understanding of randomness and the limits of scientific inquiry, thereby expanding the frontiers of interdisciplinary research.
Last updated:  2025-03-31
One Tree to Rule Them All: Optimizing GGM Trees and OWFs for Post-Quantum Signatures
Carsten Baum, Ward Beullens, Shibam Mukherjee, Emmanuela Orsini, Sebastian Ramacher, Christian Rechberger, Lawrence Roy, and Peter Scholl
The use of MPC-in-the-Head (MPCitH)-based zero-knowledge proofs of knowledge (ZKPoK) to prove knowledge of a preimage of a one-way function (OWF) is a popular approach towards constructing efficient post-quantum digital signatures. Starting with the Picnic signature scheme, many optimized MPCitH signatures using a variety of (candidate) OWFs have been proposed. Recently, Baum et al. (CRYPTO 2023) showed a fundamental improvement to MPCitH, called VOLE-in-the-Head (VOLEitH), which can generically reduce the signature size by at least a factor of two without decreasing computational performance or introducing new assumptions. Based on this, they designed the FAEST signature which uses AES as the underlying OWF. However, in comparison to MPCitH, the behavior of VOLEitH when using other OWFs is still unexplored. In this work, we improve a crucial building block of the VOLEitH and MPCitH approaches, the so-called all-but-one vector commitment, thus decreasing the signature size of VOLEitH and MPCitH signature schemes. Moreover, by introducing a small Proof of Work into the signing procedure, we can improve the parameters of VOLEitH (further decreasing signature size) without compromising the computational performance of the scheme. Based on these optimizations, we propose three VOLEitH signature schemes FAESTER, KuMQuat, and MandaRain based on AES, MQ, and Rain, respectively. We carefully explore the parameter space for these schemes and implement each, showcasing their performance with benchmarks. Our experiments show that these three signature schemes outperform MPCitH-based competitors that use comparable OWFs, in terms of both signature size and signing/verification time.
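For intuition, the GGM-tree/all-but-one vector commitment idea central to the abstract above can be sketched as a toy (SHA-256 stands in for the length-doubling PRG; the paper's optimized commitment differs from this hypothetical illustration):

```python
import hashlib

# GGM tree: a root seed is expanded with a length-doubling PRG so that
# 2^depth leaf seeds can be derived. Revealing the siblings along one
# leaf's path ("all-but-one") lets a verifier recompute every leaf
# except that one, while its seed stays hidden.
def prg(seed: bytes):
    return (hashlib.sha256(seed + b"L").digest(),
            hashlib.sha256(seed + b"R").digest())

def expand(seed: bytes, depth: int):
    level = [seed]
    for _ in range(depth):
        level = [child for s in level for child in prg(s)]
    return level  # 2^depth leaf seeds

def open_all_but_one(seed: bytes, depth: int, index: int):
    # Siblings of the nodes on the path to `index`, top-down.
    out, node = [], seed
    for d in reversed(range(depth)):
        left, right = prg(node)
        bit = (index >> d) & 1
        out.append(right if bit == 0 else left)  # reveal the sibling
        node = left if bit == 0 else right
    return out

def recompute(siblings, depth, index):
    # Verifier rebuilds all leaves except `index` from the siblings.
    known = {}
    for d, sib in enumerate(siblings):
        prefix = index >> (depth - d)  # path bits consumed so far
        sib_prefix = (prefix << 1) | (1 - ((index >> (depth - d - 1)) & 1))
        known[(d + 1, sib_prefix)] = sib
    out = {}
    for (lvl, idx), s in known.items():
        for j, leaf in enumerate(expand(s, depth - lvl)):
            out[(idx << (depth - lvl)) + j] = leaf
    return out

root = b"\x00" * 32
all_leaves = expand(root, 3)
rec = recompute(open_all_but_one(root, 3, index=5), 3, index=5)
assert 5 not in rec and all(rec[i] == all_leaves[i] for i in rec)
```

The opening is only `depth` hashes long instead of `2^depth - 1` leaves, which is what makes these trees attractive inside MPCitH/VOLEitH signatures.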
Last updated:  2025-03-31
Release the Power of Rejected Signatures: An Efficient Side-Channel Attack on Dilithium
Zheng Liu, An Wang, Congming Wei, Yaoling Ding, Jingqi Zhang, Annyu Liu, and Liehuang Zhu
The Module-Lattice-Based Digital Signature Standard (ML-DSA), formerly known as CRYSTALS-Dilithium, is a lattice-based post-quantum cryptographic scheme. In August 2024, the National Institute of Standards and Technology (NIST) officially standardized ML-DSA under FIPS 204. Dilithium generates one valid signature and multiple rejected signatures during the signing process. Most side-channel attacks targeting Dilithium have focused solely on the valid signature, while neglecting the hints contained in rejected signatures. In this paper, we propose a method for recovering the private key by simultaneously leveraging side-channel leakage from both valid and rejected signatures. This approach minimizes the number of signing attempts required for full key recovery. We construct a factor graph incorporating all relevant side-channel leakages and apply the Belief Propagation (BP) algorithm for private key recovery. We conducted a proof-of-concept experiment on a Cortex-M4 core chip, where the results demonstrate that utilizing rejected signatures reduces the required number of traces by at least $42\%$ for full key recovery. A single trace can recover the private key with a success rate of $30\%$. Our findings highlight that protecting rejected signatures is crucial, as their leakage provides valuable side-channel information. We strongly recommend implementing countermeasures for rejected signatures during the signing process to mitigate potential threats.
Last updated:  2025-03-31
Reusable Dynamic Multi-Party Homomorphic Encryption
Jung Hee Cheon, Hyeongmin Choe, Seunghong Kim, and Yongdong Yeo
Homomorphic Encryption (HE) is a promising primitive for evaluating arbitrary circuits while keeping the user's privacy. We investigate how to use HE in the multi-party setting where data is encrypted with several distinct keys. One may use Multi-Key Homomorphic Encryption (MKHE) in this setting, but it has space/computation overhead of $\mathcal O(n)$ for the number of users $n$, which makes it impractical when $n$ grows large. On the contrary, Multi-Party Homomorphic Encryption (MPHE) is the other homomorphic encryption primitive in the multi-party setting, where the space/computation overhead is $\mathcal O(1)$; however, it is limited in terms of ciphertext reusability and dynamicity: ciphertexts are encrypted only for a fixed group of parties and cannot be reused for other purposes, and additional parties cannot join the computation dynamically. Contrary to MKHE, where the secret key owners engage only in the decryption phase, we consider a more relaxed situation where the secret key owners can communicate before the computation. In that case, we can reduce the size of a ciphertext and the evaluation complexity from $\mathcal O(n)$ to $\mathcal O(1)$, as in a single-key HE setting. We call this primitive {\em Reusable Dynamic Multi-Party Homomorphic Encryption}, which is more suitable for real-world scenarios. We show that 1) the procedures before the computation can be done in a few rounds of communication, 2) the evaluation/space complexities are independent of the number of users, and 3) the functionalities are as efficient as MKHE, with asymptotic analysis and with implementation.
Last updated:  2025-03-31
Efficient Revocable Identity-Based Encryption from Middle-Product LWE
Takumi Nishimura and Atsushi Takayasu
The Middle-Product Learning with Errors (MPLWE) assumption is a variant of the Learning with Errors (LWE) assumption. The MPLWE assumption reduces the key size of corresponding LWE-based schemes by setting keys as sets of polynomials. Moreover, MPLWE has more robust security than other LWE variants such as Ring-LWE and Module-LWE. Lombardi et al. proposed an identity-based encryption (IBE) scheme (LVV-IBE) based on the MPLWE assumption in the random oracle model (ROM) by following Gentry et al.'s IBE scheme (GPV-IBE) based on LWE. Due to the benefit of MPLWE, LVV-IBE has a shorter master public key and secret key than GPV-IBE without changing the size of a ciphertext. However, Lombardi et al.'s proof is not tight in the ROM, while Katsumata et al. proved that GPV-IBE achieves tight adaptive anonymity in the quantum ROM (QROM). Revocable IBE (RIBE) is a variant of IBE supporting a key revocation mechanism to remove malicious users from the system. Takayasu proposed the most efficient RIBE scheme (Takayasu-RIBE) based on LWE, achieving tight adaptive anonymity in the QROM. Although a concrete RIBE scheme based on MPLWE has not been proposed, one can construct a scheme (LVV-based RIBE) by applying Ma and Lin's generic transformation to LVV-IBE. Due to the benefit of MPLWE, LVV-based RIBE has an asymptotically shorter master public key and a shorter secret key than Takayasu-RIBE, although the former has a larger ciphertext than the latter. Moreover, its security proof is neither tight nor anonymous, and holds only in the ROM, owing to the underlying proofs of Ma-Lin and Lombardi et al. In this paper, we propose a concrete RIBE scheme based on MPLWE. Compared with the above RIBE schemes, the proposed RIBE scheme is asymptotically the most efficient, since the sizes of a master public key and a secret key (resp. ciphertext) of the proposed scheme are the same as those of the LVV-based RIBE scheme (resp. Takayasu-RIBE).
Moreover, we prove the tight adaptive anonymity of the proposed RIBE scheme in the QROM. For this purpose, we also prove the tight adaptive anonymity of LVV-IBE in the QROM.
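For intuition, the middle-product operation at the heart of MPLWE keeps only the central coefficients of a polynomial product. The following is a toy integer-coefficient sketch (ignoring the modulus $q$ and the precise dimension constraints of the actual assumption, which are chosen so that the "middle" is well-defined):

```python
# Toy middle-product: multiply two polynomials (coefficient lists, lowest
# degree first) and keep the central d coefficients of the result.
# Plain integers are used here; the real MPLWE assumption works modulo q.
def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def middle_product(a, b, d):
    full = poly_mul(a, b)
    k = (len(full) - d) // 2  # assumes len(full) - d is even
    return full[k:k + d]

# (1 + 2x + 3x^2)(4 + 5x + 6x^2) = 4 + 13x + 28x^2 + 27x^3 + 18x^4
assert middle_product([1, 2, 3], [4, 5, 6], 3) == [13, 28, 27]
```

Because the secret only ever enters through such truncated products, keys can be stored as short coefficient vectors, which is the source of the key-size savings mentioned above.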
Last updated:  2025-03-30
REGKYC: Supporting Privacy and Compliance Enforcement for KYC in Blockchains
Xihan Xiong, Michael Huth, and William Knottenbelt
Know Your Customer (KYC) is a core component of the Anti-Money Laundering (AML) framework, designed to prevent illicit activities within financial systems. However, enforcing KYC and AML on blockchains remains challenging due to difficulties in establishing accountability and preserving user privacy. This study proposes REGKYC, a privacy-preserving Attribute-Based Access Control (ABAC) framework that balances user privacy with externally mandated KYC and AML requirements. REGKYC leverages a structured ABAC model to support the flexible verification of KYC attributes and the enforcement of compliance policies, providing benefits to multiple stakeholders. First, it enables legitimate users to meet compliance requirements while preserving the privacy of their on-chain activities. Second, it empowers Crypto-asset Service Providers (CASPs) to tailor compliance policies to operational needs, ensuring adaptability to evolving regulations. Finally, it enhances regulatory accountability by enabling authorized deanonymization of malicious actors. We hope this work inspires future research to harmonize user privacy and regulatory compliance in blockchain systems.
Last updated:  2025-03-30
Efficient Garbled Pseudorandom Functions and Lookup Tables from Minimal Assumption
Wei-Kai Lin, Zhenghao Lu, and Hong-Sheng Zhou
Yao's garbled circuits have received huge attention in both theory and practice. While garbled circuits can be constructed from minimal assumptions (i.e., the existence of pseudorandom functions or one-way functions), the state-of-the-art constructions (e.g., Rosulek-Roy, Crypto 2021) rely on stronger assumptions. In particular, the ``Free-XOR'' technique (Kolesnikov-Schneider, ICALP 2008) is essential in these constructions, and their security can only be proven in the random oracle model or under the ``circular-correlation robust hash'' assumption. In this paper, we develop new techniques to construct efficient garbling schemes from minimal assumptions. Instead of generically replacing the Free-XOR technique, we focus on garbling schemes for specific functionalities. We eliminate the need for Free-XOR in several state-of-the-art schemes, including one-hot garbling (Heath and Kolesnikov, CCS 2021), garbled pseudorandom functions, and garbled lookup tables (Heath, Kolesnikov, and Ng, Eurocrypt 2024). Our schemes are based on minimal assumptions, i.e., standard pseudorandom functions (PRFs); in particular, we remove the need for circular security. Our schemes are almost as efficient as the best known results, up to a small constant factor. Namely, for any lookup table $\{0,1\}^n \to \{0,1\}^m$, our scheme takes $n + (5n+9)m\lambda + 2^n \cdot m$ bits of communication, where $\lambda$ is the security parameter of the PRF.
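The stated communication cost is easy to evaluate directly. The helper below is written for illustration (the function name and the example parameters are ours, not the paper's):

```python
def comm_bits(n: int, m: int, lam: int = 128) -> int:
    """Communication (in bits) of the paper's garbled lookup table
    {0,1}^n -> {0,1}^m: n + (5n+9)*m*lambda + 2^n * m."""
    return n + (5 * n + 9) * m * lam + (2 ** n) * m

# e.g., an 8-bit-to-8-bit table (an AES-like S-box) at lambda = 128:
print(comm_bits(8, 8))  # 52232 bits, i.e., about 6.4 KB
```

Note how the $2^n \cdot m$ term (the truth table itself) is dwarfed by the $(5n+9)m\lambda$ term for small $n$, so the $\lambda$-dependent part dominates the cost at these sizes.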
Last updated:  2025-03-30
Making GCM Great Again: Toward Full Security and Longer Nonces
Woohyuk Chung, Seongha Hwang, Seongkwang Kim, Byeonghak Lee, and Jooyoung Lee
The GCM authenticated encryption (AE) scheme is one of the most widely used AE schemes in the world, yet it suffers from the risk of nonce misuse, a short maximum message length per encryption, and an insufficient level of security. The goal of this paper is to design new AE schemes that achieve stronger provable security in the standard model and accept longer nonces (or provide nonce misuse resistance), while following the design rationale behind GCM. As a result, we propose two enhanced variants of GCM and GCM-SIV, dubbed eGCM and eGCM-SIV, respectively. eGCM and eGCM-SIV are built on top of a new CENC-type encryption mode, dubbed eCTR: using 2n-bit counters, eCTR enjoys beyond-birthday-bound security without significant loss of efficiency. eCTR is combined with an almost uniform and almost universal hash function, yielding a variable input-length, variable output-length pseudorandom function, dubbed HteC. eGCM and eGCM-SIV are constructed using eCTR and HteC as building blocks. They accept nonces of arbitrary length and provide almost full security (namely, n-bit security when based on an n-bit block cipher) for a constant maximum input length, under the assumption that the underlying block cipher is a pseudorandom permutation (PRP). Their efficiency is also comparable to GCM in terms of rate and overall speed.
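To illustrate the CENC-style frame structure that eCTR builds on: each frame of w keystream blocks spends one extra block-cipher call on a mask L, which is XORed into the following w blocks. The sketch below is a toy illustration only; the block cipher is replaced by a hash-based stand-in (not secure), and it omits eCTR's defining change, the 2n-bit counters.

```python
import hashlib

BLOCK = 16  # block size in bytes (n = 128 bits)

def toy_prp(key: bytes, x: bytes) -> bytes:
    # Stand-in for a block cipher E_K; illustration only, NOT a real PRP.
    return hashlib.sha256(key + x).digest()[:BLOCK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cenc_keystream(key: bytes, nonce: bytes, nblocks: int, w: int = 4) -> bytes:
    """CENC-style keystream: one mask block L per frame of w output blocks."""
    def blk(c: int) -> bytes:
        # Encrypt nonce || counter (counter fills the remaining block bytes).
        return toy_prp(key, nonce + c.to_bytes(BLOCK - len(nonce), "big"))
    out, ctr, produced = b"", 0, 0
    while produced < nblocks:
        L = blk(ctr)          # one extra call per frame: the mask
        ctr += 1
        for _ in range(min(w, nblocks - produced)):
            out += xor(L, blk(ctr))  # keystream block = E_K(ctr) XOR L
            ctr += 1
            produced += 1
    return out
```

The rate cost of the masking is w/(w+1) block-cipher calls per keystream block, which is why CENC-type modes lose little efficiency relative to plain CTR.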