All papers in 2024 (2100 results)

Last updated:  2024-12-31
Compact Key Storage in the Standard Model
Yevgeniy Dodis and Daniel Jost
In recent work [Crypto'24], Dodis, Jost, and Marcedone introduced Compact Key Storage (CKS) as a modern approach to backup for end-to-end (E2E) secure applications. As most E2E-secure applications rely on a sequence of secrets $(s_1,...,s_n)$ from which, together with the ciphertexts sent over the network, all content can be restored, Dodis et al. introduced CKS as a primitive for backing up $(s_1,...,s_n)$. The authors provided definitions as well as two practically efficient schemes (with different functionality-efficiency trade-offs). However, both their security definitions and their schemes relied on the random oracle model (ROM). In this paper, we first show that this reliance is inherent. More concretely, we argue that in the standard model, one cannot have a general CKS instantiation that is applicable to all "CKS-compatible games", as defined by Dodis et al., and realized by their ROM construction. Therefore, one must restrict the notion of CKS-compatible games to allow for standard-model CKS instantiations. We then introduce an alternative standard-model CKS definition that makes concessions in terms of functionality (thereby circumventing the impossibility). More precisely, we specify a CKS notion that recovers not the original secret $s_i$ but a derived key $k_i$, and we observe that this still suffices for many real-world applications. We instantiate this new notion based on minimal assumptions. For passive security, we provide an instantiation based on one-way functions only. For stronger notions, we additionally need collision-resistant hash functions and dual-PRFs, which we argue to be minimal. Finally, we provide a modularization of the CKS protocols of Dodis et al. In particular, we present a unified protocol (and proof) for standard-model equivalents of both protocols introduced in the original work.
Last updated:  2024-12-31
MicroNova: Folding-based arguments with efficient (on-chain) verification
Jiaxing Zhao, Srinath Setty, and Weidong Cui
We describe the design and implementation of MicroNova, a folding-based recursive argument for producing proofs of incremental computations of the form $y = F^{(\ell)}(x)$, where $F$ is a possibly non-deterministic computation (encoded using a constraint system such as R1CS), $x$ is the initial input, $y$ is the output, and $\ell > 0$. The proof of an $\ell$-step computation is produced step-by-step such that neither the proof size nor the time to verify it depends on $\ell$. The proof at the final iteration is then compressed, to achieve further succinctness in terms of proof size and verification time. Compared to prior folding-based arguments, a distinguishing aspect of MicroNova is the concrete efficiency of the verifier—even in a resource-constrained environment such as Ethereum’s blockchain. In particular, the compressed proof consists of $O(\log{N})$ group elements and it can be verified with $O(\log{N})$ group scalar multiplications and two pairing operations, where $N$ is the number of constraints for a single invocation of $F$. MicroNova requires a universal trusted setup and can employ any existing setup material created for the popular KZG univariate polynomial commitment scheme. Finally, we implement and experimentally evaluate MicroNova. We find that MicroNova’s proofs can be efficiently verified on the Ethereum blockchain with $\approx$2.2M gas. Furthermore, MicroNova’s prover incurs minimal overheads atop its baseline Nova’s prover.
Last updated:  2025-01-06
Asymptotically Optimal Adaptive Asynchronous Common Coin and DKG with Silent Setup
Hanwen Feng and Qiang Tang
This paper presents the first optimally resilient, adaptively secure asynchronous common coin protocol with $O(\lambda n^2)$ communication complexity and $O(1)$ rounds, requiring only a public silent setup. Our protocol immediately implies a sequence of quadratic-communication, constant-round asynchronous Byzantine agreement protocols and asynchronous distributed key generation with a silent setup. Along the way, we formulate a new primitive called asynchronous subset alignment and introduce a simple framework to reason about specific composition security suitable for asynchronous common coin, which may be of independent interest.
Last updated:  2024-12-31
NMFT: A Copyrighted Data Trading Protocol based on NFT and AI-powered Merkle Feature Tree
Dongming Zhang, Lei Xie, and Yu Tao
With the rapid growth of blockchain-based Non-Fungible Tokens (NFTs), data trading has evolved to incorporate NFTs for ownership verification. However, the NFT ecosystem faces significant challenges in copyright protection, particularly when malicious buyers slightly modify the purchased data and re-mint it as a new NFT, infringing upon the original owner's rights. In this paper, we propose a copyright-preserving data trading protocol to address this challenge. First, we introduce the Merkle Feature Tree (MFT), an enhanced version of the traditional Merkle Tree that incorporates an AI-powered feature layer above the data layer. Second, we design a copyright challenge phase during the trading process, which recognizes the data owner with highly similar feature vectors and an earlier on-chain timestamp as the legitimate owner. Furthermore, to achieve efficient and low-gas feature vector similarity computation on the blockchain, we employ Locality-Sensitive Hashing (LSH) to compress high-dimensional floating-point feature vectors into single uint256 integers. Experiments with multiple image and text feature extraction models demonstrate that LSH effectively preserves the similarity between highly similar feature vectors before and after compression, thus supporting similarity-based copyright challenges. Experimental results on the Ethereum Sepolia testnet demonstrate NMFT's scalability, with sublinear growth in gas consumption while maintaining stable latency.
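The LSH compression step described above lends itself to a short illustration. Below is a minimal, hypothetical Python sketch (not the authors' implementation) of random-hyperplane LSH that packs the signs of 256 projections of a floating-point feature vector into one 256-bit integer, i.e., a uint256-sized digest; nearby feature vectors then differ in few bits, so similarity can be checked cheaply via Hamming distance.

```python
import numpy as np

def lsh_sign_hash(vec: np.ndarray, planes: np.ndarray) -> int:
    """Compress a float feature vector into a 256-bit integer via
    random-hyperplane (SimHash-style) LSH: one bit per hyperplane,
    set to 1 iff the vector lies on the plane's positive side."""
    bits = (planes @ vec) >= 0  # shape (256,), dtype bool
    return int("".join("1" if b else "0" for b in bits), 2)

rng = np.random.default_rng(0)
planes = rng.standard_normal((256, 512))  # 256 hyperplanes for a 512-dim feature space

v = rng.standard_normal(512)
w = v + 0.01 * rng.standard_normal(512)   # a near-duplicate feature vector

hv, hw = lsh_sign_hash(v, planes), lsh_sign_hash(w, planes)
# Similar vectors should agree on most bits, i.e., small Hamming distance.
print(bin(hv ^ hw).count("1"))  # typically close to 0 out of 256
```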
Last updated:  2025-01-08
Efficient Multi-party Private Set Union Resistant to Maximum Collusion Attacks
Qiang Liu and Joon-Woo Lee
Multi-party Private Set Union (MPSU) enables multiple participants to jointly compute the union of their private sets without leaking any additional information beyond the resulting union. Liu et al. (ASIACRYPT 2023) proposed the first scalable MPSU protocol fully based on symmetric key encryption (SKE), which designates one participant as the "leader" responsible for obtaining the final union. However, the protocol assumes that the leader does not collude with other participants, which weakens its practicality. In this work, we design a scalable MPSU protocol, $\Pi_\text{MPSU}^\text{one-leader}$, which tolerates maximum collusion. The protocol relies primarily on SKE supplemented with additive homomorphic encryption (AHE), with the designated leader obtaining the union result. Furthermore, to address the issue of fairness in scenarios where obtaining the result early provides an advantage, we extend $\Pi_\text{MPSU}^\text{one-leader}$ and propose a protocol that allows all participants to receive the union result simultaneously, called $\Pi_\text{MPSU}^\text{leaderless}$. We implement our proposed schemes and conduct a comprehensive comparison against state-of-the-art solutions. The results show that, for input sizes of $2^{12}$ at a comparable security level, $\Pi_\text{MPSU}^{\text{one-leader}}$ achieves a $663\times$ speedup in online runtime compared to the state-of-the-art. Furthermore, it remains $22$ times faster than the half-collusion-tolerant protocol.
Last updated:  2025-01-03
A Note on the Minimality of One-Way Functions in Post-Quantum Cryptography
Sam Buxbaum and Mohammad Mahmoody
In classical cryptography, one-way functions (OWFs) play a central role as the minimal primitive that (almost) all primitives imply. The situation is more complicated in quantum cryptography, in which honest parties and adversaries can use quantum computation and communication, and it is known that analogues of OWFs in the quantum setting might not be minimal. In this work we ask whether OWFs are minimal for the intermediate setting of post-quantum cryptography, in which the protocols are classical but must resist quantum adversaries. We show that for a wide range of natural settings, if a primitive Q implies OWFs, then so does its (uniformly or non-uniformly secure) post-quantum analogue. In particular, we show that if a primitive Q implies any other primitive P that has a 2-message security game (e.g., OWFs) through a black-box classical security reduction R, then one can always (efficiently) turn any polynomial-size quantum adversary breaking P into a polynomial-size quantum adversary breaking Q. Note that this result holds even if the implementation of P using that of Q is arbitrarily non-black-box. We also prove extensions of this result for the case where the reduction R expects its oracle adversary to be deterministic, whenever either of the following conditions holds: (1) the adversary needs to win the security game of Q only with non-negligible probability (e.g., Q is collision-resistant hashing), or (2) either of P and Q has a "falsifiable" security game (this is the case when P is OWFs). Our work leaves open answering our main question when Q implies OWFs through a non-black-box security reduction, or when P uses a more complicated security game than a two-message one.
Last updated:  2024-12-30
Secure Vault scheme in the Cloud Operating Model
Rishiraj Bhattacharyya, Avradip Mandal, and Meghna Sengupta
The rising demand for data privacy in cloud-based environments has led to the development of advanced mechanisms for securely managing sensitive information. A prominent solution in this domain is the "Data Privacy Vault," a concept that is provided commercially by companies such as Hashicorp, Basis Theory, Skyflow Inc., VGS, Evervault, Protegrity, Anonomatic, and BoxyHQ. However, no existing work has rigorously defined the security notions required for a Data Privacy Vault or proven them within a formal framework, which is the focus of this paper. Among other uses, data privacy vaults are increasingly being used as storage for LLM training data, which necessitates a scheme that enables users to securely store sensitive information in the cloud while allowing controlled access for performing analytics on specific non-sensitive attributes without exposing sensitive data. Conventional solutions involve users generating encryption keys to safeguard their data, but these solutions are not deterministic and are therefore unsuited for the LLM setting. To address this, we propose a novel framework that is deterministic as well as semantically secure. Our scheme operates in the Cloud Operating Model, where the server is trusted but stateless and the storage is outsourced. We provide a formal definition and a concrete instantiation of this data privacy vault scheme. We introduce a novel tokenization algorithm that serves as the core mechanism for protecting sensitive data within the vault. Our approach not only generates secure, unpredictable tokens for sensitive data but also securely stores sensitive data while enabling controlled data retrieval based on predefined access levels. Our work fills a significant gap in the existing literature by providing a formalized framework for the data privacy vault, complete with security proofs and a practical construction, not only enhancing the understanding of vault schemes but also offering a viable solution for secure data management in the era of cloud computing.
Last updated:  2024-12-30
Exploring Large Integer Multiplication for Cryptography Targeting In-Memory Computing
Florian Krieger, Florian Hirner, and Sujoy Sinha Roy
Emerging cryptographic systems such as Fully Homomorphic Encryption (FHE) and Zero-Knowledge Proofs (ZKP) are computation- and data-intensive. FHE and ZKP implementations in software and hardware largely rely on the von Neumann architecture, where a significant amount of energy is lost on data movements. A promising computing paradigm is computing in memory (CIM), which enables computations to occur directly within memory, thereby reducing data movements and energy consumption. However, efficiently performing large integer multiplications - critical in FHE and ZKP - is an open question, as existing CIM methods are limited to small operand sizes. In this work, we address this question by exploring advanced algorithmic approaches for large integer multiplication, identifying the Karatsuba algorithm as the most effective for CIM applications. Thereafter, we design the first Karatsuba multiplier for resistive CIM crossbars. Our multiplier uses a three-stage pipeline to enhance throughput and, additionally, balances memory endurance with efficient array sizes. Compared to existing CIM multiplication methods, when scaled up to the bit widths required in ZKP and FHE, our design achieves improvements of up to 916x in throughput and 281x in area-time product.
Last updated:  2024-12-30
PQConnect: Automated Post-Quantum End-to-End Tunnels
Daniel J. Bernstein, Tanja Lange, Jonathan Levin, and Bo-Yin Yang
This paper introduces PQConnect, a post-quantum end-to-end tunneling protocol that automatically protects all packets between clients that have installed PQConnect and servers that have installed and configured PQConnect. Like VPNs, PQConnect does not require any changes to higher-level protocols and application software. PQConnect adds cryptographic protection to unencrypted applications, works in concert with existing pre-quantum applications to add post-quantum protection, and adds a second application-independent layer of defense to any applications that have begun to incorporate application-specific post-quantum protection. Unlike VPNs, PQConnect automatically creates end-to-end tunnels to any number of servers using automatic peer discovery, with no need for the client administrator to configure per-server information. Each server carries out a client-independent configuration step to publish an announcement that the server's name accepts PQConnect connections. Any PQConnect client connecting to that name efficiently finds this announcement, automatically establishes a post-quantum point-to-point IP tunnel to the server, and routes traffic for that name through that tunnel. The foundation of security in PQConnect is the server's long-term public key used to encrypt and authenticate all PQConnect packets. PQConnect makes a conservative choice of post-quantum KEM for this public key. PQConnect also uses a smaller post-quantum KEM for forward secrecy, and elliptic curves to ensure pre-quantum security even in case of security failures in KEM design or KEM software. Security of the handshake component of PQConnect has been symbolically proven using Tamarin.
Last updated:  2024-12-29
Encrypted Multi-map that Hides Query, Access, and Volume Patterns
Alexandra Boldyreva and Tianxin Tang
We present an encrypted multi-map, a fundamental data structure underlying searchable encryption/structured encryption. Our protocol supports updates and is designed for applications demanding very strong data security. Not only does it hide the information about queries and data, but also the query, access, and volume patterns. Our protocol utilizes a position-based ORAM and an encrypted dictionary. We provide two instantiations of the protocol, along with their operation-type-revealing variants, all using PathORAM but with different encrypted dictionary instantiations (AVL tree or BSkiplist). Their efficiency has been evaluated through both asymptotic and concrete complexity analysis, outperforming prior work while achieving the same level of strong security. We have implemented our instantiations and evaluated their performance on two real-world email databases (Enron and Lucene). We also discuss the strengths and limitations of our construction, including its resizability, and highlight that optimized solutions, even with heavy network utilization, may become practical as network speed improves.
Last updated:  2024-12-29
Breaking the Shadow: Key Recovery Attack on Full-Round Shadow Block Ciphers with Minimal Data
Anda Che and Shahram Rasoolzadeh
Shadow is a family of lightweight block ciphers introduced by Guo, Li, and Liu in 2021, with Shadow-32 having a 32-bit block size and a 64-bit key, and Shadow-64 having a 64-bit block size and a 128-bit key. Both variants use a generalized Feistel network with four branches, incorporating the AND-Rotation-XOR operation similar to the Simon family for their bridging function. This paper reveals that the security claims of the Shadow family are not as strong as suggested. We present a key recovery attack that can retrieve the sequence of round keys used for encryption with only two known plaintext/ciphertext pairs, requiring time and memory complexity of $2^{43.23}$ encryptions and $2^{21.62}$ blocks of memory for Shadow-32, and complexity of $2^{81.32}$ encryptions and $2^{40.66}$ blocks of memory for Shadow-64. Notably, this attack is independent of the number of rounds and the bridging function employed. Furthermore, we critically evaluate one of the recent cryptanalyses of Shadow ciphers and identify significant flaws in the proposed key recovery attacks. In particular, we demonstrate that the distinguisher used in the impossible differential attacks by Liu et al. is ineffective for key recovery, despite their higher claimed complexities compared to ours.
Last updated:  2024-12-28
Computing the Hermite Normal Form: A Survey
Leon Damer
The Hermite Normal Form (HNF) of a matrix is an analogue of the echelon form over the integers. Any integer matrix can be transformed into its unique HNF. A common obstacle in computing the HNF is the extensive blow-up of intermediate values. As a first approach to this problem, we discuss the Modulo Determinant Algorithm. It keeps the entries bounded by $d$, the determinant of the lattice, and has a time complexity of $\mathcal{O}(n^3\log^2 d)$, where $n$ is the dimension of the matrix. Although this algorithm is very useful if the determinant is small, in the general case the entries still become extremely large. Secondly, we study the Linear Space Algorithm. It has a time complexity of $\mathcal{O}(n^5\mathrm{polylog}(M, n))$, where $M$ denotes the largest absolute value of the input matrix. This is as fast as the best previously known algorithms, but in contrast, it assures space complexity linear in the input size, i.e. $\mathcal{O}(n^2\log M)$. As the last algorithm to compute the HNF, we analyze the Heuristic Algorithm, which is based on the first two algorithms. It achieves a much faster runtime in practice, yielding a heuristic runtime of $\mathcal{O}(n^4\mathrm{polylog}(M, n))$, while keeping the linear space complexity. Besides some performance speed-ups, the Linear Space Algorithm and the Heuristic Algorithm are precisely the algorithms implemented by SageMath.
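As a concrete reference point, SymPy ships an HNF routine that also exposes a modulo-determinant mode via its D parameter; the snippet below (a minimal illustration assuming SymPy >= 1.7, not the survey's own code) shows both the generic computation and the determinant-bounded variant discussed above.

```python
from sympy import Matrix
from sympy.matrices.normalforms import hermite_normal_form

A = Matrix([[2, 3, 5],
            [4, 7, 9],
            [1, 1, 8]])

# Generic HNF computation (intermediate entries may blow up in general).
H = hermite_normal_form(A)
print(H)

# Modulo-determinant variant: passing a multiple D of the determinant
# lets SymPy work modulo D, keeping intermediate entries bounded,
# mirroring the first algorithm discussed in the survey.
D = abs(A.det())  # here det(A) = 10
H_mod = hermite_normal_form(A, D=D, check_rank=True)
print(H == H_mod)  # both routes agree on the unique HNF
```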
Last updated:  2024-12-27
An Embedded Domain-Specific Language for Using One-Hot Vectors and Binary Matrices in Secure Computation Protocols
Andrei Lapets
The use of secure computation protocols within production software systems and applications is complicated by the fact that such protocols sometimes rely upon -- or are most compatible with -- unusual or restricted models of computation. We employ the features of a contemporary and widely used programming language to create an embedded domain-specific language for working with user-defined functions as binary matrices that operate on one-hot vectors. At least when working with small finite domains, this allows programmers to overcome the restrictions of more simple secure computation protocols that support only linear operations (such as addition and scalar multiplication) on private inputs. Notably, programmers are able to define their own input and output domains, to use all available host language features and libraries to define functions that operate on these domains, and to translate inputs, outputs, and functions between their usual host language representations and their one-hot vector or binary matrix forms. Furthermore, these features compose in a straightforward way with simple secure computation libraries available for the host language.
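A minimal Python sketch of the core idea, with a hypothetical 4-element domain and function (the names are illustrative, not the library's actual API): a user-defined function becomes a binary matrix whose columns are one-hot encodings of its outputs, so applying the function is a linear map that a protocol supporting only linear operations can evaluate.

```python
import numpy as np

# Encode a user-defined function f over a small finite domain as a
# binary matrix M acting on one-hot vectors: column j of M is the
# one-hot encoding of f(domain[j]).

domain = [0, 1, 2, 3]        # hypothetical 4-element domain
f = lambda x: (x * 3) % 4    # any host-language function over the domain

def one_hot(x, dom):
    v = np.zeros(len(dom), dtype=int)
    v[dom.index(x)] = 1
    return v

M = np.column_stack([one_hot(f(x), domain) for x in domain])

# Applying f is now a matrix-vector product, which is exactly the kind
# of linear operation that additively homomorphic / linear secure
# computation protocols can evaluate on an encrypted one-hot input.
x = 2
assert (M @ one_hot(x, domain)).argmax() == f(x)
```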
Last updated:  2025-01-13
Post-Quantum Privacy for Traceable Receipt-Free Encryption
Paola de Perthuis and Thomas Peters
Traceable Receipt-free Encryption (TREnc) has recently been introduced as a verifiable public-key encryption primitive endowed with a unique security model. In a nutshell, TREnc allows randomizing ciphertexts in transit in order to remove any subliminal information up to a public trace that ensures the non-malleability of the underlying plaintext. A remarkable property of TREnc is the indistinguishability of the randomization of chosen ciphertexts against traceable chosen-ciphertext attacks (TCCA). The main application lies in voting systems by allowing voters to encrypt their votes, tracing whether a published ballot takes their choices into account, and preventing them from proving how they voted. While being a very promising primitive, the few existing TREnc mechanisms solely rely on discrete-logarithm-related assumptions, making them vulnerable to the well-known record-now/decrypt-later attack in anticipation of quantum computers. We address this limitation by building the first TREnc whose privacy withstands the advent of quantum adversaries in the future. To design our construction, we first generalize the original TREnc primitive, which is too restrictive to be easily compatible with built-in lattice-based semantically-secure encryption. Our more flexible model keeps all the ingredients generically implying receipt-free voting. Our instantiation relies on Ring Learning With Errors (RLWE) with pairing-based statistical zero-knowledge simulation-sound proofs from Groth-Sahai, and further enjoys a public-coin common reference string, removing the need for a trusted setup.
Last updated:  2024-12-30
How To Think About End-To-End Encryption and AI: Training, Processing, Disclosure, and Consent
Mallory Knodel, Andrés Fábrega, Daniella Ferrari, Jacob Leiken, Betty Li Hou, Derek Yen, Sam de Alfaro, Kyunghyun Cho, and Sunoo Park
End-to-end encryption (E2EE) has become the gold standard for securing communications, bringing strong confidentiality and privacy guarantees to billions of users worldwide. However, the current push towards widespread integration of artificial intelligence (AI) models, including in E2EE systems, raises some serious security concerns. This work performs a critical examination of the (in)compatibility of AI models and E2EE applications. We explore this on two fronts: (1) the integration of AI “assistants” within E2EE applications, and (2) the use of E2EE data for training AI models. We analyze the potential security implications of each, and identify conflicts with the security guarantees of E2EE. Then, we analyze legal implications of integrating AI models in E2EE applications, given how AI integration can undermine the confidentiality that E2EE promises. Finally, we offer a list of detailed recommendations based on our technical and legal analyses, including: technical design choices that must be prioritized to uphold E2EE security; how service providers must accurately represent E2EE security; and best practices for the default behavior of AI features and for requesting user consent. We hope this paper catalyzes an informed conversation on the tensions that arise between the brisk deployment of AI and the security offered by E2EE, and guides the responsible development of new AI features.
Last updated:  2024-12-27
Definition of End-to-end Encryption
Mallory Knodel, Sofía Celi, Olaf Kolkman, and Gurshabad Grover
This document provides a definition of end-to-end encryption (E2EE). End-to-end encryption is an application of cryptographic mechanisms to provide security and privacy to communication between endpoints. Such communication can include messages, email, video, audio, and other forms of media. E2EE provides security and privacy through confidentiality, integrity, authenticity and forward secrecy for communication amongst people.
Last updated:  2024-12-27
Zero Knowledge Memory-Checking Techniques for Stacks and Queues
Alexander Frolov
There are a variety of techniques for implementing read/write memory inside of zero-knowledge proofs and validating consistency of memory accesses. These techniques are generally implemented with the goal of realizing a RAM or ROM. In this paper, we present memory techniques for more specialized data structures: queues and stacks. We first demonstrate a technique for implementing queues in arithmetic circuits that requires 3 multiplication gates and 1 advice value per read and 2 multiplication gates per write. This is based on using Horner's Rule to evaluate 2 polynomials at random points and checking that the values read from the queue are equal, as vectors, to the values written to the queue. Next, we present a stack scheme based on an optimized version of the RAM scheme of Yang and Heath that requires 5 multiplication gates and 4 advice values per read and 2 multiplication gates per write. This optimizes the RAM scheme by observing that reads and writes to a stack are already "paired", which avoids the need to insert dummy operations for each access, as a RAM requires. We also introduce a different notion of "multiplexing" or "operation privacy" that is better suited to the use case of stacks and queues. All of the techniques we provide are based on evaluating polynomials at random points and using randomly evaluated polynomials as universal hash functions to check set/vector equality.
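The queue check described above reduces to a standard polynomial-identity test. The following Python sketch (an illustration under our own naming, not the paper's circuit) shows the plaintext version: writes and reads are interpreted as coefficient vectors of two polynomials, evaluated via Horner's rule at a random field point; distinct vectors agree at that point with probability at most deg/|field|.

```python
import secrets

P = 2**61 - 1  # a Mersenne prime field; any large prime works

def horner_eval(coeffs, r, p=P):
    """Evaluate sum(coeffs[i] * r^i) mod p via Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * r + c) % p
    return acc

writes = [5, 17, 42, 9]   # values pushed into the queue, in order
reads  = [5, 17, 42, 9]   # values popped from the queue, in order

# Ordered (vector) equality check: interpret both sequences as
# polynomial coefficients and compare evaluations at a random point.
# A mismatch in any position survives with probability <= deg/P.
r = secrets.randbelow(P)
assert horner_eval(writes, r) == horner_eval(reads, r)
```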
Last updated:  2024-12-27
Fully Hybrid TLSv1.3 in WolfSSL on Cortex-M4
Mila Anastasova, Reza Azarderakhsh, and Mehran Mozaffari Kermani
To provide safe communication across an unprotected medium such as the internet, network protocols are being established. These protocols employ public key techniques to perform key exchange and authentication. Transport Layer Security (TLS) is a widely used network protocol that enables secure communication between a server and a client. TLS is employed in billions of transactions per second. Contemporary protocols depend on traditional methods that utilize the computational complexity of factorization or (elliptic curve) logarithm mathematics problems. The ongoing advancement in the processing power of classical computers requires a corresponding increase in the security level of the underlying cryptographic algorithms. This study focuses on the analysis of Curve448 and the Edwards curve Ed448, renowned for their superior security features that offer a 224-bit level of security as part of the TLSv1.3 protocol. The exponential advancement of quantum computers, however, presents a substantial threat to secure network communication that depends on classical crypto schemes, irrespective of their degree of security. Quantum computers have the capability to resolve these challenges within a feasible timeframe. In order to successfully transition to post-quantum secure network protocols, it is imperative to concurrently deploy both classical and post-quantum algorithms. This is done to fulfill the requirements of both enterprises and governments, while also instilling more assurance in the reliability of the post-quantum systems. This paper presents a detailed hybrid implementation architecture of the TLSv1.3 network protocol. We showcase the first deployment of Curve448 and CRYSTALS-Kyber for key exchange, and of Ed448 and CRYSTALS-Dilithium for verifying the authenticity of entities and for the X.509 Public Key Infrastructure (PKI). We rely upon the widely used OpenSSL library and the wolfSSL library for embedded devices to provide our results for server and client applications.
Last updated:  2024-12-27
ClusterGuard: Secure Clustered Aggregation for Federated Learning with Robustness
Yulin Zhao, Zhiguo Wan, and Zhangshuang Guan
Federated Learning (FL) enables collaborative model training while preserving data privacy by avoiding the sharing of raw data. However, in large-scale FL systems, efficient secure aggregation and dropout handling remain critical challenges. Existing state-of-the-art methods, such as those proposed by Liu et al. (UAI'22) and Li et al. (ASIACRYPT'23), suffer from prohibitive communication overhead, implementation complexity, and vulnerability to poisoning attacks. Alternative approaches that utilize partially connected graph structures (resembling client grouping) to reduce communication costs, such as Bell et al. (CCS'20) and ACORN (USENIX Sec'23), face the risk of adversarial manipulation during the graph construction process. To address these issues, we propose ClusterGuard, a secure clustered aggregation scheme for federated learning. ClusterGuard leverages Verifiable Random Functions (VRF) to ensure fair and transparent cluster selection and employs a lightweight key-homomorphic masking mechanism, combined with efficient dropout handling, to achieve secure clustered aggregation. Furthermore, ClusterGuard incorporates a dual filtering mechanism based on cosine similarity and norm to effectively detect and mitigate poisoning attacks. Extensive experiments on standard datasets demonstrate that ClusterGuard achieves over 2x efficiency improvement compared to advanced secure aggregation methods. Even with 20% of clients being malicious, the trained model maintains accuracy comparable to the original model, outperforming state-of-the-art robustness solutions. ClusterGuard provides a more efficient, secure, and robust solution for practical federated learning.
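As an illustration of the dual filtering idea, here is a hypothetical Python sketch based only on the abstract's description (thresholds and the reference direction are our own arbitrary choices, not ClusterGuard's actual parameters): an update is kept only if it passes both a cosine-similarity test against a reference update and a norm bound.

```python
import numpy as np

def dual_filter(updates, ref, cos_thresh=0.0, norm_factor=2.0):
    """Keep a client update only if (1) its cosine similarity to a
    reference update (here, the coordinate-wise median) is above a
    threshold, and (2) its norm is at most norm_factor times the
    median update norm."""
    med_norm = np.median([np.linalg.norm(u) for u in updates])
    kept = []
    for u in updates:
        cos = u @ ref / (np.linalg.norm(u) * np.linalg.norm(ref) + 1e-12)
        if cos >= cos_thresh and np.linalg.norm(u) <= norm_factor * med_norm:
            kept.append(u)
    return kept

rng = np.random.default_rng(1)
honest = [rng.standard_normal(100) * 0.1 + 1.0 for _ in range(8)]
poisoned = [-50.0 * np.ones(100)]  # scaled, sign-flipped malicious update
ref = np.median(np.stack(honest + poisoned), axis=0)
print(len(dual_filter(honest + poisoned, ref)))  # 8: the poisoned update is filtered out
```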
Last updated:  2024-12-26
Generalized Cryptanalysis of Cubic Pell RSA
Hao Kang and Mengce Zheng
The RSA (Rivest-Shamir-Adleman) cryptosystem is a fundamental algorithm of public key cryptography and is widely used across various information domains. For an RSA modulus represented as $N = pq$, with its factorization remaining unknown, security vulnerabilities arise when attackers exploit the key equation $ed-k(p-1)(q-1)=1$. To enhance the security, Murru and Saettone introduced cubic Pell RSA --- a variant of RSA based on the cubic Pell equation, where the key equation becomes $ed-k(p^2+p+1)(q^2+q+1)=1$. In this paper, we further investigate the security implications surrounding the generalized key equation $eu-(p^2+p+1)(q^2+q+1)v=w$. We present a novel attack strategy aimed at recovering the prime factors $p$ and $q$ under specific conditions satisfied by $u$, $v$, and $w$. Our generalized attack employs lattice-based Coppersmith's techniques and extends several previous attack scenarios, thus deepening the understanding of mathematical cryptanalysis.
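A toy numeric example may help fix the notation of the cubic Pell key equation; this is a minimal sketch with artificially small primes (it requires Python 3.8+ for the modular-inverse form of pow).

```python
from math import gcd

# Toy illustration of the cubic Pell RSA key equation
# e*d - k*(p^2+p+1)*(q^2+q+1) = 1, with tiny primes for readability.
p, q = 11, 17
phi = (p**2 + p + 1) * (q**2 + q + 1)  # 133 * 307 = 40831

e = 5
assert gcd(e, phi) == 1
d = pow(e, -1, phi)            # e*d ≡ 1 (mod phi)
k = (e * d - 1) // phi
assert e * d - k * phi == 1    # the key equation the attacks exploit
print(d, k)
```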
Last updated:  2024-12-26
Improved Lattice-Based Attack on Mersenne Low Hamming Ratio Search Problem
Mengce Zheng and Wei Yan
This paper investigates the Mersenne number-based $\mathsf{AJPS}$ cryptosystem, with a particular focus on its associated hard problem. Specifically, we aim to enhance the existing lattice-based attack on the Mersenne low Hamming ratio search problem. Unlike the previous approach of directly employing a lattice reduction algorithm, we apply the lattice-based method to solving polynomial equations derived from the above problem. We extend the search range for vulnerabilities in weak keys and increase the success probability of the key recovery attack. To validate the efficacy and accuracy of our proposed improvements, we conduct numerical computer experiments. These experiments serve as a concrete validation of the practicality and effectiveness of our improved attack.
Last updated:  2024-12-26
Solving AES-SAT Using Side-Channel Hints: A Practical Assessment
Elena Dubrova
Side-channel attacks exploit information leaked through non-primary channels, such as power consumption, electromagnetic emissions, or timing, to extract sensitive data from cryptographic devices. Over the past three decades, side-channel analysis has evolved into a mature research field with well-established methodologies for analyzing standard cryptographic algorithms like the Advanced Encryption Standard (AES). However, the integration of side-channel analysis with formal methods remains relatively unexplored. In this paper, we present a hybrid attack on AES that combines side-channel analysis with SAT solving. We model AES as a SAT problem and leverage hints about the input and output values of the S-boxes, extracted via profiled deep learning-based power analysis, to solve it. Experimental results on an ATXmega128D4 MCU implementation of AES-128 demonstrate that the SAT-assisted approach consistently recovers the full encryption key from a single trace, captured from devices different from those used for profiling, within one hour. In contrast, without SAT assistance, the success rate remains below 80% after 26 hours of key enumeration.
Last updated:  2024-12-26
Strongly Secure Universal Thresholdizer
Ehsan Ebrahimi and Anshu Yadav
A universal thresholdizer (UT), constructed from a threshold fully homomorphic encryption by Boneh et al. (CRYPTO 2018), is a general framework for universally thresholdizing many cryptographic schemes. However, their framework is insufficient to construct strongly secure threshold schemes, such as threshold signatures and threshold public-key encryption. In this paper, we strengthen the security definition for a universal thresholdizer and propose a scheme which satisfies our stronger security notion. Our UT scheme improves Boneh et al.'s construction at the level of threshold fully homomorphic encryption using a key-homomorphic pseudorandom function. We apply our strongly secure UT scheme to construct strongly secure threshold signatures and threshold public-key encryption.
Last updated:  2024-12-26
Report on evaluation of KpqC Round-2 candidates
Daniel J. Bernstein, Jolijn Cottaar, Emanuele Di Giandomenico, Kathrin Hövelmanns, Andreas Hülsing, Mikhail Kudinov, Tanja Lange, Mairon Mahzoun, Matthias Meijers, Alex Pellegrini, Alberto Ravagnani, Silvia Ritsch, Sven Schäge, Tianxin Tang, Monika Trimoska, Marc Vorstermans, and Fiona Johanna Weber
This report covers our analysis (security, proofs, efficiency) of the Round-2 candidates to the Korean post-quantum competition KpqC. Signature systems covered in the report are AIMer, HAETAE, MQ-Sign, and NCC-Sign; KEMs covered are NTRU+, Paloma, REDOG, and SMAUG-T.
Last updated:  2024-12-25
Blind Signatures from Proofs of Inequality
Michael Klooß and Michael Reichle
Blind signatures are an important primitive for privacy-preserving technologies. To date, highly efficient pairing-free constructions rely on the random oracle model and, additionally, a strong assumption, such as interactive assumptions or the algebraic group model. In contrast, for signatures we know many efficient constructions that rely on the random oracle model and standard assumptions. In this work, we develop techniques to close this gap. Compared to the most efficient pairing-free AGM-based blind signature by Crites et al. (CRYPTO 2023), our construction has a relative overhead of only $3\times$ and $2\times$ in terms of communication and signature size, respectively, and it is provable in the random oracle model under the DDH assumption. With one additional move and one additional $\mathbb{Z}_p$ element, we also achieve one-more strong unforgeability. Our construction is inspired by the recent works of Chairattana-Apirom, Tessaro, and Zhu (CRYPTO 2024) and of Klooß, Reichle, and Wagner (ASIACRYPT 2024), and we develop a tailored technique to circumvent the sources of inefficiency in their constructions. Concretely, we achieve signature and communication sizes of $192$ B and $608$ B, respectively.
Last updated:  2024-12-25
Tightly-Secure Blind Signatures in Pairing-Free Groups
Nicholas Brandt, Dennis Hofheinz, Michael Klooß, and Michael Reichle
We construct the first blind signature scheme that achieves all of the following properties simultaneously: - it is tightly secure under a standard (i.e., non-interactive, non-\(q\)-type) computational assumption, - it does not require pairings, - it does not rely on generic, non-black-box techniques (like generic NIZK proofs). The third property enables a reasonably efficient solution, and in fact signatures in our scheme comprise 10 group elements and 29 \(\mathbb{Z}_p\)-elements. Our scheme starts from a pairing-based non-blind signature scheme (Abe et al., JoC 2023), and uses recent techniques of Chairattana-Apirom, Tessaro, and Zhu (CRYPTO 2024) to replace the pairings used in this scheme with non-interactive zero-knowledge proofs in the random oracle model. This conversion is not generic or straightforward (also because the mentioned previous works have converted only significantly simpler signature schemes), and we are required to improve upon and innovate existing techniques in several places. As an interesting side note, and unlike previous works, our techniques only require a non-programmable random oracle, and our signature scheme achieves predicate blindness (which means that the user can prove statements about the signed message during the signing process).
Last updated:  2024-12-28
EQSIGN: Practical Digital Signatures from the Non-Abelian Hidden Subgroup Problem and Information Theoretic Equivocation
Samuel Lavery
We present a novel digital signature scheme grounded in non-commutative cryptography and implemented over a bilinear matrix group platform. At the core of our design is a unique equivocation function that obfuscates intermediate elements, effectively concealing outputs and minimizing observable information leakage. To the best of our knowledge, this is the first digital signature scheme to combine information-theoretic security with computational hardness, relying on a challenging instance of the Non-Abelian Hidden Subgroup Problem (NAHSP) and strengthened by practical guarantees. This dual-layered security approach ensures robustness against both classical and quantum adversaries while maintaining communication overheads competitive with RSA. Our work represents a significant advancement toward efficient, quantum-resilient digital signatures for real-world applications. This paper is an early pre-release intended to invite collaboration and feedback. The work is presented for research purposes only and is not intended for use in production systems.
Last updated:  2024-12-24
Succinct Partial Garbling from Groups and Applications
Yuval Ishai, Hanjun Li, and Huijia Lin
A garbling scheme transforms a program (e.g., circuit) $C$ into a garbled program $\hat{C}$, along with a pair of short keys $(k_{i,0},k_{i,1})$ for each input bit $x_i$, such that $(C,\hat{C}, \{k_{i,x_i}\})$ can be used to recover the output $z = C(x)$ while revealing nothing else about the input $x$. This can be naturally generalized to partial garbling, where part of the input is public, and a computation $z = C(x, y)$ is decomposed into a public part $C_{\text{pub}}(x)$, depending only on the public input $x$, and a private part $z = C_{\text{priv}}(C_{\text{pub}}(x), y)$ that also involves a private input $y$. A key challenge in garbling is to achieve succinctness, where the size of the garbled program may grow only with the security parameter and (possibly) the output length, but not with the size of $C$. Prior work achieved this strong notion of succinctness using heavy tools such as indistinguishability obfuscation (iO) or a combination of fully homomorphic encryption and attribute-based encryption. In this work, we introduce new succinct garbling schemes based on variants of standard group-based assumptions. Our approach, being different from prior methods, offers a promising pathway towards practical succinct garbling. Specifically, we construct: - A succinct partial garbling scheme for general circuits, where the garbled circuit size scales linearly with the private computation $|C_{\text{priv}}|$ and is independent of the public computation $|C_{\text{pub}}|$. This implies fully succinct conditional disclosure of secrets (CDS) protocols for circuits. - Succinct (fully hiding) garbling schemes for simple types of programs, including truth tables, bounded-length branching programs (capturing decision trees and DFAs as special cases) and degree-2 polynomials, where the garbled program size is independent of the program size. This implies succinct private simultaneous messages (PSM) protocols for the same programs. Our succinct partial garbling scheme can be based on a circular-security variant of the power-DDH assumption, which holds in the generic group model, or alternatively on the key-dependent message security of the Damgård-Jurik encryption. For bounded-depth circuits or the aforementioned simple programs, we avoid circular-security assumptions entirely. At the heart of our technical approach is a new computational flavor of algebraic homomorphic MAC (aHMAC), for which we obtain group-based constructions building on techniques from the literature on homomorphic secret sharing. Beyond succinct garbling, we demonstrate the utility of aHMAC by constructing constrained pseudorandom functions (CPRFs) for general constraint circuits from group-based assumptions. Previous CPRF constructions were limited to $\mathsf{NC}^1$ circuits or alternatively relied on lattices or iO.
Last updated:  2025-01-13
Advancements in Distributed RSA Key Generation: Enhanced Biprimality Tests
ChihYun Chuang, IHung Hsu, and TingFang Lee
RSA is widely used in modern cryptographic practice, with certain RSA-based protocols relying on the secrecy of $p$ and $q$. A common approach is to use secure multiparty computation to address the privacy concerns of $p$ and $q$. In the specific setting of distributed RSA modulus generation protocols, the biprimality test for Blum integers $N=pq$, where $p\equiv q\equiv 3 \pmod 4$ are two primes, proposed by Boneh and Franklin (2001) is the most commonly used. Over the past 20 years, the worst-case acceptance rate of this test has been consistently assumed to be $1/2$ under the condition $\gcd(pq,p+q-1)=1$. This paper demonstrates that the acceptance probability of the Boneh-Franklin test is at most $1/4$, rather than $1/2$, except in the specific case where $p = q = 3$. Notably, $1/4$ is shown to be the tightest upper bound. This result substantially improves the practical effectiveness of the Boneh-Franklin test: achieving the same level of soundness for the RSA modulus now requires only half the iterations previously considered necessary. Furthermore, we propose a generalized biprimality test based on the Lucas sequence. In the worst case, the acceptance rate of the proposed test is at most $1/4 + 1.25/(p_{\min} -3)$, where $p_{\min}$ is the smallest prime factor of $N$. To validate our approach, we implemented a variant of the Miller-Rabin test, the Boneh-Franklin test, and our proposed test, performing pairwise comparisons of their effectiveness. Simulations indicate that the proposed test is generally more efficient than the Boneh-Franklin test in detecting cases where $N$ is not an RSA modulus. Additionally, this test is applicable to generating RSA moduli for arbitrary odd primes $p,q$. A corresponding protocol is developed for this test, validated for resilience against semi-honest adversaries, and shown to be applicable to most known distributed RSA modulus generation protocols. After thoroughly analyzing and comparing well-known protocols for Blum integers, including Burkhardt et al.'s protocol (CCS 2023) and the Boneh-Franklin protocol, we find that our protocol is competitive for generating distributed RSA moduli.
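For orientation, here is a minimal single-party sketch of one Boneh-Franklin test iteration (in the distributed protocol the exponent is computed from secret shares of $p+q$; the parameters below are toy values chosen for readability, and the sketch is our illustration rather than the paper's protocol).

```python
import random
from sympy import jacobi_symbol

def boneh_franklin_round(N, p_plus_q):
    """One iteration of the Boneh-Franklin biprimality test for a
    candidate Blum-integer modulus N with p ≡ q ≡ 3 (mod 4):
    pick g with Jacobi symbol (g/N) = 1 and accept iff
    g^((N - p - q + 1)/4) ≡ ±1 (mod N)."""
    while True:
        g = random.randrange(2, N)
        if jacobi_symbol(g, N) == 1:
            break
    exp = (N - p_plus_q + 1) // 4  # equals phi(N)/4 when N = p*q
    return pow(g, exp, N) in (1, N - 1)

p, q = 11, 19  # both ≡ 3 (mod 4), and gcd(pq, p+q-1) = 1
N = p * q
print(all(boneh_franklin_round(N, p + q) for _ in range(20)))  # True: genuine moduli always pass
```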
Last updated:  2024-12-24
Perfectly Secure Fluid MPC with Abort and Linear Communication Complexity
Alexander Bienstock, Daniel Escudero, and Antigoni Polychroniadou
The \emph{Fluid} multiparty computation (MPC) model, introduced in (Choudhuri \emph{et al.} CRYPTO 2021), addresses dynamic scenarios where participants can join or leave computations between rounds. Communication complexity initially stood at $\Omega(n^2)$ elements per gate, where $n$ is the number of parties in a committee online at a time. This held for both statistical security (honest majority) and computational security (dishonest majority) in (Choudhuri \emph{et al.}~CRYPTO'21) and (Rachuri and Scholl, CRYPTO'22), respectively. The work of (Bienstock \emph{et al.}~CRYPTO'23) improved communication to $O(n)$ elements per gate. However, it is important to note that the perfectly secure setting with one-third corruptions per committee has only recently been addressed in the work of (David \emph{et al.}~CRYPTO'23). Notably, their contribution marked a significant advancement in the Fluid MPC literature by introducing guaranteed output delivery. However, this achievement comes at the cost of prohibitively expensive communication, which scales to $\Omega(n^9)$ elements per gate. In this work, we study the realm of perfectly secure Fluid MPC under one-third active corruptions. Our primary focus lies in proposing efficient protocols that embrace the concept of security with abort. Towards this, we design a protocol for perfectly secure Fluid MPC that requires only \emph{linear} communication of $O(n)$ elements per gate, matching the communication of the non-Fluid setting. Our results show that, as in the case of computational and statistical security, perfect security with abort for Fluid MPC comes "for free" (asymptotically, linear in $n$) with respect to traditional non-Fluid MPC, marking a substantial leap forward in large-scale dynamic computations such as Fluid MPC.
Last updated:  2024-12-24
Sneaking up the Ranks: Partial Key Exposure Attacks on Rank-Based Schemes
Giuseppe D'Alconzo, Andre Esser, Andrea Gangemi, and Carlo Sanna
A partial key exposure attack is a key recovery attack where an adversary obtains a priori partial knowledge of the secret key, e.g., through side-channel leakage. While for a long time post-quantum cryptosystems, unlike RSA, have been believed to be resistant to such attacks, recent results by Esser, May, Verbel, and Wen (CRYPTO ’22), and by Kirshanova and May (SCN ’22), have refuted this belief. In this work, we focus on partial key exposure attacks in the context of rank-metric-based schemes, particularly targeting the RYDE, MIRA, and MiRitH digital signatures schemes, which are active candidates in the NIST post-quantum cryptography standardization process. We demonstrate that, similar to the RSA case, the secret key in RYDE can be recovered from a constant fraction of its bits. Specifically, for NIST category I parameters, our attacks remain efficient even when less than 25% of the key material is leaked. Interestingly, our attacks lead to a natural improvement of the best generic attack on RYDE without partial knowledge, reducing security levels by up to 9 bits. For MIRA and MiRitH our attacks remain efficient as long as roughly 57%-60% of the secret key material is leaked. Additionally, we initiate the study of partial exposure of the witness in constructions following the popular MPCitH (MPC-in-the-Head) paradigm. We show a generic reduction from recovering RYDE and MIRA’s witness to the MinRank problem, which again leads to efficient key recovery from constant fractions of the secret witness in both cases.
Last updated:  2024-12-24
One Solves All: Exploring ChatGPT's Capabilities for Fully Automated Simple Power Analysis on Cryptosystems
Wenquan Zhou, An Wang, Yaoling Ding, Congming Wei, Jingqi Zhang, and Liehuang Zhu
Side-channel analysis is a powerful technique to extract secret data from cryptographic devices. However, this task heavily relies on experts and specialized tools, particularly in the case of simple power analysis (SPA). Meanwhile, ChatGPT, a leading example of large language models, has attracted great attention and been widely applied for assisting users with complex tasks. Despite this, ChatGPT’s capabilities for fully automated SPA, where prompts and traces are input only once, have yet to be systematically explored and improved. In this paper, we introduce a novel prompt template with three expert strategies and conduct a large-scale evaluation of ChatGPT’s capabilities for SPA. We establish a dataset comprising seven sets of real power traces from various implementations of public-key cryptosystems, including RSA, ECC, and Kyber, as well as eighteen sets of simulated power traces that illustrate typical SPA leakage patterns. The results indicate that ChatGPT cannot be directly used for SPA. However, by applying the expert strategies, we successfully recovered the private keys for all twenty-five traces, demonstrating that non-experts can use ChatGPT with our expert strategies to perform fully automated SPA.
Last updated:  2024-12-23
Weightwise Almost Perfectly Balanced Functions, Construction From A Permutation Group Action View
Deepak Kumar Dalai, Krishna Mallick, and Pierrick Méaux
The construction of Boolean functions with good cryptographic properties over subsets of vectors with fixed Hamming weight is significant for lightweight stream ciphers like FLIP. In this article, we propose a general method to construct a class of Weightwise Almost Perfectly Balanced (WAPB) Boolean functions using the action of a cyclic permutation group on $\mathbb{F}_2^n$. This class generalizes the Weightwise Perfectly Balanced (WPB) $2^m$-variable Boolean function construction by Liu and Mesnager to any $n$. We show how to bound the nonlinearity and weightwise nonlinearities of functions from this construction. Additionally, we explore two significant permutation groups, $\langle \psi \rangle$ and $\langle \sigma \rangle$, where $\psi$ is a binary-cycle permutation and $\sigma$ is a rotation. We theoretically analyze the cryptographic properties of the WAPB functions derived from these permutations and experimentally evaluate their nonlinearity parameters for $n$ between 4 and 10.
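To make the weightwise notion concrete, the sketch below (our own illustration, not the authors' construction) counts the ones of a Boolean function on each fixed-Hamming-weight slice $E_{k,n}$; a WPB/WAPB function is (almost) balanced on every slice, whereas the majority function shown here is constant, hence maximally unbalanced, on each slice.

```python
from itertools import combinations

def weightwise_counts(f, n):
    """For a Boolean function f: {0,1}^n -> {0,1}, count the ones of f
    restricted to each Hamming-weight slice E_{k,n}."""
    counts = []
    for k in range(n + 1):
        ones = size = 0
        for pos in combinations(range(n), k):  # positions of the 1-bits
            x = [1 if i in pos else 0 for i in range(n)]
            ones += f(x)
            size += 1
        counts.append((k, ones, size))
    return counts

# The majority function is constant on every slice, the opposite of WAPB.
maj = lambda x: int(sum(x) > len(x) // 2)
for k, ones, size in weightwise_counts(maj, 4):
    print(f"weight {k}: {ones}/{size} ones")
```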
Last updated:  2024-12-23
Bypassing the characteristic bound in logUp
Liam Eagen and Ulrich Haböck
In this informal note, we describe how to bypass the characteristic bound in logUp [eprint 2022/1530] by abstracting the notion of (pole) multiplicity. The method applies as well to the GKR-variant from Papini and Haböck [eprint 2023/1284], and it moreover unlocks fractional decomposition lookups over binary fields.
Last updated:  2024-12-23
COCO: Coconuts and Oblivious Computations for Orthogonal Authentication
Yamya Reiki
Authentication often bridges real-world individuals and their virtual public identities, like usernames, user IDs and e-mails, exposing vulnerabilities that threaten user privacy. This research introduces COCO (Coconuts and Oblivious Computations for Orthogonal Authentication), a framework that segregates roles among Verifiers, Authenticators, and Clients to achieve privacy-preserving authentication. COCO eliminates the need for Authenticators to directly access virtual public identifiers or real-world identifiers for authentication. Instead, the framework leverages Oblivious Pseudorandom Functions (OPRFs) and an extended Coconut Credential Scheme to ensure privacy by introducing separate, unlinkable, orthogonal authentication identifiers and a full-consensus mechanism to perform zero-knowledge authentications whose proofs are unlinkable across multiple sessions. The authentication process becomes self-contained, preventing definitive reverse tracing of virtual public identifiers to real-world identifiers.
Last updated:  2024-12-24
Partial Exposure Attacks Against a Family of RSA-like Cryptosystems
George Teseleanu
An RSA generalization using complex integers was introduced by Elkamchouchi, Elshenawy, and Shaban in 2002. This scheme was further extended by Cotan and Teșeleanu to Galois fields of order $n \geq 1$. In this generalized framework, the key equation is $ed - k (p^n-1)(q^n-1) = 1$, where $p$ and $q$ are prime numbers. Note that the classical RSA and the Elkamchouchi \emph{et al.} key equations are special cases, namely $n=1$ and $n=2$. In addition to introducing this generic family, Cotan and Teșeleanu describe a continued fractions attack capable of recovering the secret key $d$ if $d < N^{0.25n}$. This bound was later improved by Teșeleanu using a lattice-based method. In this paper, we explore other lattice attacks that could lead to factoring the modulus $N = pq$. Namely, we propose a series of partial exposure attacks that can aid an adversary in breaking this family of cryptosystems if certain conditions hold.
Last updated:  2024-12-23
(Deep) Learning about Elliptic Curve Cryptography
Diana Maimut, Alexandru Cristian Matei, and George Teseleanu
Motivated by the interest in elliptic curves from both a theoretical (algebraic geometry) and an applied (cryptography) perspective, we conduct a preliminary study of their underlying mathematical structure. Hence, this paper mainly focuses on investigating artificial intelligence techniques to enhance the efficiency of Schoof's algorithm for point counting across various elliptic curve distributions, achieving varying levels of success.
Last updated:  2024-12-23
The Number of the Beast: Reducing Additions in Fast Matrix Multiplication Algorithms for Dimensions up to 666
Erik Mårtensson and Paul Stankovski Wagner
While a naive algorithm for multiplying two 2 × 2 matrices requires eight multiplications and four additions, Strassen showed how to compute the same matrix product using seven multiplications and 18 additions. Winograd reduced the number of additions to 15, which was assumed to be optimal. However, by introducing a change of basis, Karstadt and Schwartz showed how to lower the number of additions to 12, which they further showed to be optimal within this generalized Karstadt-Schwartz (KS) framework. Since the cost of the change of basis is smaller than one addition (per each recursive level), it is disregarded in this cost metric. In this work we present improved methods for classical optimization of the number of additions in Strassen-type matrix multiplication schemes, but for larger matrix sizes, and without including any change of basis. We find specific examples where our methods beat the best instances found within the KS framework, the most impressive one being Laderman’s algorithm for multiplying 3 × 3 matrices, which we reduce from the naive 98 additions to 62, compared to 74 in the KS framework. We indicate that our approach performs better compared to previous work within the KS framework, as the matrix dimensions increase. We additionally apply our algorithms to another reference set of algorithms due to Moosbauer and Kauers for which we do not have results in the KS framework as a comparison. For parameters (n, m, k), when multiplying an (n × m) matrix with an (m × k) matrix, the schoolbook algorithm uses nk(m − 1) additions. Using the reference set of algorithms we find that our algorithm leads to an optimized number of additions of roughly cnk(m − 1), where c is a small constant which is independent of the dimensions. We further provide concrete and practical implementations of our methods that are very efficient for dimensions including (and surpassing) the 666 limit, i.e. (n, m, k) = (6, 6, 6), in our reference set of fast multiplication algorithms.
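For reference, Strassen's classical 2 × 2 scheme mentioned in the opening sentence looks as follows in Python (7 multiplications and 18 additions/subtractions; this is the textbook scheme, not the authors' optimized variants).

```python
def strassen_2x2(A, B):
    """Strassen's scheme: 7 multiplications and 18 additions/subtractions
    for a 2x2 product (vs. 8 and 4 for the schoolbook method). The entries
    may themselves be matrix blocks, which is why saving one multiplication
    compounds under recursion."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
assert strassen_2x2(A, B) == [[19, 22], [43, 50]]
```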
Last updated:  2024-12-23
Two Halves Make a Whole: How to Reconcile Soundness and Robustness in Watermarking for Large Language Models
Lei Fan, Chenhao Tang, Weicheng Yang, and Hong-Sheng Zhou
Watermarking techniques have been used to safeguard AI-generated content. In this paper, we study publicly detectable watermarking schemes (Fairoze et al.) and present several research findings. First, we observe that two important security properties, robustness and soundness, may conflict with each other. We then formally investigate these two properties in the presence of an arguably more realistic adversary that we call an editing adversary, and we prove an impossibility result: the robustness and soundness properties cannot be achieved via a publicly-detectable single watermarking scheme. Second, we demonstrate our main result: we for the first time introduce the new concept of a publicly-detectable dual watermarking scheme for AI-generated content. We provide a novel construction using two publicly-detectable watermarking schemes; each of the two can achieve “half” of the two required properties: one achieves robustness, and the other achieves soundness. Eventually, we combine the two halves into a whole, achieving the robustness and soundness properties at the same time. Our construction has been implemented and evaluated.
Last updated:  2024-12-23
Programming Equation Systems of Arithmetization-Oriented Primitives with Constraints
Mengyu Chang, Kexin Qiao, Junjie Cheng, Changhai Ou, and Liehuang Zhu
Arithmetization-Oriented (AO) cryptographic algorithms operate on large finite fields. The most threatening attack on such designs is the Gröbner basis attack, which solves the equation system encoded from the cryptanalysis problem. However, the encoding of a primitive as a system of equations is not unique, and finding the optimal one with low solving complexity is a challenge. This paper introduces an automatic tool that converts the CICO problem into a Mixed-Integer Quadratic Constraint Programming (MIQCP) model, using integer variables and constraints to track degree propagation and determine variable introduction points. The optimal MIQCP solution provides the encoding with the lowest solving complexity. We build models for the Griffin, Anemoi, and Ciminion permutations to cover their modules comprehensively. Experiments show reduced Gröbner basis attack complexity, lower than the designers’ bounds for small numbers of rounds, e.g., up to 8 rounds for Griffin. This tool can be used for security evaluation against Gröbner basis attacks in new designs.
Last updated:  2024-12-21
"These results must be false": A usability evaluation of constant-time analysis tools
Marcel Fourné, Daniel De Almeida Braga, Jan Jancar, Mohamed Sabt, Peter Schwabe, Gilles Barthe, Pierre-Alain Fouque, and Yasemin Acar
Cryptography secures our online interactions, transactions, and trust. To achieve this goal, not only do the cryptographic primitives and protocols need to be secure in theory, they also need to be securely implemented by cryptographic library developers in practice. However, implementing cryptographic algorithms securely is challenging, even for skilled professionals, which can lead to vulnerable implementations, especially to side-channel attacks. For timing attacks, a severe class of side-channel attacks, there exists a multitude of tools that are supposed to help cryptographic library developers assess whether their code is vulnerable to timing attacks. Previous work has established that despite an interest in writing constant-time code, cryptographic library developers do not routinely use these tools due to their general lack of usability. However, the precise factors affecting the usability of these tools remain unexplored. While many of the tools are developed in an academic context, we believe that it is worth exploring the factors that contribute to or hinder their effective use by cryptographic library developers. To assess what contributes to and detracts from usability of tools that verify constant-timeness (CT), we conducted a two-part usability study with 24 (post)graduate student participants on 6 tools across diverse tasks that approximate real-world use cases for cryptographic library developers. We find that all studied tools are affected by similar usability issues to varying degrees, with no tool excelling in usability, and usability issues preventing their effective use. Based on our results, we recommend that effective tools for verifying CT need usable documentation, simple installation, easy-to-adapt examples, clear output corresponding to CT violations, and minimal noninvasive code markup. We contribute first steps to achieving these with limited academic resources, with our documentation, examples, and installation scripts.
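To illustrate the property these tools verify (an illustration of ours, not an example from the study): a secret-dependent early exit leaks timing information, while a constant-time comparison does not.

```python
# Illustrative only: the kind of timing leak constant-time analysis tools flag.
import hmac

def leaky_compare(secret: bytes, guess: bytes) -> bool:
    # Early exit: running time depends on how many leading bytes match,
    # letting an attacker recover the secret byte by byte from timings.
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False
    return True

def constant_time_compare(secret: bytes, guess: bytes) -> bool:
    # Processes every byte regardless of mismatches; timing is data-independent.
    return hmac.compare_digest(secret, guess)
```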
Last updated:  2024-12-22
Minimizing the Use of the Honest Majority in YOSO MPC with Guaranteed Output Delivery
Rishabh Bhadauria, James Hsin-yu Chiang, Divya Ravi, Jure Sternad, and Sophia Yakoubov
Cleve (STOC 86) shows that an honest majority is necessary for MPC with guaranteed output delivery. In this paper, we show that while an honest majority is indeed necessary, its involvement can be minimal. We demonstrate an MPC protocol with guaranteed output delivery, the majority of which is executed by a sequence of committees with dishonest majority; we leverage one committee with an honest majority, each member of which does work independent of the circuit size. Our protocol has the desirable property that every participant speaks only once (YOSO, Crypto 2021). As a building block of independent interest, we introduce public computation, which is essentially privacy-free MPC with guaranteed output delivery (akin to smart contracts realized on blockchains). We instantiate public computation on a public bulletin board in three different ways (with different assumption / round / space utilization trade-offs).
Last updated:  2024-12-20
Learning with Errors from Nonassociative Algebras
Andrew Mendelsohn and Cong Ling
We construct a provably-secure structured variant of Learning with Errors (LWE) using nonassociative cyclic division algebras, assuming the hardness of worst-case structured lattice problems, and give a full search-to-decision reduction, improving upon the construction of Grover et al. named 'Cyclic Learning with Errors' (CLWE). We are thus able to create structured LWE over cyclic algebras without any restriction on the size of secret spaces, which was required for CLWE as a result of its restricted security proof. We reduce the shortest independent vectors problem in ideal lattices, obtained from ideals in orders of such algebras, to the decision variant of LWE defined for nonassociative CDAs. We believe this variant has greater security and greater freedom in parameter choices than CLWE, and greater asymptotic efficiency of multiplication than module LWE. Our reduction requires new results in the ideal theory of such nonassociative algebras, which may be of independent interest. We then adapt an LPR-like PKE scheme to hold for nonassociative spaces, and discuss the efficiency and security of our construction, showing that it is immune to certain subfield attacks. Finally, we give example parameters to construct algebras for cryptographic use.
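To fix notation, a toy sketch of plain (unstructured) LWE sampling; the paper's setting, with samples drawn over orders in nonassociative cyclic division algebras, is far richer, and the parameters below are illustrative only.

```python
# Toy sketch of plain LWE sampling; parameters are illustrative, not secure.
import random

q, n = 3329, 8                                       # toy modulus and dimension
s = [random.randrange(q) for _ in range(n)]          # secret vector

def lwe_sample():
    a = [random.randrange(q) for _ in range(n)]      # uniform public vector
    e = round(random.gauss(0, 2))                    # small Gaussian error
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b        # decision-LWE asks: are such pairs (a, b) or uniform?
```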
Last updated:  2024-12-20
Leveraging remote attestation APIs for secure image sharing in messaging apps
Joel Samper and Bernardo Ferreira
Sensitive pictures such as passport photos and nudes are commonly shared through mobile chat applications. One popular strategy for the privacy protection of this material is to use ephemeral messaging features, such as the view-once snaps in Snapchat. However, design limitations and implementation bugs in messaging apps may allow attackers to bypass the restrictions imposed by those features on the received material. One way attackers may accomplish this is by tampering with the software stack on their own devices. In this paper, we propose and test a protection strategy based on a multiplatform system that encrypts and decrypts sensitive pictures on top of messaging apps and performs remote attestation with available app integrity APIs to safeguard its security. Our analysis and experiments show that, compared to previous proposals for image encryption in a middleware, remote attestation offers increased security, adds privacy benefits, simplifies integration, and improves usability by not requiring users to exchange key material a priori. In our experiments, it incurs an added average latency of 3.8 and 4.5 seconds when sending and receiving private pictures, respectively.
Last updated:  2024-12-23
Exact Template Attacks with Spectral Computation
Meriem Mahar, Mammar Ouladj, Sylvain Guilley, Hacène Belbachir, and Farid Mokrane
The so-called Gaussian template attack (TA) is one of the optimal Side-Channel Analyses (SCA) when the measurements are captured with normal noise. In the SCA literature, several optimizations of its implementation have been introduced, such as coalescence and spectral computation. Coalescence consists of averaging traces corresponding to the same plaintext value, thereby coalescing (synonym: compacting) the dataset. Spectral computation consists of sharing the computational workload when estimating the likelihood across key hypotheses. State-of-the-art coalescence leverages the Law of Large Numbers (LLN) to compute the mean of equivalent traces. This approach comes with a drawback: the LLN is just an asymptotic approximation, so it does not lead to an exact Template Attack, especially for a small number of traces. In this paper, we introduce a way of calculating the TA exactly and with the same computational complexity (using the spectral approach), without using the LLN, regardless of the number of messages. For the experimental validation of this approach, we use the ANSSI SCA Database (ASCAD), with different numbers of messages and different numbers of samples per trace. Recall that this dataset concerns a software implementation of AES-128, running on an ATMEGA-8515 microcontroller.
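A minimal sketch of the coalescence step described above, assuming traces are given as a NumPy array alongside their plaintext byte values; the paper's contribution is to make the subsequent template attack exact rather than relying on these LLN-style means.

```python
# Coalescence: average all traces sharing the same plaintext value,
# compacting the dataset to at most 256 mean traces (for byte plaintexts).
import numpy as np

def coalesce(traces: np.ndarray, plaintexts: np.ndarray):
    """traces: (N, samples) array; plaintexts: (N,) array of byte values."""
    means, counts = {}, {}
    for p in np.unique(plaintexts):
        idx = plaintexts == p
        means[int(p)] = traces[idx].mean(axis=0)   # LLN-style class mean
        counts[int(p)] = int(idx.sum())
    return means, counts
```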
Last updated:  2024-12-20
Zeroed Out: Cryptanalysis of Weak PRFs in Alternating Moduli
Irati Manterola Ayala and Håvard Raddum
The growing adoption of secure multi-party computation (MPC) has driven the development of efficient symmetric key primitives tailored for MPC. Recent advancements, such as the alternating moduli paradigm, have shown promise but leave room for cryptographic and practical improvements. In this paper, we analyze a family of weak pseudorandom functions (wPRF) proposed at Crypto 2024, focusing on the One-to-One parameter sets. We demonstrate that these configurations fail to achieve their intended one-to-one mappings and exploit this observation to develop an efficient key recovery attack. The attacks reveal significant vulnerabilities, reducing the complexity of key recovery to $O(2^{\lambda/2} \log_2 \lambda)$ for the Standard One-to-One wPRF and $O(2^{0.84\lambda})$ for the Reversed Moduli variant, both substantially below their claimed $\lambda$-bit security. We validate our findings through experimental evaluations, confirming alignment between predicted and observed attack complexities.
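For readers unfamiliar with the paradigm, a toy sketch of the alternating-moduli pattern underlying such wPRFs, written in the style of earlier (2,3)-wPRF proposals; this is not the Crypto 2024 One-to-One parameter set analyzed in the paper.

```python
# Toy alternating-moduli wPRF: a linear map over Z_2 whose output bits are
# then summed over Z_3 (illustrative parameters only, in the style of
# earlier (2,3)-wPRF proposals; not the construction attacked above).
import numpy as np

rng = np.random.default_rng(1)
n = 16
K = rng.integers(0, 2, size=(n, n))           # secret binary key matrix

def wprf(x: np.ndarray) -> int:
    w = (K @ x) % 2                           # linear layer modulo 2
    return int(w.sum() % 3)                   # switch moduli: sum modulo 3

x = rng.integers(0, 2, size=n)                # weak PRF: random inputs only
print(wprf(x))
```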
Last updated:  2024-12-20
Greedy Algorithm for Representative Sets: Applications to IVLBC and GIFT-64 in Impossible Differential Attack
Manjeet Kaur, Tarun Yadav, Manoj Kumar, and Dhananjoy Dey
The impossible differential (ID) attack is crucial for analyzing the strength of block ciphers. The critical aspect of this technique is to identify IDs, and several methods have been introduced to detect them. Recently, the mixed-integer linear programming (MILP) approach was extended by partitioning the input and output differences to identify IDs, along with techniques to determine the representative set and partition table of a set over any nonlinear function. In this paper, we introduce a deterministic algorithm using the greedy approach [Cormen et al., 2022] for finding the representative set and partition table of a set over any nonlinear function. This algorithm iteratively selects the set that covers the maximum number of uncovered elements of the target set, and it outperforms the existing algorithms in terms of time complexity. We use this algorithm to compute the representative sets and partition tables of two block ciphers, IVLBC and GIFT-64, and identify 6-round IDs for them. Our research contributes a deterministic algorithm to compute the representative set and partition table of a set over any nonlinear function with at most 16-bit input size.
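The greedy selection rule is simple to state in code; a minimal sketch (ours, not the paper's implementation) of greedily covering a target set:

```python
# Greedy covering: repeatedly pick the candidate set that covers the most
# still-uncovered elements of the target set.
def greedy_cover(universe: set, candidates: list) -> list:
    uncovered, chosen = set(universe), []
    while uncovered:
        best = max(candidates, key=lambda s: len(s & uncovered))
        if not best & uncovered:
            raise ValueError("target set cannot be covered")
        chosen.append(best)
        uncovered -= best
    return chosen

# e.g. cover {0..5} with three candidate sets
print(greedy_cover({0, 1, 2, 3, 4, 5}, [{0, 1, 2}, {2, 3}, {3, 4, 5}]))
```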
Last updated:  2024-12-26
Optimally Secure TBC Based Accordion Mode
Nilanjan Datta, Avijit Dutta, Shibam Ghosh, and Hrithik Nandi
The design of tweakable wide block ciphers has advanced significantly over the past two decades. This evolution began with the approach of designing a wide block cipher by Naor and Reingold. Since then, numerous tweakable wide block ciphers have been proposed, many of which build on existing block ciphers and are secure up to the birthday bound for the total number of blocks queried. Although there has been a slowdown in the development of tweakable wide block cipher modes in the last couple of years, the latest NIST proposal for accordion modes has reignited interest and momentum in the design and analysis of these ciphers. Although new designs have emerged, their security often falls short of optimal (i.e., $n$-bit) security, where $n$ is the output size of the primitive. In this direction, designing an efficient tweakable wide block cipher with $n$-bit security is an interesting research problem. An optimally secure tweakable wide block cipher mode can easily be turned into a misuse-resistant RUP-secure authenticated encryption scheme with optimal security. This paper proposes $\textsf{HCTR+}$, which turns an $n$-bit tweakable block cipher (TBC) with an $n$-bit tweak into a variable-input-length tweakable block cipher. Unlike tweakable \textsf{HCTR}, $\textsf{HCTR+}$ ensures $n$-bit security regardless of tweak repetitions. We also propose two TBC-based almost-xor-universal hash functions, named $\textsf{PHASH+}$ and $\textsf{ZHASH+}$, and use them as the underlying hash functions in the $\textsf{HCTR+}$ construction to create two TBC-based $n$-bit secure tweakable wide block cipher modes, $\textsf{PHCTR+}$ and $\textsf{ZHCTR+}$. Experimental results show that both $\textsf{PHCTR+}$ and $\textsf{ZHCTR+}$ exhibit excellent software performance when their underlying TBC is instantiated with \textsf{Deoxys-BC-128-128}.
Last updated:  2024-12-27
Improved Rejection Sampling for Compact Lattice Signatures
Joel Gärtner
One of the primary approaches used to construct lattice-based signature schemes is the “Fiat-Shamir with aborts” methodology. Such a scheme may abort and restart during signing, which corresponds to rejection sampling of the produced signatures to ensure that they follow a distribution that is independent of the secret key. This rejection sampling is only feasible when the output distribution is sufficiently wide, limiting how compact this type of signature scheme can be. In this work, we develop a new method to construct signatures influenced by the rejection condition. This allows our rejection sampling to target significantly narrower output distributions than previous approaches, allowing much more compact signatures. The combined size of a signature and a verification key for the resulting scheme is less than half of that for ML-DSA and comparable to that of compact hash-and-sign lattice signature schemes, such as Falcon.
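A toy one-dimensional sketch of the "Fiat-Shamir with aborts" rejection step described above (ours, with made-up bounds): restart until the response lands in a band whose distribution is independent of the secret.

```python
# Toy rejection sampling: z = y + c*s is released only if it falls in a band
# reachable for every possible secret, so z's distribution leaks nothing about s.
import random

def sign_toy(s: int, B: int = 1000, s_max: int = 10):
    assert abs(s) <= s_max
    while True:                              # "abort and restart"
        y = random.randrange(-B, B + 1)      # masking randomness
        c = random.choice([-1, 0, 1])        # toy challenge
        z = y + c * s
        if abs(z) <= B - s_max:              # rejection condition
            return z, c

# A narrower output band (smaller B) gives more compact signatures at the
# price of more aborts; this is the trade-off the paper improves.
```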
Last updated:  2024-12-19
Simple Power Analysis assisted Chosen Cipher-Text Attack on ML-KEM
Alexandre Berzati, Andersson Calle Viera, Maya Chartouny, and David Vigilant
Recent work by Bernstein et al. (ePrint 2024) identified two timing attacks, KyberSlash1 and KyberSlash2, targeting the ML-KEM decryption and encryption algorithms, respectively, enabling efficient recovery of secret keys. To mitigate these vulnerabilities, correctives were promptly applied across implementations. In this paper, we demonstrate a very simple side-channel-assisted power analysis attack on the patched implementations of ML-KEM. Our results show that the original timing leakage can be shifted to power-consumption leakage that can be exploited on specific data. We performed a practical validation of this attack on both a standard and a shuffled implementation of ML-KEM on a Cortex-M4 platform, confirming its effectiveness. Our approach enables the recovery of the ML-KEM secret key in just 30 seconds for the standard implementation, and approximately 3 hours for the shuffled implementation, achieving a 100% success rate in both cases.
Last updated:  2024-12-19
Simulation Secure Multi-Input Quadratic Functional Encryption: Applications to Differential Privacy
Ferran Alborch Escobar, Sébastien Canard, and Fabien Laguillaumie
Multi-input functional encryption is a primitive that allows for the evaluation of an $\ell$-ary function over multiple ciphertexts, without learning any information about the underlying plaintexts. This type of computation is useful in many cases where one has to compute over encrypted data, such as privacy-preserving cloud services, federated learning, or more generally delegation of computation from multiple clients. It has recently been shown by Alborch et al. in PETS '24 to be useful to construct a randomized functional encryption scheme for obtaining differentially private data analysis over an encrypted database supporting linear queries. In this work we propose the first secret-key multi-input quadratic functional encryption scheme satisfying simulation security. Current constructions supporting quadratic functionalities, proposed by Agrawal et al. in CRYPTO '21 and TCC '22, only reach indistinguishability-based security. Our proposed construction is generic, and for a concrete instantiation, we propose a new function-hiding inner-product functional encryption scheme proven simulation secure against one challenge ciphertext in the standard model, which is of independent interest. We then use these two results to construct an efficient randomized quadratic functional encryption scheme, from which we obtain differentially private data analysis over an encrypted database supporting quadratic queries. Finally, we give and fully benchmark an implementation of the randomized scheme. This work is an extended version of the paper "Simulation Secure Multi-Input Quadratic Functional Encryption" at SAC '24, where the multi-input quadratic functional encryption scheme and function-hiding inner-product functional encryption schemes were first presented (Section 3 and Section 4).
Last updated:  2024-12-19
BBB Secure Arbitrary Length Tweak TBC from n-bit Block Ciphers
Arghya Bhattacharjee, Ritam Bhaumik, Nilanjan Datta, Avijit Dutta, and Sougata Mandal
At FSE'15, Mennink introduced two tweakable block ciphers, $\widetilde{F}[1]$ and $\widetilde{F}[2]$, both utilizing an $n$-bit tweak. It was demonstrated that $\widetilde{F}[1]$ is secure for up to $2^{2n/3}$ queries, while $\widetilde{F}[2]$ is secure for up to $2^n$ queries, assuming the underlying block cipher is an ideal cipher with an $n$-bit key and $n$-bit data. Later, at ASIACRYPT'16, Wang et al. showed a birthday bound attack on Mennink's design (which was later corrected in the ePrint version, ePrint 2015/363) and proposed 32 new candidates for tweakable block ciphers that are derived from $n$-bit ideal block ciphers. It was shown that all $32$ constructions are provably secure up to $2^n$ queries. All the designs proposed by both Mennink and Wang et al. admit only $n$-bit tweaks. In FSE'23, Shen and Standaert proposed a tweakable block cipher, $\widetilde{G2}$, which uses $2n$-bit tweaks and is constructed from three $n$-bit block cipher calls, proving its security up to $n$ bits, assuming that the underlying block cipher is an ideal cipher. They also showed that it is impossible to design a tweakable block cipher with $2n$-bit tweaks using only two $n$-bit block cipher calls while achieving security beyond the birthday bound. In this paper, we advance this research further. We show that any tweakable block cipher design with $3n$-bit tweaks based on only three block cipher calls, where at least one key is tweak-independent, is vulnerable to a birthday bound distinguishing attack. We then propose a tweakable block cipher, $\widetilde{\textsf{G}_3}^*$, which uses three block cipher calls, admits $3n$-bit tweaks, and achieves security up to $O(2^{2n/3})$ queries when all three block cipher keys are tweak-dependent. Furthermore, we prove that using four ideal block cipher calls, with at least one key being tweak-dependent, is necessary and sufficient to achieve $n$-bit security for a tweakable block cipher that admits $3n$-bit tweaks. Finally, we propose a tweakable block cipher, $\widetilde{\textsf{G}_r}$, which uses $(r+1)$ block cipher calls and processes $rn$-bit tweaks, achieving security up to $O(2^n)$ queries when at least one block cipher key is tweak-dependent.
Last updated:  2025-01-07
How to Compress Garbled Circuit Input Labels, Efficiently
Marian Dietz, Hanjun Li, and Huijia Lin
Garbled Circuits are essential building blocks in cryptography, and extensive research has explored their construction from both applied and theoretical perspectives. However, a challenge persists: While theoretically designed garbled circuits offer optimal succinctness--remaining constant in size regardless of the underlying circuit’s complexity--and are reusable for multiple evaluations, their concrete computational costs are prohibitively high. On the other hand, practically efficient garbled circuits, inspired by Yao’s garbled circuits, encounter limitations due to substantial communication bottlenecks and a lack of reusability. To strike a balance, we propose a novel concept: online-offline garbling. This approach leverages instance-independent and (partially) reusable preprocessing during an offline phase, to enable the creation of constant-size garbled circuits in an online phase, while maintaining practical efficiency. Specifically, during the offline stage, the garbler generates and transmits a reference string, independent of the computation to be performed later. Subsequently, in the online stage, the garbler efficiently transforms a circuit into a constant-size garbled circuit. The evaluation process relies on both the reference string and the garbled circuit. We demonstrate that by leveraging existing tools such as those introduced by Applebaum et al. (Crypto’13) and Chongwon et al. (Crypto’17), online-offline garbling can be achieved under a variety of assumptions, including the hardness of Learning With Errors (LWE), Computational Diffie-Hellman (CDH), and factoring. In contrast, without the help of an offline phase, constant-size garbling is only feasible under the LWE and circular security assumptions, or the existence of indistinguishability obfuscation. However, these schemes are still very inefficient, several orders of magnitude more costly than Yao-style garbled circuits. To address this, we propose a new online-offline garbling scheme based on Ring LWE. Our scheme offers both asymptotic and concrete efficiency. It serves as a practical alternative to Yao-style garbled circuits, especially in scenarios where online communication is constrained. Furthermore, we estimate the concrete latency using our approach in realistic settings and demonstrate that it is 2-20X faster than using Yao-style garbled circuits. This improvement is estimated without taking into account parallelization of computation, which can lead to further performance improvement using our scheme.
Last updated:  2024-12-19
Breaking and Provably Restoring Authentication: A Formal Analysis of SPDM 1.2 including Cross-Protocol Attacks
Cas Cremers, Alexander Dax, and Aurora Naska
The SPDM (Security Protocol and Data Model) protocol is a standard under development by the DMTF consortium, and supported by major industry players including Broadcom, Cisco, Dell, Google, HP, IBM, Intel, and NVIDIA. SPDM 1.2 is a complex protocol that aims to provide platform security, for example for communicating hardware components or cloud computing scenarios. In this work, we provide the first holistic, formal analysis of SPDM 1.2: we model the full protocol flow of SPDM considering all of its modes – especially the complex interaction between its different key-exchange modes – in the framework of the Tamarin prover, making our resulting model one of the most complex Tamarin models to date. To our surprise, Tamarin finds a cross-protocol attack that allows a network attacker to completely break authentication of the pre-shared key mode. We implemented our attack on the SPDM reference implementation, and reported the issue to the SPDM developers. DMTF registered our attack as a CVE with CVSS rating 9 (critical). We propose a fix and develop the first formal symbolic proof using the Tamarin prover for the fixed SPDM 1.2 protocol as a whole. The resulting model of the main modes and their interactions is highly complex, and we develop supporting lemmas to enable proving properties in the Tamarin prover, including the absence of all cross-protocol attacks. Our fix has been incorporated into both the reference implementation and the newest version of the standard. Our results highlight the need for a holistic analysis of other internet standards and the importance of providing generalized security guarantees across entire protocols.
Last updated:  2024-12-18
Decompressing Dilithium's Public Key with Fewer Signatures Using Side Channel Analysis
Ruize Wang, Joel Gärtner, and Elena Dubrova
The CRYSTALS-Dilithium digital signature scheme, selected by NIST as a post-quantum cryptography (PQC) standard under the name ML-DSA, employs a public key compression technique intended for performance optimization. Specifically, the module learning with errors instance $({\bf A}, {\bf t})$ is compressed by omitting the low-order bits ${\bf t_0}$ of the vector ${\bf t}$. It was recently shown that knowledge of ${\bf t_0}$ enables more effective side-channel attacks on Dilithium implementations. Another recent work demonstrated a method for reconstructing ${\bf t_0}$ from multiple signatures. In this paper, we build on this method by applying profiled deep learning-assisted side-channel analysis to partially recover the least significant bit of ${\bf t_0}$ from power traces. As a result, the number of signatures required for the reconstruction of ${\bf t_0}$ can be reduced by roughly half. We demonstrate how the new ${\bf t_0}$ reconstruction method enhances the efficiency of recovering the secret key component ${\bf s}_1$, and thus facilitates digital signature forgery, on an ARM Cortex-M4 implementation of Dilithium.
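For context, a sketch in the style of ML-DSA's Power2Round compression, which splits $t = t_1 \cdot 2^d + t_0$ and publishes only ${\bf t_1}$ (parameters as in ML-DSA; the sketch itself is ours):

```python
# Power2Round-style key compression: the public key keeps t1, and t0 (the
# low-order part the attack reconstructs) is omitted. d = 13 in ML-DSA.
q, d = 8380417, 13

def power2round(t: int):
    t = t % q
    t0 = t % (1 << d)
    if t0 > (1 << (d - 1)):       # centered representative in (-2^(d-1), 2^(d-1)]
        t0 -= 1 << d
    t1 = (t - t0) >> d
    return t1, t0

t1, t0 = power2round(1234567)
assert (t1 * (1 << d) + t0) % q == 1234567
```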
Last updated:  2024-12-18
Cryptanalysis of TETRA Encryption Algorithms - Episode 1: TEA-3
Jens Alich, Amund Askeland, Subhadeep Banik, Tim Beyne, Anne Canteaut, Patrick Felke, Gregor Leander, Willi Meier, and Lukas Stennes
We present the first public and in-depth cryptanalysis of TEA-3, a stream cipher used in TETRA radio networks that was kept secret until recently. While the same also holds for the six other TETRA encryption algorithms, we pick TEA-3 to start with as (i) it is not obviously weakened like TEA-{1,4,7}, (ii) in contrast to TEA-2 it is approved only for extra-European emergency service, and (iii) as already noted by [MBW23], the TEA-3 design surprisingly contains a non-bijective S-box. Most importantly, we show that the 80-bit non-linear feedback shift register operating on the key decomposes into a cascade of two 40-bit registers. Although this hints at an intentional weakness at first glance, we are not able to lift our results to a practical attack. Beyond that, we show how the balanced non-linear feedback functions used in the state register of TEA-3 can be constructed.
Last updated:  2024-12-18
Cryptographic Commitments on Anonymizable Data
Xavier Bultel, Céline Chevalier, Charlène Jojon, Diandian Liu, and Benjamin Nguyen
Local Differential Privacy (LDP) mechanisms consist of (locally) adding controlled noise to data in order to protect the privacy of their owner. In this paper, we introduce a new cryptographic primitive called LDP commitment. Usually, a commitment ensures that the committed value cannot be modified before it is revealed. In the case of an LDP commitment, however, the value is revealed after being perturbed by an LDP mechanism. Opening an LDP commitment therefore requires a proof that the mechanism has been correctly applied to the value, to ensure that the value is still usable for statistical purposes. We also present a security model for this primitive, in which we define the hiding and binding properties. Finally, we present a concrete scheme for an LDP staircase mechanism (generalizing the randomized response technique), based on classical cryptographic tools and standard assumptions. We provide an implementation in Rust that demonstrates its practical efficiency (the generation of a commitment requires just a few milliseconds). On the application side, we show how our primitive can be used to simultaneously ensure privacy, usability, and traceability of medical data when it is used for statistical studies in an open science context. We consider a scenario where a hospital provides sensitive patient data signed by doctors to a research center after it has been anonymized, so that the research center can verify both the provenance of the data (i.e. verify the doctors’ signatures even though the data has been noised) and that the data has been correctly anonymized (i.e. is usable even though it has been anonymized).
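A minimal sketch of the randomized response technique that the paper's staircase mechanism generalizes: report the true bit with probability $e^{\varepsilon}/(e^{\varepsilon}+1)$, which yields $\varepsilon$-LDP.

```python
# Randomized response: flip the true bit with probability 1/(e^eps + 1);
# the likelihood ratio of any output under the two inputs is exactly e^eps.
import math, random

def randomized_response(bit: int, eps: float) -> int:
    p_keep = math.exp(eps) / (math.exp(eps) + 1)
    return bit if random.random() < p_keep else 1 - bit

# An LDP commitment would commit to `bit` and later open the perturbed value
# together with a proof that this mechanism was applied correctly.
```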
Last updated:  2024-12-18
Efficient Error-tolerant Side-channel Attacks on GPV Signatures Based on Ordinary Least Squares Regression
Jaesang Noh, Dongwoo Han, and Dong-Joon Shin
The Gentry-Peikert-Vaikuntanathan (GPV) framework is utilized for constructing digital signatures and is proven to be secure in the classical/quantum random-oracle model. Falcon is such a signature scheme, recognized as a compact and efficient scheme among the NIST-standardized signature schemes. Although a signature scheme based on the GPV framework is theoretically highly secure, it could be vulnerable to side-channel attacks, and hence further research on physical attacks is required to make a robust signature scheme. We propose a general secret key recovery attack on GPV signatures using partial information about signatures obtained from side-channel attacks. The three main contributions are summarized as follows. First, we introduce, for the first time, a concept of vulnerable partial information of GPV signatures and propose a secret key recovery attack, called the OLS attack, which effectively utilizes this partial information. In contrast to the approaches of Guerreau et al. (CHES 2022) and Zhang et al. (Eurocrypt 2023), which utilize filtered (or processed) signatures with hidden parallelepiped or learning slice schemes, the OLS attack leverages all the available signatures without filtering. We prove that the secret key recovered by the OLS attack converges to the real secret key in probability as the number of samples increases. Second, we utilize Gaussian leakage as partial information for the OLS attack on Falcon. As a result, the OLS attack shows a significantly higher success rate with fewer samples than the existing attack schemes. Furthermore, by incorporating the DDGR attack, the OLS attack can recover the secret key using far fewer samples with a success rate close to 100%. Moreover, we propose a more efficient OLS attack on Falcon, which reduces the number of required side-channel measurements. Third, we propose an error-tolerant power analysis attack using MAP decoding, which effectively corrects errors in samples so as to utilize the Gaussian leakage correctly. In conclusion, the OLS attack is expected to strengthen the security of GPV signatures, including Falcon.
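A generic illustration of the OLS principle behind the attack (not the paper's exact estimator): given enough noisy linear observations tied to the secret, ordinary least squares converges to it.

```python
# OLS toy: noisy linear observations of a secret vector are fed, unfiltered,
# to least squares; the estimate converges as the number of samples grows.
import numpy as np

rng = np.random.default_rng(0)
n, N = 8, 20000
s = rng.integers(-2, 3, size=n).astype(float)   # toy "secret key"

C = rng.normal(size=(N, n))                     # known per-signature coefficients
leak = C @ s + rng.normal(scale=3.0, size=N)    # noisy side-channel observations

s_hat, *_ = np.linalg.lstsq(C, leak, rcond=None)
print(np.round(s_hat))                          # ~ s for large N
```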
Last updated:  2024-12-18
A Note on Isogeny Group Action-Based Pseudorandom Functions
Yi-Fu Lai
In PKC'24, de Saint Guilhem and Pedersen give a pseudorandom function based on a relaxed group action assumption in the semi-honest setting, and build an oblivious pseudorandom function (OPRF) from it. Later, a recent paper by Levin and Pedersen uses the same function, under the same assumption, to build a verifiable random function (VRF). We give a structural attack on this problem by reducing it to a few group action inverse problems (GAIP/DLog) over small subgroups. This reduction allows us to apply a CRT-based attack to recover the secret key, ultimately lowering the problem’s effective security strength to under 70 classical bits when using CSIDH-512. Hence the strength of their pseudorandom functions is bounded above by the GAIP over the largest prime-order subgroup. Clearly, Kuperberg’s subexponential attack can be used to further reduce its quantum security.
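The CRT recombination step of such attacks, in toy form (ours): recover the secret modulo several small coprime subgroup orders, then recombine.

```python
# Chinese Remainder Theorem recombination of residues modulo coprime orders.
from math import prod

def crt(residues, moduli):
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)     # pow(.., -1, m) is the inverse mod m
    return x % M

secret = 123456
moduli = [101, 103, 107]                 # toy "small subgroup" orders
residues = [secret % m for m in moduli]
assert crt(residues, moduli) == secret % prod(moduli)
```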
Last updated:  2024-12-18
SeaSearch: Secure and Efficient Selection Queries
Shantanu Sharma, Yin Li, Sharad Mehrotra, Nisha Panwar, Komal Kumari, and Swagnik Roychoudhury
Information-theoretic or unconditional security provides the highest level of security --- independent of the computational capability of an adversary. Secret-sharing techniques achieve information-theoretic security by splitting a secret into multiple parts (called shares) and storing the shares across non-colluding servers. However, secret-sharing-based solutions suffer from high overheads due to multiple communication rounds among servers and/or information leakage due to access-patterns (i.e., the identity of rows satisfying a query) and volume (i.e., the number of rows satisfying a query). We propose SeaSearch, an information-theoretically secure approach that uses both additive and multiplicative secret-sharing to efficiently support a large class of selection queries involving conjunctive, disjunctive, and range conditions. Two major contributions of SeaSearch are: (i) a new search algorithm using additive shares based on fingerprints, which were developed for string-matching over cleartext; and (ii) two row retrieval algorithms: one based on multiplicative shares and another based on additive shares. SeaSearch does not require communication among the servers storing shares and does not reveal any information to an adversary based on access-patterns and volume.
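A minimal sketch of the additive secret sharing such systems build on: a secret is split into random shares that sum to it modulo a prime, so any proper subset of shares is uniformly random.

```python
# Additive secret sharing over a prime field (field choice is illustrative).
import random

P = 2**61 - 1                                    # a Mersenne prime

def share_additive(secret: int, n_servers: int):
    shares = [random.randrange(P) for _ in range(n_servers - 1)]
    shares.append((secret - sum(shares)) % P)    # shares sum to the secret mod P
    return shares

shares = share_additive(42, 3)
assert sum(shares) % P == 42
```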
Last updated:  2024-12-18
Verified Foundations for Differential Privacy
Markus de Medeiros, Muhammad Naveed, Tancrède Lepoint, Temesghen Kahsai, Tristan Ravitch, Stefan Zetzsche, Anjali Joshi, Joseph Tassarotti, Aws Albarghouthi, and Jean-Baptiste Tristan
Differential privacy (DP) has become the gold standard for privacy-preserving data analysis, but implementing it correctly has proven challenging. Prior work has focused on verifying DP at a high level, assuming the foundations are correct and a perfect source of randomness is available. However, the underlying theory of differential privacy can be very complex and subtle. Flaws in basic mechanisms and random number generation have been a critical source of vulnerabilities in real-world DP systems. In this paper, we present SampCert, the first comprehensive, mechanized foundation for differential privacy. SampCert is written in Lean with over 12,000 lines of proof. It offers a generic and extensible notion of DP, a framework for constructing and composing DP mechanisms, and formally verified implementations of Laplace and Gaussian sampling algorithms. SampCert provides (1) a mechanized foundation for developing the next generation of differentially private algorithms, and (2) mechanically verified primitives that can be deployed in production systems. Indeed, SampCert’s verified algorithms power the DP offerings of Amazon Web Services (AWS), demonstrating its real-world impact. SampCert’s key innovations include: (1) A generic DP foundation that can be instantiated for various DP definitions (e.g., pure, concentrated, Rényi DP); (2) formally verified discrete Laplace and Gaussian sampling algorithms that avoid the pitfalls of floating-point implementations; and (3) a simple probability monad and novel proof techniques that streamline the formalization. To enable proving complex correctness properties of DP and random number generation, SampCert makes heavy use of Lean’s extensive Mathlib library, leveraging theorems in Fourier analysis, measure and probability theory, number theory, and topology.
Last updated:  2024-12-17
Revisiting Boomerang Attacks on Lightweight ARX and AND-RX Ciphers with Applications to KATAN, SIMON and CHAM
Li Yu and Je Sen Teh
In this paper, we investigate the security of lightweight block ciphers, focusing on those that utilize the ADD-Rotate-XOR (ARX) and AND-Rotate-XOR (AND-RX) design paradigms. More specifically, we examine their resilience against boomerang-style attacks. First, we propose an automated search strategy that leverages the boomerang connectivity table (BCT) for AND operations ($\wedge BCT$) to conduct a complete search for boomerang and rectangle distinguishers for AND-RX ciphers. The proposed search strategy automatically considers all possible $\wedge BCT$ switches in the middle of the boomerang to optimise the distinguishing probability. The correctness of the search strategy was verified experimentally. We were able to find the best boomerang and rectangle distinguishers to date in the single-key model for the lightweight block ciphers KATAN32/48/64 and SIMON32/48. Next, we investigated the BCT properties of ARX ciphers and discovered that a truncated boomerang switch could be formulated for the lightweight ARX cipher CHAM. We were able to find the best single-key and related-key rectangle distinguishers to date for CHAM. Our findings provide more accurate security margins for these lightweight ciphers against boomerang-style attacks.
Last updated:  2024-12-17
Adaptive Special Soundness: Improved Knowledge Extraction by Adaptive Useful Challenge Sampling
Thomas Attema, Michael Klooß, Russell W. F. Lai, and Pavlo Yatsyna
Proving knowledge soundness of an interactive proof from scratch is often a challenging task. This has motivated the development of various special soundness frameworks which, in a nutshell, separate knowledge extractors into two parts: (1) an extractor to produce a set of accepting transcripts conforming to some structure; (2) a witness recovery algorithm to recover a witness from a set of transcripts with said structure. These frameworks take care of (1), so it suffices for a protocol designer to specify (2), which is often simple(r). Recently, works by Bünz–Fisch (TCC’23) and Aardal et al. (CRYPTO’24) provide new frameworks, called almost special soundness and predicate special soundness, respectively. To handle insufficiencies of special soundness, they deviate from its spirit and augment it in different ways. Their changes are necessary because special soundness does not allow the challenges for useful sets of transcripts to depend on the transcripts themselves, but only on the challenges in the transcripts. As a consequence, (generalised) special soundness cannot express extraction strategies which reduce a computational problem to finding “inconsistent” accepting transcripts, for example in PCP/IOP-based or lattice-based proof systems, and thus provides (very) sub-optimal extractors. In this work, we introduce adaptive special soundness, which captures extraction strategies exploiting inconsistencies between transcripts, e.g. transcripts containing different openings of the same commitment. Unlike (generalised) special soundness (Attema, Fehr, and Resch (TCC’23)), which specifies a target transcript structure, our framework allows specifying an extraction strategy which guides the extractor to sample challenges adaptively based on the history of prior transcripts. We extend the recent (almost optimal) extractor of Attema, Fehr, Klooß and Resch (ePrint 2023/1945) to our notion, and argue that it encompasses almost special soundness and predicate special soundness in many cases of interest. As a challenging application, we modularise and generalise the lattice Bulletproofs analysis by Bünz–Fisch (TCC’23) using the adaptive special soundness framework. Moreover, we extend their analysis to the ring setting for a slightly wider selection of rings than the rational integers.
Last updated:  2024-12-17
Multilateral Trade Credit Set-off in MPC via Graph Anonymization and Network Simplex
Enrico Bottazzi, Chan Nam Ngo, and Masato Tsutsumi
Multilateral Trade Credit Set-off (MTCS) is a process run by a service provider that collects trade credit data (i.e., obligations of one firm to pay another) from a network of firms and detects cycles of debts that can be removed from the system. The process yields liquidity savings for the participants, who can discharge their debts without relying on expensive loans. We propose an MTCS protocol that protects firms' sensitive data, such as the obligation amounts or the identities of the firms they trade with. Mathematically, this is analogous to solving the Minimum Cost Flow (MCF) problem over a graph of $n$ firms, where the $m$ edges are the obligations. State-of-the-art techniques for secure MCF have an overall complexity of $O(n^{10} \log n)$ communication rounds, making them barely applicable even to small-scale instances. Our solution leverages novel secure techniques such as Graph Anonymization and Network Simplex to reduce the complexity of the MCF problem to $O(\max(n, \log\log(n+m)))$ rounds of interaction per pivot operation, in which $O(\max(n^2, nm))$ comparisons and multiplications are performed. Experimental results show the trade-off between privacy and optimality.
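In the clear (without any MPC), the set-off step at the heart of MTCS is easy to illustrate: find a cycle of obligations and cancel the minimum amount along it. A toy sketch:

```python
# Cancel the minimum obligation along a known debt cycle.
def settle_cycle(debts: dict, cycle: list):
    """debts: {(debtor, creditor): amount}; cycle: [f1, f2, ..., f1]."""
    edges = list(zip(cycle, cycle[1:]))
    m = min(debts[e] for e in edges)     # amount every firm in the cycle saves
    for e in edges:
        debts[e] -= m
        if debts[e] == 0:
            del debts[e]
    return m

debts = {("A", "B"): 50, ("B", "C"): 30, ("C", "A"): 40}
saved = settle_cycle(debts, ["A", "B", "C", "A"])
print(saved, debts)                      # 30 {('A', 'B'): 20, ('C', 'A'): 10}
```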
Last updated:  2024-12-17
Simple is COOL: Graded Dispersal and its Applications for Byzantine Fault Tolerance
Ittai Abraham, Gilad Asharov, and Anirudh Chandramouli
The COOL protocol of Chen (DISC'21) is a major advance that enables perfect security for various tasks (in particular, Byzantine Agreement in Synchrony and Reliable Broadcast in Asynchrony). For an input of size $L$ bits, its communication complexity is $O(nL+n^2 \log n)$, which is optimal up to a $\log n$ factor. Unfortunately, Chen’s analysis is rather intricate and complex. Our main contribution is a simple analysis of a new variant of COOL based on elementary counting arguments. Our main consistency proof takes less than two pages (instead of over 20 pages), making the COOL protocol much more accessible. In addition, the simple analysis allows us to improve the protocol by reducing one round of communication and reducing the communication complexity by 40%. Furthermore, we suggest a new way of extracting the core properties of COOL as a new primitive, which we call Graded Dispersal. We show how Graded Dispersal can then be used to obtain efficient solutions for Byzantine Agreement, Verifiable Information Dispersal, Gradecast, and Reliable Broadcast (in both Synchrony and Asynchrony, where appropriate). Our improvement of COOL directly applies here, and we improve the state-of-the-art in all those primitives by reducing at least one round and 40% of the communication.
Last updated:  2025-01-15
A Heuristic Proof of P $\neq$ NP
Ping Wang
The question of whether the complexity class P equals NP is a major unsolved problem in theoretical computer science. In this paper, we introduce a new language, the Add/XNOR problem, which has the simplest structure and perfect randomness, obtained by extending the subset sum problem. We prove that P $\neq$ NP by showing that square-root complexity is necessary to solve the Add/XNOR problem. That is, problems that are verifiable in polynomial time are not necessarily solvable in polynomial time. Furthermore, by giving up the commutative and associative properties, we design a magma equipped with a permutation and successfully achieve Conjecture 1. Based on this conjecture, we obtain the Add/XOR/XNOR problem and one-way functions that are believed to require exhaustive search to solve or invert.
Last updated:  2024-12-27
The Jacobi Factoring Circuit: Quantum Factoring with Near-Linear Gates and Sublinear Space and Depth
Gregory D. Kahanamoku-Meyer, Seyoon Ragavan, Vinod Vaikuntanathan, and Katherine Van Kirk
We present a compact quantum circuit for factoring a large class of integers, including some whose classical hardness is expected to be equivalent to RSA (but not including RSA integers themselves). To our knowledge, it is the first polynomial-time circuit to achieve sublinear qubit count for a classically-hard factoring problem; the circuit also achieves sublinear depth and nearly linear gate count. We build on the quantum algorithm for squarefree decomposition discovered by Li, Peng, Du and Suter (Nature Scientific Reports 2012), which relies on computing the Jacobi symbol in quantum superposition. Our circuit completely factors any number $N$, whose prime decomposition has distinct exponents, and finds at least one non-trivial factor if not all exponents are the same. In particular, to factor an $n$-bit integer $N=P^2 Q$ (with $P$ and $Q$ prime, and $Q<2^m$ for some $m$), our circuit uses $\widetilde{O}(m)$ qubits and has depth at most $\widetilde{O}(m + n/m)$, with $\widetilde{O}(n)$ quantum gates. When $m=\Theta(n^a)$ with $2/3 < a < 1$, the space and depth are sublinear in $n$, yet no known classical algorithms exploit the relatively small size of $Q$ to run faster than general-purpose factoring algorithms. We thus believe that factoring such numbers has potential to be the most concretely efficient classically-verifiable proof of quantumness currently known. The technical core of our contribution is a new space-efficient quantum algorithm to compute the Jacobi symbol of $A$ mod $B$, in the regime where $B$ is classical and much larger than $A$. Crucially, our circuit reads the bits of the classical value $B$ in a streaming fashion, never storing more than $\widetilde{O}(\log A)$ qubits of quantum information at one time. In the context of the larger Jacobi algorithm for factoring $N = P^2Q$, this reduces the overall qubit count to be roughly proportional to the length of $Q$, rather than the length of $N$. Our circuit for computing the Jacobi symbol is also highly gate-efficient and parallelizable, achieving gate count $\widetilde{O}(\log B)$ and depth at most $\widetilde{O}(\log A + \log B/\log A)$. Finally, we note that our circuit for computing the Jacobi symbol generalizes to related problems, such as computing the greatest common divisor, and thus could be of independent interest.
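For reference, here is the classical (non-quantum) Jacobi symbol routine; this is the function the paper's circuit evaluates in superposition while streaming the bits of the classical modulus.

```python
# Binary-style Jacobi symbol (a/b) for odd b > 0, via quadratic reciprocity.
def jacobi(a: int, b: int) -> int:
    assert b > 0 and b % 2 == 1
    a %= b
    t = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if b % 8 in (3, 5):          # (2/b) = -1 iff b = 3, 5 (mod 8)
                t = -t
        a, b = b, a                      # reciprocity step
        if a % 4 == 3 and b % 4 == 3:
            t = -t
        a %= b
    return t if b == 1 else 0

assert jacobi(1001, 9907) == -1          # textbook test value
```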
Last updated:  2024-12-17
General Practical Cryptanalysis of the Sum of Round-Reduced Block Ciphers and ZIP-AES
Antonio Flórez-Gutiérrez, Lorenzo Grassi, Gregor Leander, Ferdinand Sibleyras, and Yosuke Todo
We introduce a new approach between classical security proofs of modes of operation and dedicated security analysis for known cryptanalysis families: General Practical Cryptanalysis. This allows us to analyze generically the security of the sum of two keyed permutations against known attacks. In many cases (of course, not all), we show that the security of the sum is strongly linked to that of the composition of the two permutations. This enables the construction of beyond-birthday-bound secure low-latency PRFs by cutting a known-to-be-secure block cipher into two equal parts. As a side result, our general analysis shows an inherent difficulty for key recovery based on differential-type attacks against the sum, which leads to a correction of previously published attacks on the dedicated design Orthros.
Last updated:  2024-12-16
Carousel: Fully Homomorphic Encryption from Slot Blind Rotation Technique
Seonhong Min and Yongsoo Song
Fully Homomorphic Encryption (FHE) enables secure computation of functions on ciphertexts without requiring decryption. Specifically, AP-like HE schemes exploit an intrinsic bootstrapping method called blind rotation. In blind rotation, a look-up table is homomorphically evaluated on the input ciphertext through the iterative multiplication of monomials. However, the algebraic structure of the multiplicative group of monomials imposes certain limitations on the input and output plaintext space: 1. only a fraction of the input plaintext space can be bootstrapped, 2. the output plaintext space is restricted to subsets of real numbers. In this paper, we design a novel bootstrapping method called slot blind rotation. The key idea of our approach is to utilize the automorphism group instead of monomials. More specifically, the look-up table is encoded into a single polynomial using SIMD (Single Instruction Multiple Data) packing and is rotated via a series of homomorphic multiplications and automorphisms. This method achieves two significant advantages: 1. the entire input plaintext space can be bootstrapped, 2. a broader output plaintext space, such as complex numbers or finite fields/rings, can be supported. Finally, we present a new HE scheme leveraging the slot blind rotation technique, provide a proof-of-concept implementation, and report benchmark results together with recommended parameter sets.
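A plaintext analogue of the classical blind-rotation step described above (no encryption, purely illustrative): multiplying a look-up-table polynomial by a monomial $X^k$ rotates its coefficients negacyclically modulo $X^N + 1$.

```python
# Negacyclic rotation of a LUT polynomial: multiply by X^k mod X^N + 1,
# where wrap-around picks up a sign because X^N = -1.
def negacyclic_rotate(lut: list, k: int) -> list:
    N = len(lut)
    out = [0] * N
    for i, c in enumerate(lut):
        j = (i + k) % (2 * N)
        if j < N:
            out[j] += c
        else:
            out[j - N] -= c
    return out

print(negacyclic_rotate([1, 2, 3, 4], 2))   # [-3, -4, 1, 2]
```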
Last updated:  2024-12-16
Covert 19th century political intrigues of Tenerife nobility revealed by cryptanalyzing an encrypted letter
Jezabel Molina-Gil, Cándido Caballero-Gil, Judit Gutiérrez-de-Armas, and Moti Yung
This article presents a cryptanalysis of a 19th-century encrypted manuscript discovered in the archives of the Conde de Siete Fuentes in Tenerife, Canary Islands, Spain. The manuscript, preserved by the heirs of the 6th Count of Valle de Salazar, utilizes a polyalphabetic substitution cipher. The cryptanalysis was performed by applying statistical frequency analysis and developing a Python script for decryption, with which the authors successfully deciphered the message. The decrypted letter reveals political communications discussing the strategic positioning of Tenerife as the capital, the dissolution of local councils, and the influence of key political figures. The analysis compares the cipher with historical encryption techniques and identifies the unique characteristics of the manuscript’s encryption method. The study highlights the political dynamics and alliances within Tenerife’s nobility and their interactions with the central Spanish government, providing significant insights into both the cryptographic practices and the political maneuvers of the time.
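A minimal frequency-counting pass of the kind used in such decryptions (illustrative; a polyalphabetic cipher additionally requires per-alphabet counts rather than one global count):

```python
# Count symbol frequencies in a ciphertext, the first step of frequency analysis.
from collections import Counter

def symbol_frequencies(ciphertext: str) -> dict:
    counts = Counter(c for c in ciphertext if c.isalpha())
    total = sum(counts.values())
    return {sym: n / total for sym, n in counts.most_common()}

print(symbol_frequencies("XLMW MW E WIGVIX QIWWEKI"))   # toy shifted text
```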
Last updated:  2024-12-15
Security Analysis of ASCON Cipher under Persistent Faults
Madhurima Das and Bodhisatwa Mazumdar
This work investigates persistent fault analysis on the ASCON cipher, which has recently been standardized by NIST for lightweight cryptography applications. In a persistent fault, the fault, once injected (e.g., through RowHammer injection techniques), remains in the system for the entire encryption phase. In this work, we propose a model to mount persistent fault analysis (PFA) on the ASCON cipher. In the finalization round of ASCON, we identify that the fault-injected S-Box operation in the permutation round $p^{12}$ is vulnerable to leaking information about the secret key. The model comes in two variants: a single fault-injected S-Box among the 64 parallel S-Box invocations, and the same faulty S-Box iterated 64 times. The attack model demonstrates that any sponge-based construction operating in authenticated encryption with associated data (AEAD) mode is vulnerable to persistent faults. In this work, we demonstrate the scenario of a single fault wherein the fault, once injected, persists until the device is powered off. Using the proposed method, we successfully retrieve the 128-bit key of ASCON. Our experiments show that the minimum and maximum numbers of queries required are 63 plaintexts and 451 plaintexts, respectively. Moreover, we observe that the number of queries required to mount the attack depends on the fault location in the S-box LUT, as observed from the plots reporting the minimum and average numbers of queries over 100 key values.
Last updated:  2024-12-16
NLAT: the NonLinear Approximation Table of Vectorial Boolean Mappings
Jorge Nakahara Jr
This paper studies an extension of the Linear Approximation Table (LAT) of vectorial Boolean mappings (also known as Substitution boxes) used in Linear Cryptanalysis (LC). This extended table is called the NonLinear Approximation Table (NLAT).
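For concreteness, a small sketch of the classical LAT that the NLAT extends, computed for a toy 3-bit permutation; entry $(a, b)$ is the (signed) bias of the linear approximation $a \cdot x = b \cdot S(x)$.

```python
# LAT[a][b] = #{x : a.x XOR b.S(x) = 0} - 2^(n-1), for an n-bit S-box S.
def lat(sbox: list) -> list:
    size = len(sbox)
    table = [[0] * size for _ in range(size)]
    for a in range(size):
        for b in range(size):
            matches = sum(
                (bin(a & x).count("1") + bin(b & sbox[x]).count("1")) % 2 == 0
                for x in range(size)
            )
            table[a][b] = matches - size // 2   # 0 means no linear correlation
    return table

print(lat([0, 1, 3, 6, 7, 4, 5, 2]))            # a toy 3-bit permutation
```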
Last updated:  2024-12-14
Qubit Optimized Quantum Implementation of SLIM
Hasan Ozgur Cildiroglu and Oguz Yayla
The advent of quantum computing has profound implications for current technologies, offering advancements in optimization while posing significant threats to cryptographic algorithms. Public-key cryptosystems relying on prime factorization or discrete logarithms are particularly vulnerable, whereas block ciphers (BCs) remain secure through increased key lengths. In this study, we introduce a novel quantum implementation of SLIM, a lightweight block cipher optimized for 32-bit plaintext and an 80-bit key, based on a Feistel structure. This implementation distinguishes itself from other BC quantum implementations in its class (64–128-bit) by utilizing a minimal number of qubits while maintaining robust cryptographic strength and efficiency. By employing an innovative design that minimizes qubit usage, this work highlights SLIM’s potential as a resource-efficient and secure candidate for quantum-resistant encryption protocols.
Last updated:  2024-12-14
Impact Tracing: Identifying the Culprit of Misinformation in Encrypted Messaging Systems
Zhongming Wang, Tao Xiang, Xiaoguo Li, Biwen Chen, Guomin Yang, Chuan Ma, and Robert H. Deng
Encrypted messaging systems provide end-to-end security but obstruct content moderation. As a result, misinformation proliferates in these systems, thereby exacerbating online hate and harassment. The paradigm of “Reporting-then-Tracing” shows great potential in mitigating the spread of misinformation. For instance, message traceback (CCS'19) traces all the dissemination paths of a message, while source tracing (CCS'21) traces its originator. However, message traceback lacks privacy preservation for non-influential users (e.g., users who only receive the message once), while source tracing maintains privacy but provides only limited traceability. In this paper, we initiate the study of impact tracing. Intuitively, impact tracing traces influential spreaders central to disseminating misinformation while providing privacy protection for non-influential users. We introduce noise to hide non-influential users and demonstrate that this noise does not hinder the identification of influential spreaders. Then, we formally prove our scheme's security and show that it achieves differential privacy protection for non-influential users. Additionally, we define three metrics to evaluate its traceability, correctness, and privacy using real-world datasets. The experimental results show that our scheme identifies the most influential spreaders with accuracy from 82% to 99% as the amount of noise varies. Meanwhile, our scheme requires only a 6-byte platform storage overhead for each message while maintaining low messaging latency (< 0.25 ms).
Last updated:  2024-12-14
Orbweaver: Succinct Linear Functional Commitments from Lattices
Ben Fisch, Zeyu Liu, and Psi Vesely
We present Orbweaver, a plausibly post-quantum functional commitment for linear relations that achieves quasilinear prover time together with $O(\log n)$ proof size and polylogarithmic verifier time. Orbweaver enables evaluation of linear functions on committed vectors over cyclotomic rings and the integers. It is extractable, preprocessing, non-interactive, structure-preserving, and supports compact public proof aggregation. The security of our scheme is based on the $k$-$R$-ISIS assumption (and its knowledge counterpart), whereby we require a trusted setup to generate a universal structured reference string. We use Orbweaver to construct succinct univariate and multilinear polynomial commitments. Concretely, our scheme has smaller proofs than most other succinct post-quantum arguments for large statements. For binary vectors of length $2^{30}$ we achieve $302$KiB linear map evaluation proofs with evaluation binding, and $1$MiB proofs when extractability is required; for $32$-bit integers these sizes are $494$KiB and $1.6$MiB, respectively.
Last updated:  2024-12-13
Mira: Efficient Folding for Pairing-based Arguments
Josh Beal and Ben Fisch
Pairing-based arguments offer remarkably small proofs and space-efficient provers, but aggregating such proofs remains costly. Groth16 SNARKs and KZG polynomial commitments are prominent examples of this class of arguments. These arguments are widely deployed in decentralized systems, with millions of proofs generated per day. Recent folding schemes have greatly reduced the cost of proving incremental computations, such as batch proof verification. However, existing constructions require encoding pairing operations in generic constraint systems, leading to high prover overhead. In this work, we introduce Mira, a folding scheme that directly supports pairing-based arguments. We construct this folding scheme by generalizing the framework in Protostar to support a broader class of special-sound protocols. We demonstrate the versatility and efficiency of this framework through two key applications: Groth16 proof aggregation and verifiable ML inference. Mira achieves 5.8x faster prover time and 9.7x lower memory usage than the state-of-the-art proof aggregation system while maintaining a constant-size proof. To improve the efficiency of verifiable ML inference, we provide a new lincheck protocol with a verifier degree that is independent of the matrix order. We show that Mira scales effectively to larger models, overcoming the memory bottlenecks of current schemes.
Last updated:  2024-12-13
Hash-Prune-Invert: Improved Differentially Private Heavy-Hitter Detection in the Two-Server Model
Borja Balle, James Bell, Albert Cheu, Adria Gascon, Jonathan Katz, Mariana Raykova, Phillipp Schoppmann, and Thomas Steinke
Differentially private (DP) heavy-hitter detection is an important primitive for data analysis. Given a threshold $t$ and a dataset of $n$ items from a domain of size $d$, such detection algorithms ignore items occurring fewer than $t$ times while identifying items occurring more than $t+\Delta$ times; we call $\Delta$ the error margin. In the central model where a curator holds the entire dataset, $(\varepsilon,\delta)$-DP algorithms can achieve error margin $\Theta(\frac 1 \varepsilon \log \frac 1 \delta)$, which is optimal when $d \gg 1/\delta$. Several works, e.g., Poplar (S&P 2021), have proposed protocols in which two or more non-colluding servers jointly compute the heavy hitters from inputs held by $n$ clients. Unfortunately, existing protocols suffer from an undesirable dependence on $\log d$ in terms of both server efficiency (computation, communication, and round complexity) and accuracy (i.e., error margin), making them unsuitable for large domains (e.g., when items are kB-long strings, $\log d \approx 10^4$). We present hash-prune-invert (HPI), a technique for compiling any heavy-hitter protocol with the $\log d$ dependencies mentioned above into a new protocol with improvements across the board: computation, communication, and round complexity depend (roughly) on $\log n$ rather than $\log d$, and the error margin is independent of $d$. Our transformation preserves privacy against an active adversary corrupting at most one of the servers and any number of clients. We apply HPI to an improved version of Poplar, also introduced in this work, that improves Poplar's error margin by roughly a factor of $\sqrt{n}$ (regardless of $d$). Our experiments confirm that the resulting protocol improves efficiency and accuracy for large $d$.
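The threshold semantics (suppress items below $t$, report items above $t+\Delta$) can be sketched in the central model, where a curator noises raw counts. This is a simplification for intuition only, not the HPI protocol, and it elides the $\delta$-dependent bookkeeping a formal guarantee over a large domain needs.

```python
import random
from collections import Counter

def laplace(scale: float) -> float:
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_heavy_hitters(items, t, delta, epsilon):
    # Report an item iff its noisy count clears the midpoint of the gap
    # [t, t + delta]; the error margin delta absorbs the Laplace noise, so
    # items below t are suppressed and items above t + delta are reported w.h.p.
    counts = Counter(items)
    return [x for x, c in counts.items()
            if c + laplace(1.0 / epsilon) >= t + delta / 2]

data = ["rare"] * 5 + ["hot"] * 100
print(dp_heavy_hitters(data, t=10, delta=20, epsilon=1.0))  # ['hot'] w.h.p.
```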
Last updated:  2024-12-13
An Abstract Multi-Forking Lemma
Charanjit S Jutla
In this work, we state and prove an abstract version of the multi-forking lemma of Pointcheval and Stern from EUROCRYPT'96. Earlier, Bellare and Neven had given an abstract version of the forking lemma for two-collisions (CCS'06). While the original purpose of the forking lemma was to prove the security of signature schemes in the random oracle methodology, the abstract forking lemma can be used to obtain security proofs for multi-signatures, group signatures, and compilations of interactive protocols under the Fiat-Shamir random-oracle methodology.
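As a concrete anchor for what "forking" means operationally, here is a minimal sketch of the Bellare–Neven-style forking experiment: rerun the adversary on the same coins, resampling random-oracle answers from the crucial query onward, and succeed if both runs halt on that same query with distinct answers. The toy adversary and parameters are illustrative assumptions.

```python
import random

Q = 8            # number of random-oracle queries
H_SIZE = 2**16   # size of the oracle's answer space

def fork(adversary, x):
    seed = random.getrandbits(64)                      # adversary's coins, reused below
    h1 = [random.randrange(H_SIZE) for _ in range(Q)]  # first run's oracle answers
    i, out1 = adversary(x, h1, seed)
    if i is None:
        return None
    # Second run: same coins, same answers before query i, fresh answers from i on.
    h2 = h1[:i] + [random.randrange(H_SIZE) for _ in range(Q - i)]
    j, out2 = adversary(x, h2, seed)
    if j == i and h1[i] != h2[i]:
        return out1, out2   # a two-collision: same query point, two distinct answers
    return None

def toy_adversary(x, h, seed):
    # The crucial query index depends only on the coins, so it is stable
    # across the two runs; the output depends on the answer at that index.
    i = random.Random(seed).randrange(Q)
    return i, (x, i, h[i])

print(fork(toy_adversary, "statement"))  # succeeds unless h1[i] == h2[i]
```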
Last updated:  2024-12-13
The Revisited Hidden Weight Bit Function
Pierrick Méaux, Tim Seuré, and Deng Tang
The Hidden Weight Bit Function (HWBF) has drawn considerable attention for its simplicity and cryptographic potential. Despite its ease of implementation and favorable algebraic properties, its low nonlinearity limits its direct application in modern cryptographic designs. In this work, we revisit the HWBF and propose a new weightwise quadratic variant obtained by combining the HWBF with a bent function. This construction offers improved cryptographic properties while remaining computationally efficient. We analyze the balancedness, nonlinearity, and other criteria of this function, presenting theoretical bounds and experimental results to highlight its advantages over existing functions in similar use cases. The different techniques we introduce to study the nonlinearity of this function also enable us to bound the nonlinearity of a broad family of weightwise quadratic functions, both theoretically and practically. We believe these methods are of independent interest.
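For readers who want to reproduce the nonlinearity discussion, here is a small sketch of the original HWBF and a brute-force Walsh-spectrum computation of its nonlinearity for small $n$; the LSB-first, 1-based bit-indexing is one common convention and an assumption here.

```python
def hwbf(x: int, n: int) -> int:
    # f(0) = 0; otherwise output the bit of x indexed by its Hamming weight.
    w = bin(x).count("1")
    return (x >> (w - 1)) & 1 if w >= 1 else 0

def nonlinearity(f, n: int) -> int:
    # NL(f) = 2^(n-1) - max_a |W_f(a)| / 2, with W_f(a) = sum_x (-1)^(f(x) xor a.x).
    best = 0
    for a in range(2**n):
        w = sum((-1) ** (f(x, n) ^ (bin(a & x).count("1") & 1)) for x in range(2**n))
        best = max(best, abs(w))
    return 2 ** (n - 1) - best // 2

for n in range(4, 9):
    print(n, nonlinearity(hwbf, n))  # the "low nonlinearity" the abstract refers to
```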
Last updated:  2024-12-13
PrivQuant: Communication-Efficient Private Inference with Quantized Network/Protocol Co-Optimization
Tianshi Xu, Shuzhang Zhong, Wenxuan Zeng, Runsheng Wang, and Meng Li
Private deep neural network (DNN) inference based on secure two-party computation (2PC) enables secure privacy protection for both the server and the client. However, existing secure 2PC frameworks suffer from a high inference latency due to enormous communication. As the communication of both linear and non-linear DNN layers reduces with the bit widths of weight and activation, in this paper, we propose PrivQuant, a framework that jointly optimizes the 2PC-based quantized inference protocols and the network quantization algorithm, enabling communication-efficient private inference. PrivQuant proposes DNN architecture-aware optimizations for the 2PC protocols for communication-intensive quantized operators and conducts graph-level operator fusion for communication reduction. Moreover, PrivQuant also develops a communication-aware mixed precision quantization algorithm to improve the inference efficiency while maintaining high accuracy. The network/protocol co-optimization enables PrivQuant to outperform prior-art 2PC frameworks. With extensive experiments, we demonstrate PrivQuant reduces communication by $11\times$, $2.5\times$, and $2.8\times$, which results in $8.7\times$, $1.8\times$, and $2.4\times$ latency reduction compared with SiRNN, COINN, and CoPriv, respectively.
Last updated:  2024-12-13
Ring Ring! Who's There? A Privacy Preserving Mobile Number Search
Akshit Aggarwal
Private set intersection (PSI) allows any two parties (say, a client and a server) to jointly compute the intersection of their sets without revealing anything else. Fully homomorphic encryption (FHE)-based PSI is a cryptographic solution for implementing PSI-based protocols. Most FHE-based PSI protocols follow either a hash-function approach or an oblivious-transfer approach. Their main limitations are 1) high communication complexity, namely $O(x \log y)$ (where $x$ is the total number of elements on the client side and $y$ is the total number of elements on the server side), and 2) high memory usage due to SIMD packing for encrypting numbers with many digits. In this work, we design a novel tree-based approach to store such numbers that achieves lower communication complexity, namely $O(|d|^{2})$ (where $|d|$ is the number of digits of a mobile number). We implement our protocol using the TenSEAL library. Our protocol opens the door to finding common elements with lower communication complexity and lower memory usage.
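To make the data structure concrete, the following is a plaintext sketch of a digit trie, in which a lookup touches one node per digit of the number; the paper evaluates this kind of tree traversal under FHE, and the class and method names here are illustrative.

```python
class DigitTrie:
    def __init__(self):
        self.children = {}   # digit -> subtrie
        self.end = False     # marks a complete stored number

    def insert(self, number: str) -> None:
        node = self
        for d in number:
            node = node.children.setdefault(d, DigitTrie())
        node.end = True

    def contains(self, number: str) -> bool:
        # Walk one node per digit: O(|d|) steps regardless of the set size.
        node = self
        for d in number:
            node = node.children.get(d)
            if node is None:
                return False
        return node.end

t = DigitTrie()
t.insert("5551234567")
print(t.contains("5551234567"), t.contains("5559999999"))  # True False
```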
Last updated:  2024-12-13
Key-Insulated and Privacy-Preserving Signature Scheme with Publicly Derived Public Key, Revisited: Consistency, Outsider Strong Unforgeability, and Generic Construction
Keita Emura
Liu et al. (EuroS&P 2019) introduced Key-Insulated and Privacy-Preserving Signature Scheme with Publicly Derived Public Key (PDPKS) to enhance the security of stealth addresses and deterministic wallets. In this paper, we point out that the current security notions are insufficient in practice, and we introduce a new security notion which we call consistency. Moreover, we explore unforgeability and provide strong unforgeability for outsiders, which captures the situation that nobody, except the payer and the payee, can produce a valid signature. From the viewpoint of cryptocurrency functionality, this allows us to implement a refund functionality. Finally, we propose a generic construction of PDPKS that provides consistency and outsider strong unforgeability. The design is conceptually much simpler than known PDPKS constructions. It is particularly noteworthy that the underlying strongly unforgeable signature scheme is required to provide strong conservative exclusive ownership (S-CEO) security (Cremers et al., IEEE S&P 2021). Since we explicitly require the underlying signature scheme to be S-CEO secure, our security proof offers new insight into exclusive ownership security, which may be of independent interest. As instantiations, we obtain a pairing-based PDPKS scheme in the standard model, a discrete-logarithm-based pairing-free PDPKS scheme in the random oracle model, and a lattice-based PDPKS scheme in the random oracle model, among others.
Last updated:  2024-12-18
On the BUFF Security of ECDSA with Key Recovery
Keita Emura
In the usual syntax of digital signatures, the verification algorithm takes a verification key in addition to a signature and a message, whereas in ECDSA with key recovery, which is used in Ethereum, no verification key is input to the verification algorithm. Instead, a verification key is recovered from a signature and a message. In this paper, we explore the BUFF security of ECDSA with key recovery (KR-ECDSA), where BUFF stands for Beyond UnForgeability Features (Cremers et al., IEEE S&P 2021). We show that KR-ECDSA provides BUFF security, except for weak non-resignability (wNR). We highlight that the verification algorithm of KR-ECDSA takes an Ethereum address addr as input, which is defined as the rightmost 160 bits of the Keccak-256 hash of the corresponding ECDSA verification key, and checks that the hash value of the recovered verification key equals addr. Our security analysis shows that this procedure is mandatory to provide BUFF security. We also discuss whether wNR is mandatory in Ethereum or not. To clarify that the above equality check is mandatory for BUFF security in KR-ECDSA, we show that the original ECDSA does not provide any BUFF security. As a by-product of the analysis, we show that one of our BUFF attacks also works against Aumayr et al.'s ECDSA-based adaptor signature scheme (ASIACRYPT 2021). We emphasize that the attack is positioned outside of their security model.
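The highlighted check can be made concrete with a short sketch of the KR-ECDSA verification flow. Here `recover_verification_key` is a hypothetical stand-in for a secp256k1 key-recovery routine (the real math is noted in the comment), and the Keccak call uses pycryptodome; treat the whole sketch as illustrative, not the paper's formalization.

```python
from Crypto.Hash import keccak  # pycryptodome

def recover_verification_key(msg: bytes, sig: tuple) -> bytes:
    # Hypothetical stand-in: a real routine derives R from (r, v), hashes msg
    # to z, and computes Q = r^{-1} * (s*R - z*G) on secp256k1.
    raise NotImplementedError

def eth_address(pubkey: bytes) -> bytes:
    # addr = rightmost 160 bits of Keccak-256 of the (64-byte) verification key.
    return keccak.new(digest_bits=256, data=pubkey).digest()[-20:]

def kr_ecdsa_verify(addr: bytes, msg: bytes, sig: tuple) -> bool:
    pubkey = recover_verification_key(msg, sig)
    # The equality check below is the step the analysis shows to be essential.
    return eth_address(pubkey) == addr
```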
Last updated:  2024-12-13
Byzantine Consensus in Wireless Networks
Hao Lu, Jian Liu, and Kui Ren
A Byzantine consensus protocol is essential in decentralized systems, as the protocol ensures system consistency despite node failures. Research on consensus in wireless networks has received relatively little attention, despite significant advancements in wired networks; yet consensus in wireless networks is just as significant. In this paper, we propose a new reliable broadcast protocol that achieves reliability with higher fault tolerance than the state of the art (PODC '05). With the new protocol, we further develop the first Byzantine consensus protocol for wireless networks under the assumption of partial synchrony. Notably, this consensus protocol removes the requirement for leaders and the fail-over mechanisms of prior works. We formally prove the correctness of both our new broadcast protocol and our consensus protocol.
Last updated:  2024-12-13
The Existence of Quantum One-Way Functions
Ping Wang, Yikang Lei, Zishen Shen, and Fangguo Zhang
One-way functions are essential tools for cryptography. However, the existence of one-way functions is still an open conjecture. By constructing a function with classical bits as input and quantum states as output, we prove for the first time the existence of quantum one-way functions. This provides theoretical guarantees for the security of many quantum cryptographic protocols.
Last updated:  2024-12-13
Universal SNARGs for NP from Proofs of Correctness
Zhengzhong Jin, Yael Tauman Kalai, Alex Lombardi, and Surya Mathialagan
We give new constructions of succinct non-interactive arguments ($\mathsf{SNARG}$s) for $\mathsf{NP}$ in the settings of both non-adaptive and adaptive soundness. Our construction of non-adaptive $\mathsf{SNARG}$ is universal assuming the security of a (leveled or unleveled) fully homomorphic encryption ($\mathsf{FHE}$) scheme as well as a batch argument ($\mathsf{BARG}$) scheme. Specifically, for any choice of parameters $\ell$ and $L$, we construct a candidate $\mathsf{SNARG}$ scheme for any $\mathsf{NP}$ language $\mathcal{L}$ with the following properties: - the proof length is $\ell\cdot \mathsf{poly}(\lambda)$, - the common reference string $\mathsf{crs}$ has length $L\cdot \mathsf{poly}(\lambda)$, and - the setup is transparent (no private randomness). We prove that this $\mathsf{SNARG}$ has non-adaptive soundness assuming the existence of any $\mathsf{SNARG}$ where the proof size is $\ell$, the $\mathsf{crs}$ size is $L$, and there is a size $L$ Extended Frege ($\mathcal{EF}$) proof of completeness for the $\mathsf{SNARG}$. Moreover, we can relax the underlying $\mathsf{SNARG}$ to be any 2-message privately verifiable argument where the first message is of length $L$ and the second message is of length $\ell$. This yields new $\mathsf{SNARG}$ constructions based on any ``$\mathcal{EF}$-friendly'' designated-verifier $\mathsf{SNARG}$ or witness encryption scheme. We emphasize that our $\mathsf{SNARG}$ is universal in the sense that it does not depend on the argument system. We show several new implications of this construction that do not reference proof complexity: - a non-adaptive $\mathsf{SNARG}$ for $\mathsf{NP}$ with transparent $\mathsf{crs}$ from evasive $\mathsf{LWE}$ and $\mathsf{LWE}$. This gives a candidate lattice-based $\mathsf{SNARG}$ for $\mathsf{NP}$. - a non-adaptive $\mathsf{SNARG}$ for $\mathsf{NP}$ with transparent $\mathsf{crs}$ assuming the (non-explicit) existence of any $\mathsf{iO}$ and $\mathsf{LWE}$. - a non-adaptive $\mathsf{SNARG}$ for $\mathsf{NP}$ with a short and transparent (i.e., uniform) $\mathsf{crs}$ assuming $\mathsf{LWE}$, $\mathsf{FHE}$ and the (non-explicit) existence of any hash function that makes Micali's $\mathsf{SNARG}$ construction sound. - a non-adaptive $\mathsf{SNARG}$ for languages such as $\mathsf{QR}$ and $\overline{\mathsf{DCR}}$ assuming only $\mathsf{LWE}$. In the setting of adaptive soundness, we show how to convert any designated verifier $\mathsf{SNARG}$ into publicly verifiable $\mathsf{SNARG}$, assuming the underlying designated verifier $\mathsf{SNARG}$ has an $\mathcal{EF}$ proof of completeness. As a corollary, we construct an adaptive $\mathsf{SNARG}$ for $\mathsf{UP}$ with a transparent $\mathsf{crs}$ assuming subexponential $\mathsf{LWE}$ and evasive $\mathsf{LWE}$. We prove our results by extending the encrypt-hash-and-$\mathsf{BARG}$ paradigm of [Jin-Kalai-Lombardi-Vaikuntanathan, STOC '24].
Last updated:  2024-12-13
On the Traceability of Group Signatures: Uncorrupted User Must Exist
Keita Emura
Group signature (GS) is a well-known cryptographic primitive providing anonymity and traceability. Several implication results have been given, mainly focusing on the various security levels of anonymity; e.g., fully anonymous GS implies public key encryption (PKE), selfless anonymous GS can be constructed from one-way functions and non-interactive zero-knowledge proofs, and so on. In this paper, we explore a winning condition of full traceability: an adversary is required to produce a valid group signature whose opening result is an uncorrupted user. We demonstrate a generic construction of GS, secure in the Bellare-Micciancio-Warinschi (BMW) model except for the above condition, from PKE only. We emphasize that the proposed construction is quite artificial and meaningless in practice because the verification algorithm always outputs 1 regardless of the input. This result suggests that the winning condition is essential in full traceability, i.e., an uncorrupted user must exist. We also explore the public verifiability of the GS-based PKE scheme and introduce a new formal security definition of public verifiability following BUFF (Beyond UnForgeability Features) security. Our definition guarantees that the decryption result of a valid ciphertext is in the message space specified by the public key. We show that the GS-based PKE scheme is publicly verifiable if the underlying GS scheme is fully traceable.
Last updated:  2024-12-13
Crescent: Stronger Privacy for Existing Credentials
Christian Paquin, Guru-Vamsi Policharla, and Greg Zaverucha
We describe Crescent, a construction and implementation of privacy-preserving credentials. The system works by upgrading the privacy features of existing credentials, such as JSON Web Tokens (JWTs) and Mobile Driver's Licenses (mDLs), and as such does not require a new party to issue credentials. By using zero-knowledge proofs of possession of these credentials, we can add privacy features such as selective disclosure and unlinkability, without help from credential issuers. The system has practical performance, offering fast proof generation and verification times (tens of milliseconds) after a once-per-credential setup phase. We give demos for two practical scenarios: proof of employment for benefits eligibility (based on an employer-issued JWT), and online age verification (based on an mDL). We provide an open-source implementation to enable further research and experimentation. This paper is an early draft describing our work, aiming to include enough material to describe the functionality and some details of the internals of our new library, available at https://github.com/microsoft/crescent-credentials.
Last updated:  2024-12-13
GraSS: Graph-based Similarity Search on Encrypted Query
Duhyeong Kim, Yujin Nam, Wen Wang, Huijing Gong, Ishwar Bhati, Rosario Cammarota, Tajana S. Rosing, Mariano Tepper, and Theodore L. Willke
Similarity search, i.e., retrieving vectors in a database that are similar to a query, is the backbone of many applications. In particular, graph-based methods show state-of-the-art performance. For sensitive applications, it is critical to ensure the privacy of the query and the dataset. In this work, we introduce GraSS, a secure protocol between a client (query owner) and a server (dataset owner) for graph-based similarity search based on fully homomorphic encryption (FHE). Both client-input privacy against the server and server-input privacy against the client are achievable based on the underlying security assumptions of FHE. We first propose an FHE-friendly graph structure with a novel index encoding method that makes our protocol highly scalable in terms of data size, reducing the computational complexity of the neighborhood retrieval process from $O(n^2)$ to $\tilde{O}(n)$ for a total number of nodes $n$. We also propose several core FHE algorithms to perform graph operations under the new graph structure. Finally, we introduce GraSS, an end-to-end solution for secure graph-based similarity search based on FHE. To the best of our knowledge, it is the first FHE-based solution for secure graph-based database search. We implemented GraSS with an open-source FHE library and estimated its performance on a million-scale dataset. GraSS identifies the (approximate) top-16 in about $83$ hours with a search accuracy of $0.918$, making it over $28\times$ faster than the previous best-known FHE-based solution.
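For intuition about the neighborhood-retrieval step whose complexity is being reduced, here is a plaintext sketch of the greedy traversal that graph-based similarity search performs (GraSS runs this kind of walk under FHE); the toy random graph and Euclidean metric are assumptions for illustration.

```python
import numpy as np

def greedy_search(vectors, neighbors, query, entry=0):
    # Hop to whichever neighbor is closest to the query; stop at a local optimum.
    cur = entry
    while True:
        best = min(list(neighbors[cur]) + [cur],
                   key=lambda i: np.linalg.norm(vectors[i] - query))
        if best == cur:
            return cur
        cur = best

rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 8))
# Toy proximity graph: each node linked to its 5 nearest neighbors.
nbrs = [np.argsort(np.linalg.norm(pts - p, axis=1))[1:6] for p in pts]
print(greedy_search(pts, nbrs, rng.normal(size=8)))
```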
Last updated:  2024-12-12
Honest-Majority Threshold ECDSA with Batch Generation of Key-Independent Presignatures
Jonathan Katz and Antoine Urban
Several protocols have been proposed recently for threshold ECDSA signatures, mostly in the dishonest-majority setting. Yet in so-called key-management networks, where a fixed set of servers share a large number of keys on behalf of multiple users, it may be reasonable to assume that a majority of the servers remain uncompromised, and in that case there may be several advantages to using an honest-majority protocol. With this in mind, we describe an efficient protocol for honest-majority threshold ECDSA supporting batch generation of key-independent presignatures that allow for "non-interactive" online signing; these properties are not available in existing dishonest-majority protocols. Our protocol offers low latency and high throughput, and runs at an amortized rate of roughly 1.3 ms/presignature.
Last updated:  2024-12-20
Anonymous credentials from ECDSA
Matteo Frigo and abhi shelat
Anonymous digital credentials allow a user to prove possession of an attribute that has been asserted by an identity issuer without revealing any extra information about themselves. For example, a user who has received a digital passport credential can prove that their “age is $>18$” without revealing any other attributes such as their name or date of birth. Despite their inherent value for privacy-preserving authentication, anonymous credential schemes have been difficult to deploy at scale. Part of the difficulty arises because schemes in the literature, such as BBS+, use new cryptographic assumptions that require system-wide changes to existing issuer infrastructure. In addition, issuers often require digital identity credentials to be *device-bound* by incorporating the device’s secure element into the presentation flow. As a result, schemes like BBS+ require updates to the hardware secure elements and OS on every user's device. In this paper, we propose a new anonymous credential scheme for the popular and legacy-deployed Elliptic Curve Digital Signature Algorithm (ECDSA) signature scheme. By adding efficient zk arguments for statements about SHA256 and document parsing for ISO-standardized identity formats, our anonymous credential scheme is the first one that can be deployed *without* changing any issuer processes, *without* requiring changes to mobile devices, and *without* requiring non-standard cryptographic assumptions. Producing ZK proofs about ECDSA signatures has been a bottleneck for other ZK proof systems because standardized curves such as P256 use finite fields which do not support efficient number-theoretic transforms. We overcome this bottleneck by designing a ZK proof system around sumcheck and the Ligero argument system, by designing efficient methods for Reed-Solomon encoding over the required fields, and by designing specialized circuits for ECDSA. Our proofs for ECDSA can be generated in 60ms. When incorporated into a fully standardized identity protocol such as the ISO MDOC standard, we can generate a zero-knowledge proof for the MDOC presentation flow in 1.2 seconds on mobile devices, depending on the credential size. These advantages make our scheme a promising candidate for privacy-preserving digital identity applications.
Last updated:  2024-12-12
The Mis/Dis-information Problem is Hard to Solve
Gregory Hagen, Reihaneh Safavi-Naini, and Moti Yung
Securing information communication dates back thousands of years. The meaning of information security, however, has evolved over time and today covers a very wide variety of goals, including identifying the source of information, the reliability of information, and ultimately whether the information is trustworthy. In this paper, we look at the evolution of the information security problem and the approaches that have been developed for providing information protection. We argue that the more recent problem of misinformation and disinformation has shifted the content integrity problem from the protection of message syntax to the protection of message semantics. This shift, in the age of advanced AI systems, a technology that can be used to mimic human-generated content as well as to create bots that mimic human behaviour on the Internet, poses fundamental technological challenges that evade existing technologies. It leaves social elements, including public education and a suitable legal framework, increasingly as the main pillars of effective protection, at least in the short run. It also poses an intriguing challenge to the scientific community: to design effective solutions that employ cryptography and AI, together with incentivization to engage the global community, to ensure the safety of the information ecosystem.
Last updated:  2024-12-12
PrivCirNet: Efficient Private Inference via Block Circulant Transformation
Tianshi Xu, Lemeng Wu, Runsheng Wang, and Meng Li
Homomorphic encryption (HE)-based deep neural network (DNN) inference protects data and model privacy but suffers from significant computation overhead. We observe that transforming the DNN weights into circulant matrices converts general matrix-vector multiplications into HE-friendly 1-dimensional convolutions, drastically reducing the HE computation cost. Hence, in this paper, we propose PrivCirNet, a protocol/network co-optimization framework based on block circulant transformation. At the protocol level, PrivCirNet customizes the HE encoding algorithm that is fully compatible with the block circulant transformation and reduces the computation latency in proportion to the block size. At the network level, we propose a latency-aware formulation to search for the layer-wise block size assignment based on second-order information. PrivCirNet also leverages layer fusion to further reduce the inference cost. We compare PrivCirNet with the state-of-the-art HE-based framework Bolt (IEEE S&P 2024) and HE-friendly pruning method SpENCNN (ICML 2023). For ResNet-18 and Vision Transformer (ViT) on Tiny ImageNet, PrivCirNet reduces latency by $5.0\times$ and $1.3\times$ with iso-accuracy over Bolt, respectively, and improves accuracy by $4.1\%$ and $12\%$ over SpENCNN, respectively. For MobileNetV2 on ImageNet, PrivCirNet achieves $1.7\times$ lower latency and $4.2\%$ better accuracy over Bolt and SpENCNN, respectively. Our code and checkpoints are available at https://github.com/Tianshi-Xu/PrivCirNet.
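The algebraic identity this rests on is easy to check numerically: multiplication by a circulant matrix is a circular (1-D) convolution, which the DFT diagonalizes. A minimal numpy sketch (sizes are illustrative; the HE encoding itself is not modeled here):

```python
import numpy as np

n = 8
c = np.random.randn(n)   # first column defines the circulant block
x = np.random.randn(n)

# C[i, j] = c[(i - j) mod n]: each column is a cyclic shift of c.
C = np.stack([np.roll(c, j) for j in range(n)], axis=1)

direct = C @ x                                            # general matvec
via_fft = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))  # one 1-D convolution

print(np.allclose(direct, via_fft))  # True: the matvec collapses to a convolution
```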
Last updated:  2024-12-12
A Combinatorial Attack on Ternary Sparse Learning with Errors (sLWE)
Abul Kalam, Santanu Sarkar, and Willi Meier
Sparse Learning With Errors (sLWE) is a novel problem introduced at Crypto 2024 by Jain et al., designed to enhance security in lattice-based cryptography against quantum attacks while maintaining computational efficiency. This paper presents the first third-party analysis of the ternary variant of sLWE, where both the secret and error vectors are constrained to ternary values. We introduce a combinatorial attack that employs a subsystem extraction technique followed by a Meet-in-the-Middle approach, effectively recovering the ternary secret vector. Our comprehensive analysis explores the attack's performance across various sparsity and modulus settings, revealing critical security limitations inherent in ternary sLWE. Our analysis does not claim to present any attack on the proposal of Jain et al.; rather, it supports their assertion that sparse LWE is vulnerable for small secrets, particularly for ternary secrets and ternary errors. Notably, our findings indicate that the recommended parameters, which the developers claim provide security equivalent to LWE with a dimension of 1024, may not hold true for the ternary variant of sLWE. Our research highlights that, particularly with a modulus of $2^{64}$, the secret key can be recovered in a practical timeframe, supporting the developers' claim of vulnerability in this case. Additionally, for configurations with moduli of $2^{32}$ and $2^{16}$, we observe a significant reduction in the security margin. This suggests that the actual security level may be significantly weaker than intended. Overall, our work contributes crucial insights into the cryptographic robustness of ternary sLWE, emphasizing the need for further strengthening to protect against potential attacks and setting the stage for future research in this area.
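A toy, noiseless illustration of the split-and-match (meet-in-the-middle) idea behind such attacks follows: write $s = (s_1 \| s_2)$ and match $A_1 s_1$ against $b - A_2 s_2$ in a hash table. Dimensions are far below cryptographic size, and a real attack must additionally handle the ternary error (e.g., by matching on high-order bits); all concrete values are assumptions.

```python
import itertools
import numpy as np

q, n, m = 97, 8, 16
rng = np.random.default_rng(2)
A = rng.integers(0, q, (m, n))
s = rng.integers(-1, 2, n)       # ternary secret in {-1, 0, 1}
b = (A @ s) % q                  # noiseless toy instance (error omitted)

half = n // 2
A1, A2 = A[:, :half], A[:, half:]
# Enumerate the left half once; 3^(n/2) table entries instead of 3^n guesses.
table = {((A1 @ np.array(s1, dtype=np.int64)) % q).tobytes(): s1
         for s1 in itertools.product((-1, 0, 1), repeat=half)}
for s2 in itertools.product((-1, 0, 1), repeat=half):
    key = ((b - A2 @ np.array(s2, dtype=np.int64)) % q).tobytes()
    if key in table:
        print(list(table[key] + s2), s.tolist())  # recovered vs. true secret
        break
```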
Last updated:  2024-12-12
Data Decryption and Analysis of Note-Taking Applications
Seyoung Yoon, Myungseo Park, Kyungbae Jang, and Hwajeong Seo
As smartphone usage continues to grow, the demand for note-taking applications, including memo and diary apps, is rapidly increasing. These applications often contain sensitive information such as user schedules, thoughts, and activities, making them key targets for analysis in digital forensics. Each year, new note-taking applications are released, most of which include lock features to protect user data. However, these security features can create challenges for authorized investigators attempting to access and analyze application data. This paper aims to support investigators by conducting a static analysis of Android-based note-taking applications. It identifies how and where data is stored and explains methods for extracting and decrypting encrypted data. Based on the analysis, the paper concludes by proposing future research directions in the field of digital forensics.
Last updated:  2024-12-12
Post-Quantum Secure Channel Protocols for eSIMs
Luk Bettale, Emmanuelle Dottax, and Laurent Grémy
The transition to Post-Quantum (PQ) cryptography is increasingly mandated by national agencies and organizations, often involving a phase where classical and PQ primitives are combined into hybrid solutions. In this context, existing protocols must be adapted to ensure quantum resistance while maintaining their security goals. These adaptations can significantly impact performance, particularly on embedded devices. In this article, we focus on standardized protocols which support application management on eSIMs across different modes. This is a complex use-case, involving constrained devices with stringent security requirements. We present PQ adaptations, including both hybrid and fully PQ versions, for all modes. Using ProVerif, we provide automated proofs that verify the security of these PQ variants. Additionally, we analyze the performance impact of implementing PQ protocols on devices, measuring runtime and bandwidth consumption. Our findings highlight the resource overhead associated with achieving post-quantum security for eSIM management.
Last updated:  2024-12-13
Regev's attack on hyperelliptic cryptosystems
Razvan Barbulescu and Gaetan Bisson
Hyperelliptic curve cryptography (HECC) is a candidate for standardization and a competitive alternative to elliptic curve cryptography (ECC). We extend Regev's algorithm to this setting. For genus-two curves relevant to cryptography, this yields a quantum attack up to nine times faster than the state of the art. This implies that HECC is slightly weaker than ECC. In a more theoretical direction, we show that Regev's algorithm obtains its full speedup with respect to Shor's when the genus is high, a setting which is already known to be inadequate for cryptography.
Last updated:  2024-12-12
Exploring the Optimal Differential Characteristics of SM4 (Full Version): Improving Automatic Search by Including Human Insights
Bingqing Li and Ling Sun
This study aims to determine the complete and precise differential properties of SM4, which have remained unknown for over twenty years since the cipher was initially released. A Boolean Satisfiability Problem (SAT) based automatic search approach is employed to achieve the objective. To improve the limited efficiency of a search focused on differential probabilities, we investigate the feasibility of integrating human expertise into the automatic approach to enhance the search speed. This study presents the construction of four new SAT models that describe human-identified specific properties of short differential characteristics. All of these models are integrated into the fundamental model, and the SAT solver is run to assess the acceleration capabilities of the new models. The experimental results indicate that including three of the new models effectively decreases the overall execution time of the SAT solver. Using the novel models, we obtain the first precise minimal values for the number of active S-boxes of SM4 under single-key (complete rounds) and related-key (1-round to 19-round) settings. The first precise upper bound for the differential probabilities of SM4 (1-round to 20-round) is also determined. In addition, we present the first publicly revealed optimal 19-round differential characteristic of SM4.
Last updated:  2024-12-12
Improving Differential-Neural Distinguisher For Simeck Family
Xue Yuan and Qichun Wang
In CRYPTO 2019, Gohr introduced the method of differential-neural cryptanalysis, utilizing neural networks as the underlying distinguishers to obtain distinguishers for (5-8)-round Speck32/64 and subsequently recover keys for 11 and 12 rounds. Inspired by this work, we propose an enhanced neural cryptanalysis framework that combines the Efficient Channel Attention (ECA) module with residual networks. By introducing the channel attention mechanism to emphasize key features and leveraging residual networks to facilitate efficient feature extraction and gradient flow, we achieve improved performance. Additionally, we employ a new data format that combines the ciphertext and the penultimate-round ciphertext as input samples, providing the distinguisher with more useful features. Compared with known results, our work enhances the accuracy of the neural distinguishers for (10-12)-round Simeck32/64 and achieves a new 13-round distinguisher. We also improve the accuracy of the (10-11)-round Simeck48/96 distinguishers and develop new (12-16)-round neural distinguishers. Moreover, we enhance the accuracy of the (14-18)-round Simeck64/128 distinguishers and obtain a new 19-round neural distinguisher. As a result, we achieve the highest accuracy and the longest-round distinguishers for Simeck32/64, Simeck48/96, and Simeck64/128.
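For context, the baseline (Gohr-style) training-data pipeline can be sketched with the Simeck32/64 round function: label-1 samples are ciphertext pairs from plaintexts with a fixed input difference, label-0 samples are pairs of unrelated ciphertexts. The fixed random key schedule, the input difference, and the classic ciphertext-pair feature format are simplifying assumptions; the paper's format additionally feeds in penultimate-round values.

```python
import numpy as np

MASK = 0xFFFF  # Simeck32/64 uses 16-bit words

def rol(x, r):
    return ((x << r) | (x >> (16 - r))) & MASK

def simeck_round(l, r, k):
    # f(x) = (x & (x <<< 5)) ^ (x <<< 1); Feistel update with round key k.
    return (r ^ (l & rol(l, 5)) ^ rol(l, 1) ^ k) & MASK, l

def encrypt(l, r, keys):
    for k in keys:
        l, r = simeck_round(l, r, k)
    return l, r

def make_samples(n, rounds, diff=(0x0000, 0x0040)):
    rng = np.random.default_rng(1)
    keys = rng.integers(0, 1 << 16, rounds)  # simplification: random round keys
    p0l, p0r = rng.integers(0, 1 << 16, n), rng.integers(0, 1 << 16, n)
    labels = rng.integers(0, 2, n)
    # Label 1: second plaintext at the fixed difference; label 0: fresh random.
    p1l = np.where(labels == 1, p0l ^ diff[0], rng.integers(0, 1 << 16, n))
    p1r = np.where(labels == 1, p0r ^ diff[1], rng.integers(0, 1 << 16, n))
    c0, c1 = encrypt(p0l, p0r, keys), encrypt(p1l, p1r, keys)
    return np.stack([*c0, *c1], axis=1), labels  # features: (c0l, c0r, c1l, c1r)

X, y = make_samples(10_000, rounds=5)
print(X.shape, y.mean())  # (10000, 4), ~0.5
```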
Last updated:  2024-12-11
Xiezhi: Toward Succinct Proofs of Solvency
Youwei Deng and Jeremy Clark
A proof of solvency (or proof of reserves) is a zero-knowledge proof conducted by a centralized cryptocurrency exchange to offer evidence that the exchange owns enough cryptocurrency to settle each of its users' balances. The proof seeks to reveal nothing about the finances of the exchange or its users, only the fact that it is solvent. The literature has already started to explore how to make proof size and verifier time independent of the number of (i) users on the exchange, and (ii) addresses used by the exchange. We argue there are a few areas of improvement. First, we propose and implement a full end-to-end argument that is fast for the exchange to prove (minutes), small in size (KBs), and fast to verify (seconds). Second, we deal with the natural conflict between Bitcoin and Ethereum's cryptographic setting (secp256k1) and settings more amenable to succinctness (e.g., pairing-based cryptography) with a novel mapping approach. Finally, we discuss how to adapt the protocol to the concrete parameters of bls12-381 (which is relevant because the bit-decomposition of all user balances will exceed the largest root of unity of the curve for even moderately-sized exchanges).
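The bls12-381 constraint can be made concrete with back-of-the-envelope arithmetic: the scalar field's 2-adicity is 32, so FFT-based proof systems get an evaluation domain of at most $2^{32}$ points, and the bits of all user balances must fit inside it. The 64-bit balance width below is an illustrative assumption, not the paper's parameterization.

```python
MAX_DOMAIN = 2**32        # largest power-of-two root of unity in the bls12-381 scalar field
BITS_PER_BALANCE = 64     # assumed bit-decomposition width per user balance

max_users = MAX_DOMAIN // BITS_PER_BALANCE
print(f"{max_users:,} users before the domain overflows")  # 67,108,864
```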