All papers (Page 26 of 24087 results)

Last updated:  2024-05-24
Closing the Efficiency Gap between Synchronous and Network-Agnostic Consensus
Giovanni Deligios and Mose Mizrahi Erbes
In the consensus problem, $n$ parties want to agree on a common value, even if some of them are corrupt and arbitrarily misbehave. If the parties have a common input $m$, then they must agree on $m$. Protocols solving consensus assume either a synchronous communication network, where messages are delivered within a known time, or an asynchronous network with arbitrary delays. Asynchronous protocols only tolerate $t_a < n/3$ corrupt parties. Synchronous ones can tolerate $t_s < n/2$ corruptions with setup, but their security completely breaks down if the synchrony assumptions are violated. Network-agnostic consensus protocols, as introduced by Blum, Katz, and Loss [TCC'19], are secure regardless of network conditions, tolerating up to $t_s$ corruptions with synchrony and $t_a$ without, under provably optimal assumptions $t_a \leq t_s$ and $2t_s + t_a < n$. Despite efforts to improve their efficiency, all known network-agnostic protocols fall short of the asymptotic complexity of state-of-the-art purely synchronous protocols. In this work, we introduce a novel technique to compile any synchronous and any asynchronous consensus protocol into a network-agnostic one. This process only incurs a small constant number of overhead rounds, so that the compiled protocol matches the optimal round complexity for synchronous protocols. Our compiler also preserves, under a variety of assumptions, the asymptotic communication complexity of state-of-the-art synchronous and asynchronous protocols. Hence, it closes the current efficiency gap between synchronous and network-agnostic consensus. As a plus, our protocols support $\ell$-bit inputs, and can be extended to achieve communication complexity $O(n^2\kappa + \ell n)$ under the assumptions for which this is known to be possible for purely synchronous protocols.
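A quick sanity check of the threshold regime (a minimal Python sketch based only on the bounds stated in the abstract, not on the paper's protocol):

    def network_agnostic_feasible(n: int, t_s: int, t_a: int) -> bool:
        # provably optimal assumptions from Blum, Katz, and Loss [TCC'19]:
        # t_a <= t_s and 2*t_s + t_a < n
        return 0 <= t_a <= t_s and 2 * t_s + t_a < n

    # n = 10 parties: t_s = 4, t_a = 1 is feasible,
    # but t_s = 4, t_a = 2 is not, since 2*4 + 2 = 10 is not < 10.
    assert network_agnostic_feasible(10, 4, 1)
    assert not network_agnostic_feasible(10, 4, 2)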
Last updated:  2024-02-23
Threshold Garbled Circuits with Low Overhead
Schuyler Rosefield, abhi shelat, and LaKyah Tyner
The folklore approach to designing a threshold variant of symmetric cryptographic algorithms involves applying generic MPC methods to secret sharing techniques: the MPC first combines participant input shares using the secret sharing scheme, and then evaluates the cryptographic function on the reconstructed key. Hardening this to be secure against n − 1 malicious parties requires some mechanism to ensure input consistency, e.g., adding MACs to inputs, which consequently increases the number of inputs and gates to the MPC. In many cases, this extra overhead is substantially more than the underlying cost of evaluating the symmetric cryptographic algorithm. We present a scheme that can convert any suitable maliciously secure dishonest-majority boolean-circuit FMPC into a threshold scheme Fthresh with almost no overhead. Specifically, we present an SUC-secure scheme that allows for reactive threshold t-of-n boolean circuit evaluation amongst a group of n parties P, for any t ≤ n, against a malicious adversary that corrupts any number of parties less than the threshold t. Moreover, multiple circuits can be evaluated sequentially, with the secret-shared authenticated outputs of one circuit used subsequently as inputs for a new circuit by any S ⊆ P of size |S| ≥ t. Building upon the works of Wang et al., Hazay et al., and Yang et al. [WRK17, HSSV17, YWZ20] for dishonest-majority FMPC, our key insight is to create threshold versions of the “authenticated bits” used to handle input in these recent n-party garbled circuits protocols. The resulting design incurs a small overhead to produce the reusable “threshold authenticated bits” during preprocessing, and adds no extra communication to evaluate with the authenticated input during the online phase. Using our methods, thresholdizing a boolean circuit has essentially no performance overhead. For example, to compute HMAC, a full Setup+Eval execution of the (n − 2)-out-of-n thresholdized version is approximately 4% more expensive than the state-of-the-art n-party MPC. In contrast, the folklore method is approximately 100% more expensive. This overhead is especially pronounced for small circuits such as AES, which has 6800 gates and thus incurs the most relative overhead from thresholdizing. Considering only the online Eval cost, our approach can evaluate AES blocks at 2.3 blocks/s with 16 parties, exceeding the baseline MPC cost without preprocessing and surpassing the folklore method, which only achieves 0.33 blocks/s. Ultimately, this result makes threshold boolean circuit MPC as feasible as any MPC application.
Last updated:  2024-07-04
Alternative Key Schedules for the AES
Christina Boura, Patrick Derbez, and Margot Funk
The AES block cipher is today the most important and analyzed symmetric algorithm. While all versions of the AES are known to be secure in the single-key setting, this is not the case in the related-key scenario. In this article we try to answer the question of whether the AES would resist differential-like related-key attacks better if its key schedule were different. For this, we search for alternative permutation-based key schedules by extending the work of Khoo et al. at ToSC 2017 and Derbez et al. at SAC 2018. We first show that the model of Derbez et al. was flawed. Then, we develop different approaches together with MILP-based tools to find good permutations that could be used as the key schedule for AES-128, AES-192 and AES-256. Our methods permitted us to find permutations that outperform the permutation exhibited by Khoo et al. for AES-128. Moreover, our new approach based on two MILP models that call one another allowed us to handle a larger search space and thus to search for alternative key schedules for the two bigger versions of AES. With this method we found permutations for AES-192 and AES-256 that provide better resistance to related-key differential attacks. Most importantly, we show that these variants can resist full-round boomerang attacks.
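To illustrate the kind of object being searched for (a minimal sketch; the byte permutation below is hypothetical and chosen only for illustration, not one of the permutations found in the paper), a permutation-based key schedule simply reorders the bytes of the key state each round:

    # Hypothetical permutation-based key schedule for a 16-byte key state:
    # each round key is obtained by permuting the byte positions of the
    # previous one (no S-boxes or round constants in this bare sketch).
    PERM = [11, 4, 1, 14, 9, 6, 3, 0, 13, 10, 7, 2, 5, 12, 15, 8]

    def round_keys(master_key: bytes, rounds: int) -> list[bytes]:
        assert len(master_key) == 16
        state = list(master_key)
        keys = [bytes(state)]
        for _ in range(rounds):
            state = [state[i] for i in PERM]
            keys.append(bytes(state))
        return keys

    rks = round_keys(bytes(range(16)), 10)  # 11 round keys, AES-128-like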
Last updated:  2024-11-07
Exploring the Advantages and Challenges of Fermat NTT in FHE Acceleration
Andrey Kim, Ahmet Can Mert, Anisha Mukherjee, Aikata Aikata, Maxim Deryabin, Sunmin Kwon, HyungChul Kang, and Sujoy Sinha Roy
Recognizing the importance of fast and resource-efficient polynomial multiplication in homomorphic encryption, in this paper we design a multiplier-less number theoretic transform using a Fermat number as an auxiliary modulus. To make this algorithm scalable with the degree of the polynomial, we apply a univariate-to-multivariate polynomial ring transformation. We develop an accelerator architecture for fully homomorphic encryption using these algorithmic techniques for efficient multivariate polynomial multiplication. For practical homomorphic encryption application benchmarks, the hardware accelerator achieves a 1,200$\times$ speed-up compared to software implementations. Finally, we conclude the paper by discussing the advantages and limitations of the proposed polynomial multiplication method.
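The "multiplier-less" idea can be seen at toy scale (a sketch assuming the small Fermat number $F_3 = 2^8 + 1 = 257$; the paper's multivariate ring transformation and hardware architecture are not reproduced here): modulo 257 the element 2 is a 16th root of unity, so every twiddle-factor multiplication in the forward transform is a bit-shift followed by reduction:

    MOD, N = 257, 16      # F_3 = 2^8 + 1; 2 has multiplicative order 16 mod 257

    def ntt(a):           # naive O(N^2) forward transform: twiddles are powers
        return [sum((a[j] << ((i * j) % N)) % MOD for j in range(N)) % MOD
                for i in range(N)]   # of two, so each product is a bit-shift

    def intt(A):          # inverse: 2^-1 = 129 and 16^-1 = 241 modulo 257
        b = [sum(A[j] * pow(129, (i * j) % N, MOD) for j in range(N)) % MOD
             for i in range(N)]
        return [x * 241 % MOD for x in b]

    a = list(range(16))
    assert intt(ntt(a)) == a          # round-trip sanity check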
Last updated:  2025-03-29
The Complexity of Algebraic Algorithms for LWE
Matthias Johann Steiner
Arora & Ge introduced a noise-free polynomial system to compute the secret of a Learning With Errors (LWE) instance via linearization. Albrecht et al. later utilized the Arora-Ge polynomial model to study the complexity of Gröbner basis computations on LWE polynomial systems under the assumption of semi-regularity. In this paper we revisit the Arora-Ge polynomial model and prove that it satisfies a genericity condition recently introduced by Caminata & Gorla, called being in generic coordinates. For polynomial systems in generic coordinates one can always estimate the complexity of DRL Gröbner basis computations in terms of the Castelnuovo-Mumford regularity and hence also via the Macaulay bound. Moreover, we generalize the Gröbner basis algorithm of Semaev & Tenti to arbitrary polynomial systems with a finite degree of regularity. In particular, existence of this algorithm yields another approach to estimating the complexity of DRL Gröbner basis computations in terms of the degree of regularity. In practice, the degree of regularity of LWE polynomial systems is not known, though one can always estimate the lowest achievable degree of regularity. Consequently, from a designer's worst-case perspective this approach yields sub-exponential complexity estimates for general, binary secret and binary error LWE. In recent works by Dachman-Soled et al., the hardness of LWE in the presence of side information was analyzed. Utilizing their framework we discuss how hints can be incorporated into LWE polynomial systems and how they affect the complexity of Gröbner basis computations.
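For binary-error LWE, the Arora-Ge equations are easy to state concretely (a minimal sketch with toy parameters): each sample $(a, b)$ with $b = \langle a, s\rangle + e \bmod q$ and $e \in \{0,1\}$ satisfies the noise-free polynomial $(b - \langle a, x\rangle)(b - \langle a, x\rangle - 1) \equiv 0 \pmod q$ at $x = s$:

    import random

    q, n = 97, 4
    s = [random.randrange(q) for _ in range(n)]          # the secret (toy sizes)

    def lwe_sample():
        a = [random.randrange(q) for _ in range(n)]
        e = random.randrange(2)                          # binary error e in {0, 1}
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
        return a, b

    # Arora-Ge for binary error: every sample satisfies the noise-free equation
    # (b - <a, x>) * (b - <a, x> - 1) = 0 (mod q) at x = s; collecting many such
    # equations and linearizing (or running a Groebner basis) recovers s.
    for _ in range(10):
        a, b = lwe_sample()
        t = (b - sum(ai * si for ai, si in zip(a, s))) % q
        assert t * (t - 1) % q == 0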
Last updated:  2024-02-23
Trapdoor Memory-Hard Functions
Benedikt Auerbach, Christoph U. Günther, and Krzysztof Pietrzak
Memory-hard functions (MHF) are functions whose evaluation provably requires a lot of memory. While MHFs are an unkeyed primitive, it is natural to consider the notion of trapdoor MHFs (TMHFs). A TMHF is like an MHF, but when sampling the public parameters one also samples a trapdoor which allows evaluating the function much cheaper. Biryukov and Perrin (Asiacrypt'17) were the first to consider TMHFs and put forth a candidate TMHF construction called Diodon that is based on the Scrypt MHF (Percival, BSDCan'09). To allow for a trapdoor, Scrypt's initial hash chain is replaced by a sequence of squares in a group of unknown order where the order of the group is the trapdoor. For a length $n$ sequence of squares and a group of order $N$, Diodon's cumulative memory complexity (CMC) is $O(n^2\log N)$ without the trapdoor and $O(n \log(n) \log(N)^2)$ with knowledge of it. While Scrypt is proven to be optimally memory-hard in the random oracle model (Alwen et al., Eurocrypt'17), Diodon's memory-hardness has not been proven so far. In this work, we fill this gap by rigorously analyzing a specific instantiation of Diodon. We show that its CMC is lower bounded by $\Omega(\frac{n^2}{\log n} \log N)$ which almost matches the upper bound. Our proof is based on Alwen et al.'s lower bound on Scrypt's CMC but requires non-trivial modifications due to the algebraic structure of Diodon. Most importantly, our analysis involves a more elaborate compression argument and a solvability criterion for certain systems of Diophantine equations.
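The trapdoor mechanism is easy to illustrate (a toy sketch of the squaring chain only, omitting the surrounding Scrypt-like memory-hard structure): without the group order, computing $x^{2^n}$ takes $n$ sequential squarings, while the trapdoor lets one reduce the exponent first:

    p, q = 1009, 1013                     # toy primes; real moduli are far larger
    N, order = p * q, (p - 1) * (q - 1)   # the group order is the trapdoor
    x, n = 5, 10**6

    # without the trapdoor: n sequential squarings (Diodon's memory-hard
    # evaluation is built around this chain)
    y = x
    for _ in range(n):
        y = y * y % N

    # with the trapdoor: reduce the exponent 2^n modulo the group order first
    assert y == pow(x, pow(2, n, order), N)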
Last updated:  2024-08-09
Aggregating Falcon Signatures with LaBRADOR
Marius A. Aardal, Diego F. Aranha, Katharina Boudgoust, Sebastian Kolby, and Akira Takahashi
Several prior works have suggested using non-interactive arguments of knowledge with short proofs to aggregate signatures of Falcon, which is among the first post-quantum signatures selected for standardization by NIST. LaBRADOR in particular, based on standard structured lattice assumptions and published at CRYPTO’23, seems promising for realizing this task. However, no prior work has tackled this idea in a rigorous way. In this paper, we thoroughly show how to aggregate Falcon signatures using LaBRADOR. We start by providing the first complete knowledge soundness analysis for the non-interactive version of LaBRADOR. Here, the multi-round and recursive nature of LaBRADOR requires a complex and thorough analysis. For this purpose, we introduce the notion of predicate special soundness (PSS). This is a general framework for evaluating the knowledge error of complex Fiat-Shamir arguments of knowledge protocols in a modular fashion, which we believe to be of independent interest. We then explain the exact steps to take in order to adapt the non-interactive LaBRADOR proof system for aggregating Falcon signatures and provide concrete proof size estimates. Additionally, we formalize the folklore approach of obtaining aggregate signatures from the class of hash-then-sign signatures through arguments of knowledge.
Last updated:  2024-03-20
A Zero-Dimensional Gröbner Basis for Poseidon
Matthias Johann Steiner
In this paper we construct dedicated weight orders $>$ so that a $>$-Gröbner basis of Poseidon can be found via linear transformations for the preimage as well as the CICO problem. In particular, with our Gröbner bases we can exactly compute the $\mathbb{F}_q$-vector space dimension of the quotient space for all possible Poseidon configurations. This in turn resolves previous attempts to assess the security of Poseidon against Gröbner basis attacks, since the vector space dimension quantifies the complexity of computing the variety of a zero-dimensional polynomial system.
Last updated:  2024-02-23
NiLoPher: Breaking a Modern SAT-Hardened Logic-Locking Scheme via Power Analysis Attack
Prithwish Basu Roy, Johann Knechtel, Akashdeep Saha, Saideep Sreekumar, Likhitha Mankali, Mohammed Nabeel, Debdeep Mukhopadhyay, Ramesh Karri, and Ozgur Sinanoglu
LoPher brings, for the first time, cryptographic security promises to the field of logic locking, in a bid to end the game of cat-and-mouse seen in this field. Toward this end, LoPher embeds the circuitry to lock within multiple rounds of a block cipher, by carefully configuring all the S-Boxes. To realize general Boolean functionalities and to support varying interconnect topologies, LoPher also introduces additional layers of MUXes between S-Boxes and the permutation operations. The authors of LoPher claim resilience against SAT-based attacks in particular. Here, we show the first successful attack on LoPher. First, we uncover a significant limitation of LoPher's key-space configuration, resulting in large numbers of equivalent keys and, thus, a largely simplified search space for attackers in practice. Second, motivated by their proven effectiveness against ciphers, we employ a power side-channel attack against LoPher. We find that ISCAS-85 benchmarks locked with LoPher can all be broken with a few thousand traces. Finally, we also outline a simple and low-cost countermeasure to render LoPher more secure.
Last updated:  2024-09-20
C'est très CHIC: A compact password-authenticated key exchange from lattice-based KEM
Afonso Arriaga, Manuel Barbosa, Stanislaw Jarecki, and Marjan Skrobot
Driven by NIST's post-quantum standardization efforts and the selection of Kyber as a lattice-based Key-Encapsulation Mechanism (KEM), several Password Authenticated Key Exchange (PAKE) protocols have been recently proposed that leverage a KEM to create an efficient, easy-to-implement and secure PAKE. In two recent works, Beguinet et al. (ACNS 2023) and Pan and Zeng (ASIACRYPT 2023) proposed generic compilers that transform a KEM into a PAKE, relying on an Ideal Cipher (IC) defined over a group. However, although an IC on a group is often used in cryptographic protocols, special care must be taken to instantiate such objects in practice, especially when a low-entropy key is used. To address this concern, Dos Santos et al. (EUROCRYPT 2023) proposed a relaxation of the IC model under the Universal Composability (UC) framework called Half-Ideal Cipher (HIC). They demonstrate how to construct a UC-secure PAKE protocol, EKE-KEM, from a KEM and a modified 2-round Feistel construction called m2F. Remarkably, the m2F sidesteps the use of an IC over a group, and instead employs an IC defined over a fixed-length bitstring domain, which is easier to instantiate. In this paper, we introduce a novel PAKE protocol called CHIC that improves the communication and computation efficiency of EKE-KEM, by avoiding the HIC abstraction. Instead, we split the KEM public key into two parts and use the m2F directly, without further randomization. We provide a detailed proof of the security of CHIC and establish precise security requirements for the underlying KEM, including one-wayness and anonymity of ciphertexts, and uniformity of public keys. Our findings extend to general KEM-based EKE-style protocols and show that a passively secure KEM is not sufficient. In this respect, our results align with those of Pan and Zeng (ASIACRYPT 2023), but contradict the analyses of KEM-to-PAKE compilers by Beguinet et al. (ACNS 2023) and Dos Santos et al. (EUROCRYPT 2023). Finally, we provide an implementation of CHIC, highlighting its minimal overhead compared to the underlying KEM -- Kyber. An interesting aspect of the implementation is that we reuse the rejection sampling procedure in Kyber reference code to address the challenge of hashing onto the public key space. As of now, to the best of our knowledge, CHIC stands as the most efficient PAKE protocol from black-box KEM that offers rigorously proven UC security.
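For intuition, a generic 2-round Feistel over fixed-length bitstrings looks as follows (a sketch only: this is not the exact m2F of Dos Santos et al., whose modified construction places an IC on one branch; the SHA-256-based round functions here merely stand in for idealized components):

    import hashlib

    def F(key: bytes, rnd: int, x: bytes) -> bytes:
        # hash-based round function standing in for an idealized component
        return hashlib.sha256(key + bytes([rnd]) + x).digest()[:len(x)]

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def feistel2_enc(key, L, R):
        R = xor(R, F(key, 1, L))    # round 1
        L = xor(L, F(key, 2, R))    # round 2
        return L, R

    def feistel2_dec(key, L, R):
        L = xor(L, F(key, 2, R))
        R = xor(R, F(key, 1, L))
        return L, R

    k, l, r = b"pw-derived-key", b"left-half-000000", b"right-half-11111"
    assert feistel2_dec(k, *feistel2_enc(k, l, r)) == (l, r)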
Last updated:  2024-02-23
SweetPAKE: Key exchange with decoy passwords
Afonso Arriaga, Peter Y.A. Ryan, and Marjan Skrobot
Decoy accounts are often used as an indicator of the compromise of sensitive data, such as password files. An attacker targeting only specific known-to-be-real accounts might, however, remain undetected. A more effective method proposed by Juels and Rivest at CCS'13 is to maintain additional fake passwords associated with each account. An attacker who gains access to the password file is unable to tell apart real passwords from fake passwords, and the attempted usage of a false password immediately sets off an alarm indicating a password file compromise. Password-Authenticated Key Exchange (PAKE) has long been recognised for its strong security guarantees when it comes to low-entropy password authentication and secure channel establishment, without having to rely on the setup of a PKI. In this paper, we introduce SweetPAKE, a new cryptographic primitive that offers the same security guarantees as PAKE for key exchange, while allowing clients with a single password to authenticate against servers with $n$ candidate passwords for that account and establish a secure channel. Additional security properties are identified and formalized to ensure that (a) high-entropy session keys are indistinguishable from random, even if later on the long-term secret password becomes corrupted (forward secrecy); (b) upon password file leakage, an adversary cannot tell apart real from fake passwords; and (c) a malicious client cannot trigger a false alarm. We capture these properties by extending well-established game-based definitions of PAKE. Furthermore, we propose a new UC formulation that comprehensively unifies both SweetPAKE (session key indistinguishability and sugarword indistinguishability) and a related notion known as Oblivious-PAKE. Finally, we propose efficient SweetPAKE and Oblivious-PAKE protocols constructed from Password-Authenticated Public-Key Encryption (PAPKE) that satisfy all the proposed notions.
Last updated:  2024-06-01
Concretely Efficient Lattice-based Polynomial Commitment from Standard Assumptions
Intak Hwang, Jinyeong Seo, and Yongsoo Song
Polynomial commitments are a crucial cryptographic primitive in constructing zkSNARKs. Most practical constructions to date are either vulnerable against quantum adversaries or lack homomorphic properties, which are essential for recursive proof composition and proof batching. Recently, lattice-based constructions have drawn attention for their potential to achieve all the desirable properties, though they often suffer from concrete inefficiency or rely on newly introduced assumptions requiring further cryptanalysis. In this paper, we propose a novel construction of a polynomial commitment scheme based on standard lattice-based assumptions. Our scheme achieves a square-root proof size and verification complexity, ensuring concrete efficiency in proof size, proof generation, and verification. Additionally, it features a transparent setup and public verifiability. When compared with Brakedown (CRYPTO 2023), a recent code-based construction, our scheme offers comparable performance across all metrics. Furthermore, its proof size is approximately 4.1 times smaller than that of SLAP (EUROCRYPT 2024), a recent lattice-based construction.
Last updated:  2025-02-20
Single-Input Functionality against a Dishonest Majority: Practical and Round-Optimal
Zhelei Zhou, Bingsheng Zhang, Hong-Sheng Zhou, and Kui Ren
In this work, we focus on Single-Input Functionality (SIF), which can be viewed as a special case of MPC. In a SIF, only one distinguished party, called the dealer, holds a private input. SIF allows the dealer to perform a computation task with other parties without revealing any additional information about the private input. SIF has diverse applications, including multiple-verifier zero-knowledge and verifiable relation sharing. As our main contribution, we propose the first 1-round SIF protocol against a dishonest majority in the preprocessing model, which is highly efficient. Prior works either require at least 2 rounds of online communication (Yang and Wang, Asiacrypt 2022; Baum et al., CCS 2022; Zhou et al., EuroS&P 2024) or are only feasibility results (Lepinski et al., TCC 2005; Applebaum et al., Crypto 2022). We show the necessity of broadcast channels by formally proving that 1-round SIF is impossible to achieve in the preprocessing model if no broadcast channels are available. We implement our protocol and conduct extensive experiments to illustrate its practical efficiency.
Last updated:  2024-07-12
A Two-Layer Blockchain Sharding Protocol Leveraging Safety and Liveness for Enhanced Performance
Yibin Xu, Jingyi Zheng, Boris Düdder, Tijs Slaats, and Yongluan Zhou
Sharding is a critical technique that enhances the scalability of blockchain technology. However, existing protocols often assume adversarial nodes in general terms, without considering the different types of attacks, which limits transaction throughput at runtime even though attacks on liveness could be mitigated. There have been attempts to increase transaction throughput by handling the attacks separately; however, they have security vulnerabilities. This paper introduces Reticulum, a novel sharding protocol that overcomes these limitations and achieves enhanced scalability in a blockchain network without security vulnerabilities. Reticulum employs a two-phase design that dynamically adjusts transaction throughput based on runtime adversarial attacks on either or both liveness and safety. It consists of `control' and `process' shards in two layers, corresponding to the two phases. Process shards are subsets of control shards, with each process shard expected to contain at least one honest node with high confidence. Conversely, control shards are expected to have a majority of honest nodes with high confidence. Reticulum leverages unanimous voting in the first phase to involve fewer nodes in accepting/rejecting a block, allowing more parallel process shards. The control shard finalizes the decision made in the first phase and serves as a lifeline to resolve disputes when they surface. Experiments demonstrate that the unique design of Reticulum empowers high transaction throughput and robustness in the face of different types of attacks in the network, making it superior to existing sharding protocols for blockchain networks.
Last updated:  2024-02-29
Single Pass Client-Preprocessing Private Information Retrieval
Arthur Lazzaretti and Charalampos Papamanthou
Recently, many works have considered Private Information Retrieval (PIR) with client-preprocessing: In this model a client and a server jointly run a preprocessing phase, after which client queries can run in time sublinear in the size of the database. In addition, such approaches store no additional bits per client at the server, allowing us to scale PIR to a large number of clients. In this work, we propose the first client-preprocessing PIR scheme with ``single pass'' client-preprocessing. In particular, our scheme is concretely optimal with respect to preprocessing, in the sense that it requires exactly one linear pass over the database. This is in stark contrast with existing works, whose preprocessing is proportional to $\lambda \cdot N$, where $\lambda$ is the security parameter (e.g., $\lambda=128$). Our approach yields a preprocessing speedup of 45-100$\times$ and a query speedup of up to 20$\times$ when compared to previous state-of-the-art schemes (e.g., Checklist, USENIX 2021), making preprocessing PIR more attractive for a myriad of use cases that are ``session-based''. In addition to fast preprocessing, our scheme features extremely fast updates (additions and edits)---in constant time. Previously, the best known approach for handling updates in client-preprocessing PIR had time complexity $O(\log N)$, while also adding a $\log N$ factor to the bandwidth. We implement our update algorithm and show concrete speedups of about 20$\times$ in update time when compared to the previous state-of-the-art updatable scheme (e.g., Checklist, USENIX 2021).
Last updated:  2024-04-24
Simple constructions of linear-depth t-designs and pseudorandom unitaries
Tony Metger, Alexander Poremba, Makrand Sinha, and Henry Yuen
Uniformly random unitaries, i.e., unitaries drawn from the Haar measure, have many useful properties, but cannot be implemented efficiently. This has motivated a long line of research into random unitaries that ``look'' sufficiently Haar random while also being efficient to implement. Two different notions of derandomisation have emerged: $t$-designs are random unitaries that information-theoretically reproduce the first $t$ moments of the Haar measure, and pseudorandom unitaries (PRUs) are random unitaries that are computationally indistinguishable from Haar random. In this work, we take a unified approach to constructing $t$-designs and PRUs. For this, we introduce and analyse the ``$PFC$ ensemble'', the product of a random computational basis permutation $P$, a random binary phase operator $F$, and a random Clifford unitary $C$. We show that this ensemble reproduces exponentially high moments of the Haar measure. We can then derandomise the $PFC$ ensemble to show the following: 1. Linear-depth $t$-designs. We give the first construction of a (diamond-error) approximate $t$-design with circuit depth linear in $t$. This follows from the $PFC$ ensemble by replacing the random phase and permutation operators with their $2t$-wise independent counterparts. 2. Non-adaptive PRUs. We give the first construction of PRUs with non-adaptive security, i.e., we construct unitaries that are indistinguishable from Haar random to polynomial-time distinguishers that query the unitary in parallel on an arbitrary state. This follows from the $PFC$ ensemble by replacing the random phase and permutation operators with their pseudorandom counterparts. 3. Adaptive pseudorandom isometries. We show that if one considers isometries (rather than unitaries) from $n$ to $n + \omega(\log n)$ qubits, a small modification of our PRU construction achieves adaptive security, i.e., even a distinguisher that can query the isometry adaptively in sequence cannot distinguish it from Haar random isometries. This gives the first construction of adaptive pseudorandom isometries. Under an additional conjecture, this proof also extends to adaptive PRUs.
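A sample from the $PFC$ ensemble can be assembled numerically on a few qubits (a dense-matrix sketch for illustration only, assuming Qiskit is available for the random Clifford component):

    import numpy as np
    from qiskit.quantum_info import random_clifford

    n = 3                                    # qubits; dense matrices, so keep it tiny
    d = 2 ** n
    rng = np.random.default_rng(0)

    P = np.eye(d)[rng.permutation(d)]        # random computational-basis permutation
    F = np.diag(rng.choice([1.0, -1.0], d))  # random binary phase operator
    C = random_clifford(n, seed=1).to_operator().data  # random Clifford unitary

    U = P @ F @ C                            # one sample from the PFC ensemble
    assert np.allclose(U @ U.conj().T, np.eye(d))      # sanity check: U is unitary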
Last updated:  2024-02-22
Recommendations for the Design and Validation of a Physical True Random Number Generator Integrated in an Electronic Device
David Lubicz and Viktor Fischer
These Recommendations describe essential elements of the design of a secure physical true random number generator (PTRNG) integrated in an electronic device. Based on these elements, we describe and justify requirements for the design, validation and testing of PTRNGs, which are intended to guarantee the security of generators aimed at cryptographic applications.
Last updated:  2024-03-11
Diving Deep into the Preimage Security of AES-like Hashing
Shiyao Chen, Jian Guo, Eik List, Danping Shi, and Tianyu Zhang
Since the seminal works by Sasaki and Aoki, Meet-in-the-Middle (MITM) attacks have been recognized as an effective technique for preimage and collision attacks on hash functions. At Eurocrypt 2021, Bao et al. automated MITM attacks on AES-like hashing and improved upon the best manual result. The attack framework has been refined by subsequent works, yet it remains far from complete. This paper elucidates three key contributions dedicated to further generalizing the idea of MITM and refining the automatic model on AES-like hashing. (1) We introduce S-box linearization to MITM pseudo-preimage attacks on AES-like hashing. The technique fits perfectly with superposition states, preserving information after the S-box at an affordable cost. (2) We propose distributed initial structures, an extension of the original concept of initial states, which selects initial degrees of freedom in a more versatile manner to enlarge the search space. (3) We exploit the structural similarities between encryption and key schedule in constructions (e.g. Whirlpool and Streebog) to model propagations more accurately and avoid repeated costs. Armed with these innovative techniques, we further empower the MITM framework and improve the attack results on AES-like designs for preimage and collision. We obtain the first preimage attacks on 10-round AES-192, 10-round Rijndael-192/256, and 7.75-round Whirlpool, reduced time and/or memory complexities for preimage attacks on 5- and 6-round Whirlpool and 7.5- and 8.5-round Streebog, as well as improved collision attacks on 6- and 6.5-round Whirlpool.
Last updated:  2024-07-25
Divide and Surrender: Exploiting Variable Division Instruction Timing in HQC Key Recovery Attacks
Robin Leander Schröder, Stefan Gast, and Qian Guo
We uncover a critical side-channel vulnerability in the Hamming Quasi-Cyclic (HQC) round 4 optimized implementation arising due to the use of the modulo operator. In some cases, compilers optimize uses of the modulo operator with compile-time known divisors into constant-time Barrett reductions. However, this optimization is not guaranteed: for example, when a modulo operation is used in a loop the compiler may emit division (div) instructions which have variable execution time depending on the numerator. When the numerator depends on secret data, this may yield a timing side-channel. We name vulnerabilities of this kind Divide and Surrender (DaS) vulnerabilities. For processors supporting Simultaneous Multithreading (SMT) we propose a new approach called DIV-SMT which enables precisely measuring small division timing variations using scheduler and/or execution unit contention. We show that using only 100 such side-channel traces we can build a Plaintext-Checking (PC) oracle with above 90% accuracy. Our approach may also prove applicable to other instances of the DaS vulnerability, such as KyberSlash. We stress that exploitation with DIV-SMT requires co-location of the attacker on the same physical core as the victim. We then apply our methodology to HQC and present a novel way to recover HQC secret keys faster, achieving an 8-fold decrease in the number of idealized oracle queries when compared to previous approaches. Our new PC oracle attack uses our newly developed Zero Tester method to quickly determine whether an entire block of bits contains only zero-bits. The Zero Tester method enables the DIV-SMT powered attack on HQC-128 to complete in under 2 minutes on our targeted AMD Zen2 machine.
Last updated:  2024-02-21
New Models for the Cryptanalysis of ASCON
Mathieu Degré, Patrick Derbez, Lucie Lahaye, and André Schrottenloher
This paper focuses on the cryptanalysis of the ASCON family using automatic tools. We analyze two different problems with the goal of obtaining new modelings, both simpler and less computationally heavy than those of previous works (all our models require only a small amount of code and run on regular desktop computers). The first problem is the search for Meet-in-the-middle attacks on reduced-round ASCON-Hash. Starting from the MILP modeling of Qin et al. (EUROCRYPT 2023 & ePrint 2023), we rephrase the problem in SAT, which significantly accelerates solving and removes the need for the ``weak diffusion structure'' heuristic. This allows us to reduce the memory complexity of Qin et al.'s attacks and to prove some optimality results. The second problem is the search for lower bounds on the probability of differential characteristics for the ASCON permutation. We introduce a lossy MILP encoding of the propagation rules based on the Hamming weight, in order to quickly find lower bounds which are comparable to the state of the art. We find a small improvement over the existing bound on 7 rounds.
Last updated:  2024-02-21
Accelerating Training and Enhancing Security Through Message Size Optimization in Symmetric Cryptography
ABHISAR, Madhav Yadav, and Girish Mishra
This research extends Abadi and Andersen's exploration of neural networks using secret keys for information protection in multiagent systems. Focusing on enhancing confidentiality properties, we employ end-to-end adversarial training with neural networks Alice, Bob, and Eve. Unlike prior work limited to 64-bit messages, our study spans message sizes from 4 to 1024 bits, varying batch sizes and training steps. An innovative aspect involves training model Bob to approach a minimal error value close to zero and examining its effect on the feasibility of the model. This research unveils the neural networks' adaptability and scalability in encryption and decryption across diverse scenarios, offering valuable insights into their optimization potential for secure communication.
Last updated:  2024-09-18
Attacking ECDSA with Nonce Leakage by Lattice Sieving: Bridging the Gap with Fourier Analysis-based Attacks
Yiming Gao, Jinghui Wang, Honggang Hu, and Binang He
The Hidden Number Problem (HNP) has found extensive applications in side-channel attacks against cryptographic schemes, such as ECDSA and Diffie-Hellman. There are two primary algorithmic approaches to solving the HNP: lattice-based attacks and Fourier analysis-based attacks. Lattice-based attacks exhibit better efficiency and require fewer samples when sufficiently long substrings of the nonces are known. However, they face significant challenges when only a small fraction of the nonce is leaked, such as 1-bit leakage, and their performance degrades in the presence of errors. In this paper, we address an open question by introducing an algorithmic tradeoff that significantly bridges the gap between these two approaches. By introducing a parameter $x$ to modify Albrecht and Heninger's lattice, the lattice dimension is reduced by approximately $(\log_2{x})/l$, where $l$ represents the number of leaked bits. We present a series of new methods, including the interval reduction algorithm, several predicates, and the pre-screening technique. Furthermore, we extend our algorithms to solve the HNP with erroneous input. Our attack outperforms existing state-of-the-art lattice-based attacks against ECDSA. We break several records including 1-bit and less than 1-bit leakage on a 160-bit curve, while the best previous lattice-based attack for 1-bit leakage was conducted only on a 112-bit curve.
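The textbook integer-lattice embedding for the HNP (the kind of lattice these attacks run LLL/BKZ on; this sketch shows the standard construction, not the paper's modified lattice with the parameter $x$) can be built and sanity-checked as follows:

    import random

    q, l, m = (1 << 31) - 1, 8, 6     # 2^31 - 1 is prime; l = known-bits parameter
    alpha = random.randrange(q)        # the hidden number
    E = q >> (l + 1)                   # error bound: |e_i| < q / 2^(l+1)
    S = 1 << (l + 1)                   # column scaling factor

    ts = [random.randrange(q) for _ in range(m)]
    es = [random.randrange(-E, E) for _ in range(m)]
    bs = [(alpha * t + e) % q for t, e in zip(ts, es)]

    # rows: q*S on the diagonal; t-row (.., t_j*S, .., 1, 0); b-row (.., b_j*S, .., 0, q)
    B = [[0] * (m + 2) for _ in range(m + 2)]
    for i in range(m):
        B[i][i] = q * S
    B[m][:m] = [t * S for t in ts]; B[m][m] = 1
    B[m + 1][:m] = [b * S for b in bs]; B[m + 1][m + 1] = q

    # planted short vector: b-row - alpha * t-row - suitable multiples of the q-rows
    k = [(bs[i] - alpha * ts[i] - es[i]) // q for i in range(m)]   # exact integers
    v = [B[m + 1][j] - alpha * B[m][j] - sum(k[i] * B[i][j] for i in range(m))
         for j in range(m + 2)]
    assert v == [e * S for e in es] + [-alpha, q]
    assert max(abs(x) for x in v) <= q   # short; LLL/BKZ (e.g., fpylll) would find it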
Last updated:  2024-09-13
An Efficient Hash Function for Imaginary Class Groups
Kostas Kryptos Chalkias, Jonas Lindstrøm, and Arnab Roy
This paper presents a new efficient hash function for imaginary class groups. Many class group based protocols, such as verifiable delay functions, timed commitments and accumulators, rely on the existence of an efficient and secure hash function, but there are not many concrete constructions available in the literature, and existing constructions are too inefficient for practical use cases. Our novel approach, building on Wesolowski's initial scheme, achieves a 200-fold increase in computation speed, making it exceptionally practical for real-world applications. This optimisation is achieved at the cost of a smaller image of the hash function, but we show that the image is still sufficiently large for the hash function to be secure.
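The Wesolowski-style approach the paper builds on can be sketched as follows (a toy illustration assuming SymPy, with a small discriminant; real deployments use a large negative fundamental discriminant): hash a message to a binary quadratic form $(a, b, c)$ of discriminant $D$ by deriving a prime $a$ with $(D|a) = 1$ and solving $b^2 \equiv D \pmod{4a}$:

    import hashlib
    from sympy import nextprime
    from sympy.ntheory import jacobi_symbol, sqrt_mod

    D = -103   # toy fundamental discriminant (D = 1 mod 4)

    def hash_to_form(msg: bytes):
        a = int.from_bytes(hashlib.sha256(msg).digest()[:8], "big")
        while True:
            a = nextprime(a)
            if jacobi_symbol(D, a) == 1:    # D is a square modulo the odd prime a
                break
        b = sqrt_mod(D % a, a)              # b^2 = D (mod a)
        if b % 2 != D % 2:                  # fix parity so that b^2 = D (mod 4a)
            b = a - b
        c = (b * b - D) // (4 * a)
        assert b * b - 4 * a * c == D       # (a, b, c) has discriminant D
        return a, b, c

    print(hash_to_form(b"hello class groups"))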
Last updated:  2024-02-21
Multiplex: TBC-based Authenticated Encryption with Sponge-Like Rate
Thomas Peters, Yaobin Shen, and François-Xavier Standaert
Authenticated Encryption (AE) modes of operation based on Tweakable Block Ciphers (TBC) usually measure efficiency in the number of calls to the underlying primitive per message block. On the one hand, many existing solutions reach a primitive-rate of 1, meaning that each n-bit block of message asymptotically needs a single call to the TBC with output length n. On the other hand, while these modes look optimal in a black-box setting, they become less attractive when leakage comes into play, since all these calls must then be equally well protected to maintain security. Leakage-resistant modes improve this situation, by generating ephemeral keys every constant number of calls. However, rekeying is inherently suboptimal in primitive-rate, since a TBC call can only be used either to refresh a key or to encrypt a block. Even worse, existing solutions achieving almost n bits of security for n-bit secret keys have at most a primitive-rate 2/3. Hence the question: Can we design a highly-secure TBC-based rekeying mode with ``nearly optimal'' primitive-rate? We answer this question positively with Multiplex, a new mode that has primitive-rate d/(d+1) given a TBC with a dn-bit tweak. Multiplex achieves $n-\log_2(dn)$ bits of security for both (i) misuse-resilient CCA security in the black-box setting and (ii) Ciphertext Integrity with Misuse-resistant and unbounded Leakage in encryption and decryption (CIML2). It also provides (iii) confidentiality with leakage up to the birthday bound. Furthermore, Multiplex can run d+1 calls in parallel in each iteration. The combination of these features gives a mode of operation that inherits most of the good implementation features and flexibility of a Duplex sponge -- therefore paving the way towards sound comparisons between TBC-based and permutation-based AE.
Last updated:  2024-02-21
Registered Attribute-Based Signature
Yijian Zhang, Jun Zhao, Ziqi Zhu, Junqing Gong, and Jie Chen
This paper introduces the notion of registered attribute-based signature (registered ABS). Distinctly different from classical attribute-based signature (ABS), registered ABS allows any user to generate their own public/secret key pair and register it with the system. The key curator, a fully transparent entity that retains no secrets, is critical to keeping the system running. Our results can be summarized as follows. - This paper provides the first definition of registered ABS, a notion that had not been defined before. - This paper presents the first generic fully secure registered ABS over the prime-order group from the $k$-Lin assumption in the standard model, which supports various classes of predicates. - This paper gives the first concrete registered ABS scheme for arithmetic branching programs (ABP), which achieves full security in the standard model. Technically, our registered ABS is inspired by the blueprint of Okamoto and Takashima [PKC'11]. We convert the prime-order registered attribute-based encryption (registered ABE) scheme of Zhu et al. [ASIACRYPT'23] via predicate encoding to registered ABS by employing the technique of re-randomization with specialized delegation, while employing a different dual-system method that accounts for the registration property. Prior to our work, Okamoto and Takashima [PKC'13] addressed the key-escrow issue, but they considered a weaker adversary in the random oracle model.
Last updated:  2024-02-21
IDEA-DAC: Integrity-Driven Editing for Accountable Decentralized Anonymous Credentials via ZK-JSON
Shuhao Zheng, Zonglun Li, Junliang Luo, Ziyue Xin, and Xue Liu
Decentralized Anonymous Credential (DAC) systems are increasingly relevant, especially when enhancing revocation mechanisms in the face of complex traceability challenges. This paper introduces IDEA-DAC, a paradigm shift from the conventional revoke-and-reissue methods, promoting direct and Integrity-Driven Editing (IDE) for Accountable DACs, which results in better integrity accountability, traceability, and system simplicity. We further incorporate an Edit-bound Conformity Check that ensures tailored integrity standards during credential amendments using R1CS-based ZK-SNARKs. Delving deeper, we propose ZK-JSON, a unique R1CS circuit design tailored for IDE over generic JSON documents. This design imposes strictly $O(N)$ rank-1 constraints for variable-length JSON documents of up to $N$ bytes in length, encompassing serialization, encryption, and edit-bound conformity checks. Additionally, our circuits only necessitate a one-time compilation, setup, and smart contract deployment for homogeneous JSON documents up to a specified size. While preserving core DAC features such as selective disclosure, anonymity, and predicate provability, IDEA-DAC achieves precise data modification checks without revealing private content, ensuring only authorized edits are permitted. In summary, IDEA-DAC offers an enhanced methodology for large-scale JSON-formatted credential systems, setting a new standard in decentralized identity management efficiency and precision.
Last updated:  2024-02-20
Quantum Pseudorandomness Cannot Be Shrunk In a Black-Box Way
Samuel Bouaziz--Ermann and Garazi Muguruza
Pseudorandom Quantum States (PRS) were introduced by Ji, Liu and Song as a quantum analogue of Pseudorandom Generators. They are ensembles of efficiently computable states that are computationally indistinguishable from Haar random states. Subsequent works have shown that some cryptographic primitives can be constructed from PRSs. Moreover, recent classical and quantum oracle separations of PRS from One-Way Functions strengthen the interest in a purely quantum alternative building block for quantum cryptography, potentially weaker than OWFs. However, our lack of knowledge about extending or shrinking the number of qubits of the PRS output still makes it difficult to reproduce some of the classical proof techniques and results. Short-PRSs, that is, PRSs with logarithmic-size output, have been introduced in the literature along with cryptographic applications, but we still do not know how they relate to PRSs. Here we answer half of the question, by showing that it is not possible to shrink the output of a PRS from polynomial to logarithmic qubit length while still preserving the pseudorandomness property, in a relativized way. More precisely, we show that relative to Kretschmer's quantum oracle (TQC 2021) short-PRSs cannot exist (while PRSs exist, as shown by Kretschmer's work).
Last updated:  2024-04-22
Secure Integrated Sensing and Communication Under Correlated Rayleigh Fading
Martin Mittelbach, Rafael F. Schaefer, Matthieu Bloch, Aylin Yener, and Onur Gunlu
We consider a secure integrated sensing and communication (ISAC) scenario, in which a signal is transmitted through a state-dependent wiretap channel with one legitimate receiver with which the transmitter communicates and one honest-but-curious target that the transmitter wants to sense. The secure ISAC channel is modeled as two state-dependent fast-fading channels with correlated Rayleigh fading coefficients and independent additive Gaussian noise components. Delayed channel outputs are fed back to the transmitter to improve the communication performance and to estimate the channel state sequence. We establish and illustrate an achievable secrecy-distortion region for degraded secure ISAC channels under correlated Rayleigh fading. We also evaluate the inner bound for a large set of parameters to derive practical design insights for secure ISAC methods. The presented results include in particular parameter ranges for which the secrecy capacity of a classical wiretap channel setup is surpassed and for which the channel capacity is approached.
Last updated:  2024-02-20
SoK: Parameterization of Fault Adversary Models - Connecting Theory and Practice
Dilara Toprakhisar, Svetla Nikova, and Ventzislav Nikov
Since the first fault attack by Boneh et al. in 1997, various physical fault injection mechanisms have been explored to induce errors in electronic systems. Subsequent fault analysis methods of these errors have been studied, and successfully used to attack many cryptographic implementations. This poses a significant challenge to the secure implementation of cryptographic algorithms. To address this, numerous countermeasures have been proposed. Nevertheless, these countermeasures are primarily designed to protect against the particular assumptions made by the fault analysis methods. These assumptions, however, encompass only a limited range of the capabilities inherent to physical fault injection mechanisms. In this paper, we narrow our focus to fault attacks and countermeasures specific to ASICs, and introduce a novel parameterized fault adversary model capturing an adversary's control over an ASIC. We systematically map (a) the physical fault injection mechanisms, (b) adversary models assumed in fault analysis, and (c) adversary models used to design countermeasures into our introduced model. This model forms the basis of our survey, a comprehensive exploration covering a broad spectrum of fault attacks and countermeasures within symmetric-key cryptography. Furthermore, our investigation highlights a notable misalignment among the adversary models assumed in countermeasures, fault attacks, and the intrinsic capabilities of the physical fault injection mechanisms. Through this study, we emphasize the need to reevaluate existing fault adversary models, and advocate for the development of a unified model.
Last updated:  2024-02-27
A generic algorithm for efficient key recovery in differential attacks – and its associated tool
Christina Boura, Nicolas David, Patrick Derbez, Rachelle Heim Boissier, and María Naya-Plasencia
Differential cryptanalysis is an old and powerful attack against block ciphers. While different techniques have been introduced throughout the years to improve the complexity of this attack, the key recovery phase remains a tedious and error-prone procedure. In this work, we propose a new algorithm and an associated tool that, given a distinguisher, outputs an efficient key-guessing strategy. Our tool can be applied to SPN ciphers whose linear layer consists of a bit-permutation and whose key schedule is linear or almost linear. It can be used not only to help cryptanalysts find the best differential attack on a given cipher but also to assist designers in their security analysis. We applied our tool to four targets: RECTANGLE, PRESENT-80, SPEEDY-7-192 and GIFT-64. We extend the previous best attack on RECTANGLE-128 by one round and the previous best differential attack against PRESENT-80 by 2 rounds. We improve a previous key recovery step in an attack against SPEEDY and present more efficient key recovery strategies for RECTANGLE-80 and GIFT. Our tool outputs the results in only a second for most targets.
Last updated:  2024-02-20
CAPABARA: A Combined Attack on CAPA
Dilara Toprakhisar, Svetla Nikova, and Ventzislav Nikov
Physical attacks pose a substantial threat to the secure implementation of cryptographic algorithms. While considerable research efforts are dedicated to protecting against passive physical attacks (e.g., side-channel analysis (SCA)), the landscape of protection against other types of physical attacks remains a challenge. Fault attacks (FA), though attracting growing attention in research, still lack the prevalence of provably secure designs when compared to SCA. The realm of combined attacks, which leverage the capabilities of both SCA and FA adversaries, introduces powerful adversarial models, rendering protection against them challenging. This challenge has consequently led to a relatively unexplored area of research, resulting in a notable gap in understanding and efficiently protecting against combined attacks. The CAPA countermeasure, published at CRYPTO 2018, addresses this challenge with a robust adversarial model that goes beyond conventional SCA and FA adversarial models. Drawing inspiration from the principles of Multiparty Computation (MPC), CAPA claims security against higher-order SCA, higher-order fault attacks, and their combination. In this work, we present a combined attack that breaks CAPA within the constraints of its assumed adversarial model. In response, we propose potential fixes to the design of CAPA that increase the complexity of the proposed attack, although not provably thwarting it. With this presented combined attack, we highlight the difficulty of effectively protecting against combined attacks.
Last updated:  2024-02-20
Efficient Zero-Knowledge Arguments and Digital Signatures via Sharing Conversion in the Head
Jules Maire and Damien Vergnaud
We present a novel technique within the MPC-in-the-Head framework, aiming to design efficient zero-knowledge protocols and digital signature schemes. The technique allows for the simultaneous use of additive and multiplicative sharings of secret information, enabling efficient proofs of linear and multiplicative relations. The applications of our technique are manifold. It is first applied to construct zero-knowledge arguments of knowledge for the Double Discrete Logarithm Problem (DDLP). The resulting protocol achieves improved communication complexity without compromising efficiency. We also propose a new zero-knowledge argument of knowledge for the Permuted Kernel Problem. Furthermore, we suggest a short (candidate) post-quantum digital signature scheme constructed from a new one-way function based on simple polynomials known as fewnomials. This scheme offers simplicity and ease of implementation. Finally, we present two additional results inspired by this work but using alternative approaches. We propose a zero-knowledge argument of knowledge of an RSA plaintext for a small public exponent that significantly improves the state-of-the-art communication complexity. We also detail a more efficient forward-backward construction for the DDLP.
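The two sharing types combined by the technique are simple to state (a minimal sketch of the data layout only over a prime field, not of the in-the-head conversion itself): additive shares make linear relations cheap, multiplicative shares make products cheap:

    import random
    from math import prod

    p, n = 2**61 - 1, 5   # prime field, number of parties

    def share_add(s):     # additive: s = x_1 + ... + x_n (mod p)
        xs = [random.randrange(p) for _ in range(n - 1)]
        return xs + [(s - sum(xs)) % p]

    def share_mul(s):     # multiplicative: s = y_1 * ... * y_n (mod p), s != 0
        ys = [random.randrange(1, p) for _ in range(n - 1)]
        return ys + [s * pow(prod(ys), -1, p) % p]

    a, b = 1234, 5678
    assert sum(share_add(a)) % p == a    # linear relation: just add the shares
    assert prod(share_mul(b)) % p == b   # product: just multiply the shares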
Last updated:  2024-02-20
Mirrored Commitment: Fixing ``Randomized Partial Checking'' and Applications
Paweł Lorek, Moti Yung, and Filip Zagórski
Randomized Partial Checking (RPC) was proposed by Jakobsson, Juels, and Rivest and attracted attention as an efficient method of verifying the correctness of the mixing process in numerous applied scenarios. In fact, RPC is a building block for many electronic voting schemes, including Prêt à Voter, Civitas, and Scantegrity II, as well as voting systems used in real-world elections (e.g., in Australia). Mixing is also used in anonymous transfers of cryptocurrencies. A series of works, however, revealed subtle issues with the analyses behind RPC. First, the actual security level of the RPC protocol is far below the claimed bounds: the probability of successfully manipulating $k$ votes is $(\frac{3}{4})^k$ instead of the claimed $\frac{1}{2^k}$. This difference, in turn, negatively affects actual implementations of the notion within existing election systems, since the lengths of concretely implemented procedures were based directly on this parameter. Further, the guarantee that a constant number of mix-servers suffices for privacy also turned out to be incorrect. We conclude from the above that these analyses of mixing processes are not trivial. In this paper, we review the relevant attacks, and we present Mirrored-RPC -- a fix to RPC based on ``mirrored commitment'' which makes it optimally secure; namely, the probability of successfully manipulating $k$ votes is $\frac{1}{2^k}$. Then, we present an analysis of the privacy level of both RPC and mRPC. We show that for $n$ messages, the number of mix-servers (rounds) needed to be $\varepsilon$-close to the uniform distribution in total variation distance is lower bounded by: \[ r(n, \varepsilon) \geq \log_{2}{n \choose 2}/\varepsilon. \] This proof of privacy, in turn, gives insights into the anonymity of various cryptocurrencies (e.g., Zerocash) using anonymizing pools. If a random fraction $q$ of $n$ existing coins is mixed (in each block), then to achieve full anonymity, the number of blocks one needs to run the protocol for is: \[ rb(n, q, \varepsilon) \geq - \frac{\log n + \log (n-1) - \log (2\varepsilon)}{ \log(1-q^2)}. \]
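The quantitative gap, and the round lower bound, can be checked numerically (a small sketch computing the figures stated above):

    from math import comb, log2

    # real vs. originally claimed manipulation probability for k votes
    for k in (5, 10, 20):
        print(k, (3 / 4) ** k, 2.0 ** -k)    # k = 20: ~3.2e-3 vs ~9.5e-7

    def min_rounds(n: int, eps: float) -> float:
        # lower bound on the number of mix-servers (rounds) from the abstract
        return log2(comb(n, 2)) / eps

    print(min_rounds(10**6, 0.01))           # ~3.9e3 rounds for a million messages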
Last updated:  2024-02-20
Practical Improvements to Statistical Ineffective Fault Attacks
Barış Ege, Bob Swinkels, Dilara Toprakhisar, and Praveen Kumar Vadnala
Statistical Fault Attacks (SFA), introduced by Fuhr et al., exploit the statistical bias resulting from injected faults. Unlike prior fault analysis attacks, which require both faulty and correct ciphertexts under the same key, SFA leverages only faulty ciphertexts. At CHES 2018, more powerful attacks called Statistical Ineffective Fault Attacks (SIFA) were proposed. In contrast to the previous fault attacks that utilize faulty ciphertexts, SIFA exploits the distribution of the intermediate values leading to fault-free ciphertexts. As a result, the SIFA attacks were shown to be effective even in the presence of widely used fault injection countermeasures based on detection and infection. In this work, we build upon the core idea of SIFA and provide two main practical improvements over the previously proposed analysis methods. Firstly, we show how to perform SIFA from the input side, which, in contrast to the original SIFA, requires injecting faults in the earlier rounds of an encryption or decryption operation. If we consider the start of the operation as the trigger for fault injection, the cumulative jitter in the first few rounds of a cipher is much lower than in the last rounds. Hence, performing the attack in the first or second round requires a narrower parameter range for fault injection and hence fewer fault injection attempts to recover the secret key. Secondly, in comparison to the straightforward SIFA approach of guessing 32 bits at a time, we propose a chosen-input approach that reduces the guessing effort to 16 bits at a time. This decreases the key search space for full key recovery of an AES-128 implementation from $2^{34}$ to $2^{19}$.
Last updated:  2024-02-20
Toward Malicious Constant-Rate 2PC via Arithmetic Garbling
Carmit Hazay and Yibin Yang
A recent work by Ball, Li, Lin, and Liu [Eurocrypt'23] presented a new instantiation of the arithmetic garbling paradigm introduced by Applebaum, Ishai, and Kushilevitz [FOCS'11]. In particular, Ball et al.'s garbling scheme is the first constant-rate garbled circuit over large enough bounded integer computations, implying the first constant-round constant-rate secure two-party computation (2PC) over bounded integer computations in the presence of semi-honest adversaries. The main source of difficulty in lifting the security of garbling-scheme-based protocols to the malicious setting lies in proving the correctness of the underlying garbling scheme. In this work, we analyze the security of Ball et al.'s scheme in the presence of malicious attacks. - We demonstrate an overflow attack, which is inevitable in this computational model, even if the garbled circuit is fully correct. Our attack follows by defining an adversary, corrupting either the garbler or the evaluator, that chooses a bad input and causes the computation to overflow, thus leaking information about the honest party's input. By utilizing overflow attacks, we show that $1$-bit leakage is necessary for achieving security against a malicious garbler, ruling out the possibility of achieving full malicious security in this model. We further demonstrate a wider range of overflow attacks against a malicious evaluator with more than $1$ bit of leakage. - We boost the security level of Ball et al.'s scheme by utilizing two variants of Vector Oblivious Linear Evaluation, denoted by VOLEc and aVOLE. We present the first constant-round constant-rate 2PC protocol over bounded integer computations, in the presence of a malicious garbler with $1$-bit leakage and a semi-honest evaluator, in the {VOLEc,aVOLE}-hybrid model and being black-box in the underlying group and ring. Compared to the semi-honest variant, our protocol incurs only a constant factor overhead, both in computation and communication. The constant-round and constant-rate properties hold even in the plain model.
Last updated:  2024-02-19
A Concrete Analysis of Wagner's $k$-List Algorithm over $\mathbb{Z}_p$
Antoine Joux, Hunter Kippen, and Julian Loss
Since its introduction by Wagner (CRYPTO `02), the $k$-list algorithm has found significant utility in cryptanalysis. One important application thereof is in computing forgeries on several interactive signature schemes that implicitly rely on the hardness of the ROS problem formulated by Schnorr (ICICS `01). The current best attack strategy for these schemes relies on the conjectured runtime of the $k$-list algorithm over $\mathbb{Z}_p$. The tightest known analysis of Wagner's algorithm over $\mathbb{Z}_p$ is due to Shallue (ANTS `08). However, it hides large polynomial factors and leaves a gap with respect to desirable concrete parameters for the attack. In this work, we develop a degraded version of the $k$-list algorithm which provably enforces the heuristic invariants in Wagner's original algorithm. In the process, we devise and analyze a new list merge procedure that we dub the interval merge. We give a thorough analysis of the runtime and success probability of our degraded algorithm, and show that it beats the projected runtime of the analysis by Shallue for parameters relevant to the generalized ROS attack of Benhamouda et al. (EUROCRYPT `21). For a $256$-bit prime $p$ and $k = 8$, our degraded $k$-list algorithm runs in time $\approx 2^{70.4}$, while Shallue's analysis states that Wagner's original algorithm runs in time $\approx 2^{98.3}$.
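A toy 4-list version of Wagner's algorithm over $\mathbb{Z}_p$ (a heuristic sketch with brute-force merges at toy sizes, keeping sums inside a shrinking interval around 0; this is the plain heuristic algorithm, not the paper's provable degraded variant):

    import random

    p = 1000003                 # a prime of roughly 2^20, standing in for Z_p
    m = 256                     # size of each of the four lists
    w = p >> 8                  # interval half-width for the first merge level

    def merge(L1, L2, w):
        # keep pairs whose sum mod p lies within w of 0 (centered);
        # brute force over all pairs is fine at these toy sizes
        out = []
        for v1, parts1 in L1:
            for v2, parts2 in L2:
                s = (v1 + v2) % p
                if s <= w or s >= p - w:
                    out.append((s, parts1 + parts2))
        return out

    lists = [[(x, (x,)) for x in (random.randrange(p) for _ in range(m))]
             for _ in range(4)]
    L12 = merge(lists[0], lists[1], w)
    L34 = merge(lists[2], lists[3], w)

    # final level: exact cancellation x1 + x2 + x3 + x4 = 0 (mod p)
    index = {(-v) % p: parts for v, parts in L34}
    for v, parts in L12:
        if v in index:
            sol = parts + index[v]
            assert sum(sol) % p == 0
            print("solution:", sol)
            break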
Last updated:  2024-02-19
Polynomial Commitments from Lattices: Post-Quantum Security, Fast Verification and Transparent Setup
Valerio Cini, Giulio Malavolta, Ngoc Khanh Nguyen, and Hoeteck Wee
A polynomial commitment scheme allows a prover to commit to a polynomial $f \in \mathcal{R}[X]$ of degree $L$, and later prove that the committed function was correctly evaluated at a specified point $x$; in other words, that $f(x)=u$ for public $x,u \in\mathcal{R}$. Most applications of polynomial commitments, e.g. succinct non-interactive arguments of knowledge (SNARKs), require that (i) both the commitment and evaluation proof are succinct (i.e., polylogarithmic in the degree $L$) - with the latter being efficiently verifiable, and (ii) no pre-processing step is allowed. Surprisingly, as far as plausibly quantum-safe polynomial commitments are concerned, the currently most efficient constructions rely only on weak cryptographic assumptions, such as the security of hash functions. Indeed, despite making use of the underlying algebraic structure, prior lattice-based polynomial commitments still lag far behind the hash-based ones. Moreover, the security of the aforementioned lattice constructions against quantum adversaries was never formally discussed. In this work, we bridge the gap and propose the first (asymptotically and concretely) efficient lattice-based polynomial commitment with transparent setup and post-quantum security. Our interactive variant relies on the standard (Module-)SIS problem, and can be made non-interactive in the random oracle model using the Fiat-Shamir transformation. In addition, we equip the scheme with a knowledge soundness proof against quantum adversaries, which can be of independent interest. In terms of concrete efficiency, for $L=2^{20}$ our scheme yields proofs $2\times$ smaller than the hash-based \textsf{FRI} commitment (Block et al., Asiacrypt 2023), and $70\times$ smaller than the very recent lattice-based construction by Albrecht et al. (Eurocrypt 2024).
Last updated:  2024-10-05
HARTS: High-Threshold, Adaptively Secure, and Robust Threshold Schnorr Signatures
Renas Bacho, Julian Loss, Gilad Stern, and Benedikt Wagner
Threshold variants of the Schnorr signature scheme have recently been at the center of attention due to their applications to cryptocurrencies. However, existing constructions for threshold Schnorr signatures among a set of $n$ parties with corruption threshold $t_c$ suffer from at least one of the following drawbacks: (i) security only against static (i.e., non-adaptive) adversaries, (ii) cubic or higher communication cost to generate a single signature, (iii) strong synchrony assumptions on the network, or (iv) a mere $t_c+1$ signature shares are sufficient to generate a signature, i.e., the corruption threshold of the scheme equals its reconstruction threshold. Especially (iv) turns out to be a severe limitation for many asynchronous real-world applications where $t_c < n/3$ is necessary to maintain liveness, but a higher signing threshold of $n-t_c$ is needed. A recent scheme, ROAST, proposed by Ruffing et al. (ACM CCS 2022), addresses (iii) and (iv), but still falls short of obtaining subcubic communication complexity and adaptive security. In this work, we present HARTS, the first threshold Schnorr signature scheme to incorporate all these desiderata. More concretely: - HARTS is adaptively secure and remains fully secure and operational even under asynchronous network conditions in the presence of up to $t_c < n/3$ malicious parties. This is optimal. - HARTS outputs a Schnorr signature of size $\lambda$ with a near-optimal amortized communication cost of $O(\lambda n^2 \log{n})$ bits and a single asynchronous online round per signature. - HARTS is high-threshold: no fewer than $t_r+1$ signature shares can be combined to yield a full signature, where any $t_r\in [t_c,n-t_c)$ is supported. This especially covers the case $t_r \geq 2n/3 > 2t_c$. This is optimal. We prove our result in a modular fashion in the algebraic group model. At the core of our construction, we design a new, simple, and adaptively secure high-threshold asynchronous verifiable secret sharing (AVSS) scheme, which may be of independent interest.
Last updated:  2024-03-13
Polynomial-Time Key-Recovery Attack on the ${\tt NIST}$ Specification of ${\tt PROV}$
River Moreira Ferreira and Ludovic Perret
In this paper, we present an efficient attack against ${\tt PROV}$, a recent variant of the popular Unbalanced Oil and Vinegar (${\tt UOV}$) multivariate signature scheme, that has been submitted to the ongoing ${\tt NIST}$ standardization process for additional post-quantum signature schemes. A notable feature of ${\tt PROV}$ is its proof of security, namely, existential unforgeability under a chosen-message attack (${\tt EUF-CMA}$), assuming the hardness of solving the system formed by the public-key non-linear equations. We present a polynomial-time key-recovery attack against the first specification of ${\tt PROV}$ (v$1.0$). To do so, we remark that a small fraction of the ${\tt PROV}$ secret-key is leaked during the signature process. Adapting and extending previous works on basic ${\tt UOV}$, we show that the entire secret-key can then be recovered from such a small fraction in polynomial time. This leads to an efficient attack against ${\tt PROV}$ that we validated in practice. For all the security parameters suggested by the authors of ${\tt PROV}$, our attack recovers the secret-key in at most $8$ seconds. We conclude the paper by discussing the apparent mismatch between such a practical attack and the theoretical security claimed by the ${\tt PROV}$ designers. Our attack is not structural but exploits the fact that the current specification of ${\tt PROV}$ differs from the required security model. A simple countermeasure makes ${\tt PROV}$ immune to the attack presented here and led the designers to update the specification of ${\tt PROV}$ (v$1.1$).
Last updated:  2025-02-20
Circle STARKs
Ulrich Haböck, David Levit, and Shahar Papini
Traditional STARKs require a cyclic group of a smooth order in the field. This allows efficient interpolation of points using the FFT algorithm, and writing constraints that involve neighboring rows. The Elliptic Curve FFT (ECFFT, Part I and II) introduced a way to make efficient STARKs for any finite field, by using a cyclic group of an elliptic curve. We show a simpler construction along the lines of ECFFT, over the circle curve $x^2 + y^2 = 1$. When $p + 1$ is divisible by a large power of $2$, this construction is as efficient as traditional STARKs and ECFFT. Applied to the Mersenne prime $p = 2^{31} - 1$, which has recently been advertised as IACR ePrint 2023/824, our preliminary benchmarks indicate a speed-up by a factor of $1.4$ compared to a traditional STARK using the BabyBear prime $p = 2^{31} - 2^{27} + 1$.
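The circle curve is concrete enough to check directly. The sketch below (illustrative only; names and constants are ours) verifies the two facts the construction rests on: points of $x^2 + y^2 = 1$ over $\mathbb{F}_p$ form a group under $(x_1,y_1)\cdot(x_2,y_2) = (x_1x_2 - y_1y_2,\ x_1y_2 + y_1x_2)$, and for $p = 2^{31}-1$ that group is cyclic of smooth order $p+1 = 2^{31}$.

```python
# Circle group behind Circle STARKs: points on x^2 + y^2 = 1 over F_p with
# p = 2^31 - 1. Since p = 3 (mod 4), the curve has exactly p + 1 = 2^31
# points, a cyclic group of two-power order -- exactly what the FFT needs.

P = 2**31 - 1  # Mersenne prime M31

def mul(a, b):
    (x1, y1), (x2, y2) = a, b
    return ((x1 * x2 - y1 * y2) % P, (x1 * y2 + y1 * x2) % P)

def on_circle(pt):
    x, y = pt
    return (x * x + y * y) % P == 1

def find_generator():
    # p = 3 (mod 4): square roots via a^((p+1)/4); half the points generate.
    for x in range(2, 1000):
        y = pow(1 - x * x, (P + 1) // 4, P)
        g = (x, y)
        if not on_circle(g):
            continue  # 1 - x^2 was not a square; try the next x
        h = g
        for _ in range(30):
            h = mul(h, h)
        if h != (1, 0):   # g^(2^30) != identity, so g has full order 2^31
            return g
    raise RuntimeError("no generator found in range")

g = find_generator()
h = g
for _ in range(31):
    h = mul(h, h)
assert h == (1, 0)  # g^(2^31) = identity: a full two-adic FFT domain
```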
Last updated:  2024-02-19
Fault Attacks on UOV and Rainbow
Juliane Krämer and Mirjam Loiero
Multivariate cryptography is one of the main candidates for creating post-quantum public key cryptosystems. Especially in the area of digital signatures, there exist many practical and secure multivariate schemes. The signature schemes UOV and Rainbow are two of the most promising and best-studied multivariate schemes, which have proven secure for more than a decade. However, so far the security of multivariate signature schemes against physical attacks has not been appropriately assessed. Towards a better understanding of the physical security of multivariate signature schemes, this paper presents fault attacks against SingleField schemes, especially UOV and Rainbow. Our analysis shows that although promising attack vectors exist, multivariate signature schemes inherently offer good protection against fault attacks.
Last updated:  2024-11-29
Reduce and Prange: Revisiting Prange's ISD for Solving LPN/RSD over Large Fields
Jiseung Kim and Changmin Lee
The Learning Parity with Noise (LPN) problem and its specific form, LPN with regularity, are widely utilized in crafting cryptographic primitives. Recently, extending these problems to operate over larger fields has been considered to enhance applications. Despite the broad analysis available for traditional LPN approaches, the exploration of LPN over large fields remains underdeveloped. This gap in research has left room for improved attacks, suggesting that primitives based on LPN over large fields may not meet the necessary security standards. We have developed an algorithm that enhances the efficiency of solving LPN over large fields. This method innovatively modifies the Gaussian elimination attack, traditionally known as Prange's information set decoding algorithm. Our key advancement involves the selective use of partial Gaussian elimination rather than employing the full Gaussian algorithm throughout, which we have termed the ``Reduce and Prange (RP)'' algorithm. Additionally, we apply the RP algorithm to address the LPN problem over large fields with regular noise. Our RP algorithm highlights two key aspects: the vulnerability of existing schemes and the superiority of our approach compared to recent analyses. Our findings reveal that cryptographic applications, including Syndrome Decoding in the Head frameworks (Crypto'22, Asiacrypt'23, Eurocrypt'24) and Scalable Multiparty Garbling (CCS'23), are not secure enough. To be precise, the schemes fail to achieve their intended bit-security. Compared to the previous analysis by Liu et al. (Eurocrypt'24), we show that for LPN over large fields (e.g., 128-bit field size), the bit-security is reduced by 5-11 bits. Furthermore, our RP algorithm for solving LPN with regular noise outperforms recent results by Liu et al. and by Briaud and Øygard (Eurocrypt'23) under certain parameter choices, leading to a reduction in bit-security by 5-20 bits for LPN with regular noise.
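As background, plain Prange for LPN is only a few lines: repeatedly guess $k$ noise-free samples, solve the resulting linear system, and keep the candidate secret whose residual is sparse. The sketch below shows this baseline over $\mathbb{F}_2$ with toy parameters (the paper's RP algorithm works over large fields and replaces the full elimination with a partial one; none of that is captured here).

```python
import numpy as np

# Baseline Prange for LPN over F_2. Toy parameters; illustrative only.
rng = np.random.default_rng(0)

def solve_gf2(A, b):
    """Solve the square system A x = b over GF(2); None if A is singular."""
    n = A.shape[1]
    M = np.concatenate([A % 2, (b % 2).reshape(-1, 1)], axis=1).astype(np.uint8)
    for col in range(n):
        piv = np.nonzero(M[col:, col])[0]
        if piv.size == 0:
            return None
        M[[col, col + piv[0]]] = M[[col + piv[0], col]]   # swap pivot row up
        others = np.nonzero(M[:, col])[0]
        others = others[others != col]
        M[others] ^= M[col]                               # eliminate column
    return M[:, -1].copy()

def prange_lpn(A, b, target_weight, max_iters=20000):
    """Guess k noise-free rows, solve for s, accept if residual is sparse."""
    m, k = A.shape
    for _ in range(max_iters):
        I = rng.choice(m, size=k, replace=False)
        s = solve_gf2(A[I], b[I])
        if s is not None and np.count_nonzero((A @ s + b) % 2) <= target_weight:
            return s
    return None

k, m, tau = 20, 200, 0.02
s_true = rng.integers(0, 2, k)
A = rng.integers(0, 2, (m, k))
e = (rng.random(m) < tau).astype(np.int64)
b = (A @ s_true + e) % 2
s_found = prange_lpn(A, b, target_weight=16)   # true error weight is ~4
assert np.array_equal(s_found, s_true)
```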
Last updated:  2024-02-22
The Multi-user Constrained PRF Security of Generalized GGM Trees for MPC and Hierarchical Wallets
Chun Guo, Xiao Wang, Xiang Xie, and Yu Yu
Multi-user (mu) security considers large-scale attackers that, given access to a number of cryptosystem instances, attempt to compromise at least one of them. We initiate the study of mu security of the so-called GGM tree that stems from the PRG-to-PRF transformation of Goldreich, Goldwasser, and Micali, with the goal of providing reference points for its recently popularized use in applied cryptography. We propose a generalized model for GGM trees and analyze its mu prefix-constrained PRF security in the random oracle model. Our model allows us to derive concrete bounds and improvements for various protocols, and we showcase this on the Bitcoin Improvement Proposal standard BIP32 hierarchical wallets and on function secret sharing (FSS) protocols. In both scenarios, we propose improvements with better performance and concrete security bounds at the same time. Compared with the state-of-the-art designs, our SHACAL3- and Keccak-$p$-based BIP32 variants reduce the communication cost of MPC-based implementations by 73.3%-93.8%, while our AES-based FSS substantially improves mu security while reducing computations by 50%.
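To fix ideas, here is a minimal GGM tree in Python, with SHA-256 standing in for the length-doubling PRG (the paper's generalized model covers other instantiations; this sketch and its function names are purely illustrative), including the prefix-constrained evaluation that the security notion is about.

```python
import hashlib

# Minimal GGM tree sketch; SHA-256 plays the length-doubling PRG.

def prg(seed: bytes) -> tuple[bytes, bytes]:
    """Length-doubling PRG: one 16-byte seed -> two 16-byte child seeds."""
    out = hashlib.sha256(seed).digest()
    return out[:16], out[16:]

def ggm_prf(key: bytes, x: int, depth: int) -> bytes:
    """Evaluate the GGM PRF on a depth-bit input by walking the tree."""
    node = key
    for i in reversed(range(depth)):      # most significant bit first
        node = prg(node)[(x >> i) & 1]
    return node

# Prefix-constrained key: the internal node for a bit-prefix evaluates the
# PRF on exactly the inputs under that prefix.
def constrained_node(key: bytes, prefix: int, prefix_len: int) -> bytes:
    node = key
    for i in reversed(range(prefix_len)):
        node = prg(node)[(prefix >> i) & 1]
    return node

key = b"\x00" * 16
node = constrained_node(key, 0b10, 2)
x = (0b10 << 6) | 0b010101            # an 8-bit input under prefix 10
assert ggm_prf(node, x & 0b111111, 6) == ggm_prf(key, x, 8)
```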
Last updated:  2024-02-19
Amortized Large Look-up Table Evaluation with Multivariate Polynomials for Homomorphic Encryption
Heewon Chung, Hyojun Kim, Young-Sik Kim, and Yongwoo Lee
We present a new method for efficient look-up table (LUT) evaluation in homomorphic encryption (HE), based on Ring-LWE-based HE schemes, including both integer-message schemes such as Brakerski-Gentry-Vaikuntanathan (BGV) and Brakerski/Fan-Vercauteren (BFV), and complex-number-message schemes like the Cheon-Kim-Kim-Song (CKKS) scheme. Our approach encodes bit streams into codewords and translates LUTs into low-degree multivariate polynomials, allowing for the simultaneous evaluation of multiple independent LUTs with minimal overhead. To mitigate noise accumulation in the CKKS scheme, we propose a novel noise-reduction technique, accompanied by a proof demonstrating its effectiveness in asymptotically decreasing noise levels. We demonstrate our algorithm's effectiveness through a proof-of-concept implementation, showcasing significant efficiency gains, including a 0.029ms per-slot evaluation for 8-input, 8-output LUTs and a 280ms amortized decryption time for AES-128 using CKKS on a single GPU. This work not only advances LUT evaluation in HE but also introduces a transciphering method for the CKKS scheme utilizing standard symmetric-key encryption, bridging the gap between discrete bit strings and numerical data.
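One standard way to see a LUT as a low-degree multivariate polynomial is its multilinear extension, sketched below. The paper's codeword-based encoding is more elaborate, so treat this purely as background intuition (the helper name is ours).

```python
from itertools import product

# A table T: {0,1}^n -> values has a unique multilinear extension
#   P(x_1..x_n) = sum_{a} T[a] * prod_i (x_i if a_i else 1 - x_i),
# a polynomial of degree 1 per variable that an HE scheme can evaluate
# with additions and multiplications only.

def multilinear_eval(table, xs):
    """Evaluate the multilinear extension of `table` at the point `xs`."""
    acc = 0
    for a in product([0, 1], repeat=len(xs)):
        weight = 1
        for x, bit in zip(xs, a):
            weight *= x if bit else (1 - x)
        acc += table[a] * weight
    return acc

# 3-input LUT computing the majority function
table = {a: int(sum(a) >= 2) for a in product([0, 1], repeat=3)}
for a in product([0, 1], repeat=3):
    assert multilinear_eval(table, a) == table[a]   # agrees on the cube
```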
Last updated:  2025-04-10
Perfect 2-Party Computation from Additive Somewhat Homomorphic Encryption
Jonathan Trostle
Two-party computation has been an active area of research since Yao's breakthrough results on garbled circuits. We present secret-key somewhat additively homomorphic encryption schemes where the client has perfect privacy (the server can be computationally unbounded). Our basic scheme is somewhat additively homomorphic, and we extend it to support multiplication. The server handles circuit multiplication gates by returning the multiplicands to the client, which updates the decryption key so that the original ciphertext vector includes the encrypted multiplication gate outputs. We give a 2-party protocol that also incorporates server inputs, where the client has perfect privacy. Server privacy holds against a computationally bounded adversary, since it depends on the hardness of the subset sum problem. Correctness of the server in the malicious model can be verified by a third party, where client and server privacy are protected from the verifier.
Last updated:  2024-02-26
Deep Learning Based Analysis of Key Scheduling Algorithm of Advanced Ciphers
Narendra Kumar Patel and Hemraj Shobharam Lamkuche
The advancements in information technology have made the Advanced Encryption Standard (AES) and the PRESENT cipher indispensable in ensuring data security and facilitating private transactions. AES is renowned for its flexibility and widespread use in various fields, while the PRESENT cipher excels in lightweight cryptographic settings. This paper delves into a dual examination of the Key Scheduling Algorithms (KSAs) of AES and the PRESENT cipher, which play a crucial role in generating round keys for their respective encryption techniques. By applying deep learning methods, particularly a neural network model, our study aims to unravel the complexities of these KSAs and shed light on their inner workings.
Last updated:  2024-02-19
Understanding User-Perceived Security Risks and Mitigation Strategies in the Web3 Ecosystem
Janice Jianing Si, Tanusree Sharma, and Kanye Ye Wang
The advent of Web3 technologies promises unprecedented levels of user control and autonomy. However, this decentralization shifts the burden of security onto the users, making it crucial to understand their security behaviors and perceptions. To address this, our study introduces a comprehensive framework that identifies four core components of user interaction within the Web3 ecosystem: blockchain infrastructures, Web3-based Decentralized Applications (DApps), online communities, and off-chain cryptocurrency platforms. We delve into the security concerns perceived by users in each of these components and analyze the mitigation strategies they employ, ranging from risk assessment and aversion to diversification and acceptance. We further discuss the landscape of both technical and human-induced security risks in the Web3 ecosystem, identify the unique security differences between Web2 and Web3, and highlight key challenges that render users vulnerable, to provide implications for security design in Web3.
Last updated:  2024-06-10
YPIR: High-Throughput Single-Server PIR with Silent Preprocessing
Samir Jordan Menon and David J. Wu
We introduce YPIR, a single-server private information retrieval (PIR) protocol that achieves high throughput (up to 83% of the memory bandwidth of the machine) without any offline communication. For retrieving a 1-bit (or 1-byte) record from a 32 GB database, YPIR achieves 12.1 GB/s/core server throughput and requires 2.5 MB of total communication. On the same setup, the state-of-the-art SimplePIR protocol achieves a 12.5 GB/s/core server throughput, requires 1.5 MB total communication, but additionally requires downloading a 724 MB hint in an offline phase. YPIR leverages a new lightweight technique to remove the hint from high-throughput single-server PIR schemes with small overhead. We also show how to reduce the server preprocessing time in the SimplePIR family of protocols by a factor of $10$-$15\times$. By removing the need for offline communication, YPIR significantly reduces the server-side costs for private auditing of Certificate Transparency logs. Compared to the best previous PIR-based approach, YPIR reduces the server-side costs by a factor of $8\times$. Note that to reduce communication costs, the previous approach assumed that updates to the Certificate Transparency log servers occurred in weekly batches. Since there is no offline communication in YPIR, our approach allows clients to always audit the most recent Certificate Transparency logs (e.g., updating once a day). Supporting daily updates using the prior scheme would cost $48\times$ more than YPIR (based on current AWS compute costs).
Last updated:  2024-02-17
A note on PUF-Based Robust and Anonymous Authentication and Key Establishment Scheme for V2G Networks
Milad Seddigh and Seyed Hamid Baghestani
Vehicle-to-grid (V2G) provides effective charging services, allows bidirectional energy communication between the power grid and electric vehicles (EVs), and reduces environmental pollution and energy crises. Recently, Sungjin Yu et al. proposed a PUF-based, robust, and anonymous authentication and key establishment scheme for V2G networks. In this paper, we show that the proposed protocol does not provide user anonymity and is vulnerable to a tracing attack. We also find that their scheme is vulnerable to ephemeral secret leakage attacks.
Last updated:  2024-02-17
A New Approach to Generic Lower Bounds: Classical/Quantum MDL, Quantum Factoring, and More
Minki Hhan
This paper studies the limitations of the generic approaches to solving cryptographic problems in classical and quantum settings in various models. - In the classical generic group model (GGM), we find simple alternative proofs for the lower bounds of variants of the discrete logarithm (DL) problem: the multiple-instance DL and one-more DL problems (and their mixture). We also re-prove the unknown-order GGM lower bounds, such as those for order finding, root extraction, and repeated squaring. - In the quantum generic group model (QGGM), we study the complexity of variants of the discrete logarithm. We prove the logarithmic DL lower bound in the QGGM even for the composite-order setting. We also prove an asymptotically tight lower bound for the multiple-instance DL problem. Both results resolve the open problems suggested in a recent work by Hhan, Yamakawa, and Yun. - In the quantum generic ring model, which we newly introduce, we give a logarithmic lower bound for order-finding algorithms, an important step in Shor's algorithm. We also give a logarithmic lower bound for a certain generic factoring algorithm outputting relatively small integers, which includes a modified version of Regev's algorithm. - Finally, we prove a lower bound for the basic index calculus method for solving the DL problem in a new idealized group model regarding smooth numbers. The quantum lower bounds in both models allow certain (different) types of classical preprocessing. All of the proofs are significantly simpler than the previous ones and go through a single tool, the so-called compression lemma, along with linear algebra tools. Our use of this lemma may be of independent interest.
Last updated:  2024-02-16
zkPi: Proving Lean Theorems in Zero-Knowledge
Evan Laufer, Alex Ozdemir, and Dan Boneh
Interactive theorem provers (ITPs), such as Lean and Coq, can express formal proofs for a large category of theorems, from abstract math to software correctness. Consider Alice, who has a Lean proof for some public statement $T$. Alice wants to convince the world that she has such a proof, without revealing the actual proof. Perhaps the proof shows that a secret program is correct or safe, but the proof itself might leak information about the program's source code. A natural way for Alice to proceed is to construct a succinct, zero-knowledge, non-interactive argument of knowledge (zkSNARK) to prove that she has a Lean proof for the statement $T$. In this work we build zkPi, the first zkSNARK for proofs expressed in Lean, a state-of-the-art interactive theorem prover. With zkPi, a prover can convince a verifier that a Lean theorem is true, while revealing little else. The core problem is building an efficient zkSNARK for dependent typing. We evaluate zkPi on theorems from two core Lean libraries: stdlib and mathlib. zkPi successfully proves 57.9% of the theorems in stdlib, and 14.1% of the theorems in mathlib, within 4.5 minutes per theorem. A zkPi proof is sufficiently short that Fermat could have written one in the margin of his notebook to convince the world, in zero knowledge, that he proved his famous last theorem. Interactive theorem provers (ITPs) can express virtually all systems of formal reasoning. Thus, an implemented zkSNARK for ITP theorems generalizes practical zero-knowledge's interface beyond the status quo: circuit satisfiability and program execution.
Last updated:  2024-02-16
WhisPIR: Stateless Private Information Retrieval with Low Communication
Leo de Castro, Kevin Lewi, and Edward Suh
Recent constructions of private information retrieval (PIR) have seen significant improvements in computational performance. However, these improvements rely on heavy offline preprocessing that is typically difficult in real-world applications. Motivated by the question of PIR with no offline processing, we introduce WhisPIR, a fully stateless PIR protocol with low per-query communication. WhisPIR clients are all ephemeral, meaning that they appear with only the protocol public parameters and disappear as soon as their query is complete, giving no opportunity for additional "offline" communication that is not counted towards the overall query communication. As such, WhisPIR is highly suited for practical applications that must support many clients and frequent database updates. We demonstrate that WhisPIR requires significantly less communication than all other lattice-based PIR protocols in a stateless setting. WhisPIR is outperformed in computation only by SimplePIR and HintlessPIR when the database entries are large (several kilobytes). WhisPIR achieves this performance by introducing a number of novel optimizations. These include improvements to the index expansion algorithm of SealPIR and OnionPIR that optimize the algorithm when only one rotation key is available. WhisPIR also makes novel use of the non-compact variant of the BGV homomorphic encryption scheme to further save communication and computation. To demonstrate the practicality of WhisPIR, we apply the protocol to the problem of secure blocklist checking, an important user-safety application in end-to-end encrypted messaging.
Last updated:  2024-02-16
Beyond the circuit: How to Minimize Foreign Arithmetic in ZKP Circuits
Michele Orrù, George Kadianakis, Mary Maller, and Greg Zaverucha
Zero-knowledge circuits are frequently required to prove gadgets that are not optimised for the constraint system in question. A particularly daunting task is to embed foreign arithmetic such as Boolean operations, field arithmetic, or public-key cryptography. We construct techniques for offloading foreign arithmetic from a zero-knowledge circuit, including: (i) equality of discrete logarithms across different groups; (ii) scalar multiplication without requiring elliptic curve operations; (iii) proving knowledge of an AES encryption. To achieve our goal, we employ techniques inherited from rejection sampling and lookup protocols. We implement and provide concrete benchmarks for our protocols.
Last updated:  2024-03-13
Extractable Witness Encryption for KZG Commitments and Efficient Laconic OT
Nils Fleischhacker, Mathias Hall-Andersen, and Mark Simkin
We present a concretely efficient and simple extractable witness encryption scheme for KZG polynomial commitments. It allows one to encrypt a message towards a triple $(\mathsf{com}, \alpha, \beta)$, where $\mathsf{com}$ is a KZG commitment for some polynomial $f$. Anyone with an opening for the commitment attesting $f(\alpha) = \beta$ can decrypt, but without knowledge of a valid opening the message is computationally hidden. Our construction is simple and highly efficient. The ciphertext is only a single group element. Encryption and decryption both require a single pairing evaluation and a constant number of group operations. Using our witness encryption scheme, we construct a simple and highly efficient laconic OT protocol, which significantly outperforms the state of the art in most important metrics.
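As a quick recap of the relation being encrypted towards, the following states the standard KZG opening check (standard background, not the paper's construction itself; notation may differ from the paper's):

```latex
% Standard KZG opening relation. SRS: (G, \tau G, \tau^2 G, \ldots) in G_1
% and (H, \tau H) in G_2, for a trapdoor \tau.
\[
  \mathsf{com} = f(\tau)\,G, \qquad
  \pi = \frac{f(\tau) - \beta}{\tau - \alpha}\,G .
\]
% The opening claim f(\alpha) = \beta is accepted iff
\[
  e\bigl(\mathsf{com} - \beta G,\; H\bigr)
  \;=\;
  e\bigl(\pi,\; \tau H - \alpha H\bigr),
\]
% so decryption amounts to exhibiting a group element \pi satisfying this
% single pairing equation.
```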
Last updated:  2024-02-16
Threshold Encryption with Silent Setup
Sanjam Garg, Dimitris Kolonelos, Guru-Vamsi Policharla, and Mingyuan Wang
We build a concretely efficient threshold encryption scheme where the joint public key of a set of parties is computed as a deterministic function of their locally computed public keys, enabling a silent setup phase. By eliminating interaction from the setup phase, our scheme immediately enjoys several highly desirable features such as asynchronous setup, multiverse support, and dynamic threshold. Prior to our work, the only known constructions of threshold encryption with silent setup relied on heavy cryptographic machinery such as indistinguishability obfuscation or witness encryption for all of $\mathsf{NP}$. Our core technical innovation lies in building a special purpose witness encryption scheme for the statement ``at least $t$ parties have signed a given message''. Our construction relies on pairings and is proved secure in the Generic Group Model. Notably, our construction, restricted to the special case of threshold $t=1$, gives an alternative construction of the (flexible) distributed broadcast encryption from pairings, which has been the central focus of several recent works. We implement and evaluate our scheme to demonstrate its concrete efficiency. Both encryption and partial decryption are constant time, taking $<7\,$ms and $<1\,$ms, respectively. For a committee of $1024$ parties, the aggregation of partial decryptions takes $<200\,$ms, when all parties provide partial decryptions. The size of each ciphertext is $\approx 8\times$ larger than an ElGamal ciphertext.
Last updated:  2024-02-16
Note on the cryptanalysis of Speedy
Tim Beyne and Addie Neyt
At Eurocrypt 2023, a differential attack on the block cipher Speedy-7-192 was presented. This note shows that the main differential characteristic that this attack is based on has probability zero.
Last updated:  2024-02-16
Election Eligibility with OpenID: Turning Authentication into Transferable Proof of Eligibility
Véronique Cortier, Alexandre Debant, Anselme Goetschmann, and Lucca Hirschi
Eligibility checks are often abstracted away or omitted in voting protocols, leading to situations where the voting server can easily stuff the ballot box. One reason for this is the difficulty of bootstrapping the authentication material for voters without trusting the voting server. In this paper, we propose a new protocol that solves this problem by building on OpenID, a widely deployed authentication protocol. Instead of using it as a standard authentication means, we turn it into a mechanism that delivers transferable proofs of eligibility. Using zk-SNARK proofs, we show that this can be done without revealing any compromising information, in particular, protecting everlasting privacy. Our approach remains efficient and can easily be integrated into existing protocols, as we have done for the Belenios voting protocol. We provide a full-fledged proof of concept along with benchmarks showing our protocol could realistically be used in large-scale elections.
Last updated:  2025-02-27
Kleptographic Attacks against Implicit Rejection
Antoine Joux, Julian Loss, and Benedikt Wagner
Given its integral role in modern encryption systems such as CRYSTALS-Kyber, the Fujisaki-Okamoto (FO) transform will soon be at the center of our secure communications infrastructure. An enduring debate surrounding the FO transform is whether to use explicit or implicit rejection when decapsulation fails. Presently, implicit rejection, as implemented in CRYSTALS-Kyber, is supported by a strong set of arguments. Therefore, understanding its security implications in different attacker models is essential. In this work, we study implicit rejection through a novel lens, namely, from the perspective of kleptography. Concretely, we consider an attacker model in which the attacker can subvert the user's code to compromise security while remaining undetectable. In this scenario, we present three attacks that significantly reduce the security level of the FO transform with implicit rejection.
Last updated:  2024-02-16
Anonymity on Byzantine-Resilient Decentralized Computing
Kehao Ma, Minghui Xu, Yihao Guo, Lukai Cui, Shiping Ni, Shan Zhang, Weibing Wang, Haiyong Yang, and Xiuzhen Cheng
In recent years, decentralized computing has gained popularity in various domains such as decentralized learning, financial services and the Industrial Internet of Things. As identity privacy becomes increasingly important in the era of big data, safeguarding user identity privacy while ensuring the security of decentralized computing systems has become a critical challenge. To address this issue, we propose ADC (Anonymous Decentralized Computing) to achieve anonymity in decentralized computing. In ADC, the entire network of users can vote to trace and revoke malicious nodes. Furthermore, ADC possesses excellent Sybil-resistance and Byzantine fault tolerance, enhancing the security of the system and increasing user trust in the decentralized computing system. To decentralize the system, we propose a practical blockchain-based decentralized group signature scheme called Group Contract. We construct the entire decentralized system based on Group Contract, which does not require the participation of a trusted authority to guarantee the above functions. Finally, we conduct rigorous privacy and security analysis and performance evaluation to demonstrate the security and practicality of ADC for decentralized computing with only a minor additional time overhead.
Last updated:  2024-02-16
SoK: Decentralized Storage Network
Chuanlei Li, Minghui Xu, Jiahao Zhang, Hechuan Guo, and Xiuzhen Cheng
Decentralized Storage Networks (DSNs) represent a paradigm shift in data storage methodology, distributing and housing data across multiple network nodes rather than relying on a centralized server or data center architecture. The fundamental objective of DSNs is to enhance security, reinforce reliability, and mitigate censorship risks by eliminating a single point of failure. Leveraging blockchain technology for functions such as access control, ownership validation, and transaction facilitation, DSN initiatives aim to provide users with a robust and secure alternative to traditional centralized storage solutions. This paper conducts a comprehensive analysis of the developmental trajectory of DSNs, focusing on key components such as Proof of Storage protocols, consensus algorithms, and incentive mechanisms. Additionally, the study explores recent optimization tactics, encountered challenges, and potential avenues for future research, thereby offering insights into the ongoing evolution and advancement within the DSN domain.
Last updated:  2024-07-30
LatticeFold: A Lattice-based Folding Scheme and its Applications to Succinct Proof Systems
Dan Boneh and Binyi Chen
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure and require a large (256-bit) field. In this work we present LatticeFold, the first lattice-based folding protocol, based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Since LatticeFold can operate over a small (64-bit) field, our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing plausible post-quantum security. Moreover, LatticeFold operates over the same module structure used by fully homomorphic encryption (FHE) and lattice signature schemes, and can therefore benefit from software optimizations and custom hardware designed to accelerate these lattice schemes.
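A toy view of the core object helps here: Ajtai commitments $\mathrm{com}(w) = Aw \bmod q$ are additively homomorphic, which is what folding exploits, but random linear combinations grow the witness norm, which is the difficulty the sumcheck technique addresses. A numpy sketch with made-up parameters:

```python
import numpy as np

# Toy Ajtai commitment com(w) = A w mod q for a low-norm witness w.
# Folding r1*w1 + r2*w2 commutes with the commitment, but the combined
# witness has larger norm -- the growth LatticeFold keeps in check.
# All parameters below are illustrative, not the paper's.

q = 2**16 + 1
n, m = 8, 64                         # commitment rows, witness length
rng = np.random.default_rng(42)
A = rng.integers(0, q, size=(n, m))

def commit(w):
    return (A @ w) % q

w1 = rng.integers(-2, 3, size=m)     # low-norm witnesses
w2 = rng.integers(-2, 3, size=m)
r1, r2 = 3, 5                        # small folding challenges

folded_w = r1 * w1 + r2 * w2
assert np.array_equal(commit(folded_w),
                      (r1 * commit(w1) + r2 * commit(w2)) % q)
print("folded witness infinity norm:", np.abs(folded_w).max())  # has grown
```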
Last updated:  2024-02-16
Fiat-Shamir for Bounded-Depth Adversaries
Liyan Chen, Yilei Chen, Zikuan Huang, Nuozhou Sun, Tianqi Yang, and Yiding Zhang
We study how to construct hash functions that can securely instantiate the Fiat-Shamir transformation against bounded-depth adversaries. The motivation is twofold. First, given the recent fruitful line of research of constructing cryptographic primitives against bounded-depth adversaries under worst-case complexity assumptions, and the rich applications of Fiat-Shamir, instantiating Fiat-Shamir hash functions against bounded-depth adversaries under worst-case complexity assumptions might lead to further applications (such as SNARG for P, showing the cryptographic hardness of PPAD, etc.) against bounded-depth adversaries. Second, we wonder whether it is possible to overcome the impossibility results of constructing Fiat-Shamir for arguments [Goldwasser, Kalai, FOCS ’03] in the setting where the depth of the adversary is bounded, given that the known impossibility results (against p.p.t. adversaries) are contrived. Our main results give new insights for Fiat-Shamir against bounded-depth adversaries in both the positive and negative directions. On the positive side, for Fiat-Shamir for proofs with certain properties, we show that weak worst-case assumptions are enough for constructing explicit hash functions that give $\mathsf{AC}^0[2]$-soundness. In particular, we construct an $\mathsf{AC}^0[2]$-computable correlation-intractable hash family for constant-degree polynomials against $\mathsf{AC}^0[2]$ adversaries, assuming $\oplus \mathsf{L}/\mathsf{poly} \not\subseteq \widetilde{\mathsf{Sum}}_{n^{-c}} \circ\mathsf{AC}^0[2]$ for some $c > 0$. This is incomparable to all currently-known constructions, which are typically useful for larger classes and against stronger adversaries, but based on arguably stronger assumptions. Our construction is inspired by the Fiat-Shamir hash function by Peikert and Shiehian [CRYPTO ’19] and the fully-homomorphic encryption scheme against bounded-depth adversaries by Wang and Pan [EUROCRYPT ’22]. On the negative side, we show Fiat-Shamir for arguments is still impossible to achieve against bounded-depth adversaries. In particular, • Assuming the existence of $\mathsf{AC}^0[2]$-computable CRHF against p.p.t. adversaries, for every poly-size hash function, there is a (p.p.t.-sound) interactive argument that is not $\mathsf{AC}^0[2]$-sound after applying Fiat-Shamir with this hash function. • Assuming the existence of $\mathsf{AC}^0[2]$-computable CRHF against $\mathsf{AC}^0[2]$ adversaries, there is an $\mathsf{AC}^0[2]$-sound interactive argument such that for every hash function computable by $\mathsf{AC}^0[2]$ circuits the argument does not preserve $\mathsf{AC}^0[2]$-soundness when applying Fiat-Shamir with this hash function. This is a low-depth variant of the result of Goldwasser and Kalai.
Last updated:  2024-08-18
Revisiting Differential-Linear Attacks via a Boomerang Perspective With Application to AES, Ascon, CLEFIA, SKINNY, PRESENT, KNOT, TWINE, WARP, LBlock, Simeck, and SERPENT
Hosein Hadipour, Patrick Derbez, and Maria Eichlseder
In 1994, Langford and Hellman introduced differential-linear (DL) cryptanalysis, with the idea of decomposing the block cipher $E$ into two parts, $E_U$ and $E_L$, such that $E_U$ exhibits a high-probability differential trail, while $E_L$ has a high-correlation linear trail. Combining these trails forms a distinguisher for $E$, assuming independence between $E_U$ and $E_L$. The dependency between the two parts of DL distinguishers remained unaddressed until EUROCRYPT 2019, where Bar-On et al. introduced the DLCT framework, resolving the issue up to one S-box layer. However, extending the DLCT framework to formalize the dependency between the two parts for multiple rounds remained an open problem. In this paper, we first tackle this problem from the perspective of boomerang analysis. By examining the relationships between DLCT, DDT, and LAT, we introduce a set of new tables facilitating the formulation of dependencies between the two parts of the DL distinguisher across multiple rounds. Then, we introduce a highly versatile and easy-to-use automatic tool for exploring DL distinguishers, inspired by automatic tools for boomerang distinguishers. This tool considers the dependency between differential and linear trails across multiple rounds. We apply our tool to various symmetric primitives, and in all applications, we either present the first DL distinguishers or enhance the best-known ones. We achieve successful results against Ascon, AES, SERPENT, PRESENT, SKINNY, TWINE, CLEFIA, WARP, LBlock, Simeck, and KNOT. Furthermore, we demonstrate that, in some cases, DL distinguishers outperform boomerang distinguishers significantly.
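For readers new to the tables involved: the DDT counts difference propagations and the LAT measures linear correlations; DLCT-style tables sit between the two. A small sketch computing both for PRESENT's 4-bit S-box (background only; the paper's new tables are not reproduced here):

```python
# DDT and LAT of PRESENT's 4-bit S-box, the two classical tables that
# differential-linear analysis combines.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
N = len(SBOX)

def ddt(s):
    """DDT[dx][dy] = #{x : S(x) xor S(x xor dx) = dy}."""
    t = [[0] * N for _ in range(N)]
    for dx in range(N):
        for x in range(N):
            t[dx][s[x] ^ s[x ^ dx]] += 1
    return t

def lat(s):
    """LAT[a][b] = #{x : <a,x> = <b,S(x)>} - N/2 (the usual bias scaling)."""
    t = [[0] * N for _ in range(N)]
    for a in range(N):
        for b in range(N):
            matches = sum((bin(x & a).count("1")
                           + bin(s[x] & b).count("1")) % 2 == 0
                          for x in range(N))
            t[a][b] = matches - N // 2
    return t

D, L = ddt(SBOX), lat(SBOX)
assert D[0][0] == N                      # zero difference always propagates
print("max DDT entry (dx != 0):", max(v for row in D[1:] for v in row))
print("max |LAT| entry (a,b != 0):",
      max(abs(L[a][b]) for a in range(1, N) for b in range(1, N)))
```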
Last updated:  2024-02-16
Adaptive Security in SNARGs via iO and Lossy Functions
Brent Waters and Mark Zhandry
We construct adaptively sound SNARGs in the plain model with a CRS, relying on the assumptions of (subexponential) indistinguishability obfuscation (iO), subexponential one-way functions, and a notion of lossy functions we call length-parameterized lossy functions. Length-parameterized lossy functions take in separate security and input length parameters and have the property that the function image size in lossy mode depends only on the security parameter. We then show a novel way of constructing such functions from the Learning with Errors (LWE) assumption. Our work provides an alternative path towards achieving adaptively secure SNARGs from the recent work of Waters and Wu. Their work required the use of (essentially) perfectly re-randomizable one-way functions (in addition to obfuscation). Such functions are currently only known to be realizable from assumptions such as discrete log or factoring that are known not to hold in a quantum setting.
Last updated:  2024-02-17
2PC-MPC: Emulating Two Party ECDSA in Large-Scale MPC
Offir Friedman, Avichai Marmor, Dolev Mutzari, Omer Sadika, Yehonatan C. Scaly, Yuval Spiizer, and Avishay Yanai
Motivated by the need for a massively decentralized network concurrently servicing many clients, we present novel low-overhead UC-secure, publicly verifiable, threshold ECDSA protocols with identifiable abort. For the first time, we show how to reduce the message complexity from O(n^2) to O(n) and the computational complexity from O(n) to practically O(1) (per party, where n is the number of parties). We require only a broadcast channel for communication. Therefore, we natively support use-cases like permissionless bridges and decentralized custody, where P2P channels between every pair of parties are infeasible. Consequently, the message complexity is reduced and the protocol is publicly verifiable. We enable all communication to be public (over a broadcast channel), by using a threshold additively homomorphic encryption scheme and novel zero-knowledge proofs. To further reduce the computation and communication overheads, our protocols employ novel batching and amortization techniques, which may be of independent interest. Our second main contribution is the introduction of the notion of a 2PC-MPC protocol - a two-party ECDSA protocol where the second party is fully emulated by a network of n parties. This notion assures that both the first party (the client) and (a threshold) of the network are required to participate in signing, while abstracting away the internal structure of the network. In particular, the communication and computation complexities of the client remain independent of the network properties (e.g. size). This allows ultimate decentralization in distributed custody use-cases, as recent growing interest in the industry demands. We report that our implementation completes the signing phase in 1.23 and 12.703 seconds, for 256 and 1024 parties, respectively.
Last updated:  2024-11-29
Faster Signatures from MPC-in-the-Head
Dung Bui, Eliana Carozza, Geoffroy Couteau, Dahmun Goudarzi, and Antoine Joux
We revisit the construction of signature schemes using the MPC-in-the-head paradigm. We obtain two main contributions: – We observe that previous signatures in the MPC-in-the-head paradigm must rely on a salted version of the GGM puncturable pseudorandom function (PPRF) to avoid collision attacks. We design a new efficient PPRF construction that is provably secure in the multi-instance setting. The security analysis of our PPRF, in the ideal cipher model, is quite involved and forms a core technical contribution to our work. While previous constructions had to rely on a hash function, our construction uses only a fixed-key block cipher and is considerably more efficient as a result: we observe a 12× to 55× speed improvement for a recent signature scheme (Joux and Huth, Crypto’24). Our improved PPRF can be used to speed up many MPC-in-the-head signatures. – We introduce a new signature scheme from the regular syndrome decoding assumption, based on a new protocol for the MPC-in-the-head paradigm, which significantly reduces communication compared to previous works. Our scheme is conceptually simple, though its security analysis requires a delicate and nontrivial combinatorial analysis.
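As background on the object being salted: in a GGM-based PPRF, the punctured key at $x$ is the co-path of sibling seeds, which permits evaluation everywhere except $x$. The sketch below uses SHA-256 as the PRG purely for illustration; the paper's contribution is precisely to replace the hash with a fixed-key block cipher (plus a salt) and prove multi-instance security, none of which appears here.

```python
import hashlib

# GGM-based puncturable PRF: evaluation, puncturing, punctured evaluation.
DEPTH = 8

def prg(seed: bytes) -> tuple[bytes, bytes]:
    out = hashlib.sha256(seed).digest()
    return out[:16], out[16:]

def eval_prf(key: bytes, x: int) -> bytes:
    node = key
    for i in reversed(range(DEPTH)):
        node = prg(node)[(x >> i) & 1]
    return node

def puncture(key: bytes, x: int) -> list[bytes]:
    """Co-path seeds: everything needed to evaluate at all points != x."""
    copath, node = [], key
    for i in reversed(range(DEPTH)):
        left, right = prg(node)
        bit = (x >> i) & 1
        copath.append(right if bit == 0 else left)  # sibling of the path
        node = left if bit == 0 else right
    return copath

def eval_punctured(copath: list[bytes], x: int, y: int) -> bytes:
    """Evaluate at y != x using only the co-path for x."""
    assert y != x
    for j in range(DEPTH):
        i = DEPTH - 1 - j
        if (y >> i) & 1 != (x >> i) & 1:   # first bit where y leaves x's path
            node = copath[j]               # root of the subtree containing y
            for i2 in reversed(range(i)):
                node = prg(node)[(y >> i2) & 1]
            return node
    raise AssertionError("unreachable for y != x")

key, x = b"\x01" * 16, 0b10110011
copath = puncture(key, x)
for y in range(2**DEPTH):
    if y != x:
        assert eval_punctured(copath, x, y) == eval_prf(key, y)
```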
Last updated:  2025-02-08
Communication-Optimal Convex Agreement
Diana Ghinea, Chen-Da Liu-Zhang, and Roger Wattenhofer
Byzantine Agreement (BA) allows a set of $n$ parties to agree on a value even when up to $t$ of the parties involved are corrupted. While previous works have shown that, for $\ell$-bit inputs, BA can be achieved with the optimal communication complexity $O(\ell n)$ for sufficiently large $\ell$, BA only ensures that honest parties agree on a meaningful output when they hold the same input, rendering the primitive inadequate for many real-world applications. This gave rise to the notion of Convex Agreement (CA), introduced by Vaidya and Garg [PODC'13], which requires the honest parties' outputs to be in the convex hull of the honest inputs. Unfortunately, all existing CA protocols incur a communication complexity of at least $O(\ell n^2)$. In this work, we introduce the first CA protocol with the optimal communication of $O(\ell n)$ bits for inputs in $\mathbb{Z}$ of size $\ell = \Omega(\kappa \cdot n \log^2 n)$, where $\kappa$ is the security parameter.
Last updated:  2024-11-23
Exploring the Six Worlds of Gröbner Basis Cryptanalysis: Application to Anemoi
Katharina Koschatko, Reinhard Lüftenegger, and Christian Rechberger
Gröbner basis cryptanalysis of hash functions and ciphers, and their underlying permutations, has seen renewed interest recently. Anemoi (Crypto'23) is a permutation-based hash function that is efficient for a variety of arithmetizations used in zero-knowledge proofs. In this paper, exploring both theoretical bounds as well as experimental validation, we present new complexity estimates for Gröbner basis attacks on the Anemoi permutation over prime fields. We cast our findings in what we call the six worlds of Gröbner basis cryptanalysis. As an example, keeping the same security arguments of the design, we conclude that at least 41 rounds (instead of 37) would need to be used for 256-bit security; our suggestion does not yet include a security margin.
Last updated:  2024-05-30
Robust Additive Randomized Encodings from IO and Pseudo-Non-linear Codes
Nir Bitansky and Sapir Freizeit
Additive randomized encodings (ARE), introduced by Halevi, Ishai, Kushilevitz, and Rabin (CRYPTO 2023), reduce the computation of a $k$-party function $f(x_1, \ldots, x_k)$ to locally computing encodings $\hat{x}_i$ of each input $x_i$ and then adding them together over some Abelian group into an output encoding $\hat{y} = \sum_i \hat{x}_i$, which reveals nothing but the result. In robust ARE (RARE) the sum of any subset of the $\hat{x}_i$ reveals only the residual function obtained by restricting the corresponding inputs. The appeal of (R)ARE comes from the simplicity of the online part of the computation, involving only addition, which yields for instance non-interactive multi-party computation in the shuffle model, where messages from different parties are anonymously shuffled. Halevi, Ishai, Kushilevitz, and Rabin constructed ARE from standard assumptions and RARE in the ideal obfuscation model, leaving open the question of whether RARE can be constructed in the plain model. We construct RARE in the plain model from indistinguishability obfuscation, which is necessary, and a new primitive that we call pseudo-non-linear codes. We provide two constructions of this primitive assuming either Learning with Errors or Decisional Diffie-Hellman. A bonus feature of our construction is that it is online succinct. Specifically, encodings $\hat{x}_i$ can be decomposed into offline parts $\hat{z}_i$ that can be sent directly to the evaluator and short online parts $\hat{g}_i$ that are added together.
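The interface is easiest to see in its simplest possible instance, $f =$ sum over $\mathbb{Z}_q$ with correlated setup randomness. This is a folklore toy, not the paper's construction for general $f$; it only illustrates the "encode locally, then add" syntax.

```python
import random

# ARE toy for f(x_1..x_k) = sum over Z_q: the evaluator only adds, and the
# sum of encodings decodes to exactly f(x) while each encoding alone is
# uniformly random. Parameters are arbitrary.

q, k = 2**31 - 1, 5
rng = random.Random(7)

def correlated_randomness(k):
    """Setup: additive shares of zero, one per party."""
    r = [rng.randrange(q) for _ in range(k - 1)]
    return r + [(-sum(r)) % q]

inputs = [rng.randrange(1000) for _ in range(k)]
r = correlated_randomness(k)
encodings = [(x + ri) % q for x, ri in zip(inputs, r)]   # local encoding

y_hat = sum(encodings) % q          # the online phase: addition only
assert y_hat == sum(inputs) % q     # decodes to the result, nothing more
```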
Last updated:  2024-06-06
FRIDA: Data Availability Sampling from FRI
Mathias Hall-Andersen, Mark Simkin, and Benedikt Wagner
As blockchains like Ethereum continue to grow, clients with limited resources can no longer store the entire chain. Light nodes that want to use the blockchain, without verifying that it is in a good state overall, can just download the block headers without the corresponding block contents. As those light nodes may eventually need some of the block contents, they would like to ensure that they are in principle available. Data availability sampling, introduced by Al-Bassam et al., is a process that allows light nodes to check the availability of data without downloading it. In a recent effort, Hall-Andersen, Simkin, and Wagner have introduced formal definitions and analyzed several constructions. While their work thoroughly lays the formal foundations for data availability sampling, the constructions are either prohibitively expensive, use a trusted setup, or have a download complexity for light clients that scales with the square root of the data size. In this work, we make a significant step forward by proposing an efficient data availability sampling scheme without a trusted setup and with only polylogarithmic overhead. To this end, we find a novel connection with interactive oracle proofs of proximity (IOPPs). Specifically, we prove that any IOPP meeting an additional consistency criterion can be turned into an erasure code commitment, and then, leveraging a compiler due to Hall-Andersen, Simkin, and Wagner, into a data availability sampling scheme. This new connection enables data availability sampling to benefit from future results on IOPPs. We then show that the widely used FRI IOPP satisfies our consistency criterion and demonstrate that the resulting data availability sampling scheme outperforms the state of the art asymptotically and concretely in multiple parameters.
Last updated:  2024-07-13
Fault-Resistant Partitioning of Secure CPUs for System Co-Verification against Faults
Simon Tollec, Vedad Hadžić, Pascal Nasahl, Mihail Asavoae, Roderick Bloem, Damien Couroussé, Karine Heydemann, Mathieu Jan, and Stefan Mangard
Fault injection attacks are a serious threat to system security, enabling attackers to bypass protection mechanisms or access sensitive information. To evaluate the robustness of CPU-based systems against these attacks, it is essential to analyze the consequences of the fault propagation resulting from the complex interplay between the software and the processor. However, current formal methodologies combining hardware and software face scalability issues due to the monolithic approach used. To address this challenge, this work formalizes the $k$-fault resistant partitioning notion to solve the fault propagation problem when assessing redundancy-based hardware countermeasures in a first step. Proven security guarantees can then reduce the remaining hardware attack surface when introducing the software in a second step. First, we validate our approach against previous work by reproducing known results on cryptographic circuits. In particular, we outperform state-of-the-art tools for evaluating AES under a three-fault-injection attack. Then, we apply our methodology to the OpenTitan secure element and formally prove the security of its CPU's hardware countermeasure to single bit-flip injections. Besides that, we demonstrate that previously intractable problems, such as analyzing the robustness of OpenTitan running a secure boot process, can now be solved by a co-verification methodology that leverages a $k$-fault resistant partitioning. We also report a potential exploitation of the register file vulnerability in two other software use cases. Finally, we provide a security fix for the register file, prove its robustness, and integrate it into the OpenTitan project.
Last updated:  2025-02-26
OCash: Fully Anonymous Payments between Blockchain Light Clients
Adam Blatchley Hansen, Jesper Buus Nielsen, and Mark Simkin
We study blockchain-based provably anonymous payment systems between light clients. Such clients interact with the blockchain through full nodes, which can see what the light clients read and write. The goal of our work is to enable light clients to perform anonymous payments, while maintaining privacy even against the full nodes through which they interact with the blockchain. We formalize the problem in the UC model and present a provably secure solution. We show that a variation of tree ORAM gives obliviousness even when an adversary can follow how its own data elements move in the tree. We use this for anonymity via shuffling of payments on the blockchain, while at the same time allowing the light client to know a few positions among which to find its payment without knowing the current state of the blockchain. In comparison to existing works, we are the first ones that simultaneously provide strong anonymity guarantees, provable security, and anonymity with respect to full nodes. Along the way, we make several contributions that may be of independent interest. We define and construct anonymous-coin friendly encryption schemes and show how they can be used within anonymous payment systems. We define and construct efficient compressible randomness beacons, which produce unpredictable values in regular intervals and allow for storing all published values in a short digest.
Last updated:  2024-07-09
Linear-Communication Asynchronous Complete Secret Sharing with Optimal Resilience
Xiaoyu Ji, Junru Li, and Yifan Song
Secure multiparty computation (MPC) allows a set of $n$ parties to jointly compute a function on their private inputs. In this work, we focus on information-theoretic MPC in the \emph{asynchronous network} setting with optimal resilience ($t<n/3$). The best-known result in this setting is achieved by Choudhury and Patra [J. Cryptol '23], which requires $O(n^4\kappa)$ bits per multiplication gate, where $\kappa$ is the size of a field element. An asynchronous complete secret sharing (ACSS) protocol allows a dealer to share a batch of Shamir sharings such that all parties eventually receive their shares. ACSS is an important building block in asynchronous MPC (AMPC). The best-known ACSS result is due to Choudhury and Patra [J. Cryptol '23], which requires $O(n^3\kappa)$ bits per sharing. On the other hand, in the synchronous setting, it is known that distributing Shamir sharings can be achieved with $O(n\kappa)$ bits per sharing. There is a gap of $n^2$ in the communication between the synchronous and asynchronous settings. Our work closes this gap by presenting the first ACSS protocol that achieves $O(n\kappa)$ bits per sharing. When combined with the compiler from ACSS to AMPC by Choudhury and Patra [IEEE Trans. Inf. Theory '17], we obtain an AMPC with $O(n^2\kappa)$ bits per multiplication gate, improving the previously best-known result by a factor of $n^2$. Moreover, with a concurrent work that improves the compiler by Choudhury and Patra by a factor of $n$, we obtain the first AMPC with $O(n\kappa)$ bits per multiplication gate.
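For reference, the object an ACSS protocol must deliver is an ordinary Shamir sharing. The toy below shows sharing and reconstruction over a prime field; the actual contribution (verifiable, asynchronous delivery at $O(n\kappa)$ bits per sharing) is of course not captured by it, and the parameters are illustrative.

```python
import random

# Plain Shamir secret sharing over F_p.
p = 2**61 - 1
rng = random.Random(0)

def share(secret, t, n):
    """Random degree-t polynomial with f(0) = secret; party i gets f(i)."""
    coeffs = [secret] + [rng.randrange(p) for _ in range(t)]
    return [(i, sum(c * pow(i, j, p) for j, c in enumerate(coeffs)) % p)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 from any t+1 shares."""
    secret = 0
    for i, yi in shares:
        num = den = 1
        for j, _ in shares:
            if j != i:
                num = num * (-j) % p
                den = den * (i - j) % p
        secret = (secret + yi * num * pow(den, p - 2, p)) % p
    return secret

n, t = 7, 2          # e.g. optimal resilience t < n/3
shs = share(123456789, t, n)
assert reconstruct(shs[:t + 1]) == 123456789
assert reconstruct(shs[2:t + 3]) == 123456789   # any t+1 shares work
```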
Last updated:  2024-11-26
Don’t Use It Twice! Solving Relaxed Linear Code Equivalence Problems
Alessandro Budroni, Jesús-Javier Chi-Domínguez, Giuseppe D'Alconzo, Antonio J. Di Scala, and Mukul Kulkarni
The Linear Code Equivalence (LCE) Problem has received increased attention in recent years due to its applicability in constructing efficient digital signatures. Notably, the LESS signature scheme based on LCE is under consideration for the NIST post-quantum standardization process, along with the MEDS signature scheme that relies on an extension of LCE to the rank metric, namely the Matrix Code Equivalence (MCE) Problem. Building upon these developments, a family of signatures with additional properties, including linkable ring, group, and threshold signatures, has been proposed. These novel constructions introduce relaxed versions of LCE (and MCE), wherein multiple samples share the same secret equivalence. Despite their significance, these variations have often lacked a thorough security analysis, being assumed to be as challenging as their original counterparts. Addressing this gap, our work delves into the sample complexity of LCE and MCE --- precisely, the number of samples sufficient for efficient recovery of the shared secret equivalence. Our findings reveal, for instance, that one should not use the same secret twice in the LCE setting, since this enables a polynomial-time (and memory) algorithm to retrieve the secret. Consequently, our results unveil the insecurity of two advanced signatures based on variants of the LCE Problem.
Last updated:  2024-07-10
Towards Achieving Asynchronous MPC with Linear Communication and Optimal Resilience
Vipul Goyal, Chen-Da Liu-Zhang, and Yifan Song
Secure multi-party computation (MPC) allows a set of $n$ parties to jointly compute a function over their private inputs. The seminal works of Ben-Or, Canetti and Goldreich [STOC '93] and Ben-Or, Kelmer and Rabin [PODC '94] settled the feasibility of MPC over asynchronous networks. Despite the significant line of work devoted to improving the communication complexity, current protocols with information-theoretic security and optimal resilience $t<n/3$ communicate $\Omega(n^4C)$ field elements for a circuit with $C$ multiplication gates. In contrast, synchronous MPC protocols with $O(nC)$ communication have long been known. In this work we make progress towards closing this gap. We provide a novel MPC protocol in the asynchronous setting with statistical security that makes black-box use of an asynchronous complete secret-sharing (ACSS) protocol. The cost per multiplication reduces to the cost of distributing a constant number of sharings via ACSS, improving a linear factor over the state of the art by Choudhury and Patra [IEEE Trans. Inf. Theory '17]. With a recent concurrent work achieving ACSS with linear cost per sharing, we achieve an MPC with ${O}(nC)$ communication and optimal resilience.
Last updated:  2024-09-16
Perfectly-Secure MPC with Constant Online Communication Complexity
Yifan Song and Xiaxi Ye
In this work, we study the communication complexity of perfectly secure MPC protocols with guaranteed output delivery (GOD) against $t=(n-1)/3$ corruptions. The previously best-known result in this setting is due to Goyal, Liu, and Song (CRYPTO, 2019), which achieves $O(n)$ communication per gate, where $n$ is the number of parties. On the other hand, in the honest majority setting, a recent trend in designing efficient MPC protocols is to rely on packed Shamir sharings to speed up the online phase. In particular, the work by Escudero et al. (CCS 2022) gives the first semi-honest protocol that achieves a constant communication overhead per gate across all parties in the online phase while maintaining overall $O(n)$ communication per gate. We thus ask the following question: ``Is it possible to construct a perfectly secure MPC protocol with GOD such that the online communication per gate is $O(1)$ while maintaining overall $O(n)$ communication per gate?'' In this work, we give an affirmative answer by providing an MPC protocol for computing an arithmetic circuit $C$ over a finite field of size at least $2n$ with communication complexity $O(|C|+\mathsf{Depth}\cdot n+n^5 \cdot \log n)$ elements for the online phase, and $O(|C|\cdot n+\mathsf{Depth}\cdot n^2 + n^5 \cdot \log n)$ elements for the preprocessing phase, where $|C|$ is the circuit size and $\mathsf{Depth}$ is the circuit depth.
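The packing trick referenced above is easy to state: one polynomial of degree $t+k-1$ stores $k$ secrets at $k$ reserved evaluation points, so a single share per party carries $k$ values. A toy sketch with illustrative parameters and our own helper names:

```python
import random

# Packed Shamir: k secrets sit at points 0, -1, ..., -(k-1) of one
# degree-(t+k-1) polynomial; parties hold f(1..n). Any t+k shares
# reconstruct, while any t shares reveal nothing about the secrets.

p = 2**61 - 1
rng = random.Random(1)

def interp_at(points, x):
    """Lagrange-interpolate through `points` and evaluate at x."""
    acc = 0
    for i, yi in points:
        num = den = 1
        for j, _ in points:
            if j != i:
                num = num * (x - j) % p
                den = den * (i - j) % p
        acc = (acc + yi * num * pow(den, p - 2, p)) % p
    return acc

def pack_share(secrets, t, n):
    k = len(secrets)
    # fix the polynomial by the k secret slots plus t random points
    base = [(-s, v) for s, v in enumerate(secrets)]
    base += [(n + 1 + j, rng.randrange(p)) for j in range(t)]
    return [(i, interp_at(base, i)) for i in range(1, n + 1)]

n, t, k = 10, 3, 3                 # degree t + k - 1 = 5
secrets = [11, 22, 33]
shares = pack_share(secrets, t, n)
subset = shares[:t + k]            # t + k = degree + 1 shares suffice
assert [interp_at(subset, -s) for s in range(k)] == secrets
```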
Last updated:  2024-05-31
Consecutive Adaptor Signature Scheme: From Two-Party to N-Party Settings
Kaisei Kajita, Go Ohtake, and Tsuyoshi Takagi
Adaptor signatures have attracted attention as a tool to address scalability and interoperability issues in blockchain applications. Adaptor signatures can be constructed by extending common digital signature schemes that both authenticate a message and disclose a secret witness to a specific party. In Asiacrypt 2021, Aumayr et al. formulated the two-party adaptor signature as an independent cryptographic primitive. In this study, we extend their adaptor signature formulation to $N$ parties, present its generic construction, and define the security to be satisfied. Then, we present a concrete construction based on Schnorr signatures and discuss the security properties.
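For intuition, the two-party primitive being generalized can be sketched over a toy Schnorr group (group parameters, hash, and variable names below are illustrative assumptions; the paper's contribution is the $N$-party extension of this flow): a pre-signature is bound to a statement $Y = g^y$, completing it requires the witness $y$, and anyone holding both the pre-signature and the completed signature can extract $y$.

    import hashlib, random

    p, q, g = 1019, 509, 4        # toy safe-prime group (p = 2q + 1, ord(g) = q)

    def H(*vals):                  # Fiat-Shamir challenge hashed into Z_q
        data = b"|".join(str(v).encode() for v in vals)
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    x = random.randrange(1, q); X = pow(g, x, p)   # signer's key pair
    y = random.randrange(1, q); Y = pow(g, y, p)   # witness y, statement Y

    def pre_sign(m):               # pre-signature, bound to the statement Y
        r = random.randrange(1, q)
        R = pow(g, r, p)
        c = H(R * Y % p, X, m)
        return R, (r + c * x) % q

    def pre_verify(m, R, s_hat):   # checks the pre-signature w.r.t. Y
        c = H(R * Y % p, X, m)
        return pow(g, s_hat, p) == R * pow(X, c, p) % p

    def adapt(s_hat):              # the witness holder completes the signature
        return (s_hat + y) % q

    def verify(m, R, s):           # ordinary Schnorr verification
        c = H(R * Y % p, X, m)
        return pow(g, s, p) == R * Y * pow(X, c, p) % p

    R, s_hat = pre_sign("tx")
    assert pre_verify("tx", R, s_hat)
    s = adapt(s_hat)
    assert verify("tx", R, s)
    assert (s - s_hat) % q == y    # seeing both signatures reveals the witness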
Last updated:  2024-02-15
Implementation of Cryptanalytic Programs Using ChatGPT
Nobuyuki Sugio
Large language models (LLMs), exemplified by the advanced AI tool ChatGPT in 2023, have demonstrated remarkable capabilities in generating sentences, images, and program code, driven by training on extensive datasets. With over 100 million users worldwide, ChatGPT stands out as a leader among LLMs. Previous studies have shown its proficiency in generating program source code for the symmetric-key block ciphers AES, CHAM, and ASCON. This study ventures into the implementation of cryptanalytic program source code for a lightweight block cipher using ChatGPT, exploring its potential and limitations in the field of cryptography.
Last updated:  2024-05-26
Simulation-Secure Threshold PKE from Standard (Ring-)LWE
Hiroki Okada and Tsuyoshi Takagi
Threshold public key encryption (ThPKE) is PKE that can be decrypted by collecting “partial decryptions” from t (≤ N) out of N parties. ThPKE based on the learning with errors problem (LWE) is particularly important because it can be extended to threshold fully homomorphic encryption (ThFHE). ThPKE and ThFHE are fundamental tools for constructing multiparty computation (MPC) protocols: In 2023, NIST initiated a project (NIST IR 8214C) to establish guidelines for implementing threshold cryptosystems. Because MPC often requires simulation-security (SS), ThPKE schemes that satisfy SS (SS-ThPKE) are also important. Recently, Micciancio and Suhl (ePrint 2023/1728) presented an efficient SS-ThPKE scheme based on LWE with a polynomial modulus. However, the scheme requires the use of a nonstandard problem called “known-norm LWE” for the security proof because the norm ∥e∥ of the error of the public key is leaked from the partial decryptions. This leads to the following two challenges: 1) The construction based on LWE relies on a non-tight reduction from known-norm LWE to LWE. 2) No construction based on (standard) Ring-LWE has been presented. In this paper, we address both of these challenges: we propose an efficient SS-ThPKE scheme whose security is (directly) reduced from standard LWE/Ring-LWE with a polynomial modulus. The core technique of our construction is what we call “error sharing”: We distribute shares of a small error ζ via secret sharing, and use them to prevent leakage of ∥e∥ from partial decryptions.
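A minimal numeric sketch of the error-sharing idea, abstracting away all lattice structure (the modulus, party count, and variable names are illustrative assumptions): each party adds its share of a fresh small error ζ to its partial decryption, so the combiner only ever sees e + ζ rather than e itself.

    import random

    Q = 2**16 + 1        # toy modulus
    N = 3                # parties; N-out-of-N additive sharing for simplicity

    def additive_share(v):
        shares = [random.randrange(Q) for _ in range(N - 1)]
        return shares + [(v - sum(shares)) % Q]

    m_delta = 12345                         # stand-in for the encoded message
    e = 5                                   # public-key error; ||e|| must stay hidden
    zeta = random.randrange(-50, 51) % Q    # fresh shared small error
    lin_shares = additive_share((m_delta + e) % Q)  # shares of the noisy linear part
    zeta_shares = additive_share(zeta)

    partials = [(lin_shares[i] + zeta_shares[i]) % Q for i in range(N)]
    assert sum(partials) % Q == (m_delta + e + zeta) % Q
    # the combined value carries e + zeta, never e alone, masking ||e||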
Last updated:  2024-11-12
A Single Trace Fault Injection Attack on Hedged CRYSTALS-Dilithium
Sönke Jendral
CRYSTALS-Dilithium is a post-quantum secure digital signature algorithm currently being standardised by NIST. As a result, devices making use of CRYSTALS-Dilithium will soon become generally available and be deployed in various environments. It is thus important to assess the resistance of CRYSTALS-Dilithium implementations to physical attacks. In this paper, we present an attack on a CRYSTALS-Dilithium implementation in hedged mode on an ARM Cortex-M4 using fault injection. Voltage glitching is performed to skip computation of a seed during the generation of the signature. We identified settings that consistently skip the desired function without crashing the device. After the successful fault injection, the resulting signature allows for the extraction of the secret key vector. Our attack succeeds with probability 0.582 in a single trace. We also propose countermeasures against the presented attack.
Last updated:  2024-02-14
Collusion-Resilience in Transaction Fee Mechanism Design
Hao Chung, Tim Roughgarden, and Elaine Shi
Users bid in a transaction fee mechanism (TFM) to get their transactions included and confirmed by a blockchain protocol. Roughgarden (EC'21) initiated the formal treatment of TFMs and proposed three requirements: user incentive compatibility (UIC), miner incentive compatibility (MIC), and a form of collusion-resilience called OCA-proofness. Ethereum's EIP-1559 mechanism satisfies all three properties simultaneously when there is no contention between transactions, but loses the UIC property when there are too many eligible transactions to fit in a single block. Chung and Shi (SODA'23) considered an alternative notion of collusion-resilience, called c-side-contract-proofness (c-SCP), and showed that, when there is contention between transactions, no TFM can satisfy UIC, MIC, and c-SCP for any c at least 1. OCA-proofness asserts that the users and a miner should not be able to "steal from the protocol" and is intuitively weaker than the c-SCP condition, which stipulates that a coalition of a miner and a subset of users should not be able to profit through strategic deviations (whether at the expense of the protocol or of the users outside the coalition). Our main result is the first proof that, when there is contention between transactions, no (possibly randomized) direct-revelation TFM satisfies UIC, MIC, and OCA-proofness. This result resolves the main open question in Roughgarden (EC'21). We also suggest several relaxations of the basic model that allow our impossibility result to be circumvented.
Last updated:  2025-02-27
New Black-Box Separations through Mathematically Structured Primitives
Hart Montgomery and Sikhar Patranabis
We provide a novel view of public-key cryptography by showing full equivalences of certain primitives to "hard" monoid actions. More precisely, we show that key exchange and two-party computation are exactly equivalent to monoid actions with certain structural and hardness properties. To the best of our knowledge, this is the first "natural" characterization of the mathematical structure inherent to any key exchange or two-party computation protocol, and the first explicit proof of the necessity of mathematical structure for public-key cryptography. We then utilize these characterizations to show new black-box separation results. Concretely, we obtain the following results: Two-Party Key Exchange. We show that any two-party noninteractive key exchange protocol is equivalent to the existence of an abelian monoid action equipped with a natural hardness property, namely (distributional) unpredictability. More generally, we show that any $k$-round (two-party) key exchange protocol is essentially equivalent to the existence of a (distributional) unpredictable monoid action with certain commutator-like properties. Rudich (Crypto '91) shows a black-box separation of $k$-round and $(k+1)$-round key exchange for any $k$; we use our generic primitive here to formalize this result and extend it to efficient key exchange protocols (where communication is $\textsf{poly}(k)$). Two-Party Computation. We show that any maliciously secure two-party computation protocol is also equivalent to a monoid action with commutator-like properties and certain hardness guarantees. We then use a generic version of this primitive to show a black-box separation between $k$-round semi-honest secure two-party computation and $(k+1)$-round maliciously secure two-party computation. This yields the first black-box separation (to our knowledge) between $k$-round and $(k+1)$-round maliciously secure two-party computation protocols.
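As a toy instance of the abstraction (the group and parameters are illustrative assumptions, not the paper's formalism), modular exponentiation is an abelian monoid action on a cyclic group, and non-interactive key exchange is literally "apply your monoid element to the other party's message":

    import random

    p, g = 2**61 - 1, 3              # toy cyclic group, illustrative only

    def act(a, X):                   # the monoid action: a * X := X^a mod p
        return pow(X, a, p)

    a = random.randrange(2, p - 1)   # Alice's secret monoid element
    b = random.randrange(2, p - 1)   # Bob's secret monoid element
    A, B = act(a, g), act(b, g)      # the two exchanged messages

    # commutativity of the action gives agreement; (distributional)
    # unpredictability of the action (here, Diffie-Hellman hardness)
    # is exactly the hardness property the equivalence isolates
    assert act(a, B) == act(b, A)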
Last updated:  2024-06-18
Pseudorandom Error-Correcting Codes
Miranda Christ and Sam Gunn
We construct pseudorandom error-correcting codes (or simply pseudorandom codes), which are error-correcting codes with the property that any polynomial number of codewords are pseudorandom to any computationally-bounded adversary. Efficient decoding of corrupted codewords is possible with the help of a decoding key. We build pseudorandom codes that are robust to substitution and deletion errors, where pseudorandomness rests on standard cryptographic assumptions. Specifically, pseudorandomness is based on either $2^{O(\sqrt{n})}$-hardness of LPN, or polynomial hardness of LPN and the planted XOR problem at low density. As our primary application of pseudorandom codes, we present an undetectable watermarking scheme for outputs of language models that is robust to cropping and a constant rate of random substitutions and deletions. The watermark is undetectable in the sense that any number of samples of watermarked text are computationally indistinguishable from text output by the original model. This is the first undetectable watermarking scheme that can tolerate a constant rate of errors. Our second application is to steganography, where a secret message is hidden in innocent-looking content. We present a constant-rate stateless steganography scheme with robustness to a constant rate of substitutions. Ours is the first stateless steganography scheme with provable steganographic security and any robustness to errors.
Last updated:  2024-05-30
Bare PAKE: Universally Composable Key Exchange from just Passwords
Manuel Barbosa, Kai Gellert, Julia Hesse, and Stanislaw Jarecki
In the past three decades, an impressive body of knowledge has been built around secure and private password authentication. In particular, secure password-authenticated key exchange (PAKE) protocols require only minimal overhead over a classical Diffie-Hellman key exchange. PAKEs are also known to fulfill strong composable security guarantees that capture many password-specific concerns such as password correlations or password mistyping, to name only a few. However, to enjoy both round-optimality and strong security, applications of PAKE protocols must provide unique session and participant identifiers. If such identifiers are not readily available, they must be agreed upon at the cost of additional communication flows, a fact which has been met with incomprehension among practitioners, and which hindered the adoption of provably secure password authentication in practice. In this work, we resolve this issue by proposing a new paradigm for truly password-only yet securely composable PAKE, called bare PAKE. We formally prove that two prominent PAKE protocols, namely CPace and EKE, can be cast as bare PAKEs and hence do not require pre-agreement of anything else than a password. Our bare PAKE modeling further allows us to investigate a novel "reusability" property of PAKEs, i.e., whether $n^2$ pairwise keys can be exchanged from only $n$ messages, just as the Diffie-Hellman non-interactive key exchange can do in a public-key setting. As a side contribution, this add-on property of bare PAKEs leads us to observe that some previous PAKE constructions relied on unnecessarily strong, "reusable" building blocks. By showing that ``non-reusable'' tools suffice for standard PAKE, we open a new path towards round-optimal post-quantum secure password-authenticated key exchange.
Last updated:  2024-02-14
Cayley hashing with cookies
Vladimir Shpilrain and Bianca Sosnovski
Cayley hash functions are based on a simple idea of using a pair of semigroup elements, $A$ and $B$, to hash the 0 and 1 bit, respectively, and then to hash an arbitrary bit string in the natural way, by using multiplication of elements in the semigroup. The main advantage of Cayley hash functions compared to, say, hash functions in the SHA family is that when an already hashed document is amended, one does not have to hash the whole amended document all over again, but rather hash just the amended part and then multiply the result by the hash of the original document. Some authors argued that this may be a security hazard, specifically that this property may facilitate finding a second preimage by splitting a long bit string into shorter pieces. In this paper, we offer a way to get rid of this alleged disadvantage and keep the advantages at the same time. We call this method ``Cayley hashing with cookies" using terminology borrowed from the theory of random walks in a random environment. For the platform semigroup, we use $2\times 2$ matrices over $F_p$.
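A minimal sketch of the mechanism just described (the prime and the two generator matrices are illustrative placeholders, not the authors' parameters): hashing is a product of matrices, so the update property $H(u \| v) = H(u) \cdot H(v)$ falls out immediately.

    p = 2**31 - 1
    A = ((1, 2), (0, 1))             # hashes bit 0 (illustrative generator)
    B = ((1, 0), (2, 1))             # hashes bit 1 (illustrative generator)

    def mat_mul(X, Y):
        return tuple(
            tuple(sum(X[i][k] * Y[k][j] for k in range(2)) % p for j in range(2))
            for i in range(2)
        )

    def cayley_hash(bits):
        H = ((1, 0), (0, 1))         # identity matrix
        for b in bits:
            H = mat_mul(H, B if b else A)
        return H

    u, v = [1, 0, 1, 1], [0, 0, 1]
    # amending a document only requires hashing the new part:
    assert cayley_hash(u + v) == mat_mul(cayley_hash(u), cayley_hash(v))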
Last updated:  2025-02-15
On the Security of Nova Recursive Proof System
Hyeonbum Lee and Jae Hong Seo
Nova is a new type of recursive proof system that uses a folding scheme as its core building block. This brilliant idea of folding relations can significantly reduce the recursion overhead. In this paper, we study some issues related to Nova’s soundness proof, which relies on the soundness of the folding scheme in a recursive manner. First, due to its recursive nature, the proof strategy inevitably causes the running time of the recursive extractor to expand polynomially for each additional recursive step. This constrains Nova's soundness model to only logarithmically bounded recursive steps. Consequently, the soundness proof in this limited model does not guarantee soundness for a linear number of rounds in the security parameter, such as 128 rounds for 128-bit security. On the other hand, there are no known attacks on the arbitrary-depth recursion of Nova, leaving a gap between theoretical security guarantees and real-world attacks. We aim to bridge this gap in two opposite directions. In the negative direction, we present a recursive proof system that is unforgeable in a log-round model but forgeable if used in linear rounds. This shows that the soundness proof in the log-round model might not be applicable to real-world applications that require linearly long rounds. In the positive direction, we show that when Nova uses a specific group-based folding scheme, its knowledge soundness over polynomial rounds can be proven in the Extended Algebraic Group Model (EAGM), our novel computational model that lies between the Algebraic Group Model (AGM) and the Generic Group Model (GGM). To the best of our knowledge, this is the first result to show Nova's polynomial-round soundness. Second, the folding scheme is converted non-interactively via the Fiat-Shamir transformation and then arithmetized into R1CS. Therefore, the soundness of Nova using the non-interactive folding scheme essentially relies on the heuristic random oracle instantiation in the standard model. In our new soundness proof for Nova in the EAGM, we replace this heuristic with a new computational assumption for a cryptographic hash function, called the General Zero-Testing assumption. We treat this hash assumption as an independent subject of interest and expect it to contribute to a deeper understanding of Nova's soundness.
Last updated:  2024-02-14
Need for Speed: Leveraging the Power of Functional Encryption for Resource-Constrained Devices
Eugene Frimpong, Alexandros Bakas, Camille Foucault, and Antonis Michalas
Functional Encryption (FE) is a cutting-edge cryptographic technique that enables a user with a specific functional decryption key to determine a certain function of encrypted data without gaining access to the underlying data. Given its potential and the fact that FE is still a relatively new field, we set out to investigate how it could be applied to resource-constrained environments. This work presents what we believe to be the first lightweight FE scheme explicitly designed for resource-constrained devices. We also propose a use case protocol that demonstrates how our scheme can secure an Internet of Things (IoT) architecture where relevant devices collect data and securely deliver them to a storage server, where an analyst can request access to the encrypted data. Finally, we conduct thorough experiments on two commercially available resource-constrained devices to provide compelling evidence of our approach's practicality and efficiency. Although the results of our evaluations show that there is room for improvement in the proposed scheme, this work represents one of the first attempts to apply FE to the IoT setting that can directly impact people's daily lives and the everyday operations of organizations.
Last updated:  2024-05-10
Analysis of Layered ROLLO-I: A BII-LRPC code-based KEM
Seongtaek Chee, Kyung Chul Jeong, Tanja Lange, Nari Lee, Alex Pellegrini, and Hansol Ryu
We analyze Layered ROLLO-I, a code-based cryptosystem published in IEEE Communications Letters and submitted to the Korean post-quantum cryptography competition. Four versions of Layered ROLLO-I have been proposed in the competition. We show that the first two versions do not provide the claimed security against rank decoding attacks and give reductions to small instances of the original ROLLO-I scheme, which was a candidate in the NIST competition and eliminated there due to rank decoding attacks. As a second contribution, we provide two efficient message recovery attacks, affecting every security level of the first three versions of Layered ROLLO-I and security levels 128 and 192 of the fourth version.
Last updated:  2024-02-14
Strong Batching for Non-Interactive Statistical Zero-Knowledge
Changrui Mu, Shafik Nassar, Ron D. Rothblum, and Prashant Nalini Vasudevan
A zero-knowledge proof enables a prover to convince a verifier that $x \in S$, without revealing anything beyond this fact. By running a zero-knowledge proof $k$ times, it is possible to prove (still in zero-knowledge) that $k$ separate instances $x_1,\dots,x_k$ are all in $S$. However, this increases the communication by a factor of $k$. Can one do better? In other words, is (non-trivial) zero-knowledge batch verification for $S$ possible? Recent works by Kaslasi et al. (TCC 2020, Eurocrypt 2021) show that any problem possessing a non-interactive statistical zero-knowledge proof (NISZK) has a non-trivial statistical zero-knowledge batch verification protocol. Their results had two major limitations: (1) to batch verify $k$ inputs of size $n$ each, the communication in their batch protocol is roughly $\textrm{poly}(n,\log{k})+O(k)$, which is better than the naive cost of $k \cdot \textrm{poly}(n)$ but still scales linearly with $k$, and (2) the batch protocol requires $\Omega(k)$ rounds of interaction. In this work, we remove both of these limitations by showing that any problem in NISZK has a non-interactive statistical zero-knowledge batch verification protocol with communication $\textrm{poly}(n,\log{k})$.
Last updated:  2024-02-19
On the Untapped Potential of the Quantum FLT-based Inversion
Ren Taguchi and Atsushi Takayasu
Thus far, several papers have estimated the concrete quantum resources of Shor’s algorithm for solving the binary elliptic curve discrete logarithm problem. In particular, the complexity of computing quantum inversions over a binary field $\mathbb{F}_{2^n}$ dominates the cost of running the algorithm, where $n$ is the degree of the binary elliptic curve. There are two major methods for quantum inversion: quantum GCD-based inversion and quantum FLT-based inversion. The latter is known to require more qubits, but it is valuable since it requires far fewer Toffoli gates and less depth. When $n = 571$, Kim-Hong’s quantum GCD-based inversion algorithm (Quantum Information Processing 2023) and Taguchi-Takayasu’s quantum FLT-based inversion algorithm (CT-RSA 2023) require 3,473 qubits and 8,566 qubits, respectively; in contrast, for the same $n = 571$, the latter algorithm requires only 2.3% of the Toffoli gates and 84% of the depth of the former. In this paper, we modify Taguchi-Takayasu’s quantum FLT-based inversion algorithm to reduce the required qubits. While Taguchi-Takayasu’s FLT-based inversion algorithm takes an addition chain for $n-1$ as input and stores a sequence of intermediate values whose number equals the length of the chain, our proposed algorithm employs an uncomputation step and stores a shorter sequence. As a result, our proposed algorithm requires only 3,998 qubits for $n = 571$, which is only 15% more than Kim-Hong’s GCD-based inversion algorithm. Furthermore, it preserves the advantage of FLT-based inversion, requiring only 3.7% of the Toffoli gates and 77% of the depth of Kim-Hong’s GCD-based inversion algorithm for $n = 571$.
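For readers unfamiliar with FLT-based (Itoh-Tsujii) inversion, here is the classical computation the quantum circuits implement, sketched for the toy case $n = 8$ with the AES polynomial and the illustrative addition chain 1, 2, 3, 6, 7 for $n-1 = 7$; the quantum variants differ precisely in which intermediate values $\beta_k = x^{2^k-1}$ are kept or uncomputed, which is where the qubit counts above come from.

    N, MOD = 8, 0b100011011           # GF(2^8) with x^8 + x^4 + x^3 + x + 1

    def gf_mul(a, b):                 # shift-and-reduce multiplication
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a >> N:
                a ^= MOD
            b >>= 1
        return r

    def gf_sqr_k(a, k):               # a^(2^k): k Frobenius squarings
        for _ in range(k):
            a = gf_mul(a, a)
        return a

    def flt_inverse(x):
        # beta_k = x^(2^k - 1); chain rule: beta_{j+k} = (beta_j)^(2^k) * beta_k
        b1 = x                                      # beta_1
        b2 = gf_mul(gf_sqr_k(b1, 1), b1)            # beta_2 (2 = 1 + 1)
        b3 = gf_mul(gf_sqr_k(b2, 1), b1)            # beta_3 (3 = 2 + 1)
        b6 = gf_mul(gf_sqr_k(b3, 3), b3)            # beta_6 (6 = 3 + 3)
        b7 = gf_mul(gf_sqr_k(b6, 1), b1)            # beta_7 = x^(2^7 - 1)
        return gf_sqr_k(b7, 1)                      # x^(2^8 - 2) = x^(-1)

    for x in range(1, 256):
        assert gf_mul(x, flt_inverse(x)) == 1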
Last updated:  2024-04-01
Adaptively Sound Zero-Knowledge SNARKs for UP
Surya Mathialagan, Spencer Peters, and Vinod Vaikuntanathan
We study succinct non-interactive arguments (SNARGs) and succinct non-interactive arguments of knowledge (SNARKs) for the class $\mathsf{UP}$ in the reusable designated verifier model. $\mathsf{UP}$ is an expressive subclass of $\mathsf{NP}$ consisting of all $\mathsf{NP}$ languages where each instance has at most one witness; a designated verifier SNARG (dvSNARG) is one where verification of the SNARG proof requires a private verification key; and such a dvSNARG is reusable if soundness holds even against a malicious prover with oracle access to the (private) verification algorithm. Our main results are as follows. (1) A reusably and adaptively sound zero-knowledge (zk) dvSNARG for $\mathsf{UP}$, from subexponential LWE and evasive LWE (a relatively new but popular variant of LWE). Our SNARGs achieve very short proofs of length $(1 + o(1)) \cdot \lambda$ bits for $2^{-\lambda}$ soundness error. (2) A generic transformation that lifts any ``Sahai-Waters-like'' (zk) SNARG to an adaptively sound (zk) SNARG, in the designated-verifier setting. In particular, this shows that the Sahai-Waters SNARG for $\mathsf{NP}$ is adaptively sound in the designated verifier setting, assuming subexponential hardness of the underlying assumptions. The resulting SNARG proofs have length $(1 + o(1)) \cdot \lambda$ bits for $2^{-\lambda}$ soundness error. Our result sidesteps the Gentry-Wichs barrier for adaptive soundness by employing an exponential-time security reduction. (3) A generic transformation, building on the work of Campanelli, Ganesh, that lifts any adaptively sound (zk) SNARG for $\mathsf{UP}$ to an adaptively sound (zk) SNARK for $\mathsf{UP}$, while preserving zero-knowledge. The resulting SNARK achieves the strong notion of black-box extraction. There are barriers to achieving such SNARKs for all of $\mathsf{NP}$ from falsifiable assumptions, so our restriction to $\mathsf{UP}$ is, in a sense, necessary. Applying (3) to our SNARG for $\mathsf{UP}$ from evasive LWE (1), we obtain a reusably and adaptively sound designated-verifier zero-knowledge SNARK for $\mathsf{UP}$ from subexponential LWE and evasive LWE. Moreover, applying both (2) and (3) to the Sahai-Waters SNARG, we obtain the same result from LWE, subexponentially secure one-way functions, and subexponentially secure indistinguishability obfuscation. Both constructions have succinct proofs of size $\mathsf{poly}(\lambda)$. These are the first SNARK constructions (even in the designated-verifier setting) for a non-trivial subset of $\mathsf{NP}$ from (sub-exponentially) falsifiable assumptions.
Last updated:  2024-04-25
Attribute-based Keyed (Fully) Homomorphic Encryption
Keita Emura, Shingo Sato, and Atsushi Takayasu
Keyed homomorphic public key encryption (KHPKE) is a variant of homomorphic public key encryption, where only users who have a homomorphic evaluation key can perform a homomorphic evaluation. Then, KHPKE satisfies the CCA2 security against users who do not have a homomorphic evaluation key, while it satisfies the CCA1 security against users who have the key. Thus far, several KHPKE schemes have been proposed under the standard Diffie-Hellman-type assumptions and keyed fully homomorphic encryption (KFHE) schemes have also been proposed from lattices although there are no KFHE schemes secure solely under the LWE assumption in the standard model. As a natural extension, there is an identity-based variant of KHPKE; however, the security is based on a $q$-type assumption and there are no attribute-based variants. Moreover, there are no identity-based variants of KFHE schemes due to the complex design of the known KFHE schemes. In this paper, we provide two constructions of attribute-based variants. First, we propose an attribute-based KFHE (ABKFHE) scheme from lattices. We start by designing the first KFHE scheme secure solely under the LWE assumption in the standard model. Since the design is conceptually much simpler than known KFHE schemes, we replace their building blocks with attribute-based ones and obtain the proposed ABKFHE schemes. Next, we propose an efficient attribute-based KHPKE (ABKHE) scheme from a pair encoding scheme (PES). Due to the benefit of PES, we obtain various ABKHE schemes that contain the first identity-based KHPKE scheme secure under the standard $k$-linear assumption and the first pairing-based ABKHE schemes supporting more expressive predicates.
Last updated:  2025-02-26
Universal Computational Extractors and Multi-Bit AIPO from Lattice Assumptions
Yilei Chen and Xinyu Mao
We put forth a new primitive called obliviously programmable function (OPF) to construct two random-oracle-like primitives: • Universal computational extractors (UCEs), introduced by Bellare, Hoang, and Keelveedhi [BHK13], can securely replace random oracles in various applications, including KDM-secure encryption, deterministic encryption, RSA-OAEP, universal hardcore bits, etc. • Multi-bit point obfuscation with auxiliary input (MB-AIPO). It enables upgrading CPA-secure public-key encryption (PKE) into a CCA-secure one [MH14] and serves as a tool to instantiate the random oracles used in the Fujisaki-Okamoto transform for lossy PKEs [MOZ23]. Despite their usefulness, constructing UCEs and MB-AIPO in the standard model is challenging. The existing constructions of both primitives [BM14a, BM14b] use indistinguishability obfuscation (iO) plus point functions with auxiliary input (AIPO). OPF can replace the use of iO in the constructions of UCE and MB-AIPO. We use OPF plus AIPO to construct • UCE with one query for strongly unpredictable sources, • MB-AIPO for strongly unpredictable distributions, and • a PKE scheme that is IND-CPA secure in the presence of computationally uninvertible leakage on the secret key. We then construct OPF for NC1 circuits from lattice assumptions based on the GGH15 encodings [GGH15], without using iO. In sum, we give new constructions of the above three primitives under the following assumptions: (1) LWE with subexponential hardness; (2) private-coin evasive LWE assumption for specific samplers; (3) the existence of AIPO in NC1. As a byproduct, we construct an ‘NC1-universal AIPO’ under the assumptions (1) and (2).
Last updated:  2024-02-13
Amplification of Non-Interactive Zero Knowledge, Revisited
Nir Bitansky and Nathan Geier
In an $(\alpha,\beta)$-weak non-interactive zero knowledge (NIZK), the soundness error is at most $\alpha$ and the zero-knowledge error is at most $\beta$. Goyal, Jain, and Sahai (CRYPTO 2019) show that if $\alpha+\beta<1$ for some constants $\alpha,\beta$, then $(\alpha,\beta)$-weak NIZK can be turned into fully-secure NIZK, assuming sub-exponentially-secure public-key encryption. We revisit the problem of NIZK amplification: – We amplify NIZK arguments assuming only polynomially-secure public-key encryption, for any constants $\alpha+\beta<1$. – We amplify NIZK proofs assuming only one-way functions, for any constants $\alpha+\beta<1$. – When the soundness error $\alpha$ is negligible to begin with, we can also amplify NIZK arguments assuming only one-way functions. Our results are based on the hidden-bits paradigm, and can be viewed as a reduction from NIZK amplification to the better-understood problem of pseudorandomness amplification.
Last updated:  2024-08-23
Game-Theoretically Fair Distributed Sampling
Sri AravindaKrishnan Thyagarajan, Ke Wu, and Pratik Soni
Cleve's celebrated result (STOC'86) showed that a strongly fair multi-party coin-toss is impossible in the presence of majority-sized coalitions. Recently, however, a fascinating line of work studied a relaxed fairness notion called \emph{game-theoretic fairness}, which guarantees that no coalition should be incentivized to deviate from the prescribed protocol. A sequence of works has explored the feasibility of game-theoretic fairness for \emph{two-sided} coin-toss, and indeed demonstrated feasibility in the dishonest majority setting under standard cryptographic assumptions. In fact, the recent work of Wu, Asharov, and Shi (EUROCRYPT'22) completely characterized the regime where game-theoretic fairness is feasible. However, this line of work is largely restricted to two-sided coin-toss, and more precisely on a \emph{uniform} coin-toss (i.e., Bernoulli with parameter $1/2$). The only exceptions are the works on game-theoretically fair leader election, which can be viewed as a special case of uniform $n$-sided coin-toss where $n$ is the number of parties. In this work, we \emph{initiate} the comprehensive study of game-theoretic fairness for multi-party \emph{sampling from general distributions}. In particular, for the case of $m$-sided \emph{uniform} coin-toss we give a nearly complete characterization of the regime in which game-theoretic fairness is feasible. Interestingly, contrary to standard fairness notions in cryptography, the composition of game-theoretically fair two-sided coin-toss protocols does not necessarily yield game-theoretically fair multi-sided coins. To circumvent this, we introduce new techniques compatible with game-theoretic fairness. In particular, we give the following results: - We give a protocol from standard cryptographic assumptions that achieves game-theoretic fairness for uniform $m$-sided coin-toss against half- or more-sized adversarial coalitions. - To complement our protocol, we give a general impossibility result that establishes the optimality of our protocol for a broad range of parameters modulo an additive constant. Even in the worst-case, the gap between our protocol and our impossibility result is only a small constant multiplicative factor. - We also present a game-theoretically fair protocol for \emph{any} efficiently sampleable $m$-outcome distribution in the dishonest majority setting. For instance, even for the case of $m=2$ (i.e., two-sided coin-toss), our result implies a game-theoretically fair protocol for an \emph{arbitrary} Bernoulli coin. In contrast, the work of Wu, Asharov, and Shi only focussed on a Bernoulli coin with parameter $1/2$.
Last updated:  2024-06-07
Reducing the Number of Qubits in Quantum Factoring
Clémence Chevignard, Pierre-Alain Fouque, and André Schrottenloher
This paper focuses on the optimization of the number of logical qubits in quantum algorithms for factoring and computing discrete logarithms in $\mathbb{Z}_N^*$. These algorithms contain an exponentiation circuit modulo $N$, which is responsible for most of their cost, both in qubits and operations. In this paper, we show that using only $o(\log N)$ work qubits, one can obtain the least significant bits of the modular exponentiation output. We combine this result with May and Schlieper's truncation technique (ToSC 2022) and the Ekerå-Håstad variant of Shor's algorithm (PQCrypto 2017) to solve the discrete logarithm problem in $\mathbb{Z}_N^*$ using only $d + o(\log N)$ qubits, where $d$ is the bit-size of the logarithm. Consequently, we can factor $n$-bit RSA moduli using $n/2 + o(n)$ qubits, while current envisioned implementations require about $2n$ qubits. Our algorithm uses a Residue Number System and succeeds with a parametrizable probability. Being completely classical, it has been implemented and tested by us. For RSA factorization, we can reach a gate count $\mathcal{O}(n^3)$ for a depth $\mathcal{O}(n^2 \log^3 n)$, which then has to be multiplied by $\mathcal{O}(\log n)$ (the number of measurement results required by Ekerå-Håstad). To factor an RSA-2048 instance, we estimate that 1730 logical qubits and $2^{36}$ Toffoli gates will suffice for a single run, and the algorithm needs on average 40 runs. To solve a discrete logarithm instance of 224 bits (112-bit classical security) in a safe-prime group of 2048 bits, we estimate that 684 logical qubits would suffice, and 20 runs with $2^{32}$ Toffoli gates each.
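A classical sketch of the Residue Number System ingredient (the moduli and operand sizes are toy assumptions): large intermediates are carried as small independent residues and recombined by CRT only when needed, which is the kind of decomposition that lets a circuit avoid holding full-width intermediate values.

    from math import prod

    MODULI = [251, 253, 255, 256]   # pairwise coprime; product exceeds any product below

    def to_rns(x):
        return [x % m for m in MODULI]

    def rns_mul(xs, ys):            # digit-wise products, no wide multiplication
        return [(a * b) % m for a, b, m in zip(xs, ys, MODULI)]

    def from_rns(rs):               # Chinese Remainder recombination
        M = prod(MODULI)
        x = 0
        for r, m in zip(rs, MODULI):
            Mi = M // m
            x = (x + r * Mi * pow(Mi, -1, m)) % M
        return x

    N = 187                         # toy RSA-like modulus
    x, y = 123, 101
    assert from_rns(rns_mul(to_rns(x), to_rns(y))) % N == (x * y) % N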
Last updated:  2025-04-22
Mastic: Private Weighted Heavy-Hitters and Attribute-Based Metrics
Dimitris Mouris, Christopher Patton, Hannah Davis, Pratik Sarkar, and Nektarios Georgios Tsoutsos
Insight into user experience and behavior is critical to the success of large software systems and web services. Gaining such insights, while preserving user privacy, is a significant challenge. Recent advancements in multi-party computation have made it practical to securely compute aggregates over secret shared data. Two such protocols have emerged as candidates for standardization at the IETF: Prio (NSDI 2017) for general-purpose statistics; and Poplar (IEEE S&P 2021) for heavy hitters, where the goal is to compute the most popular inputs held by users without learning the inputs themselves. While each of these protocols is well-suited to certain applications, there remain a number of use cases identified by IETF for which neither Prio nor Poplar is practical. We introduce Mastic, a protocol for the following functionality: each of a large number of clients holds an input (e.g., a URL) and its corresponding weight (e.g., page load time); for a given candidate input (or prefix), a small number of non-colluding servers wish to securely aggregate the weights of clients that hold that input (or some input with that prefix), without learning the weights or which client holds which input. This functionality makes two new classes of applications possible. The first is a natural generalization of heavy hitters we call weighted heavy-hitters. The second is an enhancement of Prio-style metrics we call attribute-based metrics in which aggregates are grouped by hierarchical user attributes (e.g., their geographic location or software version). We demonstrate Mastic's practicality for these applications with a real-world example of each. We also compare our protocol with Prio and Poplar on a wide area network. Overall, we report over one order of magnitude performance improvement over Poplar for plain heavy-hitters and $1.5-2\times$ improvement over Prio for attribute-based metrics.
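The following toy replaces Mastic's verifiable distributed point functions with plain additive sharing, to show only the aggregation contract (all names and values are illustrative): each client secret-shares "weight if my input has the candidate prefix, else 0" between two servers, and only the sum of the two server aggregates reveals anything.

    import random

    Q = 2**61 - 1
    candidate_prefix = "example.com/"

    clients = [("example.com/a", 120), ("other.org/b", 340), ("example.com/c", 80)]

    server0 = server1 = 0
    for url, weight in clients:
        value = weight if url.startswith(candidate_prefix) else 0
        r = random.randrange(Q)                  # share for server 0: uniform
        server0 = (server0 + r) % Q
        server1 = (server1 + (value - r)) % Q    # share for server 1: completes it

    # neither running sum alone reveals anything; together they give the
    # weighted aggregate for the candidate prefix: 120 + 80
    assert (server0 + server1) % Q == 200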
Last updated:  2024-02-22
Security of Symmetric Ratchets and Key Chains - Implications for Protocols like TLS 1.3, Signal, and PQ3
John Preuß Mattsson
Symmetric ratchets and one-way key chains play a vital role in numerous important security protocols such as TLS 1.3, DTLS 1.3, QUIC, Signal, MLS, EDHOC, OSCORE, and Apple PQ3. Despite the crucial role they play, very little is known about their security properties. This paper categorizes and examines different ratchet constructions, offering a comprehensive overview of their security. Our analysis reveals notable distinctions between different types of one-way key chains. Notably, the type of ratchet used by TLS 1.3, Signal, and PQ3 exhibits a significant number of weak keys, an unexpectedly high rate of key collisions surpassing birthday attack expectations, and a predictable shrinking key space susceptible to novel Time-Memory Trade-Off (TMTO) attacks with complexity $\approx N^{1/4}$. Consequently, the security level provided by, e.g., TLS 1.3 is significantly lower than anticipated. To address these concerns, we analyze the aforementioned protocols and provide numerous concrete recommendations for enhancing their security, as well as guidance for future security protocol design.
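The object under analysis is easy to model: a symmetric ratchet iterates a one-way function on the key, $k_{i+1} = H(k_i)$, with no fresh input. The sketch below (a toy 16-bit key space and truncated SHA-256, both illustrative assumptions) lets one watch the reachable key space shrink under iteration, the effect behind the weak keys, excess collisions, and $\approx N^{1/4}$ TMTO attacks reported above.

    import hashlib

    BITS = 16                        # deliberately tiny key space for illustration

    def ratchet(k):                  # k_{i+1} = H(k_i), modeled by truncated SHA-256
        d = hashlib.sha256(k.to_bytes(2, "big")).digest()
        return int.from_bytes(d[:2], "big")

    keys = set(range(2 ** BITS))     # every possible initial key
    for step in range(4):
        keys = {ratchet(k) for k in keys}
        print(f"after {step + 1} steps: {len(keys)} reachable keys")
    # the reachable set shrinks each step (to roughly 63% after one step, as
    # for a random function), so later chain keys live in a predictably
    # smaller, collision-rich space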
Last updated:  2025-02-17
Singular points of UOV and VOX
Pierre Pébereau
In this work, we study the singular locus of the varieties defined by the public keys of UOV and VOX, two multivariate signature schemes submitted to the additional NIST call for post-quantum signature schemes. We give a new attack for UOV$\hat +$ and VOX targeting singular points of the underlying UOV key. Our attack lowers the security of the schemes, both asymptotically and in number of gates, showing in particular that the parameter sets proposed for these schemes do not meet the NIST security requirements. More precisely, we show that the security of VOX/UOV$\hat +$ was overestimated by factors $2^{2}, 2^{18}, 2^{37}$ for security levels I, III, and V, respectively. As an essential element of the attack on VOX, we introduce a polynomial-time algorithm performing a key recovery from one vector, with an implementation requiring only $15$ seconds at security level V.
Last updated:  2024-02-16
Lightweight Leakage-Resilient PRNG from TBCs using Superposition
Mustafa Khairallah, Srinivasan Yadhunathan, and Shivam Bhasin
In this paper, we propose a leakage-resilient pseudo-random number generator (PRNG) design that leverages the rekeying techniques of the PSV-Enc encryption scheme and the superposition property of the Superposition-Tweak-Key (STK) framework. The random seed of the PRNG is divided into two parts; one part is used as an ephemeral key that changes every two calls to a tweakable block cipher (TBC), and the other part is used as a static long-term key. Using the superposition property, we show that it is possible to eliminate observable leakage by only masking the static key. Thus, our proposal itself can be seen as a superposition of masking and rekeying. We show that our observations can be used to design an unpredictable-with-leakage PRNG as long as the static key is protected, and the ephemeral key cannot be attacked with 2 traces. Our construction enjoys better theoretical security arguments than PSV-Enc: a better Time-Data trade-off and better leakage assumptions, using the recently popularized unpredictability with leakage. We verify our proposal by performing Test Vector Leakage Assessment (TVLA) on an STK-based TBC (Deoxys) operated with a fixed key and a dynamic random tweak. Our results show that while the protection of the static key is non-trivial, it only requires $\approx 10\%$ overhead for first-order protection in the most conservative setting, unlike traditional masking, which may require significant overheads of $300\%$ or more.
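A structural sketch of the proposed dataflow (a hash stands in for the STK tweakable block cipher, so this shows only the rekeying skeleton, not the actual primitive; all names are illustrative): the static half of the seed is the part that must be masked, while the ephemeral half is ratcheted after every two TBC calls so that no ephemeral key is ever exposed to more than two traces.

    import hashlib, os

    def tbc(key, tweak, block):      # placeholder TBC: a hash stands in here
        return hashlib.sha256(key + tweak + block).digest()[:16]

    def prng(seed, n_blocks):
        static, ephemeral = seed[:16], seed[16:32]   # static half must be masked
        out = []
        for _ in range(0, n_blocks, 2):
            # at most two output calls under any one ephemeral key ...
            out.append(tbc(static, ephemeral, b"\x00"))
            out.append(tbc(static, ephemeral, b"\x01"))
            # ... then the ephemeral key is ratcheted, so an attacker never
            # collects more than 2 traces against the same ephemeral value
            ephemeral = tbc(static, ephemeral, b"\x02")
        return b"".join(out[:n_blocks])

    stream = prng(os.urandom(32), 6)
    assert len(stream) == 6 * 16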