Diving Deep into the Preimage Security of AES-like Hashing
Since the seminal works of Sasaki and Aoki, Meet-in-the-Middle (MITM) attacks have been recognized as an effective technique for preimage and collision attacks on hash functions. At Eurocrypt 2021, Bao et al. automated MITM attacks on AES-like hashing and improved upon the best manual results. The attack framework has been refined by subsequent works, yet it remains far from complete. This paper presents three key contributions dedicated to further generalizing the idea of MITM and refining the automatic model for AES-like hashing. (1) We introduce S-box linearization to MITM pseudo-preimage attacks on AES-like hashing. The technique pairs well with superposition states, preserving information across the S-box at an affordable cost. (2) We propose distributed initial structures, an extension of the original concept of initial states that selects initial degrees of freedom in a more versatile manner to enlarge the search space. (3) We exploit the structural similarities between the encryption and the key schedule in certain constructions (e.g., Whirlpool and Streebog) to model propagations more accurately and avoid repeated costs. Armed with these techniques, we further empower the MITM framework and improve the preimage and collision attack results on AES-like designs. We obtain the first preimage attacks on 10-round AES-192, 10-round Rijndael-192/256, and 7.75-round Whirlpool; reduced time and/or memory complexities for preimage attacks on 5- and 6-round Whirlpool and 7.5- and 8.5-round Streebog; as well as improved collision attacks on 6- and 6.5-round Whirlpool.
Divide and Surrender: Exploiting Variable Division Instruction Timing in HQC Key Recovery Attacks
We uncover a critical side-channel vulnerability in the Hamming Quasi-Cyclic (HQC) round 4 optimized implementation arising due to the use of the modulo operator. In some cases, compilers optimize uses of the modulo operator with compile-time known divisors into constant-time Barrett reductions. However, this optimization is not guaranteed: for example, when a modulo operation is used in a loop the compiler may emit division (div) instructions, which have variable execution time depending on the numerator. When the numerator depends on secret data, this may yield a timing side-channel. We name vulnerabilities of this kind Divide and Surrender (DaS) vulnerabilities.
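For intuition, here is a minimal Python sketch of the Barrett-reduction pattern that a compiler may emit for a compile-time-known divisor; all names and parameters are illustrative, not taken from the HQC code. Unlike a hardware div instruction, whose latency can depend on the numerator, this sequence performs the same multiply-shift-subtract steps for every input.

```python
# Illustrative Barrett reduction for a fixed divisor q (a sketch, not the
# HQC implementation). Valid for inputs x < 2**bits.

def barrett_setup(q: int, bits: int = 32):
    """Precompute m = floor(2**(2*bits) / q) for the fixed divisor q."""
    k = 2 * bits
    return (1 << k) // q, k

def barrett_reduce(x: int, q: int, m: int, k: int) -> int:
    """Compute x mod q using only multiplications, shifts, and one
    conditional subtraction -- no data-dependent division."""
    t = (x * m) >> k          # approximate quotient: floor(x/q) or one less
    r = x - t * q             # candidate remainder, guaranteed in [0, 2q)
    return r - q if r >= q else r

q = 12289
m, k = barrett_setup(q)
for x in (0, 1, q - 1, q, 123456789, (1 << 32) - 1):
    assert barrett_reduce(x, q, m, k) == x % q
```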
For processors supporting Simultaneous Multithreading (SMT) we propose a new approach called DIV-SMT which enables precisely measuring small division timing variations using scheduler and/or execution unit contention. We show that using only 100 such side-channel traces we can build a Plaintext-Checking (PC) oracle with above 90% accuracy. Our approach may also prove applicable to other instances of the DaS vulnerability, such as KyberSlash. We stress that exploitation with DIV-SMT requires co-location of the attacker on the same physical core as the victim.
We then apply our methodology to HQC and present a novel way to recover HQC secret keys faster, achieving an 8-fold decrease in the number of idealized oracle queries when compared to previous approaches. Our new PC oracle attack uses our newly developed Zero Tester method to quickly determine whether an entire block of bits contains only zero-bits. The Zero Tester method enables the DIV-SMT powered attack on HQC-128 to complete in under 2 minutes on our targeted AMD Zen2 machine.
New Models for the Cryptanalysis of ASCON
This paper focuses on the cryptanalysis of the ASCON family using automatic tools. We analyze two different problems with the goal of obtaining new modelings, both simpler and less computationally heavy than previous works (all our models require only a small amount of code and run on regular desktop computers).
The first problem is the search for Meet-in-the-middle attacks on reduced-round ASCON-Hash. Starting from the MILP modeling of Qin et al. (EUROCRYPT 2023 & ePrint 2023), we rephrase the problem in SAT, which significantly accelerates the solving time and removes the need for the ``weak diffusion structure'' heuristic. This allows us to reduce the memory complexity of Qin et al.'s attacks and to prove some optimality results.
The second problem is the search for lower bounds on the probability of differential characteristics for the ASCON permutation. We introduce a lossy MILP encoding of the propagation rules based on the Hamming weight, in order to quickly find lower bounds which are comparable to the state of the art. We find a small improvement over the existing bound on 7 rounds.
Accelerating Training and Enhancing Security Through Message Size Optimization in Symmetric Cryptography
This research extends Abadi and Andersen's exploration of neural networks using secret keys for information protection in multiagent systems. Focusing on enhancing confidentiality properties, we employ end-to-end adversarial training with neural networks Alice, Bob, and Eve. Unlike prior work limited to 64-bit messages, our study spans message sizes from 4 to 1024 bits, varying batch sizes and training steps. An innovative aspect involves training model Bob to approach a minimal error value close to zero and examining its effect on the feasibility of the model. This research unveils the neural networks' adaptability and scalability in encryption and decryption across diverse scenarios, offering valuable insights into their optimization potential for secure communication.
Attacking ECDSA with Nonce Leakage by Lattice Sieving: Bridging the Gap with Fourier Analysis-based Attacks
The Hidden Number Problem (HNP) has found extensive applications in side-channel attacks against cryptographic schemes, such as ECDSA and Diffie-Hellman. There are two primary algorithmic approaches to solving the HNP: lattice-based attacks and Fourier analysis-based attacks. Lattice-based attacks exhibit better efficiency and require fewer samples when sufficiently long substrings of the nonces are known. However, they face significant challenges when only a small fraction of the nonce is leaked, such as 1-bit leakage, and their performance degrades in the presence of errors.
In this paper, we address an open question by introducing an algorithmic tradeoff that significantly bridges the gap between these two approaches.
By introducing a parameter $x$ into Albrecht and Heninger's lattice, we reduce the lattice dimension by approximately $(\log_2 x)/l$, where $l$ represents the number of leaked bits. We present a series of new methods, including the interval reduction algorithm, several predicates, and the pre-screening technique. Furthermore, we extend our algorithms to solve the HNP with erroneous input. Our attack outperforms existing state-of-the-art lattice-based attacks against ECDSA. We break several records, including 1-bit and less-than-1-bit leakage on a 160-bit curve, while the best previous lattice-based attack for 1-bit leakage was conducted only on a 112-bit curve.
An Efficient Hash Function for Imaginary Class Groups
This paper presents a new efficient hash function for imaginary class groups. Many class group based protocols, such as verifiable delay functions, timed commitments and accumulators, rely on the existence of an efficient and secure hash function, but there are not many concrete constructions available in the literature, and existing constructions are too inefficient for practical use cases.
Our novel approach, building on Wesolowski's initial scheme, achieves a 200-fold increase in computation speed, making it exceptionally practical for real-world applications. This optimisation is achieved at the cost of a smaller image of the hash function, but we show that the image is still sufficiently large for the hash function to be secure.
Multiplex: TBC-based Authenticated Encryption with Sponge-Like Rate
Authenticated Encryption (AE) modes of operation based on Tweakable Block Ciphers (TBC) usually measure efficiency in the number of calls to the underlying primitive per message block. On the one hand, many existing solutions reach a primitive-rate of 1, meaning that each n-bit block of message asymptotically needs a single call to the TBC with output length n. On the other hand, while these modes look optimal in a blackbox setting, they become less attractive when leakage comes into play, since all these calls must then be equally well protected to maintain security. Leakage-resistant modes improve this situation, by generating ephemeral keys every constant number of calls. However, rekeying is inherently suboptimal in primitive-rate, since a TBC call can only be used either to refresh a key or to encrypt a block. Even worse, existing solutions achieving almost n bits of security for n-bit secret keys have at most a primitive-rate 2/3. Hence the question: Can we design a highly-secure TBC-based rekeying mode with ``nearly optimal'' primitive-rate? We answer this question positively with Multiplex, a new mode that has primitive-rate d/(d+1) given a TBC with a dn-bit tweak. Multiplex achieves $n-\log_2(dn)$ bits of security for both (i) misuse-resilience CCA security in the blackbox setting and (ii) Ciphertext Integrity with Misuse-resistant and unbounded Leakage in encryption and decryption (CIML2). It also provides (iii) confidentiality with leakage up to the birthday bound. Furthermore, Multiplex can run d+1 calls in parallel in each iteration. The combination of these features gives a mode of operation that inherits most of the good implementation features and flexibility of a Duplex sponge -- therefore paving the way towards sound comparisons between TBC-based and permutation-based AE.
Registered Attribute-Based Signature
This paper introduces the notion of registered attribute-based signature (registered ABS). Distinctly different from classical attribute-based signature (ABS), registered ABS allows any user to generate their own public/secret key pair and register it with the system. The key curator, a fully transparent entity that retains no secrets, is critical to keeping the system running. Our results can be summarized as follows.
- This paper provides the first formal definition of registered ABS.
- This paper presents the first generic fully secure registered ABS over prime-order groups from the $k$-Lin assumption in the standard model, which supports various classes of predicates.
- This paper gives the first concrete registered ABS scheme for arithmetic branching programs (ABP), which achieves full security in the standard model.
Technically, our registered ABS is inspired by the blueprint of Okamoto and Takashima [PKC'11]. We convert the prime-order registered attribute-based encryption (registered ABE) scheme of Zhu et al. [ASIACRYPT'23] via predicate encoding to registered ABS by employing the technique of re-randomization with specialized delegation, while employing a different dual-system method that accounts for the registration property. Prior to our work, Okamoto and Takashima [PKC'13] addressed the key-escrow issue, but their work considered a weak adversary in the random oracle model.
IDEA-DAC: Integrity-Driven Editing for Accountable Decentralized Anonymous Credentials via ZK-JSON
Decentralized Anonymous Credential (DAC) systems are increasingly relevant, especially when enhancing revocation mechanisms in the face of complex traceability challenges. This paper introduces IDEA-DAC, a paradigm shift from the conventional revoke-and-reissue methods, promoting direct and Integrity-Driven Editing (IDE) for Accountable DACs, which results in better integrity accountability, traceability, and system simplicity. We further incorporate an Edit-bound Conformity Check that ensures tailored integrity standards during credential amendments using R1CS-based ZK-SNARKs. Delving deeper, we propose ZK-JSON, a unique R1CS circuit design tailored for IDE over generic JSON documents. This design imposes strictly $O(N)$ rank-1 constraints for variable-length JSON documents of up to $N$ bytes in length, encompassing serialization, encryption, and edit-bound conformity checks. Additionally, our circuits only necessitate a one-time compilation, setup, and smart contract deployment for homogeneous JSON documents up to a specified size. While preserving core DAC features such as selective disclosure, anonymity, and predicate provability, IDEA-DAC achieves precise data modification checks without revealing private content, ensuring only authorized edits are permitted. In summary, IDEA-DAC offers an enhanced methodology for large-scale JSON-formatted credential systems, setting a new standard in decentralized identity management efficiency and precision.
Quantum Pseudorandomness Cannot Be Shrunk In a Black-Box Way
Pseudorandom Quantum States (PRS) were introduced by Ji, Liu and Song as a quantum analogue of Pseudorandom Generators. They are an ensemble of states that are efficiently computable but computationally indistinguishable from Haar-random states. Subsequent works have shown that some cryptographic primitives can be constructed from PRSs. Moreover, recent classical and quantum oracle separations of PRS from One-Way Functions strengthen the interest in a purely quantum alternative building block for quantum cryptography, potentially weaker than OWFs.
However, our lack of knowledge about extending or shrinking the number of qubits of the PRS output still makes it difficult to reproduce some of the classical proof techniques and results. Short-PRSs, that is, PRSs with logarithmic-size output, have been introduced in the literature along with cryptographic applications, but we still do not know how they relate to PRSs. Here we answer half of the question, by showing that it is not possible to shrink the output of a PRS from polynomial to logarithmic qubit length while preserving the pseudorandomness property, in a relativized way. More precisely, we show that relative to Kretschmer's quantum oracle (TQC 2021) short-PRSs cannot exist (while PRSs do exist, as shown by Kretschmer's work).
Secure Integrated Sensing and Communication Under Correlated Rayleigh Fading
We consider a secure integrated sensing and communication (ISAC) scenario, in which a signal is transmitted through a state-dependent wiretap channel with one legitimate receiver with which the transmitter communicates and one honest-but-curious target that the transmitter wants to sense. The secure ISAC channel is modeled as two state-dependent fast-fading channels with correlated Rayleigh fading coefficients and independent additive Gaussian noise components. Delayed channel outputs are fed back to the transmitter to improve the communication performance and to estimate the channel state sequence. We establish and illustrate an achievable secrecy-distortion region for degraded secure ISAC channels under correlated Rayleigh fading. We also evaluate the inner bound for a large set of parameters to derive practical design insights for secure ISAC methods. The presented results include in particular parameter ranges for which the secrecy capacity of a classical wiretap channel setup is surpassed and for which the channel capacity is approached.
SoK: Parameterization of Fault Adversary Models - Connecting Theory and Practice
Since the first fault attack by Boneh et al. in 1997, various physical fault injection mechanisms have been explored to induce errors in electronic systems. Subsequent fault analysis methods of these errors have been studied, and successfully used to attack many cryptographic implementations. This poses a significant challenge to the secure implementation of cryptographic algorithms. To address this, numerous countermeasures have been proposed. Nevertheless, these countermeasures are primarily designed to protect against the particular assumptions made by the fault analysis methods. These assumptions, however, encompass only a limited range of the capabilities inherent to physical fault injection mechanisms.
In this paper, we narrow our focus to fault attacks and countermeasures specific to ASICs, and introduce a novel parameterized fault adversary model capturing an adversary's control over an ASIC. We systematically map (a) the physical fault injection mechanisms, (b) adversary models assumed in fault analysis, and (c) adversary models used to design countermeasures into our introduced model. This model forms the basis for a comprehensive survey covering a broad spectrum of fault attacks and countermeasures within symmetric-key cryptography. Furthermore, our investigation highlights a notable misalignment among the adversary models assumed in countermeasures, fault attacks, and the intrinsic capabilities of the physical fault injection mechanisms. Through this study, we emphasize the need to reevaluate existing fault adversary models, and advocate for the development of a unified model.
A generic algorithm for efficient key recovery in differential attacks – and its associated tool
Differential cryptanalysis is an old and powerful attack against block ciphers. While different techniques have been introduced throughout the years to improve the complexity of this attack, the key recovery phase remains a tedious and error-prone procedure. In this work, we propose a new algorithm and its associated tool that, given a distinguisher, outputs an efficient key-guessing strategy. Our tool can be applied to SPN ciphers whose linear layer consists of a bit-permutation and whose key schedule is linear or almost linear. It can be used not only to help cryptanalysts find the best differential attack on a given cipher but also to assist designers in their security analysis. We applied our tool to four targets: RECTANGLE, PRESENT-80, SPEEDY-7-192 and GIFT-64. We extend the previous best attack on RECTANGLE-128 by one round and the previous best differential attack against PRESENT-80 by two rounds. We improve a previous key recovery step in an attack against SPEEDY and present more efficient key recovery strategies for RECTANGLE-80 and GIFT. Our tool outputs the results in only a second for most targets.
CAPABARA: A Combined Attack on CAPA
Physical attacks pose a substantial threat to the secure implementation of cryptographic algorithms. While considerable research efforts are dedicated to protecting against passive physical attacks (e.g., side-channel analysis (SCA)), the landscape of protection against other types of physical attacks remains a challenge. Fault attacks (FA), though attracting growing attention in research, still lack the prevalence of provably secure designs when compared to SCA. The realm of combined attacks, which leverage the capabilities of both SCA and FA adversaries, introduces powerful adversarial models, rendering protection against them challenging. This challenge has consequently led to a relatively unexplored area of research, resulting in a notable gap in understanding and efficiently protecting against combined attacks. The CAPA countermeasure, published at CRYPTO 2018, addresses this challenge with a robust adversarial model that goes beyond conventional SCA and FA adversarial models. Drawing inspiration from the principles of Multiparty Computation (MPC), CAPA claims security against higher-order SCA, higher-order fault attacks, and their combination. In this work, we present a combined attack that breaks CAPA within the constraints of its assumed adversarial model. In response, we propose potential fixes to the design of CAPA that increase the complexity of the proposed attack, although not provably thwarting it. With this presented combined attack, we highlight the difficulty of effectively protecting against combined attacks.
Efficient Zero-Knowledge Arguments and Digital Signatures via Sharing Conversion in the Head
We present a novel technique within the MPC-in-the-Head framework, aiming to design efficient zero-knowledge protocols and digital signature schemes. The technique allows for the simultaneous use of additive and multiplicative sharings of secret information, enabling efficient proofs of linear and multiplicative relations.
The applications of our technique are manifold. It is first applied to construct zero-knowledge arguments of knowledge for the Double Discrete Logarithm Problem (DDLP). The resulting protocol achieves improved communication complexity without compromising efficiency. We also propose a new zero-knowledge argument of knowledge for the Permuted Kernel Problem. Lastly, we suggest a short (candidate) post-quantum digital signature scheme constructed from a new one-way function based on simple polynomials known as fewnomials. This scheme offers simplicity and ease of implementation.
Finally, we present two additional results inspired by this work but using alternative approaches. We propose a zero-knowledge argument of knowledge of an RSA plaintext for a small public exponent that significantly improves the state-of-the-art communication complexity.
We also detail a more efficient forward-backward construction for the DDLP.
Mirrored Commitment: Fixing ``Randomized Partial Checking'' and Applications
Randomized Partial Checking (RPC) was proposed by Jakobsson, Juels, and Rivest and attracted attention as an efficient method of verifying the correctness of the mixing process in numerous applied scenarios. In fact, RPC is a building block for many electronic voting schemes, including Prêt à Voter, Civitas, and Scantegrity II, as well as voting systems used in real-world elections (e.g., in Australia). Mixing is also used in anonymous transfers of cryptocurrencies.
A series of subsequent works, however, revealed subtle issues with the analyses behind RPC. First, the actual security level of the RPC protocol is far from the claimed bounds: the probability of successful manipulation of $k$ votes is $(\frac{3}{4})^k$ instead of the claimed $\frac{1}{2^k}$. This difference, in turn, negatively affects actual implementations of the notion within existing election systems, since concrete implemented procedures of a given length were directly based on this parameter. Further, the claim that a constant number of mix-servers is enough for privacy also turned out to be incorrect. We conclude from the above that these analyses of the mixing process are not trivial.
In this paper, we review the relevant attacks, and we present Mirrored-RPC -- a fix to RPC based on ``mirrored commitment'' -- which makes it optimally secure: the probability of successful manipulation of $k$ votes is $\frac{1}{2^k}$.
Then, we present an analysis of the privacy level of both RPC and mRPC. We show that for $n$ messages, the number of mix-servers (rounds) needed to be $\varepsilon$-close to the uniform distribution in total variation distance is lower bounded by:
\[
r(n, \varepsilon) \geq \log_{2}\binom{n}{2}/\varepsilon.
\]
This proof of privacy, in turn, gives insights into the anonymity of various cryptocurrencies (e.g., Zerocash) using anonymizing pools. If a random fraction $q$ of $n$ existing coins is mixed (in each block), then to achieve full anonymity, the number of blocks one needs to run the protocol for is:
\[
rb(n, q, \varepsilon) \geq - \frac{\log n + \log (n-1) - \log (2\varepsilon)}{\log(1-q^2)}.
\]
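To make the gap and the two bounds concrete, here is a quick numeric sanity check in Python with hypothetical parameters (the ratio of logarithms in the second bound is base-independent, so base 2 is used throughout):

```python
from math import comb, log2

# Manipulation probability for k votes: actual (3/4)^k vs. claimed 2^-k.
for k in (1, 5, 10, 20):
    print(k, 0.75 ** k, 0.5 ** k)    # e.g. k = 10: 0.0563 vs. 0.00098

# Lower bound on mixing rounds: r(n, eps) >= log2(C(n, 2)) / eps.
n, eps = 1000, 0.01
print(log2(comb(n, 2)) / eps)        # ~1893 rounds for these values

# Lower bound on blocks for anonymizing pools with mixing fraction q.
q = 0.1
print(-(log2(n) + log2(n - 1) - log2(2 * eps)) / log2(1 - q * q))
```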
Practical Improvements to Statistical Ineffective Fault Attacks
Statistical Fault Attacks (SFA), introduced by Fuhr et al., exploit the statistical bias resulting from injected faults. Unlike prior fault analysis attacks, which require both faulty and correct ciphertexts under the same key, SFA leverages only faulty ciphertexts. At CHES 2018, more powerful attacks called Statistical Ineffective Fault Attacks (SIFA) were proposed. In contrast to the previous fault attacks that utilize faulty ciphertexts, SIFA exploits the distribution of the intermediate values leading to fault-free ciphertexts. As a result, the SIFA attacks were shown to be effective even in the presence of widely used fault injection countermeasures based on detection and infection. In this work, we build upon the core idea of SIFA and provide two main practical improvements over the previously proposed analysis methods. Firstly, we show how to perform SIFA from the input side, which, in contrast to the original SIFA, requires injecting faults in the earlier rounds of an encryption or decryption operation. If we consider the start of the operation as the trigger for fault injection, the cumulative jitter in the first few rounds of a cipher is much lower than in the last rounds. Hence, performing the attack in the first or second round requires a narrower parameter range for fault injection and hence fewer fault injection attempts to recover the secret key. Secondly, in comparison to the straightforward SIFA approach of guessing 32 bits at a time, we propose a chosen-input approach that reduces the guessing effort to 16 bits at a time. This decreases the key search space for full key recovery of an AES-128 implementation from $2^{34}$ to $2^{19}$.
Toward Malicious Constant-Rate 2PC via Arithmetic Garbling
A recent work by Ball, Li, Lin, and Liu [Eurocrypt'23] presented a new instantiation of the arithmetic garbling paradigm introduced by Applebaum, Ishai, and Kushilevitz [FOCS'11]. In particular, Ball et al.'s garbling scheme is the first constant-rate garbled circuit over large enough bounded integer computations, implying the first constant-round constant-rate secure two-party computation (2PC) over bounded integer computations in the presence of semi-honest adversaries.
The main source of difficulty in lifting the security of garbling schemes-based protocols to the malicious setting lies in proving the correctness of the underlying garbling scheme. In this work, we analyze the security of Ball et al.'s scheme in the presence of malicious attacks.
- We demonstrate an overflow attack, which is inevitable in this computational model, even if the garbled circuit is fully correct. Our attack proceeds by defining an adversary, corrupting either the garbler or the evaluator, that chooses a bad input and causes the computation to overflow, thus leaking information about the honest party's input. By utilizing overflow attacks, we show that $1$-bit leakage is necessary for achieving security against a malicious garbler, discarding the possibility of achieving full malicious security in this model. We further demonstrate a wider range of overflow attacks against a malicious evaluator with more than $1$ bit of leakage.
- We boost the security level of Ball et al.'s scheme by utilizing two variants of Vector Oblivious Linear Evaluation, denoted by VOLEc and aVOLE. We present the first constant-round constant-rate 2PC protocol over bounded integer computations, in the presence of a malicious garbler with $1$-bit leakage and a semi-honest evaluator, in the {VOLEc,aVOLE}-hybrid model and being black-box in the underlying group and ring. Compared to the semi-honest variant, our protocol incurs only a constant factor overhead, both in computation and communication. The constant-round and constant-rate properties hold even in the plain model.
A Concrete Analysis of Wagner's $k$-List Algorithm over $\mathbb{Z}_p$
Since its introduction by Wagner (CRYPTO `02), the $k$-list algorithm has found significant utility in cryptanalysis. One important application thereof is in computing forgeries on several interactive signature schemes that implicitly rely on the hardness of the ROS problem formulated by Schnorr (ICICS `01). The current best attack strategy for these schemes relies on the conjectured runtime of the $k$-list algorithm over $\mathbb{Z}_p$. The tightest known analysis of Wagner's algorithm over $\mathbb{Z}_p$ is due to Shallue (ANTS `08). However, it hides large polynomial factors and leaves a gap with respect to desirable concrete parameters for the attack. In this work, we develop a degraded version of the $k$-list algorithm which provably enforces the heuristic invariants in Wagner's original algorithm. In the process, we devise and analyze a new list merge procedure that we dub the interval merge. We give a thorough analysis of the runtime and success probability of our degraded algorithm, and show that it beats the projected runtime of the analysis by Shallue for parameters relevant to the generalized ROS attack of Benhamouda et al. (EUROCRYPT `21). For a $256$-bit prime $p$ and $k = 8$, our degraded $k$-list algorithm runs in time $\approx 2^{70.4}$, while Shallue's analysis states that Wagner's original algorithm runs in time $\approx 2^{98.3}$.
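As background, the following toy Python implementation of Wagner's 4-list join over $\mathbb{Z}_p$ (a deliberately tiny prime, with brute-force merges for clarity) illustrates the interval-style merging that the analysis concerns; it is a didactic sketch of the original heuristic algorithm, not the degraded variant introduced in the paper.

```python
# Toy Wagner 4-list algorithm over Z_p: find x1 + x2 + x3 + x4 = 0 (mod p)
# with x_i drawn from list L_i. All parameters are illustrative.
import random

p = (1 << 24) - 3      # small prime, illustration only
m = 300                # list size ~ p^(1/3), Wagner's heuristic for k = 4

def centered(x):
    """Representative of x mod p in (-p/2, p/2]."""
    x %= p
    return x - p if x > p // 2 else x

def merge(L1, L2, bound):
    # Brute-force join for clarity; real implementations sort or hash on
    # the high-order bits so each merge runs in near-linear time.
    return [(a + b) % p
            for a in L1 for b in L2 if abs(centered(a + b)) < bound]

random.seed(1)
L1, L2, L3, L4 = ([random.randrange(p) for _ in range(m)] for _ in range(4))
bound = 1 << 16                     # ~ p^(2/3): shrink partial sums first
L12, L34 = merge(L1, L2, bound), merge(L3, L4, bound)

# Final join: exact cancellation of the two partial sums modulo p.
targets = {(-s) % p for s in L34}
solutions = [s for s in L12 if s in targets]
print(len(L12), len(L34), len(solutions))   # a few solutions on average
```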
Polynomial Commitments from Lattices: Post-Quantum Security, Fast Verification and Transparent Setup
A polynomial commitment scheme allows a prover to commit to a polynomial $f \in \mathcal{R}[X]$ of degree $L$, and later prove that the committed function was correctly evaluated at a specified point $x$; in other words, that $f(x)=u$ for public $x,u \in\mathcal{R}$. Most applications of polynomial commitments, e.g., succinct non-interactive arguments of knowledge (SNARKs), require that (i) both the commitment and evaluation proof are succinct (i.e., polylogarithmic in the degree $L$), with the latter being efficiently verifiable, and (ii) no pre-processing step is allowed.
Surprisingly, as far as plausibly quantum-safe polynomial commitments are concerned, the currently most efficient constructions rely only on weak cryptographic assumptions, such as the security of hash functions. Indeed, despite making use of the underlying algebraic structure, prior lattice-based polynomial commitments still lag far behind the hash-based ones. Moreover, security of the aforementioned lattice constructions against quantum adversaries was never formally discussed.
In this work, we bridge the gap and propose the first (asymptotically and concretely) efficient lattice-based polynomial commitment with transparent setup and post-quantum security. Our interactive variant relies on the standard (Module-)SIS problem and can be made non-interactive in the random oracle model using the Fiat-Shamir transformation. In addition, we equip the scheme with a knowledge soundness proof against quantum adversaries, which can be of independent interest. In terms of concrete efficiency, for $L=2^{20}$ our scheme yields proofs of size $2\times$ smaller than the hash-based \textsf{FRI} commitment (Block et al., Asiacrypt 2023), and $70\times$ smaller than the very recent lattice-based construction by Albrecht et al. (Eurocrypt 2024).
HARTS: High-Threshold, Adaptively Secure, and Robust Threshold Schnorr Signatures
Threshold variants of the Schnorr signature scheme have recently been at the center of attention due to their applications to cryptocurrencies. However, existing constructions for threshold Schnorr signatures among a set of $n$ parties with corruption threshold $t_c$ suffer from at least one of the following drawbacks: (i) security only against static (i.e., non-adaptive) adversaries, (ii) cubic or higher communication cost to generate a single signature, (iii) strong synchrony assumptions on the network, or (iv) $t_c+1$ signature shares being sufficient to generate a signature, i.e., the corruption threshold of the scheme equals its reconstruction threshold. Especially (iv) turns out to be a severe limitation for many asynchronous real-world applications where $t_c < n/3$ is necessary to maintain liveness, but a higher signing threshold of $n-t_c$ is needed. A recent scheme, ROAST, proposed by Ruffing et al. (ACM CCS 2022) addresses (iii) and (iv), but still falls short of obtaining subcubic communication complexity and adaptive security.
In this work, we present HARTS, the first threshold Schnorr signature scheme to incorporate all these desiderata. More concretely:
- HARTS is adaptively secure and remains fully secure and operational even under asynchronous network conditions in the presence of up to $t_c < n/3$ malicious parties. This is optimal.
- HARTS outputs a Schnorr signature of size $\lambda$ with a near-optimal amortized communication cost of $O(\lambda n^2 \log{n})$ bits and a single asynchronous online round per signature.
- HARTS is high-threshold: no fewer than $t_r+1$ signature shares can be combined to yield a full signature, where any $t_r\in [t_c,n-t_c)$ is supported. This especially covers the case $t_r \geq 2n/3 > 2t_c$. This is optimal.
We prove our result in a modular fashion in the algebraic group model. At the core of our construction, we design a new simple and adaptively secure high-threshold asynchronous verifiable secret sharing (AVSS) scheme which may be of independent interest.
Polynomial-Time Key-Recovery Attack on the ${\tt NIST}$ Specification of ${\tt PROV}$
In this paper, we present an efficient attack against ${\tt PROV}$, a recent variant of the popular Unbalanced Oil and Vinegar (${\tt UOV}$) multivariate signature scheme, that has been submitted to the ongoing ${\tt NIST}$ standardization process for additional post-quantum signature schemes. A notable feature of ${\tt PROV}$ is its proof of security, namely, existential unforgeability under a chosen-message attack (${\tt EUF-CMA}$), assuming the hardness of solving the system formed by the public-key non-linear equations.
We present a polynomial-time key-recovery attack against the first specification of ${\tt PROV}$ (v$1.0$). To do so, we remark that a small fraction of the ${\tt PROV}$ secret-key is leaked during the signature process. Adapting and extending previous works on basic ${\tt UOV}$, we show that the entire secret-key can then be recovered from such a small fraction in polynomial time. This leads to an efficient attack against ${\tt PROV}$ that we validated in practice. For all the security parameters suggested by the authors of ${\tt PROV}$, our attack recovers the secret-key in at most $8$ seconds. We conclude the paper by discussing the apparent mismatch between such a practical attack and the theoretical security claimed by the ${\tt PROV}$ designers. Our attack is not structural but exploits the fact that the current specification of ${\tt PROV}$ differs from the required security model.
A simple countermeasure makes ${\tt PROV}$ immune against the attack presented here and led the designers to update the specification of ${\tt PROV}$ (v$1.1$).
Circle STARKs
Traditional STARKs require a cyclic group of a smooth order in the field. This allows efficient interpolation of points using the FFT algorithm, and writing constraints that involve neighboring rows. The Elliptic Curve FFT (ECFFT, Part I and II) introduced a way to make efficient STARKs for any finite field, by using a cyclic group of an elliptic curve. We show a simpler construction, along the lines of ECFFT, over the circle curve $x^2 + y^2 = 1$. When $p + 1$ is divisible by a large power of $2$, this construction is as efficient as traditional STARKs and ECFFT. Applied to the Mersenne prime $p = 2^{31} - 1$, which has recently been advertised in IACR ePrint 2023/824, our preliminary benchmarks indicate a speed-up by a factor of $1.4$ compared to a traditional STARK using the BabyBear prime $p = 2^{31} - 2^{27} + 1$.
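For illustration, here is a short Python sketch of the circle group in question; the group law and the fact that the group has order $p+1$ for $p \equiv 3 \pmod 4$ are standard, and the point is built from the rational parametrization of the circle.

```python
# The circle group over F_p: points (x, y) with x^2 + y^2 = 1 under
# (x1, y1) * (x2, y2) = (x1*x2 - y1*y2, x1*y2 + y1*x2), identity (1, 0).
# For p = 2^31 - 1 (p = 3 mod 4) the group order is p + 1 = 2^31, i.e.,
# entirely a power of two -- the smoothness that FFT-style STARKs need.

p = (1 << 31) - 1

def mul(P, Q):
    (x1, y1), (x2, y2) = P, Q
    return ((x1 * x2 - y1 * y2) % p, (x1 * y2 + y1 * x2) % p)

def power(P, e):
    R = (1, 0)                       # the identity element
    while e:
        if e & 1:
            R = mul(R, P)
        P = mul(P, P)
        e >>= 1
    return R

# A circle point from the parametrization t -> ((1-t^2)/(1+t^2), 2t/(1+t^2)).
t = 12345
inv = pow(1 + t * t, -1, p)
g = ((1 - t * t) * inv % p, 2 * t * inv % p)
assert (g[0] * g[0] + g[1] * g[1]) % p == 1
assert power(g, p + 1) == (1, 0)     # the point's order divides p + 1 = 2^31
```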
Fault Attacks on UOV and Rainbow
Multivariate cryptography is one of the main candidates for creating post-quantum public key cryptosystems. Especially in the area of digital signatures, there exist many practical and secure multivariate schemes. The signature schemes UOV and Rainbow are two of the most promising and best-studied multivariate schemes, which have proven secure for more than a decade. However, so far the security of multivariate signature schemes against physical attacks has not been appropriately assessed. Towards a better understanding of the physical security of multivariate signature schemes, this paper presents fault attacks against SingleField schemes, especially UOV and Rainbow. Our analysis shows that although promising attack vectors exist, multivariate signature schemes inherently offer good protection against fault attacks.
Reduce and Prange: Revisiting Prange's ISD for Solving LPN/RSD over Large Fields
The Learning Parity with Noise (LPN) problem and its specific form, LPN with regularity, are widely utilized in crafting cryptographic primitives. Recently, extending these problems to operate over larger fields has been considered to enhance applications. Despite the broad analysis available for traditional LPN approaches, the exploration of LPN over large fields remains underdeveloped. This gap in research has led to the development of improved attacks, suggesting that primitives based on LPN over large fields may not meet necessary security standards.
We have developed an algorithm that enhances the efficiency of solving LPN over large fields. This method innovatively modifies the Gaussian elimination attack, traditionally known as Prange's information set decoding algorithm. Our key advancement involves the selective use of partial Gaussian elimination rather than employing the full Gaussian algorithm throughout, which we have termed the ``Reduce and Prange's (RP) algorithm''. Additionally, we apply the RP algorithm to address the LPN problem over large fields with regular noise. Our RP algorithm highlights two key aspects: the vulnerability of existing schemes and the superiority of our approach compared to recent analyses.
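As background, here is a toy Python rendition of the plain Prange step for LPN over a large prime field, with deliberately small, hypothetical parameters; the RP algorithm replaces the full Gaussian elimination below with a partial one.

```python
# Toy Prange-style information-set decoding for LPN over F_q: given
# b = A*s + e with sparse e, repeatedly guess k noise-free samples,
# solve for s, and check the residual error weight. Illustrative only.
import random

q = (1 << 13) - 1                  # prime field size
n, k, w = 120, 20, 8               # samples, secret length, error weight

def solve(M_rows, rhs):
    """Gaussian elimination over F_q for a square k x k system (raises
    StopIteration if the chosen submatrix is singular)."""
    M = [row[:] + [v] for row, v in zip(M_rows, rhs)]
    for c in range(k):
        piv = next(r for r in range(c, k) if M[r][c])
        M[c], M[piv] = M[piv], M[c]
        inv = pow(M[c][c], -1, q)
        M[c] = [x * inv % q for x in M[c]]
        for r in range(k):
            if r != c and M[r][c]:
                f = M[r][c]
                M[r] = [(x - f * y) % q for x, y in zip(M[r], M[c])]
    return [M[r][k] for r in range(k)]

random.seed(0)
s = [random.randrange(q) for _ in range(k)]
A = [[random.randrange(q) for _ in range(k)] for _ in range(n)]
noisy = set(random.sample(range(n), w))
b = [(sum(a * x for a, x in zip(A[i], s))
      + (random.randrange(1, q) if i in noisy else 0)) % q for i in range(n)]

while True:
    info = random.sample(range(n), k)            # guess an information set
    try:
        cand = solve([A[i] for i in info], [b[i] for i in info])
    except StopIteration:                        # singular submatrix: retry
        continue
    resid = sum((b[i] - sum(a * x for a, x in zip(A[i], cand))) % q != 0
                for i in range(n))
    if resid <= w:                 # the information set was noise-free
        break
assert cand == s
print("secret recovered")
```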
Our findings reveal that cryptographic applications, including Syndrome Decoding in the Head frameworks (Crypto'22, Asiacrypt'23, Eurocrypt'24) and Scalable Multiparty Garbling (CCS'23), are not secure enough. To be precise, the schemes fail to achieve their intended bit-security.
Compared to the previous analysis by Liu et al. (Eurocrypt'24), we show that for LPN over large fields (e.g., 128-bit field size), the bit-security is reduced by 5-11 bits.
Furthermore, our RP algorithm for solving LPN with regular noise outperforms recent results by Liu et al., Briaud, and Øygard (Eurocrypt'23) under certain parameter choices, leading to a reduction in bit-security by 5-20 bits for LPN with regular noise.
The Multi-user Constrained PRF Security of Generalized GGM Trees for MPC and Hierarchical Wallets
Multi-user (mu) security considers large-scale attackers that, given access to a number of cryptosystem instances, attempt to compromise at least one of them. We initiate the study of the mu security of the so-called GGM tree that stems from the PRG-to-PRF transformation of Goldreich, Goldwasser, and Micali, with the goal of providing references for its recently popularized use in applied cryptography. We propose a generalized model for GGM trees and analyze its mu prefix-constrained PRF security in the random oracle model. Our model allows us to derive concrete bounds and improvements for various protocols, which we showcase on the Bitcoin-Improvement-Proposal standard Bip32 hierarchical wallets and function secret sharing (FSS) protocols. In both scenarios, we propose improvements with better performance and concrete security bounds at the same time. Compared with the state-of-the-art designs, our SHACAL3- and Keccak-$p$-based Bip32 variants reduce the communication cost of MPC-based implementations by 73.3%-93.8%, while our AES-based FSS substantially improves mu security while reducing computations by 50%.
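For reference, here is a minimal Python sketch of the PRG-to-PRF GGM tree that the generalized model builds on, instantiated with SHA-256 as a stand-in length-doubling PRG (purely an assumption for illustration; the paper's variants are built from SHACAL3 and Keccak-$p$). Any internal node acts as a prefix-constrained key for its subtree, which is exactly what Bip32 child derivation and FSS key generation exploit.

```python
# GGM tree: a length-doubling PRG applied along the bits of the input
# turns one root key into an exponentially large PRF. Illustrative only.
import hashlib

def prg(seed: bytes):
    """Expand a 32-byte seed into (left, right) children."""
    return (hashlib.sha256(seed + b"\x00").digest(),
            hashlib.sha256(seed + b"\x01").digest())

def ggm_prf(key: bytes, x: int, nbits: int) -> bytes:
    node = key
    for i in reversed(range(nbits)):    # walk the tree, MSB first
        node = prg(node)[(x >> i) & 1]
    return node

master = hashlib.sha256(b"master key").digest()
leaf = ggm_prf(master, 0b1011, 4)

# The node for prefix 10** is a constrained key: it derives the leaves
# 1000..1011 but reveals nothing about the rest of the tree.
prefix_key = prg(prg(master)[1])[0]
assert ggm_prf(prefix_key, 0b11, 2) == leaf
```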
Amortized Large Look-up Table Evaluation with Multivariate Polynomials for Homomorphic Encryption
We present a new method for efficient look-up table (LUT) evaluation in homomorphic encryption (HE), for Ring-LWE-based HE schemes, including both integer-message schemes such as Brakerski-Gentry-Vaikuntanathan (BGV) and Brakerski/Fan-Vercauteren (BFV), and complex-number-message schemes like the Cheon-Kim-Kim-Song (CKKS) scheme. Our approach encodes bit streams into codewords and translates LUTs into low-degree multivariate polynomials, allowing for the simultaneous evaluation of multiple independent LUTs with minimal overhead. To mitigate noise accumulation in the CKKS scheme, we propose a novel noise-reduction technique, accompanied by a proof demonstrating its effectiveness in asymptotically decreasing noise levels.
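To see the flavor of such an encoding, here is a toy Python evaluation of a look-up table as a multilinear polynomial in its input bits; the paper's codeword-based encoding is more refined, so treat this purely as an illustration of how a LUT becomes a low-degree polynomial that an HE scheme can evaluate.

```python
# A LUT T: {0,1}^n -> Z as a multilinear polynomial:
#   T(b) = sum_w T(w) * prod_i (b_i*w_i + (1-b_i)*(1-w_i)),
# where the product is the Lagrange indicator of w. Illustrative sketch.
from itertools import product

def multilinear_eval(table, bits):
    acc = 0
    for w in product((0, 1), repeat=len(bits)):
        term = table[int("".join(map(str, w)), 2)]
        for b, wi in zip(bits, w):
            term *= b * wi + (1 - b) * (1 - wi)   # factor = 1 iff b == wi
        acc += term
    return acc

table = [(3 * i + 1) % 16 for i in range(16)]     # an arbitrary 4-bit LUT
for x in range(16):
    bits = [(x >> (3 - i)) & 1 for i in range(4)]
    assert multilinear_eval(table, bits) == table[x]
```

Under HE the same expression is evaluated with the bits encrypted, which is why keeping the polynomial's degree low matters.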
We demonstrate our algorithm's effectiveness through a proof-of-concept implementation, showcasing significant efficiency gains, including a 0.029ms per slot evaluation for 8-input, 8-output LUTs and a 280ms amortized decryption time for AES-128 using CKKS on a single GPU. This work not only advances LUT evaluation in HE but also introduces a transciphering method for the CKKS scheme utilizing standard symmetric-key encryption, bridging the gap between discrete bit strings and numerical data.
Perfect 2-Party Computation from Additive Somewhat Homomorphic Encryption
Two-party computation has been an active area of research since Yao's breakthrough results on garbled circuits. We present secret-key somewhat additively homomorphic encryption schemes where the client has perfect privacy (the server can be computationally unbounded). Our basic scheme is somewhat additively homomorphic, and we extend it to support multiplication. The server handles circuit multiplication gates by returning the multiplicands to the client, which updates the decryption key so that the original ciphertext vector includes the encrypted multiplication gate outputs. We give a 2-party protocol that also incorporates server inputs, where the client has perfect privacy. Server privacy holds against a computationally bounded adversary, since it depends on the hardness of the subset sum problem. Correctness of the server in the malicious model can be verified by a third party, where the client's and server's privacy are protected from the verifier.
Deep Learning Based Analysis of Key Scheduling Algorithm of Advanced Ciphers
The advancements in information technology have made the Advanced Encryption Standard (AES) and the PRESENT cipher indispensable in ensuring data security and facilitating private transactions. AES is renowned for its flexibility and widespread use in various fields, while the PRESENT cipher excels in lightweight cryptographic situations. This paper delves into a dual examination of the Key Scheduling Algorithms (KSAs) of AES and the PRESENT cipher, which play a crucial role in generating round keys for their respective encryption techniques. By implementing deep learning methods, particularly a Neural Network model, our study aims to unravel the complexities of these KSAs and shed light on their inner workings.
Understanding User-Perceived Security Risks and Mitigation Strategies in the Web3 Ecosystem
The advent of Web3 technologies promises unprecedented levels of user control and autonomy. However, this decentralization shifts the burden of security onto the users, making it crucial to understand their security behaviors and perceptions. To address this, our study introduces a comprehensive framework that identifies four core components of user interaction within the Web3 ecosystem: blockchain infrastructures, Web3-based Decentralized Applications (DApps), online communities, and off-chain cryptocurrency platforms. We delve into the security concerns perceived by users in each of these components and analyze the mitigation strategies they employ, ranging from risk assessment and aversion to diversification and acceptance. We further discuss the landscape of both technical and human-induced security risks in the Web3 ecosystem, identify the unique security differences between Web2 and Web3, and highlight key challenges that render users vulnerable, to provide implications for security design in Web3.
YPIR: High-Throughput Single-Server PIR with Silent Preprocessing
We introduce YPIR, a single-server private information retrieval (PIR) protocol that achieves high throughput (up to 83% of the memory bandwidth of the machine) without any offline communication. For retrieving a 1-bit (or 1-byte) record from a 32 GB database, YPIR achieves 12.1 GB/s/core server throughput and requires 2.5 MB of total communication. On the same setup, the state-of-the-art SimplePIR protocol achieves 12.5 GB/s/core server throughput and requires 1.5 MB of total communication, but additionally requires downloading a 724 MB hint in an offline phase. YPIR leverages a new lightweight technique to remove the hint from high-throughput single-server PIR schemes with small overhead. We also show how to reduce the server preprocessing time in the SimplePIR family of protocols by a factor of $10$-$15\times$.
By removing the need for offline communication, YPIR significantly reduces the server-side costs for private auditing of Certificate Transparency logs. Compared to the best previous PIR-based approach, YPIR reduces the server-side costs by a factor of $8\times$. Note that to reduce communication costs, the previous approach assumed that updates to the Certificate Transparency log servers occurred in weekly batches. Since there is no offline communication in YPIR, our approach allows clients to always audit the most recent Certificate Transparency logs (e.g., updating once a day). Supporting daily updates using the prior scheme would cost $48\times$ more than YPIR (based on current AWS compute costs).
A note on PUF-Based Robust and Anonymous Authentication and Key Establishment Scheme for V2G Networks
Vehicle-to-grid (V2G) provides effective charging services, allows bidirectional energy communication between the power grid and electric vehicles (EVs), and reduces environmental pollution and energy crises. Recently, Sungjin Yu et al. proposed a PUF-based, robust, and anonymous authentication and key establishment scheme for V2G networks. In this paper, we show that the proposed protocol does not provide user anonymity and is vulnerable to a tracing attack. We also find that their scheme is vulnerable to ephemeral secret leakage attacks.
A New Approach to Generic Lower Bounds: Classical/Quantum MDL, Quantum Factoring, and More
This paper studies the limitations of the generic approaches to solving cryptographic problems in classical and quantum settings in various models.
- In the classical generic group model (GGM), we find simple alternative proofs for the lower bounds of variants of the discrete logarithm (DL) problem: the multiple-instance DL and one-more DL problems (and their mixture). We also re-prove the unknown-order GGM lower bounds, such as those for order finding, root extraction, and repeated squaring.
- In the quantum generic group model (QGGM), we study the complexity of variants of the discrete logarithm. We prove the logarithmic DL lower bound in the QGGM even for the composite order setting. We also prove an asymptotically tight lower bound for the multiple-instance DL problem. Both results resolve the open problems suggested in a recent work by Hhan, Yamakawa, and Yun.
- In the quantum generic ring model, which we newly introduce, we give a logarithmic lower bound for order-finding algorithms, an important step in Shor's algorithm. We also give a logarithmic lower bound for a certain generic factoring algorithm outputting relatively small integers, which includes a modified version of Regev's algorithm.
- Finally, we prove a lower bound for the basic index calculus method for solving the DL problem in a new idealized group model regarding smooth numbers.
The quantum lower bounds in both models allow certain (different) types of classical preprocessing.
All of the proofs are significantly simpler than the previous ones and go through a single tool, the so-called compression lemma, along with linear algebra tools. Our use of this lemma may be of independent interest.
zkPi: Proving Lean Theorems in Zero-Knowledge
Interactive theorem provers (ITPs), such as Lean and Coq, can express formal proofs for a large category of theorems, from abstract math to software correctness. Consider Alice, who has a Lean proof for some public statement $T$. Alice wants to convince the world that she has such a proof, without revealing the actual proof. Perhaps the proof shows that a secret program is correct or safe, but the proof itself might leak information about the program's source code. A natural way for Alice to proceed is to construct a succinct, zero-knowledge, non-interactive argument of knowledge (zkSNARK) to prove that she has a Lean proof for the statement $T$.
In this work we build zkPi, the first zkSNARK for proofs expressed in Lean, a state-of-the-art interactive theorem prover. With zkPi, a prover can convince a verifier that a Lean theorem is true, while revealing little else. The core problem is building an efficient zkSNARK for dependent typing. We evaluate zkPi on theorems from two core Lean libraries: stdlib and mathlib. zkPi successfully proves 57.9% of the theorems in stdlib, and 14.1% of the theorems in mathlib, within 4.5 minutes per theorem. A zkPi proof is sufficiently short that Fermat could have written one in the margin of his notebook to convince the world, in zero knowledge, that he proved his famous last theorem.
Interactive theorem provers (ITPs) can express virtually all systems of formal reasoning. Thus, an implemented zkSNARK for ITP theorems generalizes practical zero-knowledge's interface beyond the status quo: circuit satisfiability and program execution.
WhisPIR: Stateless Private Information Retrieval with Low Communication
Recent constructions of private information retrieval (PIR) have seen significant improvements in computational performance. However, these improvements rely on heavy offline preprocessing that is typically difficult in real-world applications. Motivated by the question of PIR with no offline processing, we introduce WhisPIR, a fully stateless PIR protocol with low per-query communication. WhisPIR clients are all ephemeral, meaning that they appear with only the protocol public parameters and disappear as soon as their query is complete, giving no opportunity for additional "offline" communication that is not counted towards the overall query communication. As such, WhisPIR is highly suited for practical applications that must support many clients and frequent database updates.
We demonstrate that WhisPIR requires significantly less communication than all other lattice-based PIR protocols in a stateless setting. WhisPIR is outperformed in computation only by SimplePIR and HintlessPIR when the database entries are large (several kilobytes). WhisPIR achieves this performance by introducing a number of novel optimizations. These include improvements to the index expansion algorithm of SealPIR and OnionPIR that optimize the algorithm when only one rotation key is available. WhisPIR also makes novel use of the non-compact variant of the BGV homomorphic encryption scheme to further save communication and computation. To demonstrate the practicality of WhisPIR, we apply the protocol to the problem of secure blocklist checking, an important user-safety application in end-to-end encrypted messaging.
Beyond the circuit: How to Minimize Foreign Arithmetic in ZKP Circuits
Zero-knowledge circuits are frequently required to prove gadgets that are not optimised for the constraint system in question. A particularly daunting task is to embed foreign arithmetic such as Boolean operations, field arithmetic, or public-key cryptography.
We construct techniques for offloading foreign arithmetic from a zero-knowledge circuit, including:
(i) equality of discrete logarithms across different groups;
(ii) scalar multiplication without requiring elliptic curve operations;
(iii) proving knowledge of an AES encryption.
To achieve our goal, we employ techniques inherited from rejection sampling and lookup protocols. We implement and provide concrete benchmarks for our protocols.
Extractable Witness Encryption for KZG Commitments and Efficient Laconic OT
We present a concretely efficient and simple extractable witness encryption scheme for KZG polynomial commitments.
It allows one to encrypt a message towards a triple $(\mathsf{com}, \alpha, \beta)$, where $\mathsf{com}$ is a KZG commitment for some polynomial $f$.
Anyone with an opening for the commitment attesting $f(\alpha) = \beta$ can decrypt, but without knowledge of a valid opening the message is computationally hidden.
Our construction is simple and highly efficient. The ciphertext is only a single group element. Encryption and decryption both require a single pairing evaluation and a constant number of group operations.
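For orientation, recall the standard KZG opening relation that such a scheme is built around (writing $[z]_1 = z g_1$ and $[z]_2 = z g_2$ over the powers-of-$\tau$ setup); an opening $\pi$ attesting $f(\alpha) = \beta$ is exactly the witness that decryption requires:
\[
\mathsf{com} = [f(\tau)]_1, \qquad \pi = \left[\frac{f(\tau)-\beta}{\tau-\alpha}\right]_1, \qquad e\big(\mathsf{com} - [\beta]_1,\, [1]_2\big) = e\big(\pi,\, [\tau-\alpha]_2\big).
\]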
Using our witness encryption scheme, we construct a simple and highly efficient laconic OT protocol, which significantly outperforms the state of the art in most important metrics.
Threshold Encryption with Silent Setup
We build a concretely efficient threshold encryption scheme where the joint public key of a set of parties is computed as a deterministic function of their locally computed public keys, enabling a silent setup phase. By eliminating interaction from the setup phase, our scheme immediately enjoys several highly desirable features such as asynchronous setup, multiverse support, and dynamic threshold.
Prior to our work, the only known constructions of threshold encryption with silent setup relied on heavy cryptographic machinery such as indistinguishability obfuscation or witness encryption for all of $\mathsf{NP}$. Our core technical innovation lies in building a special purpose witness encryption scheme for the statement ``at least $t$ parties have signed a given message''. Our construction relies on pairings and is proved secure in the Generic Group Model.
Notably, our construction, restricted to the special case of threshold $t=1$, gives an alternative construction of the (flexible) distributed broadcast encryption from pairings, which has been the central focus of several recent works.
We implement and evaluate our scheme to demonstrate its concrete efficiency. Both encryption and partial decryption are constant time, taking $<7\,$ms and $<1\,$ms, respectively. For a committee of $1024$ parties, the aggregation of partial decryptions takes $<200\,$ms, when all parties provide partial decryptions. The size of each ciphertext is $\approx 8\times$ larger than an ElGamal ciphertext.
Note on the cryptanalysis of Speedy
At Eurocrypt 2023, a differential attack on the block cipher Speedy-7-192 was presented. This note shows that the main differential characteristic that this attack is based on has probability zero.
Election Eligibility with OpenID: Turning Authentication into Transferable Proof of Eligibility
Eligibility checks are often abstracted away or omitted in voting protocols, leading to situations where the voting server can easily stuff the ballot box. One reason for this is the difficulty of bootstrapping the authentication material for voters without trusting the voting server.
In this paper, we propose a new protocol that solves this problem by building on OpenID, a widely deployed authentication protocol. Instead of using it as a standard authentication means, we turn it into a mechanism that delivers transferable proofs of eligibility. Using zk-SNARK proofs, we show that this can be done without revealing any compromising information, in particular, protecting everlasting privacy. Our approach remains efficient and can easily be integrated into existing protocols, as we have done for the Belenios voting protocol. We provide a full-fledged proof of concept along with benchmarks showing our protocol could be realistically used in large-scale elections.
Kleptographic Attacks against Implicit Rejection
Given its integral role in modern encryption systems such as CRYSTALS-Kyber, the Fujisaki-Okamoto (FO) transform will soon be at the center of our secure communications infrastructure. An enduring debate surrounding the FO transform is whether to use explicit or implicit rejection when decapsulation fails. Presently, implicit rejection, as implemented in CRYSTALS-Kyber, is supported by a strong set of arguments. Therefore, understanding its security implications in different attacker models is essential.
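To fix ideas, here is a minimal Python sketch of the decapsulation shape under both rejection styles, with a toy symmetric placeholder standing in for the underlying PKE (names and details are illustrative; CRYSTALS-Kyber's actual transform differs in its exact hashing):

```python
# Shape of FO decapsulation with implicit vs. explicit rejection.
# The "PKE" below is a toy symmetric placeholder (pk == sk), kept only
# so the sketch runs end to end; it has no security whatsoever.
import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

def toy_enc(pk, m, coins):
    pad = H(pk, coins)
    return bytes(a ^ b for a, b in zip(m, pad)) + coins

def toy_dec(sk, ct):
    body, coins = ct[:-32], ct[-32:]
    return bytes(a ^ b for a, b in zip(body, H(sk, coins)))

def encaps(pk):
    m = b"32-byte session secret message.."
    ct = toy_enc(pk, m, H(b"coins", m))        # derandomized encryption
    return ct, H(b"key", m, ct)

def decaps(sk, z, pk, ct, implicit=True):
    m = toy_dec(sk, ct)
    if toy_enc(pk, m, H(b"coins", m)) == ct:   # re-encryption check
        return H(b"key", m, ct)
    if implicit:
        return H(b"key", z, ct)   # pseudorandom "reject" key from secret z
    return None                   # explicit rejection: output an error

pk = sk = b"toy keypair"
z = b"implicit-rejection secret z"
ct, key = encaps(pk)
assert decaps(sk, z, pk, ct) == key
bad = bytes([ct[0] ^ 1]) + ct[1:]
assert decaps(sk, z, pk, bad) != key           # rejected, but still a key
assert decaps(sk, z, pk, bad, implicit=False) is None
```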
In this work, we study implicit rejection through a novel lens, namely, from the perspective of kleptography. Concretely, we consider an attacker model in which the attacker can subvert the user's code to compromise security while remaining undetectable. In this scenario, we present three attacks that significantly reduce the security level of the FO transform with implicit rejection.
Anonymity on Byzantine-Resilient Decentralized Computing
In recent years, decentralized computing has gained popularity in various domains such as decentralized learning, financial services and the Industrial Internet of Things. As identity privacy becomes increasingly important in the era of big data, safeguarding user identity privacy while ensuring the security of decentralized computing systems has become a critical challenge. To address this issue, we propose ADC (Anonymous Decentralized Computing) to achieve anonymity in decentralized computing. In ADC, the entire network of users can vote to trace and revoke malicious nodes. Furthermore, ADC possesses excellent Sybil-resistance and Byzantine fault tolerance, enhancing the security of the system and increasing user trust in the decentralized computing system. To decentralize the system, we propose a practical blockchain-based decentralized group signature scheme called Group Contract. We construct the entire decentralized system based on Group Contract, which does not require the participation of a trusted authority to guarantee the above functions. Finally, we conduct rigorous privacy and security analysis and performance evaluation to demonstrate the security and practicality of ADC for decentralized computing with only a minor additional time overhead.
SoK: Decentralized Storage Network
Decentralized Storage Networks (DSNs) represent a paradigm shift in data storage methodology, distributing and housing data across multiple network nodes rather than relying on a centralized server or data center architecture. The fundamental objective of DSNs is to enhance security, reinforce reliability, and mitigate censorship risks by eliminating a single point of failure. Leveraging blockchain technology for functions such as access control, ownership validation, and transaction facilitation, DSN initiatives aim to provide users with a robust and secure alternative to traditional centralized storage solutions. This paper conducts a comprehensive analysis of the developmental trajectory of DSNs, focusing on key components such as Proof of Storage protocols, consensus algorithms, and incentive mechanisms. Additionally, the study explores recent optimization tactics, encountered challenges, and potential avenues for future research, thereby offering insights into the ongoing evolution and advancement within the DSN domain.
LatticeFold: A Lattice-based Folding Scheme and its Applications to Succinct Proof Systems
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure and require a large (256-bit) field. In this work, we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Since LatticeFold can operate over a small (64-bit) field, our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing plausible post-quantum security. Moreover, LatticeFold operates over the same module structure used by fully homomorphic encryption (FHE) and lattice signature schemes, and can therefore benefit from software optimizations and custom hardware designed to accelerate these lattice schemes.
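For intuition on that key challenge: the Ajtai commitment $\mathrm{com}(w) = Aw \bmod q$ is additively homomorphic, but binding holds only for low-norm openings, so naively folding two witnesses doubles the norm bound. A toy numpy sketch with illustrative, insecure parameters:

```python
# Toy Ajtai-style commitment: com(w) = A @ w mod q. Binding holds only
# for low-norm openings, which is why folding protocols must keep
# extracted witnesses short. Parameters are far too small for security.
import numpy as np

q, n_rows, n_cols, beta = 97, 4, 16, 3   # beta: norm bound on openings
rng = np.random.default_rng(1)
A = rng.integers(0, q, size=(n_rows, n_cols))

def commit(w):
    assert np.max(np.abs(w)) <= beta, "opening must be low-norm"
    return (A @ w) % q

w1 = rng.integers(-beta, beta + 1, size=n_cols)
w2 = rng.integers(-beta, beta + 1, size=n_cols)

# Additive homomorphism: com(w1) + com(w2) = com(w1 + w2) mod q ...
lhs = (commit(w1) + commit(w2)) % q
rhs = (A @ (w1 + w2)) % q        # ... but w1 + w2 may exceed the bound,
assert np.array_equal(lhs, rhs)  # so repeated folding blows up the norm,
                                 # the problem the sumcheck technique tames.
```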
Fiat-Shamir for Bounded-Depth Adversaries
We study how to construct hash functions that can securely instantiate the Fiat-Shamir transformation against bounded-depth adversaries. The motivation is twofold. First, given the recent fruitful line of research of constructing cryptographic primitives against bounded-depth adversaries under worst-case complexity assumptions, and the rich applications of Fiat-Shamir, instantiating Fiat-Shamir hash functions against bounded-depth adversaries under worst-case complexity assumptions might lead to further applications (such as SNARG for P, showing the cryptographic hardness of PPAD, etc.) against bounded-depth adversaries. Second, we wonder whether it is possible to overcome the impossibility results of constructing Fiat-Shamir for arguments [Goldwasser, Kalai, FOCS ’03] in the setting where the depth of the adversary is bounded, given that the known impossibility results (against p.p.t. adversaries) are contrived.
Our main results give new insights for Fiat-Shamir against bounded-depth adversaries in both the positive and negative directions. On the positive side, for Fiat-Shamir for proofs with certain properties, we show that weak worst-case assumptions are enough for constructing explicit hash functions that give $\mathsf{AC}^0[2]$-soundness. In particular, we construct an $\mathsf{AC}^0[2]$-computable correlation-intractable hash family for constant-degree polynomials against $\mathsf{AC}^0[2]$ adversaries, assuming $\oplus \mathsf{L}/\mathsf{poly} \not\subseteq \widetilde{\mathsf{Sum}}_{n^{-c}} \circ\mathsf{AC}^0[2]$ for some $c > 0$. This is incomparable to all currently-known constructions, which are typically useful for larger classes and against stronger adversaries, but based on arguably stronger assumptions. Our construction is inspired by the Fiat-Shamir hash function by Peikert and Shiehian [CRYPTO ’19] and the fully-homomorphic encryption scheme against bounded-depth adversaries by Wang and Pan [EUROCRYPT ’22].
On the negative side, we show Fiat-Shamir for arguments is still impossible to achieve against bounded-depth adversaries. In particular,
• Assuming the existence of $\mathsf{AC}^0[2]$-computable CRHF against p.p.t. adversaries, for every poly-size hash function, there is a (p.p.t.-sound) interactive argument that is not $\mathsf{AC}^0[2]$-sound after applying Fiat-Shamir with this hash function.
• Assuming the existence of $\mathsf{AC}^0[2]$-computable CRHF against $\mathsf{AC}^0[2]$ adversaries, there is an $\mathsf{AC}^0[2]$-sound interactive argument such that for every hash function computable by $\mathsf{AC}^0[2]$ circuits the argument does not preserve $\mathsf{AC}^0[2]$-soundness when applying Fiat-Shamir with this hash function. This is a low-depth variant of the result of Goldwasser and Kalai.
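For reference, the Fiat-Shamir transformation being instantiated replaces the verifier's random challenge with a hash of the statement and the prover's first message. A toy Python sketch for Schnorr's sigma protocol, with illustrative insecure parameters and an ordinary hash rather than the paper's $\mathsf{AC}^0[2]$-computable correlation-intractable family:

```python
# Fiat-Shamir applied to Schnorr's sigma protocol (toy parameters).
# The verifier's random challenge is replaced by c = H(statement, commitment).
import hashlib, secrets

p = 2**127 - 1        # Mersenne prime; NOT a secure group choice
qord = p - 1          # exponents live modulo the group order p - 1
g = 3

def H(*ints) -> int:
    data = b"|".join(str(i).encode() for i in ints)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % qord

x = secrets.randbelow(qord)      # witness (discrete log)
X = pow(g, x, p)                 # statement

# Prover: commit, derive the challenge via Fiat-Shamir, respond.
r = secrets.randbelow(qord)
R = pow(g, r, p)
c = H(X, R)                      # hash replaces the verifier's coins
s = (r + c * x) % qord

# Verifier: recompute the challenge and check g^s == R * X^c.
assert c == H(X, R)
assert pow(g, s, p) == (R * pow(X, c, p)) % p
```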
Revisiting Differential-Linear Attacks via a Boomerang Perspective With Application to AES, Ascon, CLEFIA, SKINNY, PRESENT, KNOT, TWINE, WARP, LBlock, Simeck, and SERPENT
In 1994, Langford and Hellman introduced differential-linear (DL) cryptanalysis, with the idea of decomposing the block cipher $E$ into two parts, $E_U$ and $E_L$, such that $E_U$ exhibits a high-probability differential trail, while $E_L$ has a high-correlation linear trail. Combining these trails forms a distinguisher for $E$, assuming independence between $E_U$ and $E_L$. The dependency between the two parts of DL distinguishers remained unaddressed until EUROCRYPT 2019, where Bar-On et al. introduced the DLCT framework, resolving the issue up to one S-box layer. However, extending the DLCT framework to formalize the dependency between the two parts for multiple rounds remained an open problem.
In this paper, we first tackle this problem from the perspective of boomerang analysis. By examining the relationships between DLCT, DDT, and LAT, we introduce a set of new tables facilitating the formulation of dependencies between the two parts of the DL distinguisher across multiple rounds. Then, we introduce a highly versatile and easy-to-use automatic tool for exploring DL distinguishers, inspired by automatic tools for boomerang distinguishers. This tool considers the dependency between differential and linear trails across multiple rounds. We apply our tool to various symmetric primitives, and in all applications, we either present the first DL distinguishers or enhance the best-known ones. We achieve successful results against Ascon, AES, SERPENT, PRESENT, SKINNY, TWINE, CLEFIA, WARP, LBlock, Simeck, and KNOT. Furthermore, we demonstrate that, in some cases, DL distinguishers outperform boomerang distinguishers significantly.
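The standard tables underlying this analysis are straightforward to compute for one S-box layer. The sketch below builds the DDT, LAT, and DLCT of a 4-bit S-box (PRESENT's, as an example) from their textbook definitions; the paper's new tables extend this toolbox and are not reproduced here.

```python
# DDT, LAT, and DLCT of a small S-box (PRESENT's 4-bit S-box).
S = [0xC,0x5,0x6,0xB,0x9,0x0,0xA,0xD,0x3,0xE,0xF,0x8,0x4,0x7,0x1,0x2]
n = 16

# DDT[a][b] = #{x : S[x] ^ S[x ^ a] == b}
DDT = [[0]*n for _ in range(n)]
for a in range(n):
    for x in range(n):
        DDT[a][S[x] ^ S[x ^ a]] += 1

def dot(mask, val):                       # parity of masked bits <mask, val>
    return bin(mask & val).count("1") & 1

# LAT[a][b] = #{x : <a,x> == <b,S[x]>} - n/2
LAT = [[-n // 2]*n for _ in range(n)]
for a in range(n):
    for b in range(n):
        for x in range(n):
            if dot(a, x) == dot(b, S[x]):
                LAT[a][b] += 1

# DLCT[a][b] = #{x : <b, S[x] ^ S[x ^ a]> == 0} - n/2: a difference goes
# in, a linear mask comes out, linking the differential and linear halves.
DLCT = [[-n // 2]*n for _ in range(n)]
for a in range(n):
    for b in range(n):
        for x in range(n):
            if dot(b, S[x] ^ S[x ^ a]) == 0:
                DLCT[a][b] += 1
```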
Adaptive Security in SNARGs via iO and Lossy Functions
We construct an adaptively sound SNARG in the plain model with a CRS, relying on the assumptions of (subexponential) indistinguishability obfuscation (iO), subexponential one-way functions, and a notion of lossy functions we call length-parameterized lossy functions. Length-parameterized lossy functions take separate security and input-length parameters and have the property that the image size of the function in lossy mode depends only on the security parameter. We then show a novel way of constructing such functions from the Learning with Errors (LWE) assumption.
Our work provides an alternative path towards achieving adaptively secure SNARGs from the recent work of Waters and Wu. Their work required the use of (essentially) perfectly re-randomizable one-way functions (in addition to obfuscation). Such functions are currently known to be realizable only from assumptions, such as discrete log or factoring, that are known not to hold in a quantum setting.
2PC-MPC: Emulating Two Party ECDSA in Large-Scale MPC
Motivated by the need for a massively decentralized network concurrently servicing many clients, we present novel low-overhead UC-secure, publicly verifiable, threshold ECDSA protocols with identifiable abort.
For the first time, we show how to reduce the message complexity from O(n^2) to O(n) and the computational complexity from O(n) to practically O(1) (per party, where n is the number of parties).
We require only a broadcast channel for communication, and therefore natively support use cases like permissionless bridges and decentralized custody, where P2P channels between every pair of parties are infeasible; relying on broadcast alone is also what reduces the message complexity and makes the protocol publicly verifiable.
We enable all communication to be public (over a broadcast channel), by using a threshold additively homomorphic encryption scheme and novel zero-knowledge proofs.
To further reduce the computation and communication overheads, our protocols employ novel batching and amortization techniques, which may be of independent interest.
Our second main contribution is the introduction of the notion of a 2PC-MPC protocol: a two-party ECDSA protocol where the second party is fully emulated by a network of n parties.
This notion assures that both the first party (the client) and (a threshold) of the network are required to participate in signing, while abstracting away the internal structure of the network.
In particular, the communication and computation complexities of the client remain independent of the network properties (e.g., its size). This allows ultimate decentralization in distributed custody use cases, meeting growing demand in the industry.
We report that our implementation completes the signing phase in 1.23 and 12.703 seconds, for 256 and 1024 parties, respectively.
Faster Signatures from MPC-in-the-Head
We revisit the construction of signature schemes using the MPC-in-the-head paradigm. We obtain two main contributions:
– We observe that previous signatures in the MPC-in-the-head paradigm must rely on a salted version of the GGM puncturable pseudorandom function (PPRF) to avoid collision attacks. We design a new efficient PPRF construction that is provably secure in the multi-instance setting. The security analysis of our PPRF, in the ideal cipher model, is quite involved and forms a core technical contribution of our work. While previous constructions had to rely on a hash function, our construction uses only a fixed-key block cipher and is considerably more efficient as a result: we observe a 12× to 55× speed improvement for a recent signature scheme (Joux and Huth, Crypto'24). Our improved PPRF can be used to speed up many MPC-in-the-head signatures (the GGM tree structure underlying such PPRFs is sketched after this list).
– We introduce a new signature scheme from the regular syndrome decoding assumption, based on a new protocol for the MPC-in-the-head paradigm, which significantly reduces communication compared to previous works. Our scheme is conceptually simple, though its security analysis requires a delicate and nontrivial combinatorial analysis.
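As background for the first contribution above: the GGM construction expands a root seed down a binary tree via a length-doubling PRG, and a key punctured at $x$ consists of the siblings along the path to $x$. A minimal Python sketch, with SHA-256 standing in for the PRG (the paper's improvement is precisely to replace such hash-based expansion with a fixed-key block cipher):

```python
# Sketch of the GGM tree behind puncturable PRFs: each node's two
# children are derived by a length-doubling PRG; SHA-256 is a stand-in.
import hashlib

def prg(seed: bytes) -> tuple[bytes, bytes]:
    left  = hashlib.sha256(seed + b"\x00").digest()
    right = hashlib.sha256(seed + b"\x01").digest()
    return left, right

def ggm_eval(root: bytes, x: int, depth: int) -> bytes:
    """Evaluate F(root, x) by walking the tree along x's bits."""
    node = root
    for i in reversed(range(depth)):
        left, right = prg(node)
        node = right if (x >> i) & 1 else left
    return node

def puncture(root: bytes, x: int, depth: int) -> list[bytes]:
    """Punctured key at x: the siblings off the path to x, which allow
    evaluating F everywhere except at x itself."""
    key, node = [], root
    for i in reversed(range(depth)):
        left, right = prg(node)
        bit = (x >> i) & 1
        key.append(left if bit else right)   # sibling off the path
        node = right if bit else left
    return key
```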
Communication-Optimal Convex Agreement
Byzantine Agreement (BA) allows a set of $n$ parties to agree on a value even when up to $t$ of the parties involved are corrupted.
While previous works have shown that, for $\ell$-bit inputs, BA can be achieved with the optimal communication complexity $O(\ell n)$ for sufficiently large $\ell$, BA only ensures that honest parties agree on a meaningful output when they hold the same input, rendering the primitive inadequate for many real-world applications.
This gave rise to the notion of Convex Agreement (CA), introduced by Vaidya and Garg [PODC'13], which requires the honest parties' outputs to be in the convex hull of the honest inputs. Unfortunately, all existing CA protocols incur a communication complexity of at least $O(\ell n^2)$.
In this work, we introduce the first CA protocol with the optimal communication of $O(\ell n)$ bits for inputs in $\mathbb{Z}$ of size $\ell = \Omega(\kappa \cdot n \log^2 n)$, where $\kappa$ is the security parameter.
Exploring the Six Worlds of Gröbner Basis Cryptanalysis: Application to Anemoi
Gröbner basis cryptanalysis of hash functions and ciphers, and their underlying permutations, has seen renewed interest recently. Anemoi (Crypto'23) is a permutation-based hash function that is efficient for a variety of arithmetizations used in zero-knowledge proofs. In this paper, exploring both theoretical bounds as well as experimental validation, we present new complexity estimates for Gröbner basis attacks on the Anemoi permutation over prime fields.
We cast our findings in what we call the six worlds of Gröbner basis cryptanalysis. As an example, keeping the same security arguments of the design, we conclude that at least 41 instead of 37 rounds would need to be used for 256-bit security, and this suggestion does not yet include a security margin.
Robust Additive Randomized Encodings from IO and Pseudo-Non-linear Codes
Additive randomized encodings (ARE), introduced by Halevi, Ishai, Kushilevitz, and Rabin (CRYPTO 2023), reduce the computation of a $k$-party function $f(x_1, \dots, x_k)$ to locally computing encodings $\hat{x}_i$ of each input $x_i$ and then adding them together over some Abelian group into an output encoding $\hat{y} = \sum_i \hat{x}_i$, which reveals nothing but the result. In robust ARE (RARE), the sum of any subset of the $\hat{x}_i$ reveals only the residual function obtained by restricting the corresponding inputs. The appeal of (R)ARE comes from the simplicity of the online part of the computation, which involves only addition; this yields, for instance, non-interactive multi-party computation in the shuffle model, where messages from different parties are anonymously shuffled. Halevi, Ishai, Kushilevitz, and Rabin constructed ARE from standard assumptions and RARE in the ideal obfuscation model, leaving open the question of whether RARE can be constructed in the plain model.
We construct RARE in the plain model from indistinguishability obfuscation, which is necessary, and a new primitive that we call pseudo-non-linear codes. We provide two constructions of this primitive, assuming either Learning with Errors or Decisional Diffie-Hellman. A bonus feature of our construction is that it is online succinct. Specifically, encodings $\hat{x}_i$ can be decomposed into offline parts $\hat{z}_i$ that can be sent directly to the evaluator and short online parts $\hat{g}_i$ that are added together.
FRIDA: Data Availability Sampling from FRI
As blockchains like Ethereum continue to grow, clients with limited resources can no longer store the entire chain.
Light nodes that want to use the blockchain, without verifying that it is in a good state overall, can just download the block headers without the corresponding block contents.
As those light nodes may eventually need some of the block contents, they would like to ensure that they are in principle available.
Data availability sampling, introduced by Al-Bassam et al., is a process that allows light nodes to check the availability of data without downloading it.
In a recent effort, Hall-Andersen, Simkin, and Wagner have introduced formal definitions and analyzed several constructions.
While their work thoroughly lays the formal foundations for data availability sampling, the constructions are either prohibitively expensive, use a trusted setup, or have a download complexity for light clients that scales with the square root of the data size.
In this work, we make a significant step forward by proposing an efficient data availability sampling scheme without a trusted setup and with only polylogarithmic overhead.
To this end, we find a novel connection with interactive oracle proofs of proximity (IOPPs).
Specifically, we prove that any IOPP meeting an additional consistency criterion can be turned into an erasure code commitment, and then, leveraging a compiler due to Hall-Andersen, Simkin, and Wagner, into a data availability sampling scheme.
This new connection enables data availability sampling to benefit from future results on IOPPs.
We then show that the widely used FRI IOPP satisfies our consistency criterion and demonstrate that the resulting data availability sampling scheme outperforms the state-of-the-art asymptotically and concretely in multiple parameters.
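For orientation, here is the generic shape of data availability sampling in its simplest Merkle-commitment flavor: the block producer commits to the (erasure-coded) data, and a light node verifies openings at a few random positions against the commitment from the block header. This is a toy sketch of the workflow only; FRIDA's FRI-based erasure code commitment is the paper's actual contribution and is not shown.

```python
# Generic data availability sampling loop (Merkle-style sketch).
import hashlib, secrets

def leaf(i: int, chunk: bytes) -> bytes:
    return hashlib.sha256(i.to_bytes(4, "big") + chunk).digest()

def merkle_root(chunks):
    layer = [leaf(i, c) for i, c in enumerate(chunks)]
    while len(layer) > 1:
        layer = [hashlib.sha256(layer[j] + layer[j + 1]).digest()
                 for j in range(0, len(layer), 2)]
    return layer[0]

def merkle_path(chunks, i):
    layer = [leaf(j, c) for j, c in enumerate(chunks)]
    path = []
    while len(layer) > 1:
        path.append(layer[i ^ 1])            # sibling at this level
        layer = [hashlib.sha256(layer[j] + layer[j + 1]).digest()
                 for j in range(0, len(layer), 2)]
        i //= 2
    return path

def verify(root, i, chunk, path):
    node = leaf(i, chunk)
    for sib in path:
        node = hashlib.sha256(node + sib if i % 2 == 0 else sib + node).digest()
        i //= 2
    return node == root

# Light node: sample a few random indices; erasure coding (omitted here)
# is what makes a handful of verified samples imply full recoverability.
chunks = [bytes([b]) * 32 for b in range(8)]
root = merkle_root(chunks)
for _ in range(3):
    i = secrets.randbelow(len(chunks))
    assert verify(root, i, chunks[i], merkle_path(chunks, i))
```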
Fault-Resistant Partitioning of Secure CPUs for System Co-Verification against Faults
Fault injection attacks are a serious threat to system security, enabling attackers to bypass protection mechanisms or access sensitive information. To evaluate the robustness of CPU-based systems against these attacks, it is essential to analyze the consequences of the fault propagation resulting from the complex interplay between the software and the processor. However, current formal methodologies combining hardware and software face scalability issues due to the monolithic approach used.
To address this challenge, this work formalizes the $k$-fault resistant partitioning notion to solve the fault propagation problem when assessing redundancy-based hardware countermeasures in a first step. Proven security guarantees can then reduce the remaining hardware attack surface when introducing the software in a second step. First, we validate our approach against previous work by reproducing known results on cryptographic circuits. In particular, we outperform state-of-the-art tools for evaluating AES under a three-fault-injection attack. Then, we apply our methodology to the OpenTitan secure element and formally prove the security of its CPU's hardware countermeasure to single bit-flip injections. Besides that, we demonstrate that previously intractable problems, such as analyzing the robustness of OpenTitan running a secure boot process, can now be solved by a co-verification methodology that leverages a $k$-fault resistant partitioning. We also report a potential exploitation of the register file vulnerability in two other software use cases. Finally, we provide a security fix for the register file, prove its robustness, and integrate it into the OpenTitan project.
OCash: Fully Anonymous Payments between Blockchain Light Clients
We study blockchain-based provably anonymous payment systems between light clients. Such clients interact with the blockchain through full nodes, which can see what the light clients read and write. The goal of our work is to enable light clients to perform anonymous payments, while maintaining privacy even against the full nodes through which they interact with the blockchain. We formalize the problem in the UC model and present a provably secure solution. We show that a variation of tree ORAM gives obliviousness even when an adversary can follow how its own data elements move in the tree. We use this for anonymity via shuffling of payments on the blockchain, while at the same time allowing the light client to know a few positions among which to find its payment without knowing the current state of the blockchain. In comparison to existing works, we are the first ones that simultaneously provide strong anonymity guarantees, provable security, and anonymity with respect to full nodes. Along the way, we make several contributions that may be of independent interest. We define and construct anonymous-coin friendly encryption schemes and show how they can be used within anonymous payment systems. We define and construct efficient compressible randomness beacons, which produce unpredictable values in regular intervals and allow for storing all published values in a short digest.
Linear-Communication Asynchronous Complete Secret Sharing with Optimal Resilience
Secure multiparty computation (MPC) allows a set of $n$ parties to jointly compute a function on their private inputs. In this work, we focus on information-theoretic MPC in the \emph{asynchronous network} setting with optimal resilience ($t<n/3$). The best-known result in this setting is achieved by Choudhury and Patra [J. Cryptol '23], which requires $O(n^4\kappa)$ bits per multiplication gate, where $\kappa$ is the size of a field element.
An asynchronous complete secret sharing (ACSS) protocol allows a dealer to share a batch of Shamir sharings such that all parties eventually receive their shares. ACSS is an important building block in AMPC. The best-known result of ACSS is due to Choudhury and Patra [J. Cryptol '23], which requires $O(n^3\kappa)$ bits per sharing. On the other hand, in the synchronous setting, it is known that distributing Shamir sharings can be achieved with $O(n\kappa)$ bits per sharing. There is a gap of $n^2$ in the communication between the synchronous setting and the asynchronous setting.
Our work closes this gap by presenting the first ACSS protocol that achieves $O(n\kappa)$ bits per sharing. When combined with the compiler from ACSS to AMPC by Choudhury and Patra [IEEE Trans. Inf. Theory '17], we obtain an AMPC with $O(n^2\kappa)$ bits per multiplication gate, improving the previously best-known result by a factor of $n^2$. Moreover, with a concurrent work that improves the compiler by Choudhury and Patra by a factor of $n$, we obtain the first AMPC with $O(n\kappa)$ bits per multiplication gate.
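For reference, a single Shamir sharing, the object an ACSS dealer distributes in bulk, is just the evaluation of a random degree-$t$ polynomial whose constant term is the secret; the difficulty in ACSS lies in guaranteeing, over an asynchronous network, that every honest party eventually holds a consistent share. A toy Python sketch of the sharing itself:

```python
# Toy Shamir sharing over a prime field: the dealer samples a random
# degree-t polynomial f with f(0) = secret and sends f(i) to party i.
import secrets

p = 2**61 - 1   # Mersenne prime field (toy choice)

def share(secret, n, t):
    coeffs = [secret] + [secrets.randbelow(p) for _ in range(t)]
    return [(i, sum(c * pow(i, k, p) for k, c in enumerate(coeffs)) % p)
            for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        secret = (secret + yi * num * pow(den, p - 2, p)) % p
    return secret

shares = share(42, n=7, t=2)
assert reconstruct(shares[:3]) == 42   # any t+1 = 3 shares suffice
```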
Don’t Use It Twice! Solving Relaxed Linear Code Equivalence Problems
The Linear Code Equivalence (LCE) Problem has received increased attention in recent years due to its applicability in constructing efficient digital signatures. Notably, the LESS signature scheme based on LCE is under consideration for the NIST post-quantum standardization process, along with the MEDS signature scheme that relies on an extension of LCE to the rank metric, namely the Matrix Code Equivalence (MCE) Problem. Building upon these developments, a family of signatures with additional properties, including linkable ring, group, and threshold signatures, has been proposed. These novel constructions introduce relaxed versions of LCE (and MCE), wherein multiple samples share the same secret equivalence. Despite their significance, these variations have often lacked a thorough security analysis, being assumed to be as challenging as their original counterparts. Addressing this gap, our work delves into the sample complexity of LCE and MCE --- precisely, the sufficient number of samples required for efficient recovery of the shared secret equivalence. Our findings reveal, for instance, that one should not use the same secret twice in the LCE setting since this enables a polynomial time (and memory) algorithm to retrieve the secret. Consequently, our results unveil the insecurity of two advanced signatures based on variants of the LCE Problem.
Towards Achieving Asynchronous MPC with Linear Communication and Optimal Resilience
Secure multi-party computation (MPC) allows a set of $n$ parties to jointly compute a function over their private inputs.
The seminal works of Ben-Or, Canetti and Goldreich [STOC '93] and Ben-Or, Kelmer and Rabin [PODC '94] settled the feasibility of MPC over asynchronous networks. Despite the significant line of work devoted to improving the communication complexity, current protocols with information-theoretic security and optimal resilience $t<n/3$ communicate $\Omega(n^4C)$ field elements for a circuit with $C$ multiplication gates. In contrast, synchronous MPC protocols with $O(nC)$ communication have long been known.
In this work we make progress towards closing this gap. We provide a novel MPC protocol in the asynchronous setting with statistical security that makes black-box use of an asynchronous complete secret-sharing (ACSS) protocol. The cost per multiplication reduces to the cost of distributing a constant number of sharings via ACSS, improving a linear factor over the state of the art by Choudhury and Patra [IEEE Trans. Inf. Theory '17].
With a recent concurrent work achieving ACSS with linear cost per sharing, we achieve an MPC with $O(nC)$ communication and optimal resilience.
Perfectly-Secure MPC with Constant Online Communication Complexity
In this work, we study the communication complexity of perfectly secure MPC protocols with guaranteed output delivery against $t=(n-1)/3$ corruptions. The previously best-known result in this setting is due to Goyal, Liu, and Song (CRYPTO, 2019), which achieves $O(n)$ communication per gate, where $n$ is the number of parties.
On the other hand, in the honest majority setting, a recent trend in designing efficient MPC protocol is to rely on packed Shamir sharings to speed up the online phase. In particular, the work by Escudero et al. (CCS 2022) gives the first semi-honest protocol that achieves a constant communication overhead per gate across all parties in the online phase while maintaining overall $O(n)$ communication per gate. We thus ask the following question: ``Is it possible to construct a perfectly secure MPC protocol with GOD such that the online communication per gate is $O(1)$ while maintaining overall $O(n)$ communication per gate?''
In this work, we give an affirmative answer by providing an MPC protocol for computing an arithmetic circuit $C$ over a finite field of size at least $2n$ with communication complexity $O(|C|+\mathsf{Depth}\cdot n+n^5 \cdot \log n)$ elements for the online phase, and $O(|C|\cdot n+\mathsf{Depth}\cdot n^2 + n^5 \cdot \log n)$ elements for the preprocessing phase, where $|C|$ is the circuit size and $\mathsf{Depth}$ is the circuit depth.
Consecutive Adaptor Signature Scheme: From Two-Party to N-Party Settings
Adaptor signatures have attracted attention as a tool to address scalability and interoperability issues in blockchain applications. Adaptor signatures can be constructed by extending common digital signature schemes that both authenticate a message and disclose a secret witness to a specific party. In Asiacrypt 2021, Aumayr et al. formulated the two-party adaptor signature as an independent cryptographic primitive. In this study, we extend their adaptor signature formulation to $N$ parties, present its generic construction, and define the security to be satisfied. Then, we present a concrete construction based on Schnorr signatures and discuss the security properties.
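To recall the two-party mechanism being generalized to $N$ parties: in a Schnorr-based adaptor signature, the pre-signature is bound to an adaptor point $T = g^t$, completing it requires the witness $t$, and the completed signature reveals $t$ to the holder of the pre-signature. A toy Python sketch with illustrative, insecure parameters; this shows the standard Schnorr adaptor pattern, not the paper's construction:

```python
# Schnorr adaptor signature sketch (toy parameters).
import hashlib, secrets

p = 2**127 - 1                     # Mersenne prime modulus (insecure size)
qord = p - 1
g = 3

def H(*ints) -> int:
    data = b"|".join(str(i).encode() for i in ints)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % qord

x = secrets.randbelow(qord); X = pow(g, x, p)   # signing key pair
t = secrets.randbelow(qord); T = pow(g, t, p)   # witness / adaptor point
msg = 7

# Pre-sign: the challenge commits to R*T instead of R alone.
r = secrets.randbelow(qord)
R = pow(g, r, p)
c = H(X, (R * T) % p, msg)
s_pre = (r + c * x) % qord         # pre-signature (R, s_pre)

# Adapt: knowing t completes the pre-signature into a valid signature.
s = (s_pre + t) % qord
assert pow(g, s, p) == (R * T % p) * pow(X, c, p) % p   # Schnorr verify

# Extract: holding both values reveals the witness.
assert (s - s_pre) % qord == t
```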
Implementation of Cryptanalytic Programs Using ChatGPT
Large language models (LLMs), exemplified by the advanced AI tool ChatGPT in 2023, have demonstrated remarkable capabilities in generating sentences, images, and program codes, driven by their development from extensive datasets. With over 100 million users worldwide, ChatGPT stands out as a leader among LLMs. Previous studies have shown its proficiency in generating program source codes for the symmetric-key block ciphers AES, CHAM, and ASCON. This study ventures into the implementation of cryptanalytic program source codes for a lightweight block cipher using ChatGPT, exploring its potential and limitations in the field of cryptography.
Simulation-Secure Threshold PKE from Standard (Ring-)LWE
Threshold public key encryption (ThPKE) is PKE that can be decrypted by collecting “partial decryptions” from $t \le N$ out of $N$ parties. ThPKE based on the learning with errors problem (LWE) is particularly important because it can be extended to threshold fully homomorphic encryption (ThFHE). ThPKE and ThFHE are fundamental tools for constructing multiparty computation (MPC) protocols: in 2023, NIST initiated a project (NIST IR 8214C) to establish guidelines for implementing threshold cryptosystems. Because MPC often requires simulation-security (SS), ThPKE schemes that satisfy SS (SS-ThPKE) are also important. Recently, Micciancio and Suhl (ePrint 2023/1728) presented an efficient SS-ThPKE scheme based on LWE with a polynomial modulus. However, the scheme requires the use of a nonstandard problem called “known-norm LWE” for the security proof, because the norm $\|e\|$ of the error of the public key is leaked from the partial decryptions. This leads to the following two challenges: 1) the construction based on LWE relies on a non-tight reduction from known-norm LWE to LWE; 2) no construction based on (standard) Ring-LWE has been presented. In this paper, we address both of these challenges: we propose an efficient SS-ThPKE scheme whose security is (directly) reduced from standard LWE/Ring-LWE with a polynomial modulus. The core technique of our construction is what we call “error sharing”: we distribute shares of a small error $\zeta$ via secret sharing, and use them to prevent leakage of $\|e\|$ from partial decryptions.
A Single Trace Fault Injection Attack on Hedged CRYSTALS-Dilithium
CRYSTALS-Dilithium is a post-quantum secure digital signature algorithm currently being standardised by NIST. As a result, devices making use of CRYSTALS-Dilithium will soon become generally available and be deployed in various environments. It is thus important to assess the resistance of CRYSTALS-Dilithium implementations to physical attacks.
In this paper, we present an attack on a CRYSTALS-Dilithium implementation in hedged mode on an ARM Cortex-M4, using fault injection. Voltage glitching is performed to skip the computation of a seed during the generation of the signature. We identified settings that consistently skip the desired function without crashing the device. After the successful fault injection, the resulting signature allows for the extraction of the secret key vector. Our attack succeeds with probability 0.582 in a single trace.
We also propose countermeasures against the presented attack.
Collusion-Resilience in Transaction Fee Mechanism Design
Users bid in a transaction fee mechanism (TFM) to get their transactions included and confirmed by a blockchain protocol. Roughgarden (EC'21) initiated the formal treatment of TFMs and proposed three requirements: user incentive compatibility (UIC), miner incentive compatibility (MIC), and a form of collusion-resilience called OCA-proofness. Ethereum's EIP-1559 mechanism satisfies all three properties simultaneously when there is no contention between transactions, but loses the UIC property when there are too many eligible transactions to fit in a single block. Chung and Shi (SODA'23) considered an alternative notion of collusion-resilience, called c-side-contract-proofness (c-SCP), and showed that, when there is contention between transactions, no TFM can satisfy UIC, MIC, and c-SCP for any c at least 1. OCA-proofness asserts that the users and a miner should not be able to "steal from the protocol" and is intuitively weaker than the c-SCP condition, which stipulates that a coalition of a miner and a subset of users should not be able to profit through strategic deviations (whether at the expense of the protocol or of the users outside the coalition).
Our main result is the first proof that, when there is contention between transactions, no (possibly randomized) direct-revelation TFM satisfies UIC, MIC, and OCA-proofness. This result resolves the main open question in Roughgarden (EC'21). We also suggest several relaxations of the basic model that allow our impossibility result to be circumvented.
New Black-Box Separations through Mathematically Structured Primitives
We provide a novel view of public-key cryptography by showing full equivalences of certain primitives to "hard" monoid actions. More precisely, we show that key exchange and two-party computation are exactly equivalent to monoid actions with certain structural and hardness properties. To the best of our knowledge, this is the first "natural" characterization of the mathematical structure inherent to any key exchange or two-party computation protocol, and the first explicit proof of the necessity of mathematical structure for public-key cryptography. We then utilize these characterizations to show new black-box separation results. Concretely, we obtain the following results:
Two-Party Key Exchange. We show that any two-party non-interactive key exchange protocol is equivalent to the existence of an abelian monoid action equipped with a natural hardness property, namely (distributional) unpredictability. More generally, we show that any $k$-round (two-party) key exchange protocol is essentially equivalent to the existence of a (distributional) unpredictable monoid action with certain commutator-like properties. Rudich (Crypto '91) showed a black-box separation of $k$-round and $(k+1)$-round key exchange for any $k$; we use our generic primitive here to formalize this result and extend it to efficient key exchange protocols (where communication is $\textsf{poly}(k)$).
Two-Party Computation. We show that any maliciously secure two-party computation protocol is also equivalent to a monoid action with commutator-like properties and certain hardness guarantees. We then use a generic version of this primitive to show a black-box separation between $k$-round semi-honest secure two-party computation and $(k+1)$-round maliciously secure two-party computation. This yields the first black-box separation (to our knowledge) between $k$-round and $(k+1)$-round maliciously secure two-party computation protocols.
Pseudorandom Error-Correcting Codes
We construct pseudorandom error-correcting codes (or simply pseudorandom codes), which are error-correcting codes with the property that any polynomial number of codewords are pseudorandom to any computationally-bounded adversary. Efficient decoding of corrupted codewords is possible with the help of a decoding key.
We build pseudorandom codes that are robust to substitution and deletion errors, where pseudorandomness rests on standard cryptographic assumptions. Specifically, pseudorandomness is based on either $2^{O(\sqrt{n})}$-hardness of LPN, or polynomial hardness of LPN and the planted XOR problem at low density.
As our primary application of pseudorandom codes, we present an undetectable watermarking scheme for outputs of language models that is robust to cropping and a constant rate of random substitutions and deletions. The watermark is undetectable in the sense that any number of samples of watermarked text are computationally indistinguishable from text output by the original model. This is the first undetectable watermarking scheme that can tolerate a constant rate of errors.
Our second application is to steganography, where a secret message is hidden in innocent-looking content. We present a constant-rate stateless steganography scheme with robustness to a constant rate of substitutions. Ours is the first stateless steganography scheme with provable steganographic security and any robustness to errors.
Bare PAKE: Universally Composable Key Exchange from just Passwords
In the past three decades, an impressive body of knowledge has been built around secure and private password authentication. In particular, secure password-authenticated key exchange (PAKE) protocols require only minimal overhead over a classical Diffie-Hellman key exchange. PAKEs are also known to fulfill strong composable security guarantees that capture many password-specific concerns such as password correlations or password mistyping, to name only a few. However, to enjoy both round-optimality and strong security, applications of PAKE protocols must provide unique session and participant identifiers. If such identifiers are not readily available, they must be agreed upon at the cost of additional communication flows, a fact which has been met with incomprehension among practitioners, and which hindered the adoption of provably secure password authentication in practice.
In this work, we resolve this issue by proposing a new paradigm for truly password-only yet securely composable PAKE, called bare PAKE. We formally prove that two prominent PAKE protocols, namely CPace and EKE, can be cast as bare PAKEs and hence do not require pre-agreement of anything else than a password. Our bare PAKE modeling further allows us to investigate a novel "reusability" property of PAKEs, i.e., whether $n^2$ pairwise keys can be exchanged from only $n$ messages, just as the Diffie-Hellman non-interactive key exchange can do in a public-key setting. As a side contribution, this add-on property of bare PAKEs leads us to observe that some previous PAKE constructions relied on unnecessarily strong, "reusable" building blocks. By showing that "non-reusable" tools suffice for standard PAKE, we open a new path towards round-optimal post-quantum secure password-authenticated key exchange.
Cayley hashing with cookies
Cayley hash functions are based on a simple idea of using a pair of semigroup elements, $A$ and $B$, to hash the 0 and 1 bit, respectively, and then to hash an arbitrary bit string in the natural way, by using multiplication of elements in the semigroup. The main advantage of Cayley hash functions compared to, say, hash functions in the SHA family is that when an already hashed document is amended, one does not have to hash the whole amended document all over again; rather, one hashes just the amended part and then multiplies the result by the hash of the original document. Some authors have argued that this may be a security hazard, specifically that this property may facilitate finding a second preimage by splitting a long bit string into shorter pieces. In this paper, we offer a way to get rid of this alleged disadvantage while keeping the advantages. We call this method "Cayley hashing with cookies", using terminology borrowed from the theory of random walks in a random environment. For the platform semigroup, we use $2\times 2$ matrices over $\mathbb{F}_p$.
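The core mechanism, minus the new cookies, fits in a few lines of Python; the matrices below are illustrative placeholders rather than proposed parameters:

```python
# Cayley hashing sketch: bit 0 maps to matrix A, bit 1 to matrix B over
# F_p; a bit string hashes to the product of the corresponding matrices.
p = 2**31 - 1
A = ((1, 2), (0, 1))
B = ((1, 0), (2, 1))

def matmul(M, N):
    return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))

def cayley_hash(bits: str):
    H = ((1, 0), (0, 1))                  # identity matrix
    for b in bits:
        H = matmul(H, A if b == "0" else B)
    return H

# The incremental property discussed above: hashing an amended document
# only needs the hash of the new part times the hash of the original.
h = matmul(cayley_hash("0110"), cayley_hash("1001"))
assert h == cayley_hash("01101001")
```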
On the Security of Nova Recursive Proof System
Nova is a new type of recursive proof system that uses a folding scheme as its core building block. This brilliant idea of folding relations can significantly reduce the recursion overhead. In this paper, we study some issues related to Nova’s soundness proof, which relies on the soundness of the folding scheme in a recursive manner.
First, due to its recursive nature, the proof strategy inevitably causes the running time of the recursive extractor to expand polynomially with each additional recursive step. This constrains Nova's soundness model to only logarithmically many recursive steps. Consequently, the soundness proof in this limited model does not guarantee soundness for a linear number of rounds in the security parameter, such as 128 rounds for 128-bit security. On the other hand, there are no known attacks on arbitrary-depth recursion of Nova, leaving a gap between theoretical security guarantees and real-world attacks. We aim to bridge this gap in two opposite directions. In the negative direction, we present a recursive proof system that is unforgeable in the log-round model but forgeable if used for linearly many rounds. This shows that the soundness proof in the log-round model might not be applicable to real-world applications that require linearly many rounds. In the positive direction, we show that when Nova uses a specific group-based folding scheme, its knowledge soundness over polynomially many rounds can be proven in the Extended Algebraic Group Model (EAGM), a novel computational model of ours that lies between the Algebraic Group Model (AGM) and the Generic Group Model (GGM). To the best of our knowledge, this is the first result establishing Nova's soundness over polynomially many rounds.
Second, the folding scheme is converted non-interactively via the Fiat-Shamir transformation and then arithmetized into R1CS. Therefore, the soundness of Nova using the non-interactive folding scheme essentially relies on the heuristic random oracle instantiation in the standard model. In our new soundness proof for Nova in the EAGM, we replace this heuristic with a new computational assumption for a cryptographic hash function, called the General Zero-Testing assumption. We treat this hash assumption as an independent subject of interest and expect it to contribute to a deeper understanding of Nova's soundness.
Need for Speed: Leveraging the Power of Functional Encryption for Resource-Constrained Devices
Functional Encryption (FE) is a cutting-edge cryptographic technique that enables a user with a specific functional decryption key to determine a certain function of encrypted data without gaining access to the underlying data. Given its potential and the fact that FE is still a relatively new field, we set out to investigate how it could be applied to resource-constrained environments. This work presents what we believe to be the first lightweight FE scheme explicitly designed for resource-constrained devices. We also propose a use case protocol that demonstrates how our scheme can secure an Internet of Things (IoT) architecture where relevant devices collect data and securely deliver them to a storage server, where an analyst can request access to the encrypted data. Finally, we conduct thorough experiments on two commercially available resource-constrained devices to provide compelling evidence of our approach's practicality and efficiency. Although the results of our evaluations show that there is room for improvement in the proposed scheme, this work represents one of the first attempts to apply FE to the IoT setting that can directly impact people's daily lives and the everyday operations of organizations.
Analysis of Layered ROLLO-I: A BII-LRPC code-based KEM
We analyze Layered ROLLO-I, a code-based cryptosystem published in IEEE Communications Letters and submitted to the Korean post-quantum cryptography competition. Four versions of Layered ROLLO-I have been proposed in the competition. We show that the first two versions do not provide the claimed security against rank decoding attacks and give reductions to small instances of the original ROLLO-I scheme, which was a candidate in the NIST competition and was eliminated there due to rank decoding attacks. As a second contribution, we provide two efficient message recovery attacks, affecting every security level of the first three versions of Layered ROLLO-I and security levels 128 and 192 of the fourth version.
Strong Batching for Non-Interactive Statistical Zero-Knowledge
A zero-knowledge proof enables a prover to convince a verifier that $x \in S$, without revealing anything beyond this fact. By running a zero-knowledge proof $k$ times, it is possible to prove (still in zero-knowledge) that $k$ separate instances $x_1,\dots,x_k$ are all in $S$. However, this increases the communication by a factor of $k$. Can one do better? In other words, is (non-trivial) zero-knowledge batch verification for $S$ possible?
Recent works by Kaslasi et al. (TCC 2020, Eurocrypt 2021) show that any problem possessing a non-interactive statistical zero-knowledge proof (NISZK) has a non-trivial statistical zero-knowledge batch verification protocol. Their results had two major limitations: (1) to batch verify $k$ inputs of size $n$ each, the communication in their batch protocol is roughly $\textrm{poly}(n,\log{k})+O(k)$, which is better than the naive cost of $k \cdot \textrm{poly}(n)$ but still scales linearly with $k$, and, (2) the batch protocol requires $\Omega(k)$ rounds of interaction.
In this work, we remove both of these limitations by showing that any problem in NISZK has a non-interactive statistical zero-knowledge batch verification protocol with communication $\textrm{poly}(n,\log{k})$.
On the Untapped Potential of the Quantum FLT-based Inversion
Thus far, several papers have estimated the concrete quantum resources of Shor’s algorithm for solving the binary elliptic curve discrete logarithm problem. In particular, the complexity of computing quantum inversions over a binary field $\mathbb{F}_{2^n}$ dominates the cost of running the algorithm, where $n$ is the degree of the binary elliptic curve. There are two major methods for quantum inversion: quantum GCD-based inversion and quantum FLT-based inversion. The latter method is known to require more qubits; however, it is valuable since it requires much fewer Toffoli gates and less depth. When $n = 571$, Kim-Hong’s quantum GCD-based inversion algorithm (Quantum Information Processing 2023) and Taguchi-Takayasu’s quantum FLT-based inversion algorithm (CT-RSA 2023) require 3,473 qubits and 8,566 qubits, respectively. In contrast, for the same $n = 571$, the latter algorithm requires only 2.3% of the Toffoli gates and 84% of the depth of the former. In this paper, we modify Taguchi-Takayasu’s quantum FLT-based inversion algorithm to reduce the required qubits. While Taguchi-Takayasu’s FLT-based inversion algorithm takes an addition chain for $n-1$ as input and computes a sequence whose length equals that of the chain, our proposed algorithm employs an uncomputation step and stores a shorter one. As a result, our proposed algorithm requires only 3,998 qubits for $n = 571$, which is only 15% more than Kim-Hong’s GCD-based inversion algorithm. Furthermore, our proposed algorithm preserves the advantage of FLT-based inversion since it requires only 3.7% of the Toffoli gates and 77% of the depth of Kim-Hong’s GCD-based inversion algorithm for $n = 571$.
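The classical computation that the quantum circuit implements: by Fermat's little theorem, $a^{-1} = a^{2^n-2}$ in $\mathbb{F}_{2^n}^*$, so inversion reduces to squarings and multiplications, organized via an addition chain for $n-1$ in the FLT-based circuits. A Python sketch over $\mathbb{F}_{2^8}$ (the AES field) using plain square-and-multiply instead of an addition chain:

```python
# FLT-based inversion in a binary field: a^{-1} = a^{2^n - 2}.
def gf_mul(a: int, b: int, poly: int = 0x11B, n: int = 8) -> int:
    """Carry-less multiplication modulo an irreducible polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> n:          # reduce once a overflows degree n
            a ^= poly
    return r

def flt_inverse(a: int, n: int = 8) -> int:
    """Fermat's little theorem: a^{2^n - 2} = a^{-1} in GF(2^n)^*."""
    result, base, e = 1, a, (1 << n) - 2
    while e:                # square-and-multiply on the exponent bits
        if e & 1:
            result = gf_mul(result, base)
        base = gf_mul(base, base)
        e >>= 1
    return result

a = 0x53
assert gf_mul(a, flt_inverse(a)) == 1   # 0x53 * 0xCA = 1 in AES's field
```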
Adaptively Sound Zero-Knowledge SNARKs for UP
We study succinct non-interactive arguments (SNARGs) and succinct non-interactive arguments of knowledge (SNARKs) for the class $\mathsf{UP}$ in the reusable designated verifier model. $\mathsf{UP}$ is an expressive subclass of $\mathsf{NP}$ consisting of all $\mathsf{NP}$ languages where each instance has at most one witness; a designated verifier SNARG (dvSNARG) is one where verification of the SNARG proof requires a private verification key; and such a dvSNARG is reusable if soundness holds even against a malicious prover with oracle access to the (private) verification algorithm.
Our main results are as follows.
(1) A reusably and adaptively sound zero-knowledge (zk) dvSNARG for $\mathsf{UP}$, from subexponential LWE and evasive LWE (a relatively new but popular variant of LWE). Our SNARGs achieve very short proofs of length $(1 + o(1)) \cdot \lambda$ bits for $2^{-\lambda}$ soundness error.
(2) A generic transformation that lifts any ``Sahai-Waters-like'' (zk) SNARG to an adaptively sound (zk) SNARG, in the designated-verifier setting. In particular, this shows that the Sahai-Waters SNARG for $\mathsf{NP}$ is adaptively sound in the designated verifier setting, assuming subexponential hardness of the underlying assumptions. The resulting SNARG proofs have length $(1 + o(1)) \cdot \lambda$ bits for $2^{-\lambda}$ soundness error. Our result sidesteps the Gentry-Wichs barrier for adaptive soundness by employing an exponential-time security reduction.
(3) A generic transformation, building on the work of Campanelli, Ganesh, that lifts any adaptively sound (zk) SNARG for $\mathsf{UP}$ to an adaptively sound (zk) SNARK for $\mathsf{UP}$, while preserving zero-knowledge. The resulting SNARK achieves the strong notion of black-box extraction. There are barriers to achieving such SNARKs for all of $\mathsf{NP}$ from falsifiable assumptions, so our restriction to $\mathsf{UP}$ is, in a sense, necessary.
Applying (3) to our SNARG for $\mathsf{UP}$ from evasive LWE (1), we obtain a reusably and adaptively sound designated-verifier zero-knowledge SNARK for $\mathsf{UP}$ from subexponential LWE and evasive LWE. Moreover, applying both (2) and (3) to the Sahai-Waters SNARG, we obtain the same result from LWE, subexponentially secure one-way functions, and subexponentially secure indistinguishability obfuscation. Both constructions have succinct proofs of size $\mathsf{poly}(\lambda)$. These are the first SNARK constructions (even in the designated-verifier setting) for a non-trivial subset of $\mathsf{NP}$ from (sub-exponentially) falsifiable assumptions.
Attribute-based Keyed (Fully) Homomorphic Encryption
Keyed homomorphic public key encryption (KHPKE) is a variant of homomorphic public key encryption, where only users who have a homomorphic evaluation key can perform a homomorphic evaluation. Then, KHPKE satisfies the CCA2 security against users who do not have a homomorphic evaluation key, while it satisfies the CCA1 security against users who have the key. Thus far, several KHPKE schemes have been proposed under the standard Diffie-Hellman-type assumptions and keyed fully homomorphic encryption (KFHE) schemes have also been proposed from lattices although there are no KFHE schemes secure solely under the LWE assumption in the standard model. As a natural extension, there is an identity-based variant of KHPKE; however, the security is based on a $q$-type assumption and there are no attribute-based variants. Moreover, there are no identity-based variants of KFHE schemes due to the complex design of the known KFHE schemes. In this paper, we provide two constructions of attribute-based variants. First, we propose an attribute-based KFHE (ABKFHE) scheme from lattices. We start by designing the first KFHE scheme secure solely under the LWE assumption in the standard model. Since the design is conceptually much simpler than known KFHE schemes, we replace their building blocks with attribute-based ones and obtain the proposed ABKFHE schemes. Next, we propose an efficient attribute-based KHPKE (ABKHE) scheme from a pair encoding scheme (PES). Due to the benefit of PES, we obtain various ABKHE schemes that contain the first identity-based KHPKE scheme secure under the standard $k$-linear assumption and the first pairing-based ABKHE schemes supporting more expressive predicates.
Universal Computational Extractors and Multi-Bit AIPO from Lattice Assumptions
We put forth a new primitive called obliviously programmable function (OPF) to construct two random-oracle-like primitives:
• Universal computational extractors (UCEs), introduced by Bellare, Hoang, and Keelveedhi [BHK13], can securely replace random oracles in various applications, including KDM-secure encryption, deterministic encryption, RSA-OAEP, universal hardcore bits, etc.
• Multi-bit point obfuscation with auxiliary input (MB-AIPO). It enables upgrading CPA-secure public-key encryption (PKE) into a CCA-secure one [MH14] and serves as a tool to instantiate the random oracles used in the Fujisaki-Okamoto transform for lossy PKEs [MOZ23].
Despite their usefulness, constructing UCEs and MB-AIPO in the standard model is challenging. The existing constructions of both primitives [BM14a, BM14b] use indistinguishability obfuscation (iO) plus point functions with auxiliary input (AIPO).
OPF can replace the use of iO in the constructions of UCE and MB-AIPO. We use OPF plus AIPO to construct
• UCE with one query for strongly unpredictable sources,
• MB-AIPO for strongly unpredictable distributions and
• PKE scheme that is IND-CPA secure in the presence of computationally uninvertible leakage on the secret key.
We then construct OPF for NC1 circuits from lattice assumptions based on the GGH15 encodings [GGH15], without using iO. In sum, we give new constructions of the above three primitives under the following assumptions: (1) LWE with subexponential hardness; (2) the private-coin evasive LWE assumption for specific samplers; (3) the existence of AIPO in NC1. As a byproduct, we construct an ‘NC1-universal AIPO’ under assumptions (1) and (2).
Amplification of Non-Interactive Zero Knowledge, Revisited
In an $(\alpha,\beta)$-weak non-interactive zero knowledge (NIZK), the soundness error is at most $\alpha$ and the zero-knowledge error is at most $\beta$. Goyal, Jain, and Sahai (CRYPTO 2019) show that if $\alpha+\beta<1$ for some constants $\alpha,\beta$, then $(\alpha,\beta)$-weak NIZK can be turned into fully-secure NIZK, assuming sub-exponentially-secure public-key encryption.
We revisit the problem of NIZK amplification:
– We amplify NIZK arguments assuming only polynomially-secure public-key encryption, for any constants $\alpha+\beta<1$.
– We amplify NIZK proofs assuming only one-way functions, for any constants $\alpha+\beta<1$.
– When the soundness error $\alpha$ is negligible to begin with, we can also amplify NIZK arguments assuming only one-way functions.
Our results are based on the hidden-bits paradigm, and can be viewed as a reduction from NIZK amplification to the better understood problem of pseudorandomness amplification.
Game-Theoretically Fair Distributed Sampling
Cleve's celebrated result (STOC'86) showed that a strongly fair multi-party coin-toss is impossible in the presence of majority-sized coalitions. Recently, however, a fascinating line of work studied a relaxed fairness notion called \emph{game-theoretic fairness}, which guarantees that no coalition should be incentivized to deviate from the prescribed protocol.
A sequence of works has explored the feasibility of game-theoretic fairness for \emph{two-sided} coin-toss, and indeed demonstrated feasibility in the dishonest majority setting under standard cryptographic assumptions. In fact, the recent work of Wu, Asharov, and Shi (EUROCRYPT'22) completely characterized the regime where game-theoretic fairness is feasible. However, this line of work is largely restricted to two-sided coin-toss, and more precisely on a \emph{uniform} coin-toss (i.e., Bernoulli with parameter $1/2$). The only exceptions are the works on game-theoretically fair leader election, which can be viewed as a special case of uniform $n$-sided coin-toss where $n$ is the number of parties.
In this work, we \emph{initiate} the comprehensive study of game-theoretic fairness for multi-party \emph{sampling from general distributions}. In particular, for the case of $m$-sided \emph{uniform} coin-toss we give a nearly complete characterization of the regime in which game-theoretic fairness is feasible. Interestingly, contrary to standard fairness notions in cryptography, the composition of game-theoretically fair two-sided coin-toss protocols does not necessarily yield game-theoretically fair multi-sided coins. To circumvent this, we introduce new techniques compatible with game-theoretic fairness.
In particular, we give the following results:
- We give a protocol from standard cryptographic assumptions that achieves game-theoretic fairness for uniform $m$-sided coin-toss against adversarial coalitions of half the parties or more.
- To complement our protocol, we give a general impossibility result that establishes the optimality of our protocol for a broad range of parameters modulo an additive constant. Even in the worst-case, the gap between our protocol and our impossibility result is only a small constant multiplicative factor.
- We also present a game-theoretically fair protocol for \emph{any} efficiently sampleable $m$-outcome distribution in the dishonest majority setting. For instance, even for the case of $m=2$ (i.e., two-sided coin-toss), our result implies a game-theoretically fair protocol for an \emph{arbitrary} Bernoulli coin. In contrast, the work of Wu, Asharov, and Shi only focused on a Bernoulli coin with parameter $1/2$.
Reducing the Number of Qubits in Quantum Factoring
This paper focuses on the optimization of the number of logical qubits in quantum algorithms for factoring and computing discrete logarithms in $\mathbb{Z}_N^*$. These algorithms contain an exponentiation circuit modulo $N$, which is responsible for most of their cost, both in qubits and operations.
In this paper, we show that using only $o(\log N)$ work qubits, one can obtain the least significant bits of the modular exponentiation output. We combine this result with May and Schlieper's truncation technique (ToSC 2022) and the Ekerå-Håstad variant of Shor's algorithm (PQCrypto 2017) to solve the discrete logarithm problem in $\mathbb{Z}_N^*$ using only $d + o(\log N)$ qubits, where $d$ is the bit-size of the logarithm. Consequently we can factor $n$-bit RSA moduli using $n/2 + o(n)$ qubits, while current envisioned implementations require about $2n$ qubits.
Our algorithm uses a Residue Number System and succeeds with a parametrizable probability. Being completely classical, it has been implemented and tested. For RSA factorization, we can reach a gate count $\mathcal{O}(n^3)$ for a depth $\mathcal{O}(n^2 \log^3 n)$, which then has to be multiplied by $\mathcal{O}(\log n)$ (the number of measurement results required by Ekerå-Håstad). To factor an RSA-2048 instance, we estimate that 1730 logical qubits and $2^{36}$ Toffoli gates will suffice for a single run, and the algorithm needs on average 40 runs. To solve a discrete logarithm instance of 224 bits (112-bit classical security) in a safe-prime group of 2048 bits, we estimate that 684 logical qubits would suffice, with 20 runs of $2^{32}$ Toffoli gates each.
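For intuition on the Residue Number System at the heart of the algorithm: an integer is represented by its residues modulo pairwise-coprime moduli, arithmetic is performed independently on each small channel, and the Chinese Remainder Theorem reconstructs the result. A toy Python sketch with tiny, purely illustrative moduli:

```python
# Residue number system sketch: represent an integer by its residues
# modulo pairwise-coprime moduli and recover it via the CRT.
from math import prod

moduli = [13, 17, 19, 23]              # pairwise coprime (toy sizes)
M = prod(moduli)

def to_rns(x):
    return [x % m for m in moduli]

def from_rns(residues):
    # Chinese Remainder Theorem reconstruction.
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m): inverse mod m
    return x % M

x, y = 1234, 5678
# Arithmetic proceeds independently on each small residue channel:
prod_rns = [(a * b) % m for a, b, m in zip(to_rns(x), to_rns(y), moduli)]
assert from_rns(prod_rns) == (x * y) % M
```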
Mastic: Private Weighted Heavy-Hitters and Attribute-Based Metrics
Insight into user experience and behavior is critical to the success of large software systems and web services. Gaining such insights, while preserving user privacy, is a significant challenge. Recent advancements in multi-party computation have made it practical to securely compute aggregates over secret shared data. Two such protocols have emerged as candidates for standardization at the IETF: Prio (NSDI 2017) for general-purpose statistics; and Poplar (IEEE S&P 2021) for heavy hitters, where the goal is to compute the most popular inputs held by users without learning the inputs themselves. While each of these protocols is well-suited to certain applications, there remain a number of use cases identified by IETF for which neither Prio nor Poplar is practical.
We introduce Mastic, a protocol for the following functionality: each of a large number of clients holds an input (e.g., a URL) and its corresponding weight (e.g., page load time); for a given candidate input (or prefix), a small number of non-colluding servers wish to securely aggregate the weights of clients that hold that input (or some input with that prefix), without learning the weights or which client holds which input. This functionality makes two new classes of applications possible. The first is a natural generalization of heavy hitters we call weighted heavy-hitters. The second is an enhancement of Prio-style metrics we call attribute-based metrics in which aggregates are grouped by hierarchical user attributes (e.g., their geographic location or software version). We demonstrate Mastic's practicality for these applications with a real-world example of each. We also compare our protocol with Prio and Poplar on a wide area network. Overall, we report over one order of magnitude performance improvement over Poplar for plain heavy-hitters and $1.5-2\times$ improvement over Prio for attribute-based metrics.
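For orientation, the following is a cleartext reference of the weighted heavy-hitters functionality, i.e., what the servers compute jointly over secret-shared reports in Mastic; the function names and threshold are illustrative, and none of the privacy machinery appears here.

```python
def prefix_weight(reports, prefix):
    """Cleartext reference: total weight of clients whose input starts
    with `prefix`. Mastic computes this over secret-shared reports, so
    the servers never see individual inputs or weights."""
    return sum(w for inp, w in reports if inp.startswith(prefix))

def weighted_heavy_hitters(reports, alphabet, max_len, threshold):
    """Walk the prefix tree level by level, keeping only prefixes whose
    aggregate weight reaches the threshold (a Poplar-style traversal)."""
    hot = [""]
    for _ in range(max_len):
        hot = [p + c for p in hot for c in alphabet
               if prefix_weight(reports, p + c) >= threshold]
    return hot

# Each client holds an input (e.g., a URL) and a weight (e.g., load time ms).
reports = [("ab", 120), ("ab", 340), ("ba", 90)]
print(weighted_heavy_hitters(reports, "ab", 2, 200))   # ['ab']
```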
Security of Symmetric Ratchets and Key Chains - Implications for Protocols like TLS 1.3, Signal, and PQ3
Symmetric ratchets and one-way key chains play a vital role in numerous important security protocols such as TLS 1.3, DTLS 1.3, QUIC, Signal, MLS, EDHOC, OSCORE, and Apple PQ3. Despite this crucial role, very little is known about their security properties. This paper categorizes and examines different ratchet constructions, offering a comprehensive overview of their security. Our analysis reveals notable distinctions between different types of one-way key chains. Notably, the type of ratchet used by TLS 1.3, Signal, and PQ3 exhibits a significant number of weak keys, an unexpectedly high rate of key collisions surpassing birthday attack expectations, and a predictable shrinking key space susceptible to novel Time-Memory Trade-Off (TMTO) attacks with complexity $\approx N^{1/4}$. Consequently, the security level provided by, e.g., TLS 1.3 is significantly lower than anticipated. To address these concerns, we analyze the aforementioned protocols and provide numerous concrete recommendations for enhancing their security, as well as guidance for future security protocol design.
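For readers unfamiliar with the object of study, a symmetric ratchet in its simplest form iterates a one-way function on a chain key; iterating a fixed function on an $n$-bit state steadily shrinks the reachable key space, which is the root of the collision and TMTO issues described above. A minimal hash-based sketch follows (illustrative; the analyzed protocols derive their keys differently).

```python
import hashlib

def ratchet(key: bytes) -> bytes:
    """One step of a symmetric ratchet: derive the next chain key.
    The label domain-separates this call from other uses of the hash."""
    return hashlib.sha256(b"chain-key" + key).digest()

# Walking the chain forward is cheap; going backward needs a preimage.
k = bytes(32)  # demo seed only; use a uniformly random secret in practice
for _ in range(5):
    k = ratchet(k)
```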
Singular points of UOV and VOX
In this work, we study the singular locus of the varieties defined by the public keys of UOV and VOX, two multivariate signature schemes submitted to the additional NIST call for post-quantum signature schemes.
We give a new attack for UOV$\hat +$ and VOX targeting singular points of the underlying UOV key.
Our attack lowers the security of the schemes, both asymptotically and in the number of gates, showing in particular that the parameter sets proposed for these schemes do not meet the NIST security requirements.
More precisely, we show that the security of VOX/UOV$\hat +$ was overestimated by factors of $2^{2}, 2^{18}, 2^{37}$ for security levels I, III, and V, respectively.
As an essential element of the attack on VOX, we introduce a polynomial-time algorithm that performs key recovery from a single vector, with an implementation requiring only $15$ seconds at security level V.
Lightweight Leakage-Resilient PRNG from TBCs using Superposition
In this paper, we propose a leakage-resilient pseudo-random number generator (PRNG) design that leverages the rekeying techniques of the PSV-Enc encryption scheme and the superposition property of the Superposition-Tweak-Key (STK) framework. The random seed of the PRNG is divided into two parts: one part is used as an ephemeral key that changes every two calls to a tweakable block cipher (TBC), and the other part is used as a static long-term key. Using the superposition property, we show that it is possible to eliminate observable leakage by masking only the static key. Thus, our proposal itself can be seen as a superposition of masking and rekeying. We show that our observations can be used to design an unpredictable-with-leakage PRNG as long as the static key is protected and the ephemeral key cannot be attacked with 2 traces. Our construction enjoys better theoretical security arguments than PSV-Enc: a better time-data trade-off and weaker leakage assumptions, using the recently popularized notion of unpredictability with leakage. We verify our proposal by performing Test Vector Leakage Assessment (TVLA) on an STK-based TBC (Deoxys) operated with a fixed key and a dynamic random tweak. Our results show that while the protection of the static key is non-trivial, it requires only $\approx 10\%$ overhead for first-order protection in the most conservative setting, unlike traditional masking, which may require significant overheads of $300\%$ or more.
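The split-seed structure can be sketched as follows, with a hash standing in for the STK-based TBC. This is illustrative only: the class and function names are ours, and the real design's tweak schedule and masking are not modeled.

```python
import hashlib

def prf(key: bytes, tweak: bytes, block: bytes) -> bytes:
    """Hash-based stand-in for a tweakable block cipher call (the paper
    uses an STK-based TBC such as Deoxys, not a hash)."""
    return hashlib.sha256(key + tweak + block).digest()

class RekeyingPRNG:
    """Sketch of the split seed: a protected static key plus an
    ephemeral key refreshed every two TBC calls, as in the abstract."""
    def __init__(self, seed: bytes):
        self.static_key = seed[:16]     # long-term part: must be masked
        self.ephemeral = seed[16:32]    # short-lived part: used twice
        self.counter = 0

    def next_block(self) -> bytes:
        out = prf(self.ephemeral, self.counter.to_bytes(8, "big"), b"out")
        self.counter += 1
        if self.counter % 2 == 0:       # rekey after two calls
            self.ephemeral = prf(self.static_key,
                                 self.counter.to_bytes(8, "big"),
                                 b"rekey")[:16]
        return out

prng = RekeyingPRNG(bytes(32))          # demo seed only
print(prng.next_block().hex())
```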
Hardware Acceleration of the Prime-Factor and Rader NTT for BGV Fully Homomorphic Encryption
Fully Homomorphic Encryption (FHE) enables computation on encrypted data, holding immense potential for enhancing data privacy and security in various applications. Presently, FHE adoption is hindered by slow computation times, caused by data being encrypted into large polynomials. Optimized FHE libraries and hardware acceleration are emerging to tackle this performance bottleneck. Often, these libraries implement the Number Theoretic Transform (NTT) algorithm for efficient polynomial multiplication. Existing implementations mostly focus on the case where the polynomials are defined over a power-of-two cyclotomic ring, which allows the use of the simpler Cooley-Tukey NTT. However, generalized cyclotomics have several benefits in the BGV FHE scheme, including more SIMD plaintext slots and a simpler bootstrapping algorithm.
We present a hardware architecture for the NTT targeting generalized cyclotomics within the context of the BGV FHE scheme. We explore different non-power-of-two NTT algorithms, including the Prime-Factor, Rader, and Bluestein NTTs. Our most efficient architecture targets the 21845-th cyclotomic polynomial --- a practical parameter for BGV --- with ideal properties for use with a combination of the Prime-Factor and Rader algorithms. The design achieves high throughput with optimized resource utilization, by leveraging parallel processing, pipelining, and reusing processing elements. Compared to Wu et al.'s VLSI architecture of the Bluestein NTT, our approach showcases 2$\times$ to 5$\times$ improved throughput and area efficiency. Simulation and implementation results on an AMD Alveo U250 FPGA demonstrate the feasibility of the proposed hardware design for FHE.
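For intuition on the Prime-Factor (Good-Thomas) decomposition the architecture exploits, here is a minimal software sketch over toy parameters (length $15 = 3 \times 5$ over $\mathbb{F}_{31}$, far from the 21845-th cyclotomic setting): coprime factor lengths let CRT index maps split one NTT into row and column NTTs with no twiddle multiplications between stages.

```python
from math import gcd

def ntt_naive(x, omega, q):
    """Reference NTT: X[k] = sum_n x[n] * omega^(n*k) mod q."""
    n = len(x)
    return [sum(x[i] * pow(omega, i * k, q) for i in range(n)) % q
            for k in range(n)]

def ntt_prime_factor(x, omega, q, n1, n2):
    """Good-Thomas NTT for length n1*n2 with gcd(n1, n2) = 1: CRT index
    maps split the transform into two smaller NTTs with no twiddles."""
    assert gcd(n1, n2) == 1
    n = n1 * n2
    w1, w2 = pow(omega, n2, q), pow(omega, n1, q)  # roots of order n1, n2
    # Input mapping: index (i1, i2) corresponds to (i1*n2 + i2*n1) mod n.
    rows = [[x[(i1 * n2 + i2 * n1) % n] for i2 in range(n2)]
            for i1 in range(n1)]
    rows = [ntt_naive(r, w2, q) for r in rows]              # n2-point NTTs
    cols = [ntt_naive([rows[i1][k2] for i1 in range(n1)], w1, q)
            for k2 in range(n2)]                            # n1-point NTTs
    # Output mapping by CRT: k is determined by (k mod n1, k mod n2).
    return [cols[k % n2][k % n1] for k in range(n)]

# Toy parameters: length 15 = 3*5 over GF(31); omega = 9 has order 15.
x = list(range(15))
assert ntt_prime_factor(x, 9, 31, 3, 5) == ntt_naive(x, 9, 31)
```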
Rate-1 Fully Local Somewhere Extractable Hashing from DDH
Somewhere statistically binding (SSB) hashing allows us to sample a special hashing key such that the digest statistically binds the input at $m$ secret locations. This hash function is said to be somewhere extractable (SE) if there is an additional trapdoor that allows the extraction of the input bits at the $m$ locations from the digest.
Devadas, Goyal, Kalai, and Vaikuntanathan (FOCS 2022) introduced a variant of somewhere extractable hashing called rate-1 fully local SE hash functions. The rate-1 requirement states that the size of the digest is $m + \mathsf{poly}(\lambda)$ (where $\lambda$ is the security parameter). The fully local property requires that for any index $i$, there is a "very short" opening showing that the $i$-th bit of the hashed input is equal to $b$ for some $b \in \{0,1\}$. The size of this opening is required to be independent of $m$; in particular, this means that it is independent of the size of the digest. Devadas et al. gave such a construction from Learning with Errors (LWE).
In this work, we give a construction of a rate-1 fully local somewhere extractable hash function from Decisional Diffie-Hellman (DDH) and BARGs. Under the same assumptions, we give constructions of rate-1 BARG and RAM SNARG with partial input soundness whose proof sizes are only matched by prior constructions based on LWE.
Batch PIR and Labeled PSI with Oblivious Ciphertext Compression
In this paper, we study two problems: oblivious compression and decompression of ciphertexts. In oblivious compression, a server holds a set of ciphertexts containing a subset of encryptions of zero whose positions are known only to the client. The goal is for the server to compress the ciphertexts obliviously, preserving the non-zero plaintexts without learning any plaintext values. In oblivious decompression, the client instead succinctly encodes a sequence of plaintexts such that the server may decode encryptions of all plaintext values, except that the zeroes may be replaced with arbitrary values. We present solutions to both problems that achieve lossless compression only 5% larger than the optimal minimum, using only additive homomorphism. The crux of both algorithms is embedding ciphertexts into random linear systems that are efficiently solvable.
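The linear-algebra core can be illustrated in the clear. The sketch below makes simplifying assumptions: it compresses a plaintext vector to exactly the support size rather than the 5%-overhead regime, and it omits all homomorphic operations (in the actual scheme the linear combinations are taken over ciphertexts using additive homomorphism).

```python
import random

P = 2**31 - 1  # illustrative prime modulus

def solve_mod_p(A, b, p=P):
    """Gaussian elimination mod p; A is square and assumed invertible."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c] % p)
        M[c], M[piv] = M[piv], M[c]
        inv = pow(M[c][c], -1, p)
        M[c] = [v * inv % p for v in M[c]]
        for r in range(n):
            if r != c and M[r][c]:
                M[r] = [(vr - M[r][c] * vc) % p
                        for vr, vc in zip(M[r], M[c])]
    return [row[n] for row in M]

# x is nonzero only on a support S known to the client. The server sends
# |S| random linear combinations; the client solves for x restricted to S.
n, S = 10, [2, 5, 7]
x = [0] * n
for i, v in zip(S, [111, 222, 333]):
    x[i] = v
rng = random.Random(0)
A = [[rng.randrange(P) for _ in range(n)] for _ in range(len(S))]
y = [sum(a * xi for a, xi in zip(row, x)) % P for row in A]  # compressed
recovered = solve_mod_p([[row[i] for i in S] for row in A], y)
assert recovered == [x[i] for i in S]
```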
Using our compression schemes, we obtain state-of-the-art schemes for batch private information retrieval (PIR) where a client wishes to privately retrieve multiple entries from a server-held database in one query. We show that our compression schemes may be used to reduce communication by up to 30% for batch PIR in both the single- and two-server settings.
Additionally, we study labeled private set intersection (PSI) in the unbalanced setting, where one party's set is significantly smaller than the other's and each entry has associated data. By utilizing our novel compression algorithm, we present a protocol achieving a 65-88% reduction in communication with computation comparable to prior works.
Distributed Fiat-Shamir Transform: from Threshold Identification Protocols to Signatures
The recent surge of distributed technologies has caused increasing interest in threshold signature protocols, culminating in the recent NIST First Call for Multi-Party Threshold Schemes.
Since its introduction, the Fiat-Shamir Transform has been the most popular way to design standard digital signature schemes.
Many threshold signature schemes are designed in a way that recalls the structure of digital signatures created via Fiat-Shamir: the signers generate a common commitment, compute the challenge as its hash, and then jointly create the response.
In this work we formalize this approach. In particular, we introduce the notions of threshold identification scheme and threshold sigma protocol. Next, we introduce the concept of a generalized Fiat-Shamir transform, which links the security of the threshold signature to the underlying threshold identification protocol. Our framework seeks to be an alternative, easier way to design concurrently secure threshold digital signatures, and we demonstrate its potential by providing an alternative security proof for Sparkle, a recent threshold Schnorr signature, and GRASS, a full threshold signature based on cryptographic group actions.
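As a reference point, the classical single-signer Fiat-Shamir transform applied to the Schnorr sigma protocol looks as follows; in the threshold setting formalized in this work, the commitment and response are produced jointly while the challenge is still the hash of the common commitment. The group parameters below are toy values, not secure sizes.

```python
import hashlib, secrets

# Toy Schnorr group (illustrative only): subgroup of order q=11 in Z_23^*.
p, q, g = 23, 11, 2

def H(*parts) -> int:
    """Challenge = hash of the commitment (and message): Fiat-Shamir."""
    h = hashlib.sha256(b"|".join(str(x).encode() for x in parts)).digest()
    return int.from_bytes(h, "big") % q

def keygen():
    x = secrets.randbelow(q)          # secret key
    return x, pow(g, x, p)            # public key X = g^x

def sign(x, msg):
    r = secrets.randbelow(q)          # sigma-protocol randomness
    R = pow(g, r, p)                  # commitment
    c = H(R, msg)                     # challenge, via Fiat-Shamir
    z = (r + c * x) % q               # response
    return R, z

def verify(X, msg, sig):
    R, z = sig
    return pow(g, z, p) == R * pow(X, H(R, msg), p) % p

x, X = keygen()
assert verify(X, "hello", sign(x, "hello"))
```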
A Note on Adversarial Online Complexity in Security Proofs of Duplex-Based Authenticated Encryption Modes
This note examines a nuance in the methods employed for counting the adversarial online complexity in the security proofs of duplex-based modes, with a focus on authenticated encryption. A recent study by Gilbert et al. reveals an attack on a broad class of duplex-based authenticated encryption modes. In particular, their approach to quantifying the adversarial online complexity, which captures realistic attack scenarios, includes certain queries in the count that the security proofs do not. This note analyzes these differences and concludes that the attack of Gilbert et al., for certain parameter choices, matches the security bound.
Analysis of a Programmable Quantum Annealer as a Random Number Generator
Quantum devices offer a highly useful capability: generating random numbers non-deterministically, since the measurement of a quantum state is not deterministic. This means that quantum devices can be constructed that prepare qubits in a uniform superposition and then measure their state. If the preparation of the qubits in a uniform superposition is unbiased, then quantum computers can be used to create high-entropy, secure random numbers. Typically, preparing and measuring such quantum systems requires more time than classical pseudo-random number generators (PRNGs), which are inherently deterministic algorithms. Therefore, the typical use of quantum random number generators (QRNGs) is to provide high-entropy secure seeds for PRNGs. Quantum annealing (QA) is a type of analog quantum computation that is a relaxed form of adiabatic quantum computation and uses quantum fluctuations to search for ground-state solutions of a programmable Ising model. Here we present extensive experimental random number results from a D-Wave 2000Q quantum annealer, totaling over 20 billion bits of QA measurements, which is significantly more than previous D-Wave QA random number generator studies. Current quantum annealers are susceptible to noise from environmental sources and calibration errors, and are not in general unbiased samplers. Therefore, it is of interest to quantify whether noisy quantum annealers can effectively function as unbiased QRNGs. The amount of data collected from the quantum annealer allows a comprehensive analysis of the random bits using the NIST SP 800-22 Rev 1a test suite, as well as min-entropy estimates from NIST SP 800-90B. The randomness tests show that the generated random bits from the D-Wave 2000Q are biased and not unpredictable random bit sequences. With no server-side sampling post-processing, the $1$-microsecond annealing time measurements had a min-entropy of $0.824$.
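As a rough illustration of the reported figure, the simplest of the SP 800-90B estimators, the most-common-value estimate, can be computed as below. This is a naive sketch: the standard adds a confidence-interval correction and a battery of further estimators.

```python
from collections import Counter
from math import log2

def min_entropy_per_bit(bits: str, block: int = 8) -> float:
    """Naive most-common-value min-entropy estimate over `block`-bit
    symbols, in bits of entropy per output bit. NIST SP 800-90B's MCV
    estimator additionally applies a confidence-interval adjustment."""
    symbols = [bits[i:i + block]
               for i in range(0, len(bits) - block + 1, block)]
    p_max = Counter(symbols).most_common(1)[0][1] / len(symbols)
    return -log2(p_max) / block

# A periodic stream has low min-entropy despite looking "balanced":
print(min_entropy_per_bit("0110100101011010" * 1000))
```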
INSPECT: Investigating Supply Chain and Cyber-Physical Security of Battery Systems
Battery-operated applications are ubiquitous worldwide, ranging from power-intensive electric cars down to low-power smart terminals and embedded devices. Meanwhile, serious incidents around batteries such as swelling, fire, and explosion have been witnessed, resulting in enormous financial losses and even loss of life. Such aftermaths used to be attributed to unintentional design mistakes or insufficient quality inspection by the original battery manufacturers. However, this attribution is no longer fair given the convoluted battery supply chain and the extended cyber-physical attack surface of battery management systems (BMS). In this paper, we focus on the authenticity and assurance of prevalent (Li-ion) battery instances. We examine battery authenticity by modeling the contemporary battery supply chain and discussing practical concerns such as rewrapping and recycling in depth at each stage. As for battery assurance, we consider emerging attack vectors that can compromise the confidentiality, integrity, and availability of the microelectronic BMS. Real-world attack examples are highlighted to reflect the capabilities of advanced adversaries. Moreover, promising countermeasures for detecting and avoiding threats to both battery authenticity and assurance are presented, so that researchers gain insight into how the problem could be addressed or alleviated. We also provide our perspectives on the vulnerabilities of battery systems and their consequent impacts, as well as our view on potential countermeasure techniques.
Rollerblade: Replicated Distributed Protocol Emulation on Top of Ledgers
We observe that most fixed-party distributed protocols can be rewritten by replacing a party with a ledger (such as a blockchain system) and the authenticated channel communication between parties with cross-chain relayers. This transform is useful because blockchain systems are always online and have battle-tested security assumptions. We provide a definitional framework that captures this analogy. We model the transform formally, and posit and prove a generic metatheorem that allows translating all theorems from the party setting into theorems in the emulated setting, while preserving the analogy between party honesty and ledger security. At the heart of our proof lies a reduction-based simulation argument. As an example, our metatheorem can be used to construct a consensus protocol on top of other blockchains, creating a reliable rollup that assumes only that the majority of the underlying layer-1s are secure.
General Adversary Structures in Byzantine Agreement and Multi-Party Computation with Active and Omission Corruption
Typical results in multi-party computation (in short, MPC) capture faulty parties by assuming a threshold adversary that actively corrupts and/or fail-corrupts parties. These corruption types are, however, inadequate for capturing correct parties that might suffer temporary network failures and/or localized faults - these are particularly relevant for MPC over large, global-scale networks. Omission faults and general adversary structures have been proposed as more suitable alternatives. However, to date, there is no characterization of the feasibility landscape that combines these variations of fault types and patterns.
In this work we provide a tight characterization of the feasibility of MPC in the presence of general adversaries - characterized by an adversary structure - that combine omission and active corruption. To this end, we first provide a tight characterization of feasibility for Byzantine agreement (BA), a key tool in MPC protocols; this BA result may be of independent interest.
Subsequently, we demonstrate that the common techniques employed in the threshold MPC literature to deal with omission corruptions do not work in the general adversary setting, not even for proving bounds that would appear straightforward, e.g., sufficiency of the well-known $Q^3$ condition on omission-only general adversaries. Nevertheless, we provide a new protocol that implements general adversary MPC under a surprisingly complex bound, which we prove tight. All our results are for the classical synchronous model of computation.
As a contribution of independent interest, our work puts forth, for the first time, a formal treatment of general-adversary MPC with (active and) omission corruptions in Canetti's universal composition framework.
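For concreteness, the $Q^3$ condition mentioned above is an instance of the standard $Q^k$ predicate on general adversary structures (due to Hirt and Maurer), checkable as below; this is a minimal sketch, and the abstract's point is that even seemingly straightforward bounds such as sufficiency of $Q^3$ resist the usual proof techniques in this setting.

```python
from itertools import combinations

def satisfies_Qk(parties: set, adversary_structure: list, k: int) -> bool:
    """Check the Q^k condition: no k sets of the adversary structure
    together cover the whole party set."""
    return all(set().union(*sets) != parties
               for sets in combinations(adversary_structure, k))

parties = {1, 2, 3, 4, 5}
structure = [{1, 2}, {3}, {4, 5}]            # maximal corruptible sets
print(satisfies_Qk(parties, structure, 3))   # False: three sets cover all
print(satisfies_Qk(parties, structure, 2))   # True: no two sets cover all
```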
Asymmetric Cryptography from Number Theoretic Transformations
In this work, we introduce a family of asymmetric cryptographic functions based on dynamic number theoretic transformations with multiple rounds of modular arithmetic to enhance diffusion and the difficulty of inversion. This function acts as a basic cryptographic building block for a novel communication-efficient zero-knowledge cryptosystem. The system as defined exhibits partial homomorphism and behaves as an additive positive accumulator. By using a novel technique to constructively embed lattice problems in a nested fashion, the dimensionality and overall complexity of the lattice structure are increased.
This linked-lattice framework obscures internal structure and mitigates cryptanalysis by applying a novel 'noisy roots' technique. By relaxing the need for exact $n$-th roots of unity $\omega$ in a given field, we apply offset values to create a framework consisting of a set of uniquely transforming but arithmetically compatible NTTs. We provide specific parameters for conjectured NIST level V security. Communication costs are extremely low, at 288 bytes per public key and 144 bytes per ciphertext or digital signature. Example protocols for key agreement, secure data exchange, additive accumulation, and digital signatures are provided.
Peer review is in preliminary stages at the time of dissemination. The claims within have not undergone rigorous validation and likely contain inaccuracies, errors, flaws, or incomplete analysis. Contents may see significant modification through later iterations.
NIZKs with Maliciously Chosen CRS: Subversion Advice-ZK and Accountable Soundness
Trusted setup is commonly used for non-interactive proof and argument systems. However, there is no guarantee that the setup parameters in these systems are generated in a trustworthy manner. Building upon previous works, we conduct a systematic study of non-interactive zero-knowledge arguments in the common reference string model where the authority running the trusted setup might be corrupted. We explore both zero-knowledge and soundness properties in this setting.
- We consider a new notion of NIZK called subversion advice-ZK NIZK that strengthens the notion of zero-knowledge with malicious authority security considered by Ananth, Asharov, Dahari and Goyal (EUROCRYPT'21), and present a construction of a subversion advice-ZK NIZK from the sub-exponential hardness of learning with errors.
- We introduce a new notion that strengthens the traditional definition of soundness, called accountable soundness, and present generic compilers that lift any NIZK for interesting languages in NP to additionally achieve accountable soundness.
- Finally, we combine our results for both subversion advice-ZK and accountable soundness to achieve a subversion advice-ZK NIZK that also satisfies accountable soundness. This results in the first NIZK construction that satisfies meaningful notions of both soundness and zero-knowledge even for maliciously chosen CRS.
Kronos: A Secure and Generic Sharding Blockchain Consensus with Optimized Overhead
Sharding enhances blockchain scalability by dividing the network into shards, each managing specific unspent transaction outputs or accounts. As a newly introduced transaction type, cross-shard transactions pose a critical challenge to the security and efficiency of sharding blockchains. Currently, there is no generic sharding blockchain consensus pattern that achieves both security and low overhead.
In this paper, we present Kronos, a secure sharding blockchain consensus achieving optimized overhead. In particular, we propose a new secure sharding blockchain consensus pattern based on a buffer managed jointly by shard members. Valid transactions are transferred to the payee via the buffer, while invalid ones are rejected through happy or unhappy paths. Kronos is proved to achieve security with atomicity under malicious clients while maintaining optimal intra-shard overhead. Efficient rejection requires no Byzantine fault tolerance (BFT) protocol execution in the happy path, and the cost in the unhappy path is still no higher than that of a two-phase commit. Besides, we propose secure cross-shard certification methods. Handling $b$ transactions, Kronos is proved to achieve cross-shard communication with low overhead $O(nb\lambda)$ (where $n$ is the shard size and $\lambda$ the security parameter). Notably, Kronos imposes no restrictions on the BFT protocol and does not rely on timing assumptions, offering optional constructions in various modules. Kronos could serve as a universal framework for enhancing the performance and scalability of existing BFT protocols. Kronos supports generic models, including asynchronous networks, and can increase throughput by several orders of magnitude.
We implement Kronos using two prominent BFT protocols: asynchronous Speeding Dumbo (NDSS'22) and partially synchronous HotStuff (PODC'19). Extensive experiments (over up to 1000 AWS EC2 nodes across 4 AWS regions) demonstrate that Kronos scales the consensus nodes to thousands, achieving a substantial throughput of 320 ktx/sec with 2.0 sec latency. Kronos outperforms past solutions, achieving up to a 12$\times$ improvement in throughput and a 50% reduction in latency when cross-shard transactions dominate the workload.
A Generalized Distributed RSA Key Generation
In this paper, we propose a novel bi-primality test that determines whether $N=pq$ is the product of two primes for any RSA modulus, relaxing the restriction $p \equiv q \equiv 3 \pmod{4}$ assumed in most current bi-primality tests. Our bi-primality test generalizes the Lucas primality test to the bi-prime case. It always accepts when $p$ and $q$ are both prime, and otherwise accepts with probability at most $1/2$. In addition, we prove that the Boneh-Franklin bi-primality test accepts a composite with probability at most $1/4$ instead of $1/2$ if one adds the additional condition $\gcd(N, p+q-1)=1$. Moreover, we design a multiparty protocol secure against static semi-honest adversaries in the hybrid model and provide a security proof. We implemented the proposed protocol and ran it in a single thread on a laptop, with an average execution time of 224 seconds for $N$ around $2048$ bits.
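For comparison, one round of the Boneh-Franklin test that this work generalizes can be sketched as follows. This is illustrative: the exponent is computed in the clear here, whereas the distributed protocol derives it from additive shares of $p$ and $q$, and the toy primes are nowhere near RSA sizes.

```python
import random

def jacobi(a: int, n: int) -> int:
    """Jacobi symbol (a/n) for odd n > 0 (standard iterative algorithm)."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def bf_biprimality_round(N: int, p: int, q: int) -> bool:
    """One round of the Boneh-Franklin bi-primality test (requires
    p = q = 3 mod 4, the restriction this paper's test removes). In the
    distributed protocol the exponent (N - p - q + 1)/4 is computed from
    shares of p and q; here it is in the clear for illustration."""
    g = random.randrange(2, N)
    if jacobi(g, N) != 1:
        return True                   # round gives no information; resample
    return pow(g, (N - p - q + 1) // 4, N) in (1, N - 1)

p, q = 19, 23                         # toy primes, both 3 mod 4
assert all(bf_biprimality_round(p * q, p, q) for _ in range(20))
```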
PerfOMR: Oblivious Message Retrieval with Reduced Communication and Computation
Anonymous message delivery, as in privacy-preserving blockchain and private messaging applications, needs to protect recipient metadata: eavesdroppers should not be able to link messages to their recipients. This raises the question: how can untrusted servers assist in delivering the pertinent messages to each recipient, without learning which messages are addressed to whom?
Recent work constructed Oblivious Message Retrieval (OMR) protocols that outsource the message detection and retrieval in a privacy-preserving way, using homomorphic encryption. Their construction exhibits significant costs in computation per message scanned (${\sim}0.1$ second), as well as in the size of the associated messages (${\sim}1$kB overhead) and public keys (${\sim}132$kB).
This work constructs more efficient OMR schemes, by replacing the LWE-based clue encryption of prior works with a Ring-LWE variant, and utilizing the resulting flexibility to improve several components of the scheme. We thus devise, analyze, and benchmark two protocols:
The first protocol focuses on improving the detector runtime, using a new retrieval circuit that can be homomorphically evaluated $13.8\times$ faster than the prior work.
The second protocol focuses on reducing the communication costs, by designing a different homomorphic decryption circuit that allows the parameters of the Ring-LWE encryption to be set such that the public key size is about $235\times$ smaller than in prior work, and the message size is roughly $1.6\times$ smaller. The runtime of this second construction is ${\sim}40.0$ms per message, still more than $2.5\times$ faster than prior works.
Application-Aware Approximate Homomorphic Encryption: Configuring FHE for Practical Use
Fully Homomorphic Encryption (FHE) is a powerful tool for performing computations on encrypted data. The Cheon-Kim-Kim-Song (CKKS) scheme, an instantiation of approximate FHE, is particularly effective for privacy-preserving machine learning applications over real and complex numbers. Although CKKS offers clear efficiency advantages, confusion persists around accurately describing applications in FHE libraries and securely instantiating the scheme for these applications, particularly after the key recovery attacks by Li and Micciancio (EUROCRYPT'21) for the $IND-CPA^D$ setting.
There is presently a gap between the application-agnostic, generic definition of $IND-CPA^D$, and efficient, application-specific instantiation of CKKS in software libraries, which led to recent attacks by Guo et al. (USENIX Security'24).
To close this gap, we introduce the notion of application-aware homomorphic encryption (AAHE) and devise related security definitions. This model corresponds more closely to how FHE schemes are implemented and used in practice, while also identifying and addressing the potential vulnerabilities in popular libraries. We then provide an application specification language (ASL) and formulate guidelines for implementing the AAHE model to achieve $IND-CPA^D$ security for practical applications of CKKS. We present a proof-of-concept implementation of the ASL in the OpenFHE library showing how the attacks by Guo et al. can be countered. Moreover, we show that our new model and ASL can be used for the secure and efficient instantiation of exact FHE schemes and to counter the recent $IND-CPA^D$ attacks by Cheon et al. (CCS'24) and Checri et al. (CRYPTO'24).
Fully Homomorphic Encryption beyond IND-CCA1 Security: Integrity through Verifiability
We focus on the problem of constructing fully homomorphic encryption (FHE) schemes that achieve a meaningful notion of adaptive chosen-ciphertext security beyond CCA1. Towards this, we propose a new notion, called security against verified chosen-ciphertext attack (vCCA). The idea behind it is to ascertain the integrity of a ciphertext by imposing strong control on the evaluation algorithm. Essentially, we require that a ciphertext obtained through homomorphic evaluation must be "linked" to the original input ciphertexts. We formalize the vCCA notion in two equivalent formulations; the first is in the indistinguishability paradigm, while the second follows the simulation-based non-malleability approach and generalizes the targeted malleability introduced by Boneh et al. in 2012.
We strengthen the credibility of our definitions by exploring relations to existing security notions for homomorphic encryption schemes, namely CCA1, RCCA, FuncCPA, CCVA, and HCCA. We prove that vCCA security is the strongest notion known so far that can be achieved by an FHE scheme; in particular, vCCA is strictly stronger than CCA1.
Finally, we provide a general transformation that takes any CPA-secure FHE scheme and makes it vCCA-secure. Our transformation first turns an FHE scheme into a CCA2-secure scheme in which part of the ciphertext retains the homomorphic properties, and then extends it with a succinct non-interactive argument of knowledge (SNARK) to verifiably control the evaluation algorithm. In fact, we obtain four general variations of this transformation. We handle both asymmetric and symmetric-key FHE schemes, and for each we give two variations differing in whether the ciphertext integrity can be verified publicly or requires the secret key. We use well-known techniques to achieve CCA security in the first step of our transformation: in the asymmetric case, the double encryption paradigm, and in the symmetric case, Encrypt-then-MAC. Furthermore, our transformation also yields the first CCA-secure FHE scheme based on bootstrapping techniques.
Breaking the decisional Diffie-Hellman problem in totally non-maximal imaginary quadratic orders
This paper introduces an algorithm that efficiently breaks the Decisional Diffie-Hellman (DDH) assumption in totally non-maximal imaginary quadratic orders, specifically when $\Delta_1 = 3$ and $f$ is non-prime, given knowledge of a single factor of $f$. Inspired by Shanks and Dedekind's work on 3-Sylow groups, we generalize their observations to undermine DDH security.