All papers in 2024
SILBE: an Updatable Public Key Encryption Scheme from Lollipop Attacks
We present a new post-quantum Public Key Encryption scheme (PKE) named Supersingular Isogeny Lollipop Based Encryption, or SILBE. SILBE is obtained by leveraging the generalised lollipop attack of Castryck and Vercauteren on the M-SIDH key exchange of Fouotsa, Moriya and Petit.
In doing so, we can make SILBE a post-quantum secure Updatable Public Key Encryption scheme (UPKE). SILBE is the first isogeny-based UPKE that is not based on group actions. Hence, SILBE overcomes the limitations highlighted by Eaton, Jao, Komlo and Mokrani at SAC'21 regarding the design of an SIDH-style UPKE. This is possible by leveraging both the Deuring correspondence and Kani's Lemma, two central concepts in isogeny-based cryptography.
A Direct PRF Construction from Kolmogorov Complexity
While classic results from the 1980s establish that one-way functions (OWFs) imply the existence of pseudorandom generators (PRGs), which in turn imply pseudorandom functions (PRFs), the constructions (most notably the one from OWFs to PRGs) are complicated and inefficient.
Consequently, researchers have developed alternative \emph{direct} constructions of PRFs from various concrete hardness assumptions. In this work, we continue this thread of work and demonstrate the first direct constructions of PRFs from average-case hardness of the time-bounded Kolmogorov complexity problem $\mktp[s]$, where, given a threshold $s(\cdot)$ and a polynomial time-bound $t(\cdot)$, $\mktp[s]$ denotes the language consisting of strings $x$ with $t$-bounded Kolmogorov complexity, $K^t(x)$, bounded by $s(|x|)$.
In more detail, we demonstrate a direct PRF construction with quasi-polynomial security from mild average-case hardness of $\mktp[2^{O(\sqrt{\log n})}]$ w.r.t. the uniform distribution. We note that, by earlier results, this assumption is known to be equivalent to the existence of quasi-polynomially secure OWFs; as such, our results yield the first direct (quasi-polynomially secure) PRF constructions from a natural hardness assumption that is also known to be implied by (quasi-polynomially secure) PRFs.
Perhaps surprisingly, we show how to make use of the Nisan-Wigderson PRG construction to get a cryptographic, as opposed to a complexity-theoretic, PRG.
The Last Challenge Attack: Exploiting a Vulnerable Implementation of the Fiat-Shamir Transform in a KZG-based SNARK
The Fiat-Shamir transform [1] is a well-known and widely employed technique for converting sound public-coin interactive protocols into sound non-interactive protocols. Even though the transformation itself is relatively clear and simple, some implementations choose to deviate from the specifications, for example for performance reasons. In this short note, we present a vulnerability arising from such a deviation in a KZG-based PLONK verifier implementation. This deviation stemmed from the incorrect computation of the last challenge of the PLONK protocol [2], where the KZG batching proof challenge was computed before, and hence independently of, the KZG evaluation proofs. More generally, such a vulnerability may affect any KZG [3] implementation where one uses batched KZG proof evaluations for at least two distinct evaluation points. We call an attack enabled by such a deviation a Last Challenge Attack. For concreteness, we show that when a PLONK verifier implementation presents such a deviation, a malicious PLONK prover can mount a Last Challenge Attack to construct verifiable proofs of false statements. The described vulnerability was initially discovered as part of an audit, and has been responsibly disclosed to the developers and fixed. A proof of concept of the vulnerability, in which a proof is forged for an arbitrary public input, is made available.
In addition to the above, in this work we also provide a security proof of the knowledge-soundness of the batched KZG scheme with evaluations for at least two distinct values.
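The failure mode generalizes beyond PLONK: whenever a Fiat-Shamir challenge is derived before a prover message it is supposed to bind, the prover can choose that message after seeing the challenge. Below is a minimal, self-contained sketch of this mis-ordering in a toy Schnorr proof of knowledge, not the paper's KZG/PLONK setting; the group parameters, hash layout, and variable names are illustrative assumptions.

```python
# Toy Schnorr proof of knowledge of x with y = g^x mod p.
# Sound Fiat-Shamir derives the challenge as c = H(g, y, t), binding the
# prover's commitment t. The buggy order below fixes c = H(g, y) first,
# mirroring a "last challenge" computed independently of a later message.
import hashlib, secrets

p = (1 << 255) - 19          # a prime; toy group, illustrative only
g = 2
q = p - 1                    # exponents reduced modulo the group order

def H(*parts):
    h = hashlib.sha256()
    for v in parts:
        h.update(v.to_bytes(32, "big"))
    return int.from_bytes(h.digest(), "big") % q

x = secrets.randbelow(q)     # the witness (never used by the forger!)
y = pow(g, x, p)             # the public statement

def verify(t, z, c):         # Schnorr verification equation
    return pow(g, z, p) == (t * pow(y, c, p)) % p

c_buggy = H(g, y)            # challenge fixed BEFORE t is chosen
z = secrets.randbelow(q)     # forger picks the response first ...
t_forged = (pow(g, z, p) * pow(y, -c_buggy, p)) % p  # ... then solves for t

assert verify(t_forged, z, c_buggy)   # accepted without knowledge of x
print("forged proof accepted under the mis-ordered challenge")
```

The same grinding freedom is what a malicious PLONK prover exploits when the KZG batching challenge ignores the evaluation proofs.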
Exponent-VRFs and Their Applications
Verifiable random functions (VRFs) are pseudorandom functions where the function owner can prove that a generated output is correct relative to a committed key. In this paper we introduce the notion of an exponent-VRF (eVRF): a VRF that does not provide its output $y$ explicitly, but instead provides $Y = y \cdot G$, where $G$ is a generator of some finite cyclic group (or $Y=g^y$ in multiplicative notation). We construct eVRFs from the Paillier encryption scheme and from DDH, both in the random-oracle model. We then show that an eVRF is a powerful tool that has many important applications in threshold cryptography. In particular, we construct (1) a one-round fully simulatable distributed key-generation protocol (after a single two-round initialization phase), (2) a two-round fully simulatable signing protocol for multiparty Schnorr with a deterministic variant, (3) a two-party ECDSA protocol that has a deterministic variant, (4) a threshold Schnorr signing protocol where the parties can later prove that they signed without being able to frame another group, and (5) an MPC-friendly and verifiable HD-derivation. All these applications are derived from this single new eVRF abstraction, and the resulting protocols are concretely efficient.
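As a rough illustration of the "exponent form" of the output, here is a minimal Python sketch: the evaluator derives $y$ pseudorandomly from its key but reveals only $Y = g^y$. The hash-based derivation, the toy group, and the names below are illustrative assumptions; a real eVRF must additionally prove that $Y$ is consistent with a committed key, which this sketch omits.

```python
# Minimal sketch of an exponent-form VRF output: y stays secret, only
# Y = g^y mod p is published. Toy group and key derivation, not the
# paper's Paillier- or DDH-based constructions.
import hashlib, secrets

p = (1 << 255) - 19            # toy prime modulus
g = 5                          # toy generator
sk = secrets.token_bytes(32)   # evaluator's secret key (toy)

def evrf_eval(sk: bytes, x: bytes):
    y = int.from_bytes(hashlib.sha256(sk + x).digest(), "big") % (p - 1)
    Y = pow(g, y, p)           # only Y leaves the evaluator
    return y, Y

y, Y = evrf_eval(sk, b"session-42")
assert pow(g, y, p) == Y
print("exponent-form output Y =", hex(Y)[:18], "...")
```

In the threshold applications listed above, $Y$ can serve directly as a public key share whose secret exponent $y$ only the evaluator knows.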
On the impact of ionizing and non-ionizing irradiation damage on security microcontrollers in CMOS technology
The possible effects of irradiation on security controllers implemented in CMOS technology are studied. First, the decrease of the effectiveness of a light sensor/detector as countermeasure against laser fault injection is analysed. Second, the use of irradiation as fault injection method is proposed.
Notus: Dynamic Proofs of Liabilities from Zero-knowledge RSA Accumulators
Proofs of Liabilities (PoL) allow an untrusted prover to commit to its liabilities towards a set of users and then prove individual users' amounts or the total sum of liabilities, upon queries by users or third-party auditors. This application setting is highly dynamic. User liabilities may increase/decrease arbitrarily and the prover needs to update proofs in epoch increments (e.g., once a day for a crypto-asset exchange platform). However, prior works mostly focus on the static case, and trivial extensions to the dynamic setting open the system to windows of opportunity for the prover to under-report its liabilities and rectify its books in time for the next check, unless all users check their liabilities at all epochs. In this work, we develop Notus, the first dynamic PoL system for general liability updates that avoids this issue. Moreover, it achieves $O(1)$ query proof size, verification time, and auditor overhead per epoch.
The core building blocks underlying Notus are a novel zero-knowledge (and SNARK-friendly) RSA accumulator and a corresponding zero-knowledge MultiSwap protocol, which may be of independent interest. We then propose optimizations to reduce the prover's update overhead and make Notus scale to large numbers of users ($10^6$ in our experiments). Our results are very encouraging, e.g., it takes less than $2$ms to verify a user's liability and the proof size is $256$ bytes. On the prover side, deploying Notus on a cloud-based testbed with eight 32-core machines and exploiting parallelism, it takes ${\sim}3$ minutes to perform the complete epoch update, after which all proofs have already been computed.
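For intuition about the accumulator layer, here is a minimal sketch of the plain (non-zero-knowledge) RSA accumulator that such constructions build on; the tiny modulus, base, and element-to-prime mapping are illustrative assumptions, and Notus's actual accumulator additionally hides the accumulated set.

```python
# Plain RSA accumulator sketch: elements map to primes, the accumulator is
# g^(product of primes) mod N, and a membership witness for an element is
# the accumulator with that element's prime left out.
from math import prod

N = 1009 * 1013            # toy RSA modulus (factors known here; insecure!)
g = 3                      # toy base, coprime to N
elements = {"alice": 101, "bob": 103, "carol": 107}   # element -> prime

acc = pow(g, prod(elements.values()), N)

def witness(name):
    others = prod(p for k, p in elements.items() if k != name)
    return pow(g, others, N)

def verify(name, wit):
    return pow(wit, elements[name], N) == acc   # wit^e = g^(full product)

assert all(verify(n, witness(n)) for n in elements)
print("all membership witnesses verify")
```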
A Deniably Authenticated Searchable Public Key Encryption Scheme in Mobile Electronic Mail System
Confidentiality and authentication are two main security goals in secure electronic mail (e-mail). Furthermore, deniability is also a significant security property for some e-mail applications that must protect the privacy of the sender. Although searchable encryption solves the keyword-search problem in a secure e-mail system, it also breaks the deniability of the system, because the adversary can obtain information about the data sender and the data user from the trapdoor as well as from the ciphertext used for keyword searching. At the same time, efficiency is another concern in mobile computing. To overcome these issues, we put forward deniably authenticated searchable public key encryption (DA-SPKE), which can be applied to the deniable e-mail system. We first define the DA-SPKE model and propose the DA-SPKE scheme. Then we prove its security in the random oracle model and evaluate its efficiency. Finally, we use it in a deniably secure e-mail system to protect both data senders' and data users' privacy.
Solving McEliece-1409 in One Day --- Cryptanalysis with the Improved BJMM Algorithm
The syndrome decoding problem (SDP) is the security assumption underlying code-based cryptography. Three out of the four NIST-PQC round 4 candidates are code-based. Information set decoding (ISD) is the fastest known family of algorithms for solving SDP instances with relatively high code rate. The security of code-based cryptography is often assessed via the asymptotic complexity of ISD algorithms; however, the concrete complexity of these algorithms has hardly been studied. Recently, Esser, May and Zweydinger (Eurocrypt '22) provided the first implementation of representation-based ISD, such as the May--Meurer--Thomae (MMT) and Becker--Joux--May--Meurer (BJMM) algorithms, and solved the McEliece-1284 instance in the decoding challenge, revealing the practical efficiency of these ISDs.
In this work, we propose a practically fast depth-2 BJMM algorithm and provide the first publicly available GPU implementation. We solve the McEliece-1409 instance for the first time and present a concrete analysis of the record. We also conduct cryptanalysis of the NIST-PQC round 4 code-based candidates against the improved BJMM algorithm. In addition, we revise the asymptotic space complexity of the time-memory trade-off MMT algorithm presented by Esser and Zweydinger (Eurocrypt '23) from $2^{0.375n}$ to $2^{0.376n}$.
Heuristic Ideal Obfuscation Based on Evasive LWR
This paper introduces a heuristic ideal obfuscation scheme grounded in lattice problems, which differs from the one proposed by Jain, Lin, and Luo ([JLLW23], CRYPTO 2023). The approach in this paper follows a methodology akin to that of Brakerski, Döttling, Garg, and Malavolta ([BDGM20], EUROCRYPT 2020) for building indistinguishability obfuscation (iO). The proposal is achieved by leveraging a variant of learning with rounding (LWR) to build linearly homomorphic encryption (LHE) and employing {\em Evasive LWR} to construct multilinear maps. Initially, we reprove the hardness of LWR using the prime number theorem and the fixed-point theorem, showing that the statistical distance between $\lfloor As\rfloor_p$ and $\lfloor u\rfloor_p$ does not exceed $\exp\left(-\frac{n\log_2n\ln p}{\sqrt{5}}\right)$ when the modulus satisfies $q>2^{n}p$. Additionally, we provide definitions for {\em Evasive LWR} and {\em composite homomorphic pseudorandom function} (cHPRF), and based on these, we construct multilinear maps, thereby establishing the ideal obfuscation scheme proposed in this paper.
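For readers unfamiliar with LWR, the following minimal sketch shows what a rounded sample $\lfloor \langle a,s\rangle \bmod q \rfloor_p$ looks like next to a rounded uniform value; the toy parameters are illustrative assumptions and deliberately far from the $q>2^{n}p$ regime analysed above.

```python
# Learning-with-rounding sample sketch: b = floor((p/q) * (<a, s> mod q)),
# i.e. deterministic rounding from Z_q down to Z_p. Toy parameters only.
import random

n, q, p = 8, 3329, 16          # toy dimension and moduli
s = [random.randrange(q) for _ in range(n)]

def lwr_sample():
    a = [random.randrange(q) for _ in range(n)]
    inner = sum(ai * si for ai, si in zip(a, s)) % q
    return a, (inner * p) // q   # rounded inner product in Z_p

def uniform_rounded():
    return (random.randrange(q) * p) // q

print("LWR sample:", lwr_sample()[1], " uniform-rounded:", uniform_rounded())
```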
On Information-Theoretic Secure Multiparty Computation with Local Repairability
In this work we consider the task of designing information-theoretic MPC protocols for which the state of a given party can be recovered from a small number of parties, a property we refer to as local repairability.
This is useful when considering MPC over dynamic settings where parties leave and join a computation, a scenario that has gained notable attention in recent literature.
Thanks to the results of Cramer et al. (EUROCRYPT'00), designing such protocols boils down to constructing a linear secret-sharing scheme (LSSS) with good locality, that is, one in which each share is determined by only a small number of other shares, and that also satisfies the so-called multiplicativity property.
Previous constructions that achieve locality (e.g. using locally recoverable codes---LRCs) do not enjoy multiplicativity, and LSSSs that are multiplicative (e.g. Shamir's secret-sharing, whose multiplicativity is illustrated in the sketch after this abstract) do not satisfy locality.
Our construction bridges this literature gap by showing the existence of an LSSS that achieves both properties simultaneously.
Our results are obtained by making use of the well-known connection between error-correcting codes and LSSSs, in order to adapt the LRC construction by Tamo & Barg (IEEE Transactions on Information Theory, 2014) and turn it into an LSSS.
With enough care, this coding-theoretic construction yields our desired locality property, but it falls short of satisfying multiplicativity.
To address this, we perform an extensive analysis of the privacy properties of our scheme, identifying parameter regimes where our construction satisfies multiplicativity.
Finally, since our LSSS satisfies locality, every share is determined by a small number of shares.
However, in an MPC context it is not enough to let the (small set of) parties send their shares to the repaired party, since this may leak more information than the regenerated share.
To obtain our final result regarding MPC with local repairability, we construct a lightweight MPC protocol that performs this repair process without any leakage.
We provide both a passively secure construction (for the plain multiplicative regime) and an actively secure one (for strong multiplicativity).
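As referenced above, here is a minimal Python sketch of the multiplicativity property for Shamir's scheme (the canonical LSSS that is multiplicative but not local): with $n = 2t+1$ parties, the coordinate-wise product of two degree-$t$ sharings is a degree-$2t$ sharing of the product of the secrets, so a fixed linear combination of the local share products recovers $s_1 s_2$. The field and parameters below are illustrative assumptions.

```python
# Multiplicativity check for Shamir's scheme over a prime field:
# share two secrets with degree-t polynomials among n = 2t+1 parties,
# multiply shares pointwise, and interpolate the product at 0.
import random

P = 2_147_483_647          # prime field (2^31 - 1)
t, n = 2, 5                # threshold t, n = 2t + 1 parties, points 1..n

def share(secret):
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            for x in range(1, n + 1)]

def lagrange_at_zero(points):
    # interpolate f(0) from (x, f(x)) pairs
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

s1, s2 = 1234, 5678
prod_shares = [a * b % P for a, b in zip(share(s1), share(s2))]
recovered = lagrange_at_zero(list(zip(range(1, n + 1), prod_shares)))
assert recovered == s1 * s2 % P   # degree-2t product needs all 2t+1 shares
print("recovered product:", recovered)
```

Note that every share of the product depends on all other shares during reconstruction, which is exactly the lack of locality the abstract describes.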
STIR: Reed–Solomon Proximity Testing with Fewer Queries
We present STIR (Shift To Improve Rate), an interactive oracle proof of proximity (IOPP) for Reed-Solomon codes that achieves the best known query complexity of any concretely efficient IOPP for this problem. For $\lambda$ bits of security, STIR has query complexity $O(\log d + \lambda \cdot \log \log d )$, while FRI, a popular protocol, has query complexity $O(\lambda \cdot \log d )$ (including variants of FRI based on conjectured security assumptions). STIR relies on a new technique for recursively improving the rate of the tested Reed-Solomon code.
We provide an implementation of STIR compiled to a SNARK. Compared to a highly-optimized implementation of FRI, STIR achieves an improvement in argument size that ranges from $1.25\times$ to $2.46\times$ depending on the chosen parameters, with similar prover and verifier running times. For example, in order to achieve 128 bits of security for degree $2^{26}$ and rate $1/4$, STIR has argument size $114$ KiB, compared to $211$ KiB for FRI.
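Purely as illustrative arithmetic (not the protocols' exact query counts, since all hidden constants in the $O(\cdot)$ bounds are set to 1 here), the gap between the two query-complexity shapes quoted above looks as follows:

```python
# Shape of the query bounds only: FRI ~ lambda * log d versus
# STIR ~ log d + lambda * loglog d, with all O(.) constants set to 1.
from math import log2, ceil

lam = 128                      # security level in bits
for logd in (20, 26, 30):      # log2 of the tested degree d
    fri  = lam * logd                         # O(lambda * log d)
    stir = logd + lam * ceil(log2(logd))      # O(log d + lambda * loglog d)
    print(f"log d = {logd}: FRI ~ {fri} queries, STIR ~ {stir} queries")
```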
On the Feasibility of Sliced Garbling
Garbling schemes are one of the most fundamental objects in cryptography and have been studied extensively due to their broad applicability. The state-of-the-art is a construction in which XOR gates are free and AND gates require $3\kappa/2+\mathcal{O}(1)$ bits per gate, due to Rosulek and Roy (CRYPTO'21). An important technique in their garbling is slicing, which partitions the labels into two equal-length slices. In this paper, we explore the feasibility of the slicing technique for garbling schemes beyond the results introduced by Rosulek and Roy, demonstrating both its potential and its limitations.
In particular, we extend this technique to demonstrate the garbling of certain higher fan-in gadgets, and then use this to show that it is possible to garble 2-input AND gates at a cost of $4\kappa/3 +\mathcal{O}(1)$ bits. We then give a separation result showing that sliced garbling cannot be used to garble higher fan-in gadgets of degree $\geq 3$ when restricted to making queries that are linear functions of the input labels to the random oracle. We further demonstrate the usefulness of our techniques in the context of oblivious garbling, a newly introduced concept for capturing circuit hiding from the garbler. The complexity of our construction is superior to that of universal circuits, and grows linearly with circuit size.
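Plugging $\kappa = 128$ into the two per-gate bounds quoted above (and ignoring the $\mathcal{O}(1)$ additive terms) gives a quick sense of the saving:

```python
# Illustrative arithmetic only: per-AND-gate bits at kappa = 128,
# ignoring the O(1) additive terms in both bounds.
kappa = 128
print("Rosulek-Roy, 3*kappa/2:", 3 * kappa // 2, "bits per AND gate")  # 192
print("this work,   4*kappa/3:", 4 * kappa // 3, "bits per AND gate")  # 170
```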
Leakage-Resilient Attribute-Based Encryption with Attribute-Hiding
In this work, we present two generic frameworks for leakage-resilient attribute-based encryption (ABE), an enhanced form of ABE that remains provably secure even when part of the secret key is leaked. Our frameworks rely on the standard assumption ($k$-Lin) over prime-order groups. The first framework is designed for leakage-resilient ABE with attribute-hiding in the bounded leakage model; prior to this work, no generic leakage-resilient ABE framework with attribute-hiding had been given. The second framework provides a generic method to construct leakage-resilient ABE in the continual leakage model. It is compatible with Zhang et al.'s work [DCC 2018] but more generic: Zhang et al.'s framework cannot be applied to some specific ABE schemes, while ours can. Technically, our frameworks are built on the predicate encoding of Chen et al. [EUROCRYPT 2015], combined with a method of adding redundancy. Finally, several instantiations are derived from our frameworks, covering the cases of the zero inner-product predicate and the non-zero inner-product predicate.
Ceno: Non-uniform, Segment and Parallel Zero-knowledge Virtual Machine
In this paper, we explore a novel Zero-knowledge Virtual Machine (zkVM) framework leveraging succinct, non-interactive zero-knowledge proofs for verifiable computation over any code. Our approach divides the proof of program execution into two stages. In the first stage, the process breaks down program execution into segments, identifying and grouping identical sections. These segments are then proved through data-parallel circuits that allow for varying amounts of duplication. In the subsequent stage, the verifier examines these segment proofs, reconstructing the program's control and data flow based on the segments' duplication number and the original program. The second stage can be further attested by a uniform recursive proof.
We propose two specific designs of this concept, where segmentation and parallelization occur at two levels: opcode and basic block. Both designs try to minimize the control flow that affects the circuit size and support dynamic copy numbers, ensuring that computational costs directly correlate with the actual code executed (i.e., you only pay as much as you use). In our second design, in particular, by proposing an innovative data-flow reconstruction technique in the second stage, we can drastically cut down on the stack operations even compared to the original program execution. Note that the two designs are complementary rather than mutually exclusive. Integrating both approaches in the same zkVM could unlock more significant potential to accommodate various program patterns.
We present an asymmetric GKR scheme to implement our designs, pairing a non-uniform prover and a uniform verifier to generate proofs for dynamic-length data-parallel circuits. The use of a GKR prover also significantly reduces the size of the commitment: GKR allows us to commit only to the circuit's input and output, whereas in Plonkish-based solutions, the prover needs to commit to all the witnesses.
High-Throughput Secure Multiparty Computation with an Honest Majority in Various Network Settings
In this work, we present novel protocols over rings for semi-honest secure three-party computation (3PC) and malicious four-party computation (4PC) with one corruption. While most existing works focus on improving total communication complexity, challenges such as network heterogeneity and computational complexity, which impact MPC performance in practice, remain underexplored.
Our protocols address these issues by tolerating multiple arbitrarily weak network links between parties without any substantial decrease in performance. Additionally, they significantly reduce computational complexity by requiring up to half the number of basic instructions per gate compared to related work. These improvements lead to up to twice the throughput of state-of-the-art protocols in homogeneous network settings and up to eight times higher throughput in real-world heterogeneous settings. These advantages come at no additional cost: Our protocols maintain the best-known total communication complexity per multiplication, requiring 3 elements for 3PC and 5 elements for 4PC.
We implemented our protocols alongside several state-of-the-art protocols (Replicated 3PC, ASTRA, Fantastic Four, Tetrad) in a novel open-source C++ framework optimized for high throughput. Five out of six implemented 3PC and 4PC protocols achieve more than one billion 32-bit multiplications or over 32 billion AND gates per second using our implementation in a 25 Gbit/s LAN environment. This represents the highest throughput achieved in 3PC and 4PC so far, outperforming existing frameworks like MP-SPDZ, ABY3, MPyC, and MOTION by two to three orders of magnitude.
A New Public Key Cryptosystem Based on the Cubic Pell Curve
Since its invention in 1978 by Rivest, Shamir and Adleman, the public key cryptosystem RSA has become a widely popular and widely useful scheme in cryptography. Its security is related to the difficulty of factoring large integers that are the product of two large prime numbers. For various reasons, several variants of RSA have been proposed, some of which use different arithmetic, such as elliptic and singular cubic curves. In 2018, Murru and Saettone proposed another variant of RSA based on the cubic Pell curve with a modulus of the form $N=pq$. In this paper, we present a new public key cryptosystem based on the arithmetic of the cubic Pell curve with a modulus of the form $N=p^rq^s$. Its security is based on the hardness of factoring composite integers, and on Rabin's trapdoor one-way function. In the new scheme, the arithmetic operations are performed on a cubic Pell curve which is known only to the sender and the recipient of a plaintext.
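A minimal sketch of the modulus shape only; the toy primes and exponents are illustrative assumptions, and the cubic Pell curve arithmetic itself is not shown here.

```python
# N = p^r * q^s, the modulus shape used by the scheme above. Real parameters
# use large primes; security rests on the hardness of factoring such N.
p_, q_ = 1009, 1013      # toy primes, far too small for any real security
r, s = 2, 1
N = p_**r * q_**s
print("N = p^r * q^s =", N)   # the curve equation is then taken modulo N
```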
Transmitter Actions for Secure Integrated Sensing and Communication
This work models a secure integrated sensing and communication (ISAC) system as a wiretap channel with action-dependent channel states and channel output feedback, e.g., obtained through reflections. The transmitted message is split into a common and a secure message, both of which must be reliably recovered at the legitimate receiver, while the secure message needs to be kept secret from the eavesdropper. The transmitter actions, such as beamforming vector design, affect the corresponding state at each channel use. The action sequence is modeled to depend on both the transmitted message and channel output feedback. For perfect channel output feedback, the secrecy-distortion regions are provided for physically-degraded and reversely-physically-degraded secure ISAC channels with transmitter actions. The corresponding rate regions when the entire message should be kept secret are also provided. The results are illustrated through characterizing the secrecy-distortion region of a binary example.
Malicious Security for SCALES: Outsourced Computation with Ephemeral Servers
The SCALES (Small Clients And Larger Ephemeral Servers) model is a recently proposed model for MPC (Acharya et al., TCC 2022). While the SCALES model offers several attractive features for practical large-scale MPC, the result of Acharya et al. only offered semi-honest secure protocols in this model.
We present a new efficient SCALES protocol secure against malicious adversaries, for general Boolean circuits. We start with the base construction of Acharya et al. and design and use a suite of carefully defined building blocks that may be of independent interest. The resulting protocol is UC-secure without honest majority, with a CRS and bulletin-board as setups, and allows publicly identifying deviations from correct execution.
Decentralized Access Control Infrastructure for Enterprise Digital Asset Management
With the rapidly evolving landscape of cryptography, blockchain technology has advanced to cater to diverse user requirements, leading to the emergence of a multi-chain ecosystem featuring various use cases characterized by distinct transaction speed and decentralization trade-offs. At the heart of this evolution lie digital signature schemes such as ECDSA, Schnorr, and EdDSA, which are responsible for safeguarding blockchain-based assets.
However, a critical gap exists in the current landscape — there is no solution empowering a consortium of entities to collectively manage or generate digital signatures for diverse digital assets in a distributed manner with dynamic threshold settings, all while mitigating counter-party risks. Existing threshold signature schemes impose a fixed threshold during the key generation phase, limiting the adaptability of threshold settings for the subsequent signature phase. Attempts to address this challenge often involve the participating parties relinquishing signature generation control, either partially or entirely, introducing vulnerabilities that could jeopardize digital assets in the event of network disruptions.
Addressing this gap, our work introduces an innovative infrastructure that allows a group of users to programmatically define and manage access control policies, supported by a blockchain network dedicated to policy enforcement. This network is uniquely designed to prevent any entity, including itself, from autonomously generating digital signatures, thereby mitigating counter-party risks and enhancing asset security. This system is particularly suited for enterprise contexts, where collaborative asset oversight and policy adherence are essential. Our solution marks a significant stride in the realm of blockchain technology, paving the way for more sophisticated and secure digital asset management in a rapidly evolving digital landscape.
Quantum Circuits of AES with a Low-depth Linear Layer and a New Structure
In recent years, quantum computing has developed rapidly. The security threat posed by quantum computing to cryptography makes it necessary to better evaluate the resource cost of attack algorithms, some of which require quantum implementations of the attacked cryptographic building blocks. In this paper we optimize quantum circuits of AES in several aspects.
Firstly, based on de Brugière \textit{et al.}'s greedy algorithm, we propose an improved depth-oriented algorithm for synthesizing low-depth CNOT circuits with no ancilla qubits. Our algorithm finds a CNOT circuit of AES MixColumns with depth 10, which breaks the recent record of depth 16. In addition, our algorithm gives low-depth CNOT circuits for many MDS matrices and matrices used in block ciphers studied in related work.
Secondly, we present a new structure, named the compressed pipeline structure, to synthesize quantum circuits of AES, which can be used for constructing quantum oracles employed in quantum attacks based on Grover's and Simon's algorithms. When the number of ancilla qubits required by the round function and its inverse is not very large, our structure has a better trade-off of $D$-$W$ cost. Moreover, our encryption oracle has the lowest depth to date. We then give detailed encryption circuits of AES-128 under the guidance of our structure and make some comparisons with other circuits.
Finally, the encryption part and the key schedule part have their own application scenarios. An encryption oracle used in Simon's algorithm and built from the former has smaller round depth: for example, we can construct an AES-128 encryption oracle with $T$-depth 33, while the previous best result is 60. A small variant of the latter, along with our method to make an S-box input-invariant, can avoid the allocation of extra ancilla qubits for storing key words in the shallowed pipeline structure. Based on this, we achieve an encryption circuit of AES-128 with the lowest $TofD$-$W$ cost of 130720 to date.
Collision Resistance from Multi-Collision Resistance for all Constant Parameters
A $t$-multi-collision-resistant hash function ($t$-MCRH) is a family of shrinking functions for which it is computationally hard to find $t$ distinct inputs mapping to the same output for a function sampled from this family. Several works have shown that $t$-MCRHs are sufficient for many of the applications of collision-resistant hash functions (CRHs), which correspond to the special case of $t = 2$.
An important question is hence whether $t$-MCRHs for $t > 2$ are fundamentally weaker objects than CRHs. As a first step towards resolving this question, Rothblum and Vasudevan (CRYPTO '22) recently gave non-black-box constructions of infinitely-often secure CRHs from $t$-MCRHs for $t \in \{3,4\}$ assuming the MCRH is sufficiently shrinking. Earlier on, Komargodski and Yogev (CRYPTO '18) also showed that $t$-MCRHs for any constant $t$ imply the weaker notion of a distributional CRH.
In this paper, we remove the limitations of prior works, and completely resolve the question of the power of $t$-MCRHs for constant $t$ in the infinitely-often regime, showing that the existence of such a function family always implies the existence of an infinitely-often secure CRH. As in the works mentioned above, our construction is non-black-box and non-constructive. We further give a new domain extension result for MCRHs that enables us to show that the underlying MCRH need only have arbitrarily small linear shrinkage (mapping $(1 + \epsilon)n$ bits to $n$ bits for any fixed $\epsilon > 0$) to imply the existence of CRHs.
SyRA: Sybil-Resilient Anonymous Signatures with Applications to Decentralized Identity
We introduce a new cryptographic primitive, called Sybil-Resilient Anonymous (SyRA) signatures, which enable users to generate, on demand, unlinkable pseudonyms tied to any given context, and issue signatures on behalf of these pseudonyms. Concretely, given a personhood relation, an issuer (who may be a distributed entity) enables users to prove their personhood and extract an associated long-term key, which can then be used to issue signatures for any given context and message. Sybil-resilient anonymous signatures achieve two key security properties: 1) Sybil resilience, which ensures that every user is entitled to at most one pseudonym per context, and 2) anonymity, which requires that no information about the user is leaked through their various pseudonyms or the signatures they issue on their pseudonyms’ behalf.
We conceptualize SyRA signatures as an ideal functionality in the Universal Composability (UC) setting and realize the functionality via an efficient, pairing-based construction that utilizes two levels of verifiable random functions (VRFs), which may be of independent interest. One of the key features of this approach is the statelessness of the issuer: we achieve the core properties of Sybil resilience and anonymity without requiring the issuer to retain any information about past user interactions. SyRA signatures have various applications in multiparty systems, such as e-voting (e.g., for decentralized governance), privacy-preserving regulatory compliance (e.g., AML/CFT checks), and cryptocurrency airdrops, making them an attractive option for deployment in decentralized identity (DID) systems. Furthermore, we demonstrate the practicality of SyRA signatures for use in such systems by providing a performance evaluation of our construction.
Strong PUF Security Metrics: Sensitivity of Responses to Single Challenge Bit Flips
This paper belongs to a sequence of manuscripts that discuss generic and easy-to-apply security metrics for Strong Physical Unclonable Functions (PUFs). These metrics cannot and shall not fully replace in-depth machine learning (ML) studies in the security assessment of Strong PUF candidates. But they can complement the latter, serve in initial complexity analyses, and allow simple iterative design optimization. Moreover, they are computationally more efficient and far easier to standardize than typical ML studies. This manuscript treats one very natural, but also very impactful, metric, investigating the effects that the alteration of single challenge bits has on the associated PUF responses.
We define several concrete metric scores based on this idea, and demonstrate their predictive power by applying them to various popular Strong PUF design families as test cases. This includes XOR Arbiter PUFs, XOR Bistable Ring PUFs, and Feed-Forward Arbiter PUFs, whose practical security is particularly well known after two decades of intense research. In passing, our manuscript also suggests techniques for representing our metric scores graphically, and for interpreting them in a meaningful manner. Our work demonstrates that, had comparable methods existed earlier, various Strong PUF candidates that were deemed secure but broken later could have been recognized and winnowed early on.
Connecting Leakage-Resilient Secret Sharing to Practice: Scaling Trends and Physical Dependencies of Prime Field Masking
Symmetric ciphers operating in (small or mid-size) prime fields have been shown to be promising candidates to maintain security against low-noise (or even noise-free) side-channel leakage.
In order to design prime ciphers that best trade physical security and implementation efficiency, it is essential to understand how side-channel security evolves with the field size (i.e., scaling trends).
Unfortunately, it has also been shown that such a scaling trend depends on the leakage functions and cannot be explained by the standard metrics used to analyze Boolean masking with noise.
In this work, we therefore initiate a formal study of prime field masking for two canonical leakage functions: bit leakages and Hamming weight leakages. By leveraging theoretical results from the leakage-resilient secret sharing literature, we explain formally why (1) bit leakages correspond to a worst case and do not encourage operating in larger fields, and (2) the opposite conclusion holds for Hamming weight leakages, where increasing the prime field modulus $p$ can contribute to a security amplification that is exponential in the number of shares, with $\log(p)$ playing the role of the security parameter, like the noise variance in Boolean masking. We combine these theoretical results with experimental ones and show that the interest of masking in larger prime fields can degrade gracefully when leakage functions slightly deviate from the Hamming weight abstraction, motivating further research towards characterizing (ideally wide) classes of leakage functions offering such guarantees.
Perfect (Parallel) Broadcast in Constant Expected Rounds via Statistical VSS
We study broadcast protocols in the information-theoretic model under optimal conditions, where the number of corruptions $t$ is at most one-third of the number of parties, $n$. While broadcast protocols with a sub-linear worst-case number of rounds are known to be impossible, protocols with an expected constant number of rounds have been demonstrated since the seminal work of Feldman and Micali [STOC'88]. Communication complexity for such protocols has gradually improved over the years, reaching $O(nL)$ plus expected $O(n^4\log n)$ bits for broadcasting a message of size $L$ bits.
This paper presents a perfectly secure broadcast protocol with expected constant rounds and communication complexity of $O(nL)$ plus expected $O(n^3 \log^2n)$ bits. In addition, we consider the problem of parallel broadcast, where each of $n$ senders wishes to broadcast a message of size $L$. We show a parallel broadcast protocol with expected constant rounds and communication complexity of $O(n^2L)$ plus expected $O(n^3 \log^2n)$ bits. Our protocol is optimal (up to expectation) for messages of length $L \in \Omega(n \log^2 n)$.
Our main contribution is a framework for obtaining perfectly secure broadcast with an expected constant number of rounds from a statistically secure verifiable secret sharing scheme. Moreover, we provide a new statistically secure verifiable secret sharing scheme where the broadcast cost per participant is reduced from $O(n \log n)$ bits to only $O({\sf poly} \log n)$ bits. All our protocols are adaptively secure.
Efficient and Generic Methods to Achieve Active Security in Private Information Retrieval and More Advanced Database Search
Motivated by secure database search, we present secure computation protocols for a function $f$ in the client-servers setting, where a client can obtain $f(x)$ on a private input $x$ by communicating with multiple servers, each holding $f$. Specifically, we propose generic compilers from passively secure protocols, which only guarantee security against servers that follow the protocol, to actively secure protocols, which guarantee privacy and correctness even against malicious servers. Our compilers apply to protocols computing any class of functions, and are efficient in that the overheads in communication and computational complexity are only polynomial in the number of servers, independent of the complexity of the function. We then apply our compilers to obtain concrete actively secure protocols for various functions including private information retrieval (PIR), bounded-degree multivariate polynomials and constant-depth circuits. For example, our actively secure PIR protocols achieve exponentially better computational complexity in the number of servers than the currently best-known protocols. Furthermore, our protocols for polynomials and constant-depth circuits reduce the required number of servers compared to the previous actively secure protocols. In particular, our protocol instantiated from the sparse Learning Parity with Noise (LPN) assumption is the first actively secure protocol for multivariate polynomials that achieves the minimum number of servers without assuming fully homomorphic encryption.
Universal Composable Password Authenticated Key Exchange for the Post-Quantum World
In this paper, we construct the first password authenticated key exchange (PAKE) scheme from isogenies with Universally Composable (UC) security in the random oracle model (ROM). We also construct the first two PAKE schemes with UC security in the quantum random oracle model (QROM): one is based on the learning with errors (LWE) assumption, and the other is based on the group-action decisional Diffie-Hellman (GA-DDH) assumption in the isogeny setting.
To obtain our UC-secure PAKE scheme in ROM, we propose a generic construction of PAKE from basic lossy public key encryption (LPKE) and CCA-secure PKE. We also introduce a new variant of LPKE, named extractable LPKE (eLPKE). By replacing the basic LPKE with eLPKE, our generic construction of PAKE achieves UC security in QROM. The LPKE and eLPKE have instantiations not only from LWE but also from GA-DDH, which admit four specific PAKE schemes with UC security in ROM or QROM, based on LWE or GA-DDH.
Lower Bounds for Differential Privacy Under Continual Observation and Online Threshold Queries
One of the most basic problems for studying the "price of privacy over time" is the so-called private counter problem, introduced by Dwork et al. (2010) and Chan et al. (2010). In this problem, we aim to track the number of events that occur over time, while hiding the existence of every single event. More specifically, in every time step $t\in[T]$ we learn (in an online fashion) that $\Delta_t\geq 0$ new events have occurred, and must respond with an estimate $n_t\approx\sum_{j=1}^t \Delta_j$. The privacy requirement is that all of the outputs together, across all time steps, satisfy event-level differential privacy.
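For context, here is a minimal sketch of the classic upper-bound side of this problem, the binary-tree ("binary mechanism") counter in the style of Dwork et al. and Chan et al.: each event enters $O(\log T)$ dyadic partial sums, each released with Laplace noise. The $\epsilon$, $T$, and noise scale below are illustrative assumptions.

```python
# Binary mechanism sketch for private continual counting: maintain exact
# dyadic partial sums (alpha), release each with Laplace noise once its
# block completes, and answer the prefix sum from the noisy blocks that
# correspond to the set bits of t.
import math, random

def laplace(scale):
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def private_counter(deltas, eps):
    T = len(deltas)
    levels = max(1, math.ceil(math.log2(T + 1)))
    scale = levels / eps               # each event touches <= `levels` nodes
    alpha = [0.0] * levels             # exact dyadic partial sums
    noisy = [0.0] * levels             # their noisy releases
    out = []
    for t, d in enumerate(deltas, start=1):
        i = (t & -t).bit_length() - 1  # lowest set bit of t
        alpha[i] = sum(alpha[:i]) + d  # merge completed sub-blocks
        for j in range(i):
            alpha[j] = noisy[j] = 0.0
        noisy[i] = alpha[i] + laplace(scale)
        out.append(sum(noisy[j] for j in range(levels) if (t >> j) & 1))
    return out

deltas = [random.randint(0, 1) for _ in range(1000)]
est = private_counter(deltas, eps=1.0)
print("true:", sum(deltas), " private estimate:", round(est[-1], 1))
```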
The main question here is how our error needs to depend on the total number of time steps $T$ and the total number of events $n$. Dwork et al. (2015) showed an upper bound of $O\left(\log(T)+\log^2(n)\right)$, and Henzinger et al. (2023) showed a lower bound of $\Omega\left(\min\{\log n, \log T\}\right)$. We show a new lower bound of $\Omega\left(\min\{n,\log T\}\right)$, which is tight w.r.t. the dependence on $T$, and is tight in the sparse case where $\log^2 n=O(\log T)$. Our lower bound has the following implications:
(1) We show that our lower bound extends to the online thresholds problem, where the goal is to privately answer many "quantile queries" when these queries are presented one-by-one. This resolves an open question of Bun et al. (2017).
(2) Our lower bound implies, for the first time, a separation between the number of mistakes obtainable by a private online learner and a non-private online learner. This partially resolves a COLT'22 open question published by Sanyal and Ramponi.
(3) Our lower bound also yields the first separation between the standard model of private online learning and a recently proposed relaxed variant of it, called private online prediction.
Two-Round Maliciously-Secure Oblivious Transfer with Optimal Rate
We give a construction of a two-round batch oblivious transfer (OT) protocol in the CRS model that is UC-secure against malicious adversaries and has (near) optimal communication cost. Specifically, to perform a batch of $k$ oblivious transfers where the sender's inputs are bits, the sender and the receiver need to communicate a total of $3k + o(k) \cdot \mathsf{poly}(\lambda)$ bits. We argue that $3k$ bits are required by any protocol with a black-box and straight-line simulator. The security of our construction is proven assuming the hardness of Quadratic Residuosity (QR) and Learning Parity with Noise (LPN).
Preimage Attacks on Reduced-Round Ascon-Xof
Ascon, a family of algorithms that supports authenticated encryption and hashing, has been selected as the new standard for lightweight cryptography in the NIST Lightweight Cryptography Project. Ascon's permutation and authenticated encryption have been actively analyzed, but there are relatively few analyses of the hashing. In this paper, we concentrate on preimage attacks on Ascon-Xof. We focus on linearizing the polynomials leaked by the hash value in order to find its inverse. In an attack on 2-round Ascon-Xof, we carefully construct the set of guess bits using a greedy algorithm in the context of guess-and-determine. This allows us to attack Ascon-Xof more efficiently than the method of Dobraunig et al., and we fully implement our attack to demonstrate its effectiveness. We also provide the number of guess bits required to linearize one output bit after 3 and 4 rounds of Ascon's permutation, respectively. In particular, for the first time, we connect the result for 3-round Ascon to a preimage attack on Ascon-Xof with a 64-bit output. Our attacks primarily focus on analyzing weakened versions of Ascon-Xof, where the weakening involves setting all the IV values to 0 and omitting the round constants. Although our attacks do not compromise the security of the full Ascon-Xof, they provide new insights into its security.
Perfectly-Secure Multiparty Computation with Linear Communication Complexity over Any Modulus
Consider the task of secure multiparty computation (MPC) among $n$ parties with perfect security and guaranteed output delivery, supporting $t<n/3$ active corruptions. Suppose the arithmetic circuit $C$ to be computed is defined over a finite ring $\mathbb{Z}/q\mathbb{Z}$, for an arbitrary $q\in\mathbb{Z}$. It is known that this type of MPC over such a ring is possible, with communication that scales as $O(n|C|)$, assuming that $q$ scales as $\Omega(n)$. However, for constant-size rings $\mathbb{Z}/q\mathbb{Z}$ where $q = O(1)$, the communication is actually $O(n\log n|C|)$ due to the need for so-called ring extensions. In most natural settings, the number of parties is variable but the "datatypes" used for the computation are fixed (e.g. 64-bit integers). In this regime, no protocol with linear communication exists.
In this work we provide an MPC protocol in this setting: perfect security, G.O.D. and $t<n/3$ active corruptions, that enjoys linear communication $O(n|C|)$, even for constant-size rings $\mathbb{Z}/q\mathbb{Z}$. This includes as important particular cases small fields such as $\mathbb{F}_2$, and also the ring $\mathbb{Z}/2^k\mathbb{Z}$. The main difficulty in achieving this result is that widely used techniques such as linear secret-sharing cannot work over constant-size rings, and instead, one must make use of ring extensions that add $\Omega(\log n)$ overhead, while packing $\Omega(\log n)$ ring elements in each extension element in order to amortize this cost. We make use of reverse multiplication-friendly embeddings (RMFEs) for this packing, and adapt recent techniques in network routing (Goyal et al. CRYPTO'22) to ensure this can be efficiently used for non-SIMD circuits. Unfortunately, doing this naively results in a restriction on the minimum width of the circuit, which leads to an extra additive term in communication of $\mathsf{poly}(n)\cdot\mathsf{depth}(C)$. One of our biggest technical contributions lies in designing novel techniques to overcome this limitation by packing elements that are distributed across different layers. To the best of our knowledge, all works that have a notion of packing (e.g. RMFE or packed secret-sharing) group gates across the same layer, and not doing so, as in our work, leads to a unique set of challenges and complications.
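The obstruction to Shamir-style sharing over a constant-size ring can be made concrete: the scheme needs evaluation points whose pairwise differences are invertible, and $\mathbb{Z}/q\mathbb{Z}$ contains at most as many such points as the smallest prime factor of $q$. A minimal sketch, with toy moduli as illustrative assumptions:

```python
# Count the largest set of points in Z/qZ with all pairwise differences
# invertible mod q (the points usable by Shamir-style sharing). The count
# equals the smallest prime factor of q, so for constant q it is constant,
# which is why degree-Theta(log n) ring extensions, and then RMFE packing
# to amortize them, enter the picture.
from math import gcd

def max_shamir_points(q):
    pts = []
    for x in range(q):                 # greedy search is exact here
        if all(gcd((x - y) % q, q) == 1 for y in pts):
            pts.append(x)
    return len(pts)

for q in (2, 4, 16, 257):
    print(f"Z/{q}Z: at most {max_shamir_points(q)} valid evaluation points")
```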
Garbled Circuit Lookup Tables with Logarithmic Number of Ciphertexts
Garbled Circuit (GC) is a basic technique for practical secure computation. GC handles Boolean circuits; it consumes significant network bandwidth to transmit encoded gate truth tables, each of which scales with the computational security parameter $\kappa$. GC optimizations that reduce bandwidth consumption are valuable.
It is natural to consider a generalization of Boolean two-input one-output gates (represented by $4$-row one-column lookup tables, LUTs) to arbitrary $N$-row $m$-column LUTs. Known techniques for this do not scale, with the naive size-$O(Nm\kappa)$ garbled LUT being the most practical approach in many scenarios.
Our novel garbling scheme -- logrow -- implements GC LUTs while sending only a number of ciphertexts logarithmic in $N$! Specifically, let $n = \lceil \log_2 N \rceil$. We allow the GC parties to evaluate a LUT for $(n-1)\kappa + nm\kappa + Nm$ bits of communication. logrow is compatible with modern GC advances, e.g. half gates and free XOR.
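Plugging toy table shapes into the formula above (illustrative arithmetic only) shows how far this sits below the naive $Nm\kappa$ garbled LUT:

```python
# Communication of the logrow formula, (n-1)*kappa + n*m*kappa + N*m bits
# with n = ceil(log2 N), versus the naive N*m*kappa garbled LUT.
from math import ceil, log2

kappa = 128
for N, m in ((256, 8), (4096, 16), (65536, 32)):
    n = ceil(log2(N))
    logrow = (n - 1) * kappa + n * m * kappa + N * m
    naive = N * m * kappa
    print(f"N={N}, m={m}: logrow {logrow:,} bits vs naive {naive:,} bits")
```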
Our work improves state-of-the-art GC handling of several interesting applications, such as privacy-preserving machine learning, floating-point arithmetic, and DFA evaluation.
Algorithms for Matrix Code and Alternating Trilinear Form Equivalences via New Isomorphism Invariants
We devise algorithms for finding equivalences of trilinear forms over finite fields modulo linear group actions. Our focus is on two problems under this umbrella, Matrix Code Equivalence (MCE) and Alternating Trilinear Form Equivalence (ATFE), since their hardness is the foundation of the NIST round-$1$ signature candidates MEDS and ALTEQ respectively.
We present new algorithms for MCE and ATFE, which are further developments of the algorithms for polynomial isomorphism and alternating trilinear form equivalence, in particular by Bouillaguet, Fouque, and Véber (Eurocrypt 2013), and Beullens (Crypto 2023). Key ingredients in these algorithms are new easy-to-compute distinguishing invariants under the respective group actions.
For MCE, we associate new isomorphism invariants to corank-$1$ points of matrix codes, which lead to a birthday-type algorithm. We present empirical justifications that these isomorphism invariants are easy to compute and distinguishing, and provide an implementation of this algorithm. This algorithm has some implications for the security of MEDS.
The invariant function for ATFE is similar, except that it is associated with lower-rank points. Modulo certain assumptions on turning the invariant function into canonical forms, our algorithm for ATFE improves on the runtime of the previously best known algorithm of Beullens (Crypto 2023).
Finally, we present quantum variants of our classical algorithms with cubic runtime improvements.
Accelerating SLH-DSA by Two Orders of Magnitude with a Single Hash Unit
We report on efficient and secure hardware implementation techniques for the FIPS 205 SLH-DSA Hash-Based Signature Standard. We demonstrate that very significant overall performance gains can be obtained from hardware that optimizes the padding formats and iterative hashing processes specific to SLH-DSA. A prototype implementation, SLotH, contains Keccak/SHAKE, SHA2-256, and SHA2-512 cores and supports all 12 parameter sets of SLH-DSA. SLotH also supports side-channel secure PRF computation and Winternitz chains. SLotH drivers run on a small RISC-V control core, as is common in current Root-of-Trust (RoT) systems.
The new features make SLH-DSA on SLotH many times faster compared to similarly-sized general-purpose hash accelerators. Compared to unaccelerated microcontroller implementations, the performance of SLotH's SHAKE variants is up to $300\times$ faster; signature generation with the 128f parameter set takes 4,903,978 cycles, while signature verification with the 128s parameter set takes only 179,603 cycles. The SHA2 parameter sets run at approximately half the speed of the SHAKE parameter sets. We observe that the signature verification performance of SLH-DSA's ``s'' parameter sets is generally better than that of accelerated ECDSA or Dilithium on similarly-sized RoT targets. The area of the full SLotH system is small, from 63 kGE (SHA2, Cat 1 only) to 155 kGE (all parameter sets). The Keccak Threshold Implementation adds another 130 kGE.
We provide sensitivity analysis of SLH-DSA in relation to side-channel leakage. We show experimentally that an SLH-DSA implementation with CPU hashing will rapidly leak the SK.seed master key. We perform a 100,000-trace TVLA leakage assessment with a protected SLotH unit.
Key Recovery Attack on the Partial Vandermonde Knapsack Problem
The Partial Vandermonde (PV) Knapsack problem is an algebraic variant of the low-density inhomogeneous SIS problem. The problem has been used as a building block for various lattice-based constructions, including signatures (ACNS'14, ACISP'18), encryptions (DCC'15,DCC'20), and signature aggregation (Eprint'20). At Crypto'22, Boudgoust, Gachon, and Pellet-Mary proposed a key distinguishing attack on the PV Knapsack exploiting algebraic properties of the problem. Unfortunately, their attack doesn't offer key recovery, except for worst-case keys.
In this paper, we propose an alternative attack on the PV Knapsack problem, which provides key recovery for a much larger set of keys. Like the Crypto'22 attack, it is based on lattice reduction and uses a dimension reduction technique to speed up the underlying lattice reduction algorithm and enhance its performance. As a side bonus, our attack transforms the PV Knapsack problem into uSVP instances instead of the SVP instances of the Crypto'22 attack. This also helps the lattice reduction algorithm, from both a theoretical and a practical point of view.
We use our attack to re-assess the hardness of the concrete parameters used in the literature. It appears that many contain a non-negligible fraction of weak keys, which are easily identified and extremely susceptible to our attack. For example, a fraction of $2^{-19}$ of the public keys of a parameter set from ACISP'18 can be solved in about $30$ hours on a moderate server using off-the-shelf lattice reduction. This parameter set was initially claimed to offer $129$-bit security against key recovery attacks. Its security was reduced to $87$ bits by the distinguishing attack from Crypto'22. Similarly, the ACNS'14 proposal also includes a parameter set containing a fraction of $2^{-19}$ of weak keys; those can be solved in about $17$ hours.
Combined Threshold Implementation
Physical security is an important aspect of devices for which an adversary can manipulate the physical execution environment. Recently, more and more attention has been directed towards a security model that combines the capabilities of passive and active physical attacks, i.e., an adversary that performs fault-injection and side-channel analysis at the same time. Implementing countermeasures against such a powerful adversary is not only costly but also requires the skillful combination of masking and redundancy to counteract all reciprocal effects.
In this work, we propose a new methodology to generate combined-secure circuits. We show how to transform TI-like constructions to resist any adversary with the capability to tamper with internal gates and probe internal wires. For the resulting protection scheme, we can prove the combined security in a well-established theoretical security model.
Since the transformation preserves the advantages of TI-like structures, the resulting circuits prove to be more efficient than the state of the art in the number of required bits of randomness (up to 100%), in latency in clock cycles (up to 40%), and even in area for pipelined designs (up to 40%), for an adversary restricted to manipulating a single gate and probing a single wire.
Algebraic Algorithm for the Alternating Trilinear Form Equivalence Problem
The Alternating Trilinear Form Equivalence (ATFE) problem was recently used by Tang et al. as a hardness assumption in the design of a Fiat-Shamir digital signature scheme ALTEQ. The scheme was submitted to the additional round for digital signatures of the NIST standardization process for post-quantum cryptography.
ATFE is a hard equivalence problem known to be in the class of equivalence problems that includes, for instance, the Tensor Isomorphism (TI), Quadratic Maps Linear Equivalence (QMLE) and the Matrix Code Equivalence (MCE) problems. Due to the increased cryptographic interest, the understanding of its practical hardness has also increased in the last couple of years. Currently, there are several combinatorial and algebraic algorithms for solving it, the best of which is a graph-theoretic algorithm that also includes an algebraic subroutine.
In this paper, we take a purely algebraic approach to the ATFE problem, but we use a coding theory perspective to model the problem. This modelling was introduced earlier for the MCE problem. Using it, we improve the cost of algebraic attacks against ATFE compared to previously known ones.
Taking into account the algebraic structure of alternating trilinear forms, we show that the obtained system has fewer variables but also fewer equations than for MCE, and gives rise to structural degree-3 syzygies. Under the assumption that, outside of these syzygies, the system behaves semi-regularly, we provide a concrete, non-asymptotic complexity estimate of the performance of our algebraic attack.
Our results show that the complexity of our attack is below the estimated security levels of ALTEQ by more than 20 bits for NIST level I (and more for the others), thus the scheme requires re-parametrization for all three NIST security levels.
Selfish Mining Time-Averaged Analysis in Bitcoin: Is Orphan Reporting an Effective Countermeasure?
A Bitcoin miner who owns a sufficient amount of mining power can perform selfish mining to increase its relative revenue. Studies have demonstrated that the time-averaged profit of a selfish miner starts to rise once the mining difficulty level gets adjusted in favor of the attacker. Selfish mining profitability lies in the fact that orphan blocks are not incorporated into the current version of Bitcoin's difficulty adjustment mechanism (DAM). Therefore, it is believed that considering the count of orphan blocks in the DAM can result in complete unprofitability for selfish mining. In this paper, we disprove this belief by providing a formal analysis of the selfish mining time-averaged profit. We present a precise definition of the orphan blocks that can be incorporated into calculating the next epoch's target and then introduce two modified versions of DAM in which both main-chain blocks and orphan blocks are incorporated. We propose two versions of smart intermittent selfish mining, where the first one dominates the normal intermittent selfish mining, and the second one results in selfish mining profitability under the modified DAMs. Moreover, we present the orphan exclusion attack with the help of which the attacker can stop honest miners from reporting the orphan blocks. Using combinatorial tools, we analyze the profitability of selfish mining accompanied by the orphan exclusion attack under the modified DAMs. Our results show that even when considering orphan blocks in the DAM, selfish mining can still be profitable. However, the level of profitability under the modified DAMs is significantly lower than that observed under the current version of Bitcoin DAM, suggesting that orphan reporting can be an effective countermeasure against a payoff-maximizing selfish miner.
Integrating Causality in Messaging Channels
Causal reasoning plays an important role in the comprehension of communication, but it has been elusive so far how causality should be properly preserved by instant messaging services. To the best of our knowledge, causality preservation is not even treated as a desired security property by most (if not all) existing secure messaging protocols like Signal. This is probably due to the intuition that causality seems already preserved when all received messages are intact and displayed according to their sending order. Our starting point is to notice that this intuition is wrong.
Until now, for messaging channels (where conversations take place), both a proper causality model and provably secure constructions have been left open. Our work fills this gap, with the goal of facilitating the formal understanding of causality preservation in messaging.
First, we focus on the common two-user secure messaging channels and model the desired causality preservation property. We take the popular Signal protocol as an example and analyze the causality security of its cryptographic core (the double-ratchet mechanism). We show its inadequacy with a simple causality attack, then fix it such that the resulting Signal channel is causality-preserving, even in a strong sense that guarantees post-compromise security. Our fix is actually generic: it can be applied to any bidirectional channel to gain strong causality security.
Then, we model causality security for so-called message franking channels. Such a channel additionally enables end users to report individual abusive messages to a server (e.g., the service provider), where this server relays the end-to-end-encrypted communication between users. Causality security in this setting further allows the server to retrieve the necessary causal dependencies of each reported message, essentially extending isolated reported messages to message flows. This has great security merit for dispute resolution, because a benign message may be deemed abusive when isolated from its context. As an example, we apply our model to analyze Facebook's message franking scheme. We show that a malicious user can easily trick Facebook (i.e., the server) into accusing an innocent user. We then fix this issue by amending the underlying message franking channel to preserve the desired causality.
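To make the causality notion concrete, here is a toy illustration (not the paper's construction) of the core idea: each outgoing message carries hashes of the messages its sender had already received, so a receiver, or the franking server, can recover the causal context of any individual message. The JSON envelope and field names are invented for the sketch; a real channel would encrypt and authenticate the envelope.

```python
import hashlib, json

def h(msg: bytes) -> str:
    return hashlib.sha256(msg).hexdigest()

class CausalEndpoint:
    """Toy endpoint that binds each outgoing message to the hashes of the
    messages it has already received, exposing 'what was replied to'."""
    def __init__(self):
        self.seen = []             # hashes of messages received so far

    def send(self, plaintext: bytes) -> bytes:
        envelope = json.dumps({
            "body": plaintext.decode(),
            "deps": self.seen[:],  # causal predecessors of this message
        }).encode()
        return envelope            # would be AEAD-protected in a real channel

    def receive(self, envelope: bytes) -> dict:
        msg = json.loads(envelope)
        self.seen.append(h(envelope))
        return msg

alice, bob = CausalEndpoint(), CausalEndpoint()
bob.receive(alice.send(b"dinner tonight?"))
reply = bob.send(b"sounds awful")   # reply's deps pin down what Bob had seen
print(json.loads(reply)["deps"])
```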
Key Exchange with Tight (Full) Forward Secrecy via Key Confirmation
Weak forward secrecy (wFS) of authenticated key exchange (AKE) protocols is a passive variant of (full) forward secrecy (FS). A natural mechanism to upgrade from wFS to FS is the use of key confirmation messages which compute a message authentication code (MAC) over the transcript. Unfortunately, Gellert, Gjøsteen, Jacobson and Jager (GGJJ, CRYPTO 2023) show that this mechanism inherently incurs a loss proportional to the number of users, leading to an overall non-tight reduction, even if wFS was established using a tight reduction.
Inspired by GGJJ, we propose a new notion, called one-way verifiable weak forward secrecy (OW-VwFS), and prove that OW-VwFS can be transformed tightly to FS using key confirmation in the random oracle model (ROM). To implement our generic transformation, we show that several tightly wFS AKE protocols additionally satisfy our OW-VwFS notion tightly. We highlight that using the recent lattice-based protocol from Pan, Wagner, and Zeng (CRYPTO 2023) can give us the first lattice-based tightly FS AKE via key confirmation in the classical random oracle model. Besides this, we also obtain a Decisional-Diffie-Hellman-based protocol that is considerably more efficient than the previous ones.
Finally, we lift our study of FS via key confirmation to the quantum random oracle model (QROM). While our security reduction is overall non-tight, it matches the best existing bound for wFS in the QROM (Pan, Wagner, and Zeng, ASIACRYPT 2023), namely, it is square-root- and session-tight. Our analysis is in the multi-challenge setting, which is more realistic than the single-challenge setting of Pan et al.
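For readers unfamiliar with the mechanism, a minimal sketch of key confirmation follows: both parties derive a dedicated confirmation key alongside the session key and exchange MACs over the transcript. The hash-based derivation and labels here are illustrative stand-ins, not the paper's exact key schedule.

```python
import hashlib, hmac, os

# Both parties finished a wFS key exchange: they share a secret `ms` and
# agree on the transcript of all exchanged messages.
ms = os.urandom(32)
transcript = b"msg_1 || msg_2 || msg_3"      # placeholder encoding

# Domain-separated derivation of a session key and a confirmation key.
k_session = hashlib.sha256(b"session" + ms).digest()
k_confirm = hashlib.sha256(b"confirm" + ms).digest()

# Each party MACs the transcript under the confirmation key; verifying the
# peer's tag is the key-confirmation step that upgrades wFS towards FS.
tag_initiator = hmac.new(k_confirm, b"I" + transcript, hashlib.sha256).digest()
tag_responder = hmac.new(k_confirm, b"R" + transcript, hashlib.sha256).digest()
assert hmac.compare_digest(
    tag_responder,
    hmac.new(k_confirm, b"R" + transcript, hashlib.sha256).digest())
```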
The NISQ Complexity of Collision Finding
Collision-resistant hashing, a fundamental primitive in modern cryptography, ensures that there is no efficient way to find distinct inputs that produce the same hash value. This property underpins the security of various cryptographic applications, making it crucial to understand its complexity. The complexity of this problem is well understood in the classical setting, where $\Theta(N^{1/2})$ queries are needed to find a collision. However, the advent of quantum computing has introduced new challenges, since quantum adversaries - equipped with the power of quantum queries - can find collisions much more efficiently. Brassard, Høyer and Tapp, and Aaronson and Shi, established that full-scale quantum adversaries require $\Theta(N^{1/3})$ queries to find a collision, prompting a need for longer hash outputs, which impacts efficiency in terms of the key lengths needed for security.
This paper explores the implications of quantum attacks in the Noisy-Intermediate Scale Quantum (NISQ) era. In this work, we investigate three different models for NISQ algorithms and achieve tight bounds for all of them:
(1) A hybrid algorithm making adaptive quantum or classical queries but with a limited quantum query budget, or
(2) A quantum algorithm with access to a noisy oracle, subject to a dephasing or depolarizing channel, or
(3) A hybrid algorithm with an upper bound on its maximum quantum depth; i.e., a classical algorithm aided by low-depth quantum circuits.
In fact, our results handle all regimes between NISQ and full-scale quantum computers. Previously, only results for the preimage search problem were known for these models (by Sun and Zheng; Rosmanis; and Chen, Cotler, Huang and Li), while nothing was known about the collision finding problem.
Along with our main results, we develop an information-theoretic framework for recording query transcripts of quantum-classical algorithms. The main feature of this framework is that it allows us to record queries in two incompatible bases - classical queries in the standard basis and quantum queries in the Fourier basis - consistently. We call the framework the hybrid compressed oracle as it naturally interpolates between the classical way of recording queries and the compressed oracle framework of Zhandry for recording quantum queries.
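As a point of reference for the classical $\Theta(N^{1/2})$ bound mentioned above, here is the textbook birthday-style collision search against a toy 24-bit random function (SHA-256 truncated for illustration):

```python
import hashlib

def truncated_hash(x: bytes, n_bits: int = 24) -> int:
    """A toy stand-in for a random function with a 2^n_bits range."""
    d = hashlib.sha256(x).digest()
    return int.from_bytes(d, "big") >> (256 - n_bits)

def birthday_collision(n_bits: int = 24):
    """Classical collision search: ~2^(n_bits/2) queries on average."""
    seen = {}
    i = 0
    while True:
        y = truncated_hash(i.to_bytes(8, "big"), n_bits)
        if y in seen:
            return seen[y], i, i + 1   # colliding inputs and query count
        seen[y] = i
        i += 1

x1, x2, queries = birthday_collision()
print(f"collision after {queries} queries (~2^12 expected for 24-bit range)")
```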
Key-Recovery Attack on a Public-Key Encryption Related to Planted Clique
Hudoba proposed a public key encryption (PKE) scheme and conjectured its security to be based on the Planted Clique problem. In this note, we show that this scheme is not secure. We do so by devising an efficient algorithm for the even neighbor independent set problem proposed by Hudoba. This leaves open the possibility of building PKE based on Planted Clique.
Stateless Deterministic Multi-Party EdDSA Signatures with Low Communication
EdDSA is a signing algorithm standardized by both the IRTF and NIST that is widely used in blockchains, e.g., Hyperledger, Cardano, and Zcash. It is a variant of the well-known Schnorr signature scheme that leverages Edwards curves. It features stateless and deterministic nonce generation, meaning it does not rely on a reliable source of randomness or state continuity. Recently, NIST issued a call for multi-party threshold EdDSA signatures, with one approach verifying nonce generation through zero-knowledge (ZK) proofs. In this paper, we propose a new stateless and deterministic multi-party EdDSA protocol in the full-threshold setting, capable of tolerating all-but-one malicious corruptions. Compared to the state-of-the-art multi-party EdDSA protocol by Garillot et al. (Crypto’21), our protocol reduces the communication cost by a factor of 56 while maintaining the same three-round structure, albeit with a roughly 2.25x increase in computational cost. We utilize information-theoretic message authentication codes (IT-MACs) in a multi-verifier setting to authenticate values and transform them from the Boolean domain to the arithmetic domain by refining multi-verifier extended doubly-authenticated bits (mv-edabits). Additionally, we employ pseudorandom correlation functions (PCFs) to generate IT-MACs in a stateless and deterministic manner. Combining these elements, we design a multi-verifier zero-knowledge (MVZK) protocol for stateless and deterministic nonce generation. Our protocol can be used to build secure blockchain wallets and custody solutions, enhancing key protection.
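For context, the single-party rule that makes EdDSA nonces stateless and deterministic (RFC 8032) is a hash of a secret prefix and the message; the paper's MVZK protocol proves that nonce shares were generated this way inside MPC. A sketch of the single-signer rule:

```python
import hashlib

def eddsa_nonce(prefix: bytes, message: bytes) -> int:
    """RFC 8032-style deterministic Ed25519 nonce: SHA-512 of a secret
    prefix and the message, read little-endian, reduced mod the group
    order. No RNG and no state is needed."""
    L = 2**252 + 27742317777372353535851937790883648493  # Ed25519 order
    r = int.from_bytes(hashlib.sha512(prefix + message).digest(), "little")
    return r % L

# Same (prefix, message) always yields the same nonce, so a faulty RNG
# or a state rollback can never cause nonce reuse across two messages.
assert eddsa_nonce(b"\x01" * 32, b"tx") == eddsa_nonce(b"\x01" * 32, b"tx")
```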
Security analysis of the iMessage PQ3 protocol
The iMessage PQ3 protocol is an end-to-end encrypted messaging protocol designed for exchanging data in long-lived sessions between two devices. It aims to provide classical and post-quantum confidentiality with forward secrecy and post-compromise security, as well as classical authentication. Its initial authenticated key exchange is constructed from digital signatures plus elliptic curve Diffie–Hellman and post-quantum key exchanges; to derive per-message keys on an ongoing basis, it employs an adaptation of the Signal double ratchet that includes a post-quantum key encapsulation mechanism. This paper presents the cryptographic details of the PQ3 protocol and gives a reductionist security analysis by adapting the multi-stage key exchange security analysis of Signal by Cohn-Gordon et al. (J. Cryptology, 2020). The analysis shows that PQ3 provides confidentiality with forward secrecy and post-compromise security against both classical and quantum adversaries, in both the initial key exchange as well as the continuous rekeying phase of the protocol.
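As a minimal illustration of the hybrid principle (not PQ3's actual key schedule), deriving a key from both an elliptic-curve shared secret and a KEM shared secret keeps the result secure as long as either input remains unpredictable:

```python
import hashlib

def combine_secrets(ecdh_ss: bytes, kem_ss: bytes, context: bytes) -> bytes:
    """Hybrid KDF sketch: the derived key depends on both the classical and
    the post-quantum shared secret, so it stays secure as long as at least
    one of the two underlying problems remains hard."""
    return hashlib.sha256(b"hybrid" + context + ecdh_ss + kem_ss).digest()
```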
On Central Primitives for Quantum Cryptography with Classical Communication
Recent work has introduced the "Quantum-Computation Classical-Communication" (QCCC) setting for cryptography (Chung et al.). There has been some evidence that One-Way Puzzles (OWPuzz) are the natural central cryptographic primitive for this setting (Khurana and Tomer). For a primitive to be considered central, it should have several characteristics: it should be well behaved (which for this paper we will think of as having amplification, combiners, and universal constructions); it should be implied by a wide variety of other primitives; and it should be equivalent to some class of useful primitives. We present combiners, correctness and security amplification, and a universal construction for OWPuzz. Our proof of security amplification uses a new and cleaner construction of EFI from OWPuzz (in comparison to the result of Khurana and Tomer) that generalizes to weak OWPuzz and is the most technically involved section of the paper. It was previously known that OWPuzz are implied by other primitives of interest, including commitments, symmetric key encryption, one-way state generators (OWSG), and therefore pseudorandom states (PRS). However, we are able to rule out OWPuzz's equivalence to many of these primitives by showing a black-box separation between general OWPuzz and a restricted class of OWPuzz (those with efficient verification, which we call EV-OWPuzz). We then show that EV-OWPuzz are also implied by most of these primitives, which separates them from OWPuzz as well. This separation also separates extending PRS from highly compressing PRS, answering an open question of Ananth et al.
Adaptively Secure Streaming Functional Encryption
This paper introduces the first adaptively secure streaming functional encryption (sFE) scheme for P/Poly. sFE stands as an evolved variant of traditional functional encryption (FE), catering specifically to contexts with vast and/or dynamically evolving data sets. sFE is designed for applications where data arrives in a streaming fashion and is computed on in an iterative manner as the stream arrives. Unlike standard FE, in sFE: (1) encryption is possible without knowledge of the full data set, (2) partial decryption is possible given only a prefix of the input.
Guan, Korb, and Sahai introduced this concept in their recent publication [CRYPTO 2023], where they constructed an sFE scheme for P/Poly using a compact standard FE scheme for the same. However, their sFE scheme only achieved semi-adaptive-function-selective security, which constrains the adversary to obtain all functional keys prior to seeing any ciphertext for the challenge stream. This restriction severely limits the scenarios where sFE can be applied and fails to provide a suitable theoretical basis for sFE.
In contrast, the adaptive security model empowers the adversary to arbitrarily interleave requests for functional keys with ciphertexts related to the challenge stream. Guan, Korb, and Sahai identified achieving adaptive security for sFE as the key question left open by their work.
We resolve this open question positively by constructing an adaptively secure sFE scheme from indistinguishability obfuscation for P/Poly and injective PRGs. By combining our work with that of Jain, Lin, and Sahai [STOC 2021, EUROCRYPT 2022], we obtain the first adaptively secure sFE scheme for P/Poly based on the sub-exponential hardness of well-studied computational problems.
WARPfold : Wrongfield ARithmetic for Protostar folding
Inspired by the range-check trick from the recent LatticeFold paper, we construct an elliptic-curve-based IVC capable of simulating non-native arithmetic efficiently.
We explain the general principle (which can be applied to both Protostar and Hypernova), and describe the Wrongfield ARithmetic for Protostar folding in detail.
Our construction supports circuits over multiple non-native fields simultaneously and allows interfacing between them using range-checked elements.
WARPfold can be used to warp between different proof systems and to construct folding schemes over curves not admitting a dual partner (such as BLS12-381).
FuLeakage: Breaking FuLeeca by Learning Attacks
FuLeeca is a signature scheme submitted to the recent NIST call for additional signatures. It is an efficient hash-and-sign scheme based on quasi-cyclic codes in the Lee metric and resembles the lattice-based signature Falcon. FuLeeca proposes a so-called concentration step within the signing procedure to avoid leakage of secret-key information from the signatures. However, FuLeeca is still vulnerable to learning attacks, which were first observed for lattice-based schemes. We present three full key-recovery attacks by exploiting the proximity of the code-based FuLeeca scheme to lattice-based primitives.
More precisely, we use a few signatures to extract an $n/2$-dimensional circulant sublattice from the given length-$n$ code, that still contains the exceptionally short secret-key vector. This significantly reduces the classical attack cost and, in addition, leads to a full key recovery in quantum-polynomial time. Furthermore, we exploit a bias in the concentration procedure to classically recover the full key for any security level with at most 175,000 signatures in less than an hour.
Improved Meet-in-the-Middle Nostradamus Attacks on AES-like Hashing
The Nostradamus attack was originally proposed as a security vulnerability for a hash function by Kelsey and Kohno at EUROCRYPT 2006. It requires the attacker to commit to a hash value y of an iterated hash function H. Subsequently, upon being provided with a message prefix P, the adversary’s task is to identify a suffix S such that H(P||S) equals y. Kelsey and Kohno demonstrated a herding attack requiring $O(\sqrt{n}\cdot 2^{2n/3})$ evaluations of the compression function of H, where n represents the output and state size of the hash, placing this attack between preimage attacks and collision searches in terms of complexity. At ASIACRYPT 2022, Benedikt et al. transformed Kelsey and Kohno’s attack into a quantum variant, decreasing the time complexity from $O(\sqrt{n}\cdot 2^{2n/3})$ to $O(\sqrt[3]{n}\cdot 2^{3n/7})$. At ToSC 2023, Zhang et al. proposed the first dedicated Nostradamus attack on AES-like hashing in both classical and quantum settings. In this paper, we revise the multi-target technique incorporated into the meet-in-the-middle automatic search framework. This modification leads to a decrease in time complexity during the online linking phase, effectively reducing the overall attack time complexity in both classical and quantum scenarios. Specifically, we can attack more rounds in the classical setting and reduce the time complexity for the same number of rounds in the quantum setting.
Improved Differential Meet-In-The-Middle Cryptanalysis
In this paper, we extend the applicability of differential meet-in-the-middle attacks, proposed at CRYPTO 2023, to truncated differentials, and in addition, we introduce three new ideas to improve this type of attack: we show how to add longer structures than the original paper, we show how to improve the key-recovery steps by introducing some probability in them, and we combine this type of attack with the state-test technique, which was introduced in the context of impossible differential attacks. Furthermore, we have developed a MILP-based tool to automate the search for a truncated differential-MITM attack with optimized overall complexity, incorporating some of the proposed improvements. Thanks to this, we can build the best known attacks on the cipher CRAFT, reaching 23 rounds against 21 previously; we provide a new attack on 23-round SKINNY-64-192, and we improve the best attacks on SKINNY-128-384.
Automating Collision Attacks on RIPEMD-160
As an ISO/IEC standard, the hash function RIPEMD-160 has been used to generate the Bitcoin address with SHA-256. However, due to the complex double-branch structure of RIPEMD-160, the best collision attack only reaches 36 out of 80 steps of RIPEMD-160, and the best semi-free-start (SFS) collision attack only reaches 40 steps. To improve the 36-step collision attack proposed at EUROCRYPT 2023, we explored the possibility of using different message differences to increase the number of attacked steps, and we finally identified one choice allowing a 40-step collision attack. To find the corresponding 40-step differential characteristic, we re-implement the MILP-based method to search for signed differential characteristics with SAT/SMT. As a result, we can find a colliding message pair for 40-step RIPEMD-160 in practical time, which significantly improves the best collision attack on RIPEMD-160. For the best SFS collision attack published at ToSC 2019, we observe that the bottleneck is the probability of the right-branch differential characteristics as they are fully uncontrolled in the message modification. To address this issue, we utilize our SAT/SMT-based tool to search for high-probability differential characteristics for the right branch. Consequently, we can mount successful SFS collision attacks on 41, 42 and 43 steps of RIPEMD-160, thus significantly improving the SFS collision attacks. In addition, we also searched for a 44-step differential characteristic, but the differential probability is too low to allow a meaningful SFS collision attack.
New Records in Collision Attacks on SHA-2
The SHA-2 family, including SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224 and SHA-512/256, is a U.S. federal standard published by NIST. In particular, there is no doubt that SHA-256 is one of the most important hash functions used in real-world applications. Due to its complex design compared with SHA-1, there has been almost no progress in collision attacks on SHA-2 since ASIACRYPT 2015. In this work, we retake this challenge and aim to significantly improve collision attacks on the SHA-2 family. First, we observe from many existing attacks on SHA-2 that the current advanced tool to search for SHA-2 characteristics has reached a bottleneck: longer differential characteristics could not be found, which prevents the collision attacks from reaching more steps. To address this issue, we adopt Liu et al.’s MILP-based method and implement it with SAT/SMT for SHA-2, where we also add more techniques to detect contradictions in SHA-2 characteristics. This answers an open problem left in Liu et al.’s paper on applying the technique to SHA-2. With this SAT/SMT-based tool, we search for SHA-2 characteristics by controlling their sparsity in a dedicated way. As a result, we successfully find the first practical semi-free-start (SFS) colliding message pair for 39-step SHA-256, improving the best 38-step SFS collision attack published at EUROCRYPT 2013. In addition, we also report the first practical free-start (FS) collision attack on 40-step SHA-224, while the previously best theoretical 40-step attack has time complexity $2^{110}$. Moreover, for the first time, we can mount practical and theoretical collision attacks on 28-step and 31-step SHA-512, respectively, which improve the best collision attack reaching only 27 steps of SHA-512 at ASIACRYPT 2015. In summary, with new techniques for finding SHA-2 characteristics, we have made notable progress in the analysis of SHA-2 after the major achievements at EUROCRYPT 2013 and ASIACRYPT 2015.
A Computational Tsirelson's Theorem for the Value of Compiled XOR Games
Nonlocal games are a foundational tool for understanding entanglement and constructing quantum protocols in settings with multiple spatially separated quantum devices. In this work, we continue the study initiated by Kalai et al. (STOC '23) of compiled nonlocal games, played between a classical verifier and a single cryptographically limited quantum device. Our main result is that the compiler proposed by Kalai et al. is sound for any two-player XOR game. A celebrated theorem of Tsirelson shows that for XOR games, the quantum value is exactly given by a semidefinite program, and we obtain our result by showing that the SDP upper bound holds for the compiled game up to a negligible error arising from the compilation. This answers a question raised by Natarajan and Zhang (FOCS '23), who showed soundness for the specific case of the CHSH game. Using our techniques, we obtain several additional results, including (1) tight bounds on the compiled value of parallel-repeated XOR games, (2) operator self-testing statements for any compiled XOR game, and (3) a ``nice" sum-of-squares certificate for any XOR game, from which operator rigidity is manifest.
The Algebraic Freelunch: Efficient Gröbner Basis Attacks Against Arithmetization-Oriented Primitives
In this paper, we present a new type of algebraic attack that applies to many recent arithmetization-oriented families of permutations, such as those used in Griffin, Anemoi, ArionHash, and XHash8, whose security relies on the hardness of the constrained-input constrained-output (CICO) problem. We introduce the FreeLunch approach: the monomial ordering is chosen so that the natural polynomial system encoding the CICO problem already is a Gröbner basis. In addition, we present a new dedicated resolution algorithm for FreeLunch systems of complexity lower than applicable state-of-the-art FGLM algorithms.
We show that the FreeLunch approach challenges the security of full-round instances of Anemoi, Arion and Griffin. We confirm these theoretical results with experimental results on those three permutations. In particular, using the FreeLunch attack combined with a new technique to bypass 3 rounds of Griffin, we recover a CICO solution for 7 out of 10 rounds of Griffin in less than four hours on one core of an AMD EPYC 7352 (2.3 GHz).
A data aggregation protocol based on TFHE
Torus Fully Homomorphic Encryption (TFHE) is a probabilistic cryptosystem over the real torus which allows one to operate directly on encrypted data without first decrypting them. We present an aggregation protocol based on a variant of TFHE for computing the sum of sensitive data, working only with the corresponding ciphertexts. Our scheme is an ideal choice for a system of smart meters - electronic devices for measuring energy consumption - that demands consumers’ privacy. In contrast to some other solutions, our proposal does not require any communication among smart meters and it is quantum-safe.
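A toy sketch of the additive aggregation idea, using plain LWE-style ciphertexts with illustrative (insecure) parameters rather than the paper's TFHE variant: ciphertext addition adds the underlying readings, so the aggregator never sees an individual measurement.

```python
import secrets

q, n = 2**32, 16
s = [secrets.randbelow(2) for _ in range(n)]            # toy binary secret key
SCALE = q // 1024                                       # message scaling factor

def enc(m):
    """Encrypt a small integer reading into the high bits of the phase."""
    a = [secrets.randbelow(q) for _ in range(n)]
    e = secrets.randbelow(7) - 3                        # small noise in [-3, 3]
    b = (sum(ai * si for ai, si in zip(a, s)) + e + m * SCALE) % q
    return a, b

def add(c1, c2):
    """Ciphertext addition adds plaintexts (noise grows additively)."""
    (a1, b1), (a2, b2) = c1, c2
    return [(x + y) % q for x, y in zip(a1, a2)], (b1 + b2) % q

def dec(c):
    a, b = c
    phase = (b - sum(ai * si for ai, si in zip(a, s))) % q
    return round(phase / SCALE) % 1024

readings = [12, 7, 30]                                  # per-meter readings
agg = enc(readings[0])
for r in readings[1:]:
    agg = add(agg, enc(r))                              # aggregator's only work
print(dec(agg))                                         # -> 49, the total
```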
An Efficient Adaptive Attack Against FESTA
At EUROCRYPT’23, Castryck and Decru, Maino et al., and Robert presented efficient attacks against the supersingular isogeny Diffie–Hellman key exchange protocol (SIDH). Drawing inspiration from these attacks, Andrea Basso, Luciano Maino, and Giacomo Pope introduced FESTA, an isogeny-based trapdoor function, along with a corresponding IND-CCA secure public key encryption (PKE) protocol, at ASIACRYPT’23. FESTA incorporates either a diagonal or circulant matrix into the secret key to mask torsion points.
In this paper, we employ a side-channel attack to construct an auxiliary verification oracle. By querying this oracle, we obtain an adaptive attack strategy to recover the secret key in FESTA when the secret matrix is circulant. Compared with existing attacks, our strategy is more efficient and more formal. Leveraging these findings, we implement our attack algorithms to recover the circulant matrix in the secret key. Finally, we demonstrate that if the secret matrix is circulant, then the adversary can successfully recover FESTA’s secret key with a polynomial number of decryption-oracle queries. Consequently, our paper illustrates that the FESTA PKE protocol with a secret circulant matrix does not achieve IND-CCA security.
Probabilistic Extensions: A One-Step Framework for Finding Rectangle Attacks and Beyond
In differential-like attacks, the process typically involves extending a distinguisher forward and backward with probability 1 for some rounds and recovering the key involved in the extended part. Particularly in rectangle attacks, a holistic key recovery strategy can be employed to yield the most efficient attacks tailored to a given distinguisher. In this paper, we treat the distinguisher and the extended part as an integrated entity and give a one-step framework for finding rectangle attacks with the purpose of reducing the overall complexity or attacking more rounds. In this framework, we propose to allow probabilistic differential propagations in the extended part and incorporate the holistic recovery strategy. Additionally, we introduce the ``split-and-bunch technique'' to further reduce the time complexity. Beyond rectangle attacks, we extend these foundational concepts to encompass differential attacks as well. To demonstrate the efficiency of our framework, we apply it to Deoxys-BC-384, SKINNY, ForkSkinny, and CRAFT, achieving a series of refined and improved rectangle attacks and differential attacks. Notably, we obtain the first 15-round attack on Deoxys-BC-384, narrowing its security margin to only one round. Furthermore, our differential attack on CRAFT extends to 23 rounds, covering two more rounds than the previous best attacks.
Partial Differential Fault Analysis on Ascon
Authenticated Encryption with Associated Data (AEAD) is a trend in applied cryptography because it combines confidentiality, integrity, and authentication into one algorithm and is more efficient than using block ciphers and hash functions separately. The Ascon algorithm, as the winner of both the CAESAR competition and the NIST LwC competition, will soon become the AEAD standard for protecting the Internet of Things and micro devices with limited computing resources. We propose a partial differential fault analysis (PDFA) technique for the Ascon algorithm, using stuck-at fault and random-nibble fault models respectively. Theoretically, after 9.9 full-round fault injections or 263 single-nibble fault injections, the 128-bit key can be completely recovered. In addition, we conduct the first discussion of this analysis method under different nonce configurations. In the nonce-respecting case, an average of 130 additional tag queries are required to complete the guessing of the faulty tag, after which this case becomes equivalent to the nonce-misuse case. Subsequent experimental results confirm the correctness of the theoretical model. Finally, we discuss some countermeasures against the proposed attacks, and we propose a new S-box that can replace the existing S-box in Ascon to render PDFA ineffective.
Massive Superpoly Recovery with a Meet-in-the-middle Framework -- Improved Cube Attacks on Trivium and Kreyvium
The cube attack extracts information about secret key bits by recovering the coefficient, called the superpoly, of the output bit with respect to a subset of plaintext/IV bits, which is called a cube. While the division property provides an efficient way to detect the structure of the superpoly, superpoly recovery can still be prohibitively costly if the number of rounds is sufficiently high. In particular, Core Monomial Prediction (CMP) was proposed at ASIACRYPT 2022 as a scaled-down version of Monomial Prediction (MP), which sacrifices accuracy for efficiency but ultimately gets stuck at 848 rounds of Trivium.
In this paper, we provide new insights into CMP by elucidating the algebraic meaning of the core monomial trails. We prove that it is sufficient to recover the superpoly by extracting all the core monomial trails, an approach based solely on CMP, thus demonstrating that CMP can achieve perfect accuracy as MP does. We further reveal that CMP is still MP in essence, but with variable substitutions on the target function. Inspired by the divide-and-conquer strategy that has been widely used in previous literature, we design a meet-in-the-middle (MITM) framework, in which the CMP-based approach can be embedded to achieve a speedup.
To illustrate the power of these new techniques, we apply the MITM framework to Trivium, Grain-128AEAD and Kreyvium. As a result, not only can the previous computational cost of superpoly recovery be reduced (e.g., 5x faster for superpoly recovery on 192-round Grain-128AEAD), but we also succeed in recovering superpolies for up to 851 rounds of Trivium and up to 899 rounds of Kreyvium. This surpasses the previous best results by 3 and 4 rounds, respectively. Using the memory-efficient Möbius transform proposed at EUROCRYPT 2021, we can perform key-recovery attacks on the target ciphers even though the superpoly may contain over $2^{40}$ monomials. This leads to the best cube attacks on the target ciphers.
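For readers new to cube attacks, the superpoly is what remains after XOR-summing the output bit over a cube; a minimal sketch on an invented toy output bit:

```python
from itertools import product

def superpoly_value(f, cube_indices, iv_len, key):
    """Evaluate the superpoly of `cube_indices` at `key` by XOR-summing the
    output bit f(key, iv) over all assignments of the cube variables
    (non-cube IV bits fixed to 0)."""
    acc = 0
    for bits in product([0, 1], repeat=len(cube_indices)):
        iv = [0] * iv_len
        for idx, b in zip(cube_indices, bits):
            iv[idx] = b
        acc ^= f(key, iv)
    return acc

# Toy output bit over GF(2): f = v0*v1*(k0 + k1*k2) + v0*k2 + v2.
def toy_f(key, iv):
    k0, k1, k2 = key
    v0, v1, v2 = iv
    return (v0 & v1 & (k0 ^ (k1 & k2))) ^ (v0 & k2) ^ v2

# Summing over the cube {v0, v1} isolates the superpoly k0 + k1*k2: every
# term not divisible by v0*v1 cancels in the XOR sum.
key = (1, 1, 0)
print(superpoly_value(toy_f, [0, 1], 3, key))   # -> k0 ^ (k1 & k2) = 1
```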
VeriSimplePIR: Verifiability in SimplePIR at No Online Cost for Honest Servers
We present VeriSimplePIR, a verifiable version of the state-of-the-art semi-honest SimplePIR protocol. VeriSimplePIR is a stateful verifiable PIR scheme guaranteeing that all queries are consistent with a fixed, well-formed database. It is the first efficient verifiable PIR scheme to not rely on an honest digest to ensure security; any digest, even one produced by a malicious server, is sufficient to commit to some database. This is due to our extractable verification procedure, which can extract the entire database from the consistency proof checked against each response.
Furthermore, VeriSimplePIR ensures this strong security guarantee without compromising the performance of SimplePIR. The online communication overhead is roughly $1.1$-$1.5\times$ SimplePIR, and the online computation time on the server is essentially the same. We achieve this low overhead via a novel one-time preprocessing protocol that generates a reusable proof that can verify any number of subsequent query-response pairs as long as no malicious behavior is detected. As soon as the verification procedure rejects a response from the server, the offline phase must be rerun to compute a new proof. VeriSimplePIR represents an approach to maliciously secure cryptography that is highly optimized for honest parties while maintaining security even in the presence of malicious adversaries.
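To place the verification layer in context, the underlying SimplePIR access pattern looks roughly as follows (toy, insecure parameters; in the real protocol the client downloads the hint $DB \cdot A$ once instead of knowing $DB$, and VeriSimplePIR's consistency proof is not modeled here):

```python
import secrets

# Database: an m x m matrix over Z_p. The client uploads an LWE encryption
# of a scaled unit vector selecting a column; the server answers with one
# matrix-vector product.
q, p, n, m = 2**32, 2**8, 32, 8
delta = q // p
A = [[secrets.randbelow(q) for _ in range(n)] for _ in range(m)]   # public
s = [secrets.randbelow(q) for _ in range(n)]                       # client key
DB = [[secrets.randbelow(p) for _ in range(m)] for _ in range(m)]

col = 3                                              # column the client wants
qry = [(sum(A[j][k] * s[k] for k in range(n))        # LWE mask
        + secrets.randbelow(5) - 2                   # small error
        + (delta if j == col else 0)) % q            # scaled unit vector
       for j in range(m)]

ans = [sum(DB[i][j] * qry[j] for j in range(m)) % q for i in range(m)]  # server

# Client recovery: subtract DB*(A s), derivable from the one-time hint DB*A,
# then round each entry to the nearest multiple of delta.
hint = [sum(DB[i][j] * sum(A[j][k] * s[k] for k in range(n)) for j in range(m)) % q
        for i in range(m)]
column = [round(((ans[i] - hint[i]) % q) / delta) % p for i in range(m)]
assert column == [DB[i][col] for i in range(m)]
```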
A New Approach for Non-Interactive Zero-Knowledge from Learning with Errors
We put forward a new approach for achieving non-interactive zero-knowledge proofs (NIZKs) from the learning with errors (LWE) assumption (with a subexponential modulus-to-noise ratio). We provide an LWE-based construction of a hidden-bits generator that gives rise to a NIZK via the celebrated hidden-bits paradigm. A notable feature of our construction is its simplicity. Our construction employs lattice trapdoors, but beyond that uses only simple operations. Unlike prior solutions, we do not rely on a correlation-intractability argument, nor do we utilize fully homomorphic encryption techniques. Our solution provides a new methodology that adds to the diversity of techniques for solving this fundamental problem.
From Random Probing to Noisy Leakages Without Field-Size Dependence
Side-channel attacks are devastating attacks targeting cryptographic implementations. To protect against these attacks, various countermeasures have been proposed -- in particular, so-called masking schemes. Masking schemes work by hiding sensitive information via secret sharing of all intermediate values that occur during the evaluation of a cryptographic implementation. Over the last decade, there has been broad interest in designing and formally analyzing such schemes. The random probing model considers leakage where the value on each wire leaks with some probability $\epsilon$. This model is important as it implies security in the noisy leakage model via a reduction by Duc et al. (Eurocrypt 2014). Noisy leakages are considered the "gold standard" for analyzing masking schemes as they accurately model many real-world physical leakages. Unfortunately, the reduction of Duc et al. is non-tight, and in particular requires that the amount of noise increases by a factor of $|\mathbb{F}|$ for circuits that operate over $\mathbb{F}$ (where $\mathbb{F}$ is a finite field). In this work, we give a generic transformation from random probing to average probing, which avoids this loss of $|\mathbb{F}|$. Since average probing is identical to the noisy leakage model (Eurocrypt 2014), this yields for the first time a security analysis of masked circuits where the noise parameter $\delta$ in the noisy leakage model is independent of $|\mathbb{F}|$. The latter is particularly important for cryptographic schemes operating over large fields, e.g., the AES or the recently standardized post-quantum schemes.
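A minimal sketch of the masking countermeasure itself, for orientation (additive sharing modulo a toy modulus; real schemes also mask every intermediate computation, not just the stored secret):

```python
import secrets

P = 2**8   # toy modulus standing in for the field size |F|

def share(x, n):
    """Additive masking: split x into n shares; any n-1 of them are jointly
    uniform, so probing a single wire reveals nothing about x."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def unshare(shares):
    return sum(shares) % P

s = share(0x2A, 3)
assert unshare(s) == 0x2A
```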
Tight Indistinguishability Bounds for the XOR of Independent Random Permutations by Fourier Analysis
The XOR of two independent permutations (XoP) is a well-known construction for achieving security beyond the birthday bound when implementing a pseudorandom function using a block cipher (i.e., a pseudorandom permutation). The idealized construction (where the permutations are uniformly chosen and independent) and its variants have been extensively analyzed over nearly 25 years.
The best-known asymptotic information-theoretic indistinguishability bound for the XoP construction is $O(q/2^{1.5n})$, derived by Eberhard in 2017, where $q$ is the number of queries and $n$ is the block length.
A generalization of the XoP construction outputs the XOR of $r \geq 2$ independent permutations, and has also received significant attention in both the single-user and multi-user settings. In particular, for $r = 3$, the best-known bound (obtained by Choi et al. [ASIACRYPT'22]) is about $q^2/2^{2.5 n}$ in the single-user setting and $\sqrt{u} q_{\max}^2/2^{2.5 n}$ in the multi-user setting (where $u$ is the number of users and $q_{\max}$ is the number of queries per user).
In this paper, we prove an indistinguishability bound of $q/2^{(r - 0.5)n}$ for the (generalized) XoP construction in the single-user setting, and a bound of $\sqrt{u} q_{\max}/2^{(r - 0.5)n}$ in the multi-user setting. In particular, for $r=2$, we obtain the bounds $q/2^{1.5n}$ and $\sqrt{u} q_{\max}/2^{1.5n}$ in single-user and multi-user settings, respectively. For $r=3$ the corresponding bounds are $q/2^{2.5n}$ and $\sqrt{u} q_{\max}/2^{2.5n}$. All of these bounds hold assuming $q < 2^{n}/2$ (or $q_{\max} < 2^{n}/2$).
Compared to previous works, we improve all the best-known bounds for the (generalized) XoP construction in the multi-user setting, and the best-known bounds for the generalized XoP construction for $r \geq 3$ in the single-user setting (assuming $q \geq 2^{n/2}$). For the basic two-permutation XoP construction in the single-user setting, our concrete bound of $q/2^{1.5n}$ stands in contrast to the asymptotic bound of $O(q/2^{1.5n})$ by Eberhard.
Since all of our bounds are matched (up to constant factors) for $q \geq 2^{n/2}$ by attacks published by Patarin in 2008 (and their generalizations to the multi-user setting), they are all tight.
We obtain our results by Fourier analysis of Boolean functions. Most of our technical work involves bounding (sums of) Fourier coefficients of the density function associated with sampling without replacement. While the proof of Eberhard relies on similar bounds, our proof is elementary and simpler.
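For orientation, the idealized XoP construction itself is one line once two random permutations are fixed; the toy experiment below also shows the collision behavior that separates it from a single permutation (parameters are illustrative):

```python
import random

n = 16
N = 1 << n
rng = random.Random(0)

# Idealized XoP: two independent uniformly random permutations of n-bit
# strings, XORed pointwise, used as a pseudorandom *function*.
P1 = list(range(N)); rng.shuffle(P1)
P2 = list(range(N)); rng.shuffle(P2)

def xop(x: int) -> int:
    return P1[x] ^ P2[x]

# A single permutation never collides, so it is distinguishable from a
# random function after ~2^(n/2) queries; XoP outputs do collide like a
# random function's, which is what the q/2^(1.5n) bound quantifies.
outs = [xop(x) for x in range(4096)]
print(len(outs) - len(set(outs)), "collisions among 2^12 outputs")
```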
Solving the Tensor Isomorphism Problem for special orbits with low rank points: Cryptanalysis and repair of an Asiacrypt 2023 commitment scheme
The Tensor Isomorphism Problem (TIP) has been shown to be equivalent to the matrix code equivalence problem, making it an interesting candidate on which to build post-quantum cryptographic primitives. These hard problems have already been used in protocol development; one such protocol, MEDS, is currently in Round 1 of NIST's call for additional post-quantum digital signatures.
In this work, we consider the TIP for a special class of tensors. The hardness of the decisional version of this problem is the foundation of a commitment scheme proposed by D'Alconzo, Flamini, and Gangemi (Asiacrypt 2023). We present polynomial-time algorithms for the decisional and computational versions of TIP for special orbits, which implies that the commitment scheme is not secure. The key observations of these algorithms are that these special tensors contain some low-rank points, and their stabilizer groups are not trivial.
With these new developments in the security of TIP in mind, we give a new commitment scheme based on the general TIP that is non-interactive, post-quantum, and statistically binding, making no new assumptions. Such a commitment scheme does not currently exist in the literature.
RAMenPaSTA: Parallelizable Scalable Transparent Arguments of Knowledge for RAM Programs
Incremental Verifiable Computation (IVC) allows a prover to prove to a verifier the correct execution of a sequential computation. Recent works focus on improving the universality and efficiency of IVC schemes, which can be categorized into accumulation-based and folding-based IVCs, with folding-based ones being more efficient (due to their deferred proof generation until the final step). Unfortunately, both approaches satisfy only heuristic security, as they model the Random Oracle (RO) as a circuit in their non-constant-depth recursive composition of the base scheme. This drawback is two-fold: to connect consecutive execution steps, the RO is recursively modeled as a circuit during the folding or accumulation process, and again in the final SNARK wrapper circuit (a common practice in folding-based IVCs).
We revisit this problem, with a focus on folding-based IVCs due to their efficiency, and propose detaching the RO invocations from the folding circuit. We can instead accumulate such invocations, yielding the so-called Conditional Folding (CF) scheme, which overcomes the first drawback. One can consider our CF scheme a hybrid folding-accumulation scheme with provable security. We provide a non-trivial practical construction of our CF scheme that is natively parallelizable, which offers great efficiency. We rigorously prove the security of our CF scheme, including the case of folding in parallel, and our scheme can be made non-interactive using Fiat-Shamir. Our CF scheme is generic and does not require a trusted setup. It can be adapted to construct the first IVC for RAM programs, i.e., Parallelizable Scalable Transparent Arguments of Knowledge for RAM Programs, which we dub RAMenPaSTA, that can be used to build zero-knowledge virtual machines (zkVMs). Both our CF scheme and RAMenPaSTA may be of independent research interest.
Split-State Non-Malleable Codes and Secret Sharing Schemes for Quantum Messages
Non-malleable codes are fundamental objects at the intersection of cryptography and coding theory. These codes provide security guarantees even in settings where error correction and detection are impossible, and have found applications to several other cryptographic tasks. One of the strongest and most well-studied adversarial tampering models is $2$-split-state tampering. Here, a codeword is split into two parts which are stored in physically distant servers, and the adversary can then independently tamper with each part using arbitrary functions. This model can be naturally extended to the secret sharing setting with several parties by having the adversary independently tamper with each share. Previous works on non-malleable coding and secret sharing in the split-state tampering model only considered the encoding of classical messages. Furthermore, until recent work by Aggarwal, Boddu, and Jain (IEEE Trans. Inf. Theory 2024 & arXiv 2022), adversaries with quantum capabilities and shared entanglement had not been considered, and it is a priori not clear whether previous schemes remain secure in this model.
In this work, we introduce the notions of split-state non-malleable codes and secret sharing schemes for quantum messages secure against quantum adversaries with shared entanglement. Then, we present explicit constructions of such schemes that achieve low-error non-malleability. More precisely, we construct efficiently encodable and decodable split-state non-malleable codes and secret sharing schemes for quantum messages preserving entanglement with external systems and achieving security against quantum adversaries having shared entanglement with codeword length $n$, any message length at most $n^{\Omega(1)}$, and error $\varepsilon=2^{-{n^{\Omega(1)}}}$. In the easier setting of average-case non-malleability, we achieve efficient non-malleable coding with rate close to $1/11$.
The Impact of Reversibility on Parallel Pebbling
The (parallel) classical black pebbling game is a helpful abstraction which allows us to analyze the resources (time, space, space-time, cumulative space) necessary to evaluate a function $f$ with a static data-dependency graph $G$ on a (parallel) computer. In particular, the parallel black pebbling game has been used as a tool to quantify the (in)security of Data-Independent Memory-Hard Functions (iMHFs). However, the classical black pebbling game is not suitable for analyzing the cost of quantum preimage attacks. Thus, Blocki et al. (TCC 2022) introduced the parallel reversible pebbling game as a tool to analyze resource requirements for a quantum computer. While there is an extensive line of work analyzing pebbling complexity in the (parallel) black pebbling game, comparatively little is known about the parallel reversible pebbling game. Our first result is a lower bound of $\Omega\left(N^{1+\sqrt{\frac{2-o(1)}{\log N}}}\right)$ on the reversible cumulative pebbling cost of a line graph on $N$ nodes. This yields a separation between classical and reversible pebbling costs, demonstrating that the reversibility constraint can increase cumulative pebbling costs (and space-time costs) by a multiplicative factor of $N^{(\sqrt{2}+o(1))/\sqrt{\log N}}$ --- the classical pebbling cost (space-time or cumulative) for a line graph is just $\mathcal{O}(N)$. On the positive side, we prove that any classical parallel pebbling can be transformed into a reversible pebbling strategy whilst increasing space-time (resp. cumulative memory) costs by a multiplicative factor of at most $\mathcal{O}\left(N^{\sqrt{\frac{8}{\log N}}}\right)$ (resp. $\mathcal{O}\left(N^{\mathcal{O}(1)/\sqrt[4]{\log N}}\right)$). We also analyze the impact of the reversibility constraint on the cumulative pebbling cost of depth-robust and depth-reducible DAGs, exploiting reversibility to improve constant factors in a prior lower bound of Alwen et al. (EUROCRYPT 2017). For depth-reducible DAGs, we show that the state-of-the-art recursive pebbling techniques of Alwen et al. (EUROCRYPT 2017) can be converted into a recursive reversible pebbling attack without any asymptotic increase in pebbling costs. Finally, we extend a result of Blocki et al. (ITCS 2020) to show that it is Unique Games hard to approximate the reversible cumulative pebbling cost of a DAG $G$ to within any constant factor.
Practical Attack on All Parameters of the DME Signature Scheme
DME is a multivariate scheme submitted to the call for additional signatures recently launched by NIST. Its performance is one of the best among all the candidates. The public key is constructed from the alternation of very structured linear and non-linear components that constitute the private key, the latter being defined over an extension field. We exploit these structures by proposing an algebraic attack which is practical on all DME parameters.
Leakage-Tolerant Circuits
A leakage-resilient circuit for $f:\{0,1\}^n\to\{0,1\}^m$ is a randomized Boolean circuit $C$ mapping a randomized encoding of an input $x$ to an encoding of $y=f(x)$, such that applying any leakage function $L\in \cal L$ to the wires of $C$ reveals essentially nothing about $x$. A leakage-tolerant circuit achieves the stronger guarantee that even when $x$ and $y$ are not protected by any encoding, the output of $L$ can be simulated by applying some $L'\in \cal L$ to $x$ and $y$ alone. Thus, $C$ is as secure as an ideal hardware implementation of $f$ with respect to leakage from $\cal L$.
Leakage-resilient circuits were constructed for low-complexity classes $\cal L$, including (length-$t$ output) $\mathcal{AC}0$ functions, parities, and functions with bounded communication complexity. In contrast, leakage-tolerant circuits were only known for the simple case of probing leakage, where $L$ outputs the values of $t$ wires in $C$.
We initiate a systematic study of leakage-tolerant circuits for natural classes $\cal L$ of global leakage functions, obtaining the following main results.
$\textbf{Leakage-tolerant circuits for depth-1 leakage.}$ Every circuit $C_f$ for $f$ can be efficiently compiled into an $\cal L$-tolerant circuit $C$ for $f$, where $\cal L$ includes all leakage functions $L$ that output either $t$ parities or $t$ disjunctions (alternatively, conjunctions) of any number of wires or their negations. In the case of parities, our simulator runs in $2^{O(t)}$ time. We provide partial evidence that this may be inherent.
$\textbf{Application to stateful leakage-resilient circuits.}$ We present a general transformation from (stateless) leakage-tolerant circuits to stateful leakage-resilient circuits. Using this transformation, we obtain the first constructions of stateful $t$-leakage-resilient circuits that tolerate a continuous parity/disjunction/conjunction leakage in which the circuit size grows sub-quadratically with $t$. Interestingly, here we can obtain $\mathtt{poly}(t)$-time simulation even in the case of parities.
Transaction Fee Mechanism Design in a Post-MEV World
The incentive-compatibility properties of blockchain transaction fee mechanisms have been investigated with passive block producers that are motivated purely by the net rewards earned at the consensus layer. This paper introduces a model of active block producers that have their own private valuations for blocks (representing, for example, additional value derived from the application layer). The block producer surplus in our model can be interpreted as one of the more common colloquial meanings of the phrase ``maximal extractable value (MEV).''
We first prove that transaction fee mechanism design is fundamentally more difficult with active block producers than with passive ones: with active block producers, no non-trivial or approximately welfare-maximizing transaction fee mechanism can be incentive-compatible for both users and block producers. These impossibility results can be interpreted as a mathematical justification for augmenting transaction fee mechanisms with additional components such as orderflow auctions, block producer competition, trusted hardware, or cryptographic techniques.
We then proceed to a more fine-grained model of block production that is inspired by current practice, in which we distinguish the roles of ``searchers'' (who actively identify opportunities for value extraction from the application layer and compete for the right to take advantage of them) and ``proposers'' (who participate directly in the blockchain protocol and make the final choice of the published block). Searchers can effectively act as an ``MEV oracle'' for a transaction fee mechanism, thereby enlarging the design space. Here, we first consider a transaction fee mechanism that resembles how searchers have traditionally been incorporated into the block production process, with each transaction effectively sold off to a searcher through a first-price auction. We then explore the design space with searchers more generally, and design a mechanism that circumvents our impossibility results for mechanisms without searchers. Our mechanism (the ``SAKA'' mechanism) is deterministic, incentive-compatible (for users, searchers, and the block producer), and sybil-proof, and it guarantees roughly 50% of the maximum-possible welfare when transaction sizes are small relative to block sizes. We conclude with a matching negative result: even when transactions are small relative to blocks, no incentive-compatible, sybil-proof, and deterministic transaction fee mechanism can guarantee more than 50% of the maximum-possible welfare.
Fuzzy Private Set Intersection with Large Hyperballs
Traditional private set intersection (PSI) involves a receiver and a sender holding sets $X$ and $Y$, respectively, with the receiver learning only the intersection $X\cap Y$.
We turn our attention to its fuzzy variant, where the receiver holds $|X|$ hyperballs of radius $\delta$ in a metric space and the sender has $|Y|$ points.
Representing the hyperballs by their center, the receiver learns the points $x\in X$ for which there exists $y\in Y$ such that $\mathsf{dist}(x,y)\leq \delta$ with respect to some distance metric.
Previous approaches either require general-purpose multi-party computation (MPC) techniques like garbled circuits or fully homomorphic encryption (FHE), leak details about the sender’s precise inputs, support limited distance metrics, or scale poorly with the hyperballs' volume.
This work presents the first black-box construction for fuzzy PSI (including other variants such as PSI cardinality, labeled PSI, and circuit PSI), which can handle polynomially large radius and dimension (i.e., a potentially exponentially large volume) in two interaction messages, supporting general $L_{p\in[1,\infty]}$ distance, without relying on garbled circuits or FHE. The protocol excels in both asymptotic and concrete efficiency compared to existing works. For security, we rely solely on the Decisional Diffie-Hellman (DDH) assumption in the random oracle model.
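For clarity, the ideal functionality that fuzzy PSI emulates can be stated in a few lines of plain (non-private) code; the protocol computes this output without revealing anything else about either party's inputs:

```python
def fuzzy_intersection(centers, points, delta, p=float("inf")):
    """Ideal functionality in the clear: return every receiver center that
    has some sender point within L_p distance delta of it."""
    def dist(x, y):
        diffs = [abs(a - b) for a, b in zip(x, y)]
        return max(diffs) if p == float("inf") else sum(d**p for d in diffs) ** (1 / p)
    return [x for x in centers if any(dist(x, y) <= delta for y in points)]

print(fuzzy_intersection([(0, 0), (10, 10)], [(1, 2), (40, 40)], delta=3))
# -> [(0, 0)]: only the first hyperball contains a sender point
```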
How to Validate a Verification?
This paper introduces \textsl{signature validation}, a primitive allowing any \underline{t}hird party $T$ (\underline{T}héodore) to verify that a \underline{v}erifier $V$ (\underline{V}adim) computationally verified a signature $s$ on a message $m$ issued by a \underline{s}igner $S$ (\underline{S}arah).
A naive solution consists of Sarah sending $x=\{m,\sigma_s\}$, where $\sigma_s$ is Sarah's signature on $m$, and of Vadim confirming reception with a signature $\sigma_v$ on $x$.
Unfortunately, this only attests \textsl{proper reception} by Vadim, i.e. that Vadim \textsl{could have checked} $x$ and not that Vadim \textsl{actually verified} $x$. By ``actually verifying'' we mean providing a proof or a convincing argument that a program running on Vadim's machine checked the correctness of $x$.
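For concreteness, the naive baseline and its shortfall can be phrased in a few lines of Python (using the `cryptography` package's Ed25519 API; the byte encoding of $x$ is an arbitrary choice for this sketch):

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The naive protocol, in code: it yields evidence of *reception*,
# not of *verification*.
sarah = Ed25519PrivateKey.generate()
vadim = Ed25519PrivateKey.generate()

m = b"message"
sigma_s = sarah.sign(m)            # Sarah's signature on m
x = m + sigma_s                    # x = {m, sigma_s}
sigma_v = vadim.sign(x)            # Vadim's acknowledgement of x

# Theodore can check both signatures ...
sarah.public_key().verify(sigma_s, m)
vadim.public_key().verify(sigma_v, x)
# ... yet nothing proves that Vadim ever ran verify(sigma_s, m) himself.
```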
This paper proposes several solutions for doing so, thereby providing a useful building-block in numerous commercial and legal interactions for proving informed consent.
Attribute-Based Signatures with Advanced Delegation, and Tracing
Attribute-based cryptography allows fine-grained control over the use of the private key. In particular, an attribute-based signature (ABS) specifies the capabilities of the signer, who can only sign messages associated with a policy that is authorized by their set of attributes. Furthermore, we can expect signatures not to leak any information about the identity of the signer. ABS is a useful tool for identity-preserving authentication processes that require granular access control, and it can be enhanced with additional properties, for example delegation, where users are able to manage a set of keys derived from their original one.
In this paper, we address delegation of signing keys. Our first delegation works for any subset of the original attributes, which is the intuitive approach of delegation. Furthermore, we also provide another kind of delegation where the delegator can choose a policy at delegation time to produce keys that can sign any message under this specific policy. This last approach to delegation is a direct application of a new version of the indexing technique, which was first introduced by Okamoto and Takashima in order to prove adaptive security in ABS and its counterpart for encryption, ABE.
On top of that, we prove that our scheme is compatible with a well-studied feature of ABS, traceability, using an approach based on linearly homomorphic signatures.
All our schemes also guarantee the anonymity of the real signer.
The unforgeability of our schemes is proven under the SXDH assumption, and our constructions use the Dual Pairing Vector Spaces (DPVS) framework developed by Okamoto and Takashima, which has been widely used for all kinds of attribute-based and functional cryptography mechanisms.
Registered Functional Encryptions from Pairings
This work initiates the study of concrete registered functional encryption (Reg-FE) beyond ``all-or-nothing'' functionalities:
- We build the first Reg-FE for linear function or inner-product evaluation (Reg-IPFE) from pairings. The scheme achieves adaptive IND-security under the $k$-Lin assumption in the prime-order bilinear group. A minor modification yields the first Registered Inner-Product Encryption (Reg-IPE) scheme from the $k$-Lin assumption. Prior work achieves the same security in the generic group model.
- We build the first Reg-FE for quadratic functions (Reg-QFE) from pairings. The scheme achieves very selective simulation-based security (SIM-security) under the bilateral $k$-Lin assumption in the prime-order bilinear group. Here, ``very selective'' means that the adversary claims the challenge messages, all quadratic functions to be registered, and all corrupted users at the beginning.
Besides focusing on the compactness of the master public key and helper keys, we also aim for compact ciphertexts in Reg-FE.
Let $L$ be the number of slots and $n$ be the input size. Our first Reg-IPFE has weakly compact ciphertexts of size $O(n\cdot\log L)$, while our second Reg-QFE has compact ciphertexts of size $O(n+\log L)$. Technically, for our first Reg-IPFE, we employ the nested dual-system method within the context of Reg-IPFE; for our second Reg-QFE, we follow Wee's ``IPFE-to-QFE'' transformation [TCC '20] but devise a set of new techniques that make our pairing-based Reg-IPFE compatible.
Along the way, we introduce a new notion named Pre-Constrained Registered IPFE which generalizes slotted Reg-IPFE by constraining the form of functions that can be registered.
Haven++: Batched and Packed Dual-Threshold Asynchronous Complete Secret Sharing with Applications
Asynchronous complete secret sharing (ACSS) is a foundational primitive in the design of distributed algorithms and cryptosystems that require confidentiality. ACSS permits a dealer to distribute a secret to a collection of $n$ servers so that everyone holds shares of a polynomial containing the dealer's secret.
This work contributes a new ACSS protocol, called Haven++, that uses packing and batching to make asymptotic and concrete advances in the design and application of ACSS for large secrets. Haven++ allows the dealer to pack multiple secrets in a single sharing phase, and to reconstruct either one or all of them later. For even larger secrets, we contribute a batching technique to amortize the cost of proof generation and verification across multiple invocations of our protocol.
The result is an asymptotic improvement in the worst-case amortized communication and computation complexity, both for ACSS itself and for its application to asynchronous distributed key generation. Our ADKG based on Haven++ achieves, for the first time, an optimal worst case amortized communication complexity of $O(\kappa n)$ without a trusted setup. To show the practicality of Haven++, we implement it and find that it outperforms the work of Yurek et al. (NDSS 2022) by more than an order of magnitude when there are malicious, faulty parties.
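A minimal sketch of the packing idea in isolation (plain packed Shamir sharing over a toy prime field, without Haven++'s asynchronous verification machinery): one polynomial carries several secrets at fixed evaluation points, so a single sharing phase distributes all of them.

```python
import secrets as rnd

P = 2**61 - 1   # toy prime field

def lagrange_eval(points, x):
    """Evaluate the unique polynomial through `points` at x, modulo P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * ((x - xj) % P) % P
                den = den * ((xi - xj) % P) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def packed_shares(secret_vec, t, n):
    """Packed Shamir: one polynomial of degree t+k-1 encodes the k secrets
    at points -1..-k; party i receives f(i). Any t+k shares reconstruct
    every secret, while roughly t shares reveal nothing."""
    anchors = [(-(j + 1) % P, s) for j, s in enumerate(secret_vec)]
    anchors += [(j + 1, rnd.randbelow(P)) for j in range(t)]   # blinding
    return [(i, lagrange_eval(anchors, i)) for i in range(1, n + 1)]

shares = packed_shares([11, 22, 33], t=2, n=7)
assert lagrange_eval(shares[:5], -1 % P) == 11   # recover the first secret
assert lagrange_eval(shares[:5], -3 % P) == 33   # ... and the third
```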
Proofs for Deep Thought: Accumulation for large memories and deterministic computations
An important part of proving machine computation is to prove the correctness of the read and write operations performed on the memory, which we term memory-proving. Previous methodologies required proving Merkle tree openings or multi-set hashes, resulting in relatively large proof circuits. We construct an efficient memory-proving Incrementally Verifiable Computation (IVC) scheme from accumulation, which is particularly useful for machine computations with large memories and deterministic steps. In our scheme, the IVC prover $P_{\mathsf{IVC}}$ has cost entirely independent of the memory size $T$ and only needs to commit to approximately 15 field elements per read/write operation, marking a more than 100x improvement over prior work. We further reduce this cost by employing a modified, accumulation-friendly version of the GKR protocol. In the optimized version, $P_{\mathsf{IVC}}$ only needs to commit to 6 small memory-table elements per read/write. If the table stores 32-bit values, then this is equivalent to committing to less than a single field element per read and write. Our modified GKR protocol is also valuable for proving other deterministic computations within the context of IVC. Our memory-proving protocol can be extended to support key-value stores.
Under What Conditions Is Encrypted Key Exchange Actually Secure?
A Password-Authenticated Key Exchange (PAKE) protocol allows two parties to agree upon a cryptographic key, in the setting where the only secret shared in advance is a low-entropy password. The standard security notion for PAKE is in the Universal Composability (UC) framework. In recent years there have been a large number of works analyzing the UC-security of Encrypted Key Exchange (EKE), the very first PAKE protocol, and its One-encryption variant (OEKE), both of which compile an unauthenticated Key Agreement (KA) protocol into a PAKE.
In this work, we present a comprehensive and thorough study of the UC-security of both EKE and OEKE in the most general setting and using the most efficient building blocks:
1. We show that among the five existing results on the UC-security of (O)EKE using a general KA protocol, all are incorrect;
2. We show that for (O)EKE to be UC-secure, the underlying KA protocol needs to satisfy several additional security properties: though some of these are closely related to existing security properties, some are new, and all are missing from existing works on (O)EKE;
3. We give UC-security proofs for EKE and OEKE using the Programmable-Once Public Function (POPF), which is the most efficient instantiation to date and is around 4 times faster than the standard instantiation using an Ideal Cipher (IC).
Our results in particular allow for PAKE constructions from post-quantum KA protocols such as Kyber. We also present a security analysis of POPF using a new, weakened notion of almost UC-realizing a functionality, which is still sufficient for proving composed protocols to be fully UC-secure.
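For orientation, the EKE compilation itself fits in a few lines; the subtlety this paper addresses lies entirely in what the encryption layer and the KA messages must look like. A toy skeleton of ours follows, using X25519 as the KA and a hash-derived XOR pad standing in for the IC or POPF, which it emphatically is not:

    # Toy EKE skeleton: both sides encrypt their KA messages under the password.
    # WARNING (illustration only): a hash-derived XOR pad is NOT an ideal cipher,
    # and X25519 public keys are not uniform bitstrings, so this toy is insecure;
    # such gaps are exactly what the paper's extra KA properties must close.
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.x25519 import (
        X25519PrivateKey, X25519PublicKey)
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    def pad(pw: bytes, label: bytes) -> bytes:
        return hashlib.sha256(pw + label).digest()  # 32-byte pad

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    pw = b"correct horse"
    a_sk, b_sk = X25519PrivateKey.generate(), X25519PrivateKey.generate()
    a_pub = a_sk.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    b_pub = b_sk.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

    # Each side sends its KA message "encrypted" under the password.
    a_msg, b_msg = xor(a_pub, pad(pw, b"A")), xor(b_pub, pad(pw, b"B"))

    # Each side decrypts the other's message and completes the KA.
    k_a = a_sk.exchange(X25519PublicKey.from_public_bytes(xor(b_msg, pad(pw, b"B"))))
    k_b = b_sk.exchange(X25519PublicKey.from_public_bytes(xor(a_msg, pad(pw, b"A"))))
    assert k_a == k_b  # both derive the same session key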
Circuit Bootstrapping: Faster and Smaller
We present a novel circuit bootstrapping algorithm that outperforms the state-of-the-art TFHE method with a 9.9× speedup and a 15.6× key size reduction. These improvements can be attributed to two technical contributions. Firstly, we redesign the circuit bootstrapping workflow to operate exclusively on the ring ciphertext type, which eliminates the need for conversion between LWE and RLWE ciphertexts. Secondly, we improve the LMKC+ blind rotation algorithm by reducing the number of automorphisms, and then propose the first automorphism-type multi-value functional bootstrapping. These automorphism-based techniques lead to further key size optimization and are of independent interest beyond circuit bootstrapping. Based on our new circuit bootstrapping, we can evaluate AES-128 in 26.2 s (single thread), achieving a 10.3× speedup compared with the state-of-the-art TFHE-based approach.
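For readers unfamiliar with the machinery, the ring operation underlying blind rotation is multiplication by a monomial in the negacyclic ring $Z_q[X]/(X^N+1)$; a minimal sketch of that operation (ours, far removed from a full TFHE bootstrap):

    # Multiplying a polynomial by X^k in Z_q[X]/(X^N + 1) rotates its
    # coefficients negacyclically; blind rotation performs this with an
    # encrypted k to land a lookup table on the desired slot.
    def monomial_mul(poly, k, q):
        N = len(poly)
        out = [0] * N
        k %= 2 * N                      # X^(2N) = 1 in this ring
        for i, c in enumerate(poly):
            j, sign = i + k, 1
            while j >= N:               # X^N = -1: wrap with a sign flip
                j -= N
                sign = -sign
            out[j] = (out[j] + sign * c) % q
        return out

    table = [0, 1, 2, 3]                # toy lookup table, N = 4
    print(monomial_mul(table, 1, q=8))  # [5, 0, 1, 2]  (i.e., -3 mod 8 wraps in)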
Theoretical Explanation and Improvement of Deep Learning-aided Cryptanalysis
At CRYPTO 2019, Gohr demonstrated that differential-neural distinguishers (DNDs) for Speck32/64 can learn more features than the differential distribution tables (DDT) of classical cryptanalysis. Furthermore, a non-classical key recovery procedure was devised by combining the Upper Confidence Bound (UCB) strategy and the BayesianKeySearch algorithm. Consequently, the time complexity of 11-round key recovery attacks on Speck32/64 was significantly reduced compared with the state-of-the-art results in classical cryptanalysis. This advancement in deep learning-assisted cryptanalysis has opened up new possibilities. However, the specific encryption features exploited by DNDs remain unclear.
In this paper, we begin by analyzing the features learned by DNDs based on the probability distribution of a ciphertext pair. Our analysis reveals that a DND not only learns the differential features of the ciphertext pair but also captures the XOR information of the left and right branches of the ciphertext pair. This explains why a DND can outperform the DDT in certain cases. For other ciphers, we can likewise predict from the probability distribution of the ciphertext pair whether deep learning methods can achieve results superior to classical methods.
Next, we tailor the input data format and network structure to the specific features that can be learned, and train DNDs accordingly. With these modifications, the number of network parameters can be reduced to only 1/16 of that of previous networks while maintaining high precision, and the training time for the DNDs is significantly reduced.
Finally, to improve the efficiency of deep learning-assisted cryptanalysis, we introduce Bayes-UCB to select promising ciphertext structures more efficiently, along with an improved BayesianKeySearch algorithm that retains the guessed keys with the highest scores. We use both methods to launch 11-round, 12-round, and 13-round key recovery attacks on Speck32/64. Under the same conditions, the success rate of 11-round key recovery attacks increases from Gohr's 36.1% to 52.8%, that of 12-round attacks from Gohr's 39% to 50%, and that of 13-round attacks from Zhang et al.'s 21% to 24%. In addition, the time complexity of these experiments is also significantly reduced.
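To illustrate the selection strategy, here is a generic Bayes-UCB sketch of ours with Beta posteriors over abstract "structures", not the authors' exact scoring of ciphertext structures:

    # Bayes-UCB: keep a Beta posterior on each candidate structure's success
    # probability and always probe the one with the highest upper posterior
    # quantile; pulls concentrate on the most promising structure.
    import random
    from scipy.stats import beta

    def bayes_ucb(true_probs, rounds, c=5):
        n = len(true_probs)
        succ, fail = [1] * n, [1] * n          # Beta(1,1) priors
        for t in range(1, rounds + 1):
            q = 1 - 1 / (t * c)                # shrinking upper quantile
            scores = [beta.ppf(q, succ[i], fail[i]) for i in range(n)]
            i = max(range(n), key=scores.__getitem__)
            if random.random() < true_probs[i]:
                succ[i] += 1
            else:
                fail[i] += 1
        return succ, fail

    random.seed(1)
    succ, fail = bayes_ucb([0.05, 0.15, 0.30], rounds=500)
    print([s / (s + f) for s, f in zip(succ, fail)])  # best arm dominates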
Formal Verification of Emulated Floating-Point Arithmetic in Falcon
We show that there is a discrepancy between the emulated floating-point multiplication in the submission package of the digital signature Falcon and the claimed behavior. In particular, we show that some floating-point products with absolute values below the smallest normal positive floating-point number are incorrectly zeroized. However, we show that this discrepancy does not affect the complex fast Fourier transform in the signature generation of Falcon, by modeling the floating-point addition, subtraction, and multiplication in CryptoLine. We later implement our own floating-point multiplications in Armv7-M assembly and Jasmin and prove their equivalence with our model, demonstrating the possibility of transferring the challenging verification task (verifying highly optimized assembly) to the presumably more readable code base (Jasmin).
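This class of bug is easy to reproduce in miniature (our own demonstration of subnormal flushing in an emulated multiply, not Falcon's actual code path):

    # IEEE-754 doubles keep subnormal results; an emulation that flushes any
    # result below the smallest normal (2^-1022) to zero silently diverges.
    SMALLEST_NORMAL = 2.0 ** -1022

    def emulated_mul_flush(a, b):
        r = a * b                     # stand-in for the emulated mantissa math
        return 0.0 if 0 < abs(r) < SMALLEST_NORMAL else r

    a, b = 2.0 ** -520, 2.0 ** -520   # product 2^-1040 is subnormal, not zero
    print(a * b)                      # ~8.5e-314 under IEEE-754
    print(emulated_mul_flush(a, b))   # 0.0 -- the zeroization discrepancy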
POPSTAR: Lightweight Threshold Reporting with Reduced Leakage
This paper proposes POPSTAR, a new lightweight protocol for the private computation of heavy hitters, also known as a private threshold reporting system. In such a protocol, the users provide input measurements, and a report server learns which measurements appear more than a pre-specified threshold. POPSTAR follows the same architecture as STAR (Davidson et al., CCS 2022) by relying on a helper randomness server in addition to a main server computing the aggregate heavy hitter statistics. While STAR is extremely lightweight, it leaks a substantial amount of information, consisting of an entire histogram of the provided measurements (but only reveals the actual measurements that appear beyond the threshold). POPSTAR shows that this leakage can be reduced at a modest cost ($\sim$7$\times$ longer aggregation time). Our leakage is closer to that of Poplar (Boneh et al., S&P 2021), which, however, relies on distributed point functions and a different model that requires interaction between two non-colluding servers (with equal workloads) to compute the heavy hitters.
On the cryptosystems based on two Eulerian transformations defined over the commutative rings $Z_{2^s}, s>1$.
We suggest the family of ciphers $sE_n$, $n=2,3,\ldots$, with plaintext space $(Z^*_{2^s})^n$, $s>1$, such that the encryption map is a composition of the form $G=G_1A_1G_2A_2$, where the $A_i$ are affine transformations from $AGL_n(Z_{2^s})$ preserving the variety $(Z^*_{2^s})^n$, and the Eulerian endomorphisms $G_i$, $i=1,2$, of $K[x_1,x_2,\ldots,x_n]$ move each $x_i$ to a monomial term $\mu(x_1)^{d(1)}(x_2)^{d(2)}\cdots(x_n)^{d(n)}$, $\mu\in Z^*_{2^s}$, and act on $(Z^*_{2^s})^n$ as bijective transformations.
The cipher is converted into a protocol-supported cryptosystem. Protocols of Noncommutative Cryptography implemented on the platform of Eulerian endomorphisms are used for the delivery of the $G_i$ and $A_i$ from Alice to Bob. One can use twisted Diffie-Hellman protocols, whose security rests on the complexity of the Conjugacy Power problem, or the hidden tame homomorphism protocol, whose security rests on the word decomposition problem. Instead of delivering the $G_i$, Alice and Bob can elaborate these transformations via the inverse twisted Diffie-Hellman protocol implemented on the platform of tame Eulerian transformations of $(Z^*_{2^s})^n$. The cost of a single protocol execution is $O(n^3)$, and the cost of computing the preimage of the nonlinear map used is $O(n^2)$; hence the verification of $n^t$, $t\geq 1$, signatures takes time $O(n^{t+2})$. Instead of the inverse twisted Diffie-Hellman protocol, correspondents can use the inverse hidden tame homomorphism protocol, which rests on the complexity of word decomposition for tame Eulerian transformations. We use the natural bijections between $Z^*_{2^s}$ and $Z_{2^{s-1}}$, between $Z^*_{2^s}$ and the finite field $F_{2^{s-1}}$, and between $Z^*_{2^s}$ and the Boolean ring $B_{s-1}$ of order $2^{s-1}$ to modify the family of ciphers or cryptosystems via the change of $AGL_n(Z_{2^s})$ for $AGL_n(K)$, where $K$ is one of the rings $Z_{2^{s-1}}$, $F_{2^{s-1}}$, or $B_{s-1}$. The new ciphers are defined via the multiplications of two different commutative rings, $Z_{2^s}$ and $K$; this does not allow one to treat them as stream ciphers of multivariate cryptography and to apply the corresponding cryptanalytic techniques.
The adversary is thus unable to use known cryptanalytic methods such as linearisation attacks. We discuss the option of changing the above-mentioned elements of $AGL_n(Z_{2^s})$ or $AGL_n(K)$ for a nonlinear multivariate transformation $F$ of $(Z_{2^s})^n$ or $K^n$ with a symmetric trapdoor accelerator $T$, i.e., a piece of information such that knowledge of $T$ allows one to compute the value $F(p)$ at an arbitrarily chosen $p\in P$ in time $O(n^2)$ and to solve an equation of the form $F(x)=c$ for each $c$ from $C$ in time $O(n^2)$.
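To make the central object concrete, here is a toy check of ours at small parameters: an Eulerian endomorphism sends each variable to a monomial, and whether it permutes $(Z^*_{2^s})^n$ depends on the exponent matrix, which brute force can verify directly:

    # Toy Eulerian (monomial) map on (Z*_{2^s})^n for s = 4, n = 2:
    # (x1, x2) -> (m1 * x1^a * x2^b, m2 * x1^c * x2^d) over units mod 16,
    # checked by brute force for bijectivity.
    from itertools import product

    s, n = 4, 2
    mod = 2 ** s
    units = [u for u in range(mod) if u % 2 == 1]   # Z*_{2^s} = odd residues

    def eulerian(x, exps, coeffs):
        return tuple(
            (c * pow(x[0], e[0], mod) * pow(x[1], e[1], mod)) % mod
            for c, e in zip(coeffs, exps)
        )

    exps = [(1, 2), (2, 1)]       # exponent matrix [[1,2],[2,1]]
    coeffs = (3, 5)               # unit multipliers
    image = {eulerian(x, exps, coeffs) for x in product(units, repeat=n)}
    print(len(image) == len(units) ** n)   # True iff the map is bijective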
Plinko: Single-Server PIR with Efficient Updates via Invertible PRFs
We study single-server private information retrieval (PIR) where a client wishes to privately retrieve the $x$-th entry from a database held by a server without revealing the index $x$. In our work, we focus on PIR with client pre-processing, where the client may compute hints during an offline phase; the hints are then leveraged during queries to obtain sub-linear online time. We present Plinko, the first single-server PIR with client pre-processing that obtains optimal trade-offs between client storage and total (client and server) query time for all parameters. Our scheme uses $t = \tilde{O}(n/r)$ query time for any client storage size $r$. This matches the known lower bound of $r \cdot t = \Omega(n)$ up to logarithmic factors for all parameterizations, whereas prior works could only match the lower bound when $r = \tilde{O}(\sqrt{n})$. Moreover, Plinko is also the first updatable PIR scheme where an entry can be updated in worst-case $\tilde{O}(1)$ time.
As our main technical tool, we define the notion of an invertible pseudorandom function (iPRF) that generalizes standard PRFs to be equipped with an efficient inversion algorithm. We present a construction of an iPRF from one-way functions where forward evaluation runs in $\tilde{O}(1)$ time and inversion runs in time linear in the inverse set (output) size. Furthermore, our iPRF construction is the first that remains efficient and secure for arbitrary domain and range sizes (including small domains and ranges). In the context of single-server PIR, we show that iPRFs may be used to construct the first hint set representation where finding a hint containing an entry $x$ may be done in $\tilde{O}(1)$ time.
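As a toy reference point for the interface (ours, not the paper's construction): when domain and range coincide, an iPRF degenerates to a PRP, and a keyed Feistel network already gives efficient forward and inverse evaluation; the paper's contribution is handling arbitrary, possibly compressing, domain/range sizes, where inversion returns a preimage set.

    # Toy length-preserving special case of an iPRF (i.e., a PRP) via a
    # 4-round Feistel network with HMAC-SHA256 round functions; inversion
    # simply runs the rounds backwards.
    import hmac, hashlib

    def F(key, rnd, half: bytes) -> bytes:
        return hmac.new(key, bytes([rnd]) + half, hashlib.sha256).digest()[:8]

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def feistel(key, block16: bytes, inverse=False) -> bytes:
        L, R = block16[:8], block16[8:]
        for rnd in (range(3, -1, -1) if inverse else range(4)):
            if inverse:
                L, R = xor(R, F(key, rnd, L)), L
            else:
                L, R = R, xor(L, F(key, rnd, R))
        return L + R

    key, x = b"k" * 32, bytes(range(16))
    y = feistel(key, x)
    assert feistel(key, y, inverse=True) == x   # exact inversion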
Closing the Efficiency Gap between Synchronous and Network-Agnostic Consensus
In the consensus problem, $n$ parties want to agree on a common value, even if some of them are corrupt and arbitrarily misbehave. If the parties have a common input $m$, then they must agree on $m$.
Protocols solving consensus assume either a synchronous communication network, where messages are delivered within a known time, or an asynchronous network with arbitrary delays. Asynchronous protocols only tolerate $t_a < n/3$ corrupt parties. Synchronous ones can tolerate $t_s < n/2$ corruptions with setup, but their security completely breaks down if the synchrony assumptions are violated.
Network-agnostic consensus protocols, as introduced by Blum, Katz, and Loss [TCC'19], are secure regardless of network conditions, tolerating up to $t_s$ corruptions with synchrony and $t_a$ without, under provably optimal assumptions $t_a \leq t_s$ and $2t_s + t_a < n$. Despite efforts to improve their efficiency, all known network-agnostic protocols fall short of the asymptotic complexity of state-of-the-art purely synchronous protocols.
In this work, we introduce a novel technique to compile any synchronous and any asynchronous consensus protocol into a network-agnostic one. This process incurs only a small constant number of overhead rounds, so that the compiled protocol matches the optimal round complexity for synchronous protocols. Our compiler also preserves, under a variety of assumptions, the asymptotic communication complexity of state-of-the-art synchronous and asynchronous protocols. Hence, it closes the current efficiency gap between synchronous and network-agnostic consensus.
As a plus, our protocols support $\ell$-bit inputs, and can be extended to achieve communication complexity $O(n^2\kappa + \ell n)$ under the assumptions for which this is known to be possible for purely synchronous protocols.
Threshold Garbled Circuits with Low Overhead
The folklore approach to designing a threshold variant of symmetric cryptographic algorithms involves applying generic MPC methods to secret sharing techniques: the MPC first combines participant input shares using the secret sharing scheme, and then evaluates the cryptographic function on the reconstructed key. Hardening this to be secure against $n-1$ malicious parties requires some mechanism to ensure input consistency, e.g., adding MACs to inputs, which consequently increases the number of inputs and gates to the MPC. In many cases, this extra overhead is substantially more than the underlying cost of evaluating the symmetric cryptographic algorithm.
We present a scheme that can convert any suitable maliciously secure dishonest-majority boolean-circuit $\mathcal{F}_{\mathsf{MPC}}$ into a threshold scheme $\mathcal{F}_{\mathsf{thresh}}$ with almost no overhead. Specifically, we present an SUC-secure scheme that allows for reactive threshold $t$-of-$n$ boolean circuit evaluation amongst a group of $n$ parties $\mathcal{P}$, for any $t \leq n$, against a malicious adversary that corrupts any number of parties less than the threshold $t$. Moreover, multiple circuits can be evaluated sequentially, with the secret-shared authenticated outputs of one circuit used subsequently as inputs for a new circuit by any $S \subseteq \mathcal{P}$ of size $|S| \geq t$.
Building upon the works of Wang et al., Hazay et al., and Yang et al. [WRK17, HSSV17, YWZ20] for dishonest-majority $\mathcal{F}_{\mathsf{MPC}}$, our key insight is to create threshold versions of the “authenticated bits” used to handle input in these recent $n$-party garbled circuits protocols. The resulting design incurs a small overhead to produce the reusable “threshold authenticated bits” during preprocessing, and adds no extra communication to evaluate with the authenticated input during the online phase.
Using our methods, thresholdizing a boolean circuit has essentially no performance overhead. For example, to compute HMAC, a full Setup+Eval execution of the $(n-2)$-out-of-$n$ thresholdized version is approximately 4% more expensive than the state-of-the-art $n$-party MPC. In contrast, using the folklore method is approximately 100% more expensive. This is especially true for small circuits such as AES, which has 6800 gates and thus incurs the most overhead for thresholdizing. Considering only the online Eval cost, our approach can evaluate AES blocks at 2.3 blocks/s with 16 parties, exceeding the baseline MPC without preprocessing and surpassing the folklore method, which only achieves 0.33 blocks/s. Ultimately, this result makes threshold boolean circuit MPC as feasible as any MPC application.
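The flavor of “authenticated bits” can be conveyed with a stripped-down sketch (ours, in a prime field rather than the binary setting of [WRK17, HSSV17, YWZ20], and without the threshold conversion that is this paper's actual contribution):

    # Toy information-theoretic MAC on additively shared values: each party
    # holds a share of x and a share of m = alpha * x under a shared global
    # key alpha; a cheating reconstruction fails the MAC check w.h.p.
    import random

    p = 2**61 - 1

    def share(v, n):
        parts = [random.randrange(p) for _ in range(n - 1)]
        return parts + [(v - sum(parts)) % p]

    n, alpha = 3, random.randrange(p)
    x = 42
    x_sh = share(x, n)
    m_sh = share(alpha * x % p, n)          # shares of the MAC alpha*x

    # Honest opening: the MAC relation holds.
    x_open = sum(x_sh) % p
    assert (sum(m_sh) - alpha * x_open) % p == 0

    # A corrupted share is caught (except with probability ~1/p).
    x_sh[0] = (x_sh[0] + 1) % p
    x_bad = sum(x_sh) % p
    assert (sum(m_sh) - alpha * x_bad) % p != 0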
Alternative Key Schedules for the AES
The AES block cipher is today the most important and best-analyzed symmetric algorithm. While all versions of the AES are known to be secure in the single-key setting, this is not the case in the related-key scenario. In this article we try to answer the question of whether the AES would better resist differential-like related-key attacks if the key schedule were different. For this, we search for alternative permutation-based key schedules by extending the work of Khoo et al. at ToSC 2017 and Derbez et al. at SAC 2018. We first show that the model of Derbez et al. was flawed. Then, we develop different approaches together with MILP-based tools to find good permutations that could be used as the key schedule for AES-128, AES-192 and AES-256. Our methods allowed us to find permutations that outperform the permutation exhibited by Khoo et al. for AES-128. Moreover, our new approach, based on two MILP models that call one another, allowed us to handle a larger search space and thus to search for alternative key schedules for the two bigger versions of AES. This permitted us to find permutations for AES-192 and AES-256 that provide better resistance to related-key differential attacks. Most importantly, we show that these variants can resist full-round boomerang attacks.
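The design space being searched can be pictured with a minimal sketch (ours; the permutation below is arbitrary, not one of the good permutations found in the paper):

    # A permutation-based key schedule: each 16-byte AES-128 round key is a
    # fixed byte permutation of the previous one, with a round constant
    # folded in to break symmetry.
    PERM = [5, 14, 3, 8, 1, 10, 15, 4, 13, 2, 7, 12, 11, 0, 9, 6]  # arbitrary

    def next_round_key(rk: bytes, rcon: int) -> bytes:
        out = bytes(rk[PERM[i]] for i in range(16))
        return bytes([out[0] ^ rcon]) + out[1:]

    master = bytes(range(16))
    keys, rk = [master], master
    for r in range(10):
        rk = next_round_key(rk, r + 1)
        keys.append(rk)
    print(len(keys), keys[1].hex())   # 11 round keys for AES-128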
Exploring the Advantages and Challenges of Fermat NTT in FHE Acceleration
Recognizing the importance of a fast and resource-efficient polynomial multiplication in homomorphic encryption, in this paper, we design a multiplier-less number theoretic transform using a Fermat number as an auxiliary modulus. To make this algorithm scalable with the degree of polynomial, we apply a univariate to multivariate polynomial ring transformation.
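The multiplier-less property can be demonstrated in a few lines (our own toy: a length-16 NTT modulo the Fermat number $F_3 = 2^8+1 = 257$, where the root of unity is 2 and hence every twiddle factor is a power of two; written with explicit multiplications for readability):

    # NTT mod 257 = 2^8 + 1: since 2^8 = -1 mod 257, w = 2 is a primitive
    # 16th root of unity, so all twiddle factors are powers of two and
    # hardware needs only shifts and modular reductions, no multipliers.
    q, N, w = 257, 16, 2

    def ntt(a, root):
        return [sum(a[j] * pow(root, i * j, q) for j in range(N)) % q
                for i in range(N)]

    def intt(A):
        inv_n = pow(N, q - 2, q)
        a = ntt(A, pow(w, q - 2, q))          # inverse transform uses w^-1
        return [(x * inv_n) % q for x in a]

    a = list(range(N))
    assert intt(ntt(a, w)) == a               # round-trip works

    # Pointwise product in the NTT domain = cyclic convolution of inputs.
    b = [1] + [0] * (N - 1)                   # identity for convolution
    prod = [(x * y) % q for x, y in zip(ntt(a, w), ntt(b, w))]
    print(intt(prod) == a)                    # True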
We develop an accelerator architecture for fully homomorphic encryption using these algorithmic techniques for efficient multivariate polynomial multiplication. For practical homomorphic encryption application benchmarks, the hardware accelerator achieves a 1,200$\times$ speed-up compared to software implementations. Finally, we conclude the paper by discussing the advantages and limitations of the proposed polynomial multiplication method.
The Complexity of Algebraic Algorithms for LWE
Arora & Ge introduced a noise-free polynomial system to compute the secret of a Learning With Errors (LWE) instance via linearization. Albrecht et al. later utilized the Arora-Ge polynomial model to study the complexity of Gröbner basis computations on LWE polynomial systems under the assumption of semi-regularity. In this paper we revisit the Arora-Ge polynomial system and prove that it satisfies a genericity condition recently introduced by Caminata & Gorla, called being in generic coordinates. For polynomial systems in generic coordinates one can always estimate the complexity of DRL Gröbner basis computations in terms of the Castelnuovo-Mumford regularity, and hence also via the Macaulay bound.
Moreover, we generalize the Gröbner basis algorithm of Semaev & Tenti to arbitrary polynomial systems with a finite degree of regularity. In particular, the existence of this algorithm yields another approach to estimating the complexity of DRL Gröbner basis computations in terms of the degree of regularity. In practice, the degree of regularity of LWE polynomial systems is not known, though one can always estimate the lowest achievable degree of regularity. Consequently, from a designer's worst-case perspective this approach yields sub-exponential complexity estimates for general, binary-secret and binary-error LWE.
In recent works by Dachman-Soled et al., the hardness of LWE in the presence of side information was analyzed. Utilizing their framework, we discuss how hints can be incorporated into LWE polynomial systems and how they affect the complexity of Gröbner basis computations.
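The Arora-Ge system itself is compact enough to write down: for an error bounded by $t$, each sample $(\vec{a}, b)$ yields one polynomial $\prod_{e=-t}^{t}(b - \langle \vec{a}, \vec{x}\rangle - e)$ of degree $2t+1$ that vanishes at the secret. A toy instance of ours:

    # Toy Arora-Ge: each LWE sample (a, b = <a,s> + e mod q, |e| <= t) gives
    # the polynomial prod_{e=-t..t} (b - <a,x> - e), which vanishes at x = s;
    # with enough samples, a Groebner basis pins down the secret.
    from sympy import symbols, prod, groebner
    import random

    q, n, t = 17, 2, 1
    s = [3, 7]                                           # the secret
    x = symbols(f"x0:{n}")

    def sample():
        a = [random.randrange(q) for _ in range(n)]
        b = (sum(ai * si for ai, si in zip(a, s))
             + random.choice([-1, 0, 1])) % q
        return prod((b - sum(ai * xi for ai, xi in zip(a, x)) - e)
                    for e in range(-t, t + 1))

    random.seed(0)
    polys = [sample() for _ in range(8)]
    G = groebner(polys, *x, modulus=q, order="lex")
    print(G)   # with enough samples, reduces to {x0 - 3, x1 - 7} (mod 17)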
Trapdoor Memory-Hard Functions
Memory-hard functions (MHF) are functions whose evaluation provably requires a lot of memory. While MHFs are an unkeyed primitive, it is natural to consider the notion of trapdoor MHFs (TMHFs). A TMHF is like an MHF, but when sampling the public parameters one also samples a trapdoor which allows evaluating the function much cheaper.
Biryukov and Perrin (Asiacrypt'17) were the first to consider TMHFs and put forth a candidate TMHF construction called Diodon that is based on the Scrypt MHF (Percival, BSDCan'09). To allow for a trapdoor, Scrypt's initial hash chain is replaced by a sequence of squares in a group of unknown order where the order of the group is the trapdoor. For a length $n$ sequence of squares and a group of order $N$, Diodon's cumulative memory complexity (CMC) is $O(n^2\log N)$ without the trapdoor and $O(n \log(n) \log(N)^2)$ with knowledge of it.
While Scrypt is proven to be optimally memory-hard in the random oracle model (Alwen et al., Eurocrypt'17), Diodon's memory-hardness has not been proven so far. In this work, we fill this gap by rigorously analyzing a specific instantiation of Diodon. We show that its CMC is lower bounded by $\Omega(\frac{n^2}{\log n} \log N)$ which almost matches the upper bound. Our proof is based on Alwen et al.'s lower bound on Scrypt's CMC but requires non-trivial modifications due to the algebraic structure of Diodon. Most importantly, our analysis involves a more elaborate compression argument and a solvability criterion for certain systems of Diophantine equations.
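The trapdoor structure is easy to see in miniature (our sketch; Diodon interleaves these squarings with hashing, which we omit):

    # Sequence of squares in an RSA group: without the group order, computing
    # x^(2^k) mod N takes k sequential squarings; with the trapdoor phi(N),
    # the exponent collapses to 2^k mod phi(N) -- a single fast pow.
    p_, q_ = 61, 53                      # toy primes; a real N is ~2048 bits
    N, phi = p_ * q_, (p_ - 1) * (q_ - 1)
    x, k = 7, 10**6

    # Slow path (no trapdoor): k sequential modular squarings.
    y = x
    for _ in range(k):
        y = y * y % N

    # Fast path (with trapdoor): reduce the exponent modulo phi(N).
    y_fast = pow(x, pow(2, k, phi), N)
    print(y == y_fast)                   # True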
Aggregating Falcon Signatures with LaBRADOR
Several prior works have suggested using non-interactive arguments of knowledge with short proofs to aggregate signatures of Falcon, which is among the first post-quantum signatures selected for standardization by NIST. LaBRADOR in particular, based on standard structured lattice assumptions and published at CRYPTO’23, seems promising for realizing this task. However, no prior work has tackled this idea in a rigorous way. In this paper, we thoroughly prove how to aggregate Falcon signatures using LaBRADOR. We start by providing the first complete knowledge soundness analysis for the non-interactive version of LaBRADOR. Here, the multi-round and recursive nature of LaBRADOR requires a complex and thorough analysis. For this purpose, we introduce the notion of predicate special soundness (PSS). This is a general framework for evaluating the knowledge error of complex Fiat-Shamir arguments of knowledge protocols in a modular fashion, which we believe to be of independent interest. We then explain the exact steps to take in order to adapt the non-interactive LaBRADOR proof system for aggregating Falcon signatures and provide concrete proof size estimates. Additionally, we formalize the folklore approach of obtaining aggregate signatures from the class of hash-then-sign signatures through arguments of knowledge.
A Zero-Dimensional Gröbner Basis for Poseidon
In this paper we construct dedicated weight orders $>$ so that $>$-Gröbner bases of Poseidon can be found via linear transformations for the preimage as well as the CICO problem. In particular, with our Gröbner bases we can exactly compute the $\mathbb{F}_q$-vector space dimension of the quotient space for all possible Poseidon configurations. This in turn resolves previous attempts to assess the security of Poseidon against Gröbner basis attacks, since the vector space dimension quantifies the complexity of computing the variety of a zero-dimensional polynomial system.
NiLoPher: Breaking a Modern SAT-Hardened Logic-Locking Scheme via Power Analysis Attack
LoPher brings, for the first time, cryptographic security promises to the field of logic locking, in a bid to break the game of cat-and-mouse seen in logic locking. Toward this end, LoPher embeds the circuitry to be locked within multiple rounds of a block cipher, by carefully configuring all the S-Boxes. To realize general Boolean functionalities and to support varying interconnect topologies, LoPher also introduces additional layers of MUXes between S-Boxes and the permutation operations. The authors of LoPher claim resilience against SAT-based attacks in particular. Here, we show the first successful attack on LoPher. First, we uncover a significant limitation of LoPher’s key-space configuration, resulting in large numbers of equivalent keys and, thus, a largely simplified search space for attackers in practice. Second, motivated by their proven effectiveness against ciphers, we employ a power side-channel attack against LoPher. We find that ISCAS-85 benchmarks locked with LoPher can all be broken with a few thousand traces. Finally, we also outline a simple and low-cost countermeasure to render LoPher more secure.
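The power-analysis angle can be previewed with a generic correlation attack toy (ours, simulating Hamming-weight leakage of an S-box output; it illustrates the style of attack, not the paper's measurements):

    # Toy CPA: simulate noisy Hamming-weight leakage of Sbox(pt ^ key) and
    # recover the key byte as the guess whose predicted HWs correlate best.
    import random

    SBOX = list(range(256))
    random.seed(0)
    random.shuffle(SBOX)
    HW = [bin(v).count("1") for v in range(256)]

    key = 0x3C
    pts = [random.randrange(256) for _ in range(2000)]
    traces = [HW[SBOX[p ^ key]] + random.gauss(0, 1) for p in pts]  # leakage

    def corr(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        dx = sum((a - mx) ** 2 for a in xs) ** 0.5
        dy = sum((b - my) ** 2 for b in ys) ** 0.5
        return num / (dx * dy)

    best = max(range(256),
               key=lambda g: corr([HW[SBOX[p ^ g]] for p in pts], traces))
    print(hex(best))   # 0x3c -- the key byte is recovered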
C'est très CHIC: A compact password-authenticated key exchange from lattice-based KEM
Driven by the NIST's post-quantum standardization efforts and the selection of Kyber as a lattice-based Key-Encapsulation Mechanism (KEM), several Password Authenticated Key Exchange (PAKE) protocols have been recently proposed that leverage a KEM to create an efficient, easy-to-implement and secure PAKE. In two recent works, Beguinet et al. (ACNS 2023) and Pan and Zeng (ASIACRYPT 2023) proposed generic compilers that transform KEM into PAKE, relying on an Ideal Cipher (IC) defined over a group. However, although IC on a group is often used in cryptographic protocols, special care must be taken to instantiate such objects in practice, especially when a low-entropy key is used. To address this concern, Dos Santos et al. (EUROCRYPT 2023) proposed a relaxation of the IC model under the Universal Composability (UC) framework called Half-Ideal Cipher (HIC). They demonstrate how to construct a UC-secure PAKE protocol, EKE-KEM, from a KEM and a modified 2-round Feistel construction called m2F. Remarkably, the m2F sidesteps the use of an IC over a group, and instead employs an IC defined over a fixed-length bitstring domain, which is easier to instantiate.
In this paper, we introduce a novel PAKE protocol called CHIC that improves the communication and computation efficiency of EKE-KEM, by avoiding the HIC abstraction. Instead, we split the KEM public key in two parts and use the m2F directly, without further randomization. We provide a detailed proof of the security of CHIC and establish precise security requirements for the underlying KEM, including one-wayness and anonymity of ciphertexts, and uniformity of public keys. Our findings extend to general KEM-based EKE-style protocols and show that a passively secure KEM is not sufficient. In this respect, our results align with those of Pan and Zeng (ASIACRYPT 2023), but contradict the analyses of KEM-to-PAKE compilers by Beguinet et al. (ACNS 2023) and Dos Santos et al. (EUROCRYPT 2023).
Finally, we provide an implementation of CHIC, highlighting its minimal overhead compared to the underlying KEM -- Kyber. An interesting aspect of the implementation is that we reuse the rejection sampling procedure in Kyber reference code to address the challenge of hashing onto the public key space. As of now, to the best of our knowledge, CHIC stands as the most efficient PAKE protocol from black-box KEM that offers rigorously proven UC security.
SweetPAKE: Key exchange with decoy passwords
Decoy accounts are often used as an indicator of the compromise of sensitive data, such as password files. An attacker targeting only specific known-to-be-real accounts might, however, remain undetected. A more effective method proposed by Juels and Rivest at CCS'13 is to maintain additional fake passwords associated with each account. An attacker who gains access to the password file is unable to tell apart real passwords from fake passwords, and the attempted usage of a false password immediately sets off an alarm indicating a password file compromise.
Password-Authenticated Key Exchange (PAKE) has long been recognised for its strong security guarantees when it comes to low-entropy password authentication and secure channel establishment, without having to rely on the setup of a PKI. In this paper, we introduce SweetPAKE, a new cryptographic primitive that offers the same security guarantees as PAKE for key exchange, while allowing clients with a single password to authenticate against servers with $n$ candidate passwords for that account and establish a secure channel. Additional security properties are identified and formalized to ensure that (a) high-entropy session keys are indistinguishable from random, even if later on the long-term secret password becomes corrupted (forward secrecy); (b) upon password file leakage, an adversary cannot tell apart real from fake passwords; and (c) a malicious client cannot trigger a false alarm. We capture these properties by extending well-established game-based definitions of PAKE. Furthermore, we propose a new UC formulation that comprehensively unifies both SweetPAKE (session key indistinguishability and sugarword indistinguishability) and a related notion known as Oblivious-PAKE. Finally, we propose efficient SweetPAKE and Oblivious-PAKE protocols constructed from Password-Authenticated Public-Key Encryption (PAPKE) that satisfy all the proposed notions.
Concretely Efficient Lattice-based Polynomial Commitment from Standard Assumptions
Polynomial commitment is a crucial cryptographic primitive in constructing zkSNARKs. Most practical constructions to date are either vulnerable against quantum adversaries or lack homomorphic properties, which are essential for recursive proof composition and proof batching. Recently, lattice-based constructions have drawn attention for their potential to achieve all the desirable properties, though they often suffer from concrete inefficiency or rely on newly introduced assumptions requiring further cryptanalysis.
In this paper, we propose a novel construction of a polynomial commitment scheme based on standard lattice-based assumptions. Our scheme achieves a square-root proof size and verification complexity, ensuring concrete efficiency in proof size, proof generation, and verification. Additionally, it features a transparent setup and public verifiability.
When compared with Brakedown (CRYPTO 2023), a recent code-based construction, our scheme offers comparable performance across all metrics. Furthermore, its proof size is approximately 4.1 times smaller than SLAP (EUROCRYPT 2024), a recent lattice-based construction.
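The square-root behavior comes from the standard matrix view of a polynomial (a generic sketch of ours, not this paper's lattice machinery): arrange the $N$ coefficients in a $\sqrt{N}\times\sqrt{N}$ grid so that evaluation factors through one $\sqrt{N}$-sized row combination.

    # Evaluation of f(z) = sum_i c_i z^i via a sqrt(N) x sqrt(N) coefficient
    # matrix M: f(z) = row(z) @ M @ col(z). The prover sends row(z) @ M
    # (sqrt(N) values); the verifier finishes with one inner product.
    import numpy as np

    N, m = 16, 4                       # N coefficients, m = sqrt(N)
    rng = np.random.default_rng(0)
    c = rng.integers(0, 97, N)         # toy coefficients
    M = c.reshape(m, m)                # row j holds coefficients of z^(m*j + i)

    z = 5
    row = np.array([z ** (m * j) for j in range(m)])
    col = np.array([z ** i for i in range(m)])

    partial = row @ M                  # the sqrt(N)-sized "proof" vector
    assert partial @ col == sum(int(ci) * z ** i for i, ci in enumerate(c))
    print("square-root evaluation checks out")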
Single-Input Functionality against a Dishonest Majority: Practical and Round-Optimal
In this work, we focus on Single-Input Functionality (SIF), which can be viewed as a special case of MPC. In a SIF, only one distinguished party, called the dealer, holds a private input. SIF allows the dealer to perform a computation task with other parties without revealing any additional information about the private input. SIF has diverse applications, including multiple-verifier zero-knowledge and verifiable relation sharing.
As our main contribution, we propose the first 1-round SIF protocol against a dishonest majority in the preprocessing model, which is highly efficient. Prior works either require at least 2 rounds of online communication (Yang and Wang, Asiacrypt 2022; Baum et al., CCS 2022; Zhou et al., EuroS&P 2024) or are only feasibility results (Lepinski et al., TCC 2005; Applebaum et al., Crypto 2022). We show the necessity of broadcast channels by formally proving that 1-round SIF is impossible to achieve in the preprocessing model if no broadcast channel is available. We implement our protocol and conduct extensive experiments to illustrate its practical efficiency.
A Two-Layer Blockchain Sharding Protocol Leveraging Safety and Liveness for Enhanced Performance
Sharding is a critical technique for enhancing the scalability of blockchain technology. However, existing protocols often assume adversarial nodes in general terms, without considering the different types of attacks; this limits transaction throughput at runtime, because attacks on liveness could otherwise be mitigated. There have been attempts to increase transaction throughput by handling the attack types separately; however, these have security vulnerabilities. This paper introduces Reticulum, a novel sharding protocol that overcomes these limitations and achieves enhanced scalability in a blockchain network without security vulnerabilities.
Reticulum employs a two-phase design that dynamically adjusts transaction throughput based on runtime adversarial attacks on either or both liveness and safety. It consists of `control' and `process' shards in two layers corresponding to the two phases. Process shards are subsets of control shards, with each process shard expected to contain at least one honest node with high confidence. Conversely, control shards are expected to have a majority of honest nodes with high confidence. Reticulum leverages unanimous voting in the first phase to involve fewer nodes in accepting/rejecting a block, allowing more parallel process shards. The control shard finalizes the decision made in the first phase and serves as a lifeline to resolve disputes when they surface.
Experiments demonstrate that the unique design of Reticulum empowers high transaction throughput and robustness in the face of different types of attacks in the network, making it superior to existing sharding protocols for blockchain networks.
Single Pass Client-Preprocessing Private Information Retrieval
Recently, many works have considered Private Information Retrieval (PIR) with client-preprocessing: In this model a client and a server jointly run a preprocessing phase, after which client queries can run in time sublinear in the size of the database. In addition, such approaches store no additional bits per client at the server, allowing us to scale PIR to a large number of clients.
In this work, we propose the first client-preprocessing PIR scheme with ``single pass'' client-preprocessing. In particular, our scheme is concretely optimal with respect to preprocessing, in the sense that it requires exactly one linear pass over the database. This is in stark contrast with existing works, whose preprocessing is proportional to $\lambda \cdot N$, where $\lambda$ is the security parameter (e.g., $\lambda=128$). Our approach yields a preprocessing speedup of 45-100$\times$ and a query speedup of up to 20$\times$ when compared to previous state-of-the-art schemes (e.g., Checklist, USENIX 2021), making preprocessing PIR more attractive for a myriad of use cases that are ``session-based''.
In addition to fast preprocessing, our scheme features extremely fast updates (additions and edits)---in constant time. Previously, the best known approach for handling updates in client-preprocessing PIR had time complexity $O(\log N)$, while also adding a $\log N$ factor to the bandwidth. We implement our update algorithm and show concrete speedups of about 20$\times$ in update time when compared to the previous state-of-the-art updatable scheme (e.g., Checklist, USENIX 2021).
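For background, the client-preprocessing paradigm these schemes build on can be caricatured in a few lines (our generic toy in the style of hint-set PIR, ignoring the single-server compilation and this paper's single-pass and update machinery):

    # Client-preprocessing PIR caricature: the client stores random index
    # sets with their XOR parities as hints; to query index x it finds a
    # hint set containing x, asks the server for the parity of that set
    # minus {x}, and XORs the answer with the stored hint to recover DB[x].
    import random

    random.seed(2)
    N = 64
    DB = [random.randrange(256) for _ in range(N)]

    def parity(idx_set):                      # server-side work
        acc = 0
        for i in idx_set:
            acc ^= DB[i]
        return acc

    # Offline: sample hint sets and precompute their parities.
    hints = [(set(random.sample(range(N), N // 4)),) for _ in range(32)]
    hints = [(S[0], parity(S[0])) for S in hints]

    # Online: query some x contained in a hint set.
    x = next(iter(hints[0][0]))
    S, h = next((S, h) for S, h in hints if x in S)
    answer = parity(S - {x})                  # sent to the server (privacy tricks omitted)
    assert answer ^ h == DB[x]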
Simple constructions of linear-depth t-designs and pseudorandom unitaries
Uniformly random unitaries, i.e., unitaries drawn from the Haar measure, have many useful properties, but cannot be implemented efficiently. This has motivated a long line of research into random unitaries that ``look'' sufficiently Haar random while also being efficient to implement.
Two different notions of derandomisation have emerged:
$t$-designs are random unitaries that information-theoretically reproduce the first $t$ moments of the Haar measure, and pseudorandom unitaries (PRUs) are random unitaries that are computationally indistinguishable from Haar random.
In this work, we take a unified approach to constructing $t$-designs and PRUs. For this, we introduce and analyse the ``$PFC$ ensemble'', the product of a random computational basis permutation $P$, a random binary phase operator $F$, and a random Clifford unitary $C$. We show that this ensemble reproduces exponentially high moments of the Haar measure. We can then derandomise the $PFC$ ensemble to show the following:
1. Linear-depth $t$-designs. We give the first construction of a (diamond-error) approximate $t$-design with circuit depth linear in $t$. This follows from the $PFC$ ensemble by replacing the random phase and permutation operators with their $2t$-wise independent counterparts.
2. Non-adaptive PRUs. We give the first construction of PRUs with non-adaptive security, i.e., we construct unitaries that are indistinguishable from Haar random to polynomial-time distinguishers that query the unitary in parallel on an arbitrary state. This follows from the $PFC$ ensemble by replacing the random phase and permutation operators with their pseudorandom counterparts.
3. Adaptive pseudorandom isometries. We show that if one considers isometries (rather than unitaries) from $n$ to $n + \omega(\log n)$ qubits, a small modification of our PRU construction achieves adaptive security, i.e., even a distinguisher that can query the isometry adaptively in sequence cannot distinguish it from Haar random isometries. This gives the first construction of adaptive pseudorandom isometries. Under an additional conjecture, this proof also extends to adaptive PRUs.
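A small-scale rendering of the ensemble (ours, using qiskit to sample the Clifford; at these sizes this is only a shape-check, not evidence of moment-matching):

    # Sample one element of the PFC ensemble on n qubits as a dense matrix:
    # a uniformly random basis permutation P, a random diagonal +/-1 phase F,
    # and a uniformly random Clifford C, composed as C @ F @ P.
    import numpy as np
    from qiskit.quantum_info import random_clifford

    rng = np.random.default_rng(7)
    n, d = 3, 2 ** 3

    perm = rng.permutation(d)
    P = np.eye(d)[perm]                          # permutation matrix
    F = np.diag(rng.choice([1.0, -1.0], size=d)) # random binary phases
    C = random_clifford(n, seed=7).to_matrix()   # random Clifford unitary

    U = C @ F @ P
    print(np.allclose(U.conj().T @ U, np.eye(d)))  # unitary, as expected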
Recommendations for the Design and Validation of a Physical True Random Number Generator Integrated in an Electronic Device
These Recommendations describe essential elements of the design of a secure physical true random number generator (PTRNG) integrated in an electronic device. Based on these elements, we describe and justify requirements for the design, validation and testing of PTRNGs, which are intended to guarantee the security of generators aimed at cryptographic applications.