Papers updated in last 183 days (Page 12 of 1675 results)
Proximity Gaps in Interleaved Codes
A linear error-correcting code exhibits proximity gaps if each affine line of words either consists entirely of words which are close to the code or else contains almost no such words. In this short note, we prove that for each linear code which exhibits proximity gaps within the unique decoding radius, that code's interleaved code also does. Combining our result with a recent argument of Angeris, Evans and Roh ('24), we extend those authors' sharpening of the tensor-based proximity gap of Diamond and Posen (Commun. Cryptol. '24) up to the unique decoding radius, at least in the Reed–Solomon setting.
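In symbols, one common way to state the proximity-gap property sketched above is the following, where the distance bound $\delta$ and density threshold $\varepsilon$ are generic placeholders rather than parameters taken from this note: a code $C \subseteq \mathbb{F}^n$ has proximity gaps for affine lines if, for all $u, v \in \mathbb{F}^n$, $$\Pr_{\lambda \in \mathbb{F}}\big[\Delta(u+\lambda v,\, C) \le \delta\big] > \varepsilon \;\Longrightarrow\; \Delta(u+\lambda v,\, C) \le \delta \ \text{ for every } \lambda \in \mathbb{F},$$ where $\Delta(\cdot, C)$ denotes relative Hamming distance to the code $C$.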
Extreme Algebraic Attacks
When designing filter functions for Linear Feedback Shift Register (LFSR) based stream ciphers, algebraic criteria of Boolean functions such as Algebraic Immunity (AI) become key characteristics because they guarantee the security of ciphers against powerful algebraic attacks. In this article, we investigate a generalization of the algebraic attacks proposed by Courtois and Meier on filtered LFSRs twenty years ago. We consider how the standard algebraic attack can be generalized beyond filtered LFSRs to stream ciphers that apply a Boolean filter function to an updated state. Depending on the updating process, we can use different sets of annihilators than the ones used in the standard algebraic attack; this leads to a generalization of the concept of algebraic immunity and to more efficient attacks. To illustrate these strategies, we focus on one of these generalizations and introduce a new notion called Extreme Algebraic Immunity (EAI).
We perform a theoretical study of the EAI criterion and explore its relation to other algebraic criteria. We prove an upper bound on the EAI of an n-variable Boolean function and further show that the EAI can be lower bounded by the AI restricted to a subset, as defined by Carlet, Méaux and Rotella at FSE 2017. We also exhibit functions whose EAI is guaranteed to be lower than their AI; in particular, we highlight a pathological case of functions with optimal algebraic immunity but EAI of only n/4. As applications, we determine the EAI of the filter functions of some existing stream ciphers and discuss how extreme algebraic attacks using EAI could apply to some ciphers.
Our generalized algebraic attack does not achieve a better complexity than Courtois and Meier's result on existing stream ciphers. However, we see this work as a study that can help avoid weaknesses in future stream cipher designs.
Separating Broadcast from Cheater Identification
Secure Multiparty Computation (MPC) protocols that achieve Identifiable Abort (IA) guarantee honest parties that in the event that they are denied output, they will be notified of the identity of at least one corrupt party responsible for the abort. Cheater identification provides recourse in the event of a protocol failure, and in some cases can even be desired over Guaranteed Output Delivery. However, protocols in the literature typically make use of broadcast as a necessary tool in identifying cheaters. In many deployments, the broadcast channel itself may be the most expensive component.
In this work, we investigate whether it is inherent that MPC with IA should bear the full complexity of broadcast. As the implication of broadcast from IA has been established in previous work, we relax our target to circumvent this connection: we allow honest parties to differ in which cheaters they identify, while nonetheless retaining the ability to prove claims of cheating to an auditor.
We show that in the honest majority setting, our notion of Provable Identifiable Selective Abort (PISA) can be achieved without a traditional broadcast channel. Indeed, broadcast in this setting---which we term Broadcast with Selective Identifiable Abort (BC-IA)---is achievable in only two point-to-point rounds with a simple echoing technique. On the negative side, we also prove that BC-IA is impossible to achieve in the dishonest majority setting.
As a general result, we show that any MPC protocol that achieves IA with $r$ broadcasts can be compiled to one that achieves PISA with $2(r+1)$ point-to-point rounds. As a practical application of this methodology, we design, implement, and benchmark a six-round honest-majority threshold ECDSA protocol that achieves PISA and can be deployed in any environment with point-to-point communication alone.
Post-Quantum Privacy for Traceable Receipt-Free Encryption
Traceable Receipt-free Encryption (TREnc) has recently been introduced as a verifiable public-key encryption primitive endowed with a unique security model. In a nutshell, TREnc allows randomizing ciphertexts in transit in order to remove any subliminal information, up to a public trace that ensures the non-malleability of the underlying plaintext. A remarkable property of TREnc is the indistinguishability of the randomization of chosen ciphertexts against traceable chosen-ciphertext attacks (TCCA). The main application lies in voting systems: voters can encrypt their votes and trace whether a published ballot takes their choices into account, while being prevented from proving how they voted. While being a very promising primitive, the few existing TREnc mechanisms rely solely on discrete-logarithm-related assumptions, making them vulnerable to the well-known record-now/decrypt-later attack in anticipation of quantum computers.
We address this limitation by building the first TREnc whose privacy withstands the advent of quantum adversaries in the future. To design our construction, we first generalize the original TREnc primitive, which is too restrictive to be easily compatible with built-in lattice-based semantically secure encryption. Our more flexible model keeps all the ingredients that generically imply receipt-free voting. Our instantiation relies on Ring Learning With Errors (RLWE) together with pairing-based statistical zero-knowledge simulation-sound proofs from Groth-Sahai, and further enjoys a public-coin common reference string, removing the need for a trusted setup.
Black-Box Registered ABE from Lattices
This paper presents the first black-box registered ABE (Reg-ABE) for circuits from lattices. Selective security is based on the evasive LWE assumption [EUROCRYPT'22, CRYPTO'22]. The only prior Reg-ABE scheme from lattices is derived from a non-black-box construction based on function-binding hash functions and witness encryption [CRYPTO'23]. Technically, we first extend the black-box registration-based encryption from standard LWE [CRYPTO'23] so that a public key can be registered together with a function; this yields an LWE-based Reg-ABE with ciphertexts of size $L \cdot \mathsf{polylog}(L)$, where $L$ is the number of users. We then exploit the special structure of its ciphertext to reduce its size to $\mathsf{polylog}(L)$ via an algebraic obfuscator based on evasive LWE [CRYPTO'24].
Concretely Efficient Private Set Union via Circuit-based PSI
Private set intersection (PSI) is a type of private set operation (PSO) for which concretely efficient linear-complexity protocols do exist. However, the situation is currently less satisfactory for other relevant PSO problems such as private set union (PSU): For PSU, the most promising protocols either rely entirely on computationally expensive public-key operations or suffer from substantial communication overhead.
In this work, we present the first PSU protocol that is mainly based on efficient symmetric-key primitives yet enjoys communication comparable to public-key-based alternatives. Our core idea is to re-purpose state-of-the-art circuit-based PSI to realize a multi-query reverse private membership test (mq-RPMT), which is instrumental for building PSU. We carefully analyze a privacy leakage issue resulting from the hashing paradigm commonly utilized in circuit-based PSI and show how to mitigate it via an oblivious pseudorandom function (OPRF) and new shuffle sub-protocols. Our protocol is modularly designed as a sequential execution of different building blocks that can easily be replaced by more efficient variants in the future, which will directly benefit the overall performance.
We implement our resulting PSU protocol, showing a run-time improvement of 10% over the state-of-the-art public-key-based protocol of Chen et al. (PKC'24) for input sets of size $2^{20}$. Furthermore, we improve communication by $1.6\times$ over the state-of-the-art symmetric-key-based protocol of Zhang et al. (USENIX Sec'23).
Cryptojacking detection using local interpretable model-agnostic explanations
Cryptojacking, the unauthorised use of computing resources to mine cryptocurrency, has emerged as a critical threat in today’s digital landscape. These attacks not only compromise system integrity but also result in increased costs, reduced hardware lifespan, and heightened network security risks. Early and accurate detection is essential to mitigate the adverse effects of cryptojacking. This study focuses on developing a semi-supervised machine learning (ML) approach that leverages an autoencoder for feature extraction and a random forest (RF) model for classification. The objective is to enhance cryptojacking detection while maintaining a balance between accuracy and interpretability. The proposed methodology is further enhanced with explainable artificial intelligence (XAI) techniques such as local interpretable model-agnostic explanations (LIME) to offer insights into model predictions. Results from datasets such as UGRansome and BitcoinHeist indicate that the semi-supervised approach achieves accuracy rates ranging from 70% to 99%. The study demonstrates that the proposed model provides an efficient, interpretable, and scalable solution for real-time cryptojacking detection across various scenarios.
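As a rough sketch of the kind of pipeline described above (unsupervised autoencoder features feeding a random-forest classifier), the following minimal example runs on synthetic stand-in data; the feature dimensions, architecture, and labels are invented for illustration, the datasets and tuning from the paper are not reproduced, and the LIME explanation step is omitted.

```python
# Minimal sketch (not the paper's code): autoencoder feature extraction
# followed by a random-forest classifier, on synthetic stand-in data.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 32)).astype(np.float32)   # stand-in network/host features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)        # stand-in "cryptojacking" label

class AutoEncoder(nn.Module):
    def __init__(self, d_in=32, d_latent=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, 16), nn.ReLU(), nn.Linear(16, d_latent))
        self.dec = nn.Sequential(nn.Linear(d_latent, 16), nn.ReLU(), nn.Linear(16, d_in))
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
Xt = torch.from_numpy(X)
for _ in range(200):                      # unsupervised reconstruction training
    opt.zero_grad()
    recon, _ = model(Xt)
    loss = loss_fn(recon, Xt)
    loss.backward()
    opt.step()

with torch.no_grad():                     # use the latent codes as features
    Z = model(Xt)[1].numpy()

Z_tr, Z_te, y_tr, y_te = train_test_split(Z, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Z_tr, y_tr)
print("held-out accuracy:", clf.score(Z_te, y_te))
```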
Quantum-safe Signatureless DNSSEC
We present $\mathsf{SL\text{-}DNSSEC}$: a backward-compatible protocol that leverages a quantum-safe KEM and a MAC to perform signature-less $\mathsf{(SL)}$ DNSSEC validations in a single UDP query/response style. Our experiments targeting NIST level I security for QTYPE A query resolution show that $\mathsf{SL\text{-}DNSSEC}$ is practically equivalent to the presently deployed RSA-2048 in terms of bandwidth usage and resolution speeds. Compared to post-quantum signatures, $\mathsf{SL\text{-}DNSSEC}$ reduces bandwidth consumption and resolution times by up to $95\%$ and $60\%$, respectively. Moreover, with response size $<$ query size $\leq 1232$ bytes, $\mathsf{SL\text{-}DNSSEC}$ obviates the long-standing issues of IP fragmentation, TCP re-transmits and DDoS amplification attacks.
On the gap between terms in an addition chain
In this paper, we study the distribution of the \textit{gap} between terms in an addition chain. In particular, we show that if $1,2,\ldots,s_{\delta(n)}=n$ is an addition chain of length $\delta(n)$ leading to $n$, then $$\underset{1\leq l\leq \delta(n)}{\mathrm{sup}}(s_{l+k}-s_l)\gg k\frac{n}{\delta(n)}$$ and $$\underset{1\leq l\leq \delta(n)}{\mathrm{inf}}(s_{l+k}-s_l)\ll k\frac{n}{\delta(n)}$$ for fixed $k\geq 1$.
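To make the quantity being bounded concrete, the short example below (not taken from the paper) lists the gaps $s_{l+k}-s_l$ along a hand-picked addition chain for $n=123$ and reports their extremes, i.e., the sup and inf appearing above for $k=1$.

```python
# Illustration: gaps s_{l+k} - s_l in an addition chain for n (chain chosen by hand).
# The chain 1, 2, 3, 6, 12, 15, 30, 60, 120, 123 is an addition chain for n = 123:
# every term after the first is a sum of two (not necessarily distinct) earlier terms.
chain = [1, 2, 3, 6, 12, 15, 30, 60, 120, 123]

def is_addition_chain(s):
    return s[0] == 1 and all(
        any(s[i] == s[a] + s[b] for a in range(i) for b in range(a + 1))
        for i in range(1, len(s))
    )

assert is_addition_chain(chain)

k = 1
gaps = [chain[l + k] - chain[l] for l in range(len(chain) - k)]
print("gaps:", gaps)
print("sup of gaps:", max(gaps), " inf of gaps:", min(gaps))
```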
ABLE: Optimizing Mixed Arithmetic and Boolean Garbled Circuit
Privacy and security have become critical priorities in many scenarios. Privacy-preserving computation (PPC) is a powerful solution that allows functions to be computed directly on encrypted data. Garbled circuit (GC) is a key PPC technology that enables secure, confidential computing. GC comes in two forms: Boolean GC supports all operations by expressing functions as logic circuits; arithmetic GC is a newer technique to efficiently compute a set of arithmetic operations like addition and multiplication. Mixed GC combines both Boolean and arithmetic GC, in an attempt to optimize performance by computing each function in its most natural form. However, this introduces additional costly conversions between the two forms. It remains unclear if and when the efficiency gains of arithmetic GC outweigh these conversion costs.
In this paper, we present Arithmetic Boolean Logic Exchange (ABLE) for Garbled Circuit, the first real implementation of mixed GC. ABLE profiles the performance of Boolean and arithmetic GC operations along with their conversions. We assess not only communication but also computation latency, a crucial factor in evaluating the end-to-end runtime of GC. Based on these insights, we propose a method to determine whether it is more efficient to use general Boolean GC or mixed GC for a given application. Rather than implementing both approaches to identify the better solution for each unique case, our method enables users to select the most suitable GC realization early in the design process. This method evaluates whether the benefits of transitioning operations from Boolean to arithmetic GC offset the associated conversion costs. We apply this approach to a neural network task as a case study.
Implementing ABLE reveals opportunities for further GC optimization. We propose a heuristic to reduce the number of primes in arithmetic GC, cutting communication by 14.1% and compute latency by 15.7% in a 16-bit system. Additionally, we optimize mixed GC conversions with a row-reduction technique, achieving a 48.6% reduction in garbled table size for bit-decomposition and a 50% reduction for the bit-composition operation. These improvements reduce communication overhead in stream GC and free up storage in the GC-with-preprocessing approach. We open-source our code for community use.
Time-Lock Puzzles from Lattices
Time-lock puzzles (TLPs) are cryptographic tools that allow one to encrypt a message into the future, for a predetermined amount of time $T$. At present, we have only two constructions with provable security: one based on the repeated squaring assumption and the other based on obfuscation. Basing TLPs on any other assumption is a long-standing open question, further motivated by the fact that the known constructions are broken by quantum algorithms.
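For context, the repeated-squaring construction mentioned above (the classical Rivest-Shamir-Wagner time-lock puzzle, not this paper's lattice-based construction) can be sketched in a few lines; the toy primes below are far too small to be secure and are only meant to show that puzzle generation is fast with the trapdoor while solving requires $T$ sequential squarings.

```python
# Toy sketch of the classical repeated-squaring time-lock puzzle (RSW'96).
# WARNING: tiny parameters, illustration only.
import secrets

p, q = 999983, 1000003            # in practice: large random primes
N, phi = p * q, (p - 1) * (q - 1)
T = 100_000                       # number of sequential squarings required

# Puzzle generation (fast, using the trapdoor phi):
x = secrets.randbelow(N)
e = pow(2, T, phi)                # 2^T mod phi(N)
key_fast = pow(x, e, N)           # x^(2^T) mod N, computed with the trapdoor
puzzle = (N, T, x)                # the message would be encrypted under key_fast

# Solving (slow, T sequential squarings, no trapdoor):
y = x
for _ in range(T):
    y = (y * y) % N
assert y == key_fast
```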
In this work, we propose a new approach to construct time-lock puzzles based on lattices, and therefore with plausible post-quantum security. We obtain the following main results:
* In the preprocessing model, where a one-time public-coin preprocessing is allowed, we obtain a time-lock puzzle with encryption time $\log(T)$.
* In the plain model, where the encrypter does all the computation, we obtain a time-lock puzzle with encryption time $\sqrt{T}$.
Both constructions assume the existence of any sequential function $f$, and the hardness of the circular small-secret learning with errors (LWE) problem. At the heart of our results is a new construction of succinct randomized encodings (SRE) for $T$-folded repeated circuits, where the complexity of the encoding is $\sqrt{T}$. This is the first construction of SRE where the overall complexity of the encoding algorithm is sublinear in the runtime $T$, and which is not based on obfuscation. As a direct corollary, we obtain a non-interactive RAM delegation scheme with sublinear complexity (in the number of steps $T$).
The Meta-Complexity of Secret Sharing
A secret-sharing scheme allows the distribution of a secret $s$ among $n$ parties, such that only certain predefined “authorized” sets of parties can reconstruct the secret, while all other “unauthorized” sets learn nothing about $s$. The collection of authorized/unauthorized sets is defined by a monotone function $f: \{0,1\}^n \rightarrow \{0,1\}$. It is known that any monotone function can be realized by a secret-sharing scheme; thus, the smallest achievable \emph{total share size}, $S(f)$, serves as a natural complexity measure.
In this paper, we initiate the study of the following meta-complexity question: Given a monotone function $f$, is it possible to efficiently distinguish between cases where the secret-sharing complexity of $f$ is small versus large? We examine this question across several computational models, yielding the following main results.
(Hardness for formulas and circuits): Given a monotone formula $f$ of size $L$, it is coNP-hard to distinguish between ``cheap'' functions, where the maximum share size is 1 bit and the total share size is $O(L^{0.01})$, and ``expensive'' functions, where the maximum share size is $\Omega(\sqrt{L})$ and the total share size is $\Omega(L/\log L)$.
This latter bound nearly matches known secret-sharing constructions yielding a total share size of $L$ bits. For monotone circuits, we strengthen the bound on the expensive case to a maximum share size of $\Omega(L/\log L)$ and a total share size of $\Omega(L^2/\log L)$. These results rule out the existence of instance-optimal compilers that map a formula $f$ to a secret-sharing scheme with complexity polynomially related to $S(f)$.
(Hardness for truth tables): Under cryptographic assumptions, either (1) every $n$-bit slice function can be realized by a $\text{poly}(n)$-size secret-sharing scheme, or (2) given a truth-table representation of $f$ of size $N = 2^n$, it is computationally infeasible to distinguish in time $\text{poly}(N)$ between cases where $S(f) = \text{poly}(n)$ and $S(f) = n^{\omega(1)}$. Option (1) would be considered a breakthrough result, as the best-known construction for slices has a sub-exponential complexity of $2^{\tilde{O}(\sqrt{n})}$ (Liu, Vaikuntanathan, and Wee; Eurocrypt 2018). Our proof introduces a new worst-case-to-average-case reduction for slices, which may be of independent interest.
(Hardness for graphs): We examine the simple case where $f$ is given as a 2-DNF, represented by a graph $G$ whose edges correspond to 2-terms, and ask whether it is possible to distinguish between cases where the share size is constant and those where the share size is large, say $\Omega(\log n)$. We establish several connections between this question and questions in communication complexity. For instance, we show that graphs admitting constant-cost secret sharing form a subclass of graphs with constant randomized communication complexity and constant-size adjacency sketches (Harms, Wild, and Zamaraev; STOC 2022). We leverage these connections to establish new lower bounds for specific graph families, derive a combinatorial characterization of graphs with constant-size linear secret-sharing schemes, and show that a natural class of myopic algorithms fails to distinguish cheap graphs from expensive ones.
Use of Simple Arithmetic Operations to Construct Efficiently Implementable Boolean functions Possessing High Nonlinearity and Good Resistance to Algebraic Attacks
We describe a new class of Boolean functions which provide the presently best known trade-off between low computational complexity, nonlinearity and (fast) algebraic immunity. In particular, for $n\leq 20$, we show that there are functions in the family achieving a combination of nonlinearity and (fast) algebraic immunity which is superior to what is achieved by any other efficiently implementable function. The main novelty of our approach is to apply a judicious combination of simple integer and binary field arithmetic to Boolean function construction.
IND-CPA$^{\text{C}}$: A New Security Notion for Conditional Decryption in Fully Homomorphic Encryption
Fully Homomorphic Encryption (FHE) allows a server to perform computations directly over encrypted data. In general FHE protocols, the client is tasked with decrypting the computation result using its secret key. However, certain FHE applications benefit from the server knowing this result, especially without the aid of the client. Providing the server with the secret key allows it to decrypt all the data, including the client's private input. Protocols such as Goldwasser et al. (STOC'13) have shown that it is possible to provide the server with a conditional decryption capability that allows it to decrypt the result of some pre-defined computation and nothing else. While this capability helps an honest-but-curious server provide better services, a malicious server may utilize this added advantage to perform active attacks on the overall FHE application to leak secret information. Existing security notions fail to capture this scenario since they assume that only the client has the ability to decrypt FHE ciphertexts. Therefore, in this paper, we propose a new security notion named IND-CPA$^{\text{C}}$, which provides the adversary with access to a conditional decryption oracle. We then show that none of the practical exact FHE schemes are secure under this notion by demonstrating a generic attack that utilizes this restricted decryption capability to recover the underlying errors in the FHE ciphertexts. Since the security guarantee of the underlying (R)LWE hardness problem collapses once these error values leak, we show a full key-recovery attack. Finally, we propose a technique to convert any IND-CPA secure FHE scheme into an IND-CPA$^\text{C}$ secure FHE scheme. Our technique utilizes Succinct Non-Interactive Arguments of Knowledge, where the server is forced to generate a proof of honest computation of the requested function along with the computation result. During decryption, the proof is verified, and the decryption proceeds only if the verification holds. Both the verification and decryption steps run inside a Garbled Circuit and thus are not observable or controllable by the server.
Bitwise Garbling Schemes --- A Model with $\frac{3}{2}\kappa$-bit Lower Bound of Ciphertexts
At Eurocrypt 2015, Zahur, Rosulek, and Evans proposed the model of Linear Garbling Schemes, in which they proved a $2\kappa$-bit lower bound on ciphertext size for a broad class of garbling schemes. At Crypto 2021, Rosulek and Roy presented the innovative "three-halves" garbling scheme, in which AND gates cost $1.5\kappa+5$ bits and XOR gates are free. A noteworthy aspect of their scheme is the slicing-and-dicing technique, which is applicable universally to all AND gates when garbling a Boolean circuit. Alongside this result, Rosulek and Roy posed several open problems. Our research primarily addresses one of them: ``Is $1.5\kappa$ bits optimal for garbled AND gates in a more inclusive model than Linear Garbling Schemes?''
In this paper, we propose the model of Bitwise Garbling Schemes. Our key result is that $1.5\kappa$ bits is indeed optimal for arbitrary garbled AND gates in our model. Moreover, we prove the necessity of the free-XOR technique: if free-XOR is forbidden, we prove a $2\kappa$-bit lower bound. As an extension, we apply our idea to construct a model for fan-in 3 gates. Somewhat unexpectedly, we prove a $\frac{7}{4}\kappa$-bit lower bound. Unfortunately, the corresponding construction is not suitable for 3-input AND gates, though it may be of independent interest.
Adaptor Signatures: New Security Definition and A Generic Construction for NP Relations
An adaptor signature (AS) scheme is an extension of digital signatures that allows the signer to generate a pre-signature for an instance of a hard relation. This pre-signature can later be adapted to a full signature with a corresponding witness. Meanwhile, the signer can extract a witness from both the pre-signature and the signature. AS has recently garnered more attention due to its scalability and interoperability. Dai et al. [INDOCRYPT 2022] proved that AS can be constructed for any NP relation using a generic construction. However, their construction has a shortcoming: the associated witness is exposed by the adapted signature. This flaw limits the applications of AS, even in its motivating setting, i.e., blockchain, where the adapted signature is typically uploaded to the blockchain and is public to everyone.
To address this issue, in this work we augment the security definition of AS by a natural property which we call witness hiding. We then prove the existence of AS for any NP relation, assuming the existence of one-way functions. Concretely, we propose a generic construction of witness-hiding AS from signatures and a weak variant of trapdoor commitments, which we term trapdoor commitments with a specific adaptable message. We instantiate the latter based on the Hamiltonian cycle problem. Since the Hamiltonian cycle problem is NP-complete, we can obtain witness hiding adaptor signatures for any NP relation.
Semi-Quantum Tokenized Signatures
Quantum tokenized signature schemes (Ben-David and Sattath, QCrypt 2017) allow a sender to generate and distribute quantum unclonable states which grant their holder a one-time permission to sign in the name of the sender. Such schemes are a strengthening of public-key quantum money schemes, as they imply public-key quantum money where some channels of communication in the system can be made classical.
An even stronger primitive is semi-quantum tokenized signatures, where the sender is classical and can delegate the generation of the token to a (possibly malicious) quantum receiver. Semi-quantum tokenized signature schemes imply a powerful version of public-key quantum money satisfying two key features:
1. The bank is classical and the scheme can execute on a completely classical communication network. In addition, the bank is \emph{stateless} and, after the creation of a banknote, does not hold any information or trapdoors except the balance of accounts in the system. Such a quantum money scheme solves the main open problem presented by Radian and Sattath (AFT 2019).
2. Furthermore, the classical-communication transactions between users in the system are \emph{direct} and do not need to go through the bank. This enables the transactions to be both classical and private.
While fully-quantum tokenized signatures (where the sender is quantum and generates the token by itself) are known based on quantum-secure indistinguishability obfuscation and injective one-way functions, the semi-quantum version is not known under any computational assumption. In this work we construct a semi-quantum tokenized signature scheme based on quantum-secure indistinguishability obfuscation and the sub-exponential hardness of the Learning with Errors problem.
In the process, we show new properties of quantum coset states and a new hardness result on indistinguishability obfuscation of classical subspace membership circuits.
Siniel: Distributed Privacy-Preserving zkSNARK
Zero-knowledge Succinct Non-interactive Argument of Knowledge (zkSNARK) is a powerful cryptographic primitive, in which a prover convinces a verifier that a given statement is true without leaking any additional information. However, existing zkSNARKs suffer from high computation overhead in the proof generation. This limits the applications of zkSNARKs, such as private payments, private smart contracts, and anonymous credentials. Private delegation has become a prominent way to accelerate proof generation.
In this work, we propose Siniel, an efficient private delegation framework for zkSNARKs constructed from polynomial interactive oracle proofs (PIOP) and polynomial commitment schemes (PCS). Our protocol allows a computationally limited prover (a.k.a. delegator) to delegate its expensive prover computation to several workers without leaking any information about the private witness. Most importantly, compared with the recent work EOS (USENIX'23), the state-of-the-art zkSNARK prover delegation framework, a prover in Siniel need not engage in the MPC protocol after sending its shares of the private witness. This means that a Siniel prover can outsource the entire computation to the workers.
We compare Siniel with EOS and show significant performance advantages of the former. The experimental results show that, under low-bandwidth conditions (10 MBps), Siniel saves delegators about 16% of the running time compared to EOS, whereas under high-bandwidth conditions (1000 MBps), Siniel saves about 80%.
UTRA: Universe Token Reusability Attack and Verifiable Delegatable Order-Revealing Encryption
As dataset sizes grow, users increasingly rely on encrypted data and secure range queries on cloud servers, raising privacy concerns about potential data leakage.
Order-revealing encryption (ORE) enables efficient operations on numerical datasets, and Delegatable ORE (DORE) extends this functionality to multi-client environments, but it faces risks of token forgery. Secure DORE (SEDORE) and Efficient DORE (EDORE) address some vulnerabilities, with EDORE improving speed and storage efficiency. However, we find that both schemes remain susceptible to token forgery.
To address this issue, we propose the concept of Verifiable Delegatable Order-Revealing Encryption (VDORE) with a formal definition of token unforgeability. We then construct a new VDORE scheme, $\mathsf{TUDORE}$ (Token Unforgeable DORE), which ensures token unforgeability. Furthermore, $\mathsf{TUDORE}$ achieves about a 1.5× speed-up in token generation compared to SEDORE and EDORE.
Extending Groth16 for Disjunctive Statements
The two most common ways to design non-interactive zero-knowledge (NIZK) proofs are based on Sigma ($\Sigma$)-protocols (an efficient way to prove algebraic statements) and zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARK) protocols (an efficient way to prove arithmetic statements). However, in cryptocurrency applications such as privacy-preserving credentials, privacy-preserving audits, and blockchain-based voting systems, zk-SNARKs for general statements are usually implemented together with encryption, commitment, or other algebraic cryptographic schemes. Moreover, zk-SNARKs for many different arithmetic statements may also be required to be implemented together. A typical solution is to extend the zk-SNARK circuit to include the code for the algebraic part. However, complex cryptographic operations in the algebraic algorithms significantly increase the circuit size, which leads to impractically large proving time and CRS size. Thus, we need a proof system flexible enough for composite statements that include both algebraic and arithmetic components. Unfortunately, while the conjunction of zk-SNARKs is relatively natural and numerous effective solutions are currently available (e.g., by utilizing the commit-and-prove technique), the disjunction of zk-SNARKs is rarely discussed in detail.
In this paper, we focus mainly on disjunctive statements for Groth16, and we propose a Groth16 variant---CompGroth16---which provides a framework for Groth16 to prove disjunctive statements that consist of a mix of algebraic and arithmetic components. Specifically, we can directly combine CompGroth16 with a $\Sigma$-protocol, or even CompGroth16 with CompGroth16, just like the logical composition of $\Sigma$-protocols. From this, we gain many good properties, such as broader expressiveness, better prover efficiency, and a shorter CRS. In addition, for the combination of CompGroth16 and a $\Sigma$-protocol, we present two representative application scenarios to demonstrate the practicality of our construction.
Structural Results for Maximal Quaternion Orders and Connecting Ideals of Prime Power Norm in $B_{p,\infty}$
Fix odd primes $p, \ell$ with $p \equiv 3 \mod 4$ and $\ell \neq p$. Consider the rational quaternion algebra ramified at $p$ and $\infty$ with a fixed maximal order $\mathcal{O}_{1728}$. We give explicit formulae for bases of all cyclic norm $\ell^n$ ideals of $\mathcal{O}_{1728}$ and their right orders, in Hermite Normal Form (HNF). Further, in the case $\ell \mid p+1$, or more generally, $-p$ is a square modulo $\ell$, we derive a parametrization of these bases along paths of the $\ell$-ideal graph, generalising the results of [1]. With such orders appearing as the endomorphism rings of supersingular elliptic curves defined over $\overline{\mathbb{F}_{p}}$, we note several potential applications to isogeny-based cryptography including fast ideal sampling algorithms. We also demonstrate how our findings may lead to further structural observations, by using them to prove a result on the successive minima of endomorphism rings of supersingular curves defined over $\mathbb{F}_p$.
[1] = https://eprint.iacr.org/2025/033
On Quantum Simulation-Soundness
Non-interactive zero-knowledge (NIZK) proof systems are a cornerstone of modern cryptography, but their security has received little attention in the quantum setting. Motivated by improving our understanding of this fundamental primitive against quantum adversaries, we propose a new definition of security against quantum adversaries. Specifically, we define the notion of quantum simulation soundness (SS-NIZK), which allows the adversary to access the simulator in superposition.
We show a separation between post-quantum and quantum security of SS-NIZK, and prove that Sahai’s construction for SS-NIZK (in the CRS model) can be made quantumly simulation-sound. As an immediate application of our new notion, we prove the security of the Naor-Yung paradigm in the quantum setting, with respect to a strong quantum IND-CCA security notion. This provides the quantum analogue of the classical dual-key approach to proving the security of encryption schemes. Along the way, we introduce a new notion of quantum-query advantage functions, which may be used as a general framework to show classical/quantum separations for other cryptographic primitives and may be of independent interest.
Designing a General-Purpose 8-bit (T)FHE Processor Abstraction
Making the most of TFHE programmable bootstrapping to evaluate functions or operators otherwise challenging to perform with only the native addition and multiplication of the scheme is a very active line of research. In this paper, we systematize this approach and apply it to build an 8-bit FHE processor abstraction, i.e., a software entity that works over FHE-encrypted 8-bit data and presents itself to the programmer by means of a conventional-looking assembly instruction set. In doing so, we provide several homomorphic LookUp Table (LUT) dereferencing operators based on variants of the tree-based method and show that they are the most efficient option for manipulating encryptions of 8-bit data (optimally represented as two base-16 digits). We then systematically apply this approach over a set of around 50 instructions, including, notably, conditional assignments, divisions, or fixed-point arithmetic operations. We further test the approach on several simple algorithms, including the execution of a neuron with a sigmoid activation function with 16-bit precision. We conclude the paper by comparing our work to the FHE compilers available in the state of the art. Finally, this work reveals that a very limited set of functional bootstrapping patterns is versatile and efficient enough to achieve general-purpose FHE computations beyond the boolean circuit approach. As such, these patterns may be an appropriate target for further works on advanced software optimizations or hardware implementations.
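To make the digit-based LUT evaluation concrete, here is a cleartext (non-encrypted) model of dereferencing an 8-bit table through two base-16 digits, in the spirit of the tree-based method mentioned above; the toy function, table layout, and helper names are illustrative assumptions, and no actual TFHE operations (which would act on encrypted digits via programmable bootstrapping) appear here.

```python
# Cleartext model (no FHE) of evaluating an 8-bit function on a value split into
# two base-16 digits, mirroring the tree-based LUT method described above.
def to_digits(x):               # 8-bit value -> (high nibble, low nibble)
    return (x >> 4) & 0xF, x & 0xF

def from_digits(hi, lo):
    return (hi << 4) | lo

# Target 8-bit function, here a toy S-box-like map (illustration only).
f = [(x * 167 + 29) % 256 for x in range(256)]

# Precompute 16 sub-tables of 16 entries each: sub_tables[hi][lo] = f(16*hi + lo).
sub_tables = [[f[from_digits(hi, lo)] for lo in range(16)] for hi in range(16)]

def eval_tree_lut(x):
    hi, lo = to_digits(x)
    # Level 1 of the "tree": select the sub-table indexed by the high digit.
    # (Homomorphically, this selection is what a bootstrapping/CMux-tree step
    #  would perform over encrypted digits.)
    row = sub_tables[hi]
    # Level 2: dereference the low digit inside the selected sub-table.
    return row[lo]

assert all(eval_tree_lut(x) == f[x] for x in range(256))
```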
Deletions and Dishonesty: Probabilistic Data Structures in Adversarial Settings
Probabilistic data structures (PDS) are compact representations of high-volume data that provide approximate answers to queries about the data. They are commonplace in today's computing systems, finding use in databases, networking and more. While PDS are designed to perform well under benign inputs, they are frequently used in applications where inputs may be adversarially chosen. This may lead to a violation of their expected behaviour, for example an increase in false positive rate.
In this work, we focus on PDS that handle approximate membership queries (AMQ). We consider adversarial users with the capability of making adaptive insertions, deletions and membership queries to AMQ-PDS, and analyse the performance of AMQ-PDS under such adversarial inputs.
We argue that deletions significantly empower adversaries, presenting a challenge to enforcing honest behaviour when compared to insertion-only AMQ-PDS. To address this, we introduce a new concept of an honest setting for AMQ-PDS with deletions. By leveraging simulation-based security definitions, we then quantify how much harm can be caused by adversarial users to the functionality of AMQ-PDS. Our resulting bounds only require calculating the maximal false positive probability and insertion failure probability achievable in our novel honest setting.
We apply our results to Cuckoo filters and Counting filters. We show how to protect these AMQ-PDS at low cost, by replacing or composing the hash functions with keyed pseudorandom functions in their construction. This strategy involves establishing practical bounds for the probabilities mentioned above. Using our new techniques, we demonstrate that achieving security against adversarial users making both insertions and deletions remains practical.
Compact Proofs of Partial Knowledge for Overlapping CNF Formulae
At CRYPTO '94, Cramer, Damgaard, and Schoenmakers introduced a general technique for constructing honest-verifier zero-knowledge proofs of partial knowledge (PPK), where a prover Alice wants to prove to a verifier Bob that she knows $\tau$ witnesses for $\tau$ out of $k$ claims without revealing the indices of those $\tau$ claims. Their solution starts from a base honest-verifier zero-knowledge proof of knowledge $\Sigma$ and requires running $k$ executions of the base protocol in parallel, giving a complexity of $O(k\gamma(\Sigma))$, where $\gamma(\Sigma)$ is the communication complexity of the base protocol. However, modern practical scenarios require communication-efficient zero-knowledge proofs tailored to handle partial knowledge in specific application-dependent formats.
In this paper we propose a technique to compose a large class of $\Sigma$-protocols for atomic statements into $\Sigma$-protocols for PPK over formulae in conjunctive normal form (CNF) that overlap, in the sense that there is a common subset of literals among all clauses of the formula. In such formulae, the statement is expressed as a conjunction of $m$ clauses, each consisting of a disjunction of $k$ literals (i.e., each literal is an atomic statement), with $\ell$ literals shared among clauses. The prover, for a threshold parameter $\tau \le k$, proves knowledge of at least $\tau$ witnesses for $\tau$ distinct literals in each clause. At the core of our protocol is a new technique to compose $\Sigma$-protocols for regular CNF relations (i.e., when $\tau=1$) that exploits the overlap among clauses, and that we then generalize to formulae where $\tau>1$, providing improvements over state-of-the-art constructions.
VDORAM: Towards a Random Access Machine with Both Public Verifiability and Distributed Obliviousness
Verifiable random access machines (vRAMs) serve as a foundational model for expressing complex computations with provable security guarantees, supporting applications in areas such as secure electronic voting, financial auditing, and privacy-preserving smart contracts. However, no existing vRAM provides distributed obliviousness, a critical need in scenarios where multiple provers seek to prevent disclosure against both other provers and the verifiers.
Implementing a publicly verifiable distributed oblivious RAM (VDORAM) presents several challenges. Firstly, the development of VDORAM is hindered by the limited availability of sophisticated publicly verifiable multi-party computation (MPC) protocols. Secondly, the lack of readily available front-end implementations for multi-prover zero-knowledge proofs (ZKPs) poses a significant obstacle to developing practical applications. Finally, directly adapting existing RAM designs to the VDORAM paradigm may prove either impractical or inefficient due to the inherent complexities of reconciling oblivious computation with the generation of publicly verifiable proofs.
To address these challenges, we introduce CompatCircuit, to our knowledge the first multi-prover ZKP front-end implementation. CompatCircuit integrates collaborative zkSNARKs to implement publicly verifiable MPC protocols with rich functionalities beyond those of an arithmetic circuit, enabling the development of multi-prover ZKP applications. Building upon CompatCircuit, we present VDORAM, the first publicly verifiable distributed oblivious RAM. By combining distributed oblivious architectures with verifiable RAM, VDORAM achieves an efficient RAM design that balances communication overhead and proof generation time. We have implemented CompatCircuit and VDORAM in approximately 15,000 lines of code, demonstrating usability with a practical and efficient implementation. Our performance evaluation reveals that the system provides moderate performance even while guaranteeing distributed obliviousness.
Forking the RANDAO: Manipulating Ethereum's Distributed Randomness Beacon
Proof-of-stake consensus protocols often rely on distributed randomness beacons (DRBs) to generate randomness for leader selection. This work analyses the manipulability of Ethereum's DRB implementation, RANDAO, in its current consensus mechanism. Even with its efficiency, RANDAO remains vulnerable to manipulation through the deliberate omission of blocks from the canonical chain. Previous research has shown that economically rational players can withhold blocks---known as a block withholding attack or selfish mixing---when the manipulated RANDAO outcome yields greater financial rewards.
We introduce and evaluate a new manipulation strategy, the RANDAO forking attack. Unlike block withholding, whereby validators opt to hide a block, this strategy relies on selectively forking out an honest proposer's block to maximize transaction fee revenues and block rewards.
In this paper, we draw attention to the fact that the forking attack is significantly more harmful than selfish mixing for two reasons. Firstly, it exacerbates the unfairness among validators. More importantly, it significantly undermines the reliability of the blockchain for the average user by frequently causing already published blocks to be forked out. By doing so, the attacker can fork the chain without losing slots, and we demonstrate that these are later fully compensated for. Our empirical measurements, investigating such manipulations on Ethereum mainnet, revealed no statistically significant traces of these attacks to date.
A Framework for Generating S-Box Circuits with Boyar-Peralta Algorithm-Based Heuristics, and Its Applications to AES, SNOW3G, and Saturnin
In many lightweight cryptography applications, low area and latency are required for efficient implementation. The gate count in the cipher and the circuit depth must be low to minimize these two metrics. Many optimization strategies have been developed for the linear layer, led by the Boyar-Peralta (BP) algorithm. The Advanced Encryption Standard (AES) has been a focus of extensive research in this area. However, while the linear layer uses only XOR gates, the S-box, which is an essential nonlinear component in symmetric cryptography, uses various gate types, making optimization challenging, particularly as the bit size increases.
In this paper, we propose a new framework for heuristic search to optimize the circuit depth or the XOR gate count of S-box circuits. Existing studies on S-box circuit optimization have divided the S-box into its nonlinear and linear layers and optimized each separately, but limitations remain when optimizing large S-box circuits. To extend the optimization target from individual internal components to the entire S-box circuit, we extract the XOR information of each node in the target circuit and reconstruct the nodes based on nonlinear gates. Next, we extend the BP-algorithm-based heuristics to handle nonlinear gates and incorporate this into the framework. Notably, our framework achieves these improvements while leaving the AND gate count and AND depth unchanged.
To demonstrate the effectiveness of the proposed framework, we apply it to the AES, SNOW3G, and Saturnin S-box circuits. Our results include depth improvements of about 40% and 11% compared to the existing AES S-box [BP10] and Saturnin super S-box [CDL+20] circuits, respectively. We implement a new circuit for the SNOW3G S-box, which had not previously been developed, and apply our framework to reduce its depth. We expect the proposed framework to contribute to the design and implementation of various symmetric-key cryptography solutions.
A Survey of Interactive Verifiable Computing: Utilizing Randomness in Low-Degree Polynomials
This survey provides a comprehensive examination of verifiable computing, tracing its evolution from foundational complexity theory to modern zero-knowledge succinct non-interactive arguments of knowledge (ZK-SNARKs). We explore key developments in interactive proof systems, knowledge complexity, and the application of low-degree polynomials in error detection and verification protocols. The survey delves into essential mathematical frameworks such as the Cook-Levin Theorem, the sum-check protocol, and the GKR protocol, highlighting their roles in enhancing verification efficiency and soundness. By systematically addressing the limitations of traditional NP-based proof systems and then introducing advanced interactive proof mechanisms to overcome them, this work offers an accessible step-by-step introduction for newcomers while providing detailed mathematical analyses for researchers. Ultimately, we synthesize these concepts to elucidate the GKR protocol, which serves as a foundation for contemporary verifiable computing models. This survey not only reviews the historical and theoretical advancements in verifiable computing over the past three decades but also lays the groundwork for understanding recent innovations in the field.
All-You-Can-Compute: Packed Secret Sharing for Combined Resilience
Unprotected cryptographic implementations are vulnerable to implementation attacks, such as passive side-channel attacks and active fault injection attacks. Recently, countermeasures like polynomial masking and duplicated masking have been introduced to protect implementations against combined attacks that exploit leakage and faults simultaneously.
While duplicated masking requires $O(t \cdot e)$ shares to resist an adversary capable of probing $t$ values and faulting $e$ values, polynomial masking requires only $O(t + e)$ shares, which is particularly beneficial for affine computation.
At CHES'$24$, Arnold et al. showed how to further improve the efficiency of polynomial masking in the presence of combined attacks by embedding two secrets into one polynomial sharing, essentially halving the complexity of previous constructions. The authors also observed that techniques from packed secret sharing (Grosso et al., CHES'$13$) cannot easily be used to achieve combined resilience while encoding an arbitrary number of secrets in one polynomial encoding.
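For background on the packed-secret-sharing idea referenced above, here is a minimal cleartext sketch of packing several secrets into one polynomial over a prime field; the field, thresholds, and evaluation points are illustrative assumptions, and none of the combined leakage/fault-resilient gadgets that are the subject of the paper appear here.

```python
# Minimal packed Shamir sharing over GF(p): k secrets in one degree-(t+k-1) polynomial.
import random

p = 2**61 - 1                      # a prime field (toy choice)
k, t, n = 3, 2, 8                  # k secrets, privacy threshold t, n shares

secret_points = [-(i + 1) % p for i in range(k)]   # evaluation points for the secrets
share_points  = list(range(1, n + 1))              # evaluation points for the shares

def lagrange_eval(points, values, x):
    """Evaluate the unique polynomial through (points[i], values[i]) at x, mod p."""
    total = 0
    for i, (xi, yi) in enumerate(zip(points, values)):
        num, den = 1, 1
        for j, xj in enumerate(points):
            if i != j:
                num = num * ((x - xj) % p) % p
                den = den * ((xi - xj) % p) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p
    return total

def share(secrets):
    # Fix the polynomial by its values at the k secret points and t random points;
    # its degree is then at most t + k - 1, so any t shares are independent of the secrets.
    rand_points = [n + 1 + i for i in range(t)]
    rand_values = [random.randrange(p) for _ in range(t)]
    interp_x = secret_points + rand_points
    interp_y = list(secrets) + rand_values
    return [lagrange_eval(interp_x, interp_y, a) for a in share_points]

def reconstruct(shares):
    # Any t + k shares determine the polynomial; here we simply use them all.
    return [lagrange_eval(share_points, shares, e) for e in secret_points]

msg = [11, 22, 33]
assert reconstruct(share(msg)) == msg
```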
In this work, we resolve these challenges and show that it is possible to embed an arbitrary number of secrets in one encoding and propose gadgets that are secure against combined attacks. We present two constructions that are generic and significantly improve the computational and randomness complexity of existing compilers, such as the laOla compiler presented by Berndt et al. at CRYPTO'$23$ and its improvement by Arnold et al.
For example, for an AES evaluation that protects against $t$ probes and $e$ faults, we improve the randomness complexity of the state-of-the-art construction when $t+e>3$, leading to an improvement of up to a factor of $2.41$.
Scalable Post-Quantum Oblivious Transfers for Resource-Constrained Receivers
It is imperative to modernize traditional core cryptographic primitives, such as Oblivious Transfer (OT), to address the demands of the new digital era, where privacy-preserving computations are executed on low-power devices. This modernization is not merely an enhancement but a necessity to ensure security, efficiency, and continued relevance in an ever-evolving technological landscape.
This work introduces two scalable OT schemes: (1) Helix OT, a $1$-out-of-$n$ OT, and (2) Priority OT, a $t$-out-of-$n$ OT. Both schemes provide unconditional security, ensuring resilience against quantum adversaries. Helix OT achieves a receiver-side download complexity of $O(1)$. In big data scenarios, where certain data may be more urgent or valuable, we propose Priority OT. With a receiver-side download complexity of $O(t)$, this scheme allows data to be received based on specified priorities. By prioritizing data transmission, Priority OT ensures that the most important data is received first, optimizing bandwidth, storage, and processing resources. Performance evaluations indicate that Helix OT completes the transfer of 1 out of $n=$ 16,777,216 messages in 9 seconds, and Priority OT handles $t=$ 1,048,576 out of $n$ selections in 30 seconds. Both outperform existing $t$-out-of-$n$ OTs (when $t\geq 1$), underscoring their suitability for large-scale applications. To the best of our knowledge, Helix OT and Priority OT introduce unique advancements that distinguish them from previous schemes.
Statistical testing of random number generators and their improvement using randomness extraction
Random number generators (RNGs) are notoriously challenging to build and test, especially for cryptographic applications. While statistical tests cannot definitively guarantee an RNG’s output quality, they are a powerful verification tool and the only universally applicable testing method. In this work, we design, implement, and present various post-processing methods, using randomness extractors, to improve the RNG output quality and compare them through statistical testing. We begin by performing intensive tests on three RNGs—the 32-bit linear feedback shift register (LFSR), Intel’s ‘RDSEED,’ and IDQuantique’s ‘Quantis’—and compare their performance. Next, we apply the different post-processing methods to each RNG and conduct further intensive testing on the processed output. To facilitate this, we introduce a comprehensive statistical testing environment, based on existing test suites, that can be parametrised for lightweight (fast) to intensive testing.
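As a small illustration of what post-processing raw generator output with an extractor looks like (far simpler than the extractors studied in the paper), the classic von Neumann extractor below removes bias from independent coin flips; the bias level and sample size are arbitrary choices for the demonstration.

```python
# Toy post-processing example: the von Neumann extractor removes bias from a
# stream of independent but biased bits (it does not handle correlations).
import random

def biased_bits(n, p_one=0.7, seed=1):
    rng = random.Random(seed)
    return [1 if rng.random() < p_one else 0 for _ in range(n)]

def von_neumann(bits):
    out = []
    for b0, b1 in zip(bits[0::2], bits[1::2]):
        if b0 != b1:          # 01 -> 0, 10 -> 1; discard 00 and 11
            out.append(b0)
    return out

raw = biased_bits(1_000_000)
ext = von_neumann(raw)
print("raw bias:      ", sum(raw) / len(raw))    # ~0.70
print("extracted bias:", sum(ext) / len(ext))    # ~0.50
print("output length: ", len(ext))               # about 2*p*(1-p)*(n/2) bits
```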
Zero-Knowledge Proofs for Set Membership: Efficient, Succinct, Modular
We consider the problem of proving in zero knowledge that an element of a public set satisfies a given property without disclosing the element, i.e., for some $u$, ``$u \in S$ and $P(u)$ holds''. This problem arises in many applications (anonymous cryptocurrencies, credentials or whitelists) where, for privacy or anonymity reasons, it is crucial to hide certain data while ensuring properties of such data.
We design new \textit{modular} and \textit{efficient} constructions for this problem through new \textit{commit-and-prove zero-knowledge systems for set membership}, i.e. schemes proving $u \in S$ for a value $u$ that is in a public commitment $c_u$. We also extend our results to support {\em non-membership proofs}, i.e. proving $u \notin S$.
Being commit-and-prove, our solutions can act as plug-and-play modules in statements of the form ``$u \in S$ and $P(u)$ holds'' by combining our set (non-)membership systems with any other commit-and-prove scheme for $P(u)$. Also, they work with Pedersen commitments over prime order groups which makes them compatible with popular systems such as Bulletproofs or Groth16.
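For readers unfamiliar with the commitment scheme mentioned above, the sketch below shows a Pedersen commitment in a prime-order subgroup of $\mathbb{Z}_p^*$, including its additive homomorphism; the safe-prime group and parameter sizes are toy stand-ins (real systems such as Bulletproofs use elliptic-curve groups), and the helper names are ours.

```python
# Toy Pedersen commitment in the order-q subgroup of Z_p^*, p = 2q + 1 a safe prime.
# Illustration only: real systems use elliptic-curve groups and ~256-bit orders.
import secrets
from sympy import isprime, randprime

def safe_prime(bits=64):
    while True:
        q = randprime(2**(bits - 1), 2**bits)
        p = 2 * q + 1
        if isprime(p):
            return p, q

p, q = safe_prime()
g = pow(secrets.randbelow(p - 2) + 2, 2, p)   # random square => element of order q (w.h.p.)
h = pow(secrets.randbelow(p - 2) + 2, 2, p)   # second, independent generator

def commit(u, r):
    """Pedersen commitment C = g^u * h^r mod p, hiding u with randomness r."""
    return (pow(g, u % q, p) * pow(h, r % q, p)) % p

def open_check(C, u, r):
    return C == commit(u, r)

u = 42                                  # committed value
r = secrets.randbelow(q)                # commitment randomness
C = commit(u, r)
assert open_check(C, u, r)

# Additive homomorphism: commit(u1, r1) * commit(u2, r2) = commit(u1 + u2, r1 + r2).
u2, r2 = 7, secrets.randbelow(q)
assert (C * commit(u2, r2)) % p == commit(u + u2, r + r2)
```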
We implemented our schemes as a software library, and tested experimentally their performance. Compared to previous work that achieves similar properties---the clever techniques combining zkSNARKs and Merkle Trees in Zcash---our solutions offer more flexibility, shorter public parameters and $3.7 \times$--$30\times$ faster proving time for a set of size $2^{64}$.
BIP32-Compatible Threshold Wallets
Cryptographic wallets are an essential tool to securely store and maintain users’ secret keys and, consequently, their funds in Blockchain networks. A compelling approach to constructing such wallets is to share the user’s secret key among several devices, such that an adversary must corrupt multiple machines to extract the entire secret key. Indeed, many leading cryptocurrency companies such as Coinbase, Binance, or ZenGo have started offering such distributed wallets to their customers. An important feature of a cryptographic wallet is its compatibility with the so-called BIP32 specification, the most widely adopted standard for cryptographic wallets. Essentially, BIP32 specifies the notion of a hierarchical deterministic wallet, which allows creating a key hierarchy in a deterministic fashion. Unfortunately, despite significant interest, no practically efficient solution for a fully distributed wallet scheme that also follows the BIP32 standard exists.
In this work, we show the first concretely efficient construction of a fully distributed wallet that is compliant with the BIP32 standard. To this end, we first provide a game-based notion of threshold signatures with rerandomizable keys and show an instantiation via the Gennaro and Goldfeder threshold ECDSA scheme (CCS’18). We then observe that one of BIP32’s key derivation mechanisms, the so-called hardened derivation, cannot efficiently be translated to the threshold setting. Instead, we devise a novel and efficient hardened derivation mechanism for the threshold setting that satisfies the same properties as the original mechanism as specified by BIP32. As a final contribution, we evaluate our solution with respect to its running time and communication cost.
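As a rough illustration of why non-hardened BIP32 derivation interacts well with rerandomizable keys, the sketch below shows the additive key tweak that underlies non-hardened child derivation; the HMAC layout and encodings are simplified placeholders rather than the exact BIP32 byte format, and the threshold/distributed version that is this paper's contribution is not shown.

```python
# Simplified sketch of BIP32-style non-hardened child-key derivation as an
# additive "rerandomization" of the parent key (not the exact BIP32 encoding).
import hmac, hashlib

# Order of the secp256k1 group, as used by BIP32.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFDEB2E8C2D364141

def tweak(chain_code: bytes, parent_pub: bytes, index: int) -> int:
    """Public tweak t derived from public data only (non-hardened case)."""
    data = parent_pub + index.to_bytes(4, "big")
    return int.from_bytes(hmac.new(chain_code, data, hashlib.sha512).digest()[:32], "big") % N

# Anyone holding the parent *public* key and chain code can compute t, and then
#   child_pub = parent_pub + t*G,
# while only the secret-key holder(s) can compute
#   child_sk  = (parent_sk + t) mod N.
# In a threshold wallet, each share of parent_sk can be shifted by the same public t,
# which is why non-hardened derivation translates naturally to the threshold setting.
parent_sk  = 0x1234_5678_9ABC_DEF0
chain_code = b"\x01" * 32
parent_pub = b"\x02" + parent_sk.to_bytes(32, "big")  # placeholder encoding, not a real EC point

t = tweak(chain_code, parent_pub, index=0)
child_sk = (parent_sk + t) % N
print(hex(child_sk))
```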
Hash your Keys before Signing: BUFF Security of the Additional NIST PQC Signatures
In this work, we analyze the so-called Beyond UnForgeability Features (BUFF) security of the submissions to the current standardization process of additional signatures by NIST. The BUFF notions formalize security against maliciously generated keys and have various real-world use cases, where security can be guaranteed despite misuse potential on a protocol level. Consequently, NIST declared the security against the BUFF notions as desirable features. Despite NIST's interest, only $6$ out of $40$ schemes consider BUFF security at all, but none give a detailed analysis. We close this gap by analyzing the schemes based on codes, isogenies, lattices, and multivariate equations. The results vary from schemes that achieve neither notion (e.g., Wave) to schemes that achieve all notions (e.g., PROV). In particular, we dispute certain claims by SQUIRRELS and VOX regarding their BUFF security. Resulting from our analysis, we observe that three schemes (CROSS, HAWK and PROV) achieve BUFF security without having the hash of public key and message as part of the signature, as BUFF transformed schemes would have. HAWK and PROV essentially use the lighter PS-3 transform by Pornin and Stern (ACNS'05). We further point out whether this transform suffices for the other schemes to achieve the BUFF notions, with both positive and negative results.
ZODA: Zero-Overhead Data Availability
We introduce ZODA, short for 'zero-overhead data availability,' which is a protocol for proving that symbols received from an encoding (for tensor codes) were correctly constructed. ZODA has optimal overhead for both the encoder and the samplers. Concretely, the ZODA scheme incurs essentially no incremental costs (in either computation or size) beyond those of the tensor encoding itself. In particular, we show that a slight modification to the encoding scheme for tensor codes allows sampled rows and columns of this modified encoding to become proofs of their own correctness. When used as part of a data availability sampling protocol, both encoders (who encode some data using a tensor code with some slight modifications) and samplers (who sample symbols from the purported encoding and verify that these samples are correctly encoded) incur no incremental communication costs and only small additional computational costs over having done the original tensor encoding. ZODA additionally requires no trusted setup and is plausibly post-quantum secure.
Parametrizing Maximal Orders Along Supersingular $\ell$-Isogeny Paths
Suppose you have a supersingular $\ell$-isogeny graph with vertices given by $j$-invariants defined over $\mathbb{F}_{p^2}$, where $p = 4 \cdot f \cdot \ell^e - 1$ and $\ell \equiv 3 \pmod{4}$. We give an explicit parametrization of the maximal orders in $B_{p,\infty}$ appearing as endomorphism rings of the elliptic curves in this graph that are $\leq e$ steps away from a root vertex with $j$-invariant 1728. This is the first explicit parametrization of this kind and we believe it will be an aid in better understanding the structure of supersingular $\ell$-isogeny graphs that are widely used in cryptography. Our method makes use of the inherent directions in the supersingular isogeny graph induced via Bruhat-Tits trees, as studied in [1]. We also discuss how in future work other interesting use cases, such as $\ell=2$, could benefit from the same methodology.
Concurrent Security of Anonymous Credentials Light, Revisited
We revisit the concurrent security guarantees of the well-known Anonymous Credentials Light (ACL) scheme (Baldimtsi and Lysyanskaya, CCS'13). This scheme was originally proven secure when executed sequentially, and its concurrent security was left as an open problem.
A later work of Benhamouda et al. (EUROCRYPT'21) gave an efficient attack on ACL when executed concurrently, seemingly resolving this question once and for all.
In this work, we point out a subtle flaw in the attack of Benhamouda et al. on ACL and show, contrary to popular belief, that it can be proven concurrently secure.
Our modular proof in the algebraic group model uses an ID scheme as an intermediate step and leads to a major simplification of the complex security argument for Abe's Blind Signature scheme by Kastner et al. (PKC'22).
A New Paradigm for Server-Aided MPC
The server-aided model for multiparty computation (MPC) was introduced to capture a real-world scenario where clients wish to off-load the heavy computation of MPC protocols to dedicated servers. A rich body of work has studied various trade-offs between security guarantees (e.g., semi-honest vs malicious), trust assumptions (e.g., the threshold on corrupted servers), and efficiency.
However, all existing works make the assumption that all clients must agree on employing the same servers, and accept the same corruption threshold. In this paper, we challenge this assumption and introduce a new paradigm for server-aided MPC, where each client can choose their own set of servers and their own threshold of corrupted servers. In this new model, the privacy of each client is guaranteed as long as their own threshold is satisfied, regardless of the other servers/clients. We call this paradigm per-party private server-aided MPC to highlight both a security and efficiency guarantee: (1) per-party privacy, which means that each party gets their own privacy guarantees that depend on their own choice of the servers; (2) per-party complexity, which means that each party only needs to communicate with their chosen servers. Our primary contribution is a new theoretical framework for server-aided MPC. We provide two protocols to show feasibility, but leave it as future work to investigate protocols that focus on concrete efficiency.
Round-Optimal Compiler for Semi-Honest to Malicious Oblivious Transfer via CIH
A central question in the theory of cryptography is whether we can build protocols that achieve stronger security guarantees, e.g., security against malicious adversaries, by combining building blocks that achieve much weaker security guarantees, e.g., security only against semi-honest adversaries; and with the minimal number of rounds. An additional focus is whether these building blocks can be used only as a black-box. Since Oblivious Transfer (OT) is the necessary and sufficient building block to securely realize any two-party (and multi-party) functionality, theoreticians often focus on proving whether maliciously secure OT can be built from a weaker notion of OT.
There is a rich body of literature that provides (black-box) compilers that build malicious OT from OTs that achieve weaker security such as semi-malicious OT and defensibly secure OT, within the minimal number of rounds. However, no round-optimal compiler exists that builds malicious OT from the weakest notion of semi-honest OT, in the plain model.
Correlation intractable hash (CIH) functions are special hash functions whose properties allow instantiating the celebrated Fiat-Shamir transform, and hence reduce the round complexity of public-coin proof systems.
In this work, we devise the first round-optimal compiler from semi-honest OT to malicious OT, by a novel application of CIH for collapsing rounds in the plain model. We provide the following contributions. First, we provide a new CIH-based round-collapsing construction for general cut-and-choose. This gadget can be used generally to prove the correctness of the evaluation of a function. Then, we use our gadget to build the first round-optimal compiler from semi-honest OT to malicious OT.
Our compiler uses the semi-honest OT protocol and the other building blocks in a black-box manner. However, for technical reasons, the underlying CIH construction requires an upper bound on the circuit size of the semi-honest OT protocol used. The need for this upper bound makes our protocol not fully black-box, and hence incomparable with existing, fully black-box compilers.
Blind accumulators for e-voting
We present a novel cryptographic primitive, the blind accumulator, aimed at constructing e-voting systems. Blind accumulators collect private keys of eligible voters in a decentralized manner without learning anything about the keys. Once the accumulation is complete, a voter processes the resulting accumulator and derives a public key which refers to a private key previously added by this voter. Public keys are derived deterministically and can therefore stand as fixed voter pseudonyms. The voter can prove that the derived key refers to some accumulated private key without revealing either that key or the voter's identity. The voter uses the accumulated private key to sign a ballot. The corresponding public key is used to verify the signature. Since the public key is fixed, it is easy to achieve verifiability, to protect against multiple submissions of ballots by the same voter or, conversely, to allow multiple submissions but count only the last one. We suggest a syntax of blind accumulators and security requirements for them. We embed blind accumulators in the Pseudonymous Key Generation (PKG) protocol which details the use of accumulators in practical settings close to e-voting. We propose an instantiation of the blind accumulator scheme whose main computations resemble the Diffie-Hellman protocol. We justify the security of the proposed instantiation.
Compressing Encrypted Data Over Small Fields
$\newcommand{\gen}{\mathsf{Gen}}\newcommand{\enc}{\mathsf{Enc}}\newcommand{\dec}{\mathsf{Dec}}$
A recent work of Fleischhacker, Larsen, and Simkin (Eurocrypt 2023) shows how to efficiently compress encrypted sparse vectors.
Subsequently, Fleischhacker, Larsen, Obremski, and Simkin (Eprint 2023) improve upon their work and provide more efficient constructions solving the same problem.
Being able to efficiently compress such vectors is very useful in a variety of applications, such as private information retrieval, searchable encryption, and oblivious message retrieval.
Concretely, assume one is given a vector $(m_1, \dots, m_n)$ with at most $t$ distinct indices $i \in [n]$, such that $m_i \neq 0$ and assume $(\gen,\enc, \dec)$ is an additively homomorphic encryption scheme.
The authors show that one can compress $(\enc(k, m_1), \dots, \enc(k, m_n))$, without knowing the secret key $k$, into a digest with size dependent on the upper bound on the sparsity $t$, but not on the total vector length $n$.
Unfortunately, all existing constructions either only work for encryption schemes that have sufficiently large plaintext spaces or require additional encrypted auxiliary information about the plaintext vector.
In this work, we show how to generalize the results of Fleischhacker, Larsen, and Simkin to encryption schemes with arbitrarily small plaintext spaces.
Our construction follows the same general ideas laid out in previous works but modifies them in a way that allows compressing the encrypted vectors correctly with high probability, independently of the size of the plaintext space.
Highly Efficient Server-Aided Multiparty Subfield VOLE Distribution Protocol
With the recent development of secure multi-party computation (MPC), pseudorandom correlations of the subfield vector oblivious linear evaluation (sVOLE) type have become popular due to their wide applicability in multi-dimensional MPC protocols such as privacy-preserving biometric identification and privacy-preserving machine learning. In this paper, we introduce a novel way of distributing sVOLE correlations in three-party and four-party honest-majority settings with the aid of a trusted server. This new method significantly decreases the communication cost and the memory storage of random sVOLE instances. It also enables a streamlined distribution process that can generate an sVOLE instance of arbitrary length, which results in a 100 percent utility rate of random sVOLE in multi-dimensional MPC protocols while preserving complete precomputability.
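As background on what is being distributed, the following is a hedged sketch of the target correlation when dealt out directly by a trusted server (plain VOLE over a single prime field for simplicity; subfield VOLE additionally restricts the u-vector to a subfield, and the paper's three- and four-party protocols are not modeled):

```python
# Trusted-dealer (s)VOLE-style correlation: the sender ends up with vectors
# (u, w) and the receiver with (Delta, v) such that w = Delta*u + v holds
# componentwise. Field size and length are illustrative.
import secrets

P = 2**61 - 1                          # a Mersenne prime, used only as a convenient field

def deal_vole(n: int):
    delta = secrets.randbelow(P)
    u = [secrets.randbelow(P) for _ in range(n)]
    v = [secrets.randbelow(P) for _ in range(n)]
    w = [(delta * ui + vi) % P for ui, vi in zip(u, v)]
    return (u, w), (delta, v)          # sender's share, receiver's share

(u, w), (delta, v) = deal_vole(4)
assert all(wi == (delta * ui + vi) % P for ui, wi, vi in zip(u, w, v))
```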
How to use your brain for cryptography without trustworthy machines
In this work, we study cryptosystems that can be executed securely without fully trusting all machines, but only trusting the user's brain. This paper focuses on signature schemes. We first introduce a new concept called ``server-aided in-brain signature,'' a cryptographic protocol between a human brain and multiple servers to sign a message securely even if the user's device and the servers are not completely trusted. Second, we propose a concrete scheme that is secure against mobile hackers in the servers and malware infecting the user's devices.
Efficient CPA Attack on Hardware Implementation of ML-DSA in Post-Quantum Root of Trust
Side-channel attacks (SCA) pose a significant threat to cryptographic implementations, including those designed to withstand the computational power of quantum computers.
This paper introduces the first side-channel attack on an industry-grade post-quantum cryptography implementation.
Specifically, we present a Correlation Power Analysis (CPA) attack targeting the open-source hardware implementation of ML-DSA within a Silicon Root of Trust framework developed through a multi-party collaboration involving leading technology companies.
Our attack focuses on the modular reduction process that follows the Number Theoretic Transform-based polynomial pointwise multiplication.
By exploiting side-channel leakage from a distinctive reduction algorithm and leveraging the zeroization mechanism used to securely erase sensitive information by clearing internal registers, we significantly enhance the attack's efficacy.
Our findings reveal that an adversary can extract the secret keys using only 10,000 power traces.
With access to these keys, an attacker could forge signatures for certificate generation, thereby compromising the integrity of the root of trust.
This work highlights the vulnerabilities of industry-standard root-of-trust systems to side-channel attacks. It underscores the urgent need for robust countermeasures to secure commercially deployed systems against such threats.
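For readers unfamiliar with CPA itself, the snippet below is a generic, hedged sketch of the technique on simulated traces (a textbook recipe, not the paper's attack on ML-DSA's reduction step or its use of the zeroization mechanism):

```python
# Generic correlation power analysis: correlate a hypothetical power model
# (here, the Hamming weight of a key-dependent intermediate) against measured
# traces and keep the key guess with the highest correlation peak.
import numpy as np

def hw(x: int) -> int:
    """Hamming weight of an 8-bit value."""
    return bin(int(x) & 0xFF).count("1")

def cpa_best_guess(traces, public_inputs, intermediate):
    """traces: (num_traces, num_samples) floats; intermediate(inp, guess) -> byte."""
    centered_t = traces - traces.mean(axis=0)
    t_norm = np.linalg.norm(centered_t, axis=0) + 1e-12
    scores = []
    for guess in range(256):
        model = np.array([hw(intermediate(x, guess)) for x in public_inputs], dtype=float)
        centered_m = model - model.mean()
        corr = centered_m @ centered_t / (np.linalg.norm(centered_m) * t_norm + 1e-12)
        scores.append(np.max(np.abs(corr)))         # peak over all sample points
    return int(np.argmax(scores))

# Self-test on simulated traces that leak HW(input XOR key) plus Gaussian noise.
rng = np.random.default_rng(0)
true_key = 0x3C
inputs = rng.integers(0, 256, size=2000)
inter = lambda x, k: (int(x) ^ k) & 0xFF
leak = np.array([hw(inter(x, true_key)) for x in inputs], dtype=float)
traces = leak[:, None] + rng.normal(0.0, 1.0, size=(2000, 5))
assert cpa_best_guess(traces, inputs, inter) == true_key
```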
Committing AE from Sponges: Security Analysis of the NIST LWC Finalists
Committing security has gained considerable attention in the field of authenticated encryption (AE). This can be traced back to a line of recent attacks, which entail that AE schemes used in practice should not only provide confidentiality and authenticity, but also committing security. Roughly speaking, a committing AE scheme guarantees that ciphertexts will decrypt only for one key. Despite the recent research effort in this area, the finalists of the NIST lightweight cryptography standardization process have not been put under consideration yet. We close this gap by providing an analysis of these schemes with respect to their committing security. Despite the structural similarities the finalists exhibit, our results are of a quite heterogeneous nature: We break four of the schemes with effectively no costs, while for two schemes our attacks are costlier, yet still efficient. For the remaining three schemes ISAP, Ascon, and (a slightly modified version of) Schwaemm, we give formal security proofs. Our analysis reveals that sponges are well-suited for building committing AE schemes. Furthermore, we show several negative results when applying the zero-padding method to the NIST finalists.
Quantum-resistant secret handshakes with dynamic joining, leaving, and banishment: GCD revisited
Secret handshakes, introduced by Balfanz et al. [3], allow users associated with various groups to determine if they share a common affiliation. These protocols ensure crucial properties such as fairness (all participants learn the result simultaneously), affiliation privacy (failed handshakes reveal no affiliation information), and result-hiding (even participants within a shared group cannot infer outcomes of unrelated handshakes). Over time, various secret-handshake schemes have been proposed, with a notable advancement being the modular framework by Tsudik and Xu. Their approach integrates three key components: group signature schemes, centralized secure channels for each group, and decentralized group key-agreement protocols.
Building upon this modularity, we propose significant updates. By addressing hidden complexities and revising the security model, we enhance both the efficiency and the privacy guarantees of the protocol. Specifically, we achieve the novel property of Self distinction—the ability to distinguish between two users in a session without revealing their identities—by replacing the group signature primitive with a new construct, the List MAC. This primitive is inherently untraceable, necessitating adjustments to the original syntax to support stronger privacy guarantees. Consequently, we introduce the Traitor Catching paradigm, where the transcript of a handshake reveals only the identity of a traitor, preserving the anonymity of all other participants.
To showcase the flexibility and robustness of our updated framework, we present two post-quantum instantiations (a hash-based one and another based on lattices). Our approach not only corrects prior limitations but also establishes a new benchmark for privacy and security in secret handshakes.
The cool and the cruel: separating hard parts of LWE secrets
Sparse binary LWE secrets are under consideration for standardization for Homomorphic Encryption and its applications to private computation [20]. Known attacks on sparse binary LWE secrets include the sparse dual attack [5] and the hybrid sparse dual-meet in the middle attack [19], which requires significant memory. In this paper, we provide a new statistical attack with low memory requirement. The attack relies on some initial lattice reduction. The key observation is that, after lattice reduction is applied to the rows of a q-ary-like embedded random matrix A, the entries with high variance are concentrated in the early columns of the extracted matrix. This allows us to separate out the “hard part” of the LWE secret. We can first solve the sub-problem of finding the “cruel” bits of the secret in the early columns, and then find the remaining “cool” bits in linear time. We use statistical techniques to distinguish distributions to identify both cruel and cool bits of the secret. We recover secrets in dimensions n = 256, 512, 768, 1024 and provide concrete attack timings. For the lattice reduction stage, we leverage recent improvements in lattice reduction (flatter [34]) applied in parallel. We also apply our new attack to RLWE with 2-power cyclotomic rings, showing that these RLWE instances are much more vulnerable to this attack than LWE.
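The toy snippet below (dimensions and distributions invented for illustration, with no actual lattice reduction) mimics the core statistical idea: when only a few "cruel" columns carry large entries, the centered residual for the correct cruel-bit guess has a markedly smaller variance:

```python
# Toy distinguisher: brute-force the "cruel" bits and score each guess by the
# variance of the centered residual; the correct guess leaves only the small
# "cool" contribution plus noise.
import itertools
import numpy as np

rng = np.random.default_rng(1)
q, m, n_cruel, n_cool = 3329, 400, 6, 20

A_cruel = rng.integers(0, q, size=(m, n_cruel))     # large entries mod q
A_cool = rng.integers(-3, 4, size=(m, n_cool))      # "reduced", small entries
s_cruel = rng.integers(0, 2, size=n_cruel)
s_cool = rng.integers(0, 2, size=n_cool)
e = rng.integers(-2, 3, size=m)
b = (A_cruel @ s_cruel + A_cool @ s_cool + e) % q

def centered_var(x):
    c = ((x + q // 2) % q) - q // 2                 # center representatives mod q around 0
    return float(np.var(c))

best = min(itertools.product([0, 1], repeat=n_cruel),
           key=lambda g: centered_var((b - A_cruel @ np.array(g)) % q))
assert np.array_equal(np.array(best), s_cruel)
```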
Concrete Quantum Cryptanalysis of Shortest Vector Problem
This paper presents quantum circuits for the Nguyen–Vidick (NV) sieve algorithm to solve the Shortest Vector Problem (SVP) in lattice-based cryptography. We focus on optimizing the circuit depth of the quantum NV sieve, leveraging Grover’s algorithm to reduce the search complexity. Using the proposed quantum NV sieve, we estimate the quantum resources required to solve SVP for various dimensions. Specifically, for a dimension size of 512 (the parameter for Kyber-512), our implementation achieves a quantum attack cost of $2^{126.0045}$ in terms of the gate count–depth product metric used by the National Institute of Standards and Technology (NIST). To optimize circuit depth, we employ carry-lookahead and carry-save adders for efficient multi-addition operations. Further, our quantum NV sieve performs precise sieving by implementing fixed-point arithmetic, incorporating essential components (such as input setting, up-scaling, and two’s complement). To the best of our knowledge, previous work on quantum cryptanalysis of SVP using the sieve algorithm has remained theoretical, without proposing quantum circuits. Our work humbly demonstrates that the post-quantum security of lattice-based cryptography (with respect to the quantum attack complexity) falls between that of multivariate-based and code-based cryptography.
Cryptography is Rocket Science: Analysis of BPSec
Space networking has become an increasing area of development with the advent of commercial satellite networks such as those hosted by Starlink and Kuiper, and increased satellite and space presence by governments around the world. Yet, historically such network designs have not been made public, leading to limited formal cryptographic analysis of the security offered by them. One of the few public protocols used in space networking is the Bundle Protocol, which is secured by Bundle Protocol Security (BPSec), an Internet Engineering Task Force (IETF) standard. We undertake a first analysis of BPSec under its default security context, building a model of the secure channel security goals stated in the IETF standard, and note issues therein with message loss detection. We prove BPSec secure, and also provide a stronger construction, one that supports the Bundle Protocol's functionality goals while also ensuring destination awareness of missing message components.
Security Guidelines for Implementing Homomorphic Encryption
Fully Homomorphic Encryption (FHE) is a cryptographic primitive that allows performing arbitrary operations on encrypted data. Since the conception of the idea in [RAD78], it was considered a holy grail of cryptography. After the first construction in 2009 [Gen09], it has evolved to become a practical primitive with strong security guarantees. Most modern constructions are based on well-known lattice problems such as Learning with Errors (LWE). Besides its academic appeal, in recent years FHE has also attracted significant attention from industry, thanks to its applicability to a considerable number of real-world use-cases. An upcoming standardization effort by ISO/IEC aims to support the wider adoption of these techniques. However, one of the main challenges that standards bodies, developers, and end users usually encounter is establishing parameters. This is particularly hard in the case of FHE because the parameters are not only related to the security level of the system, but also to the type of operations that the system is able to handle. In this paper, we provide examples of parameter sets for LWE targeting particular security levels that can be used in the context of FHE constructions. We also give examples of complete FHE parameter sets, including the parameters relevant for correctness and performance, alongside those relevant for security. As an additional contribution, we survey the parameter selection support offered in open-source FHE libraries.
Leveled Functional Bootstrapping via External Product Tree
Multi-input and large-precision lookup table (LUT) evaluation poses significant challenges in Fully Homomorphic Encryption (FHE). Currently, two modes are employed to address this issue. One is tree-based functional bootstrapping (TFBS), which uses multiple blind rotations to construct a tree for LUT evaluation. The second is circuit bootstrapping, which decomposes all inputs into bits and utilizes a CMux tree for LUT evaluation. In this work, we propose a novel mode, leveled functional bootstrapping (LFBS), which utilizes an external product tree to perform multi-input functional bootstrapping. We implement TFBS and LFBS within the OpenFHE library. The results demonstrate that our method outperforms TFBS in both computational efficiency and bootstrapping key size. Specifically, for 12-bit and 16-bit input LUTs, our method is approximately two to three orders of magnitude faster than TFBS.
Finally, we introduce a novel scheme-switching framework that supports large-precision polynomial and non-polynomial evaluations. The framework leverages digital extraction techniques to enable seamless switching between the BFV and LFBS schemes.
On Witness Encryption and Laconic Zero-Knowledge Arguments
Witness encryption (WE) (Garg et al., STOC’13) is a powerful cryptographic primitive that is closely related to the notion of indistinguishability obfuscation (Barak et al., JACM’12, Garg et al., FOCS’13). For a given NP-language $L$, WE for $L$ enables encrypting a message $m$ using an instance $x$ as the public key, while ensuring that efficient decryption is possible by anyone possessing a witness for $x \in L$, and if $x\notin L$, then the encryption is hiding. We show that this seemingly sophisticated primitive is equivalent to a communication-efficient version of one of the most classic cryptographic primitives, namely a zero-knowledge argument (Goldwasser et al., SIAM’89, Brassard et al., JCSS’88): for any NP-language $L$, the following are equivalent:
- There exists a witness encryption for L;
- There exists a laconic (i.e., the prover communication is bounded by $O(\log n)$) special-honest verifier zero-knowledge (SHVZK) argument for $L$.
Our approach is inspired by an elegant (one-sided) connection between (laconic) zero-knowledge arguments and public-key encryption established by Berman et al. (CRYPTO’17) and Cramer-Shoup (EuroCrypt’02), and by the equivalence between the notion of so-called “predictable arguments” and witness encryption shown by Faonio, Nielsen, and Venturi (PKC’17).
Efficient Authentication Protocols from the Restricted Syndrome Decoding Problem
In this paper, we introduce an oracle version of the Restricted Syndrome Decoding Problem (RSDP) and propose novel authentication protocols based on the hardness of this problem. They follow the basic structure of the HB-family of authentication protocols and later improvements but demonstrate several advantages.
An appropriate choice of multiplicative subgroup and ring structure gives rise to a very efficient hardware implementation compared to other \emph{Learning Parity with Noise} based approaches. In addition, the new protocols also have lower key size, lower communication costs, and potentially better completeness/soundness compared to learning-based alternatives. This is appealing in the context of low-cost, low-powered authenticating devices such as radio frequency identification (RFID) systems.
Lastly, we show that with additional assumptions, RSDP can be used to instantiate a Man-in-the-Middle secured authentication protocol.
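For orientation, the sketch below shows the classic HB round structure that the abstract refers to (plain HB over GF(2) with illustrative parameters; the restricted-syndrome variant, the multiplicative subgroup, and the ring structure are not modeled):

```python
# Classic HB authentication: the reader sends random challenges a, the tag
# answers z = <a, s> + noise over GF(2), and the reader accepts if the number
# of mismatches stays below a threshold.
import numpy as np

rng = np.random.default_rng(5)
n, rounds, eps, threshold = 128, 256, 0.125, 0.25   # illustrative parameters

secret = rng.integers(0, 2, size=n)                 # shared key s

def tag_response(challenge, s):
    noise = int(rng.random() < eps)                 # Bernoulli(eps) noise bit
    return (int(challenge @ s) + noise) % 2         # <a, s> + nu over GF(2)

errors = 0
for _ in range(rounds):
    a = rng.integers(0, 2, size=n)                  # reader's random challenge
    z = tag_response(a, secret)
    errors += (z + int(a @ secret)) % 2             # reader recomputes <a, s>
print(errors, errors <= threshold * rounds)         # honest tag is accepted w.h.p.
```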
Lossy Cryptography from Code-Based Assumptions
Over the past few decades, we have seen a proliferation of advanced cryptographic primitives with lossy or homomorphic properties built from various assumptions such as Quadratic Residuosity, Decisional Diffie-Hellman, and Learning with Errors. These primitives imply hard problems in the complexity class $\mathcal{SZK}$ (statistical zero-knowledge); as a consequence, they can only be based on assumptions that are broken in $\mathcal{BPP}^{\mathcal{SZK}}$. This poses a barrier for building advanced primitives from code-based assumptions, as the only known such assumption is Learning Parity with Noise (LPN) with an extremely low noise rate $\frac{\log^2 n}{n}$, which is broken in quasi-polynomial time.
In this work, we propose a new code-based assumption: Dense-Sparse LPN, that falls in the complexity class $\mathcal{BPP}^{\mathcal{SZK}}$ and is conjectured to be secure against subexponential time adversaries. Our assumption is a variant of LPN that is inspired by McEliece's cryptosystem and random $k\mbox{-}$XOR in average-case complexity. Roughly, the assumption states that
\[(\mathbf{T}\, \mathbf{M}, \mathbf{s} \,\mathbf{T}\, \mathbf{M} + \mathbf{e}) \quad \text{is indistinguishable from}\quad (\mathbf{T} \,\mathbf{M}, \mathbf{u}),\] for a random (dense) matrix $\mathbf{T}$, random sparse matrix $\mathbf{M}$, and sparse noise vector $\mathbf{e}$ drawn from the Bernoulli distribution with inverse polynomial noise probability.
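For concreteness, here is a hedged sampler for this distribution (the dimensions, column sparsity, and noise rate below are illustrative choices of ours, not the paper's parameters):

```python
# Sample (T*M, s*T*M + e) over GF(2): T dense n x n, M sparse n x m with k ones
# per column, s a secret row vector, e Bernoulli(eps) noise.
import numpy as np

rng = np.random.default_rng(2)

def dense_sparse_lpn_sample(n=256, m=4096, k=3, eps=0.05):
    T = rng.integers(0, 2, size=(n, n))
    M = np.zeros((n, m), dtype=np.int64)
    for j in range(m):
        M[rng.choice(n, size=k, replace=False), j] = 1   # k-sparse column
    s = rng.integers(0, 2, size=n)
    e = (rng.random(m) < eps).astype(np.int64)
    TM = (T @ M) % 2
    return TM, (s @ TM + e) % 2

TM, b = dense_sparse_lpn_sample()
u = rng.integers(0, 2, size=b.shape[0])                  # the uniform alternative
```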
We leverage our assumption to build lossy trapdoor functions (Peikert-Waters STOC 08). This gives the first post-quantum alternative to the lattice-based construction in the original paper. Lossy trapdoor functions, being a fundamental cryptographic tool, are known to enable a broad spectrum of both lossy and non-lossy cryptographic primitives; our construction thus implies these primitives in a generic manner. In particular, we achieve collision-resistant hash functions with plausible subexponential security, improving over a prior construction from LPN with noise rate $\frac{\log^2 n}{n}$ that is only quasi-polynomially secure.
Leakage-Free Probabilistic Jasmin Programs
This paper presents a semantic characterization of leakage-freeness through timing side-channels for Jasmin programs. Our characterization covers probabilistic Jasmin programs that are not constant-time. In addition, we provide a characterization in terms of probabilistic relational Hoare logic and prove the equivalence between both definitions. We also prove that our new characterizations are compositional and relate our new definitions to existing ones from prior work, which could only be applied to deterministic programs.
To provide practical evidence, we use the Jasmin framework to develop a rejection sampling algorithm and provide an EasyCrypt proof that ensures the algorithm's implementation is leakage-free while not being constant-time.
Ringtail: Practical Two-Round Threshold Signatures from Learning with Errors
A threshold signature scheme splits the signing key among $\ell$ parties, such that any $t$-subset of parties can jointly generate signatures on a given message. Designing concretely efficient post-quantum threshold signatures is a pressing question, as evidenced by NIST's recent call.
In this work, we propose, implement, and evaluate a lattice-based threshold signature scheme, Ringtail, which is the first to achieve a combination of desirable properties:
(i) The signing protocol consists of only two rounds, where the first round is message-independent and can thus be preprocessed offline.
(ii) The scheme is concretely efficient and scalable to $t \leq 1024$ parties. For $128$-bit security and $t = 1024$ parties, we achieve $13.4$ KB signature size and $10.5$ KB of online communication.
(iii) The security is based on the standard learning with errors (LWE) assumption in the random oracle model. This improves upon the state-of-the-art (with comparable efficiency) which either has a three-round signing protocol [Eurocrypt'24] or relies on a new non-standard assumption [Crypto'24].
To substantiate the practicality of our scheme, we conduct the first WAN experiment deploying a lattice-based threshold signature, across 8 countries in 5 continents. We observe that an overwhelming majority of the end-to-end latency is consumed by network latency, underscoring the need for round-optimized schemes.
Fiat-Shamir Bulletproofs are Non-Malleable (in the Random Oracle Model)
Bulletproofs (Bünz et al. IEEE S&P 2018) are a celebrated ZK proof system that allows for short and efficient proofs, and have been implemented and deployed in several real-world systems. In practice, they are most often implemented in their non-interactive version obtained using the Fiat-Shamir transform. A security proof for this setting is necessary for ruling out malleability attacks. These attacks can lead to very severe vulnerabilities, as they allow an adversary to forge proofs re-using or modifying parts of the proofs provided by the honest parties. An earlier version of this work (Ganesh et al. EUROCRYPT 2022) provided evidence for non-malleability of Fiat-Shamir Bulletproofs. This was done by proving simulation-extractability, which implies non-malleability, in the algebraic group model.
In this work, we generalize the former result and prove simulation extractability in the programmable random oracle model, removing the need for the algebraic group model. Along the way, we establish a generic chain of reductions for Fiat-Shamir-transformed multi-round public-coin proofs to be simulation-extractable in the (programmable) random oracle model, which may be of independent interest.
Foundations of Platform-Assisted Auctions
Today, many auctions are carried out with the help of intermediary platforms like Google and eBay. These platforms serve as a rendezvous point for buyers and sellers, and charge a fee for their services. We refer to such auctions as platform-assisted auctions. Traditionally, the auction theory literature has mainly focused on designing auctions that incentivize the buyers to bid truthfully, assuming that the platform always faithfully implements the auction. In practice, however, platforms have been found to manipulate the auctions to earn more profit, resulting in high-profile anti-trust lawsuits.
We propose a new model for studying platform-assisted auctions in the permissionless setting, where anyone can register and participate in the auction. We explore whether it is possible to design a dream auction in this new model, such that honest behavior is the utility-maximizing strategy for each individual buyer, the platform, the seller, as well as platform-seller or platform-buyer coalitions. Through a collection of feasibility and infeasibility results, we carefully characterize the mathematical landscape of platform-assisted auctions.
Interestingly, our work reveals exciting connections between cryptography and mechanism design. We show how cryptography can lend itself to the design of an efficient platform-assisted auction with dream properties. Although a line of works has also used multi-party computation (MPC) or the blockchain to remove the reliance on a trusted auctioneer, our work is distinct in nature in several dimensions. First, we initiate a systematic exploration of the game-theoretic implications when the service providers (e.g., nodes that implement the MPC or blockchain protocol) are strategic and can collude with sellers or buyers. Second, we observe that the full simulation paradigm used in the standard MPC literature is too stringent and leads to high asymptotic costs. Specifically, because every player has a different private outcome in an auction protocol, to the best of our knowledge, running any generic MPC protocol among the players would incur at least $n^2$ total cost where $n$ is the number of buyers. We propose a new notion of simulation called {\it utility-dominated emulation} that is sufficient for guaranteeing the game-theoretic properties needed in an auction. Under this new notion of simulation, we show how to design efficient auction protocols with quasilinear efficiency, which gives an $n$-fold improvement over any generic approach.
Privacy-Preserving Dijkstra
Given a graph $G(V,E)$, represented as a secret-sharing of an adjacency list, we show how to obliviously convert it into an alternative, MPC-friendly secret-shared representation, so-called $d$-normalized replicated adjacency list (which we abbreviate to $d$-normalized), where the size of our new data-structure is only 4x larger -- compared to the original (secret-shared adjacency list) representation of $G$. Yet, this new data structure enables us to execute oblivious graph algorithms that simultaneously improve underlying graph algorithms' round, computation, and communication complexity. Our $d$-normalization proceeds in two steps:
First, we show how for any graph $G$, given a secret-shared adjacency list, where vertices are arbitrary alphanumeric strings that fit into a single RAM memory word, we can efficiently and securely rename vertices to integers from $1$ to $V$ that will appear in increasing order in the resulting secret-shared adjacency list. The secure renaming takes $O(\log V)$ rounds and $O((V+E)\log V)$ secure operations. Our algorithm also outputs two secret-shared arrays: a mapping from integers to alphanumeric names and its sorted inverse. Of course, if the adjacency list is already in such a format, this step could be omitted.
Second, given a secret-shared adjacency list for any graph $G$, where vertices are integers from $1$ to $V$ and are sorted, we show an oblivious $d$-normalization algorithm that takes $O(1)$ rounds and $O(V+E)$ secure operations.
We believe that both conversions are of independent interest. We demonstrate the power of our data structures by designing a privacy-preserving Dijkstra's single-source shortest-path algorithm that simultaneously achieves $O((V +E) \cdot \log V)$ secure operations and $O(V \cdot \log V \cdot \log \log\log V)$ rounds. Our secure Dijkstra algorithm works for any adjacency list representation as long as all vertex labels and weights can individually fit into a constant number of RAM memory words. Our algorithms work for two or a constant number of servers in the honest but curious setting. The limitation of our result (to only a constant number of servers) is due to our reliance on linear work and constant-round secure shuffle.
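As a plaintext (completely non-oblivious) analogue of the first conversion, purely to fix intuition about its input/output behaviour:

```python
# Rename alphanumeric vertex labels to integers 1..V in increasing order and
# return the renamed adjacency list plus the mapping and its inverse. The
# paper performs this on secret-shared data in O(log V) rounds; none of that
# machinery appears here.
def rename_vertices(adj: dict[str, list[str]]):
    names = sorted(adj)                                  # deterministic order
    to_int = {v: i + 1 for i, v in enumerate(names)}     # name -> 1..V
    to_name = {i + 1: v for i, v in enumerate(names)}    # 1..V -> name
    renamed = {to_int[v]: sorted(to_int[u] for u in adj[v]) for v in names}
    return renamed, to_int, to_name

g = {"s": ["a", "b"], "a": ["b"], "b": ["s"]}
renamed, to_int, to_name = rename_vertices(g)
assert to_int == {"a": 1, "b": 2, "s": 3}
assert renamed == {1: [2], 2: [3], 3: [1, 2]}
```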
On the Independence Assumption in Quasi-Cyclic Code-Based Cryptography
Cryptography based on the presumed hardness of decoding codes -- i.e., code-based cryptography -- has recently seen increased interest due to its plausible security against quantum attackers. Notably, of the four proposals for the NIST post-quantum standardization process that were advanced to their fourth round for further review, two were code-based. The most efficient proposals -- including HQC and BIKE, the NIST submissions alluded to above -- in fact rely on the presumed hardness of decoding structured codes. Of particular relevance to our work, HQC is based on quasi-cyclic codes, which are codes generated by matrices consisting of two cyclic blocks.
In particular, the security analysis of HQC requires a precise understanding of the Decryption Failure Rate (DFR), whose analysis relies on the following heuristic: given random "sparse" vectors $e_1,e_2$ (say, each coordinate is i.i.d. Bernoulli) multiplied by fixed "sparse" quasi-cyclic matrices $A_1,A_2$, the weight of the resulting vector $e_1A_1+e_2A_2$ is very concentrated around its expectation. In the documentation, the authors model the distribution of $e_1A_1+e_2A_2$ as a vector with independent coordinates (and correct marginal distribution). However, we uncover cases where this modeling fails. While this does not invalidate the (empirically verified) heuristic that the weight of $e_1A_1+e_2A_2$ is concentrated, it does suggest that the behavior of the noise is a bit more subtle than previously predicted. Lastly, we also discuss implications of our result for potential worst-case to average-case reductions for quasi-cyclic codes.
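A quick empirical look at the concentration heuristic can be run at toy parameters (far below HQC's, and chosen by us):

```python
# Sample sparse e1, e2 and fixed sparse circulant A1, A2 over GF(2) and report
# how the weight of e1*A1 + e2*A2 spreads around its mean.
import numpy as np

rng = np.random.default_rng(3)
n, w_mat, p_err, trials = 257, 15, 0.03, 200

def sparse_circulant(n, w):
    """Circulant matrix whose first row has w ones."""
    first = np.zeros(n, dtype=np.int64)
    first[rng.choice(n, size=w, replace=False)] = 1
    return np.stack([np.roll(first, i) for i in range(n)])

A1, A2 = sparse_circulant(n, w_mat), sparse_circulant(n, w_mat)
weights = []
for _ in range(trials):
    e1 = (rng.random(n) < p_err).astype(np.int64)
    e2 = (rng.random(n) < p_err).astype(np.int64)
    weights.append(int(((e1 @ A1 + e2 @ A2) % 2).sum()))
print(np.mean(weights), np.std(weights))                 # empirical mean and spread
```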
How Much Public Randomness Do Modern Consensus Protocols Need?
Modern blockchain-based consensus protocols aim for efficiency (i.e., low communication and round complexity) while maintaining security against adaptive adversaries. These goals are usually achieved using a public randomness beacon to select roles for each participant. We examine to what extent this randomness is necessary.
Specifically, we provide tight bounds on the amount of entropy a Byzantine Agreement protocol must consume from a beacon in order to enjoy efficiency and adaptive security. We first establish that no consensus protocol can simultaneously be efficient, be adaptively secure, and use $O(\log n)$ bits of beacon entropy. We then show this bound is tight and, in fact, a trilemma by presenting three consensus protocols that achieve any two of these three properties.
Stronger Security and Constructions of Multi-Designated Verifier Signatures
Off-the-Record (OTR) messaging is a two-party message authentication protocol that also provides plausible deniability: there is no record that can later convince a third party what messages were actually sent. To extend OTR to group messaging we need to consider issues that are not present in the 2-party case. In group OTR (as in two-party OTR), the sender should be able to authenticate (or sign) his messages so that group members can verify who sent a message (that is, signatures should be unforgeable, even by group members). Also as in the two-party case, we want the off-the-record property: even if some verifiers are corrupt and collude, they should not be able to prove the authenticity of a message to any outsider. Finally, we need consistency, meaning that a corrupt sender cannot create confusion in the group as to what he said: if any group member accepts a signature, then all of them do.
To achieve these properties it is natural to consider Multi-Designated Verifier Signatures (MDVS), which intuitively seem to target exactly the properties we require. However, existing literature defines and builds only limited notions of MDVS, where (a) the off-the-record property (referred to as source hiding) only holds when all verifiers could conceivably collude, and (b) the consistency property is not considered.
The contributions of this paper are two-fold: stronger definitions for MDVS, and new constructions meeting those definitions. We strengthen source-hiding to support any subset of corrupt verifiers, and give the first formal definition of consistency.
We give several constructions of our stronger notion of MDVS: one from generic standard primitives such as pseudorandom functions, pseudorandom generators, key agreement and NIZKs; one from specific instances of these primitives (for concrete efficiency); and one from functional encryption. The third construction requires an involved trusted setup step — including verification keys derived from a master secret — but this trusted setup buys us verifier-identity-based signing, for which such trusted setup is unavoidable. Additionally, in the third construction, the signature size can be made smaller by assuming a bound on colluding verifiers.
Probabilistic Attacks and Enhanced Security for "Private Set Intersection in the Internet Setting from Lightweight Oblivious PRF"
Private Set Intersection (PSI) has been an important research topic within privacy-preserving computation. Its main function is to allow two parties to compute the intersection of their private sets without revealing any other private information. Therefore, PSI can be applied to various real-world scenarios.
Chase and Miao presented an impressive construction, ``Private set intersection in the Internet setting from lightweight oblivious prf'' (CM20 for short), at Crypto 2020, highlighting its convenient structure and optimal communication cost. However, it does have some security vulnerabilities. Let $X$ be the private set of user $P_1$ and $Y$ be the private set of user $P_2$. The CM20 protocol uses a pseudorandom function (PRF) to encrypt each private element $x\in X$ of $P_1$ into $D_1$ and each private element $y\in Y$ of $P_2$ into $D_2$, where $D_1 = D_2$ whenever $x=y$. It then sends random data $F_1$ to user $P_1$ and random data $F_2$ to user $P_2$ using a random oblivious transfer technique. User $P_2$ computes $\delta=D_2\oplus F_2$ and sends $\delta$ to user $P_1$, and user $P_1$ uses $\delta$ and $F_1$ to re-encrypt $D_1$. This is repeated until $P_1$ has re-encrypted all elements of its private set $X$, which are then packaged and sent to $P_2$, who completes the private set intersection. However, if an adversary obtains $\delta$ and $F_2$ by any means, they can recover the PRF encryption of the user's private element, and the recovery process is non-trivial. This significantly weakens the security of the CM20 protocol.
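The recovery claim at the end of the preceding paragraph boils down to a single XOR, as the toy snippet below (with byte strings standing in for the protocol's actual values) illustrates:

```python
# Since delta = D2 XOR F2, anyone holding both delta and F2 recovers D2.
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

D2 = secrets.token_bytes(16)        # stands in for the PRF-encoded element
F2 = secrets.token_bytes(16)        # stands in for the OT-derived mask
delta = xor(D2, F2)                 # what P2 sends
assert xor(delta, F2) == D2         # recovery from delta and F2
```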
In this paper, we make three main contributions. First, based on the above analysis, we present a method for attacking CM20, called {\em probabilistic attacks}. This attack is based on estimating and analysing the frequency distribution of the encrypted data from the PRF and the probability distribution of the original private data, and determining the relationship between the two. Although not 100\% effective, this method of attack poses a significant threat to the security of user data.
Secondly, we introduce a new tool called the {\em perturbed pseudorandom generator} (PPRG). We show that the PPRG can overcome probabilistic attacks by replacing the random oblivious transfer and one of the hash functions (originally there were two) in CM20.
Finally, we provide a dedicated indistinguishability against chosen-plaintext attack (IND-CPA) security model for this PSI protocol. The efficiency analysis shows that the proposed PSI is comparable to CM20's PSI, whether run on a PC, Mac, tablet, or mobile phone.
Leuvenshtein: Efficient FHE-based Edit Distance Computation with Single Bootstrap per Cell
This paper presents a novel approach to calculating the Levenshtein (edit) distance within the framework of Fully Homomorphic Encryption (FHE), specifically targeting third-generation schemes like TFHE. Edit distance computations are essential in applications across finance and genomics, such as DNA sequence alignment. We introduce an optimised algorithm, called Leuvenshtein, that significantly reduces the cost of edit distance calculations. This algorithm specifically reduces the number of programmable bootstraps (PBS) needed per cell of the calculation, lowering it from approximately 28 operations (as required by the conventional Wagner-Fischer algorithm) to just 1. Additionally, we propose an efficient method for performing equality checks on characters, reducing ASCII character comparisons to only 2 PBS operations. Finally, we explore the potential for further performance improvements by utilizing preprocessing when one of the input strings is unencrypted. Our Leuvenshtein algorithm achieves up to $205\times$ faster performance compared to the best available TFHE implementation and up to $39\times$ faster than an optimised implementation of the Wagner-Fischer algorithm. Moreover, when offline preprocessing is possible due to the presence of one unencrypted input on the server side, an additional $3\times$ speedup can be achieved.
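For reference, the plaintext Wagner-Fischer recurrence that the scheme evaluates under encryption is shown below (nothing here is homomorphic; it only fixes the baseline being optimized):

```python
# Classical row-by-row Wagner-Fischer dynamic program for edit distance.
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution / match
        prev = cur
    return prev[-1]

assert edit_distance("kitten", "sitting") == 3
```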
Dynamically Available Common Subset
Internet-scale consensus protocols used by blockchains are designed to remain operational in the presence of unexpected temporary crash faults (the so-called sleepy model of consensus) -- a critical feature for the latency-sensitive financial applications running on these systems.
However, their leader-based architecture, where a single block proposer is responsible for creating the block at each height, makes them vulnerable to short-term censorship attacks, in which the proposers profit at the application layer by excluding certain transactions.
In this work, we introduce an atomic broadcast protocol, secure in the sleepy model, that ensures the inclusion of all transactions within a constant expected time, provided that at least one participating node is non-censoring at all times.
Unlike traditional approaches, our protocol avoids designating a single proposer per block height, instead leveraging a so-called dynamically available common subset (DACS) protocol -- the first of its kind in the sleepy model. Additionally, our construction guarantees deterministic synchronization -- once an honest node confirms a block, all other honest nodes do so within a constant time, thus addressing a shortcoming of many low-latency sleepy protocols.
Publicly-Detectable Watermarking for Language Models
We present a publicly-detectable watermarking scheme for LMs: the detection algorithm contains no secret information, and it is executable by anyone. We embed a publicly-verifiable cryptographic signature into LM output using rejection sampling and prove that this produces unforgeable and distortion-free (i.e., undetectable without access to the public key) text output. We make use of error-correction to overcome periods of low entropy, a barrier for all prior watermarking schemes. We implement our scheme and find that our formal claims are met in practice.
A New Method for Solving Discrete Logarithm Based on Index Calculus
The Index Calculus (IC) algorithm is the most effective probabilistic algorithm for solving discrete logarithms over prime finite fields, and it has been widely applied to cryptosystems based on elliptic curves. Since the IC algorithm was proposed in 1920, research on it has never stopped, especially for discrete logarithms over prime finite fields; both the algorithm itself and its applications have been greatly developed. There has also been some research on elliptic curves, but with little success. For the IC algorithm, scholars pay most attention to how to improve the probability of a successful solution and how to reduce the time complexity of the computation. The IICA is the first to study the optimization of the IC algorithm using a rounding method. However, the IICA only studies the case of rounding up and fails to consider rounding down. We find that the rounding direction of the IICA can be either up or down, but the concept of modular multiplication is needed when rounding down. After optimizing the IICA, the probability of successfully solving the discrete logarithm is increased by nearly a factor of two, and the number of transformations is also reduced to a certain extent, thus lowering the time complexity of the solution. The re-optimized IC algorithm greatly improves the probability of a successful IC solution. This research result poses a serious challenge to cryptosystems based on prime finite fields.
Leverage Staking with Liquid Staking Derivatives (LSDs): Opportunities and Risks
In the Proof of Stake (PoS) Ethereum ecosystem, users can stake ETH on Lido to receive stETH, a Liquid Staking Derivative (LSD) that represents staked ETH and accrues staking rewards. LSDs improve the liquidity of staked assets by facilitating their use in secondary markets, such as for collateralized borrowing on Aave or asset exchanges on Curve. The composability of Lido, Aave, and Curve enables an emerging strategy known as leverage staking, an iterative process that enhances financial returns while introducing potential risks. This paper establishes a formal framework for leverage staking with stETH and identifies 442 such positions on Ethereum over 963 days. These positions represent a total volume of 537,123 ETH (877m USD). Our data reveal that 81.7% of leverage staking positions achieved an Annual Percentage Rate (APR) higher than conventional staking on Lido. Despite the high returns, we also recognize the potential risks. For example, the Terra crash incident demonstrated that token devaluation can impact the market. Therefore, we conduct stress tests under extreme conditions of significant stETH devaluation to evaluate the associated risks. Our simulations reveal that leverage staking amplifies the risk of cascading liquidations by triggering intensified selling pressure through liquidation and deleveraging processes. Furthermore, this dynamic not only accelerates the decline of stETH prices but also propagates a contagion effect, endangering the stability of both leveraged and ordinary positions.
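As a back-of-the-envelope illustration of the mechanics (our own simplified model with a constant loan-to-value ratio per round, stETH pegged 1:1 to ETH, and no fees; the numbers are not from the paper):

```python
# Iteratively stake, borrow against the stETH at loan-to-value ratio r, and
# re-stake the borrowed ETH; report the net APR on 1 ETH of own capital.
def leverage_staking_apr(staking_apr: float, borrow_apr: float, r: float, rounds: int = 30) -> float:
    staked = borrowed = 0.0
    free = 1.0                          # start with 1 ETH of own capital
    for _ in range(rounds):
        staked += free                  # stake, receive stETH
        loan = r * free                 # borrow ETH against the new stETH
        borrowed += loan
        free = loan                     # re-stake the borrowed ETH next round
    return staking_apr * staked - borrow_apr * borrowed

# e.g. 4% staking APR, 2.5% ETH borrow APR, 70% loan-to-value:
print(round(leverage_staking_apr(0.04, 0.025, 0.70), 4))   # about 0.075 vs 0.04 unleveraged
```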
Wave Hello to Privacy: Efficient Mixed-Mode MPC using Wavelet Transforms
This paper introduces new protocols for secure multiparty computation (MPC) leveraging Discrete Wavelet Transforms (DWTs) for computing nonlinear functions over large domains. By employing DWTs, the protocols significantly reduce the overhead typically associated with Lookup Table-style (LUT) evaluations in MPC. We state and prove foundational results for DWT-compressed LUTs in MPC, present protocols for 9 of the most common activation functions used in ML, and experimentally evaluate the performance of our protocols for large domain sizes in the LAN and WAN settings. Our protocols are extremely fast: for instance, when considering 64-bit inputs, computing 1000 parallel instances of the sigmoid function with an error less than $2^{-24}$ takes only a few hundred milliseconds and incurs just 29\,KiB of online communication (40 bytes per evaluation).
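To get a plaintext feel for why wavelet compression of a LUT can work, the sketch below Haar-transforms a sigmoid table, keeps only the largest coefficients, and reports the reconstruction error (our own illustration; the paper's protocols operate on secret-shared data and may use different wavelets and parameters):

```python
# Haar DWT compression of a sigmoid lookup table (plaintext only).
import numpy as np

def haar_forward(x):
    """Multi-level orthonormal Haar transform (length must be a power of two)."""
    out = np.asarray(x, dtype=float).copy()
    n = len(out)
    while n > 1:
        a = (out[0:n:2] + out[1:n:2]) / np.sqrt(2.0)     # pairwise averages
        d = (out[0:n:2] - out[1:n:2]) / np.sqrt(2.0)     # pairwise details
        out[: n // 2], out[n // 2 : n] = a, d
        n //= 2
    return out

def haar_inverse(c):
    out = np.asarray(c, dtype=float).copy()
    n, total = 1, len(out)
    while n < total:
        a, d = out[:n].copy(), out[n : 2 * n].copy()
        out[0 : 2 * n : 2] = (a + d) / np.sqrt(2.0)
        out[1 : 2 * n : 2] = (a - d) / np.sqrt(2.0)
        n *= 2
    return out

xs = np.linspace(-8.0, 8.0, 1024)
lut = 1.0 / (1.0 + np.exp(-xs))                          # sigmoid lookup table
coeffs = haar_forward(lut)
assert np.allclose(haar_inverse(coeffs), lut)            # sanity: perfect reconstruction

k = 64                                                   # keep only the 64 largest coefficients
kept = np.zeros_like(coeffs)
idx = np.argsort(np.abs(coeffs))[-k:]
kept[idx] = coeffs[idx]
approx = haar_inverse(kept)
print(k / len(lut), float(np.max(np.abs(approx - lut)))) # compression ratio, max abs error
```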
A Note on the Minimality of One-Way Functions in Post-Quantum Cryptography
In classical cryptography, one-way functions (OWFs) play a central role as the minimal primitive that (almost) all primitives imply. The situation is more complicated in quantum cryptography, in which honest parties and adversaries can use quantum computation and communication, and it is known that analogues of OWFs in the quantum setting might not be minimal.
In this work we ask whether OWFs are minimal for the intermediate setting of post-quantum cryptography, in which the protocols are classical while they shall resist quantum adversaries. We show that for a wide range of natural settings, if a primitive Q implies OWFs, then so does its (uniformly or non-uniformly secure) post-quantum analogue. In particular, we show that if a primitive Q implies any other primitive P that has a 2-message security game (e.g., OWFs) through a black-box classical security reduction R, then one can always (efficiently) turn any polynomial-size quantum adversary breaking P into a polynomial-size quantum adversary breaking Q. Note that this result holds even if the implementation of P using that of Q is arbitrarily non-black-box.
We also prove extensions of this result for when the reduction R anticipates its oracle adversary to be deterministic, whenever either of the following conditions hold: (1) the adversary needs to win the security game of Q only with non-negligible probability (e.g., Q is collision-resistant hashing) or (2) that either of P and Q have "falsifiable" security games (this is the case when P is OWFs). Our work leaves open answering our main question when Q implies OWFs through a non-black-box security reduction, or when P uses a more complicated security game than a two-message one.
Fast SNARK-based Non-Interactive Distributed Verifiable Random Function with Ethereum Compatibility
Distributed randomness beacons (DRBs) are fundamental for various decentralised applications, such as consensus protocols, decentralised gaming and lotteries, and collective governance protocols. These applications are heavily used on modern blockchain platforms.
This paper presents the so far most efficient direct construction and implementation of a non-interactive distributed verifiable random function (NI-DVRF) that is fully compatible with Ethereum. Our NI-DVRF scheme adopts pairings and combines techniques from secret sharing, SNARKs, and BLS signatures. The security properties of the resulting NI-DVRF scheme are formally modelled and proven in the random oracle model under standard pairing-based assumptions.
To justify the efficiency and cost claims and more generally its adoption potential in practice, the proposed NI-DVRF scheme was implemented in Rust and Solidity. Our implementation is highly optimised and is currently being investigated for deployment on the multichain layer-2 scaling solution provided by Boba Network to power its DRB service zkRand. Our experimental analysis, therefore, also evaluates performance and scalability properties of the proposed NI-DVRF and its implementation.
Indistinguishability Obfuscation from Simple-to-State Hard Problems: New Assumptions, New Techniques, and Simplification
In this work, we study the question of what set of simple-to-state assumptions suffice for constructing functional encryption and indistinguishability obfuscation (iO), supporting all functions describable by polynomial-size circuits. Our work improves over the state-of-the-art work of Jain, Lin, Matt, and Sahai (Eurocrypt 2019) in multiple dimensions.
New Assumption: Prior to our work, all constructions of iO from simple assumptions required novel pseudorandomness generators involving LWE samples and constant-degree polynomials over the integers, evaluated on the error of the LWE samples. In contrast, Boolean pseudorandom generators (PRGs) computable by constant-degree polynomials have been extensively studied since the work of Goldreich (2000). We show how to replace the novel pseudorandom objects over the integers used in previous works with appropriate Boolean pseudorandom generators with sufficient stretch, when combined with LWE with binary error over suitable parameters. Both binary-error LWE and constant-degree Goldreich PRGs have been a subject of extensive cryptanalysis since well before our work, and thus we back the plausibility of our assumption with security against algorithms studied in the context of cryptanalysis of these objects.
New Techniques: We introduce a number of new techniques:
\begin{itemize}
\item We show how to build partially-hiding \emph{public-key} functional encryption, supporting degree-2 functions in the secret part of the message, and arithmetic $\mathsf{NC}^1$ functions over the public part of the message, assuming only standard assumptions over asymmetric pairing groups.
\item We construct single-ciphertext and single-secret-key functional encryption for all circuits with long outputs, which has the features of {\em linear} key generation and compact ciphertext, assuming only the LWE assumption.
\end{itemize}
Simplification: Unlike prior works, our new techniques furthermore let us construct {\em public-key} functional encryption for polynomial-sized circuits directly (without invoking any bootstrapping theorem, nor transformation from secret-key to public key FE), and based only on the {\em polynomial hardness} of underlying assumptions. The functional encryption scheme satisfies a strong notion of efficiency where the size of the ciphertext is independent of the size of the circuit to be computed, and grows only sublinearly in the output size of the circuit and polynomially in the input size and the depth of the circuit. Finally, assuming that the underlying assumptions are subexponentially hard, we can bootstrap this construction to achieve $iO$.
DL-SCADS: Deep Learning-Based Post-Silicon Side-Channel Analysis Using Decomposed Signal
Side-channel analysis (SCA) does not target an algorithm's mathematical weaknesses but rather its implementation. The rise of machine learning (ML) and deep learning (DL) is giving adversaries advanced capabilities to perform stealthy attacks. In this paper, we propose DL-SCADS, a DL-based approach combined with signal decomposition techniques to leverage the power of secret key extraction from post-silicon EM/power side-channel traces. We integrate the previously proven effective ideas of model ensembling and automated hyperparameter tuning with signal decomposition to develop an efficient and robust side-channel attack. Extensive experiments are performed on Advanced Encryption Standard (AES) and Post-Quantum Cryptography (PQC) implementations to demonstrate the efficacy of our approach. We also tabulate the performance of the side-channel attack when employing various decomposition techniques and compare it with the proposed approach across a range of datasets.
A Combinatorial Approach to IoT Data Security
This article explores the potential of Secret Sharing-Based Internet of Things (SBIoT) as a promising cryptographic element across diverse applications, including secure data storage in commercial cloud systems (Datachest), smart home environments (encompassing sensors, cameras, smart locks, and smart assistants), and e-health applications (protecting patient data and medical records). Beyond these applications, the paper makes two key contributions: the introduction of a novel cheater identification algorithm designed to verify the correct submission of shares during secret reconstruction, and empirical validation through experimental studies to support the theoretical advancements. This multifaceted approach not only demonstrates the versatility of SBIoT but also proposes innovative mechanisms to enhance security and integrity, contributing to the development of a more robust cryptographic framework.
This article expands upon the work presented in the poster "A Combinatorial Approach to IoT Data Security" at IWSEC 2023, Yokohama, Japan.
PIR with Client-Side Preprocessing: Information-Theoretic Constructions and Lower Bounds
It is well-known that classical Private Information Retrieval (PIR) schemes without preprocessing must suffer from linear server computation per query. Moreover, any such single-server PIR with sublinear bandwidth must rely on public-key cryptography. Several recent works showed that these barriers pertaining to classical PIR can be overcome by introducing a preprocessing phase where each client downloads a small hint that helps it make queries subsequently. Notably, the Piano PIR scheme (and subsequent improvements) showed that with such a client-side preprocessing, not only can we have PIR with sublinear computation and bandwidth per query, but somewhat surprisingly, we can also get it using only symmetric-key cryptography (i.e., one-way functions).
In this paper, we take the question of minimizing cryptographic assumptions to an extreme. In particular, we are the first to explore the landscape of information-theoretic single-server preprocessing PIR. We make contributions on both the upper- and lower-bound fronts. First, we show new information-theoretic constructions with various non-trivial performance tradeoffs between server computation, client space, and bandwidth. Second, we prove a (nearly) tight lower bound on the tradeoff between client space and bandwidth in information-theoretic constructions. Finally, we prove that any computational scheme that overcomes the information-theoretic lower bound and satisfies a natural syntactic requirement (which is met by all known constructions) would imply a hard problem in the complexity class SZK. In particular, this shows that Piano achieves a (nearly) optimal client space and bandwidth tradeoff subject to making black-box use of a one-way function. Some of the techniques we use for the above results may be of independent interest.
Attribute Based Encryption for Turing Machines from Lattices
We provide the first attribute based encryption (ABE) scheme for Turing machines supporting unbounded collusions from lattice assumptions. In more detail, the encryptor encodes an attribute $\mathbf{x}$ together with a bound $t$ on the machine running time and a message $m$ into the ciphertext, the key generator embeds a Turing machine $M$ into the secret key and decryption returns $m$ if and only if $M(\mathbf{x})=1$. Crucially, the input $\mathbf{x}$ and machine $M$ can be of unbounded size, the time bound $t$ can be chosen dynamically for each input and decryption runs in input-specific time.
Previously the best known ABE for uniform computation supported only non-deterministic log-space Turing machines (${\sf NL}$) from pairings (Lin and Luo, Eurocrypt 2020). In the post-quantum regime, the state of the art supports non-deterministic finite automata from LWE in the $\textit{symmetric}$-key setting (Agrawal, Maitra and Yamada, Crypto 2019).
In more detail, our results are:
1. We construct the first ABE for ${\sf NL}$ from the LWE, evasive LWE (Wee, Eurocrypt 2022 and Tsabary, Crypto 2022) and Tensor LWE (Wee, Eurocrypt 2022) assumptions. This yields the first (conjectured) post-quantum ABE for ${\sf NL}$.
2. Relying on LWE, evasive LWE and a new assumption called $\textit{circular tensor}$ LWE, we construct ABE for all Turing machines. At a high level, the circular tensor LWE assumption incorporates circularity into the tensor LWE (Wee, Eurocrypt 2022) assumption.
Towards our ABE for Turing machines, we obtain the first CP-ABE for circuits of unbounded depth and size from the same assumptions -- this may be of independent interest.
Post-Quantum DNSSEC with Faster TCP Fallbacks
In classical DNSSEC, a drop-in replacement with quantum-safe cryptography would increase DNS query resolution times by $\textit{at least}$ a factor of $2\times$. Since a DNS response containing large post-quantum signatures is likely to get marked truncated ($\texttt{TC}$) by a nameserver (resulting in a wasted UDP round-trip), the client (here, the resolver) would have to retry its query over TCP, further incurring a $\textit{minimum}$ of two round-trips due to the three-way TCP handshake.
We present $\mathsf{TurboDNS}$: a backward-compatible protocol that eliminates $\textit{two}$ round-trips from the preceding flow by 1) sending TCP handshake data in the initial DNS/UDP flight itself, and 2) immediately streaming the DNS response over TCP after authenticating the client with a cryptographic cookie. Our experiments show that DNSSEC over $\mathsf{TurboDNS}$, with either Falcon-512 or Dilithium-2 as the zone signing algorithm, is practically as fast as the currently deployed ECDSA P-256 and RSA-2048 setups in resolving $\texttt{QTYPE}$ $\texttt{A}$ DNS queries.
Smaug: Modular Augmentation of LLVM for MPC
Secure multi-party computation (MPC) is a crucial tool for privacy-preserving computation, but it is getting increasingly complicated due to recent advancements and optimizations. Programming tools for MPC allow programmers to develop MPC applications without mastering all cryptography. However, most existing MPC programming tools fail to attract real users due to the lack of documentation, maintenance, and the ability to compose with legacy codebases. In this work, we build Smaug, a modular extension of LLVM. Smaug seamlessly brings all LLVM support to MPC programmers, including error messaging, documentation, code optimization, and frontend support to compile from various languages to LLVM intermediate representation (IR). Smaug can efficiently convert non-oblivious LLVM IR to their oblivious counterparts while applying popular optimizations as LLVM code transformations. With benchmarks written in C++ and Rust and backends for Yao and GMW protocols, we observe that Smaug performs as well as (and sometimes much better than) prior tools using domain-specific languages with similar backends. Finally, we use Smaug to compile open-source projects that implement Minesweeper and Blackjack, producing usable two-party games with ease.
What is "legal" and "illegal?": Social Norms, Current Practices and Perceived Risks among the Cryptocurrency Users in Bangladesh
Cryptocurrency practices worldwide are seen as innovative, yet they navigate a fragmented regulatory environment. Many local authorities aim to balance promoting innovation, safeguarding consumers, and managing potential threats. In particular, it is unclear how people deal with cryptocurrencies in regions where trading or mining is prohibited. This insight is crucial for devising risk-reduction strategies. To address this, we conducted semi-structured interviews with 28 cryptocurrency traders and miners from Bangladesh, where the local authority is hostile towards cryptocurrencies. Our research revealed that the participants use unique strategies to mitigate risks around cryptocurrencies. Our findings indicate a prevalent uncertainty at both personal and organizational levels concerning the interpretation of laws, a situation worsened by the actions of major financial service providers who indirectly facilitate cryptocurrency transactions. We further connect our findings to broader issues in HCI regarding folk models, informal markets and legality, and education and awareness.
Nearly Quadratic Asynchronous Distributed Key Generation
We prove that for any $1\le k\le \log n$, given a VRF setup and assuming secure erasures, there exists a protocol for Asynchronous Distributed Key Generation (ADKG) that is resilient to a strongly adaptive adversary that can corrupt up to $f<n/3$ parties. With all but negligible probability, all nonfaulty parties terminate in an expected $O(k)$ rounds and send a total expected $\tilde{O}(n^{2+1/k})$ messages.
Compact Key Storage in the Standard Model
In recent work [Crypto'24], Dodis, Jost, and Marcedone introduced Compact Key Storage (CKS) as a modern approach to backup for end-to-end (E2E) secure applications. As most E2E-secure applications rely on a sequence of secrets $(s_1,...,s_n)$ from which, together with the ciphertexts sent over the network, all content can be restored, Dodis et al. introduced CKS as a primitive for backing up $(s_1,...,s_n)$. The authors provided definitions as well as two practically efficient schemes (with different functionality-efficiency trade-offs). Both their security definitions and their schemes, however, relied on the random oracle model (ROM).
In this paper, we first show that this reliance is inherent. More concretely, we argue that in the standard model, one cannot have a general CKS instantiation that is applicable to all "CKS-compatible games", as defined by Dodis et al., and realized by their ROM construction. Therefore, one must restrict the notion of CKS-compatible games to allow for standard model CKS instantiations.
We then introduce an alternative standard-model CKS definition that makes concessions in terms of functionality (thereby circumventing the impossibility). More precisely, we specify a CKS notion which does not recover the original secret $s_i$ but only a derived key $k_i$, and then observe that this still suffices for many real-world applications. We instantiate this new notion based on minimal assumptions. For passive security, we provide an instantiation based on one-way functions only. For stronger notions, we additionally need collision-resistant hash functions and dual-PRFs, which we argue to be minimal.
Finally, we provide a modularization of the CKS protocols of Dodis et al. In particular, we present a unified protocol (and proof) for standard-model equivalents for both protocols introduced in the original work.
MicroNova: Folding-based arguments with efficient (on-chain) verification
We describe the design and implementation of MicroNova, a folding-based recursive argument for producing proofs of incremental computations of the form $y = F^{(\ell)}(x)$, where $F$ is a possibly non-deterministic computation (encoded using a constraint system such as R1CS), $x$ is the initial input, $y$ is the output, and $\ell > 0$. The proof of an $\ell$-step computation is produced step-by-step such that neither the proof size nor the time to verify it depends on $\ell$. The proof at the final iteration is then compressed to achieve further succinctness in terms of proof size and verification time. Compared to prior folding-based arguments, a distinguishing aspect of MicroNova is the concrete efficiency of the verifier, even in a resource-constrained environment such as Ethereum's blockchain. In particular, the compressed proof consists of $O(\log{N})$ group elements and it can be verified with $O(\log{N})$ group scalar multiplications and two pairing operations, where $N$ is the number of constraints for a single invocation of $F$. MicroNova requires a universal trusted setup and can employ any existing setup material created for the popular KZG univariate polynomial commitment scheme. Finally, we implement and experimentally evaluate MicroNova. We find that MicroNova's proofs can be efficiently verified on the Ethereum blockchain with $\approx$2.2M gas. Furthermore, MicroNova's prover incurs minimal overheads atop its baseline Nova's prover.
NMFT: A Copyrighted Data Trading Protocol based on NFT and AI-powered Merkle Feature Tree
With the rapid growth of blockchain-based Non-Fungible Tokens (NFTs), data trading has evolved to incorporate NFTs for ownership verification. However, the NFT ecosystem faces significant challenges in copyright protection, particularly when malicious buyers slightly modify the purchased data and re-mint it as a new NFT, infringing upon the original owner's rights. In this paper, we propose a copyright-preserving data trading protocol to address this challenge.
First, we introduce the Merkle Feature Tree (MFT), an enhanced version of the traditional Merkle Tree that incorporates an AI-powered feature layer above the data layer. Second, we design a copyright challenge phase during the trading process, which recognizes the data owner with highly similar feature vectors and earlier on-chain timestamp as the legitimate owner. Furthermore, to achieve efficient and low-gas feature vector similarity computation on blockchain, we employ Locality-Sensitive Hashing (LSH) to compress high-dimensional floating-point feature vectors into single uint256 integers.
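To make the LSH compression step concrete, here is a minimal Python sketch of sign-based (random-hyperplane) LSH that packs 256 sign bits into a single 256-bit integer and compares codes by Hamming distance; the particular hash family, feature dimension, and comparison threshold are illustrative assumptions, not NMFT's actual parameters.

```python
import numpy as np

def lsh_uint256(feature: np.ndarray, planes: np.ndarray) -> int:
    """Compress a float feature vector into one 256-bit integer via
    random-hyperplane LSH: bit i = 1 iff <feature, planes[i]> >= 0."""
    bits = (planes @ feature) >= 0          # 256 boolean sign bits
    value = 0
    for b in bits:
        value = (value << 1) | int(b)
    return value                            # fits in a uint256

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two 256-bit LSH codes."""
    return bin(a ^ b).count("1")

# Toy usage: two nearly identical feature vectors map to nearby codes.
rng = np.random.default_rng(0)
planes = rng.standard_normal((256, 512))    # 256 hyperplanes over a 512-dim feature space
f1 = rng.standard_normal(512)
f2 = f1 + 0.01 * rng.standard_normal(512)   # slightly perturbed copy
print(hamming(lsh_uint256(f1, planes), lsh_uint256(f2, planes)))  # small Hamming distance
```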
Experiments with multiple image and text feature extraction models demonstrate that LSH effectively preserves the similarity between highly similar feature vectors before and after compression, thus supporting similarity-based copyright challenges. Experimental results on the Ethereum Sepolia testnet demonstrate NMFT's scalability with sublinear growth in gas consumption while maintaining stable latency.
Worst-Case to Average-Case Hardness of LWE: An Alternative Perspective
In this work, we study the worst-case to average-case hardness of the Learning with Errors problem (LWE) under an alternative measure of hardness: the maximum success probability achievable by a probabilistic polynomial-time (PPT) algorithm. Previous works by Regev (STOC 2005), Peikert (STOC 2009), and Brakerski, Langlois, Peikert, Regev, and Stehlé (STOC 2013) give worst-case to average-case reductions from lattice problems to LWE, specifically from the approximate decision variant of the Shortest Vector Problem (GapSVP) and the Bounded Distance Decoding (BDD) problem. These reductions, however, are lossy in the sense that even the strongest assumption on the worst-case hardness of GapSVP or BDD implies only mild hardness of LWE. Our alternative perspective gives a much tighter reduction and strongly relates the hardness of LWE to that of BDD. In particular, we show that under a reasonable assumption about the success probability of solving BDD via a PPT algorithm, we obtain a nearly tight lower bound on the highest possible success probability for solving LWE via a PPT algorithm. Furthermore, we show a tight relationship between the best achievable success probability by any PPT algorithm for decision-LWE and that for search-LWE. Our results not only refine our understanding of the computational complexity of LWE, but also provide a useful framework for analyzing its practical security implications.
Secure Vault scheme in the Cloud Operating Model
The rising demand for data privacy in cloud-based environments has led to the development of advanced mechanisms for securely managing sensitive information. A prominent solution in this domain is the "Data Privacy Vault," a concept that is provided commercially by companies such as Hashicorp, Basis Theory, Skyflow Inc., VGS, Evervault, Protegrity, Anonomatic, and BoxyHQ. However, no existing work has rigorously defined the security notions required for a Data Privacy Vault or proven them within a formal framework; doing so is the focus of this paper.
Among their other uses, data privacy vaults are increasingly being used to store LLM training data, which necessitates a scheme that enables users to securely store sensitive information in the cloud while allowing controlled access for performing analytics on specific non-sensitive attributes without exposing sensitive data. Conventional solutions involve users generating encryption keys to safeguard their data, but these solutions are not deterministic and are therefore unsuited for the LLM setting. To address this, we propose a novel framework that is deterministic as well as semantically secure. Our scheme operates in the Cloud Operating Model, where the server is trusted but stateless, and the storage is outsourced.
We provide a formal definition and a concrete instantiation of this data privacy vault scheme. We introduce a novel tokenization algorithm that serves as the core mechanism for protecting sensitive data within the vault. Our approach not only generates secure, unpredictable tokens for sensitive data but also securely stores sensitive data while enabling controlled data retrieval based on predefined access levels. Our work fills a significant gap in the existing literature by providing a formalized framework for the data privacy vault, complete with security proofs and a practical construction - not only enhancing the understanding of vault schemes but also offering a viable solution for secure data management in the era of cloud computing.
FO derandomization sometimes damages security
FO derandomization is a common step in protecting against chosen-ciphertext attacks. There are theorems qualitatively stating that FO derandomization preserves ROM OW-CPA security. However, quantitatively, these theorems are loose, allowing the possibility of the derandomized security level being considerably smaller than the original security level. Many cryptosystems rely on FO derandomization without adjusting parameters to account for this looseness.
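For readers unfamiliar with the transform, the following toy Python sketch shows what derandomization does: the encryption coins are derived by hashing the message, so encrypting the same message always produces the same ciphertext. The toy ElGamal-style scheme, group parameters, and hash choice are illustrative assumptions, not the cryptosystems analyzed in this paper.

```python
import hashlib, secrets

# Toy ElGamal-style randomized PKE over a small prime group (illustration only).
p = 0xFFFFFFFFFFFFFFC5           # 2^64 - 59, a prime; far too small for real security
g = 2
x = secrets.randbelow(p - 2) + 1  # secret key
h = pow(g, x, p)                  # public key

def enc(m: int, r: int) -> tuple[int, int]:
    """Randomized encryption with explicit coins r."""
    return pow(g, r, p), (m * pow(h, r, p)) % p

def enc_derandomized(m: int) -> tuple[int, int]:
    """FO-style derandomization: the coins are derived by hashing the message,
    turning the randomized PKE into a deterministic one."""
    r = int.from_bytes(hashlib.sha256(m.to_bytes(32, "big")).digest(), "big") % (p - 1)
    return enc(m, r)

m = 42
assert enc_derandomized(m) == enc_derandomized(m)   # deterministic by construction
```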
This paper proves, for two examples of a randomized ROM PKE, that derandomizing the PKE degrades ROM OW-CPA security by a factor close to the number of hash queries. The first example can be explained by the size of the message space of the PKE; the second cannot. This paper also gives a concrete example of a randomized non-ROM PKE that appears to have the same properties regarding known attacks.
As a spinoff, this paper presents a 2^88-guess attack exploiting derandomization to break one out of 2^40 ciphertexts for a FrodoKEM-640 public key. This attack contradicts the official FrodoKEM claim that "the FrodoKEM parameter sets comfortably match their target security levels with a large margin". The official responses to this attack so far include (1) renaming FrodoKEM as "ephemeral FrodoKEM" and (2) proposing a newly patched "FrodoKEM".
This paper does not involve new cryptanalysis: the attacks are straightforward. What is new is finding examples where derandomization damages security.
Exploring Large Integer Multiplication for Cryptography Targeting In-Memory Computing
Emerging cryptographic systems such as Fully Homomorphic Encryption (FHE) and Zero-Knowledge Proofs (ZKP) are computation- and data-intensive. FHE and ZKP implementations in software and hardware largely rely on the von Neumann architecture, where a significant amount of energy is lost on data movements. A promising computing paradigm is computing in memory (CIM), which enables computations to occur directly within memory, thereby reducing data movements and energy consumption. However, efficiently performing large integer multiplications - critical in FHE and ZKP - is an open question, as existing CIM methods are limited to small operand sizes. In this work, we address this question by exploring advanced algorithmic approaches for large integer multiplication, identifying the Karatsuba algorithm as the most effective for CIM applications. Thereafter, we design the first Karatsuba multiplier for resistive CIM crossbars. Our multiplier uses a three-stage pipeline to enhance throughput and, additionally, balances memory endurance with efficient array sizes. Compared to existing CIM multiplication methods, when scaled up to the bit widths required in ZKP and FHE, our design achieves improvements of up to 916x in throughput and 281x in area-time product.
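For reference, here is a minimal Python sketch of the Karatsuba recursion identified above; it only illustrates the three-multiplication splitting, not the paper's pipelined resistive-crossbar design, and the 64-bit base-case threshold is an arbitrary stand-in for a small hardware multiplier.

```python
def karatsuba(x: int, y: int, bits: int = 64) -> int:
    """Multiply two non-negative integers with Karatsuba's three-multiplication
    recursion; operands at or below `bits` bits fall back to schoolbook
    multiplication (standing in for a small base multiplier)."""
    if x.bit_length() <= bits or y.bit_length() <= bits:
        return x * y
    half = max(x.bit_length(), y.bit_length()) // 2
    x_hi, x_lo = x >> half, x & ((1 << half) - 1)
    y_hi, y_lo = y >> half, y & ((1 << half) - 1)
    z2 = karatsuba(x_hi, y_hi, bits)                           # high halves
    z0 = karatsuba(x_lo, y_lo, bits)                           # low halves
    z1 = karatsuba(x_hi + x_lo, y_hi + y_lo, bits) - z2 - z0   # cross terms, one extra multiply
    return (z2 << (2 * half)) + (z1 << half) + z0

# Sanity check on operand sizes comparable to large FHE/ZKP integers.
a, b = 1234567891011121314151617181920 ** 13, 98765432101112131415 ** 17
assert karatsuba(a, b) == a * b
```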
Two generalizations of almost perfect nonlinearity
Almost perfect nonlinear (in brief, APN) functions are vectorial functions $F:\mathbb F_2^n\rightarrow \mathbb F_2^n$ playing roles in several domains of information protection, at the intersection of computer science and mathematics. Their definition comes from cryptography and is also related to coding theory.
When they are used as substitution boxes (S-boxes, which are the only nonlinear components in block ciphers), APN functions contribute optimally to the resistance against differential attacks. This of course provides a strong cryptographic motivation for their study, which has been very active since the 90's and has posed interesting and difficult mathematical questions, some of which are still unanswered. Since the introduction of differential attacks, more recent types of cryptanalyses have been designed, such as integral attacks. No notion about S-boxes has been identified that would play a similar role with respect to integral attacks. In this paper, we study two generalizations of APNness that are natural from a mathematical point of view, since they directly extend classical characterizations of APN functions. We call these two notions strong non-normality and sum-freedom. The former already existed for Boolean functions (it was introduced by Dobbertin), and the latter is new.
We study how these two notions are related to cryptanalyses (the relation is weaker for strong non-normality).
Although they have similar definitions, the two notions behave differently from each other. They also behave differently from differential uniformity, which is a well-known generalization of APNness. We study the different ways to define them. We prove their satisfiability, their monotonicity, and their invariance under classical equivalence relations, and we characterize them by the Walsh transform.
We finally begin a study of the multiplicative inverse function (used as a substitution box in the Advanced Encryption Standard and other block ciphers) from the viewpoint of these two notions. In particular, we find a simple expression of the sum of the values taken by this function over affine subspaces of $\mathbb F_{2^n}$ that are not vector subspaces. This formula shows that the sum never vanishes on such affine spaces. We also give a formula for the case of a vector space defined by one of its bases.
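As a small illustration of the kind of sum considered here, the following brute-force Python sketch XOR-sums the multiplicative inverse function over one affine (non-vector) coset in GF(2^4) and observes a nonzero result, consistent with the non-vanishing claim; the field size, modulus, and coset are chosen only for the example, and the paper's closed-form expression is not reproduced.

```python
# GF(2^4) with modulus x^4 + x + 1; the inverse map sends 0 to 0 (AES convention).
N, MOD = 4, 0b10011

def gf_mul(a: int, b: int) -> int:
    """Carry-less multiply then reduce modulo x^4 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & (1 << N):
            a ^= MOD
        b >>= 1
    return r

def gf_inv(a: int) -> int:
    return 0 if a == 0 else next(b for b in range(1, 1 << N) if gf_mul(a, b) == 1)

def span(basis):
    V = {0}
    for b in basis:
        V |= {v ^ b for v in V}
    return V

# XOR-sum of x^{-1} over one affine coset of a 2-dimensional subspace.
V = span([0b0001, 0b0010])
coset = {0b0100 ^ v for v in V}   # 0b0100 lies outside V, so the coset is not a subspace
s = 0
for x in coset:
    s ^= gf_inv(x)
print(s != 0)                      # True: the sum does not vanish on this affine space
```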
PQConnect: Automated Post-Quantum End-to-End Tunnels
This paper introduces PQConnect, a post-quantum end-to-end tunneling protocol that automatically protects all packets between clients that have installed PQConnect and servers that have installed and configured PQConnect.
Like VPNs, PQConnect does not require any changes to higher-level protocols and application software. PQConnect adds cryptographic protection to unencrypted applications, works in concert with existing pre-quantum applications to add post-quantum protection, and adds a second application-independent layer of defense to any applications that have begun to incorporate application-specific post-quantum protection.
Unlike VPNs, PQConnect automatically creates end-to-end tunnels to any number of servers using automatic peer discovery, with no need for the client administrator to configure per-server information. Each server carries out a client-independent configuration step to publish an announcement that the server's name accepts PQConnect connections. Any PQConnect client connecting to that name efficiently finds this announcement, automatically establishes a post-quantum point-to-point IP tunnel to the server, and routes traffic for that name through that tunnel.
The foundation of security in PQConnect is the server's long-term public key used to encrypt and authenticate all PQConnect packets. PQConnect makes a conservative choice of post-quantum KEM for this public key. PQConnect also uses a smaller post-quantum KEM for forward secrecy, and elliptic curves to ensure pre-quantum security even in case of security failures in KEM design or KEM software. Security of the handshake component of PQConnect has been symbolically proven using Tamarin.
Quantum One-Time Protection of any Randomized Algorithm
The meteoric rise in power and popularity of machine learning models dependent on valuable training data has reignited a basic tension between the power of running a program locally and the risk of exposing details of that program to the user. At the same time, fundamental properties of quantum states offer new solutions to data and program security that can require strikingly few quantum resources to exploit, and offer advantages outside of mere computational run time. In this work, we demonstrate such a solution with quantum one-time tokens.
A quantum one-time token is a quantum state that permits a certain program to be evaluated exactly once. One-time security guarantees, roughly, that the token cannot be used to evaluate the program more than once. We propose a scheme for building quantum one-time tokens for any randomized classical program, a class which includes generative AI models. We prove that the scheme satisfies an interesting definition of one-time security in a black-box model, as long as the outputs of the classical algorithm have high enough min-entropy.
Importantly, the classical program being protected does not need to be implemented coherently on a quantum computer. In fact, the size and complexity of the quantum one-time token is independent of the program being protected, and additional quantum resources serve only to increase the security of the protocol. Due to this flexibility in adjusting the security, we believe that our proposal is parsimonious enough to serve as a promising candidate for a near-term useful demonstration of quantum computing in either the NISQ or early fault tolerant regime.
Encrypted Multi-map that Hides Query, Access, and Volume Patterns
We present an encrypted multi-map, a fundamental data structure underlying searchable encryption/structured encryption. Our protocol supports updates and is designed for applications demanding very strong data security. It hides not only the information about queries and data, but also the query, access, and volume patterns. Our protocol utilizes a position-based ORAM and an encrypted dictionary. We provide two instantiations of the protocol, along with their operation-type-revealing variants, all using PathORAM but with different encrypted dictionary instantiations (AVL tree or BSkiplist). Their efficiency has been evaluated through both asymptotic and concrete complexity analysis, outperforming prior work while achieving the same level of strong security. We have implemented our instantiations and evaluated their performance on two real-world email databases (Enron and Lucene). We also discuss the strengths and limitations of our construction, including its resizability, and highlight that optimized solutions, even with heavy network utilization, may become practical as network speed improves.
Last updated: 2024-12-28
Computing the Hermite Normal Form: A Survey
The Hermite Normal Form (HNF) of a matrix is an analogue of the echelon form over the integers. Any integer matrix can be transformed into its unique HNF.
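As a point of reference for the algorithms surveyed below, here is a minimal Python sketch of a naive HNF computation for a nonsingular square integer matrix via unimodular row operations; it is not one of the surveyed algorithms, and on larger inputs it exhibits exactly the intermediate-value blow-up discussed next.

```python
def hnf(rows):
    """Row-style Hermite Normal Form of a nonsingular square integer matrix,
    computed naively with unimodular row operations (Euclidean elimination).
    Intermediate entries can grow very large on bigger inputs."""
    A = [list(r) for r in rows]
    n = len(A)
    for i in range(n):
        # Euclid's algorithm on column i clears the entries below the pivot.
        for j in range(i + 1, n):
            while A[j][i] != 0:
                q = A[i][i] // A[j][i]
                A[i], A[j] = A[j], [a - q * b for a, b in zip(A[i], A[j])]
        if A[i][i] < 0:                       # make the pivot positive
            A[i] = [-a for a in A[i]]
        for j in range(i):                    # reduce the entries above the pivot
            q = A[j][i] // A[i][i]
            A[j] = [a - q * b for a, b in zip(A[j], A[i])]
    return A

print(hnf([[2, 3, 6], [4, 4, 7], [6, 9, 1]]))
```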
A common obstacle in computing the HNF is the extensive blow-up of intermediate values. As a first approach to this problem, we discuss the $\textit{Modulo Determinant Algorithm}$. It keeps the entries bounded by $d$, the determinant of the lattice, and has a time complexity of $\mathcal{O}(n^3\log^2 d)$, where $n$ is the dimension of the matrix. Although this algorithm is very useful if the determinant is small, in the general case the entries still become extremely large.
Secondly, we study the $\textit{Linear Space Algorithm}$. It has a time complexity of $\mathcal{O}(n^5\mathrm{polylog}(M, n))$, where $M$ denotes the largest absolute value of the input matrix. This is as fast as the best previously known algorithms, but in contrast, it assures space complexity linear in the input size, i.e. $\mathcal{O}(n^2\log M)$.
As the last algorithm to compute the HNF, we analyze the $\textit{Heuristic Algorithm}$, which is based on the first two algorithms. It achieves a much faster runtime in practice, yielding a heuristic runtime of $\mathcal{O}(n^4\mathrm{polylog}(M, n))$, while keeping the linear space complexity.
Up to some performance speed-ups, the $\textit{Linear Space Algorithm}$ and the $\textit{Heuristic Algorithm}$ are precisely the algorithms implemented by SageMath.
An Embedded Domain-Specific Language for Using One-Hot Vectors and Binary Matrices in Secure Computation Protocols
The use of secure computation protocols within production software systems and applications is complicated by the fact that such protocols sometimes rely upon -- or are most compatible with -- unusual or restricted models of computation. We employ the features of a contemporary and widely used programming language to create an embedded domain-specific language for working with user-defined functions as binary matrices that operate on one-hot vectors. At least when working with small finite domains, this allows programmers to overcome the restrictions of more simple secure computation protocols that support only linear operations (such as addition and scalar multiplication) on private inputs. Notably, programmers are able to define their own input and output domains, to use all available host language features and libraries to define functions that operate on these domains, and to translate inputs, outputs, and functions between their usual host language representations and their one-hot vector or binary matrix forms. Furthermore, these features compose in a straightforward way with simple secure computation libraries available for the host language.
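To illustrate the underlying encoding (independently of the embedded DSL's actual API, which is not reproduced here), the following plain-Python sketch turns a user-defined function over a small finite domain into a binary matrix acting on one-hot vectors, so that applying the function becomes a purely linear operation of the kind simple secure computation protocols support.

```python
import numpy as np

def one_hot(value, domain):
    """Encode a domain element as a one-hot column vector."""
    v = np.zeros(len(domain), dtype=int)
    v[domain.index(value)] = 1
    return v

def as_binary_matrix(f, dom_in, dom_out):
    """Encode an arbitrary host-language function f: dom_in -> dom_out as a
    binary matrix, so applying f becomes a matrix-vector product."""
    M = np.zeros((len(dom_out), len(dom_in)), dtype=int)
    for j, x in enumerate(dom_in):
        M[dom_out.index(f(x)), j] = 1
    return M

# Toy domains and a user-defined function over them.
dom_in = list(range(8))
dom_out = ["even", "odd"]
parity = lambda x: "odd" if x % 2 else "even"

M = as_binary_matrix(parity, dom_in, dom_out)
y = M @ one_hot(5, dom_in)               # a linear operation on the encoded input
print(dom_out[int(np.argmax(y))])        # decode back to the host representation: "odd"
```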
Improved Rejection Sampling for Compact Lattice Signatures
One of the primary approaches used to construct lattice-based signature schemes is the "Fiat-Shamir with aborts" methodology. Such a scheme may abort and restart during signing, which corresponds to rejection sampling the produced signatures to ensure that they follow a distribution that is independent of the secret key. This rejection sampling is only feasible when the output distribution is sufficiently wide, limiting how compact this type of signature scheme can be.
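As background for the paragraph above, here is a minimal Python sketch of the standard rejection-sampling step in Fiat-Shamir-with-aborts signing (the conventional approach, not the new method of this paper); the concrete bounds GAMMA and BETA, the dimension, and the stand-in for c*s are illustrative assumptions.

```python
import secrets

GAMMA = 2**17          # width of the masking vector's coefficients (illustrative)
BETA = 275             # public bound on the infinity norm of c*s (illustrative)

def sample_masking_vector(n):
    """y with coefficients uniform in [-(GAMMA-1), GAMMA-1]."""
    return [secrets.randbelow(2 * GAMMA - 1) - (GAMMA - 1) for _ in range(n)]

def sign_attempt(cs):
    """One signing attempt: z = y + c*s is released only if every coefficient
    lies in [-(GAMMA-BETA-1), GAMMA-BETA-1]; otherwise the attempt aborts.
    Conditioned on acceptance, z is uniform on that box, independent of c*s."""
    y = sample_masking_vector(len(cs))
    z = [yi + ci for yi, ci in zip(y, cs)]
    if max(abs(zi) for zi in z) > GAMMA - BETA - 1:
        return None                            # abort and restart
    return z

cs = [secrets.randbelow(2 * BETA + 1) - BETA for _ in range(256)]  # stand-in for c*s
attempts = 0
while (z := sign_attempt(cs)) is None:
    attempts += 1
print("aborts before success:", attempts)
```

The wider GAMMA is relative to BETA, the lower the abort rate, which is exactly the width/compactness tension the abstract describes.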
In this work, we develop a new method to construct signatures influenced by the rejection condition. This allows our rejection sampling to target significantly narrower output distributions than previous approaches, allowing much more compact signatures. The combined size of a signature and a verification key for the resulting scheme is less than half of that for ML-DSA and comparable to that of compact hash-and-sign lattice signature schemes, such as Falcon.
Definition of End-to-end Encryption
This document provides a definition of end-to-end encryption (E2EE). End-to-end encryption is an application of cryptographic mechanisms to provide security and privacy to communication between endpoints. Such communication can include messages, email, video, audio, and other forms of media. E2EE provides security and privacy through confidentiality, integrity, authenticity and forward secrecy for communication amongst people.
Zero Knowledge Memory-Checking Techniques for Stacks and Queues
There are a variety of techniques for implementing read/write memory inside of zero-knowledge proofs and validating consistency of memory accesses. These techniques are generally implemented with the goal of implementing a RAM or ROM. In this paper, we present memory techniques for more specialized data structures: queues and stacks. We first demonstrate a technique for implementing queues in arithmetic circuits that requires 3 multiplication gates and 1 advice value per read and 2 multiplication gates per write. This is based on using Horner's Rule to evaluate 2 polynomials at random points and check that the values read from the queue are equal to the values written to the queue as vectors. Next, we present a stack scheme based on an optimized version of the RAM scheme of Yang and Heath that requires 5 multiplication gates and 4 advice values per read and 2 multiplication gates per write. This optimizes the RAM scheme by observing that reads and writes to a stack are already "paired", which avoids the need for inserting dummy operations for each access as in a RAM.
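To make the queue check concrete, the following Python sketch shows the underlying vector-equality test: the enqueued and dequeued value sequences are compared by evaluating, via Horner's rule, the polynomials whose coefficients are those values at a random field element. The field modulus is an illustrative assumption, and the in-circuit bookkeeping (gate and advice counts) from the paper is not reproduced.

```python
import secrets

P = 2**61 - 1                      # a prime field for the random-evaluation check

def horner(values, r):
    """Evaluate the polynomial whose coefficients are `values` at r (Horner's rule)."""
    acc = 0
    for v in values:
        acc = (acc * r + v) % P
    return acc

# Queue consistency: the sequence of dequeued values must equal the sequence of
# enqueued values as vectors (FIFO order).  Equal vectors always pass this check;
# distinct vectors of length L agree at a random point with probability at most (L-1)/P.
writes = [5, 17, 23, 42]
reads  = [5, 17, 23, 42]           # produced by the prover as advice
r = secrets.randbelow(P)           # verifier's random challenge
assert horner(writes, r) == horner(reads, r)
```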
We also introduce a different notion of "multiplexing" or "operation privacy" that is better suited to the use case of stacks and queues. All of the techniques we provide are based on evaluating polynomials at random points and using randomly evaluated polynomials as universal hash functions to check set/vector equality.
Fully Hybrid TLSv1.3 in WolfSSL on Cortex-M4
To provide safe communication across an unprotected medium such as the internet, network protocols have been established. These protocols employ public-key techniques to perform key exchange and authentication. Transport Layer Security (TLS) is a widely used network protocol that enables secure communication between a server and a client. TLS is employed in billions of transactions per second. Contemporary protocols depend on traditional methods that rely on the computational hardness of factorization or (elliptic curve) discrete logarithm problems. The ongoing advancement in the processing power of classical computers requires an ongoing increase in the security level of the underlying cryptographic algorithms. This study focuses on the analysis of Curve448 and the Edwards curve Ed448, renowned for their superior security features, which offer a 224-bit level of security as part of the TLSv1.3 protocol. The exponential advancement of quantum computers, however, presents a substantial threat to secure network communication that depends on classical crypto schemes, irrespective of their degree of security: quantum computers are expected to solve the underlying mathematical problems within a feasible timeframe. In order to successfully transition to post-quantum secure network protocols, it is imperative to concurrently deploy both classical and post-quantum algorithms. This is done to fulfill the requirements of both enterprises and governments, while also instilling more assurance in the reliability of the post-quantum systems. This paper presents a detailed hybrid implementation architecture of the TLSv1.3 network protocol. We showcase the first deployment of Curve448 and Crystals-Kyber for the purpose of key exchange, and Ed448 and Crystals-Dilithium for verifying the authenticity of entities and for X.509 Public Key Infrastructure (PKI). We rely upon the widely used OpenSSL library and the specific wolfSSL library for embedded devices to provide our results for server and client applications.
ClusterGuard: Secure Clustered Aggregation for Federated Learning with Robustness
Federated Learning (FL) enables collaborative model training while preserving data privacy by avoiding the sharing of raw data. However, in large-scale FL systems, efficient secure aggregation and dropout handling remain critical challenges. Existing state-of-the-art methods, such as those proposed by Liu et al. (UAI'22) and Li et al. (ASIACRYPT'23), suffer from prohibitive communication overhead, implementation complexity, and vulnerability to poisoning attacks. Alternative approaches that utilize partially connected graph structures (resembling client grouping) to reduce communication costs, such as Bell et al. (CCS'20) and ACORN (USENIX Sec'23), face the risk of adversarial manipulation during the graph construction process.
To address these issues, we propose ClusterGuard, a secure clustered aggregation scheme for federated learning. ClusterGuard leverages Verifiable Random Functions (VRF) to ensure fair and transparent cluster selection and employs a lightweight key-homomorphic masking mechanism, combined with efficient dropout handling, to achieve secure clustered aggregation. Furthermore, ClusterGuard incorporates a dual filtering mechanism based on cosine similarity and norm to effectively detect and mitigate poisoning attacks.
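The following Python sketch illustrates one plausible reading of the dual filtering step: client updates whose cosine similarity to a reference vector (here, the coordinate-wise median update) or whose norm is anomalous are dropped before aggregation. The reference vector, thresholds, and aggregation rule are assumptions for illustration, not ClusterGuard's actual parameters.

```python
import numpy as np

def filter_updates(updates, cos_thresh=0.0, norm_factor=2.0):
    """Dual filtering sketch: keep a client update only if (1) its cosine
    similarity to the coordinate-wise median update exceeds cos_thresh and
    (2) its norm is at most norm_factor times the median norm."""
    U = np.array(updates, dtype=float)
    ref = np.median(U, axis=0)                      # robust reference update
    norms = np.linalg.norm(U, axis=1)
    med_norm = np.median(norms)
    cos = (U @ ref) / (norms * np.linalg.norm(ref) + 1e-12)
    keep = (cos > cos_thresh) & (norms <= norm_factor * med_norm)
    return U[keep].mean(axis=0)                     # aggregate the surviving updates

# Toy round: 8 honest updates plus 2 scaled, sign-flipped (poisoned) updates.
rng = np.random.default_rng(1)
honest = [rng.normal(0.1, 0.05, 10) for _ in range(8)]
poisoned = [(-5.0) * np.ones(10) for _ in range(2)]
print(filter_updates(honest + poisoned))            # close to the honest mean
```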
Extensive experiments on standard datasets demonstrate that ClusterGuard achieves over 2x efficiency improvement compared to advanced secure aggregation methods. Even with 20% of clients being malicious, the trained model maintains accuracy comparable to the original model, outperforming state-of-the-art robustness solutions. ClusterGuard provides a more efficient, secure, and robust solution for practical federated learning.
zkFFT: Extending Halo2 with Vector Commitments & More
This paper introduces zkFFT, a novel zero-knowledge argument designed to efficiently generate proofs for FFT (Fast Fourier Transform) relations. Our approach enables the verification that one committed vector is the FFT of another, addressing an efficiency need in general-purpose non-interactive zero-knowledge proof systems where the proof relation utilizes vector commitment inputs.
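To fix ideas, the relation being proven can be written out over a small prime field as in the Python sketch below; the toy parameters are illustrative only, and the zero-knowledge argument itself (commitments, challenges, and so on) is of course not shown.

```python
# The relation established by an FFT argument over a prime field:
# y is the number-theoretic FFT of x, i.e. y_k = sum_j x_j * w^(j*k) mod p,
# where w is a primitive N-th root of unity.  Toy parameters below.
p, N, w = 17, 4, 4                 # 4 has multiplicative order 4 modulo 17

def ntt(x):
    """Direct evaluation of the size-N number-theoretic transform."""
    return [sum(xj * pow(w, j * k, p) for j, xj in enumerate(x)) % p for k in range(N)]

x = [3, 1, 4, 1]
y = ntt(x)
# A zkFFT-style argument would show, in zero knowledge, that commitments to x
# and y satisfy exactly this relation.
print(x, "->", y)
```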
We present a concrete enhancement to the Halo2 proving system, demonstrating how zkFFT optimizes proofs in scenarios where the proof relation includes one or more vector commitments. Specifically, zkFFT incorporates streamlined logic within Halo2 and similar systems, augmenting proof and verification complexity by only $O(\log N)$, where $N$ is the vector size. This represents a substantial improvement over conventional approaches, which often necessitate specific circuit extensions to validate the integrity of vector commitments and their corresponding private values within the arithmetic framework of the proof relation. The proposed zkFFT method supports multiple vector commitments with only a logarithmic increase in extension costs, making it highly scalable. This capability is pivotal for practical applications involving multiple pre-committed values within proof statements.
Apart from Halo2, our technique can be adapted to any other zero-knowledge proof system that relies on an arithmetization in which each column is treated as the evaluation of a polynomial over a specified domain, the polynomial is computed via FFT, and it is subsequently committed to using a polynomial commitment scheme based on inner-product arguments. Along with efficient lookup and permutation arguments, zkFFT will streamline and significantly optimize the generation of zero-knowledge proofs for arbitrary relations.
Beyond the applications in augmenting zero-knowledge proof systems, we believe that the formalized zkFFT argument can be of independent interest.